From ff3f1a1a6ba524581818259645d2ab99d46d7c07 Mon Sep 17 00:00:00 2001
From: Javier Eguiluz
Date: Mon, 26 May 2025 15:29:35 +0200
Subject: [PATCH] [HttpClient] Update the concurrent requests section

---
 http_client.rst | 75 +++++++++++++++++++++++++++----------------------
 1 file changed, 41 insertions(+), 34 deletions(-)

diff --git a/http_client.rst b/http_client.rst
index f4be8b2460b..e2b0b4f41bf 100644
--- a/http_client.rst
+++ b/http_client.rst
@@ -1314,65 +1314,72 @@ remaining to do.
 Concurrent Requests
 -------------------
 
-Thanks to responses being lazy, requests are always managed concurrently.
-On a fast enough network, the following code makes 379 requests in less than
-half a second when cURL is used::
+Symfony's HTTP client makes asynchronous HTTP requests by default. This means
+you don't need to configure anything special to send multiple requests in
+parallel and process them efficiently.
 
+Here's a practical example that fetches metadata about several Symfony
+components from the Packagist API in parallel::
+
+    $packages = ['console', 'http-kernel', '...', 'routing', 'yaml'];
+
     $responses = [];
-    for ($i = 0; $i < 379; ++$i) {
-        $uri = "https://http2.akamai.com/demo/tile-$i.png";
-        $responses[] = $client->request('GET', $uri);
+    foreach ($packages as $package) {
+        $uri = sprintf('https://repo.packagist.org/p2/symfony/%s.json', $package);
+        // send all requests concurrently (they won't block until response content is read)
+        $responses[$package] = $client->request('GET', $uri);
     }
 
-    foreach ($responses as $response) {
-        $content = $response->getContent();
-        // ...
+    $results = [];
+    // iterate through the responses and read their content
+    foreach ($responses as $package => $response) {
+        // process response data somehow ...
+        $results[$package] = $response->toArray();
     }
 
-As you can read in the first "for" loop, requests are issued but are not consumed
-yet. That's the trick when concurrency is desired: requests should be sent
-first and be read later on. This will allow the client to monitor all pending
-requests while your code waits for a specific one, as done in each iteration of
-the above "foreach" loop.
+As you can see, the requests are sent in the first loop, but their responses
+aren't consumed until the second one. This is the key to concurrent execution:
+send all requests first and read them later. This allows the client to handle
+all pending responses efficiently while your code waits only when necessary.
 
 .. note::
 
-    The maximum number of concurrent requests that you can perform depends on
-    the resources of your machine (e.g. your operating system may limit the
-    number of simultaneous reads of the file that stores the certificates
-    file). Make your requests in batches to avoid these issues.
+    The maximum number of concurrent requests depends on your system's resources
+    (e.g. the operating system might limit the number of simultaneous connections
+    or access to certificate files). To avoid hitting these limits, consider
+    processing requests in batches.
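+
+    For example, you could send the requests in fixed-size groups using PHP's
+    ``array_chunk()`` function (a minimal sketch: ``$urls`` stands for any
+    list of URLs, and the batch size of ``50`` is an arbitrary value)::
+
+        // $urls is an example list of URLs to fetch in batches of 50
+        $contents = [];
+        foreach (array_chunk($urls, 50) as $batch) {
+            $responses = [];
+            foreach ($batch as $url) {
+                $responses[] = $client->request('GET', $url);
+            }
+
+            // reading the contents completes the current batch of
+            // requests before the next batch starts
+            foreach ($responses as $response) {
+                $contents[] = $response->getContent();
+            }
+        }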
 
 Multiplexing Responses
 ~~~~~~~~~~~~~~~~~~~~~~
 
-If you look again at the snippet above, responses are read in requests' order.
-But maybe the 2nd response came back before the 1st? Fully asynchronous operations
-require being able to deal with the responses in whatever order they come back.
+In the previous example, responses are read in the same order as the requests
+were sent. However, the second response may arrive before the first. To handle
+such cases, you need fully asynchronous processing, which allows responses to
+be handled in whatever order they arrive.
 
-In order to do so, the
-:method:`Symfony\\Contracts\\HttpClient\\HttpClientInterface::stream`
-accepts a list of responses to monitor. As mentioned
+To achieve this, the
+:method:`Symfony\\Contracts\\HttpClient\\HttpClientInterface::stream` method
+can be used to monitor a list of responses. As mentioned
 :ref:`previously <http-client-streaming-responses>`, this method yields response
-chunks as they arrive from the network. By replacing the "foreach" in the
-snippet with this one, the code becomes fully async::
+chunks as soon as they arrive over the network. Replacing the standard ``foreach``
+loop with the following version makes the processing fully asynchronous::
 
     foreach ($client->stream($responses) as $response => $chunk) {
         if ($chunk->isFirst()) {
-            // headers of $response just arrived
-            // $response->getHeaders() is now a non-blocking call
+            // the $response headers just arrived
+            // $response->getHeaders() is now non-blocking
         } elseif ($chunk->isLast()) {
-            // the full content of $response just completed
-            // $response->getContent() is now a non-blocking call
+            // the full $response body has been received
+            // $response->getContent() is now non-blocking
         } else {
-            // $chunk->getContent() will return a piece
-            // of the response body that just arrived
+            // $chunk->getContent() returns a piece of the body that just arrived
         }
     }
 
 .. tip::
 
-    Use the ``user_data`` option combined with ``$response->getInfo('user_data')``
-    to track the identity of the responses in your foreach loops.
+    Use the ``user_data`` option along with ``$response->getInfo('user_data')``
+    to identify each response during streaming.
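+
+    For instance, building on the Packagist example above (a minimal sketch
+    that reuses the ``$packages`` list defined earlier), you could pass each
+    package name as ``user_data`` and read it back while streaming::
+
+        $responses = [];
+        foreach ($packages as $package) {
+            $uri = sprintf('https://repo.packagist.org/p2/symfony/%s.json', $package);
+            // attach the package name to the request
+            $responses[] = $client->request('GET', $uri, ['user_data' => $package]);
+        }
+
+        $results = [];
+        foreach ($client->stream($responses) as $response => $chunk) {
+            if ($chunk->isLast()) {
+                // recover the value that was passed as 'user_data'
+                $package = $response->getInfo('user_data');
+                $results[$package] = $response->toArray();
+            }
+        }
 
 Dealing with Network Timeouts
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~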