A brief analysis of the Android Volley library source code (HTTP Request part)

The directory tree of the source code:

[android]
 ┗━[volley]
    ┣━AuthFailureError.java
    ┣━Cache.java
    ┣━CacheDispatcher.java
    ┣━DefaultRetryPolicy.java
    ┣━ExecutorDelivery.java
    ┣━InternalUtils.java
    ┣━Network.java
    ┣━NetworkDispatcher.java
    ┣━NetworkError.java
    ┣━NetworkResponse.java
    ┣━NoConnectionError.java
    ┣━ParseError.java
    ┣━RedirectError.java
    ┣━Request.java
    ┣━RequestQueue.java
    ┣━Response.java
    ┣━ResponseDelivery.java
    ┣━RetryPolicy.java
    ┣━ServerError.java
    ┣━TimeoutError.java
    ┣━[toolbox]
    ┃ ┣━AndroidAuthenticator.java
    ┃ ┣━Authenticator.java
    ┃ ┣━BasicNetwork.java
    ┃ ┣━ByteArrayPool.java
    ┃ ┣━ClearCacheRequest.java
    ┃ ┣━DiskBasedCache.java
    ┃ ┣━HttpClientStack.java
    ┃ ┣━HttpHeaderParser.java
    ┃ ┣━HttpStack.java
    ┃ ┣━HurlStack.java
    ┃ ┣━ImageLoader.java
    ┃ ┣━ImageRequest.java
    ┃ ┣━JsonArrayRequest.java
    ┃ ┣━JsonObjectRequest.java
    ┃ ┣━JsonRequest.java
    ┃ ┣━NetworkImageView.java
    ┃ ┣━NoCache.java
    ┃ ┣━PoolingByteArrayOutputStream.java
    ┃ ┣━RequestFuture.java
    ┃ ┣━StringRequest.java
    ┃ ┗━Volley.java
    ┣━VolleyError.java
    ┗━VolleyLog.java

As you can see, the Volley source is organized rather loosely: classes belonging to different functional modules are not separated into distinct packages. By comparison, the source layout of UIL (Universal Image Loader) is more systematic.

Starting from a common use case to infer the project architecture

The simplest usage example given on the official website is as follows:

final TextView mTextView = (TextView) findViewById(R.id.text);

// 1. Create a new Queue
RequestQueue queue = Volley.newRequestQueue(this);
String url = "http://www.google.com";

// 2. Create a new Request and register the listeners
StringRequest stringRequest = new StringRequest(Request.Method.GET, url,
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // Display the first 500 characters of the response string.
                mTextView.setText("Response is: " + response.substring(0, 500));
            }
        }, new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                mTextView.setText("That didn't work!");
            }
        });

// 3. Put the Request into the Queue for execution
queue.add(stringRequest);

Combined with the following picture:

[Architecture diagram]

we can get a general idea of how to use Volley (see the comments) and of its internal structure. What follows is a brief walk through the source code behind this use case.

Volley Class

The Volley class provides four static methods that make it convenient for users to create a new queue. Among them:
public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, null);
}

In short, every overload eventually ends up calling:

// Called with context, stack = null, maxDiskCacheBytes = -1
public static RequestQueue newRequestQueue(Context context, HttpStack stack, int maxDiskCacheBytes) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

    String userAgent = "volley/0"; // 1. Set the userAgent
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }

    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) { // 2. Choose which HTTP client to use
            stack = new HurlStack();
        } else {
            // Prior to Gingerbread, HttpUrlConnection was unreliable.
            // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }

    Network network = new BasicNetwork(stack);

    RequestQueue queue;
    if (maxDiskCacheBytes <= -1) {
        // No maximum size specified
        queue = new RequestQueue(new DiskBasedCache(cacheDir), network); // 3. Create a new Queue
    } else {
        // Disk cache size specified
        queue = new RequestQueue(new DiskBasedCache(cacheDir, maxDiskCacheBytes), network);
    }

    queue.start(); // 4. Start the Queue

    return queue;
}

Worth noting:

Volley will decide whether to use java.net.HttpURLConnection (Build.VERSION.SDK_INT >= 9) or org.apache.http.client.HttpClient based on the SDK version.

After creating a new Queue, the Queue will be started immediately.
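If you already know which stack you want, you do not have to rely on the version check: the overloads above accept an HttpStack directly. A hedged usage sketch (the context variable and the 5 MB figure are placeholders):

// Usage sketch: force the HttpURLConnection-based stack instead of letting Volley
// choose one from the SDK version.
RequestQueue queue = Volley.newRequestQueue(context, new HurlStack());

// Usage sketch: additionally cap the disk cache, using the three-argument overload
// analysed above (maxDiskCacheBytes is in bytes).
RequestQueue boundedQueue = Volley.newRequestQueue(context, new HurlStack(), 5 * 1024 * 1024);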

The stack class is responsible for sending the request (com.android.volley.Request) and obtaining the raw response (org.apache.http.HttpResponse), while the network class is responsible for processing that response and packaging it into a NetworkResponse (com.android.volley.NetworkResponse).

For now we will ignore the network-related details and look at the queue implementation and the request scheduling strategy.

RequestQueue

Let's first look at the constructors of RequestQueue:

public RequestQueue(Cache cache, Network network) {
    this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
}

This delegates to:

public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}

A new face appears here: ExecutorDelivery. As the name suggests, it is responsible for delivering request results back to the main thread, i.e. for executing the callbacks (listeners) on the main thread. The constructor chain ends with:

public RequestQueue(Cache cache, Network network, int threadPoolSize,
        ResponseDelivery delivery) {
    mCache = cache;
    mNetwork = network;
    mDispatchers = new NetworkDispatcher[threadPoolSize];
    mDelivery = delivery;
}

Another new face appears here: NetworkDispatcher. Note that the length of the array is the threadPoolSize parameter. Combined with the Volley architecture diagram above, we can guess that a NetworkDispatcher is a worker thread that loops, waiting for requests from the queue and executing them over the network. A manual construction sketch follows.
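To make the constructor parameters concrete, here is a hedged sketch of assembling the same object graph by hand rather than through Volley.newRequestQueue(); the cache directory name and the pool size of 2 are arbitrary illustrative choices:

// Sketch: build cache, network and queue manually, with an explicit number of
// NetworkDispatcher worker threads.
File cacheDir = new File(context.getCacheDir(), "volley");
Network network = new BasicNetwork(new HurlStack());
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 2);
queue.start(); // starts 1 cache dispatcher thread and 2 network dispatcher threads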

After RequestQueue is instantiated, its start() method is called:

public void start() {
    stop(); // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}

And correspondingly:

public void stop() {
    if (mCacheDispatcher != null) {
        mCacheDispatcher.quit();
    }
    for (int i = 0; i < mDispatchers.length; i++) {
        if (mDispatchers[i] != null) {
            mDispatchers[i].quit();
        }
    }
}

The logic here is simple:

Stop all old tasks before starting (i.e. interrupt all worker threads).

Start a worker thread responsible for cache.

Start n worker threads responsible for the network.

The worker threads then loop, continuously waiting for requests to arrive on their queues.

Adding a Request

Next, execute queue.add(stringRequest); and a request is added to the queue. The code is as follows:

public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue"); // markers record the current state of the request; in practice they are used for logging

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request<?>>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in
            // flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}

The logic here is:

Do some bookkeeping on the newly added request (queue reference, sequence number, marker).

If cache is not needed, add the request directly to the network queue.

Check, keyed by the cache key, whether an identical request is already in flight. If it is, put the new request on a waiting list; when the in-flight request completes, RequestQueue.finish() is expected to remove the key from the waiting list and re-dispatch the waiting requests in turn (a rough sketch of that cleanup follows). If no identical request is in flight, add this one to the cache queue.
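For reference, here is a hedged reconstruction of the relevant part of RequestQueue.finish(Request); treat the exact member names as approximate rather than authoritative:

// Sketch: when a request finishes, requests that were parked under the same cache key
// are released onto the cache queue, which by now should be primed by the finished request.
<T> void finish(Request<T> request) {
    synchronized (mCurrentRequests) {
        mCurrentRequests.remove(request);
    }
    if (request.shouldCache()) {
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
            if (waitingRequests != null) {
                // Hand all parked requests to the cache dispatcher in one go.
                mCacheQueue.addAll(waitingRequests);
            }
        }
    }
}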

CacheDispatcher

Assuming this is the first time the URL is requested, the request ends up in the cache queue. When the cache worker thread (the cache dispatcher) finds a request in the cache queue, it dequeues and processes it immediately. Let's take a look at the run() method of CacheDispatcher:

public class CacheDispatcher extends Thread {

    private final Cache mCache; // Initially the "new DiskBasedCache(cacheDir)" passed in above

    ...

    public void quit() {
        mQuit = true;
        interrupt();
    }

    @Override
    public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

        // Make a blocking call to initialize the cache.
        mCache.initialize();

        Request<?> request;
        while (true) {
            // release previous request object to avoid leaking request object when mQueue is drained.
            request = null; // Ensure the previous request can be garbage collected promptly.
            try {
                // Take a request from the queue.
                request = mCacheQueue.take(); // blocking
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return; // Exit point
                }
                continue;
            }
            try {
                request.addMarker("cache-queue-take");

                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }

                // If the cache misses, put the request straight onto the network queue
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }

                // The cache entry has expired, so put the request straight onto the network queue
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }

                // Wrap the cached data in a response
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");

                if (!entry.refreshNeeded()) {
                    // The cache entry does not need refreshing; hand the response to the delivery directly
                    mDelivery.postResponse(request, response);
                } else {
                    // The cache entry needs refreshing: return the stale content now and also put the request onto the network queue.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);

                    // Mark the response as intermediate.
                    response.intermediate = true;

                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    final Request<?> finalRequest = request;
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(finalRequest);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
            }
        }
    }
}

Next, let’s take a look at the mDelivery.postResponse method.

ExecutorDelivery

From the above, we know that mDelivery is an instance of ExecutorDelivery (passed in when creating a new RequestQueue).

The initialization code of ExecutorDelivery is as follows:

public ExecutorDelivery(final Handler handler) {
    // Make an Executor that just wraps the handler.
    mResponsePoster = new Executor() { // java.util.concurrent.Executor
        @Override
        public void execute(Runnable command) {
            handler.post(command);
        }
    };
}

java.util.concurrent.Executor itself is a general-purpose interface for running submitted tasks; it is not covered further here.
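A practical consequence of this wrapping is that the thread on which listeners run is determined solely by the Handler passed to ExecutorDelivery. The following hedged sketch routes callbacks to a background HandlerThread instead of the main thread, reusing the four-argument RequestQueue constructor shown earlier (the directory name and pool size are illustrative):

// Sketch: deliver listener callbacks on a background thread rather than the UI thread.
HandlerThread callbackThread = new HandlerThread("volley-callbacks");
callbackThread.start();
ResponseDelivery delivery = new ExecutorDelivery(new Handler(callbackThread.getLooper()));

RequestQueue queue = new RequestQueue(
        new DiskBasedCache(new File(context.getCacheDir(), "volley")),
        new BasicNetwork(new HurlStack()),
        4,          // explicit thread pool size
        delivery);  // our background-thread delivery
queue.start();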

The postResponse code is as follows:

@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
    request.markDelivered(); // Mark as delivered
    request.addMarker("post-response");
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable)); // Run ResponseDeliveryRunnable on the handler passed in at construction time
}

ResponseDeliveryRunnable is a private inner class of ExecutorDelivery; it invokes the appropriate listener method depending on the outcome of the request:

@SuppressWarnings("rawtypes")
private class ResponseDeliveryRunnable implements Runnable {

    private final Request mRequest;
    private final Response mResponse;
    private final Runnable mRunnable;

    public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
        mRequest = request;
        mResponse = response;
        mRunnable = runnable;
    }

    @SuppressWarnings("unchecked")
    @Override
    public void run() { // Executed on the main thread
        // If this request has canceled, finish it and don't deliver.
        if (mRequest.isCanceled()) {
            mRequest.finish("canceled-at-delivery"); // calls the finish method of RequestQueue
            return;
        }

        // Deliver a normal response or error, depending.
        if (mResponse.isSuccess()) {
            mRequest.deliverResponse(mResponse.result); // calls the listener's onResponse(response)
        } else {
            mRequest.deliverError(mResponse.error);
        }

        // If this is an intermediate response, add a marker, otherwise we're done
        // and the request can be finished.
        if (mResponse.intermediate) {
            mRequest.addMarker("intermediate-response");
        } else {
            mRequest.finish("done");
        }

        // If we have been provided a post-delivery runnable, run it.
        if (mRunnable != null) {
            mRunnable.run();
        }
    }
}
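Both the dispatchers and the delivery path keep checking isCanceled(), so it is worth noting how cancellation is usually triggered from the caller's side. A hedged usage sketch based on Volley's tag mechanism (the tag object is arbitrary):

// Usage sketch: tag requests so they can be cancelled in bulk, e.g. when the Activity
// that owns the listeners is going away.
stringRequest.setTag("MyActivity");
queue.add(stringRequest);

// Later, e.g. in onStop():
queue.cancelAll("MyActivity"); // sets the canceled flag; dispatchers and delivery then skip the request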

Next, let's look back at how NetworkDispatcher handles the network queue.

NetworkDispatcher

The source code of NetworkDispatcher is as follows:

public class NetworkDispatcher extends Thread {

    private final Network mNetwork; // BasicNetwork instance

    ...

    private final BlockingQueue<Request<?>> mQueue; // the network queue

    ...

    public void quit() {
        mQuit = true;
        interrupt();
    }

    @TargetApi(Build.VERSION_CODES.ICE_CREAM_SANDWICH)
    private void addTrafficStatsTag(Request<?> request) { // Tags the traffic so Volley's network usage can be attributed
        ...
    }

    @Override
    public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        Request<?> request;
        while (true) {
            long startTimeMs = SystemClock.elapsedRealtime();
            // release previous request object to avoid leaking request object when mQueue is drained.
            request = null;
            try {
                // 1. Block until a request appears on the network queue
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }

            try {
                request.addMarker("network-queue-take");

                // If the request was canceled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }

                addTrafficStatsTag(request);

                // 2. Execute the request (blocking) through the network object
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");

                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) { // 304 means the resource has not been modified
                    request.finish("not-modified");
                    continue;
                }

                // 3. Convert the NetworkResponse into a Response
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");

                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) {
                    // 4. Put the response into the cache
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }

                // 5. Deliver the result through the delivery
                request.markDelivered();
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                VolleyError volleyError = new VolleyError(e);
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                mDelivery.postError(request, volleyError);
            }
        }
    }

    private void parseAndDeliverNetworkError(Request<?> request, VolleyError error) {
        error = request.parseNetworkError(error);
        mDelivery.postError(request, error);
    }
}

The processing flow of NetworkDispatcher is similar to that of CacheDispatcher; see the comments above. TrafficStats is the Android API for attributing network traffic to a UID or tag, and it is not covered further here.

The key to the above code lies in the two calls mNetwork.performRequest(request) and request.parseNetworkResponse(networkResponse).

Network

Network is an interface with only one method, performRequest(Request<?> request):

public interface Network {

    public NetworkResponse performRequest(Request<?> request) throws VolleyError;

}

The implementation class of Network in this example is BasicNetwork:

public class BasicNetwork implements Network {
    protected static final boolean DEBUG = VolleyLog.DEBUG;
    private static int SLOW_REQUEST_THRESHOLD_MS = 3000;
    private static int DEFAULT_POOL_SIZE = 4096;
    protected final HttpStack mHttpStack;
    protected final ByteArrayPool mPool;

    public BasicNetwork(HttpStack httpStack) {
        // If a pool isn't passed in, then build a small default pool that will give us a lot of
        // benefit and not use too much memory.
        this(httpStack, new ByteArrayPool(DEFAULT_POOL_SIZE));
    }
    ...
}

Note the two key members of BasicNetwork, mHttpStack and mPool, and the dependency on the Apache HTTP classes:

import org.apache.http.Header;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.HttpStatus;
import org.apache.http.StatusLine;

But let's first look at the execution flow of performRequest():

public NetworkResponse performRequest(Request<?> request) throws VolleyError {
    long requestStart = SystemClock.elapsedRealtime();
    while (true) {
        // Depends on org.apache.http.HttpResponse
        HttpResponse httpResponse = null;
        byte[] responseContents = null;
        Map<String, String> responseHeaders = Collections.emptyMap();
        try {
            // 1. Build the headers
            Map<String, String> headers = new HashMap<String, String>();
            addCacheHeaders(headers, request.getCacheEntry());
            // 2. Issue the request through the HttpStack. Note that "issuing the request" does not happen inside Request; the Request object only holds the request information.
            httpResponse = mHttpStack.performRequest(request, headers);
            // 3. Pull some information out of the result
            StatusLine statusLine = httpResponse.getStatusLine();
            int statusCode = statusLine.getStatusCode();

            responseHeaders = convertHeaders(httpResponse.getAllHeaders());
            // 4. Use the status code (304) to decide whether the cached entry can be used directly
            if (statusCode == HttpStatus.SC_NOT_MODIFIED) {

                Entry entry = request.getCacheEntry();
                if (entry == null) {
                    return new NetworkResponse(HttpStatus.SC_NOT_MODIFIED, null,
                            responseHeaders, true,
                            SystemClock.elapsedRealtime() - requestStart);
                }

                // Take the data from the cache entry and return a new NetworkResponse
                entry.responseHeaders.putAll(responseHeaders);
                return new NetworkResponse(HttpStatus.SC_NOT_MODIFIED, entry.data,
                        entry.responseHeaders, true,
                        SystemClock.elapsedRealtime() - requestStart);
            }

            // 5. Use the status code to decide whether a redirect is needed
            if (statusCode == HttpStatus.SC_MOVED_PERMANENTLY || statusCode == HttpStatus.SC_MOVED_TEMPORARILY) {
                String newUrl = responseHeaders.get("Location");
                request.setRedirectUrl(newUrl);
            }

            // 6. Pull the body out of the response as a byte array
            // Some responses such as 204s do not have content. We must check.
            if (httpResponse.getEntity() != null) {
                // Read the data via entityToBytes, which may throw IOException
                responseContents = entityToBytes(httpResponse.getEntity());
            } else {
                // Add 0 byte response as a way of honestly representing a
                // no-content request.
                responseContents = new byte[0];
            }

            // if the request is slow, log it.
            long requestLifetime = SystemClock.elapsedRealtime() - requestStart;
            logSlowRequests(requestLifetime, request, responseContents, statusLine);

            if (statusCode < 200 || statusCode > 299) {
                throw new IOException();
            }
            return new NetworkResponse(statusCode, responseContents, responseHeaders, false,
                    SystemClock.elapsedRealtime() - requestStart);
        } catch (SocketTimeoutException e) {
            attemptRetryOnException("socket", request, new TimeoutError());
        } catch (ConnectTimeoutException e) {
            attemptRetryOnException("connection", request, new TimeoutError());
        } catch (MalformedURLException e) {
            throw new RuntimeException("Bad URL " + request.getUrl(), e);
        } catch (IOException e) {
            // 7. Reached when entityToBytes (or the non-2xx check above) throws an IOException
            int statusCode = 0;
            NetworkResponse networkResponse = null;
            if (httpResponse != null) {
                statusCode = httpResponse.getStatusLine().getStatusCode();
            } else {
                throw new NoConnectionError(e);
            }
            if (statusCode == HttpStatus.SC_MOVED_PERMANENTLY ||
                    statusCode == HttpStatus.SC_MOVED_TEMPORARILY) {
                VolleyLog.e("Request at %s has been redirected to %s", request.getOriginUrl(), request.getUrl());
            } else {
                VolleyLog.e("Unexpected response code %d for %s", statusCode, request.getUrl());
            }
            // If responseContents holds data
            if (responseContents != null) {
                networkResponse = new NetworkResponse(statusCode, responseContents,
                        responseHeaders, false, SystemClock.elapsedRealtime() - requestStart);
                // Retry (or fail) according to the status code
                if (statusCode == HttpStatus.SC_UNAUTHORIZED ||
                        statusCode == HttpStatus.SC_FORBIDDEN) {
                    attemptRetryOnException("auth",
                            request, new AuthFailureError(networkResponse));
                } else if (statusCode == HttpStatus.SC_MOVED_PERMANENTLY ||
                        statusCode == HttpStatus.SC_MOVED_TEMPORARILY) {
                    attemptRetryOnException("redirect",
                            request, new RedirectError(networkResponse));
                } else {
                    // TODO: Only throw ServerError for 5xx status codes.
                    throw new ServerError(networkResponse);
                }
            } else {
                throw new NetworkError(e);
            }
        }
    }
}
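Step 1 above calls addCacheHeaders(headers, request.getCacheEntry()), which is what makes the 304 branch in step 4 possible. Below is a simplified, hedged reconstruction of that helper rather than the verbatim Volley source; the etag and lastModified fields exist on Cache.Entry in recent Volley versions (older versions carried serverDate instead), and the date formatting here uses a standard formatter purely for illustration:

// Simplified sketch: attach conditional-request headers taken from the cached entry so
// the server can answer "304 Not Modified" instead of resending the whole body.
private void addCacheHeaders(Map<String, String> headers, Cache.Entry entry) {
    if (entry == null) {
        return; // nothing cached yet; send an unconditional request
    }
    if (entry.etag != null) {
        headers.put("If-None-Match", entry.etag); // validator from the previous response
    }
    if (entry.lastModified > 0) {
        SimpleDateFormat httpDate =
                new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss zzz", Locale.US);
        httpDate.setTimeZone(TimeZone.getTimeZone("GMT"));
        headers.put("If-Modified-Since", httpDate.format(new Date(entry.lastModified)));
    }
}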

The attemptRetryOnException() code is as follows:

private static void attemptRetryOnException(String logPrefix, Request<?> request,
        VolleyError exception) throws VolleyError {
    RetryPolicy retryPolicy = request.getRetryPolicy();
    int oldTimeout = request.getTimeoutMs();

    try {
        // The key statement
        retryPolicy.retry(exception);
    } catch (VolleyError e) {
        request.addMarker(
                String.format("%s-timeout-giveup [timeout=%s]", logPrefix, oldTimeout));
        throw e;
    }
    request.addMarker(String.format("%s-retry [timeout=%s]", logPrefix, oldTimeout));
}

RetryPolicy is an interface:

public interface RetryPolicy {
    public int getCurrentTimeout();
    public int getCurrentRetryCount();
    public void retry(VolleyError error) throws VolleyError;
}

Unless specified otherwise, the RetryPolicy of a request is DefaultRetryPolicy, whose retry method is implemented as follows:

public void retry(VolleyError error) throws VolleyError {
    mCurrentRetryCount++;
    mCurrentTimeoutMs += (mCurrentTimeoutMs * mBackoffMultiplier);
    if (!hasAttemptRemaining()) {
        throw error;
    }
}
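So each retry bumps the retry count and multiplies the timeout (exponential backoff), and an error is only thrown once the attempts run out. Callers can tune this per request; the numbers below (2.5 s initial timeout, 3 retries, 2x backoff) are purely illustrative:

// Usage sketch: override the default retry policy for a single request.
stringRequest.setRetryPolicy(new DefaultRetryPolicy(
        2500, // initial timeout in milliseconds
        3,    // maximum number of retries
        2f)); // backoff multiplier applied to the timeout on each retry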

If the retry limit has not been exceeded, no exception is thrown and control returns to the while loop of performRequest(), which issues the request again. Next, let's analyze the entityToBytes() method of BasicNetwork:

private byte[] entityToBytes(HttpEntity entity) throws IOException, ServerError {
    // 1. Create a new PoolingByteArrayOutputStream
    PoolingByteArrayOutputStream bytes =
            new PoolingByteArrayOutputStream(mPool, (int) entity.getContentLength());
    byte[] buffer = null;
    try {
        InputStream in = entity.getContent();
        if (in == null) {
            throw new ServerError();
        }
        // 2. Take a 1024-byte buffer out of the byte pool
        buffer = mPool.getBuf(1024);
        int count;
        // 3. Read data from the entity's InputStream into the buffer
        while ((count = in.read(buffer)) != -1) {
            // and write the buffer into the PoolingByteArrayOutputStream
            bytes.write(buffer, 0, count);
        }
        // 4. Return all of the data
        return bytes.toByteArray();
    } finally {
        try {
            // Close the InputStream and release the resources by "consuming the content".
            entity.consumeContent();
        } catch (IOException e) {
            // This can happen if there was an exception above that left the entity in
            // an invalid state.
            VolleyLog.v("Error occurred when calling consumingContent");
        }
        // 5. Return the buffer to the byte pool
        mPool.returnBuf(buffer);
        bytes.close();
    }
}

The execution steps are described in the code comments above. The ByteArrayPool and PoolingByteArrayOutputStream classes are not examined in detail here; only the recycling idea behind ByteArrayPool is sketched below.
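
As a rough idea of that recycling, here is a hedged, simplified sketch in the spirit of ByteArrayPool; the real class also tracks buffers by last use when trimming, which is omitted here:

import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the ByteArrayPool idea: reuse previously returned byte[] buffers
// that are at least the requested size, and cap the total number of pooled bytes.
public class SimpleByteArrayPool {
    private final List<byte[]> mBuffersBySize = new ArrayList<byte[]>(); // kept sorted by length
    private final int mSizeLimit;
    private int mCurrentSize = 0;

    public SimpleByteArrayPool(int sizeLimit) {
        mSizeLimit = sizeLimit;
    }

    public synchronized byte[] getBuf(int len) {
        for (int i = 0; i < mBuffersBySize.size(); i++) {
            byte[] buf = mBuffersBySize.get(i);
            if (buf.length >= len) {
                mCurrentSize -= buf.length;
                mBuffersBySize.remove(i);
                return buf; // reuse an existing buffer
            }
        }
        return new byte[len]; // nothing large enough pooled; allocate a fresh buffer
    }

    public synchronized void returnBuf(byte[] buf) {
        if (buf == null || buf.length > mSizeLimit) {
            return; // too large to be worth pooling
        }
        int pos = 0;
        while (pos < mBuffersBySize.size() && mBuffersBySize.get(pos).length < buf.length) {
            pos++; // keep the list sorted by buffer length
        }
        mBuffersBySize.add(pos, buf);
        mCurrentSize += buf.length;
        while (mCurrentSize > mSizeLimit && !mBuffersBySize.isEmpty()) {
            mCurrentSize -= mBuffersBySize.remove(0).length; // trim (simplified eviction)
        }
    }
}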

HttpStack

HttpStack is an interface that is only responsible for sending the request:

public interface HttpStack {
    public HttpResponse performRequest(Request<?> request, Map<String, String> additionalHeaders)
            throws IOException, AuthFailureError;
}

From the earlier analysis of the Volley class, we know that HurlStack (built on java.net.HttpURLConnection) is used when the SDK version is >= 9, and HttpClientStack (built on org.apache.http.client.HttpClient) otherwise.

Each stack implements performRequest(), which is where the HTTP request is actually issued. Their internals are not elaborated here; a rough sketch of the idea follows.
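To illustrate the contract, here is a hedged, GET-only sketch of an HttpStack built directly on HttpURLConnection. It is an approximation of what HurlStack does, not the real implementation, and it skips request bodies, redirects and error streams:

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Map;

import org.apache.http.HttpResponse;
import org.apache.http.ProtocolVersion;
import org.apache.http.entity.BasicHttpEntity;
import org.apache.http.message.BasicHttpResponse;
import org.apache.http.message.BasicStatusLine;

import com.android.volley.AuthFailureError;
import com.android.volley.Request;
import com.android.volley.toolbox.HttpStack;

// GET-only sketch of the HttpStack contract: open a connection, copy headers in,
// then wrap the result in the Apache HttpResponse that BasicNetwork expects.
public class SimpleUrlStack implements HttpStack {
    @Override
    public HttpResponse performRequest(Request<?> request, Map<String, String> additionalHeaders)
            throws IOException, AuthFailureError {
        HttpURLConnection conn = (HttpURLConnection) new URL(request.getUrl()).openConnection();
        conn.setConnectTimeout(request.getTimeoutMs());
        conn.setReadTimeout(request.getTimeoutMs());

        // Headers set on the Request itself plus the cache headers added by BasicNetwork.
        for (Map.Entry<String, String> header : request.getHeaders().entrySet()) {
            conn.addRequestProperty(header.getKey(), header.getValue());
        }
        for (Map.Entry<String, String> header : additionalHeaders.entrySet()) {
            conn.addRequestProperty(header.getKey(), header.getValue());
        }

        BasicHttpResponse response = new BasicHttpResponse(new BasicStatusLine(
                new ProtocolVersion("HTTP", 1, 1),
                conn.getResponseCode(), conn.getResponseMessage()));
        BasicHttpEntity entity = new BasicHttpEntity();
        entity.setContent(conn.getInputStream());
        entity.setContentLength(conn.getContentLength());
        response.setEntity(entity);
        return response;
    }
}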

Request

The Request class mainly stores the request's parameters and current state; it does not itself perform any network I/O:

public abstract class Request<T> implements Comparable<Request<T>> {

    ...

    public interface Method {
        int DEPRECATED_GET_OR_POST = -1;
        int GET = 0;
        int POST = 1;
        int PUT = 2;
        int DELETE = 3;
        int HEAD = 4;
        int OPTIONS = 5;
        int TRACE = 6;
        int PATCH = 7;
    }

    ...

    private final int mMethod;
    private final String mUrl;
    private String mRedirectUrl;
    private String mIdentifier;
    private final int mDefaultTrafficStatsTag;
    private Response.ErrorListener mErrorListener;
    private Integer mSequence;
    private RequestQueue mRequestQueue;
    private boolean mShouldCache = true;
    private boolean mCanceled = false;
    private boolean mResponseDelivered = false;
    private RetryPolicy mRetryPolicy;

    ...
}

Let's analyze the request.parseNetworkResponse(networkResponse) method. Take StringRequest as an example:

@Override
protected Response<String> parseNetworkResponse(NetworkResponse response) {
    String parsed;
    try {
        parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
    } catch (UnsupportedEncodingException e) {
        parsed = new String(response.data);
    }
    return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
}

You can see that it simply converts the data into a string and then returns a success response.

The implementation in JsonObjectRequest is as follows:

@Override
protected Response<JSONObject> parseNetworkResponse(NetworkResponse response) {
    try {
        String jsonString = new String(response.data,
                HttpHeaderParser.parseCharset(response.headers, PROTOCOL_CHARSET));
        return Response.success(new JSONObject(jsonString),
                HttpHeaderParser.parseCacheHeaders(response));
    } catch (UnsupportedEncodingException e) {
        return Response.error(new ParseError(e));
    } catch (JSONException je) {
        return Response.error(new ParseError(je));
    }
}

Here the data is converted into a string and then parsed into a JSONObject, which is returned. The same pattern extends naturally to custom response types, as sketched below.
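The class below is a hedged sketch in the style of the GsonRequest example from the official Android training docs; it relies on the external Gson library and is not part of Volley itself:

import java.io.UnsupportedEncodingException;

import com.android.volley.NetworkResponse;
import com.android.volley.ParseError;
import com.android.volley.Request;
import com.android.volley.Response;
import com.android.volley.toolbox.HttpHeaderParser;
import com.google.gson.Gson;
import com.google.gson.JsonSyntaxException;

// Sketch: a typed request that parses the response body into an arbitrary class with Gson.
public class GsonRequest<T> extends Request<T> {
    private final Gson mGson = new Gson();
    private final Class<T> mClazz;
    private final Response.Listener<T> mListener;

    public GsonRequest(String url, Class<T> clazz,
            Response.Listener<T> listener, Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        mClazz = clazz;
        mListener = listener;
    }

    @Override
    protected Response<T> parseNetworkResponse(NetworkResponse response) {
        try {
            String json = new String(response.data,
                    HttpHeaderParser.parseCharset(response.headers));
            return Response.success(mGson.fromJson(json, mClazz),
                    HttpHeaderParser.parseCacheHeaders(response));
        } catch (UnsupportedEncodingException e) {
            return Response.error(new ParseError(e));
        } catch (JsonSyntaxException e) {
            return Response.error(new ParseError(e));
        }
    }

    @Override
    protected void deliverResponse(T response) {
        mListener.onResponse(response); // invoked by ExecutorDelivery on the delivery thread
    }
}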

Summary

In summary, the general framework of Volley is as follows:

A RequestQueue contains two internal queues: the cache queue and the network queue. It also owns one cache dispatcher and n network dispatchers, both of which extend Thread and are responsible for serving requests from the cache and from the network respectively, plus a delivery object that distributes request results.

The cache dispatcher runs on its own thread. It loops, waiting for requests on the cache queue, executing them, and passing the results to the delivery.

The n network dispatchers each run on their own thread. Each one loops, taking requests off the network queue, executing them, writing the results to the cache where applicable, and handing them to the delivery.

Delivery is responsible for passing the results to the corresponding listener callbacks on the main thread.
