(Figure: Volley's overall flow — image not preserved in this copy.)

A typical usage example:

RequestQueue queue = Volley.newRequestQueue(context);
Request<String> request = new Request<String>(Request.Method.GET, "url", new Response.ErrorListener() {
    @Override
    public void onErrorResponse(VolleyError error) {
    }
}) {
    @Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        return null;
    }

    @Override
    protected void deliverResponse(String response) {
    }
};
=====================================================
Add a request:
queue.add(request);
Add another:
queue.add(request2);
And another:
queue.add(request3);

RequestQueue acts as a dispatcher, responsible for scheduling all of this work.
Its main operations are:
start: starts the cache dispatcher thread and the network dispatcher threads
stop: stops all dispatcher threads
add: puts a request onto the mCacheQueue or mNetworkQueue queue
cancelAll: cancels requests
...and so on.
Let's look at the source code:
public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, (BaseHttpStack) null);
}
...which ultimately calls the following method...
private static RequestQueue newRequestQueue(Context context, Network network) {
    final Context appContext = context.getApplicationContext();
    // Use a lazy supplier for the cache directory so that newRequestQueue() can be called on
    // main thread without causing strict mode violation.
    DiskBasedCache.FileSupplier cacheSupplier =
            new DiskBasedCache.FileSupplier() {
                private File cacheDir = null;

                @Override
                public File get() {
                    if (cacheDir == null) {
                        cacheDir = new File(appContext.getCacheDir(), DEFAULT_CACHE_DIR);
                    }
                    return cacheDir;
                }
            };
    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheSupplier), network);
    queue.start();
    return queue;
}
public void start() {
    stop(); // Make sure any currently running dispatchers are stopped.

    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher =
                new NetworkDispatcher(mNetworkQueue, mNetwork, mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
public void stop() {
    if (mCacheDispatcher != null) {
        mCacheDispatcher.quit();
    }
    for (final NetworkDispatcher mDispatcher : mDispatchers) {
        if (mDispatcher != null) {
            mDispatcher.quit();
        }
    }
}
public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");
    sendRequestEvent(request, RequestEvent.REQUEST_QUEUED);
    beginRequest(request);
    return request;
}
<T> void beginRequest(Request<T> request) {
    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        sendRequestOverNetwork(request);
    } else {
        mCacheQueue.add(request);
    }
}
From the source above, newRequestQueue does two things:
1. It defines the cache location — a lazily created "volley" subdirectory (DEFAULT_CACHE_DIR) under the app's internal cache directory — lazy so that newRequestQueue() can run on the main thread without a strict-mode violation.
2. It calls queue.start(), which first calls stop() to shut down any running dispatcher threads, then starts 1 cache dispatcher thread and 4 network dispatcher threads (the default pool size).
The add method also does two things:
1. request.setRequestQueue(this) binds the Request to this RequestQueue, so the RequestQueue can operate on the Request — for example, Request.finish(final String tag) simply calls mRequestQueue.finish(this).
2. It hands the request to the cache queue, or straight to the network queue if the request is not cacheable.
Note the mCurrentRequests Set and the mCurrentRequests.add(request) call. Why record every request in mCurrentRequests? Looking at the source, it is used by cancelAll and finish:
public void cancelAll(RequestFilter filter) {
    synchronized (mCurrentRequests) {
        for (Request<?> request : mCurrentRequests) {
            if (filter.apply(request)) {
                request.cancel();
            }
        }
    }
}
<T> void finish(Request<T> request) {
    // Remove from the set of requests currently being processed.
    synchronized (mCurrentRequests) {
        mCurrentRequests.remove(request);
    }
    synchronized (mFinishedListeners) {
        for (RequestFinishedListener<T> listener : mFinishedListeners) {
            listener.onRequestFinished(request);
        }
    }
    sendRequestEvent(request, RequestEvent.REQUEST_FINISHED);
}

mCurrentRequests tracks every in-flight request: each request added to the queue is recorded here, and when a request finishes or is cancelled it is removed from the set.
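The bookkeeping above can be sketched as a tiny registry. This is a hypothetical simplification — plain Strings stand in for Request objects, and RequestRegistry and the Predicate-based filter are invented names, not Volley's actual classes:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Predicate;

// Hypothetical, simplified sketch of RequestQueue's mCurrentRequests bookkeeping.
class RequestRegistry {
    // Volley stores Request<?> objects; plain Strings stand in for them here.
    private final Set<String> current = new HashSet<>();

    void add(String request) {                       // like RequestQueue.add()
        synchronized (current) { current.add(request); }
    }

    void finish(String request) {                    // like RequestQueue.finish()
        synchronized (current) { current.remove(request); }
    }

    int cancelAll(Predicate<String> filter) {        // like cancelAll(RequestFilter)
        int cancelled = 0;
        synchronized (current) {
            for (String r : current) {
                if (filter.test(r)) cancelled++;     // real Volley calls request.cancel()
            }
        }
        return cancelled;
    }

    int size() {
        synchronized (current) { return current.size(); }
    }
}
```

Every operation synchronizes on the set, mirroring the synchronized (mCurrentRequests) blocks in the real source, since requests are added, finished and cancelled from different threads.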
Next, let's see what the cache thread and the network threads actually do.
mCacheDispatcher.start() ends up invoking the dispatcher's run method:
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

    // Make a blocking call to initialize the cache.
    mCache.initialize();

    while (true) {
        try {
            processRequest();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                Thread.currentThread().interrupt();
                return;
            }
            VolleyLog.e(
                    "Ignoring spurious interrupt of CacheDispatcher thread; "
                            + "use quit() to terminate it");
        }
    }
}

This run method mainly does two things.
(As we saw in newRequestQueue, the mCache implementation here is DiskBasedCache.)
First, it makes a blocking call to initialize the cache — caching itself is covered in detail in another article.
Second, it loops forever processing requests. Let's look at processRequest:
private void processRequest() throws InterruptedException {
    // Get a request from the cache triage queue, blocking until
    // at least one is available.
    final Request<?> request = mCacheQueue.take();
    processRequest(request);
}
@VisibleForTesting
void processRequest(final Request<?> request) throws InterruptedException {
    request.addMarker("cache-queue-take");
    request.sendEvent(RequestQueue.RequestEvent.REQUEST_CACHE_LOOKUP_STARTED);
    try {
        // ------------------------------------------------------------ //1
        // If the request has been canceled, don't bother dispatching it.
        if (request.isCanceled()) {
            request.finish("cache-discard-canceled");
            return;
        }

        // Attempt to retrieve this item from cache.
        Cache.Entry entry = mCache.get(request.getCacheKey());
        if (entry == null) { // --------------------------------------- //2
            request.addMarker("cache-miss");
            // Cache miss; send off to the network dispatcher.
            if (!mWaitingRequestManager.maybeAddToWaitingRequests(request)) {
                mNetworkQueue.put(request);
            }
            return;
        }

        // Use a single instant to evaluate cache expiration. Otherwise, a cache entry with
        // identical soft and hard TTL times may appear to be valid when checking isExpired but
        // invalid upon checking refreshNeeded(), triggering a soft TTL refresh which should be
        // impossible.
        long currentTimeMillis = System.currentTimeMillis();

        // If it is completely expired, just send it to the network.
        if (entry.isExpired(currentTimeMillis)) { // ------------------ //3
            request.addMarker("cache-hit-expired");
            request.setCacheEntry(entry);
            if (!mWaitingRequestManager.maybeAddToWaitingRequests(request)) {
                mNetworkQueue.put(request);
            }
            return;
        }

        // We have a cache hit; parse its data for delivery back to the request.
        request.addMarker("cache-hit");
        Response<?> response =
                request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
        request.addMarker("cache-hit-parsed");
        if (!response.isSuccess()) { // ------------------------------- //4
            request.addMarker("cache-parsing-failed");
            mCache.invalidate(request.getCacheKey(), true);
            request.setCacheEntry(null);
            if (!mWaitingRequestManager.maybeAddToWaitingRequests(request)) {
                mNetworkQueue.put(request);
            }
            return;
        }

        if (!entry.refreshNeeded(currentTimeMillis)) { // ------------- //5
            // Completely unexpired cache hit. Just deliver the response.
            mDelivery.postResponse(request, response);
        } else {
            // Soft-expired cache hit. We can deliver the cached response,
            // but we need to also send the request to the network for
            // refreshing.
            request.addMarker("cache-hit-refresh-needed");
            request.setCacheEntry(entry);
            // Mark the response as intermediate.
            response.intermediate = true;
            if (!mWaitingRequestManager.maybeAddToWaitingRequests(request)) {
                // Post the intermediate response back to the user and have
                // the delivery then forward the request along to the network.
                mDelivery.postResponse( // ---------------------------- //6
                        request,
                        response,
                        new Runnable() {
                            @Override
                            public void run() {
                                try {
                                    mNetworkQueue.put(request);
                                } catch (InterruptedException e) {
                                    // Restore the interrupted status
                                    Thread.currentThread().interrupt();
                                }
                            }
                        });
            } else {
                // request has been added to list of waiting requests
                // to receive the network response from the first request once it returns.
                mDelivery.postResponse(request, response); // --------- //6
            }
        }
    } finally {
        request.sendEvent(RequestQueue.RequestEvent.REQUEST_CACHE_LOOKUP_FINISHED);
    }
}

From the above we can see that the cache thread pulls requests from the cache queue in an endless loop. The queue blocks when empty, so the thread never runs to completion and never dies.
Now let's follow the steps one by one (the //N markers above):
1. If the request has been cancelled, finish it and stop.
2. Look up the cache entry by key; on a cache miss, put the request onto the network queue and stop.
3. If the entry is fully (hard-TTL) expired, attach the entry to the request, put the request onto the network queue and stop.
4. If the cached data fails to parse, invalidate the entry, put the request onto the network queue and stop.
5. If the entry does not need refreshing, deliver the response to the caller and stop.
6. If the entry is soft-expired, deliver the cached response as an intermediate result and also forward the request to the network queue for a refresh.
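The branch structure above can be condensed into a small decision function. This is a hypothetical sketch — CacheTriage, CacheEntry and Action are invented names; real Volley uses Cache.Entry and inlines this logic in processRequest:

```java
// Hypothetical sketch of the CacheDispatcher decision flow.
class CacheTriage {
    enum Action { SEND_TO_NETWORK, DELIVER, DELIVER_THEN_REFRESH }

    static class CacheEntry {
        final long hardTtl; // entry invalid after this time (isExpired)
        final long softTtl; // entry stale after this time (refreshNeeded)
        CacheEntry(long hardTtl, long softTtl) {
            this.hardTtl = hardTtl;
            this.softTtl = softTtl;
        }
    }

    static Action triage(CacheEntry entry, long now) {
        if (entry == null) return Action.SEND_TO_NETWORK;        // cache miss    (//2)
        if (now >= entry.hardTtl) return Action.SEND_TO_NETWORK; // hard-expired  (//3)
        if (now < entry.softTtl) return Action.DELIVER;          // fully fresh   (//5)
        return Action.DELIVER_THEN_REFRESH;                      // soft-expired  (//6)
    }
}
```

The single `now` argument mirrors the `currentTimeMillis` comment in the real source: evaluating both TTLs against one instant avoids an entry looking unexpired for the hard check yet expired for the soft check.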
The cache itself is covered in detail in another article.
networkDispatcher.start() likewise ends up invoking run():
@Override
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    while (true) {
        try {
            processRequest();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                Thread.currentThread().interrupt();
                return;
            }
            VolleyLog.e(
                    "Ignoring spurious interrupt of NetworkDispatcher thread; "
                            + "use quit() to terminate it");
        }
    }
}

private void processRequest() throws InterruptedException {
    // Take a request from the queue. ------------- //1
    Request<?> request = mQueue.take();
    processRequest(request);
}

Like the cache thread, the network threads take requests from a blocking queue, so they too stay alive forever.
Let's look at the processRequest source:
@VisibleForTesting
void processRequest(Request<?> request) {
    long startTimeMs = SystemClock.elapsedRealtime();
    request.sendEvent(RequestQueue.RequestEvent.REQUEST_NETWORK_DISPATCH_STARTED);
    try {
        request.addMarker("network-queue-take");

        // If the request was cancelled already, do not perform the
        // network request. ------------------------------------------ //1
        if (request.isCanceled()) {
            request.finish("network-discard-cancelled");
            request.notifyListenerResponseNotUsable();
            return;
        }

        addTrafficStatsTag(request);

        // Perform the network request.
        NetworkResponse networkResponse = mNetwork.performRequest(request);
        request.addMarker("network-http-complete");

        // If the server returned 304 AND we delivered a response already,
        // we're done -- don't deliver a second identical response. --- //2
        if (networkResponse.notModified && request.hasHadResponseDelivered()) {
            request.finish("not-modified");
            request.notifyListenerResponseNotUsable();
            return;
        }

        // Parse the response here on the worker thread.
        Response<?> response = request.parseNetworkResponse(networkResponse);
        request.addMarker("network-parse-complete");

        // Write to cache if applicable. ------------------------------ //3
        // TODO: Only update cache metadata instead of entire record for 304s.
        if (request.shouldCache() && response.cacheEntry != null) {
            mCache.put(request.getCacheKey(), response.cacheEntry);
            request.addMarker("network-cache-written");
        }

        // Post the response back. ------------------------------------ //4
        request.markDelivered();
        mDelivery.postResponse(request, response);
        request.notifyListenerResponseReceived(response);
    } catch (VolleyError volleyError) {
        volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
        parseAndDeliverNetworkError(request, volleyError);
        request.notifyListenerResponseNotUsable();
    } catch (Exception e) {
        VolleyLog.e(e, "Unhandled exception %s", e.toString());
        VolleyError volleyError = new VolleyError(e);
        volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
        mDelivery.postError(request, volleyError);
        request.notifyListenerResponseNotUsable();
    } finally {
        request.sendEvent(RequestQueue.RequestEvent.REQUEST_NETWORK_DISPATCH_FINISHED);
    }
}

1. The queue here is an unbounded, ordered blocking queue (a BlockingQueue; Volley uses a PriorityBlockingQueue). Taking from an empty queue suspends the calling thread until an element is available; putting into a full queue likewise suspends until space frees up (PriorityBlockingQueue has no capacity limit, so it never fills). It is also thread-safe, so the while (true) loop does not burn CPU while idle.
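A minimal, standalone demonstration of both properties — blocking take() and smallest-first ordering — using java.util.concurrent directly, with Integers standing in for requests ordered by sequence number (the class name is invented for illustration):

```java
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.TimeUnit;

// Demonstrates the two PriorityBlockingQueue behaviors the dispatchers rely on.
class BlockingQueueDemo {
    // take() suspends the caller until a producer thread puts an element,
    // just as a NetworkDispatcher sleeps while its queue is empty.
    static int takeBlocksUntilPut() {
        final PriorityBlockingQueue<Integer> queue = new PriorityBlockingQueue<>();
        Thread producer = new Thread(() -> {
            try {
                TimeUnit.MILLISECONDS.sleep(50); // consumer is already blocked by now
                queue.put(7);
            } catch (InterruptedException ignored) { }
        });
        producer.start();
        try {
            int taken = queue.take(); // blocks ~50 ms, no busy-waiting
            producer.join();
            return taken;
        } catch (InterruptedException e) {
            return -1;
        }
    }

    // Elements come out smallest-first — analogous to Volley ordering requests
    // by priority and then by sequence number.
    static int[] orderedTakes() {
        PriorityBlockingQueue<Integer> queue = new PriorityBlockingQueue<>();
        queue.put(7);
        queue.put(3);
        try {
            return new int[] { queue.take(), queue.take() };
        } catch (InterruptedException e) {
            return new int[0];
        }
    }
}
```

This is why the dispatcher loops never terminate on their own: take() only returns with an element or an InterruptedException, which is exactly what quit() triggers.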
Let's walk through the request flow step by step:
1. If the request has been cancelled, finish it without performing the network request.
2. If the server returned 304 and a response was already delivered, just finish — there is no point delivering a second identical response.
3. Parse the network response; if the request should be cached and produced a cache entry, write it to the cache.
4. Mark the request as delivered and post the response back to the caller.
Where is mDelivery.postResponse(request, response) handled? Both NetworkDispatcher and CacheDispatcher call it, and the source shows that mDelivery is actually an ExecutorDelivery.
ExecutorDelivery hands results back across threads using a Handler. It implements the ResponseDelivery interface, whose three methods take a Request (the request the user created), a Response (the parsed data, or a VolleyError), and optionally a Runnable to run after delivery.
When is it created? When the RequestQueue is instantiated:
public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(
            cache,
            network,
            threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}

Let's look at the ExecutorDelivery source:
public ExecutorDelivery(final Handler handler) {
    // Make an Executor that just wraps the handler.
    mResponsePoster =
            new Executor() {
                @Override
                public void execute(Runnable command) {
                    handler.post(command);
                }
            };
}

Now the code behind mDelivery.postResponse(request, response):
@Override
public void postResponse(Request<?> request, Response<?> response) {
    postResponse(request, response, null);
}

@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
    request.markDelivered();
    request.addMarker("post-response");
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}

mResponsePoster is the Executor above, so execute() boils down to handler.post(command) (see another article for how Handler works). The command is a ResponseDeliveryRunnable, and posting it means its run method executes on the Handler's thread. Let's look at it:
public void run() {
    // NOTE: If cancel() is called off the thread that we're currently running in (by
    // default, the main thread), we cannot guarantee that deliverResponse()/deliverError()
    // won't be called, since it may be canceled after we check isCanceled() but before we
    // deliver the response. Apps concerned about this guarantee must either call cancel()
    // from the same thread or implement their own guarantee about not invoking their
    // listener after cancel() has been called.

    // If this request has canceled, finish it and don't deliver.
    if (mRequest.isCanceled()) {
        mRequest.finish("canceled-at-delivery");
        return;
    }

    // Deliver a normal response or error, depending.
    if (mResponse.isSuccess()) {
        mRequest.deliverResponse(mResponse.result);
    } else {
        mRequest.deliverError(mResponse.error);
    }

    // If this is an intermediate response, add a marker, otherwise we're done
    // and the request can be finished.
    if (mResponse.intermediate) {
        mRequest.addMarker("intermediate-response");
    } else {
        mRequest.finish("done");
    }

    // If we have been provided a post-delivery runnable, run it.
    if (mRunnable != null) {
        mRunnable.run();
    }
}

The key calls here are mRequest.deliverResponse(mResponse.result) and mRequest.deliverError(mResponse.error).
deliverResponse is abstract; its concrete implementation lives in the Request instantiation. Looking back at the usage example at the top, mRequest.deliverResponse(mResponse.result) invokes exactly the deliverResponse we implemented there.
For deliverError, let's dig one level deeper:
public void deliverError(VolleyError error) {
    Response.ErrorListener listener;
    synchronized (mLock) {
        listener = mErrorListener;
    }
    if (listener != null) {
        listener.onErrorResponse(error);
    }
}

So the ErrorListener is the one we supplied in the usage example at the top — again, plain callbacks.
ExecutorDelivery leverages Handler to pass the result back to the main thread.

Volley is, in essence, a collect-process-dispatch pipeline built on a few design patterns:
1. Producer-consumer pattern
RequestQueue is the producer, continually add-ing requests; NetworkDispatcher and CacheDispatcher are the consumers, continually draining them. Producers and consumers never interact directly — they are decoupled by the queues.
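A miniature of this structure (MiniDispatch and its method names are invented for illustration): a producing side adds jobs to a shared BlockingQueue while worker threads loop on take(), much like start() spinning up dispatcher threads:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical miniature of Volley's producer/consumer split.
class MiniDispatch {
    static int process(int jobs, int workers) {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        AtomicInteger done = new AtomicInteger();
        CountDownLatch latch = new CountDownLatch(jobs);

        // Consumers: like start() creating NetworkDispatcher threads.
        for (int i = 0; i < workers; i++) {
            Thread t = new Thread(() -> {
                while (true) {
                    try {
                        queue.take();            // blocks while idle, like processRequest()
                        done.incrementAndGet();  // "handle" the request
                        latch.countDown();
                    } catch (InterruptedException e) {
                        return;                  // like quit()
                    }
                }
            });
            t.setDaemon(true);
            t.start();
        }

        // Producer: like repeated calls to queue.add(request).
        for (int j = 0; j < jobs; j++) queue.add(j);

        try {
            latch.await();                       // wait until every job is consumed
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }
}
```

Note how the producer never knows which worker handles a job; the queue is the only contact point, which is the whole value of the pattern.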
2. Strategy pattern
When the Android SDK level is below 9, an HttpStack backed by HttpClient is created; otherwise one backed by HttpURLConnection. You can also plug in a custom stack.
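A sketch of that selection, with simplified stand-ins — StackChooser is an invented name, and while HurlStack and HttpClientStack echo Volley's real class names, they are reduced to labels here:

```java
// Hypothetical sketch of the HttpStack selection strategy.
class StackChooser {
    interface HttpStack { String name(); }              // the strategy interface

    static class HurlStack implements HttpStack {       // HttpURLConnection-based
        public String name() { return "HttpURLConnection"; }
    }

    static class HttpClientStack implements HttpStack { // HttpClient-based
        public String name() { return "HttpClient"; }
    }

    // Mirrors the old Volley check on Build.VERSION.SDK_INT.
    static HttpStack choose(int sdkInt) {
        return sdkInt >= 9 ? new HurlStack() : new HttpClientStack();
    }
}
```

Because callers only see the HttpStack interface, swapping in a custom stack (e.g. one backed by OkHttp) requires no change to the rest of the pipeline.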
3. Template method pattern
Volley's Request design is a template method: whether you request a String, a JsonObject or a JsonArray, the only difference is how the returned data is parsed (parseNetworkResponse), so the parsing step is abstracted for subclasses to implement.
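The idea can be sketched with invented MiniRequest/MiniStringRequest classes (real Volley's Request and Response carry far more state):

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the template-method structure in Request.
abstract class MiniRequest<T> {
    // Template method: the fixed skeleton every request type shares.
    final T handleResponse(byte[] data) {
        T parsed = parse(data); // the only step that varies per subclass
        deliver(parsed);
        return parsed;
    }

    protected abstract T parse(byte[] data);  // like parseNetworkResponse()

    protected void deliver(T response) {
        // real Volley posts to the caller's listener here
    }
}

class MiniStringRequest extends MiniRequest<String> {
    @Override
    protected String parse(byte[] data) {
        return new String(data, StandardCharsets.UTF_8);
    }
}
```

A hypothetical MiniJsonRequest would differ only in its parse() body; the delivery skeleton stays fixed in the base class, which is exactly the template-method trade.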
Q: Why is Volley only suitable for small payloads (roughly under 3 MB)?
From the source we know there are only 4 network threads and 1 cache thread. A large transfer occupies a thread for a long time; with just 5 threads in total, they are quickly exhausted, subsequent requests block in the queues, and UI responsiveness suffers.
Originality statement: this article was published to the Tencent Cloud Developer Community with the author's authorization; it may not be reproduced without permission. For infringement concerns, contact cloudcommunity@tencent.com for removal.