I want to write a program in Qt that downloads a large number of HTML pages, roughly 5000 per day, from a single website. After downloading the pages I need to extract some data from them with DOM queries, using the WebKit module, and then store that data in a database.
What is the best/correct/efficient way to do this, in particular for the download and parsing stages? How should I handle that many requests, and how do I build a "download manager"?
Posted on 2010-10-12 06:49:30
For downloading the pages it makes sense to use a dedicated library such as libcurl.
Posted on 2013-03-20 04:14:59
This question already has an answer, but here is a solution that uses what you actually asked for, namely Qt.
You can build a website crawler in Qt (specifically with QNetworkAccessManager, QNetworkRequest and QNetworkReply). I am not sure whether this is the correct way to approach this kind of task, but I found that using multiple threads maximises efficiency and saves time. (Please let me know if there is another way, or confirm whether this is good practice.)
The idea is that a list of jobs is queued, a worker performs one job, and after it receives the information/HTML it processes it and then moves on to the next item.
The worker object class should accept a URL, download that URL's HTML data, and then process the information once it has been received.
Create a queue and a manager for it. I created a QQueue<checkNewArrivalWorker*> workQueue, plus a counter, to control the number of items being processed concurrently and to hold the list of jobs still to be done.
//First declare the queue and the worker cap somewhere (e.g. as class members)
QQueue<checkNewArrivalWorker*> workQueue;  //workers waiting to be started
int maxWorkers = 10;                       //upper bound on concurrent workers
int currentWorkers = 0;                    //workers currently running

//Then create the workers
void downloadNewArrivals::createWorkers(QString url){
    checkNewArrivalWorker* worker = new checkNewArrivalWorker(url);
    workQueue.enqueue(worker);
}
//Make a function to control the number of workers,
//and process the queue as workers finish
void downloadNewArrivals::processWorkQueue(){
    if (workQueue.isEmpty() && currentWorkers == 0){
        qDebug() << "Work Queue Empty";
    } else if (!workQueue.isEmpty()){
        //Start workers in separate threads, at most maxWorkers at a time
        while (currentWorkers < maxWorkers && !workQueue.isEmpty()){
            QThread* thread = new QThread;
            checkNewArrivalWorker* worker = workQueue.dequeue();
            worker->moveToThread(thread);
            connect(worker, SIGNAL(error(QString)), this, SLOT(errorString(QString)));
            connect(thread, SIGNAL(started()), worker, SLOT(process()));
            connect(worker, SIGNAL(finished()), thread, SLOT(quit()));
            connect(worker, SIGNAL(finished()), worker, SLOT(deleteLater()));
            connect(thread, SIGNAL(finished()), this, SLOT(reduceThreadCounterAndProcessNext()));
            connect(thread, SIGNAL(finished()), thread, SLOT(deleteLater()));
            thread->start();
            currentWorkers++;
        }
    }
}
//When a worker finishes, free its slot and process the next one
void downloadNewArrivals::reduceThreadCounterAndProcessNext(){
    currentWorkers--;  //currentWorkers caps the number of concurrent workers
    processWorkQueue();
}
//Now the worker
//The important parts of the worker class...
void checkNewArrivalWorker::getPages(QString url){
    QNetworkAccessManager* manager = new QNetworkAccessManager(this);  //created on the heap, parented to the worker
    QNetworkRequest getPageRequest = QNetworkRequest(QUrl(url));
    getPageRequest.setRawHeader("User-Agent", "Mozilla/5.0 (X11; U; Linux i686 (x86_64); "
                                "en-US; rv:1.9.0.1) Gecko/2008070206 Firefox/3.0.1");
    getPageRequest.setRawHeader("Accept-Charset", "utf-8");
    getPageRequest.setRawHeader("Connection", "keep-alive");
    connect(manager, SIGNAL(finished(QNetworkReply*)), this, SLOT(replyGetPagesFinished(QNetworkReply*)));
    connect(manager, SIGNAL(finished(QNetworkReply*)), manager, SLOT(deleteLater()));
    manager->get(getPageRequest);
}
void checkNewArrivalWorker::replyGetPagesFinished(QNetworkReply* reply){
    QString data = QString::fromUtf8(reply->readAll());  //data now holds the HTML, ready to process as needed...
    reply->deleteLater();
    emit finished();
}
After receiving the information I simply processed it straight out of the QString, but I am sure that once you reach this stage you can work out how to use a DOM parser.
I hope this is a sufficient example to help you.
https://stackoverflow.com/questions/3910105