I learned why Request.Browser.Crawler is always false in C# (http://www.digcode.com/default.aspx?page=ed51cde3-d979-4daf-afae-fa6192562ea9&article=bc3a7a4f-f53e-4f88-8e9c-c9337f6c05a0).
Does anyone use some method of dynamically updating the crawler list, so that Request.Browser.Crawler actually becomes useful?
Posted on 2009-01-10 23:51:42
I've been happy with the results from Ocean's Browsercaps. It supports crawlers that Microsoft's configuration files fail to detect. It will even parse out which version of a crawler is hitting your site, not that I really need that level of detail.
Posted on 2009-01-10 21:40:49
You can check against Request.UserAgent (with a regular expression).
Peter Bromberg wrote a good article on writing an ASP.NET Request Logger and Crawler Killer.
Here is the method he uses in his Logger class:
public static bool IsCrawler(HttpRequest request)
{
    // set the next line to "bool isCrawler = false;" to use this to deny certain bots
    bool isCrawler = request.Browser.Crawler;
    // Microsoft doesn't properly detect several crawlers
    if (!isCrawler)
    {
        // put any additional known crawlers in the Regex below
        // you can also use this list to deny certain bots instead, if desired:
        // just set "bool isCrawler = false;" as the first line in the method
        // and only keep the ones you want to deny in the following Regex list
        Regex regEx = new Regex("Slurp|slurp|ask|Ask|Teoma|teoma");
        isCrawler = regEx.Match(request.UserAgent).Success;
    }
    return isCrawler;
}

https://stackoverflow.com/questions/431765
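On the dynamic-list question: one approach is to keep the crawler substrings in an external list (a file or database row that can be updated without redeploying) and compile them into a single case-insensitive regex at load time. A minimal sketch in Python for illustration; the function names and the pattern entries here are assumptions, not code from the thread:

```python
import re

# Hypothetical crawler list; in practice this would be loaded from a file
# or database that can be updated without redeploying the application.
CRAWLER_PATTERNS = ["slurp", "ask", "teoma", "googlebot", "bingbot"]

def build_crawler_regex(patterns):
    # Combine the patterns into one alternation; IGNORECASE removes the
    # need for duplicate entries like "Slurp|slurp" in the snippet above.
    return re.compile("|".join(re.escape(p) for p in patterns), re.IGNORECASE)

def is_crawler(user_agent, regex=build_crawler_regex(CRAWLER_PATTERNS)):
    # Treat a missing User-Agent header as a non-crawler request
    return bool(user_agent and regex.search(user_agent))
```

Rebuilding the regex whenever the list changes keeps per-request matching cheap, which is the same trade-off the C# snippet makes with its hard-coded pattern.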