An error came up while running a Scrapy spider:

2017-01-01 16:50:41 [scrapy.core.engine] INFO: Spider opened
2017-01-01 16:50:41 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-01 16:50:41 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-01-01 16:50:42 [scrapy.core.engine] DEBUG: Crawled (200)  (referer: None)
2017-01-01 16:50:42 [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'weixin.sogou.com': 
2017-01-01 16:50:42 [scrapy.core.engine] INFO: Closing spider (finished)
2017-01-01 16:50:42 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 208,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 4292,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 1, 1, 8, 50, 42, 264434),
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'offsite/domains': 1,
 'offsite/filtered': 2,
 'request_depth_max': 1,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2017, 1, 1, 8, 50, 41, 766080)}
2017-01-01 16:50:42 [scrapy.core.engine] INFO: Spider closed (finished)

Cause of the error:
The official explanation is that the request's URL conflicts with the domains listed in the spider's allowed_domains attribute, so the request gets filtered out by the offsite middleware.
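
A quick way to confirm this kind of mismatch is Scrapy's url_is_from_any_domain helper, which applies the same domain-suffix rule the offsite middleware relies on (a minimal sketch; the URLs are taken from the log above):

from scrapy.utils.url import url_is_from_any_domain

# With the misspelled domain, the request host does not match and is filtered.
print(url_is_from_any_domain('http://weixin.sogou.com/', ['sougou.com']))  # False -> offsite
# With the correct domain, subdomains like weixin.sogou.com match automatically.
print(url_is_from_any_domain('http://weixin.sogou.com/', ['sogou.com']))   # True  -> allowed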

Going back through the spider's .py file, the Sogou domain had clearly been misspelled as "sougou.com", while the URLs being crawled were of the form "sogou.com/xxxxxx", hence the error.
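
For reference, here is a minimal sketch of the corrected spider; the spider name, start URL, and parse body are placeholders, and the actual fix is only the allowed_domains spelling:

import scrapy

class WeixinSpider(scrapy.Spider):
    name = 'weixin'
    # Fixed: 'sogou.com', not the misspelled 'sougou.com'.
    # Subdomains such as weixin.sogou.com are matched automatically.
    allowed_domains = ['sogou.com']
    start_urls = ['http://weixin.sogou.com/']

    def parse(self, response):
        # Follow-up requests to *.sogou.com now pass the offsite filter.
        for href in response.xpath('//a/@href').extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse)

Alternatively, setting dont_filter=True on an individual Request bypasses the offsite check (and duplicate filtering as well), but fixing the spelling in allowed_domains is the right remedy here.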
