I. How to crawl different types of web pages
1. Static pages: the content is served in full at a fixed URL, so a plain GET on that URL is enough; a minimal sketch follows.
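For reference, a minimal sketch of such a crawl using requests; the URL is a hypothetical placeholder, not a real target site:

import requests

def fetch_static_page(url):
    # A static page's content comes back complete with a single GET,
    # so no browser simulation is needed.
    response = requests.get(url, timeout=10)
    response.raise_for_status()  # fail loudly on 4xx/5xx
    return response.text

# 'https://example.com/page/1' is a placeholder used purely for illustration.
html = fetch_static_page('https://example.com/page/1')
print(html[:200])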
2. Dynamic pages: these come in two flavors. In the first, open DevTools (F12), watch the XHR requests in the Network panel, and find the one whose response carries the content you want; if that request's URL varies with the page number, you can construct the URLs yourself and crawl them one by one. In the second, that request's URL turns out to be fixed, or unrelated to the page number; here you can fall back to simulating browser clicks to load each page before scraping it (see the sketch after the code block below). Click simulation runs slowly, so it is a poor fit for crawling many pages. Code for the constructed-URL case:
# Assumes `requests` (and `time`, if the sleep is enabled) are imported at module level.
def parse(self, response):
    print('parser>>>>>>>>>>>>>>>')
    try:
        self.varint = 1
        while self.varint <= 100:
            # '*' stands in for the real path segment; the page number is
            # appended to build each page's URL.
            url = self.ROOT_URL + '*' + str(self.varint)
            responsenow = requests.get(url)
            self.parser_html(responsenow)
            self.varint = self.varint + 1
            # time.sleep(0.1)
        print('success>>>>>>>>>>>>')
    except Exception:
        print('failed>>>>>>>>>')
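For the second case above, where the URL never changes, the click simulation is commonly done with a browser-automation tool such as Selenium. This is a minimal sketch only, assuming Chrome and a hypothetical 'a.next-page' selector for the pagination button; the real selector has to be found in DevTools:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()             # Chrome is an assumption; any driver works
driver.implicitly_wait(5)               # give dynamically loaded content time to render
driver.get('https://example.com/list')  # hypothetical start URL

for _ in range(10):
    html = driver.page_source           # the fully rendered page, ready to parse
    # ... extract data from `html` here ...
    # 'a.next-page' is a hypothetical selector for the "next page" button.
    driver.find_element(By.CSS_SELECTOR, 'a.next-page').click()

driver.quit()

Each click forces a real page render, which is why this route is markedly slower than constructing URLs or replaying the underlying request directly.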
For dynamic pages, you can also crawl by simulating the client's POST request; the form fields below can be copied from the request details shown in the DevTools Network panel. Code:
def parse(self, response):
    formdata = {
        'pageNo': '',
        'categoryId': '',
        'code': '',
        'pageSize': 10,
        'source': '',
        'requestUri': '*',
        'requestMethod': 'POST'
    }
    print('parser>>>>>>>>>>>>>>>')
    try:
        self.varint = 1
        formdata['source'] = str(2)
        formdata['pageNo'] = str(self.varint)
        while self.varint <= 46:
            responsenow = requests.post(self.ROOT_URL, data=formdata)
            self.parser_html(responsenow)
            self.varint = self.varint + 1
            formdata['source'] = formdata['pageNo']  # this line is where the bug that cost me almost two days lived; details below
            formdata['pageNo'] = str(self.varint)
        print('success>>>>>>>>>>>>')
    except Exception:
        print('failed>>>>>>>>>')
Note: whatever kind of dynamic page you face, you rarely need to dig into how the site is implemented; simply simulating the browser's GET or POST requests is enough to pull down the page data.
II. The problem I ran into
First, my earlier, buggy code:
def parse(self, response):
    formdata = {
        'pageNo': self.varint,   # <-- copies the current *value* of self.varint
        'categoryId': '',
        'code': '',
        'pageSize': 10,
        'source': self.source,   # <-- likewise, a one-time copy of self.source
        'requestUri': '*',
        'requestMethod': 'POST'
    }
    print('parser>>>>>>>>>>>>>>>')
    try:
        self.source = 2
        self.varint = 1
        formdata['source'] = str(2)
        formdata['pageNo'] = str(self.varint)
        while self.varint <= 46:
            responsenow = requests.post(self.ROOT_URL, data=formdata)
            self.parser_html(responsenow)
            self.source = self.varint      # <-- updates the attribute, not formdata
            self.varint = self.varint + 1  # <-- same problem: formdata never changes
        print('success>>>>>>>>>>>>')
    except Exception:
        print('failed>>>>>>>>>')
Note the commented lines above: the error was that formdata was never actually assigned to! Rebinding self.source and self.varint inside the loop changes those attributes, but the dict still holds the plain values copied into it before the loop, so every request was posted with the same pageNo. The working version earlier fixes this by writing to formdata['pageNo'] and formdata['source'] on each iteration.
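The underlying behavior is easy to reproduce in isolation. A minimal sketch (plain Python, no scraping involved) showing that a dict stores the value a variable held at assignment time, so rebinding the variable later never touches the dict:

varint = 1
formdata = {'pageNo': varint}  # the dict stores the value 1, not the name varint

varint = 2                     # rebinds the variable only
print(formdata['pageNo'])      # -> 1 : the dict still holds the old value

formdata['pageNo'] = varint    # the dict entry must be assigned explicitly
print(formdata['pageNo'])      # -> 2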