Scrapy Framework Crawler Workflow

References:

https://blog.csdn.net/ck784101777/article/details/104468780

https://blog.csdn.net/ck784101777/article/details/104291634

  1. Create the project
scrapy startproject TXmovies    # create a new Scrapy project named TXmovies
cd TXmovies
scrapy genspider txms v.qq.com  # generate a spider named "txms" limited to v.qq.com
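
Running these commands produces the standard Scrapy project skeleton (shown here for orientation; the file names are Scrapy's defaults):

TXmovies/
    scrapy.cfg            # deploy configuration
    TXmovies/
        __init__.py
        items.py          # item definitions (step 3)
        middlewares.py
        pipelines.py      # item pipelines (step 4)
        settings.py       # project settings (step 2)
        spiders/
            __init__.py
            txms.py       # the generated spider (step 4)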
  2. Modify settings.py
ROBOTSTXT_OBEY = False  # do not obey robots.txt

DOWNLOAD_DELAY = 1  # wait 1 second between requests

DEFAULT_REQUEST_HEADERS = {  # default request headers
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.94 Safari/537.36',
}

ITEM_PIPELINES = {  # enable the pipeline; lower values run earlier (range 0-1000)
    'TXmovies.pipelines.TxmoviesPipeline': 300,
}
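
If you prefer to configure file export in settings rather than on the command line (see the last step), Scrapy 2.1+ also supports a FEEDS setting; a minimal sketch, where the output file name is my own choice:

FEEDS = {
    'txms.json': {
        'format': 'json',     # also supports 'csv', 'jsonlines', 'xml', ...
        'encoding': 'utf8',
    },
}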
  3. Define the data to extract (items)

items.py defines what you want to extract (the data structure). Here I extract the movie name and the movie description, so I create two fields. Field() essentially creates a dictionary key with no value yet; the value is filled in once the data has been extracted. The item below can be thought of as {'name': '', 'description': ''}.

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class TxmoviesItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    name = scrapy.Field()
    description = scrapy.Field()
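
To make the dictionary analogy above concrete, here is a small interactive illustration (e.g. in a scrapy shell session inside the project; the sample value is made up):

>>> from TXmovies.items import TxmoviesItem
>>> item = TxmoviesItem()
>>> item['name'] = 'Demo Movie'
>>> dict(item)
{'name': 'Demo Movie'}   # 'description' stays absent until it is assigned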

  4. Write the spider

txms.py

# -*- coding: utf-8 -*-
import scrapy
from ..items import TxmoviesItem


class TxmsSpider(scrapy.Spider):
    name = 'txms'
    allowed_domains = ['v.qq.com']
    start_urls = ['https://v.qq.com/x/bu/pagesheet/list?append=1&channel=cartoon&iarea=1&listpage=2&offset=0&pagesize=30']
    offset = 0

    def parse(self, response):
        lists = response.xpath('//div[@class="list_item"]')
        for i in lists:
            # create a fresh item per entry so earlier results are not overwritten
            item = TxmoviesItem()
            item['name'] = i.xpath('./a/@title').get()
            item['description'] = i.xpath('./div/div/@title').get()
            yield item

        # follow the next page, 30 entries at a time, until offset reaches 120
        if self.offset < 120:
            self.offset += 30
            url = ('https://v.qq.com/x/bu/pagesheet/list?append=1&channel=cartoon'
                   '&iarea=1&listpage=2&offset={}&pagesize=30').format(self.offset)
            yield scrapy.Request(url=url, callback=self.parse)
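
Before running the full crawl, the XPath expressions can be checked interactively with scrapy shell (the URL is the first page from start_urls):

scrapy shell "https://v.qq.com/x/bu/pagesheet/list?append=1&channel=cartoon&iarea=1&listpage=2&offset=0&pagesize=30"
>>> response.xpath('//div[@class="list_item"]/a/@title').getall()        # movie names
>>> response.xpath('//div[@class="list_item"]/div/div/@title').getall()  # descriptions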

pipelines.py

Extracted items are handed to the pipeline, which can process them further, for example by saving them to a database; here we simply print them.

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


class TxmoviesPipeline(object):
    def process_item(self, item, spider):
        print(item)
        return item
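
As a sketch of the "save somewhere" case mentioned above (my own example, not from the referenced posts), a pipeline that appends each item to a JSON-lines file could look like this; remember to register it in ITEM_PIPELINES:

import json


class TxmoviesJsonPipeline(object):
    def open_spider(self, spider):
        # called once when the spider starts
        self.file = open('txms.jl', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # write one JSON object per line
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        # called once when the spider finishes
        self.file.close()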

init.py (a small script to launch the crawl from inside the project instead of typing the command in a terminal)

from scrapy import cmdline

cmdline.execute('scrapy crawl txms'.split())

# To export the results as CSV or JSON, add -o instead (note that
# cmdline.execute() does not return, so run only one of these at a time):
# cmdline.execute('scrapy crawl txms -o txms.csv'.split())
# cmdline.execute('scrapy crawl txms -o txms.json'.split())
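
Equivalently, you can run scrapy crawl txms -o txms.json directly from the project root in a terminal.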