
Web scraper (Scrapy: Douban TOP250)

Posted: 2017-07-21 19:20:09 · Views: 297
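The spider below imports `DoubanTop250Item` from the project's `items.py`, which the post does not show. A minimal sketch of what that file would need to contain, with the field names inferred from the keys the spider assigns in `parse()`:

```python
# items.py -- hypothetical reconstruction; field names inferred from the spider code
import scrapy


class DoubanTop250Item(scrapy.Item):
    ranking = scrapy.Field()    # position on the TOP250 list
    name = scrapy.Field()       # movie title
    grade = scrapy.Field()      # rating score
    score_num = scrapy.Field()  # number of ratings
```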
# -*- coding: utf-8 -*-
import scrapy
from douban_top250.items import DoubanTop250Item


class MovieSpider(scrapy.Spider):
    name = "movie"
    header = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.109 Safari/537.36"
    }

    def start_requests(self):

        urls = "https://movie.douban.com/top250"
        yield scrapy.Request(url=urls, headers=self.header)

    def parse(self, response):
        item = DoubanTop250Item()
        info = response.xpath("//*[@id='content']/div/div[1]/ol/li")
        for each in info:
            item["ranking"] = each.xpath("div/div[1]/em/text()").extract()
            item["name"] = each.xpath("div/div[2]/div[1]/a/span[1]/text()").extract()
            item["grade"] = each.xpath("div/div[2]/div[2]/div/span[2]/text()").extract()
            item["score_num"] = each.xpath("div/div[2]/div[2]/div/span[4]/text()").extract()
            yield item
        next_url = response.xpath("//*[@id='content']/div/div[1]/div[2]/span[3]/link/@href").extract()
        if next_url:
            next_url = "https://movie.douban.com/top250" + next_url[0]
            yield scrapy.Request(next_url, headers=self.header)
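The pager `href` the spider extracts is a relative URL, so the code builds the next page by prepending the base address. Plain concatenation happens to work here because the href starts with a query string; `urllib.parse.urljoin` is the more robust way to do the same thing. A small standalone check, using `?start=25&filter=` as a hypothetical example of the relative href:

```python
from urllib.parse import urljoin

base = "https://movie.douban.com/top250"
next_href = "?start=25&filter="  # hypothetical relative href from the pager

# urljoin resolves the relative href against the base URL...
joined = urljoin(base, next_href)
# ...and in this case matches the spider's simple concatenation
assert joined == base + next_href
print(joined)  # https://movie.douban.com/top250?start=25&filter=
```

`urljoin` would also handle hrefs like `/top250?start=25` or absolute URLs correctly, where naive concatenation would produce a malformed address.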


Original: http://www.cnblogs.com/missmissmiss/p/7219185.html
