
Winter Vacation Big Data Study Notes 14

Date: 2020-02-15 21:22:58

  Today I wrote a Python script for crawling the letters on the Capital Window (首都之窗) site. The tutorials our teacher provided were all in Java, but over the past two days I had only studied Python web scraping, so I went with Python directly.

  Once I started studying the page source of the Capital Window site, I ran into two awkward problems. First, when you jump to the next page of the letter list, the URL does not change; it stays http://www.beijing.gov.cn/hudong/hdjl/com.web.search.replyMailList.flow. This is a thorny issue, because the pages I had crawled before changed their URL visibly when moving to the next or previous page (for example /page_1 becoming /page_2), so here I could not fetch the data by varying the URL. Second, navigation to a letter's detail page is not done with an a tag; the URL is loaded dynamically with JavaScript, which means I could not obtain the detail URLs just by scraping the page source.

  I searched online for quite a while, and the end result boiled down to one sentence: F12 (the browser developer tools) can solve 90% of scraping problems; the remaining 10% needs other methods.

  Going back to F12 and observing carefully, I found that the URL of a letter's detail page consists of three parts. The first part is fixed: http://www.beijing.gov.cn/hudong/hdjl/. The second part comes in three variants: complaints use com.web.complain.complainDetail.flow?, suggestions use com.web.suggest.suggesDetail.flow?, and consultations use com.web.consult.consultDetail.flow?. The third part is originalId=[the ID of each letter]. So I can crawl the three letter types separately; a small sketch of assembling these detail URLs follows below.

  With the URL structure figured out, the next question was how to obtain those URLs. Since there are only three types, they can simply be enumerated, but the letter IDs can only be obtained by crawling, so the problem of getting the URLs became the problem of getting the letter IDs. Each list page's source contains the IDs of all letters on that page, so the problem turned into: how do I move to the next page? At this point I noticed a response in the Network tab that appeared every time I clicked "next page", and sure enough, it also contained the page's letter data. I examined its headers, clicked through a few more pages, compared several requests, and found the pattern: I could mimic its request payload, specifying the letter type, the starting offset, and the number of letters shown per page. By varying these three values I get the response directly, and the response contains the basic information of the letters.
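  As a quick illustration of the three-part URL structure described above, here is a minimal sketch that assembles a detail-page URL from a letter type and an originalId. The helper name build_detail_url, the DETAIL_FLOWS mapping keys, and the example ID are made up for illustration; only the URL fragments themselves come from the observations above.

# Minimal sketch: assemble a detail-page URL from the three observed parts.
# build_detail_url and the example ID are illustrative only.
BASE = "http://www.beijing.gov.cn/hudong/hdjl/"
DETAIL_FLOWS = {
    "tousu": "com.web.complain.complainDetail.flow",   # complaint
    "jianyi": "com.web.suggest.suggesDetail.flow",     # suggestion
    "zixun": "com.web.consult.consultDetail.flow",     # consultation
}

def build_detail_url(letter_type, original_id):
    # letter_type is one of the three keys above; original_id is the letter's ID.
    return BASE + DETAIL_FLOWS[letter_type] + "?originalId=" + original_id

# e.g. build_detail_url("zixun", "AH200213xxxx") -> detail page of one consultation letter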

  With that, all the problems with crawling the list pages were solved, and the basic information of the letters could be retrieved normally. Then, using the extracted letter IDs, I locate each letter's detail page, match the relevant fields with XPath rules, and save them into the database; a rough sketch of this step is shown below.
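  The following sketch shows the shape of that step: fetch one detail page, pull a couple of fields with XPath, and insert them into a local SQLite table. The function name save_letter, the XPath class names, the letters.db file, and the table schema are all assumptions for illustration; the real XPath rules have to be read from the page in F12.

# Rough sketch of the detail-page step; XPath rules and the table schema are placeholders.
import sqlite3
import requests
from lxml import etree

def save_letter(detail_url):
    html = requests.get(detail_url, timeout=10).text
    tree = etree.HTML(html)
    # Placeholder XPath rules; check the actual class names in the browser dev tools.
    title = "".join(tree.xpath("//div[contains(@class, 'o-font4')]//text()")).strip()
    content = "".join(tree.xpath("//div[contains(@class, 'text-muted')]//text()")).strip()

    conn = sqlite3.connect("letters.db")
    conn.execute("CREATE TABLE IF NOT EXISTS letters (url TEXT, title TEXT, content TEXT)")
    conn.execute("INSERT INTO letters VALUES (?, ?, ?)", (detail_url, title, content))
    conn.commit()
    conn.close()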

  During today's actual run, however, limited by my network speed it took more than four hours to store the complaint-type letters into the database; fortunately nothing went seriously wrong. The code is attached below.

 

# import json
import requests
from fake_useragent import UserAgent
import time


def url_openzixun():
    # Crawl the consultation list: post the page condition to the list API,
    # append each page of results to zixunall.txt, and collect every letter ID.
    begin = 0
    count = 6          # letters per page
    original_id = []
    page = 1
    while page < 1671:
        url = "http://www.beijing.gov.cn/hudong/hdjl/com.web.search.mailList.replyMailList.biz.ext"
        payloadData = {
            "PageCond/begin": begin,
            "PageCond/length": count,
            "PageCond/isCount": "true",
            "keywords": "",
            "orgids": "",
            "startDate": "",
            "endDate": "",
            "letterType": "2",
            "letterStatue": "1"
        }
        Header = {
            "Host": "www.beijing.gov.cn",
            "Content-Type": "text/json",
            "Referer": "http://www.beijing.gov.cn/hudong/hdjl/com.web.search.replyMailList.flow",
            "User-Agent": UserAgent().random,
            "Accept": "application/json, text/javascript, */*; q=0.01"}
        res = requests.post(url, json=payloadData, headers=Header)
        maillist = res.json()["mailList"]
        page += 1
        with open("zixunall.txt", "a+") as p:
            p.write(str(maillist) + "\n")
        print(maillist)
        for m in maillist:
            original_id.append(m.get("original_id"))  # letter ID field in the response
        begin = begin + count
        time.sleep(0.5)

    return original_id


def url_openjianyi():
    # Crawl the suggestion list; same flow as above, output goes to jianyiall.txt.
    begin = 0
    count = 6
    original_id = []
    page = 1
    while page < 2316:
        url = "http://www.beijing.gov.cn/hudong/hdjl/com.web.search.mailList.replyMailList.biz.ext"
        payloadData = {
            "PageCond/begin": begin,
            "PageCond/length": count,
            "PageCond/isCount": "true",
            "keywords": "",
            "orgids": "",
            "startDate": "",
            "endDate": "",
            "letterType": "2",
            "letterStatue": "2"
        }
        Header = {
            "Host": "www.beijing.gov.cn",
            "Content-Type": "text/json",
            "Referer": "http://www.beijing.gov.cn/hudong/hdjl/com.web.search.replyMailList.flow",
            "User-Agent": UserAgent().random,
            "Accept": "application/json, text/javascript, */*; q=0.01"}
        res = requests.post(url, json=payloadData, headers=Header)
        maillist = res.json()["mailList"]
        page += 1
        with open("jianyiall.txt", "a+") as p:
            p.write(str(maillist) + "\n")
        print(maillist)
        for m in maillist:
            original_id.append(m.get("original_id"))
        begin = begin + count
        time.sleep(0.5)

    return original_id


def url_opentousu():
    # Crawl the complaint list; same flow as above, output goes to tousuall.txt.
    begin = 0
    count = 6
    original_id = []
    page = 1
    while page < 1600:
        url = "http://www.beijing.gov.cn/hudong/hdjl/com.web.search.mailList.replyMailList.biz.ext"
        payloadData = {
            "PageCond/begin": begin,
            "PageCond/length": count,
            "PageCond/isCount": "true",
            "keywords": "",
            "orgids": "",
            "startDate": "",
            "endDate": "",
            "letterType": "2",
            "letterStatue": "3"
        }
        Header = {
            "Host": "www.beijing.gov.cn",
            "Content-Type": "text/json",
            "Referer": "http://www.beijing.gov.cn/hudong/hdjl/com.web.search.replyMailList.flow",
            "User-Agent": UserAgent().random,
            "Accept": "application/json, text/javascript, */*; q=0.01"}
        res = requests.post(url, json=payloadData, headers=Header)
        maillist = res.json()["mailList"]
        page += 1
        with open("tousuall.txt", "a+") as p:
            p.write(str(maillist) + "\n")
        print(maillist)
        for m in maillist:
            original_id.append(m.get("original_id"))
        begin = begin + count
        time.sleep(0.5)

    return original_id


if __name__ == "__main__":
    # original_id = url_openzixun()
    # for each in original_id:
    #     with open("zixun.txt", "a+") as p:
    #         p.write(each + "\n")
    # original_id = url_openjianyi()
    # for each in original_id:
    #     with open("jianyi.txt", "a+") as p:
    #         p.write(each + "\n")
    original_id = url_opentousu()
    for each in original_id:
        with open("tousu.txt", "a+") as p:
            p.write(each + "\n")

 


Original post: https://www.cnblogs.com/YXSZ/p/12310059.html
