import requests

url = 'https://www.baidu.com/s?wd=123'
head = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36'}
rep = requests.get(url, headers=head)  # send a browser-like User-Agent with the request
print(rep.text)
postData = {
    'username': 'Angela',
    'password': '123456',
}
response = requests.post(url, data=postData)  # the form fields go in the request body
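The snippet above needs a real form endpoint to run against. A minimal runnable sketch, using https://httpbin.org/post (an echo service, not from the original post) in place of a real login URL:

import requests

# httpbin.org/post echoes the submitted form back, which makes it a safe test target
postData = {'username': 'Angela', 'password': '123456'}
response = requests.post('https://httpbin.org/post', data=postData)
print(response.status_code)      # 200 on success
print(response.json()['form'])   # the form fields the server received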
# re-sending the request with a browser User-Agent is the basic anti-scraping workaround
head = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36'}
rep = requests.get(url, headers=head)
Adding other fields such as cookies and referer works the same way; this was also covered in the earlier urllib anti-scraping section.
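As a sketch of that, the extra fields just go into the same headers dict; the Referer and Cookie values below are placeholders, not values from the original post:

import requests

head = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36',
    'Referer': 'https://www.baidu.com/',    # claim we arrived from this page
    'Cookie': 'BAIDUID=placeholder-value',  # placeholder cookie string
}
rep = requests.get('https://www.baidu.com/s?wd=123', headers=head)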
url = 'https://www.baidu.com/'
req = requests.get(url)
req.cookies
<RequestsCookieJar[Cookie(version=0, name='B...  (output omitted)
Cookie information can be saved with the built-in open function in 'w' mode, then read back in when it is needed later.
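A minimal sketch of that idea, assuming a local file named cookies.txt and using the requests.utils helpers to convert the jar to a plain dict for JSON serialization:

import json
import requests

rep = requests.get('https://www.baidu.com/')

# save: turn the CookieJar into a plain dict and write it out with open(..., 'w')
with open('cookies.txt', 'w') as f:
    json.dump(requests.utils.dict_from_cookiejar(rep.cookies), f)

# load: read the dict back and rebuild a CookieJar for later requests
with open('cookies.txt') as f:
    saved = requests.utils.cookiejar_from_dict(json.load(f))
rep2 = requests.get('https://www.baidu.com/s?wd=123', cookies=saved)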
get_cookies = requests.session()
head = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36'}
url = 'https://www.baidu.com/'
get_cookies.get(url, headers=head)
# at this point Baidu has returned cookie information to the session
# the session reuses those cookies automatically, so we can just make the next request
get_cookies.get(url + 's?wd=123', headers=head)
<Response [200]>
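To confirm the session is actually carrying the cookies between requests, one illustrative check (not from the original post) is to print the session's jar as a dict:

print(requests.utils.dict_from_cookiejar(get_cookies.cookies))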
Sending data with requests and handling anti-scraping measures ---------- learning Python web scraping
Original: https://www.cnblogs.com/lcyzblog/p/11269341.html