
Using regular expressions to extract click counts, and refactoring into functions

Posted: 2018-04-11 14:30:21

1. Use a regular expression to check whether an email address is valid.

# Validate an email address
import re
r = r'^[a-zA-Z0-9_-]{6,12}@[a-zA-Z0-9]+(\.[a-zA-Z0-9]+){1,4}$'
e = '123456789@qq.com'
if re.match(r, e):
    print(re.match(r, e).group(0))
else:
    print('error')

2. Use a regular expression to find all phone numbers.

# Find all phone numbers
import re
s = '''版权所有:广州商学院 地址:广州市黄埔区九龙大道206号 学校办公室:020-82876130 招生电话:020-82872773 粤公网安备 44011602000060号    粤ICP备15103669号'''
telePhones = re.findall(r'(\d{3,4})-(\d{6,8})', s)
print(telePhones)
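Note that when the pattern contains capturing groups, re.findall returns a list of tuples of the groups rather than the whole matches. A minimal illustration (the sample text is an assumption for demonstration):

```python
import re

text = '学校办公室:020-82876130 招生电话:020-82872773'
# With capturing groups, findall returns (area code, number) tuples
print(re.findall(r'(\d{3,4})-(\d{6,8})', text))
# Without groups, findall returns the full matched numbers
print(re.findall(r'\d{3,4}-\d{6,8}', text))
```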

3. Use a regular expression to split English text into words: re.split(pattern, news)

import re
news = '''Earthquake early warning detection is more effective for minor quakes than major ones.
This is according to a new study from the United States Geological Survey.
Seismologists modelled ground shaking along California's San Andreas Fault, where an earthquake of magnitude 6.5 or more is expected within 30 years.
They found that warning time could be increased for residents if they were willing to tolerate a number of "false alarms" for smaller events.'''
new = re.split(r"[\s,.']+", news)
print(new)
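Even with a + quantifier on the character class, a delimiter at the very end of the text (such as the final period) leaves an empty string in the result, so it is worth filtering those out. A small sketch, using a made-up sample sentence:

```python
import re

s = 'Minor quakes, not major ones.\nThat is the finding.'
words = re.split(r"[\s,.']+", s)
words = [w for w in words if w]  # drop the empty string left by the trailing '.'
print(words)
```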

4. Use a regular expression to extract the news ID.

import re
url = 'http://news.gzcc.cn/html/2018/xiaoyuanxinwen_0404/9183.html'
n = re.match(r'http://news.gzcc.cn/html/2018/xiaoyuanxinwen_0404/(.*)\.html', url).group(1)
print(n)

5. Build the Request URL for the click count.

6. Fetch the click count.

7. Wrap steps 4, 5 and 6 into a function: def getClickCount(newsUrl):
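Steps 5 and 6 can be sketched as two small helpers, assuming the oa.gzcc.cn counter API format used in the full program further down (the response is a JavaScript snippet ending in .html('NNN');). The names makeClickUrl and parseClickCount are hypothetical, chosen here for illustration:

```python
import re

def makeClickUrl(newsUrl):
    # Step 5: extract the news id (as in step 4) and build the Request URL
    newsId = re.search(r'/(\d+)\.html$', newsUrl).group(1)
    return 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsId)

def parseClickCount(apiText):
    # Step 6: the API returns JavaScript like "$('#hits').html('122');"
    return int(apiText.split('.html')[-1].lstrip("('").rstrip("');"))

print(makeClickUrl('http://news.gzcc.cn/html/2018/xiaoyuanxinwen_0404/9183.html'))
print(parseClickCount("$('#hits').html('122');"))
```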

8. Wrap the news-detail scraping code into a function: def getNewDetail(newsUrl):

import requests
from bs4 import BeautifulSoup
from datetime import datetime

url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
res = requests.get(url)
res.encoding = 'utf-8'
soup = BeautifulSoup(res.text, 'html.parser')

def getClickCount(newsUrl):
    # Build the click-count Request URL from the news id in the detail URL
    newsId = newsUrl.split('/')[-1].rstrip('.html')
    clickUrl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsId)
    # The API returns JavaScript like "...html('122');" -- parse out the number
    rest = requests.get(clickUrl).text.split('.html')[-1].lstrip("('").rstrip("');")
    print("Click-count request URL:", clickUrl)
    print("Click count:", rest)

def getNewDetail(pageUrl):
    res = requests.get(pageUrl)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    for news in soup.select('li'):
        if len(news.select('.news-list-title')) > 0:
            t1 = news.select('.news-list-title')[0].text
            d1 = news.select('.news-list-description')[0].text
            a1 = news.select('a')[0].attrs['href']

            resd = requests.get(a1)
            resd.encoding = 'utf-8'
            soupd = BeautifulSoup(resd.text, 'html.parser')
            c1 = soupd.select('#content')[0].text
            info = soupd.select('.show-info')[0].text
            print("News title:", t1)
            print("News link:", a1)
            print("News body:", c1)
            # The show-info line starts with '发布时间:YYYY-MM-DD HH:MM:SS'
            time = info[0:24].lstrip('发布时间:')
            dt = datetime.strptime(time, '%Y-%m-%d %H:%M:%S')
            print("Publication time:", dt)
            author = info[info.find('作者'):].split()[0].lstrip('作者:')
            fromwhere = info[info.find('来源'):].split()[0].lstrip('来源:')
            photo = info[info.find('摄影'):].split()[0].lstrip('摄影:')
            print("Author:", author)
            print("Source:", fromwhere)
            print("Photographer:", photo)
            getClickCount(a1)

def getPage(url):
    # The '.a1' element holds the total count, e.g. '1234条'; 10 news items per page
    return int(soup.select('.a1')[0].text.rstrip('条')) // 10 + 1

def getlist(url):
    res = requests.get(url)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    for i in soup.select('li'):
        if len(i.select('.news-list-title')) > 0:
            place = i.select('.news-list-info')[0].contents[1].text  # source
            title = i.select('.news-list-title')[0].text  # title
            description = i.select('.news-list-description')[0].text  # description
            detailurl = i.select('a')[0].attrs['href']  # link
            print("Source:" + place)
            print("News title:" + title)
            print("News description:" + description)
            print("News link:" + detailurl)

def getall(url):
    for num in range(2, getPage(url) + 1):
        listpageurl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(num)
        getlist(listpageurl)
        getNewDetail(listpageurl)

getall(url)

 

Original: https://www.cnblogs.com/cs007/p/8794596.html