
thread pool



import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}  # note: as_completed yields futures in completion order, which may differ from submission order; if you need results in submission order, skip as_completed and call future.result() on each future in the order it was submitted
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))
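
As the inline comment above notes, as_completed returns futures as they finish, not in the order they were submitted. Below is a minimal sketch of the order-preserving alternative it mentions: keep the futures in a list and call future.result() on each one in submission order. It reuses the URLS list and load_url function defined above.

import concurrent.futures

# Reuses URLS and load_url from the example above. Futures are kept in a
# list so results can be read back in the same order the URLs were submitted.
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    futures = [executor.submit(load_url, url, 60) for url in URLS]
    for url, future in zip(URLS, futures):
        try:
            data = future.result()  # blocks until this particular future finishes
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))

executor.map(load_url, URLS, [60] * len(URLS)) would also return results in submission order, though it re-raises the first exception when iterated rather than letting you handle each URL's error separately.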

 


Original: https://www.cnblogs.com/buxizhizhoum/p/13690746.html
