
jieba word segmentation: Liaozhai

Posted: 2020-11-15 11:04:11
import io
import jieba

# Read the full text of Liaozhai (UTF-8 encoded plain text).
txt = io.open("liaozhai.txt", "r", encoding="utf-8").read()

# Segment the text into a list of words with jieba.
words = jieba.lcut(txt)

# Count occurrences of each word, skipping single-character tokens
# (mostly punctuation and function words).
counts = {}
for word in words:
    if len(word) == 1:
        continue
    counts[word] = counts.get(word, 0) + 1

# Sort by frequency in descending order and print the 15 most common words.
items = list(counts.items())
items.sort(key=lambda x: x[1], reverse=True)
for i in range(15):
    word, count = items[i]
    print("{0:<10}{1:>5}".format(word, count))
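The same tally-and-rank step can also be written with collections.Counter from the standard library. The following is a minimal equivalent sketch, not part of the original post; it assumes the same liaozhai.txt file and keeps the same top-15 cut-off as the script above.

import jieba
from collections import Counter

with open("liaozhai.txt", "r", encoding="utf-8") as f:
    text = f.read()

# Counter does the counting; the generator expression filters out
# one-character tokens, as in the script above.
counter = Counter(w for w in jieba.lcut(text) if len(w) > 1)

# most_common(15) returns the 15 highest-frequency (word, count) pairs.
for word, count in counter.most_common(15):
    print("{0:<10}{1:>5}".format(word, count))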


Original post: https://www.cnblogs.com/1113334study/p/13975905.html
