
jieba Chinese word segmentation in Python

Published: 2015-04-27 16:55:03

http://blog.csdn.net/pipisorry/article/details/45311229

Using jieba for Chinese word segmentation

import jieba

sentences = ["我喜欢吃土豆", "土豆是个百搭的东西", "我不喜欢今天雾霾的北京", "customer service"]
# Uncomment to force these words to be kept as single tokens:
# jieba.suggest_freq('雾霾', True)
# jieba.suggest_freq('百搭', True)
words = [list(jieba.cut(doc)) for doc in sentences]
print(words)

Output:

[['我', '喜欢', '吃', '土豆'],
 ['土豆', '是', '个', '百搭', '的', '东西'],
 ['我', '不', '喜欢', '今天', '雾霾', '的', '北京'],
 ['customer', ' ', 'service']]
[https://github.com/fxsjy/jieba]


