
[Notes] Deep Learning, NLP, and Representations


Chinese translation: 深度学习、自然语言处理和表征方法

English original: Deep Learning, NLP, and Representations

1: On universality (a network with a single hidden layer can approximate any function, given enough hidden units): it’s true, essentially, because the hidden layer can be used as a lookup table.
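The lookup-table view is easy to verify directly: multiplying a one-hot input by the hidden layer’s weight matrix simply selects one row of that matrix. A minimal NumPy sketch (all sizes and weights here are invented for illustration):

    import numpy as np

    # Hypothetical sizes: vocabulary of 5 words, 3-dimensional hidden layer.
    vocab_size, embed_dim = 5, 3
    W = np.random.randn(vocab_size, embed_dim)  # hidden-layer weight matrix

    # Multiplying a one-hot vector by W...
    one_hot = np.zeros(vocab_size)
    one_hot[2] = 1.0
    via_matmul = one_hot @ W

    # ...is identical to indexing row 2 directly, i.e. a table lookup.
    via_lookup = W[2]
    assert np.allclose(via_matmul, via_lookup)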

2: Word embeddings;

3: It seems natural for a network to make words with similar meanings have similar vectors.
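Closeness between word vectors is typically measured with cosine similarity. A toy sketch with invented 3-dimensional vectors, assuming training has placed “dog” and “cat” near each other and “car” elsewhere:

    import numpy as np

    def cosine_similarity(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    # Toy vectors (made up for illustration): words with similar meanings
    # should end up close together, unrelated words farther apart.
    dog = np.array([0.8, 0.3, 0.1])
    cat = np.array([0.75, 0.35, 0.05])
    car = np.array([0.1, 0.9, 0.7])

    print(cosine_similarity(dog, cat))  # high: similar meanings
    print(cosine_similarity(dog, car))  # lower: unrelated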

4: You’ve seen all the words that you understand before, but you haven’t seen all the sentences that you understand before. So too with neural networks.

5: Word embeddings exhibit an even more remarkable property: analogies between words seem to be encoded in the difference vectors between words.
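The classic example is king − man + woman ≈ queen, which falls out of plain vector arithmetic. A toy sketch with hand-picked 2-d vectors, chosen so the analogy holds exactly (real embeddings only hold it approximately):

    import numpy as np

    # Toy 2-d embeddings, invented so the analogy is exact.
    emb = {
        "king":  np.array([0.9, 0.8]),
        "queen": np.array([0.9, 0.2]),
        "man":   np.array([0.1, 0.8]),
        "woman": np.array([0.1, 0.2]),
    }

    # king - man + woman should land near queen.
    target = emb["king"] - emb["man"] + emb["woman"]
    nearest = min(emb, key=lambda w: np.linalg.norm(emb[w] - target))
    print(nearest)  # "queen"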

6: This general tactic – learning a good representation on a task A and then using it on a task B – is one of the major tricks in the Deep Learning toolbox. It goes by different names depending on the details: pretraining, transfer learning, and multi-task learning. One of the great strengths of this approach is that it allows the representation to learn from more than one kind of data.

There’s a counterpart to this trick. Instead of learning a way to represent one kind of data and using it to perform multiple kinds of tasks, we can learn a way to map multiple kinds of data into a single representation!
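A minimal sketch of the “learn on task A, reuse on task B” tactic. Here W_embed stands in for an embedding matrix pretrained on task A, and the bag-of-embeddings featurizer is a deliberately simple illustration, not the setup of any particular paper:

    import numpy as np

    # Hypothetical: W_embed was trained on task A (e.g. language modelling).
    vocab_size, embed_dim = 10000, 50
    W_embed = np.random.randn(vocab_size, embed_dim)  # stand-in for pretrained weights

    def featurize(word_ids):
        # Task B reuses the representation learned on task A: look up the
        # pretrained vectors and pool them into a single feature vector.
        return W_embed[word_ids].mean(axis=0)

    # A new, task-B-specific classifier is then trained on these features,
    # while W_embed can be kept frozen or fine-tuned.
    features = featurize(np.array([12, 403, 7]))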

7: Shared representations:

(1) Bilingual word embeddings (a sketch of the idea follows this list);

(2) Embed images and words in a single representation;
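One way to realize item (1) is to learn a projection that sends one language’s word vectors into the other language’s space, using known translation pairs as supervision. The least-squares formulation below is an assumption for illustration, not necessarily the training method of the cited bilingual-embedding work:

    import numpy as np

    rng = np.random.default_rng(0)
    en = rng.standard_normal((100, 50))  # English word vectors (synthetic)
    # Paired "Chinese" vectors: a hidden linear map of en, plus noise.
    zh = en @ rng.standard_normal((50, 50)) + 0.01 * rng.standard_normal((100, 50))

    # Least-squares fit of a projection en -> zh from the known pairs.
    W, *_ = np.linalg.lstsq(en, zh, rcond=None)

    # After projection, a word and its translation should sit close together.
    projected = en @ W
    print(np.linalg.norm(projected[0] - zh[0]))  # small residual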

8: By merging sequences of words, a merge module A (as in recursive neural networks) takes us from representing words to representing phrases or even representing whole sentences! And because we can merge together different numbers of words, we don’t have to have a fixed number of inputs.
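A minimal sketch of such a merge module, assuming a single weight matrix W_merge (invented here) that maps two child vectors back to one vector of the same size, so the operation can be applied recursively over a sentence of any length:

    import numpy as np

    embed_dim = 4
    W_merge = np.random.randn(embed_dim, 2 * embed_dim)  # illustrative weights

    def merge(left, right):
        # Concatenate the two child vectors and project back to embed_dim,
        # so the output can itself be merged again (recursion over the tree).
        return np.tanh(W_merge @ np.concatenate([left, right]))

    # "the cat sat": merge word vectors pairwise; any number of inputs works.
    the, cat, sat = (np.random.randn(embed_dim) for _ in range(3))
    phrase = merge(merge(the, cat), sat)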


Source: http://www.cnblogs.com/CheeseZH/p/4369983.html
