
Machine Learning Notes (Washington University) - Classification Specialization - Week 6 & Week 7


1. Precision and recall

Precision measures how precise the model is when it shows something as good on the website: of the items predicted positive, what fraction truly are positive.

Recall measures how good the model is at finding all the positive reviews: of the truly positive items, what fraction does it retrieve.

 

                    Predicted y = +1    Predicted y = -1
  True label = +1   true positive       false negative
  True label = -1   false positive      true negative

precision = number of true positives / (number of true positives + number of false positives)

recall = number of true positives / (number of true positives + number of false negatives)

Pessimistic model: predicts positive only when very confident, so it has high precision but low recall.

Optimistic model: predicts positive liberally, so it has high recall but low precision.
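
To make the definitions concrete, here is a minimal Python sketch (my own illustration, not from the course materials) that computes both metrics for labels in {+1, -1}:

    import numpy as np

    def precision_recall(y_true, y_pred):
        # Confusion-matrix counts for labels in {+1, -1}.
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        tp = np.sum((y_pred == 1) & (y_true == 1))    # true positives
        fp = np.sum((y_pred == 1) & (y_true == -1))   # false positives
        fn = np.sum((y_pred == -1) & (y_true == 1))   # false negatives
        return tp / (tp + fp), tp / (tp + fn)

    # Toy example: 4 predicted positives (3 of them correct), 5 true positives.
    y_true = [1, 1, 1, 1, 1, -1, -1, -1]
    y_pred = [1, 1, 1, -1, -1, 1, -1, -1]
    p, r = precision_recall(y_true, y_pred)
    print(p, r)   # 0.75 0.6

In this toy run, 3 of the 4 predicted positives are correct (precision = 3/4) and 3 of the 5 true positives are found (recall = 3/5).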

 

2. Stochastic gradient ascent

Gradient ascent is slow because every update requires a full pass over the data.

Stochastic gradient ascent uses only a small subset of the data (often a single point) for each update.

Stochastic gradient ascent converges faster than batch gradient ascent; however, it is very sensitive to parameters such as the step size.

The gradient is the direction of steepest ascent, but any direction that goes uphill is useful for ascent.

Stochastic gradient ascent works because the single-point gradients of most data points, on average, point in an uphill direction.

Toward the end, stochastic gradient ascent oscillates a bit (noisily) around the optimum rather than converging to it.
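
The course applies this to logistic regression; a minimal NumPy sketch of the one-point-at-a-time update (function and variable names are my own assumptions, not the course code) could look like:

    import numpy as np

    def stochastic_gradient_ascent(X, y, step_size=0.1, n_passes=10):
        # Maximize sum_i log sigmoid(y_i * w.x_i) for labels y in {+1, -1},
        # using the gradient of one data point per update.
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(n_passes):
            for i in np.random.permutation(n):   # visit points in random order
                margin = y[i] * X[i].dot(w)
                # d/dw log sigmoid(y_i * w.x_i) = y_i * x_i / (1 + exp(y_i * w.x_i))
                w += step_size * y[i] * X[i] / (1.0 + np.exp(margin))
        return w

Each update costs O(d) instead of O(nd), which is why one pass of stochastic updates is far cheaper than a single batch gradient step.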

Issues:

1. Systematic order in the data can introduce significant bias.

  • shuffle the data before running stochastic ascent

2. If the step size is too small, convergence takes a long time; if it is too large, the updates oscillate wildly and behave erratically.

  • A step size that decreases with the iteration number is very important (e.g. divided by the iteration count).

3. Stochastic gradient ascent never fully converges, so do not trust the last coefficients.

  • Output the average weight vector instead, (1/T)(w_1 + ... + w_T), as in the sketch below.
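
Putting the three fixes together (shuffling each pass, a step size decaying as 1/t, and returning the averaged weights), a hypothetical variant of the sketch above:

    import numpy as np

    def sga_averaged(X, y, step_size0=0.5, n_passes=10):
        # Stochastic gradient ascent for logistic regression with the three fixes.
        n, d = X.shape
        w = np.zeros(d)
        w_sum = np.zeros(d)
        t = 0
        for _ in range(n_passes):
            for i in np.random.permutation(n):   # fix 1: shuffle the data
                t += 1
                eta = step_size0 / t             # fix 2: step size divided by iteration
                margin = y[i] * X[i].dot(w)
                w += eta * y[i] * X[i] / (1.0 + np.exp(margin))
                w_sum += w                       # fix 3: accumulate every iterate
        return w_sum / t                         # return the average, not the last w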


Original: http://www.cnblogs.com/climberclimb/p/6875889.html
