complexity of the underlying optimization
algorithms in non-trivial ways.
For instance, [[:research:stochastic|Stochastic Gradient Descent (SGD)]] algorithms
appear to be mediocre optimization algorithms and yet are shown to
[[:projects/sgd|perform extremely well]] on large-scale learning problems.
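To make the point concrete, here is a minimal sketch (not from the page; the function names and the toy problem are illustrative assumptions) of plain SGD fitting a one-parameter least-squares model, updating on one example at a time rather than over the full dataset:

```python
import random

def sgd(grad, w, data, lr=0.01, epochs=10):
    """Plain stochastic gradient descent: one update per training example."""
    for _ in range(epochs):
        random.shuffle(data)          # visit examples in random order
        for x, y in data:
            g = grad(w, x, y)         # gradient on a single example
            w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

# Toy problem (illustrative): fit y = a*x by squared loss on each example.
def grad(w, x, y):
    err = w[0] * x - y
    return [2.0 * err * x]

random.seed(0)
data = [(0.1 * i, 2.0 * (0.1 * i)) for i in range(1, 11)]
w = sgd(grad, [0.0], data, lr=0.1, epochs=50)   # w[0] approaches 2.0
```

Each step uses a noisy single-example gradient, so the iterates wander rather than descend monotonically; per-pass cost stays constant in dataset size, which is the property the text refers to on large-scale problems.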
  
  * NIPS 2007 tutorial "[[:talks/largescale|Large Scale Learning]]".
  
===== Related =====

  * [[:research:stochastic|Stochastic gradient]] algorithms for large scale learning.

===== Papers =====
  
research/largescale.txt ยท Last modified: 2013/02/25 09:57 by leonb