complexity of the underlying optimization
algorithms in non-trivial ways.
For instance, [[:research:stochastic|Stochastic Gradient Descent (SGD)]] algorithms
appear to be mediocre optimization algorithms and yet are shown to
[[:projects/sgd|perform extremely well]] on large-scale learning problems.
  
  * NIPS 2007 tutorial "[[:talks/largescale|Large Scale Learning]]".

===== Related =====

  * [[:research:stochastic|Stochastic gradient learning algorithms]]

===== Papers =====
  
[[:papers/bottou-bousquet-2008|more...]]
</box>
  
  
research/largescale.txt · Last modified: 2013/02/25 09:57 by leonb
