complexity of the underlying optimization algorithms in non-trivial ways.
For instance, [[:research:stochastic|Stochastic Gradient Descent (SGD)]] algorithms appear to be mediocre optimization algorithms and yet are shown to perform very well on large-scale learning problems.
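
As a rough illustration of this point, the sketch below shows a plain SGD loop for a regularized linear SVM with hinge loss. It is only an assumption-laden example: the function name sgd_linear_svm, the 1/(lambda*t) step-size schedule, and the parameters lam and epochs are illustrative choices and are not taken from this page or from the papers listed below.

<code python>
import numpy as np

def sgd_linear_svm(X, y, lam=1e-4, epochs=5, seed=0):
    """Hypothetical sketch: plain SGD on a regularized linear SVM (hinge loss).

    Each update looks at a single randomly chosen example, so the cost of one
    step does not depend on the dataset size.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)              # decreasing step size (illustrative choice)
            margin = y[i] * np.dot(w, X[i])
            # Subgradient step on (lam/2)*||w||^2 + max(0, 1 - y_i <w, x_i>)
            w *= (1.0 - eta * lam)
            if margin < 1.0:
                w += eta * y[i] * X[i]
    return w
</code>

The point of the sketch is that each update touches a single example, so the per-iteration cost stays constant as the dataset grows; this is why an optimizer that converges slowly in the optimization sense can still be the better choice when data is abundant.
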
  * NIPS 2007 tutorial "[[:talks/largescale|Large Scale Learning]]".
  
===== Related =====

  * [[:research:stochastic|Stochastic gradient learning algorithms]]
===== Papers =====
  
  
<box 99% orange>
Léon Bottou and Yann LeCun: **On-line Learning for Very Large Datasets**, //Applied Stochastic Models in Business and Industry//, 21(2):137-151, 2005.

[[:papers/bottou-lecun-2004a|more...]]
</box>

<box 99% orange>
Léon Bottou: **Online Algorithms and Stochastic Approximations**, //Online Learning and Neural Networks//, edited by David Saad, Cambridge University Press, Cambridge, UK, 1998.

[[:papers/bottou-98x|more...]]
</box>

<box 99% blue>
Léon Bottou: //**Une Approche théorique de l'Apprentissage Connexionniste: Applications à la Reconnaissance de la Parole**//, Orsay, France, 1991.

[[:papers/bottou-91a|more...]]
</box>
  