===== Approximate Optimization =====
  
{{ wall2.png}}
Large-scale machine learning was first approached as an engineering problem. For instance, to leverage a
larger training set, we can use a parallel computer to run a known machine learning algorithm
takes into account the effect of approximate
optimization on learning algorithms.

The analysis shows distinct tradeoffs for the
case of small-scale and large-scale learning problems.
complexity of the underlying optimization
algorithms in non-trivial ways.
For instance, [[:research:stochastic|Stochastic Gradient Descent (SGD)]] algorithms
appear to be mediocre optimization algorithms and yet are shown to
[[:projects/sgd|perform extremely well]] on large-scale learning problems.
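This contrast can be seen in a toy experiment. The sketch below is purely illustrative Python (not code from this site; the synthetic data and step sizes are made-up assumptions): under a fixed budget of N single-example gradient evaluations, one pass of SGD is compared against the single full-batch gradient step that the same budget buys.

```python
import random

random.seed(0)

# Synthetic 1-D least-squares problem (illustrative, not from the papers):
# learn w in y ~ w * x, with true w = 3.
N = 10_000
xs = [random.uniform(-1.0, 1.0) for _ in range(N)]
ys = [3.0 * x + random.gauss(0.0, 0.1) for x in xs]

def loss(w):
    # Average squared-error objective over the training set.
    return sum(0.5 * (w * x - y) ** 2 for x, y in zip(xs, ys)) / N

def sgd_one_pass(w=0.0):
    # One pass over the data: N cheap single-example gradient steps
    # with a decreasing step size (schedule chosen for this toy problem).
    for t, (x, y) in enumerate(zip(xs, ys), start=1):
        lr = 1.0 / (1.0 + 0.1 * t)
        w -= lr * (w * x - y) * x  # gradient of 0.5 * (w*x - y)^2
    return w

def batch_gd_one_step(w=0.0, lr=1.0):
    # The same budget of N gradient evaluations buys only ONE full-batch step.
    grad = sum((w * x - y) * x for x, y in zip(xs, ys)) / N
    return w - lr * grad

w_sgd = sgd_one_pass()
w_gd = batch_gd_one_step()
print(f"SGD, one pass (N example gradients): w = {w_sgd:.3f}, loss = {loss(w_sgd):.4f}")
print(f"Batch GD, one step (same cost):      w = {w_gd:.3f}, loss = {loss(w_gd):.4f}")
```

As an optimizer, each noisy SGD step is far cruder than a full-batch step, yet for a fixed computational budget the many cheap steps reach a far lower training loss, which is the large-scale tradeoff the paragraph above describes.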
  
  
===== Tutorials =====
  
  * NIPS 2007 tutorial "[[:talks/largescale|Large Scale Learning]]".
  
===== Related =====

  * [[:research:stochastic|Stochastic gradient learning algorithms]]
===== Papers =====
  
</box>
  
<box 99% orange>
Léon Bottou and Yann LeCun: **On-line Learning for Very Large Datasets**, //Applied Stochastic Models in Business and Industry//, 21(2):137-151, 2005.

[[:papers/bottou-lecun-2004a|more...]]
</box>
  
<box 99% orange>
Léon Bottou: **Online Algorithms and Stochastic Approximations**, //Online Learning and Neural Networks//, Edited by David Saad, Cambridge University Press, Cambridge, UK, 1998.

[[:papers/bottou-98x|more...]]
</box>
  
<box 99% blue>
Léon Bottou: //**Une Approche théorique de l'Apprentissage Connexionniste: Applications à la Reconnaissance de la Parole**//, Orsay, France, 1991.

[[:papers/bottou-91a|more...]]
</box>
  
research/largescale.1356367631.txt.gz · Last modified: 2012/12/24 11:47 by leonb