===== Approximate Optimization =====
{{ wall2.png}}

Large-scale machine learning was first approached as an engineering problem. For instance, to leverage a larger training set, we can use a parallel computer to run a known machine learning algorithm, or adapt more advanced numerical methods to optimize a known machine learning objective function. Such approaches rely on the appealing assumption that one can decouple the statistical aspects from the computational aspects of the machine learning problem.
This work shows that this assumption is incorrect, and that giving it up leads to considerably more effective learning algorithms. A new theoretical framework takes into account the effect of approximate optimization on learning algorithms.
The analysis shows distinct tradeoffs for the case of small-scale and large-scale learning problems. Small-scale learning problems are subject to the usual approximation--estimation tradeoff. Large-scale learning problems are subject to a qualitatively different tradeoff involving the computational complexity of the underlying optimization algorithms in non-trivial ways.
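This tradeoff can be written as an excess-error decomposition, following Bottou and Bousquet's analysis (notation below is the paper's, not this page's): with $E(f)$ the expected risk, $f^*$ its minimizer, $f^*_{\mathcal{F}}$ the best function in the chosen family $\mathcal{F}$, $f_n$ the empirical risk minimizer, and $\tilde f_n$ the approximate solution actually returned by the optimizer,

```latex
% Excess error of the approximate solution, split into approximation,
% estimation, and optimization terms:
\mathcal{E}
  = \mathbb{E}\bigl[E(\tilde f_n) - E(f^*)\bigr]
  = \underbrace{\mathbb{E}\bigl[E(f^*_{\mathcal{F}}) - E(f^*)\bigr]}_{\varepsilon_{\mathrm{app}}}
  + \underbrace{\mathbb{E}\bigl[E(f_n) - E(f^*_{\mathcal{F}})\bigr]}_{\varepsilon_{\mathrm{est}}}
  + \underbrace{\mathbb{E}\bigl[E(\tilde f_n) - E(f_n)\bigr]}_{\varepsilon_{\mathrm{opt}}}
```

In small-scale learning the optimization term $\varepsilon_{\mathrm{opt}}$ can be driven to zero, recovering the usual approximation--estimation tradeoff; in large-scale learning the computing-time budget binds first, so all three terms must be balanced.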
For instance, stochastic gradient descent algorithms appear to be mediocre optimization algorithms and yet are shown to perform very well on large-scale learning problems.
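As an illustrative sketch only (not code from this page), here is a minimal constant-step stochastic gradient descent for least-squares linear regression; the function name and parameter values are made up for the example:

```python
import random

def sgd_linear_regression(data, lr=0.01, epochs=50, seed=0):
    """Fit y ~ w*x + b by stochastic gradient descent: one noisy
    gradient step per example instead of one exact step per full pass."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    data = list(data)              # local copy so shuffling stays private
    for _ in range(epochs):
        rng.shuffle(data)          # visit examples in random order
        for x, y in data:
            err = (w * x + b) - y  # residual on a single example
            w -= lr * err * x      # gradient step on half-squared error
            b -= lr * err
    return w, b

# Noise-free synthetic data drawn from y = 2x + 1; w and b approach 2 and 1.
points = [(x / 10.0, 2 * (x / 10.0) + 1) for x in range(-20, 21)]
w, b = sgd_linear_regression(points)
```

Each update touches a single example, so the cost per update is independent of the training-set size — the property that makes such "mediocre" optimizers attractive once the computing-time budget, rather than the data, is the binding constraint.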
===== Tutorials =====

  * NIPS 2007 tutorial "Learning with Large Datasets"
===== Related =====

  * [[:
===== Papers =====
<box 99% orange>
Léon Bottou and Olivier Bousquet:
//The Tradeoffs of Large Scale Learning//,
//Advances in Neural Information Processing Systems//, 20,
MIT Press, Cambridge, MA, 2008.

[[:
</box>

<box 99% orange>
Léon Bottou and Yann LeCun:

[[:
</box>

<box 99% orange>
Léon Bottou:

[[:
</box>

<box 99% blue>
Léon Bottou:

[[:
</box>