I am a research scientist with broad interests in machine learning and artificial intelligence. My work on large-scale learning and stochastic gradient algorithms has received attention in recent years. I am also known for the DjVu document compression system. I joined Facebook AI Research in March 2015.
Use the sidebar to navigate this site.
A page has been allocated for my segment of the NIPS 2007 Tutorials. The second part of the tutorial Learning with Large Datasets was given by Alex Gray. Alex had to replace Andrew Moore on short notice because airplane delays conspired against our initial plans. The page contains the slides and a video recording of the lecture I gave at Microsoft Research a few days after NIPS.
During the 4th Annual Gala of the New York Academy of Sciences, I became one of the happy winners of the first Blavatnik Award for Young Scientists. The other finalists were very impressive. Choosing the winners must have been difficult. Leonard Blavatnik told me he attended the Nobel ceremony a few years ago and thought that something similar should be done in New York for younger scientists. Apparently he plans to fund a similar award every year.
The talks page contains pointers to my most significant lectures. Slides are available in both PDF and DjVu formats.
You can now download fast stochastic gradient optimizers for linear Support Vector Machines (SVMs) and Conditional Random Fields (CRFs). Stochastic gradient descent has historically been associated with back-propagation in multilayer neural networks, where the optimization problems are nonlinear, nonconvex, and can be very difficult. It is therefore useful to see how stochastic gradient descent performs on simple problems that are linear and convex. The benchmarks are very clear!
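For readers curious what this looks like in practice, here is a minimal Python sketch of plain stochastic gradient descent on an L2-regularized hinge loss, i.e. a linear SVM. The synthetic data, learning-rate schedule, and regularization constant are illustrative assumptions for this sketch, not the code or settings of the released optimizers.

import numpy as np

def sgd_linear_svm(X, y, lam=1e-4, epochs=5):
    # Plain SGD on the L2-regularized hinge loss: lam/2*||w||^2 + max(0, 1 - y*w.x).
    # X: (n, d) features; y: labels in {-1, +1}; lam is an illustrative value.
    n, d = X.shape
    w = np.zeros(d)
    t = 1
    for _ in range(epochs):
        for i in np.random.permutation(n):
            eta = 1.0 / (lam * t)              # decreasing 1/(lambda*t) step size
            if y[i] * np.dot(w, X[i]) < 1:     # example violates the margin
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                              # only the regularizer contributes
                w = (1 - eta * lam) * w
            t += 1
    return w

# Illustrative usage on synthetic linearly separable data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = np.sign(X @ rng.normal(size=20))
w = sgd_linear_svm(X, y)
print("training accuracy:", np.mean(np.sign(X @ w) == y))

Each update touches a single randomly chosen example, which is what makes the method attractive on large datasets: the cost per iteration does not grow with the training set size.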