I am a research scientist with broad interests in machine learning and artificial intelligence. My work on large-scale learning and stochastic gradient algorithms has received attention in recent years. I am also known for the DjVu document compression system. I joined Facebook AI Research in March 2015.
Use the sidebar to navigate this site.
Pointing out the very well-written report Causality for Machine Learning recently published by Cloudera's Fast Forward Labs. Nisha Muktewar and Chris Wallace must have put a lot of work into this. The report stands out because it devotes a complete section to Causal Invariance and neatly summarizes the purpose of our own Invariant Risk Minimization, with beautiful experimental results.
Alex Peysakhovich and I represent Facebook on the organizing committee of the NYC Data Science Seminar Series. This rotating seminar organized by Columbia, CornellTech, Facebook, Microsoft Research NYC, and New York University has featured a number of prominent speakers.
I was digging through my old emails a couple of weeks ago and found a copy of an early technical report that not only describes Graph Transformer Networks in a couple of pages but also explains why they are defined the way they are.
Why settle for 60000 MNIST training examples when you can have eight million, or as many as you like? The MNIST8M dataset was generated using the elastic deformation code originally written for (Loosli, Canu, and Bottou, 2007). Unfortunately, the original MNIST8M files were accidentally deleted from the NEC servers a couple of weeks ago. Instead of regenerating the files, I have repackaged the generation code in a convenient form.
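For readers curious about what this kind of deformation looks like, here is a minimal Python sketch of an elastic distortion in the style of Simard et al. (2003), the family of transformations behind datasets like MNIST8M. This is not the original code from (Loosli, Canu, and Bottou, 2007); the function name and the alpha and sigma parameters are illustrative assumptions.

```python
# Illustrative sketch of an elastic deformation, NOT the original
# generation code. Parameter names and values (alpha, sigma) are assumed.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, alpha=8.0, sigma=4.0, rng=None):
    """Apply a random elastic deformation to a 2-D grayscale image."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape
    # Random per-pixel displacements, smoothed with a Gaussian so that
    # nearby pixels move coherently, then scaled by the intensity alpha.
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    # Resample the original image at the displaced coordinates.
    y, x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.array([y + dy, x + dx])
    return map_coordinates(image, coords, order=1, mode="reflect")
```

Each call with a fresh random state turns one digit into a new plausible training example, which is how a 60000-image dataset can be expanded essentially without bound.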