I am a research scientist with broad interests in machine learning and artificial intelligence. My work on large-scale learning and stochastic gradient algorithms has received attention in recent years. I am also known for the DjVu document compression system. I joined Facebook AI Research in March 2015.
Use the sidebar to navigate this site.
Machine learning conferences nowadays are too large for my enjoyment. I made the trip to Singapore for two posters and a talk at the associative memory workshop. I spent my time listening to the morning keynotes, walking briskly through the poster room, and catching up with friends and colleagues from both industry and academia.
On my way back, I ran into Kyunghyun Cho at the airport, and we had a drink while comparing notes on what we had learned. Cho always has great insights; he invented attention mechanisms; he dines with Korean stars. So when he told me “You should tweet that!”, I knew what I had to do.
Two weakly related points.
Léon Bottou and Bernhard Schölkopf, Borges and AI, https://arxiv.org/abs/2310.01425
We started this work in mid-2022, when AI was already turning into a mainstream topic. Both as a scientist and as a member of society, I was troubled by the ambient confusion between the actual AI technology and the AI of our dreams or nightmares. We seem unable to grasp this technology and its impact without appealing to an AI mythology that perhaps begins with Homer's golden maidens and was popularized by modern science fiction.
We therefore decided to interpret the advances of AI through a very different lens: the fiction of Jorge Luis Borges, whose subtly ironic stories illuminate how language works and how it relates to reality. This intellectual exercise turned out to be remarkably fruitful, reframing our outlook on AI:
Pointing out the very well written report Causality for Machine Learning, recently published by Cloudera's Fast Forward Labs. Nisha Muktewar and Chris Wallace must have put a lot of work into it. The report stands out because it devotes a complete section to Causal Invariance and neatly summarizes the purpose of our own Invariant Risk Minimization, with beautiful experimental results.
Alex Peysakhovich and I represent Facebook on the organizing committee of the NYC Data Science Seminar Series. This rotating seminar, organized by Columbia, Cornell Tech, Facebook, Microsoft Research NYC, and New York University, has featured a number of prominent speakers.