I am a research scientist with broad interests in machine learning and artificial intelligence. My work on large-scale learning and stochastic gradient algorithms has received attention in recent years. I am also known for the DjVu document compression system. I joined Facebook AI Research in March 2015.
Borges and AI. Léon Bottou and Bernhard Schölkopf. https://arxiv.org/abs/2310.01425
We started this work in mid-2022, when AI was already turning into a mainstream topic. Both as a scientist and as a member of society, I was troubled by the ambient confusion between the actual AI technology and the AI of our dreams or nightmares. We seem unable to grasp this technology and its impact without referring to an AI mythology that perhaps starts with Homer's golden maidens and was popularized by modern science fiction.
We therefore decided to interpret the advances of AI through a very different lens: the fiction of Jorge Luis Borges, whose subtly ironic stories illuminate how language works and how it relates to reality. This intellectual exercise turned out to be a very fruitful one, reframing our outlook on AI.
Pointing out the very well-written report Causality for Machine Learning, recently published by Cloudera's Fast Forward Labs. Nisha Muktewar and Chris Wallace must have put a lot of work into this. The report stands out because it devotes a complete section to Causal Invariance and neatly summarizes the purpose of our own Invariant Risk Minimization, with beautiful experimental results.
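For readers who have not seen it, here is a rough sketch of the Invariant Risk Minimization objective, with notation following the original paper (Arjovsky et al., 2019): learn a data representation \(\Phi\) such that the classifier \(w\) placed on top of \(\Phi\) is simultaneously optimal in every training environment \(e\), where \(R^e\) denotes the risk measured in environment \(e\) and \(\mathcal{E}_{\mathrm{tr}}\) the set of training environments.

\[
\min_{\Phi,\, w} \; \sum_{e \in \mathcal{E}_{\mathrm{tr}}} R^e(w \circ \Phi)
\quad \text{subject to} \quad
w \in \operatorname*{arg\,min}_{\bar{w}} R^e(\bar{w} \circ \Phi)
\;\; \text{for all } e \in \mathcal{E}_{\mathrm{tr}} .
\]

The bi-leveled constraint is what enforces invariance: a representation only qualifies if no environment would prefer a different classifier on top of it.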
Alex Peysakhovich and I represent Facebook on the organizing committee of the NYC Data Science Seminar Series. This rotating seminar organized by Columbia, CornellTech, Facebook, Microsoft Research NYC, and New York University has featured a number of prominent speakers.
While scavenging my old emails a couple of weeks ago, I found a copy of an early technical report that not only describes Graph Transformer Networks in a couple of pages but also explains why they are defined the way they are.