Bubeck is the author of the book Convex optimization: Algorithms and complexity (2015).[6] He has also served on the editorial boards and program committees of several scientific journals and conferences, including the Journal of the ACM[7] and Neural Information Processing Systems (NeurIPS), and was program committee chair for the 2018 Conference on Learning Theory (COLT).[8]
In 2023, Bubeck and his collaborators published a paper that claimed to observe "sparks of artificial general intelligence" in an early version of GPT-4, a large language model developed by OpenAI. The paper presented examples of GPT-4 performing tasks across various domains and modalities, such as mathematics, coding, vision, medicine, and law. The paper sparked widespread interest and debate in the scientific community and the popular media, as it challenged the conventional understanding of learning and cognition in AI systems.[9][10][11][12][13][14] Bubeck also investigated the potential use of GPT-4 as an AI chatbot for medicine in a paper that evaluated the strengths, weaknesses, and ethical issues of relying on such a tool for medical purposes.[15]
In October 2024, Bubeck left Microsoft to join OpenAI.[16]
Honors and awards
Bubeck has received numerous honors and awards for his work, including the Alfred P. Sloan Research Fellowship in Computer Science in 2015,[17] and Best Paper Awards at the Conference on Learning Theory (COLT) in 2016,[18] Neural Information Processing Systems (NeurIPS) in 2018 and 2021, and the ACM Symposium on Theory of Computing (STOC) in 2023.[19][20] He has also received the Jacques Neveu prize for the best French PhD in Probability/Statistics,[21] the AI prize for a French PhD in Artificial Intelligence,[22] and the Gilles Kahn prize for a French PhD in Computer Science.[23]
Selected publications
Minimax policies for adversarial and stochastic bandits (2009), with Jean-Yves Audibert.
Best arm identification in multi-armed bandits (2010), with Jean-Yves Audibert and Rémi Munos.
Kernel-based methods for bandit convex optimization (2017), with Yin Tat Lee and Ronen Eldan.
A universal law of robustness via isoperimetry (2021), with Mark Sellke.
K-server via multiscale entropic regularization (2018), with Michael B. Cohen, Yin Tat Lee, James R. Lee, and Aleksander Madry.
Competitively chasing convex bodies (2019), with Yin Tat Lee, Yuanzhi Li, and Mark Sellke.
Regret analysis of stochastic and nonstochastic multi-armed bandit problems (2012), with Nicolò Cesa-Bianchi.