Stéphane Rivaud

Researcher in Machine Learning

INRIA Saclay, TAU Team

Email: stephane.a.rivaud@inria.fr

Phone: (+33) 6 77 07 80 89

GitHub | LinkedIn | Google Scholar

Download CV

Research Interests

My research focuses on the efficiency and scalability of deep learning models, in both resource-constrained and large-scale distributed/decentralized environments. My PhD work at Sony CSL focused on integrating application-specific expert knowledge into generative modeling techniques to improve the quality of generated samples; this work led to the commercialization of a music production tool by Sony Music. My first post-doc, at ISIR, Sorbonne Université, focused on distributed and decentralized optimization techniques for training deep neural networks at scale, with an emphasis on model parallelism. My current work at INRIA Saclay focuses on designing training algorithms that grow neural architectures during training by performing functional gradient descent. Such procedures aim to perform neural architecture search at a fraction of the cost required by traditional techniques, as part of a broader research agenda toward more frugal AI systems.

Background: I have a strong theoretical background in mathematics and computer science, along with experience in signal processing and generative modeling techniques, especially as applied to audio signals. I also have experience with distributed training of large-scale neural architectures, with a focus on optimizing resource efficiency.

Education

  • PhD in Artificial Intelligence, Sony CSL and University of Reims (2016-2020)
  • Master in Acoustics, Signal Processing and Computer Science applied to Music (ATIAM), IRCAM, Centre Georges Pompidou (2014-2015)
  • Agrégation of Mathematics with major in Computer Science, ENS Rennes (2013)
  • Magistère of Mathematics, ENS Rennes (2010-2014)

Research Experience

Post-doc

  • Neural Architecture Growth for Frugal AI, INRIA Saclay, since 2024
  • Decentralized Training of Deep Neural Networks, ISIR, Sorbonne Université, 2022 - 2024

Thesis (Industrial)

  • Integration of Expert Knowledge into Generative Models: Application to Music Production, Sony CSL, 2016 - 2020

Publications

Peer-Reviewed Conference Articles

  • 2025 - Douka S., Verbockhaven M., Rudkiewicz T., Rivaud S., Landes F., Chevallier S., Charpiat G., Growth strategies for arbitrary DAG neural architectures, ESANN 2025. (Link)
  • 2025 - Rivaud S., Fournier L., Pumir T., Belilovsky E., Eickenberg M., Oyallon E., PETRA: Parallel End-to-end Training with Reversible Architectures, ICLR 2025 (spotlight). (Link)
  • 2023 - Fournier L., Rivaud S., Belilovsky E., Eickenberg M., Oyallon E., Can forward gradient match backpropagation?, ICML 2023. (Link)
  • 2016 - Rivaud S., Pachet F., Roy P., Sampling Markov models under binary equality constraints is hard, JFRB 2016. (Link)

Open Research Articles

  • 2020 - Rivaud S., Integration of Expert Knowledge in Generative Modelling: Application to Music Generation. PhD Thesis. (Link)
  • 2017 - Rivaud S., Pachet F., Sampling Markov models under constraints: Complexity results for binary equalities and grammar membership. Technical report. (Link)

Patent

  • 2022 - Deruty E., Rivaud S., Addressing interferences in multi-channel audio mixing, (US PATENT 11363377). (Link)

Teaching

  • Fall 2024 - Applied Statistics, M1 Artificial Intelligence, Université Paris-Saclay (Lectures and Practical Sessions)
  • Fall 2023 - Advanced Machine Learning and Deep Learning, M2 DAC, Sorbonne Université (Practical Sessions)
  • Fall 2019 - Introduction to Neural Networks, M2 Computer Science, Université de Reims (Lectures)
  • Fall 2017 & Fall 2018 - Introduction to Artificial Intelligence, M1 & M2 Computer Science, Université de Reims (Lectures)
  • Fall 2013 - Analysis and Algebra, L3 Mathematics, Université de Rennes (Oral Examiner)