Research Interests: My research focuses on the efficiency and scalability of deep learning models, in both resource-constrained and large-scale distributed/decentralized environments. My PhD work at Sony CSL focused on integrating application-specific expert knowledge into generative modeling techniques as a way to improve the quality of generated samples. This work led to the commercialization of a music production tool by Sony Music. My first postdoc, at ISIR, Sorbonne Université, focused on distributed and decentralized optimization techniques for training deep neural networks at scale, with an emphasis on model parallelism. My current work at INRIA Saclay focuses on designing training algorithms that grow neural architectures during training by performing functional gradient descent. Such procedures aim to perform neural architecture search at a fraction of the cost of traditional techniques. This approach is part of a broader research agenda aimed at designing more frugal AI systems.
Background: I have a strong theoretical background in mathematics and computer science. I have experience in signal processing and generative modeling, especially as applied to audio signals. Finally, I also have experience with distributed training of large-scale neural architectures, with a focus on optimizing resource efficiency.