
Foundations of deep convolutional models through kernel methods

16 Feb 2021, 15:00 — Remote

Speaker:
Alberto Bietti — New York University
Abstract:
Deep learning has been most successful in tasks where the data presents a rich structure, such as images, audio, or text. The choice of network architecture is believed to play a key role in exploiting this structure, for instance through convolutions and pooling on natural signals, yet a precise study of these properties and of how they affect learning guarantees is still missing. Another challenge for the theoretical understanding of deep learning models is that they are often over-parameterized and known to be powerful function approximators, while seemingly remaining easy to optimize with gradient methods. We study deep models through the lens of kernel methods, which define functional spaces for learning in a non-parametric manner and arise naturally when considering the optimization of infinitely-wide networks in certain regimes. This allows us to analyze invariance and stability properties of various convolutional architectures through the geometry of the kernel mapping, as well as approximation properties of learning in different regimes.
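A minimal sketch of one such regime, assuming the abstract refers to the neural tangent kernel limit (a standard setting in this line of work): linearizing a network $f(x; \theta)$ around its random initialization $\theta_0$,

    f(x; \theta) \approx f(x; \theta_0) + \langle \nabla_\theta f(x; \theta_0),\, \theta - \theta_0 \rangle,

gradient descent on the linearized model performs kernel regression with the tangent kernel

    K(x, x') = \langle \nabla_\theta f(x; \theta_0),\, \nabla_\theta f(x'; \theta_0) \rangle,

which concentrates to a deterministic, width-independent limit as the layer widths grow, so learning in this regime reduces to a kernel method.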
Bio:
Alberto is a Faculty Fellow/Postdoc at the NYU Center for Data Science in New York. He completed his PhD in 2019 at Inria and Université Grenoble-Alpes under the supervision of Julien Mairal, and later spent part of 2020 as a postdoc at Inria Paris, hosted by Francis Bach. His research interests revolve around machine learning, optimization, and statistics, with a focus on developing the theoretical foundations of deep learning.
