The emergence of Deep Learning has marked a profound shift in the paradigm of machine learning, driven by the numerous breakthroughs it has achieved in recent years. However, as the field evolves and Deep Learning becomes increasingly present in everyday tools and applications, there is a growing need to address unresolved challenges related to its efficiency and sustainability. This dissertation delves into the role of inductive biases, particularly continuous modeling and symmetry preservation, in addressing these challenges and enhancing the efficiency of Deep Learning.

The dissertation is structured in two main parts. The first part investigates continuous modeling as a tool to improve the efficiency of Deep Learning algorithms. Continuous modeling is the idea of parameterizing neural operations directly in a continuous space. The research presented in this part highlights the substantial benefits of continuous modeling for (i) computational efficiency, in both time and memory, (ii) parameter efficiency, and (iii) the complexity of designing neural architectures for new datasets and tasks, a notion coined "design efficiency".

In the second part, the focus shifts to the influence of symmetry preservation on the efficiency of Deep Learning algorithms. Symmetry preservation consists of designing neural operations that align with the inherent symmetries of the data. The research presented in this part demonstrates significant gains in both data and parameter efficiency through symmetry preservation, while also acknowledging the resulting trade-off of increased computational cost.

The dissertation concludes with a thorough critical evaluation of the research findings, openly discussing their limitations and proposing strategies to address them, informed by the literature and the author's insights. It ends by identifying promising avenues for future research on inductive biases for efficiency, and their wider implications for Deep Learning.
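To give a concrete sense of what "parameterizing neural operations directly in a continuous space" can look like, the sketch below defines a 1D convolution whose kernel is generated by a small MLP from continuous positions, so the number of parameters is independent of the sampled kernel size. This is a minimal, illustrative example and not the dissertation's implementation; the class name `ContinuousKernelConv1d`, the `kernel_net` architecture, and the hidden size are assumptions made for the sketch.

```python
# Minimal sketch (assumption: PyTorch; not the dissertation's code) of a continuously
# parameterized convolution: an MLP maps relative positions in [-1, 1] to kernel values,
# so one set of weights describes the kernel at any resolution or kernel size.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContinuousKernelConv1d(nn.Module):
    def __init__(self, in_channels, out_channels, hidden=32):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = out_channels
        # MLP from a scalar position to one value per (output, input) channel pair.
        self.kernel_net = nn.Sequential(
            nn.Linear(1, hidden),
            nn.GELU(),
            nn.Linear(hidden, out_channels * in_channels),
        )

    def forward(self, x, kernel_size):
        # Sample the continuous kernel at `kernel_size` positions; the parameter
        # count stays fixed no matter how many positions are sampled.
        positions = torch.linspace(-1.0, 1.0, kernel_size).unsqueeze(-1)   # (K, 1)
        kernel = self.kernel_net(positions)                                # (K, out*in)
        kernel = kernel.permute(1, 0).reshape(
            self.out_channels, self.in_channels, kernel_size)              # (out, in, K)
        return F.conv1d(x, kernel, padding="same")


# Usage: the same module can act as a small local kernel or a global one.
conv = ContinuousKernelConv1d(in_channels=3, out_channels=8)
x = torch.randn(2, 3, 128)
y_local = conv(x, kernel_size=7)     # (2, 8, 128)
y_global = conv(x, kernel_size=128)  # (2, 8, 128), same parameters as above
```

Because the kernel is a function of continuous positions, the same module can be reused across resolutions and sequence lengths, which is the kind of "design efficiency" the abstract refers to.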
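Symmetry preservation can be illustrated in a similarly hedged way. The sketch below is a toy rotation-equivariant "lifting" convolution that shares a single base filter across the four 90-degree rotations of the C4 group: one filter covers four orientations (better parameter and data efficiency), but the forward pass performs four convolutions instead of one (the computational trade-off mentioned above). The class name and shapes are assumptions for illustration, not the dissertation's code.

```python
# Minimal sketch (assumption: PyTorch; illustrative only) of a C4 rotation-equivariant
# lifting convolution: one learned filter is applied at 0, 90, 180 and 270 degrees.
import torch
import torch.nn as nn
import torch.nn.functional as F


class C4LiftingConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        # A single base filter, shared across all four rotations.
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size) * 0.1)

    def forward(self, x):
        # Four convolutions with rotated copies of the same filter: more compute,
        # but no additional parameters for the extra orientations.
        responses = [
            F.conv2d(x, torch.rot90(self.weight, k, dims=(-2, -1)), padding="same")
            for k in range(4)
        ]
        # Stack along a new "group" axis: (batch, out_channels, 4, H, W).
        return torch.stack(responses, dim=2)


layer = C4LiftingConv2d(in_channels=3, out_channels=8)
x = torch.randn(1, 3, 32, 32)
out = layer(x)  # (1, 8, 4, 32, 32)
# Rotating the input by 90 degrees rotates the output spatially and cyclically
# shifts it along the group axis, rather than producing unrelated features.
```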