I work at the intersection of deep learning and probabilistic machine learning, studying how curvature-based Bayesian uncertainty can functionally enhance the training of neural networks. I am based in the group of Philipp Hennig.
The currently prevailing practice is to evaluate a network's performance solely by the final validation loss or accuracy it attains. By acknowledging the probabilistic nature of the problem, and by constructing and leveraging uncertainty, I aim to generalise training to realistic and practically relevant settings such as continual learning. This requires algorithmic advances that improve efficiency through matrix-free, compressed, iterative linear algebra, but also the establishment of new design patterns in deep learning, such as Kalman filtering in weight space.
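To make the "matrix-free" part concrete: curvature matrices such as the Hessian are far too large to store for modern networks, but their action on a vector can be computed with automatic differentiation, and iterative solvers only ever need that action. The sketch below is a minimal illustration in JAX, not code from my projects; the toy loss, the data shapes, and the damping parameter tau are assumptions made purely for this example.

```python
import jax
import jax.numpy as jnp

# Toy regression loss; stands in for an arbitrary network loss (assumption).
def loss(w, x, y):
    pred = jnp.tanh(x @ w)
    return jnp.mean((pred - y) ** 2)

def hvp(w, v, x, y):
    # Matrix-free Hessian-vector product via forward-over-reverse autodiff;
    # the full Hessian is never built or stored.
    grad_fn = jax.grad(lambda w_: loss(w_, x, y))
    return jax.jvp(grad_fn, (w,), (v,))[1]

def solve_curvature_system(w, g, x, y, tau=1e-2):
    # Solve (H + tau I) z = g with conjugate gradients, using only
    # Hessian-vector products: the kind of linear system a Laplace
    # approximation to the weight posterior requires.
    matvec = lambda v: hvp(w, v, x, y) + tau * v
    z, _ = jax.scipy.sparse.linalg.cg(matvec, g, maxiter=100)
    return z

# Hypothetical data, purely for demonstration.
x = jax.random.normal(jax.random.PRNGKey(0), (32, 10))
y = jax.random.normal(jax.random.PRNGKey(1), (32,))
w = jnp.zeros(10)
g = jax.grad(loss)(w, x, y)
z = solve_curvature_system(w, g, x, y)
```

Because conjugate gradients touches the curvature only through such matrix-vector products, the same pattern scales to settings where the Hessian could never be formed explicitly.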
Before my PhD, I worked in the field of causality, mainly exploring causal representation learning, where latent sources generate the observations via a nonlinear transformation. Earlier in my education, I studied various topics in computational neuroscience, such as brain-computer interfaces, the construction of a 3D brain atlas, and neuron simulation.