Fourier Neural Operators
My honours research focused on the approximation theory of Fourier Neural Operators (FNOs), with an emphasis on understanding when these models can approximate solution operators of partial differential equations efficiently in terms of model complexity, rather than relying on empirical performance alone. The central objective was to study operator learning from a mathematical perspective: not just whether an FNO works in practice, but which classes of parametric PDE maps it can represent, how the approximation error scales with the architecture, and when operator learning can break the curse of dimensionality associated with high-dimensional parameter spaces. This meant combining tools from approximation theory, functional analysis, and numerical PDEs to derive rigorous error bounds for operator-valued learning problems, and situating the results relative to both classical discretisation methods and existing neural operator theory.
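The core building block an FNO stacks is a spectral convolution: transform the input function to Fourier space, apply learned multipliers to a truncated set of low modes, and transform back. The sketch below is a minimal, illustrative 1-D version of that idea (names like `fourier_layer` and the identity choice of weights are my own for illustration; a real FNO layer also adds a pointwise linear term and a nonlinearity, and learns the weights):

```python
import numpy as np

def fourier_layer(u, weights, n_modes):
    """Illustrative FNO-style spectral convolution on a uniform 1-D grid.

    u:        real array of shape (n,), function values on the grid
    weights:  complex array of shape (n_modes,), spectral multipliers
              (learned parameters in an actual FNO)
    n_modes:  number of low Fourier modes retained (spectral truncation)
    """
    u_hat = np.fft.rfft(u)                         # forward real FFT
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]  # act on retained modes only
    return np.fft.irfft(out_hat, n=len(u))         # back to physical space

# Usage: identity weights on the first few modes act as a low-pass filter,
# so the high-frequency component of the input is removed.
n, n_modes = 64, 8
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(12 * x)   # low- plus high-frequency content
w = np.ones(n_modes, dtype=complex)    # hypothetical (untrained) weights
v = fourier_layer(u, w, n_modes)       # ≈ sin(x): mode 12 is truncated away
```

The mode truncation is what makes the layer's parameter count independent of the grid resolution, which is the architectural feature the approximation-theoretic analysis above is concerned with.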
A major part of the project developed new approximation results for PDE solution operators arising in groundwater and flow modelling, together with computational experiments designed to validate the theory and make the results operational. Alongside the theoretical work, I built numerical pipelines to generate and analyse PDE datasets, compare approximation behaviour across problem classes, and translate the abstract bounds into concrete computational workflows. More broadly, the project aimed to clarify the mathematical mechanisms that make FNOs effective for structured infinite-dimensional learning problems, and to identify regimes in which they offer a genuinely efficient alternative to traditional surrogate modelling approaches. The work is intended for research dissemination and reflects my broader interest in combining rigorous mathematics with practical modelling of complex systems.
The thesis can be found here.