Discretization-Invariant Operator Learning: Algorithms and Theory
Abstract: Learning operators between infinite-dimensional spaces is an important task arising in a wide range of applications in machine learning, data science, mathematical modeling, and simulation. This talk introduces a new discretization-invariant operator learning approach based on data-driven kernels for sparsity via deep learning. Compared to existing methods, our approach achieves attractive accuracy in solving forward and inverse problems, prediction problems, and signal processing problems with zero-shot generalization, i.e., networks trained on a fixed data structure can be applied to heterogeneous data structures without expensive re-training. Under mild conditions, a quantitative generalization error analysis will be provided to understand discretization-invariant operator learning.
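To make the notion of discretization invariance concrete, the following is a minimal, hypothetical sketch (not the speaker's method): a kernel integral operator whose kernel is a function of coordinates rather than grid indices, so the same parameters apply, zero-shot, to inputs sampled on different grids. The kernel form, parameter names, and quadrature rule are all illustrative assumptions.

```python
import numpy as np

# Toy illustration of discretization invariance: the operator
# (K u)(x) = \int k_theta(x, y) u(y) dy is approximated by quadrature
# on whatever grid the input function u happens to be sampled on.
# Because k_theta depends on coordinates, not grid indices, the same
# parameters theta can be reused across discretizations.

def kernel(theta, x, y):
    """Toy parametric kernel k_theta(x, y); a real model would use a deep network."""
    scale, width = theta  # hypothetical learnable parameters
    return scale * np.exp(-((x[:, None] - y[None, :]) ** 2) / (2.0 * width ** 2))

def apply_operator(theta, y_pts, u_vals, x_pts):
    """Approximate (K u)(x_pts) by quadrature on the sample points y_pts."""
    w = np.gradient(y_pts)            # rough local-spacing quadrature weights
    K = kernel(theta, x_pts, y_pts)   # kernel matrix of shape (len(x_pts), len(y_pts))
    return K @ (w * u_vals)

theta = np.array([1.0, 0.1])
u = lambda y: np.sin(2 * np.pi * y)
x_query = np.linspace(0.0, 1.0, 50)

# Same parameters, two different discretizations of the input function:
coarse = np.linspace(0.0, 1.0, 32)
fine = np.sort(np.random.default_rng(0).uniform(0.0, 1.0, 200))

out_coarse = apply_operator(theta, coarse, u(coarse), x_query)
out_fine = apply_operator(theta, fine, u(fine), x_query)
print(np.max(np.abs(out_coarse - out_fine)))  # small: outputs roughly agree across grids
```

In this toy setting the discrepancy between the two outputs is purely quadrature error, which shrinks as either grid is refined; this is the behavior that lets a trained network transfer across heterogeneous data structures without re-training.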