Standard neural networks learn pointwise mappings between finite-dimensional spaces, \( \mathbb{R}^n \rightarrow \mathbb{R}^m \). Operators, by contrast, map functions to functions. How can an infinite-dimensional function be represented with only finite data?
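To make this concrete, the operator-learning literature typically writes the target map with notation along the following lines (the symbols \( \mathcal{G} \), \( \mathcal{U} \), and \( \mathcal{V} \) are standard conventions, not taken from this page):

\[
\mathcal{G} : \mathcal{U} \rightarrow \mathcal{V}, \qquad v = \mathcal{G}(u),
\]

where \( u \in \mathcal{U} \) and \( v \in \mathcal{V} \) are functions rather than vectors. In practice, \( u \) is only observed through its values at a finite set of sample points,

\[
u \;\longmapsto\; \big( u(x_1), u(x_2), \ldots, u(x_n) \big) \in \mathbb{R}^n ,
\]

so any practical method must work with such finite discretizations while still approximating a map between function spaces.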

Limitations of traditional approaches:

  • Fixed discretization: Networks trained on a specific grid cannot generalize to inputs sampled at other resolutions (see the sketch after this list).
  • Curse of dimensionality: Treating every grid point as a separate input dimension makes fine discretizations of function spaces computationally intractable.
  • No theoretical foundation: Classical universal-approximation results apply to functions on \( \mathbb{R}^n \); they give no guarantee that standard networks can approximate operators between function spaces.
 
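The fixed-discretization limitation can be seen directly in code. Below is a minimal sketch (NumPy only; the array names, grid sizes, and the single linear layer standing in for a trained network are illustrative assumptions, not taken from this material): the layer's weight matrix is tied to a 64-point training grid, so the very same function sampled on a 128-point grid cannot even be fed through it.

    # Minimal sketch of the fixed-discretization problem (illustrative, NumPy only).
    import numpy as np

    rng = np.random.default_rng(0)

    n_train = 64                             # training grid: 64 points on [0, 1]
    x_train = np.linspace(0.0, 1.0, n_train)

    # A single linear layer R^64 -> R^64, standing in for a trained network.
    W = rng.standard_normal((n_train, n_train)) / np.sqrt(n_train)

    u_train = np.sin(2 * np.pi * x_train)    # input function sampled on the training grid
    v_train = W @ u_train                    # works: shapes match

    # The *same* function on a finer grid cannot be passed to the same network.
    n_test = 128
    x_test = np.linspace(0.0, 1.0, n_test)
    u_test = np.sin(2 * np.pi * x_test)
    try:
        W @ u_test                           # (64, 64) @ (128,) -> shape error
    except ValueError as err:
        print("resolution mismatch:", err)

Operator-learning architectures are designed to avoid exactly this failure: their learned parameters are shared across discretizations, so the same model can, in principle, be evaluated on grids of different resolutions.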