The Fundamental Question
We've learned how neural networks can approximate functions between finite-dimensional spaces, \( f: \mathbb{R}^d \rightarrow \mathbb{R}^m \).
But what if we want to learn mappings between infinite-dimensional function spaces?
Enter operators: mappings that take functions as input and produce functions as output.
\[ \mathcal{G}: \mathcal{A} \rightarrow \mathcal{U} \]
where \( \mathcal{A} \) and \( \mathcal{U} \) are function spaces.
Examples of operators:
- Derivative operator: \( (\mathcal{G}u)(x) = \frac{du}{dx} \)
- Integration operator: \( (\mathcal{G}u)(x) = \int_0^x u(t)\, dt \)
- PDE solution operator: maps boundary conditions or source terms to the corresponding PDE solution.
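To make the first two examples concrete, here is a minimal numerical sketch (an illustration, not part of the course material): a function \( u \) is represented by its values on a grid, so each operator becomes a map from one array of samples to another. The grid resolution and the test function \( u(x) = \sin x \) are arbitrary choices for illustration.

```python
import numpy as np

# Represent a function u by its values on a shared grid:
# once discretized, an operator is a map from samples to samples.
x = np.linspace(0.0, 2.0 * np.pi, 200)
u = np.sin(x)  # test input function u(x) = sin(x)

# Derivative operator (G u)(x) = du/dx, approximated by finite differences.
Gu_deriv = np.gradient(u, x)  # should approximate cos(x)

# Integration operator (G u)(x) = integral of u from 0 to x,
# approximated by a cumulative trapezoidal rule.
dx = x[1] - x[0]
Gu_int = np.concatenate(([0.0], np.cumsum((u[1:] + u[:-1]) * dx / 2.0)))
# should approximate 1 - cos(x)

print(np.max(np.abs(Gu_deriv - np.cos(x))))        # small discretization error
print(np.max(np.abs(Gu_int - (1.0 - np.cos(x)))))  # small discretization error
```

On this grid, each operator reduces to a map \( \mathbb{R}^{200} \rightarrow \mathbb{R}^{200} \); the goal of operator learning is to learn maps whose behavior stays consistent with the underlying operator \( \mathcal{G} \) across different discretizations.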
CVW material development is supported by NSF OAC awards 1854828, 2321040, 2323116 (UT Austin) and 2005506 (Indiana University)