Frequency Bias and OOD Generalization in Neural Operators under a Variable-Coefficient Wave Equation
Published on May 13
Submitted by An Luo on May 14

Abstract
AI-generated summary: Neural operators for PDE solving show different generalization behaviors under distribution shifts, with Fourier Neural Operators and Deep Operator Networks exhibiting distinct responses to smoothness and frequency variations.
Neural operators learn to map initial conditions to the terminal solution of partial differential equations (PDEs), providing a surrogate for the full operator mapping and enabling rapid prediction across different input configurations. While recent neural operator architectures have demonstrated strong performance on diverse PDE tasks, their behavior under structured distribution shifts remains insufficiently understood. To investigate this, we study operator learning in a wave propagation setting governed by a one-dimensional variable-coefficient wave equation, using two representative architectures: the Fourier Neural Operator (FNO) and the Deep Operator Network (DeepONet). To examine their generalization under distribution shifts, we consider structured out-of-distribution (OOD) settings that independently vary input frequency and coefficient smoothness. The results show that under smoothness shifts, both models maintain stable performance, with FNO achieving lower error. In contrast, under frequency shifts, FNO exhibits a sharp increase in error on unseen high-frequency inputs, whereas DeepONet shows milder degradation despite higher overall error. Our analysis reveals that these differences arise from how each architecture represents and responds to variations in frequency structure. Together, these findings highlight a fundamental gap between strong in-distribution performance and generalization under distribution shifts in operator learning, underscoring the role of architectural representation bias in developing more reliable neural operators for physics-based PDE simulations beyond the training distribution.
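The abstract does not specify the paper's data-generation procedure or solver, so the sketch below is only a rough illustration of the structured OOD setup it describes. It assumes a periodic unit domain, random Fourier-mode initial conditions whose maximum wavenumber (`k_max`, a name introduced here) controls the frequency split, a smooth wave-speed field whose mode count (`k_c`) controls the smoothness split, and a simple leapfrog finite-difference solver for the 1D variable-coefficient wave equation; none of these choices are confirmed by the paper.

```python
import numpy as np

def make_initial_condition(x, k_max, rng):
    """Random superposition of Fourier modes up to wavenumber k_max.

    k_max controls the frequency content; sampling test inputs with a larger
    k_max than seen in training gives a frequency-shifted OOD set.
    """
    u0 = np.zeros_like(x)
    for k in range(1, k_max + 1):
        a, b = rng.normal(size=2)
        u0 += a * np.sin(2 * np.pi * k * x) + b * np.cos(2 * np.pi * k * x)
    return u0 / k_max  # crude amplitude normalization

def make_coefficient(x, k_c, rng):
    """Smooth, positive wave-speed field c(x); k_c controls its smoothness."""
    c = np.ones_like(x)
    for k in range(1, k_c + 1):
        c += 0.1 * rng.normal() * np.sin(2 * np.pi * k * x)
    return np.clip(c, 0.2, None)  # keep the wave speed bounded away from zero

def solve_wave(u0, c, T=1.0, dt=1e-4):
    """Leapfrog finite differences for u_tt = c(x)^2 u_xx with periodic BCs."""
    n = u0.size
    dx = 1.0 / n
    u_prev, u = u0.copy(), u0.copy()  # zero initial velocity
    for _ in range(int(T / dt)):
        lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
        u_next = 2 * u - u_prev + (dt * c) ** 2 * lap
        u_prev, u = u, u_next
    return u

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 256, endpoint=False)
c = make_coefficient(x, k_c=3, rng=rng)

u0_id = make_initial_condition(x, k_max=4, rng=rng)    # in-distribution frequencies
u0_ood = make_initial_condition(x, k_max=12, rng=rng)  # unseen higher frequencies
uT_id, uT_ood = solve_wave(u0_id, c), solve_wave(u0_ood, c)
```

Pairs `(u0, uT)` generated this way would serve as training data for an FNO or DeepONet, and re-sampling with a larger `k_max` (or a rougher `c(x)`) yields the frequency- or smoothness-shifted test sets the abstract refers to.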
Cite arxiv.org/abs/2605.12997 in a model, dataset, or Space README.md to link it from this page.