Neurodata Without Boredom: Benchmarking Agentic AI for Data Reuse
Abstract: Neuroscience data are highly fragmented across labs, formats, and experimental paradigms, and reuse often requires substantial manual effort. A persistent roadblock to data reuse and integration is the need to decipher bespoke and diverse data formatting choices. Common data formats have been proposed in response, but the field continues to struggle with a fundamental tension: formats flexible enough to accommodate diverse experiments are rarely descriptive enough to be self-explanatory, and sufficiently descriptive formats demand detailed documentation and curation effort that few labs can sustain. Agentic AI is a natural candidate to solve this problem: LLMs read code and text faster than humans, with sustained attention to the low-level details humans tend to skim over. To measure how well agentic AI performs on this task, we selected eight recent papers studying large-scale mouse neural population recordings that shared both data and code, spanning diverse recording modalities, behavioral paradigms, and dataset formats (e.g., NWB, specialized APIs, and general-purpose Python or MATLAB files). We provided agents with the data, code, and paper, and prompted them to load, understand, and reformat the data for a common downstream task: training a decoder from neural activity to task or behavioral variables. General-purpose coding agents commonly used by scientists performed well on each sub-task, but rarely strung together a fully error-free end-to-end solution. We characterize the types of mistakes agents made and the dataset properties that elicited them, and propose data-sharing best practices for the agentic-AI era. We further find that agents-as-judges are unreliable at catching errors, especially without ground-truth references, so interactive, human-in-the-loop coding remains necessary.
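For readers unfamiliar with the benchmark's downstream task, a minimal sketch of what the agents were asked to produce might look like the following: load an NWB session, bin spike counts, and fit a linear decoder to a behavioral variable. This is an illustration under assumptions, not the paper's pipeline: the file name (`session.nwb`), the `behavior` processing-module path, and the `wheel_speed` series are hypothetical placeholders, and real datasets place and name these differently.

```python
import numpy as np
from pynwb import NWBHDF5IO
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

BIN_S = 0.05  # 50 ms bins; an arbitrary choice for illustration

with NWBHDF5IO("session.nwb", "r") as io:  # hypothetical file name
    nwb = io.read()

    # Spike times per unit live in the NWB units table (a ragged column).
    spikes = [np.asarray(nwb.units["spike_times"][i])
              for i in range(len(nwb.units))]

    # Where the behavioral variable lives is dataset-specific; this
    # processing-module path and series name are assumptions.
    beh = nwb.processing["behavior"]["wheel_speed"]
    beh_t, beh_v = np.asarray(beh.timestamps), np.asarray(beh.data)

# Bin spike counts into a (time bins x units) matrix over the behavior span.
edges = np.arange(beh_t[0], beh_t[-1], BIN_S)
X = np.stack([np.histogram(st, bins=edges)[0] for st in spikes], axis=1)

# Resample the behavioral variable at bin centers.
centers = edges[:-1] + BIN_S / 2
y = np.interp(centers, beh_t, beh_v)

# Fit a ridge-regression decoder; no shuffling, to respect time order.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)
decoder = Ridge(alpha=1.0).fit(X_tr, y_tr)
print(f"held-out R^2: {decoder.score(X_te, y_te):.3f}")
```

Even a toy version like this depends on aligning time bases between neural and behavioral streams and discovering dataset-specific names, exactly the kind of low-level detail the abstract argues agents must get right at every step to produce an error-free end-to-end solution.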
| Subjects: | Machine Learning (cs.LG) |
| Cite as: | arXiv:2605.12808 [cs.LG] |
| | (or arXiv:2605.12808v1 [cs.LG] for this version) |
| DOI: | https://doi.org/10.48550/arXiv.2605.12808 (arXiv-issued DOI via DataCite, pending registration) |
Submission history
From: Kristin Branson
[v1] Tue, 12 May 2026 23:00:18 UTC (11,544 KB)