r/LocalLLaMA

Automated AI researcher running locally with llama.cpp

Mirrored from r/LocalLLaMA for archival readability. Support the source by reading on the original site.

Hi everyone, I'm happy to share ml-intern, a harness that gives agents tighter integration with Hugging Face's open-source libraries (transformers, datasets, trl, etc.) and Hub infrastructure:

https://github.com/huggingface/ml-intern

The harness is quite simple (basically tools + a system prompt), and we built it initially for Claude Opus. However, now that open models are getting really good at agentic workflows, I've added support for running ml-intern with local models via llama.cpp or Ollama. As you can see in the video, Qwen3.6-35B-A3B can SFT a model end-to-end by orchestrating CPU/GPU sandboxes and jobs on the Hub. I find this pretty neat because we can now have an AI researcher running 24/7 on a laptop without maxing out token limits :)
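For anyone curious what "tools + system prompt" amounts to, here's a minimal sketch of that kind of agent loop. Everything below (the tool names, the stubbed replies) is illustrative, not ml-intern's actual API; in a real run the replies would come from a model served by llama.cpp's OpenAI-compatible endpoint instead of hardcoded strings.

```python
import json

# Hypothetical tool registry: the harness exposes a few callables and
# describes them to the model in the system prompt.
TOOLS = {
    "run_job": lambda args: f"started job with {args}",
    "list_files": lambda args: ["train.py", "config.yaml"],
}

SYSTEM_PROMPT = (
    "You are an ML research agent. Available tools: "
    + ", ".join(TOOLS)
    + '. Reply with JSON: {"tool": name, "args": {...}} or {"final": answer}.'
)

def agent_step(model_reply: str):
    """Parse one model reply and either dispatch a tool or finish."""
    msg = json.loads(model_reply)
    if "final" in msg:
        return ("done", msg["final"])
    result = TOOLS[msg["tool"]](msg["args"])
    return ("tool_result", result)

# Stubbed model replies standing in for chat-completion responses:
print(agent_step('{"tool": "list_files", "args": {}}'))
print(agent_step('{"final": "SFT run launched"}'))
```

The loop just feeds each tool result back to the model as a new message until it emits a final answer; that's essentially all a harness like this needs, with the real work living in the tools themselves.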

Anyway, I hope this is useful to the community and please let me know if there are any features that you'd like us to include.

submitted by /u/lewtun
