r/LocalLLaMA · 1 min read

Using Local LLMs for research

Mirrored from r/LocalLLaMA for archival readability. Support the source by reading on the original site.

Hey there. I am an undergrad who has mostly done SWE, but I will be doing ML research under my professor over the summer. So I am new to research - please don't judge too harshly. Generally, we will be working on Physics-Informed Neural Networks.

I have seen some articles about people using AI agents for research. Of course, I am not expecting (nor do I want) to write an entire paper with an AI. Rather, I am looking for an agent that helps with retrieval - for example, finding relevant papers while I'm asleep or away from my PC.

I have access to an NVIDIA RTX 6000 PRO, so I can self-host a reasonably large model. But I don't really know how to build a research agent. Right now I have qwen-3.6-35b running as the base for my Hermes agent, which I use occasionally. But how do I make a research agent that is actually useful? The only solutions I can see right now are either writing a skill for my Hermes agent or using something like Karpathy's LLM Wiki Agent.
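For the "find relevant papers while I'm away" half, here is a minimal sketch of what a retrieval step could look like, using only the Python standard library and the public arXiv Atom API (the endpoint and query syntax come from arXiv's API docs, not from this post). Ranking or summarizing the results with the self-hosted model, and wiring this into a Hermes skill, are left out and would be the agent-specific part.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ARXIV_API = "http://export.arxiv.org/api/query"
ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace used by the feed


def build_query_url(terms, max_results=10):
    # Build an arXiv API search URL: AND all keyword terms, newest first.
    query = " AND ".join(f'all:"{t}"' for t in terms)
    params = urllib.parse.urlencode({
        "search_query": query,
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": max_results,
    })
    return f"{ARXIV_API}?{params}"


def parse_feed(atom_xml):
    # Extract (title, abstract, link) tuples from the Atom response.
    root = ET.fromstring(atom_xml)
    papers = []
    for entry in root.findall(f"{ATOM}entry"):
        title = entry.findtext(f"{ATOM}title", "").strip()
        summary = entry.findtext(f"{ATOM}summary", "").strip()
        link = entry.findtext(f"{ATOM}id", "").strip()
        papers.append((title, summary, link))
    return papers


if __name__ == "__main__":
    # Run this on a schedule (cron/systemd timer) and feed the abstracts
    # to the local model to score relevance against your research topic.
    url = build_query_url(["physics-informed neural networks"], max_results=5)
    with urllib.request.urlopen(url, timeout=30) as resp:
        for title, summary, link in parse_feed(resp.read()):
            print(f"{title}\n  {link}\n")
```

A cron job running this nightly, plus a prompt to the local model like "score each abstract 1-5 for relevance to PINNs for fluid dynamics," is already a crude but useful research agent; the agent framework mostly adds memory and tool orchestration on top.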

I am really confused, but also really curious and motivated to learn about this. I would value any guidance immensely!

submitted by /u/AggressiveMention359
