r/LocalLLaMA · 1 min read

I've seen a lot of folks ask "can local LLMs actually do anything useful?"


And I'm here to share my experience. The answer is resoundingly 'yes'.

Let me start with the local model I use every day in my AI harness: an embedding model. I use it to give my AI's persistent memory system semantic search, which makes memory recall feel seamless to the user.
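The post doesn't name the embedding model or the memory system's internals, but the core idea, storing an embedding alongside each memory and recalling by cosine similarity, can be sketched minimally. The `embed()` here is a toy bag-of-letters stand-in, purely for illustration; a real setup would call a local embedding model instead.

```python
import math

def embed(text: str) -> list[float]:
    # Toy stand-in for a real local embedding model: a
    # bag-of-letters count vector. Swap in an actual model's
    # encode() call for real semantic similarity.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class Memory:
    """Stores (text, embedding) pairs; recalls the top-k by similarity."""
    def __init__(self):
        self.items: list[tuple[str, list[float]]] = []

    def store(self, text: str) -> None:
        self.items.append((text, embed(text)))

    def recall(self, query: str, k: int = 1) -> list[str]:
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

mem = Memory()
mem.store("user prefers dark mode")
mem.store("buy milk")
print(mem.recall("dark theme"))  # → ['user prefers dark mode']
```

The nice property of this pattern is that recall is by meaning rather than exact keyword match, which is what makes the memory feel seamless in conversation.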

Now my more recent use case:

Lately, I have been trying new applications for Qwen3.6-35B-A3B. I've been experimenting with a flow where Qwen and I work through a weekly loop:

1. Qwen evaluates a database against criteria I give it, on a regular weekly interval.
2. It emails me the items that meet my criteria.
3. I reply by email with my choice of which items to move forward with.
4. It runs my choice against our list of sources and our knowledge base to create a document, pushes it to a Google Doc, and emails me the Doc.
5. I edit the Google Doc and leave comments for Qwen to incorporate as feedback.
6. When we're done iterating, I email Qwen and tell it to convert the doc to our PDF template.
7. It converts the work into a nicely formatted PDF and emails it back to me so I can prepare it to send to the end user.
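The first step of that loop, evaluating the database against criteria and drafting the notification email, could be sketched as below. Everything here is hypothetical: the post doesn't describe its schema or criteria, so `check_criteria`, the `score` threshold, and the row fields are invented for illustration.

```python
# Hypothetical sketch of the weekly evaluation step:
# filter database rows by a criterion, then build the
# email body summarizing the matches.

def check_criteria(row: dict) -> bool:
    # Invented stand-in criterion: flag items scoring 80+.
    return row.get("score", 0) >= 80

def weekly_digest(rows: list[dict]) -> str:
    hits = [r for r in rows if check_criteria(r)]
    lines = [f"- [{r['id']}] {r['title']} (score {r['score']})" for r in hits]
    return "Items matching your criteria this week:\n" + "\n".join(lines)

rows = [
    {"id": 1, "title": "Lead A", "score": 91},
    {"id": 2, "title": "Lead B", "score": 42},
]
print(weekly_digest(rows))
```

In the actual flow the model presumably does the evaluation itself rather than a hard-coded predicate, and the digest would go out via an email integration rather than stdout; the skeleton just shows where those pieces plug in.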

I'm starting simple and moving to more complex tasks, but so far Qwen3.6-35B-A3B is just knocking down every task I put in front of it. I'll report back as things develop, but seriously, the verdict is yes. You can do many useful things with local LLMs.

What are you doing with your local LLMs?

submitted by /u/NoWorking8412
