What is the most unexpected thing you have gotten a local model to do?
Mirrored from r/LocalLLaMA for archival readability.
Most local LLM use cases I see are chat, coding, and RAG. But with vision models getting better and faster on consumer hardware, I feel like there is a lot of untapped territory.
I got a local VLM to play a board game just by looking at the screen, and it worked far better than I expected.
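The OP doesn't share code, but the screenshot-to-move loop can be sketched roughly as follows: capture the board as an image, encode it, and send it to a locally hosted vision model through an OpenAI-compatible chat endpoint (the API shape llama.cpp's server, Ollama, and vLLM all expose). The model name, prompt, and endpoint below are assumptions, not the OP's setup.

```python
import base64


def build_vision_request(image_bytes: bytes, prompt: str,
                         model: str = "local-vlm") -> dict:
    """Build an OpenAI-compatible chat payload with one inline image.

    `model` is a placeholder; substitute whatever vision model your
    local server has loaded.
    """
    # Inline the screenshot as a base64 data URL, the image format
    # accepted by OpenAI-compatible local servers.
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }


# Example: screenshot bytes would come from a real capture library.
req = build_vision_request(b"\x89PNG\r\n...", 
                           "Here is the current board. What move should I make?")
# POST this as JSON to e.g. http://localhost:8080/v1/chat/completions
```

In a real loop you would capture the screen each turn, send the payload, parse the model's suggested move from the reply, and act on it.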
What is the weirdest or most unexpected thing you have used a local model for?