Opencode you naughty minx
Mirrored from r/LocalLLaMA for archival readability. Support the source by reading on the original site.
Man, AI agents are getting pretty crazy these days. :)
(All local. I decided to try putting an orchestrator in there, for when Qwen and Gemma aren't up to it.)