
How many of you tried BeeLlama.cpp? How's it? Agentic coding possible with 8GB VRAM?


We'll be getting those features (see the link at the bottom) in mainline llama.cpp sooner or later anyway. But for now, this fork could be useful for seeing the full potential of our poor GPUs (and of big GPUs too).

Any 8GB VRAM (and 32GB RAM) folks already doing agentic coding with models (at Q4 at least) like Qwen3.6-35B-A3B, Qwen3.6-27B, Gemma-4-31B, or Gemma-4-26B-A4B? I'd love to see some t/s stats, full commands, and more details on that. I'm not expecting any miracles with 8GB VRAM, but I still want to do something decent within these constraints. Though I'm getting a new rig this month, I want to keep using my current laptop (8GB VRAM) for agentic coding too.
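For reference, here's the kind of command I have in mind. This is just a minimal sketch assuming mainline llama.cpp flags (the fork may differ), and the GGUF filename is a placeholder:

    # Push all layers to the GPU, but keep the MoE expert tensors of the
    # first 24 layers in system RAM so the rest fits in 8GB VRAM.
    # --n-cpu-moe is a mainline llama.cpp flag; adjust the 24 to taste.
    llama-server -m ./Qwen3.6-35B-A3B-Q4_K_M.gguf \
        -ngl 99 --n-cpu-moe 24 \
        -c 16384 --port 8080

That trick only helps with the A3B/A4B MoE models; for the dense ones you'd lower -ngl instead and accept slower t/s. Corrections welcome, this is exactly the kind of detail I'm asking for.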

Others (with more than 8GB VRAM), please share your stats, full commands, and comparisons with mainline.

Below is a related thread by the creator. I hope the creator keeps adding features.

submitted by /u/pmttyji
