r/LocalLLaMA · 2 min read

TextGen is now a native desktop app. Open-source alternative to LM Studio (formerly text-generation-webui).

Hi all,

I have been making a lot of updates to my project, and I wanted to share them here.

TextGen (previously text-generation-webui, also known by my username, oobabooga or "ooba") has been in development since December 2022, before LLaMA and llama.cpp existed.

In the last two months, the project has evolved from a web UI into a no-install desktop app for Windows, Linux, and macOS with a polished UI. I have created a very minimal and elegant Electron integration for it. (LM Studio is also a web UI running over Electron, though not many people seem to know that.)

https://preview.redd.it/tk8oibhgjw0h1.png?width=1686&format=png&auto=webp&s=95c70f769766466885c8fdc6e7211525a371a920

It works like this:

  1. You download a portable build from the releases page
  2. Unzip it
  3. Double-click textgen
  4. A window appears

There is no installation, and no files are ever created outside the extracted folder. It's fully self-contained. All your chat histories and settings are stored in a user_data folder shipped with the build.

There are builds for CUDA, Vulkan, CPU-only, Mac (Apple Silicon and Intel), and ROCm.

Some differentiating features:

  • Full privacy. Unlike LM Studio, it doesn't phone home on every launch with your OS, CPU architecture, app version, and inference backend choices. Zero outbound requests.
  • ik_llama.cpp builds (LM Studio and Ollama only ship vanilla llama.cpp). ik_llama.cpp has new quant types like IQ4_KS and IQ5_KS with SOTA quantization accuracy.
  • Built-in web search via the ddgs Python library, either through tool-calling with the built-in web_search tool (works flawlessly with Qwen 3.6 and Gemma 4), or through an "Activate web search" checkbox that fetches search results as text attachments.
  • Tool-calling support through three options: single-file .py tools (very easy to create your own custom functions), HTTP MCP servers, and stdio MCP servers. You can enable confirmations so that each tool call shows up with approve/reject buttons before it executes. I have written a guide here.
  • The ability to create custom characters for casual chats, in addition to regular instruction-following conversations:

https://preview.redd.it/anlkyz6ijw0h1.png?width=1686&format=png&auto=webp&s=e8783773865c8c0721bd1474d583fd96604c3d38

  • OpenAI- and Anthropic-compatible APIs with very strict spec compliance. It works with Claude Code: load a model, run ANTHROPIC_BASE_URL=http://127.0.0.1:5000 claude, and it just works.
  • Accurate PDF text extraction using the PyMuPDF Python library.
  • trafilatura for web page fetching, which strips navigation and boilerplate from pages, saving a lot of tokens on agentic tool loops.
  • Chat templates are rendered through Python's Jinja2 library, which works for templates where llama.cpp's C++ reimplementation of jinja sometimes crashes.

I write this as a passion project/hobby. It's free and open source (AGPLv3) as always:

https://github.com/oobabooga/textgen
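As an illustration of the single-file .py tool idea mentioned above: a tool-calling runtime generally needs a callable plus a name, a model-facing description, and a JSON-schema parameter spec. The attribute names below are hypothetical; the exact interface TextGen expects is documented in the project's guide.

```python
from datetime import datetime, timedelta, timezone

def get_time(timezone_offset: int = 0) -> str:
    """Return the current time shifted from UTC by timezone_offset hours."""
    now = datetime.now(timezone.utc) + timedelta(hours=timezone_offset)
    return now.strftime("%Y-%m-%d %H:%M")

# Metadata a tool-calling runtime typically needs: a name, a description
# the model sees, and a JSON-schema description of the parameters.
# (Field names here are illustrative, not TextGen's actual spec.)
TOOL_SPEC = {
    "name": "get_time",
    "description": "Get the current time, optionally offset from UTC.",
    "parameters": {
        "type": "object",
        "properties": {
            "timezone_offset": {"type": "integer"},
        },
    },
}
```

When the model emits a call to get_time, the runtime would validate the arguments against TOOL_SPEC["parameters"], invoke the function, and feed the returned string back into the conversation.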

submitted by /u/oobabooga4
