Open WebUI

Open WebUI is a user-friendly, extensible, and feature-rich self-hosted web interface for Large Language Models (LLMs), designed to run entirely offline.

Open WebUI lets you decide whether to run your AI application in a cloud environment or on your own infrastructure.

Open WebUI makes you independent of large cloud providers and ensures that your data stays in a secure environment.

Open WebUI as the basis for your internal knowledge management solution

1. Easy installation based on Docker and Kubernetes technology.
2. Simple chat-based user interface.
3. Open interfaces for using alternative LLMs.
4. Simple integration of internal data via a RAG (Retrieval-Augmented Generation) architecture.
5. Granular permissions and freely definable user groups.
6. No time-consuming training of an LLM model required.
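The Docker-based installation from the first point above can be as simple as running a single container. The command below follows the project's published quick start; treat the exact image tag, port mapping, and volume path as details to verify against the current documentation:

```shell
# Run Open WebUI, persisting its data in a named volume.
# The web UI is then reachable at http://localhost:3000.
docker run -d \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

For Kubernetes, the same image can be deployed via a standard Deployment and Service instead.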
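The RAG approach from the fourth point is why no model training is needed: relevant document chunks are retrieved at query time and prepended to the prompt. The minimal sketch below uses keyword overlap as a stand-in for the embedding-based similarity search a real RAG pipeline would use; all names are illustrative:

```python
def score(query: str, chunk: str) -> int:
    """Toy relevance score: number of shared lowercase words.
    (A real RAG pipeline compares embedding vectors instead.)"""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most relevant to the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Prepend retrieved context to the user question -- the core RAG idea."""
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Vacation requests are filed in the HR portal.",
    "The cafeteria opens at 11:30 on weekdays.",
    "Expense reports require a manager approval.",
]
prompt = build_prompt("How do I file a vacation request?", docs)
```

Because the model only ever sees retrieved snippets at inference time, internal documents can be added or removed without retraining anything.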

Additional functions and architectural concepts

  • Tools: Plugins that let LLMs collect and use real-time data from the real world (e.g. weather forecasts, stock quotes, flight tracking).
  • Functions: Allow customizing or extending Open WebUI itself (e.g. adding new AI models, creating custom buttons, implementing improved filters).
  • Pipelines and Agents: Enable custom "agents/models" or integrations with complex workflows that combine multiple models or external APIs, and can expose these features in OpenAI API-compatible formats.
  • Workspace: An area where administrators select models, manage prompts, add documents for RAG, and access tools and functions.
  • Valves and UserValves: Mechanisms for supplying dynamic settings such as API keys or configuration options; Valves are configured by the administrator, UserValves by the end user.
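The OpenAI API compatibility mentioned under Pipelines means external applications can talk to Open WebUI the same way they would talk to OpenAI. A hedged sketch of that wire format (the model name and message contents are illustrative placeholders):

```python
import json

def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat completion payload.
    This request shape is what 'OpenAI API-compatible' refers to."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request("llama3", "Summarize our vacation policy.")
body = json.dumps(payload)  # JSON body POSTed to a chat-completions endpoint
```

Any client library that already speaks this format can be pointed at a compatible pipeline without code changes.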
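To make the Tools and Valves concepts concrete: a Tool is a Python class whose methods the LLM can invoke, and Valves hold its admin-editable settings. The sketch below is an assumption-laden approximation, not the exact Open WebUI plugin API: it uses a stdlib dataclass where real tools typically declare Valves as a pydantic model, and the weather lookup is stubbed rather than calling a live API:

```python
from dataclasses import dataclass

class Tools:
    @dataclass
    class Valves:
        # Admin-configurable settings (real tools usually use pydantic here).
        api_key: str = ""
        units: str = "celsius"

    def __init__(self):
        self.valves = self.Valves()

    def get_weather(self, city: str) -> str:
        """Get the current weather for a city.
        (Stubbed; a real tool would call a weather API using self.valves.api_key.)"""
        return f"Weather in {city}: 21 degrees {self.valves.units} (stub)"

tools = Tools()
tools.valves.units = "fahrenheit"   # an administrator adjusting a Valve
report = tools.get_weather("Berlin")
```

The method's docstring doubles as the description the LLM sees when deciding whether to call the tool.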

Open WebUI aims to provide a flexible, secure, and user-friendly platform for interacting with and managing different AI models in a self-hosted environment. The active community continuously contributes new features and improvements.

Interested? We look forward to talking to you!