A modular, terminal-based AI chatbot that supports Ollama, LM Studio, and LocalAI (Mudler) — all using OpenAI-compatible APIs. Built with a clean, scalable architecture using Python, Rich, and OpenAI SDK.
This project provides a streaming, markdown-rendering CLI chatbot that can switch between AI providers at runtime. It uses a fully modular structure (providers, UI, chat engine) for maintainability and future expansion.
| Component | Technology |
|---|---|
| Language | Python 3.9+ |
| CLI UI Rendering | rich |
| AI Client | openai (OpenAI-compatible mode) |
| Local AI Providers | Ollama / LM Studio / LocalAI |
| Architecture | Modular Python package (app folder) |
```
project/
│
├── app.py            # entry point
│
└── app/
    ├── providers.py  # provider registry (base URLs, API keys)
    ├── ui.py         # Rich-based rendering
    ├── chat.py       # chat engine
    └── __init__.py
```
Follow these steps to install and run the chatbot.
```shell
python -m venv venv
venv\Scripts\activate        # Windows; on macOS/Linux use: source venv/bin/activate
pip install -r requirements.txt
```
You can choose any provider. Each runs locally.
Download & install: https://lmstudio.ai/
Then start the local server from within LM Studio; by default it serves the OpenAI-compatible API at http://localhost:1234/v1.
Install: https://ollama.com/download
List models:
```shell
ollama list
```
The server starts automatically with Ollama.
GitHub: https://github.com/mudler/LocalAI
Docker example:
```shell
docker run -p 8080:8080 localai/localai:latest
```
This exposes the OpenAI-compatible API at:
```
http://localhost:8080/v1
```
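As a quick sanity check that any of these servers is up, you can query the standard `/v1/models` listing endpoint that OpenAI-compatible APIs expose. This helper is a hypothetical addition using only the standard library; the default `base_url` matches the LocalAI example above.

```python
# Hypothetical health-check helper: asks an OpenAI-compatible server
# which models it serves. Standard library only.
import json
import urllib.request


def models_url(base_url: str) -> str:
    """Build the /models listing URL from an OpenAI-compatible base URL."""
    return base_url.rstrip("/") + "/models"


def list_models(base_url: str = "http://localhost:8080/v1") -> list:
    """Return the model ids the server reports (requires a running server)."""
    with urllib.request.urlopen(models_url(base_url), timeout=5) as resp:
        return [m["id"] for m in json.load(resp)["data"]]
```

If `list_models()` returns an empty list, the server is running but no model has been pulled or loaded yet.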
Providers are defined in `app/providers.py`; you can modify base URLs or API keys there.
Example:
```python
PROVIDERS = {
    "Ollama": {
        "base_url": "http://localhost:11434/v1",
        # Local servers typically ignore the key, but the OpenAI SDK
        # requires a non-empty value.
        "api_key": "ollama-key",
    },
    "LMStudio": {
        "base_url": "http://localhost:1234/v1",
        "api_key": "lm-studio",
    },
    "LocalAI (Mudler)": {
        "base_url": "http://localhost:8080/v1",
        "api_key": "localai-key",
    },
}
```
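To show how a PROVIDERS entry could be turned into a working client, here is a hedged sketch using the OpenAI SDK in streaming mode. The function names and the model placeholder are illustrative assumptions, not the project's actual code.

```python
# Sketch: map a PROVIDERS entry onto an OpenAI-SDK client and stream a reply.
PROVIDERS = {
    "Ollama": {"base_url": "http://localhost:11434/v1", "api_key": "ollama-key"},
    "LMStudio": {"base_url": "http://localhost:1234/v1", "api_key": "lm-studio"},
}


def client_kwargs(name: str) -> dict:
    """Translate a PROVIDERS entry into OpenAI() constructor arguments."""
    cfg = PROVIDERS[name]
    return {"base_url": cfg["base_url"], "api_key": cfg["api_key"]}


def stream_reply(provider: str, model: str, prompt: str) -> None:
    """Stream one completion to stdout.

    Needs `pip install openai` and a running local server.
    """
    from openai import OpenAI  # imported lazily so client_kwargs stays dependency-free

    client = OpenAI(**client_kwargs(provider))
    stream = client.chat.completions.create(
        model=model,  # pick a model you have locally, e.g. one from `ollama list`
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
```

Because every provider speaks the OpenAI protocol, switching backends is just a matter of constructing the client with a different `base_url`/`api_key` pair.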
From the project root:
```shell
python app.py
```
License: MIT — see LICENSE