Harbor: Effortless Local LLM Stack Management with CLI and App

Summary
Harbor is a containerized toolkit that simplifies setting up and managing local LLM environments. With simple commands or a companion application, you can run LLM backends, APIs, frontends, and related services, streamlining experimentation and development with large language models on your local machine.
Introduction
Harbor is a containerized LLM toolkit for setting up and managing local Large Language Model (LLM) environments. It provides a command-line interface (CLI) and a companion desktop application for deploying LLM backends, frontends, APIs, and a wide array of related services. Designed for both beginners and experienced developers, Harbor simplifies the otherwise complex process of running modern AI models and tools locally.
Installation
Getting started with Harbor is straightforward. The project provides guides for installing both the CLI and the companion application. Once installed, you can launch the default stack, which includes Open WebUI and Ollama, with a single command:
harbor up
For more detailed installation instructions, refer to the official documentation.
Examples
Harbor's versatility shines through its wide range of supported services and functionalities. Here are a few examples of what you can achieve:
Running Local LLMs and Inference Engines
Easily deploy popular LLM backends and advanced inference engines, all pre-connected to frontends like Open WebUI:
harbor up ollama
harbor up llamacpp
harbor up vllm
harbor up vllm llamacpp tgi litellm tabbyapi aphrodite sglang ktransformers mistralrs airllm
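A typical session pairs these backend commands with a few lifecycle commands. The sketch below assumes a working Harbor install; `up`, `open`, `logs`, and `down` are part of Harbor's CLI, though defaults and behavior may vary between versions:

```shell
# Start the default stack plus the vLLM backend
harbor up vllm

# Open the default frontend (Open WebUI) in the browser
harbor open

# Tail service logs while experimenting
harbor logs

# Stop all running Harbor services
harbor down
```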
Web RAG and Deep Research
Integrate powerful search capabilities with your LLMs. Harbor includes SearXNG, which is pre-connected to various services for Web RAG:
harbor up searxng
harbor up searxng chatui
Image Generation
Utilize image generation models like FLUX with ComfyUI, integrated directly into Open WebUI:
harbor up comfyui
Access from Anywhere
Access your Harbor services from your phone via QR code or expose them to the internet with a built-in tunneling service (use with caution):
harbor qr
harbor tunnel
Eject to Docker Compose
When you're ready to transition to a custom setup, Harbor can generate a docker-compose file replicating your current configuration:
harbor eject searxng llamacpp > docker-compose.harbor.yml
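For orientation, the ejected file is standard Docker Compose; its exact contents depend on the services you selected and on Harbor's version. A hypothetical excerpt for a llamacpp service might look roughly like this (the service name, image reference, ports, and volume paths are illustrative, not Harbor's actual output):

```yaml
services:
  llamacpp:
    image: ghcr.io/ggerganov/llama.cpp:server   # illustrative image reference
    ports:
      - "8080:8080"                             # illustrative host:container port
    volumes:
      - ./models:/models                        # illustrative model cache mount
```

Because the output is plain Compose, it can then be run with `docker compose -f docker-compose.harbor.yml up -d`, with no further dependency on Harbor itself.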
Why Use Harbor
Harbor offers significant advantages for anyone working with local LLM environments:
- Simplicity and Convenience: Run complex LLM stacks with minimal configuration, often with a single command or click.
- Centralized Workflow: All services, configurations, logs, and data files are managed from a single, consistent interface, reducing setup complexity and ensuring predictability.
- Experimentation Hub: It serves as an excellent starting point for experimenting with a vast array of LLMs and related services, providing a robust and ready-to-use environment.
- Extensive Service Catalog: Access a wide selection of UIs, backends, and satellite services, from chat interfaces to RAG tools and workflow automation.
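The centralized workflow is exposed through the CLI's configuration commands. A minimal sketch, assuming a Harbor install; the subcommand and key names here are assumptions and may differ by version, so consult `harbor config --help` for the exact interface:

```shell
# List current configuration values (subcommand name is an assumption)
harbor config list

# Read and change a single value (key name is illustrative)
harbor config get webui.host.port
harbor config set webui.host.port 8080
```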
Links
- GitHub Repository: av/harbor
- Official Documentation: Harbor Wiki
- Discord Community: Join Discord