# Getting Started
## Prerequisites

- Python 3.13 (Poetry recommended)
- Optional: Docker, Ollama, or LM Studio

Set up a local LLM provider (recommended); see Local LLM providers.
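You can confirm the Python prerequisite before installing anything. This is a generic interpreter check, not an l6e-forge command:

```python
# Print the running interpreter's version and whether it satisfies
# the Python 3.13 prerequisite listed above.
import sys

version = f"{sys.version_info.major}.{sys.version_info.minor}"
print(version)
print(sys.version_info >= (3, 13))
```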
## Install

```shell
# Add core library to your project
poetry add l6e-forge

# Add CLI as a dev dependency
poetry add --group dev l6e-forge-cli
```

Using uv:

```shell
uv add l6e-forge
uv add --dev l6e-forge-cli
```

Using pip:

```shell
pip install l6e-forge l6e-forge-cli
```
## Create a Workspace

```shell
poetry run forge init ./my-workspace
cd my-workspace
```
## Create an Agent

```shell
# From the assistant template
poetry run forge create agent my-assistant --template=assistant

# Or target a specific provider and model
poetry run forge create agent my-ollama --provider=ollama --model llama3.2:3b
```
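The `forge create agent` command scaffolds an `agent.py` for the new agent. The real l6e-forge interface is not shown in this guide, so the following is only an illustrative, self-contained sketch of the general shape such a file takes; the class and method names here are hypothetical, not the actual API:

```python
# Hypothetical agent sketch; the real l6e-forge scaffold defines its
# own base class and hooks -- this only illustrates the overall shape.
from dataclasses import dataclass


@dataclass
class AgentConfig:
    # Mirrors the provider/model flags passed to `forge create agent`
    provider: str = "ollama"
    model: str = "llama3.2:3b"


class MyOllamaAgent:
    def __init__(self, config: AgentConfig) -> None:
        self.config = config

    def respond(self, message: str) -> str:
        # A real agent would call the configured provider here;
        # this placeholder just echoes the routing decision.
        return f"[{self.config.provider}/{self.config.model}] {message}"


agent = MyOllamaAgent(AgentConfig())
print(agent.respond("hello"))  # → [ollama/llama3.2:3b] hello
```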
## Bootstrap Models

```shell
poetry run forge models bootstrap agents/my-ollama --provider-order ollama,lmstudio --interactive
```

See Auto model selection & bootstrapping for details on recommendations, flags, and how bootstrapping updates your agent.
## Run the Stack

```shell
poetry run forge up
```

- API: http://localhost:8000
- Monitor: http://localhost:8321
- UI: http://localhost:8173
## Chat

```shell
poetry run forge chat my-ollama -w ./my-workspace
```
## Workspace Structure

```text
my-workspace/
├── forge.toml
├── agents/
│   └── my-ollama/
│       ├── agent.py
│       ├── config.toml
│       └── tools.py (optional)
└── .forge/
    ├── logs/
    └── data/
```
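The per-agent `config.toml` typically records the provider and model chosen at creation or bootstrap time. The keys below are an assumption for illustration only; inspect the generated file for the actual schema:

```toml
# Hypothetical agent config -- key names are illustrative,
# not the real l6e-forge schema.
name = "my-ollama"

[model]
provider = "ollama"
model = "llama3.2:3b"
```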
## Next steps
- Customize your agent and add few‑shot examples: Customize your agent