TeleZero is a lightweight (~226 KB) TypeScript autonomous agent that runs via Telegram and a local dashboard without Docker. It uses an LLM reasoning loop with real tools, stores conversations and jobs in SQLite, and supports cron-based background tasks.
TeleZero is in beta and not yet recommended for production use. Use it at your own risk: the agent can access files in your folders/repositories based on the tool capabilities you grant it.
Author: ronaldaug
Project page: ai.jsx.pm
I wanted something like OpenClaw, but simpler: small, light, works with free models, and easy to read the code.
- OpenClaw is huge (about 400 MB). It can be hard to see what it is doing under the hood.
- TeleZero is about 226 KB. You can open the files and follow how it works.
- A lot of similar tools use Docker. Docker uses RAM even when it is just sitting there.
- TeleZero runs without Docker. You can hook it up to Telegram with webhooks and avoid keeping big containers running all day.
- Run the interactive setup:

  ```bash
  npm run setup
  ```

  Setup will:

  - install dependencies (`npm install`)
  - ask for owner/agent names and patch workspace identity files
  - create/update `.env` (including `TELEGRAM_BOT_TOKEN`, `TELEGRAM_CHAT_ID`, and optional model keys)
  - run `npm run build`

- Start the development runtime (main app + dashboard):

  ```bash
  npm run telezero
  ```

- Open the dashboard: `http://localhost:1337`
- Telegram bot powered by `gramio` with persisted chat history
- Agent reasoning loop with tool execution: `read_file`, `write_file`, `list_directory`, `run_command`
- SQLite-backed storage for sessions, messages, jobs, and dashboard users
- Dashboard APIs for:
  - auth/login setup
  - live status, logs, and recent agent/job activity
  - provider + model-id switching
  - Telegram polling/webhook controls
  - max agent-loop-steps controls
  - web chat with multi-message history
- Cron scheduler + agent-driven cron task runner
TeleZero loads skills from `src/workspace/skills/<skill-name>/`.
Each skill directory can use:

- `SOUL.md` (preferred)
- `SKILL.md` (legacy fallback)
- Create a skill directory:

  ```bash
  mkdir -p src/workspace/skills/my-skill
  ```

- Add `SOUL.md` with usage and examples:

  ```markdown
  # My Skill

  ## When to use
  - User asks to do X

  ## Inputs
  - input: string
  ```

- If running with `npm run telezero`, changes under `src/workspace/skills` sync automatically in the dev runtime. Otherwise run:

  ```bash
  npm run build
  ```

- Test via Telegram or the dashboard web chat.
Core behavior comes from markdown files in `src/workspace/`:

- `SOUL.md` - reasoning principles
- `IDENTITY.md` - assistant persona/voice
- `USER.md` - owner context and preferences
- `TOOLS.md` - capability and skill-directory instructions
- `HEARTBEAT.md` - runtime state/pulse guidance
- `AGENT_STEP.md` - JSON step protocol (`thought`, `action`, `input`, `done`)

`AGENT_STEP.md` can be overridden with `TELEZERO_AGENT_STEP_PATH` (an absolute path or a path relative to the project root).
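An illustrative step payload following that protocol (the field names come from `AGENT_STEP.md`; the values here are hypothetical):

```json
{
  "thought": "The user wants the README summarized; read it first.",
  "action": "read_file",
  "input": "README.md",
  "done": false
}
```

A final step would instead carry the answer in `thought` and set `done: true`.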
- `npm run setup` - interactive initialization (workspace + env + install + build)
- `npm run telezero` - dev launcher (build, workspace watch sync, bot, dashboard)
- `npm run build` - TypeScript build + copy workspace/database schema into `dist/`
- `npm start` - run production build from `dist/index.js`
- `npm run db:migrate` - apply SQLite schema
- `npm run agent -- "your task"` - run one task from the CLI
- `npm run dashboard` - run dashboard server only
- `npm run dashboard:dev` - run Vite dev server
- `npm run dashboard:build` - build dashboard frontend assets
- `npm test` - run Jest tests
From `.env.example`:

- `TELEGRAM_BOT_TOKEN` - Telegram bot token from @BotFather
- `GEMINI_API_KEY` - Gemini key (if using a Gemini provider/model)
- `DATABASE_URL` - SQLite file path (default `data/telezero.db`)
- `TELEZERO_AGENT_STEP_PATH` - optional custom prompt path for `AGENT_STEP.md`

Setup also prompts for:

- `TELEGRAM_CHAT_ID`
- `OPENROUTER_API_KEY` (optional)
- `GEMINI_API_KEY` (optional)

Depending on entries in `.telezero-model.json`, you may also need:

- `GLM_API_KEY`
- `TOKEN_MAX_API_KEY`
- `OPENROUTER_API_KEY`
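A minimal `.env` sketch with placeholder values (replace each with your own credentials; only the variables you actually use are required):

```
TELEGRAM_BOT_TOKEN=123456789:replace-with-botfather-token
TELEGRAM_CHAT_ID=replace-with-your-chat-id
GEMINI_API_KEY=replace-with-gemini-key
DATABASE_URL=data/telezero.db
```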
Runtime/provider settings live in `.telezero-model.json`.
Supported provider keys: `qwen`, `ollama`, `gemini`, `openai`, `openrouter`.
Each provider can define model entries with:

- `id`
- `name`
- `baseUrl` (optional)
- `envKey` (optional)
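A hypothetical fragment illustrating those per-model fields (the exact top-level shape of `.telezero-model.json` may differ; the model id and URL below are examples, not defaults):

```json
{
  "ollama": {
    "models": [
      {
        "id": "llama3.1:8b",
        "name": "Llama 3.1 8B (local)",
        "baseUrl": "http://localhost:11434"
      }
    ]
  }
}
```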
- Input arrives from Telegram, web chat, CLI, or cron task
- Conversation/history is loaded when available
- Agent composes context from workspace files + registered skills
- LLM returns a JSON next step
- Selected tool executes
- Loop repeats until `done: true`
- Final answer is returned to the caller (Telegram, web chat, CLI, or logs)
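The loop above can be sketched in a few lines of TypeScript. This is a minimal illustration, not TeleZero's actual implementation: the `Step` shape mirrors the `AGENT_STEP.md` protocol, and `runAgent`, `fakeLlm`, and the tool map are hypothetical names.

```typescript
// One JSON step from the LLM, per the AGENT_STEP.md protocol.
type Step = { thought: string; action?: string; input?: string; done: boolean };
type Tool = (input: string) => string;

function runAgent(
  llm: (history: string[]) => Step,
  tools: Record<string, Tool>,
  task: string,
  maxSteps = 5
): string {
  const history: string[] = [task];
  for (let i = 0; i < maxSteps; i++) {
    const step = llm(history);          // LLM proposes the next step as JSON
    if (step.done) return step.thought; // done: true -> final answer
    const tool = tools[step.action ?? ""];
    const observation = tool ? tool(step.input ?? "") : "unknown tool";
    history.push(observation);          // feed tool output back into context
  }
  return "max agent-loop steps reached";
}

// Scripted stand-in for an LLM: call a tool once, then finish.
const fakeLlm = (history: string[]): Step =>
  history.length === 1
    ? { thought: "need the file", action: "read_file", input: "notes.txt", done: false }
    : { thought: `answer based on: ${history[1]}`, done: true };

const answer = runAgent(fakeLlm, { read_file: () => "file contents" }, "summarize notes.txt");
console.log(answer);
```

The "max agent-loop-steps" dashboard control corresponds to the `maxSteps` bound here: it caps how many tool-execution rounds the agent may take before giving up.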
```
telezero/
├── src/
│   ├── index.ts          # Main app entry (bot + scheduler)
│   ├── agent/            # Task orchestration + reasoning loop
│   ├── llm/              # Provider/model routing + prompt assembly
│   ├── services/         # Qwen/Ollama/Gemini/OpenAI-compatible handlers
│   ├── telegram/         # Telegram bot + middleware
│   ├── dashboard/        # Express dashboard/API server
│   ├── database/         # SQLite connection + schema
│   ├── cron/             # Scheduler bootstrapping
│   ├── tasks/            # Agent-driven cron tasks
│   ├── tools/            # Tool implementations (files/shell)
│   └── workspace/        # Agent context + skills
├── scripts/              # setup/dev/dashboard/migrate/task scripts
├── public/               # Served dashboard assets
├── docs/                 # Extended documentation
├── .telezero-model.json  # Provider/model config
└── .env.example          # Environment template
```
Extended docs in `docs/`:

- `docs/getting-started.md`
- `docs/architecture.md`
- `docs/agent-system.md`
- `docs/skills-system.md`
- `docs/database.md`
- `docs/telegram-bot.md`
- `docs/llm-integration.md`
- `docs/dashboard.md`
- `docs/development.md`
- `docs/deployment.md`
MIT
