PRD Automated Stress Testing Platform — Upload your PRD, get 10 AI users. PreUser simulates real user behavior before your product ships.
- Intelligent Document Parsing — Upload PDF or DOCX PRDs; structured information is extracted automatically
- Knowledge Graph Construction — LLM-powered multi-step chain decomposes your PRD into feature nodes and dependency graphs
- Virtual User Generation — Automatically generates diverse, fully editable user personas
- Multi-Scenario Narrative Simulation — Each virtual user independently simulates an operation path and produces an experience narrative
- Stress Test Report — Aggregates blind spots and bottlenecks into a visual report
- Deep-Dive Conversation — AI Q&A grounded in analysis results for deeper product exploration
```
┌──────────────────────────────────────────────────┐
│                 Frontend (React)                 │
│         React 18 + TypeScript + Tailwind         │
│      React Flow · Recharts · Framer Motion       │
├──────────────────────────────────────────────────┤
│                Backend (FastAPI)                 │
│          LiteLLM · SQLAlchemy · AsyncPG          │
│      5-Stage LLM Pipeline with Checkpoints       │
├───────────────────┬──────────────────────────────┤
│   PostgreSQL 16   │           Redis 7            │
└───────────────────┴──────────────────────────────┘
```
| Stage | Name | Description |
|---|---|---|
| Chain 1 | Structure Parsing | Splits the PRD into semantic blocks |
| Chain 2 / 2.5 | Relation Extraction & Graph Fusion | Extracts feature dependencies and builds the knowledge graph |
| Chain 3 | Persona Generation | Generates diverse virtual users from the graph |
| Chain 4 | Narrative Simulation | Simulates multi-scenario operation paths per user |
| Chain 5 | Report Generation | Aggregates discovered blind spots and bottlenecks |
Each stage supports checkpointing — a failed run resumes from the last successful stage.
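The resume behavior can be sketched roughly as follows. The stage identifiers and the file-based checkpoint are illustrative only; the real pipeline presumably persists its progress server-side:

```python
import json
from pathlib import Path

STAGES = [
    "structure_parsing",      # Chain 1
    "relation_extraction",    # Chain 2 / 2.5
    "persona_generation",     # Chain 3
    "narrative_simulation",   # Chain 4
    "report_generation",      # Chain 5
]

def run_pipeline(checkpoint: Path, run_stage) -> list[str]:
    """Run all stages in order, skipping any recorded as complete.

    `run_stage` does the real work for one stage; if it raises, the
    checkpoint file still records every stage that finished before it,
    so the next call resumes from the first unfinished stage.
    """
    done = json.loads(checkpoint.read_text()) if checkpoint.exists() else []
    for stage in STAGES:
        if stage in done:
            continue                      # completed in an earlier run
        run_stage(stage)                  # may raise mid-pipeline
        done.append(stage)
        checkpoint.write_text(json.dumps(done))
    return done
```

A run that fails at Chain 3 keeps Chains 1–2 checkpointed, and a rerun executes only Chains 3–5.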
- Python 3.11+
- Node.js 18+
- PostgreSQL 16
- Redis 7
- A DeepSeek API Key (or any other LiteLLM-compatible LLM API)
```bash
git clone https://github.com/your-username/PreUser.git
cd PreUser
```

```bash
docker compose up -d
```

This starts PostgreSQL and Redis. Skip this step if you already have them running locally.
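The repository's own `docker-compose.yml` is authoritative; a minimal equivalent for the two services, with illustrative values, might look like:

```yaml
# Illustrative sketch — consult the repo's docker-compose.yml for the
# actual service definitions, volumes, and health checks.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    ports:
      - "5432:5432"
  redis:
    image: redis:7
    ports:
      - "6379:6379"
```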
```bash
cp .env.example .env
```

Edit `.env` and fill in your API key and database password:

```bash
# Required: your LLM API key
DEEPSEEK_API_KEY=your_deepseek_api_key_here

# Database password
DB_PASSWORD=your_db_password_here
```

Then do the same for the backend:

```bash
cp .env.example backend/.env
```

Get a DeepSeek API Key: sign up at the DeepSeek Open Platform and create a key. Any LiteLLM-compatible model works — just change `LLM_MODEL`.
```bash
cd backend
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt
uvicorn app.main:app --reload --port 8000
```

```bash
cd frontend
npm install
npm run dev
```

Open your browser at http://localhost:5173.
- Upload PRD — Upload a PDF or DOCX product requirements document on the home page
- Wait for Analysis — The 5-stage pipeline runs automatically with real-time progress updates
- Explore the Graph — Browse the structured knowledge graph of your PRD
- Edit Personas — View and adjust the generated user personas
- Read Narratives — Browse each virtual user's simulated operation scenarios
- Review the Report — Read the stress test report to discover blind spots and bottlenecks
- Deep Dive — Ask follow-up questions via AI conversation
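One plausible way a deep-dive chat stays grounded in the analysis is to inline the report into the system prompt. The function name and prompt wording below are illustrative, not PreUser's actual implementation:

```python
def build_deep_dive_messages(report_summary: str,
                             history: list[dict],
                             question: str) -> list[dict]:
    """Assemble a chat request whose system prompt carries the report."""
    system = (
        "You are a product analyst. Answer strictly based on the "
        "stress test report below.\n\n=== Report ===\n" + report_summary
    )
    return (
        [{"role": "system", "content": system}]
        + history                                   # prior Q&A turns
        + [{"role": "user", "content": question}]
    )
```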
```
PreUser/
├── frontend/                 # React frontend
│   └── src/
│       ├── pages/            # Page components
│       ├── components/       # Shared components
│       ├── store/            # Zustand state management
│       ├── api/              # API request wrappers
│       └── hooks/            # Custom hooks
├── backend/                  # FastAPI backend
│   └── app/
│       ├── api/              # Routes (upload, analysis, conversation, ws)
│       ├── llm/              # LLM call wrappers
│       ├── models/           # Database models & Pydantic schemas
│       ├── prompts/          # LLM prompt templates (Chain 1–5)
│       ├── services/         # Core business logic (pipeline, parser, etc.)
│       └── config.py         # Configuration management
├── docker-compose.yml        # PostgreSQL + Redis
├── .env.example              # Environment variable template
└── README.md
```
| Variable | Description | Default |
|---|---|---|
| `DEEPSEEK_API_KEY` | DeepSeek API key | (required) |
| `ANTHROPIC_API_KEY` | Anthropic API key (optional) | — |
| `LLM_MODEL` | LLM model identifier | `deepseek/deepseek-chat` |
| `LLM_TEMPERATURE` | Generation temperature | `0.7` |
| `DB_HOST` | PostgreSQL host | `localhost` |
| `DB_PORT` | PostgreSQL port | `5432` |
| `DB_NAME` | Database name | `vul` |
| `DB_USER` | Database username | `postgres` |
| `DB_PASSWORD` | Database password | (required) |
| `REDIS_HOST` | Redis host | `localhost` |
| `REDIS_PORT` | Redis port | `6379` |
| `MAX_UPLOAD_SIZE_MB` | Max upload file size (MB) | `10` |
| `MAX_CONCURRENT_ANALYSES` | Max concurrent analyses | `3` |
PreUser uses LiteLLM as the LLM gateway, supporting 100+ models. Just update `.env`:

```bash
# OpenAI
LLM_MODEL=gpt-4o
OPENAI_API_KEY=sk-xxx

# Anthropic Claude
LLM_MODEL=claude-sonnet-4-20250514
ANTHROPIC_API_KEY=sk-ant-xxx

# See the LiteLLM docs for other providers
```

MIT