# Contributing to Kodezi Chronos Research

Thank you for your interest in contributing to the Kodezi Chronos research repository! While the Chronos model itself is proprietary, we welcome contributions to improve our benchmarks, evaluation frameworks, and research documentation.
## Table of Contents

- Code of Conduct
- What Can I Contribute?
- Getting Started
- Contribution Process
- Style Guidelines
- Community
## Code of Conduct

This project adheres to the Kodezi Code of Conduct. By participating, you are expected to uphold this code. Please report unacceptable behavior to [email protected].
## What Can I Contribute?

We welcome:

- Benchmark Improvements: Enhance evaluation protocols or propose new metrics
- Test Cases: Submit real-world debugging scenarios for benchmarks
- Documentation: Improve clarity, fix errors, or add examples
- Visualizations: Create better ways to present results
- Analysis Tools: Build tools to analyze benchmark results
- Research Extensions: Propose extensions to our methodology
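As a minimal sketch of what an analysis tool in this spirit might look like, the snippet below aggregates per-category success rates from benchmark result records. The record schema and field names (`category`, `success`) are assumptions for illustration, not the repository's actual output format:

```python
from collections import defaultdict

def success_by_category(results):
    """Aggregate per-category debugging success rates (0-100).

    `results` is a list of hypothetical records like
    {"category": "api-misuse", "success": True}.
    """
    totals = defaultdict(lambda: [0, 0])  # category -> [successes, attempts]
    for r in results:
        totals[r["category"]][0] += int(r["success"])
        totals[r["category"]][1] += 1
    return {cat: 100.0 * s / n for cat, (s, n) in totals.items()}

sample = [
    {"category": "api-misuse", "success": True},
    {"category": "api-misuse", "success": False},
    {"category": "memory", "success": True},
]
print(success_by_category(sample))  # {'api-misuse': 50.0, 'memory': 100.0}
```

A real contribution would read results from the benchmark's actual output files rather than an in-memory list.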
We cannot accept:

- Requests for model access (available only through Kodezi OS)
- Implementation details of proprietary algorithms
- Attempts to reverse-engineer the model
- Confidential or proprietary code examples
## Getting Started

1. **Fork the Repository**

   ```bash
   git clone https://github.com/kodezi/chronos-research.git
   cd chronos-research
   ```

2. **Set Up Environment**

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   pip install -r requirements.txt
   ```

3. **Explore the Repository**

   - Read the README.md
   - Review existing benchmarks
   - Check open issues
## Contribution Process

Before starting:
- Search existing issues and PRs
- Review the project roadmap
- Join relevant discussions
For significant changes:
- Open an issue describing your proposal
- Wait for maintainer feedback
- Proceed once approved
Create a branch:

```bash
git checkout -b feature/your-feature-name
# or
git checkout -b fix/issue-description
```

Follow our style guidelines and ensure:

- Code is well-documented
- Tests pass (if applicable)
- Documentation is updated
To submit your changes:

- Push your branch to your fork
- Create a PR with a clear description
- Link related issues
- Wait for review
Use this pull request template:

```markdown
## Description
Brief description of changes

## Type of Change
- [ ] Bug fix
- [ ] New feature
- [ ] Documentation update
- [ ] Performance improvement

## Testing
- [ ] Tests pass locally
- [ ] Added new tests (if applicable)

## Checklist
- [ ] Code follows style guidelines
- [ ] Self-review completed
- [ ] Documentation updated
```

## Style Guidelines

### Python Code

- Follow PEP 8
- Use type hints
- Write docstrings for functions
- Keep functions focused and small
Example:

```python
from typing import List

def calculate_debug_success_rate(
    attempts: List[DebugAttempt],
    criteria: EvaluationCriteria,
) -> float:
    """Calculate the success rate of debugging attempts.

    Args:
        attempts: List of debugging attempts to evaluate
        criteria: Criteria for determining success

    Returns:
        Success rate as a percentage (0-100)
    """
    successful = sum(1 for a in attempts if criteria.is_successful(a))
    return (successful / len(attempts)) * 100 if attempts else 0.0
```

### Documentation

- Use clear, concise language
- Include examples where helpful
- Keep formatting consistent
- Update table of contents
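Per the "include examples where helpful" guideline, the `calculate_debug_success_rate` function shown earlier can be exercised with minimal stand-in types. The `DebugAttempt` and `EvaluationCriteria` definitions below are assumptions for this sketch only; the repository's real types are not shown here:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DebugAttempt:
    """Stand-in for the real attempt record (shape assumed)."""
    fixed: bool

class EvaluationCriteria:
    """Stand-in criteria: an attempt succeeds if it produced a fix."""
    def is_successful(self, attempt: DebugAttempt) -> bool:
        return attempt.fixed

def calculate_debug_success_rate(
    attempts: List[DebugAttempt],
    criteria: EvaluationCriteria,
) -> float:
    """Success rate as a percentage (0-100)."""
    successful = sum(1 for a in attempts if criteria.is_successful(a))
    return (successful / len(attempts)) * 100 if attempts else 0.0

attempts = [DebugAttempt(True), DebugAttempt(False),
            DebugAttempt(True), DebugAttempt(True)]
print(calculate_debug_success_rate(attempts, EvaluationCriteria()))  # 75.0
```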
### Commit Messages

Format:

```
type(scope): brief description

Longer explanation if needed. Wrap at 72 characters.

Fixes #123
```

Types: `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`
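The subject-line convention above can be sanity-checked mechanically. This helper is a hypothetical sketch, not a tool shipped with the repository:

```python
import re

COMMIT_TYPES = ("feat", "fix", "docs", "style", "refactor", "test", "chore")

# Matches "type(scope): brief description"; the "(scope)" part is optional.
SUBJECT_RE = re.compile(
    r"^(?:%s)(?:\([^)]+\))?: \S.*$" % "|".join(COMMIT_TYPES)
)

def is_valid_subject(line: str) -> bool:
    """Check a commit subject line against the convention and 72-char wrap."""
    return bool(SUBJECT_RE.match(line)) and len(line) <= 72

print(is_valid_subject("docs(benchmarks): clarify metric definitions"))  # True
print(is_valid_subject("updated some files"))                            # False
```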
## Review Process

1. **Initial Review (1-3 days)**
   - Maintainers check alignment with project goals
   - Basic quality assessment

2. **Detailed Review (3-7 days)**
   - Code quality and style
   - Documentation completeness
   - Test coverage

3. **Iteration**
   - Address feedback
   - Update PR as needed

4. **Merge**
   - Squash and merge when approved
   - Delete branch after merge
Reviewers evaluate:

- Quality: Is the code/documentation high quality?
- Alignment: Does it fit project goals?
- Completeness: Is it ready to merge?
- Impact: Does it improve the project?
## Community

- Issues: For bugs and feature requests
- Discussions: For questions and ideas
- Email: [email protected] for research queries
Contributors are recognized in:
- Release notes
- Annual research report
- Conference presentations
When contributing:
- Respect intellectual property
- Maintain academic integrity
- Cite sources appropriately
- Follow responsible AI practices
By contributing, you agree that your contributions will be licensed under the same MIT License that covers the project.
If you have questions about contributing:
- Check the FAQ
- Search existing issues
- Ask in GitHub Discussions
- Email [email protected]
Thank you for helping advance debugging AI research!