A C# benchmarking tool that compares data serialization formats (JSON vs. TOON) when interacting with OpenAI's language models.

## Overview

ToonBenchmark evaluates the efficiency of different data serialization formats by measuring token usage and response times when sending employee data to an OpenAI GPT model for analysis. The tool compares:
- JSON: traditional JSON serialization
- TOON: the ToonSharp serialization format
## Features

- 📊 Benchmark JSON vs. TOON serialization formats
- 🤖 Integration with OpenAI's GPT models
- 📈 Detailed performance metrics (tokens, duration)
- 🎨 Rich console output using Spectre.Console
- 📝 Sample dataset with 200 employee records
## Prerequisites

- .NET 9.0 SDK
- An OpenAI API key
## Getting Started

1. Clone the repository.
2. Restore dependencies:

   ```shell
   dotnet restore
   ```

3. Run the application:

   ```shell
   dotnet run
   ```

4. Enter your OpenAI API key when prompted.

The tool will:
- Load employee data from `data.json`
- Send the data in JSON format to OpenAI
- Send the same data in TOON format to OpenAI
- Display comparison metrics
## Data Format

The `data.json` file contains employee records with the following structure:

```json
{
  "id": 1,
  "name": "Alice",
  "department": "Engineering",
  "salary": 120000
}
```

## Metrics

The benchmark measures:
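For illustration, a record matching that structure can be loaded with `System.Text.Json`. This is a hedged sketch, not the tool's actual code: the `Employee` type and `DataLoader` helper are hypothetical names, and it assumes `data.json` holds a top-level JSON array of objects shaped like the sample above.

```csharp
using System.Collections.Generic;
using System.IO;
using System.Text.Json;

// Hypothetical model for one entry in data.json (field names as in the sample).
public record Employee(int Id, string Name, string Department, decimal Salary);

public static class DataLoader
{
    private static readonly JsonSerializerOptions Options = new()
    {
        PropertyNameCaseInsensitive = true // maps "id" in JSON to Id in C#
    };

    // Assumes data.json contains a top-level JSON array of employee objects.
    public static List<Employee> Load(string path = "data.json") =>
        JsonSerializer.Deserialize<List<Employee>>(File.ReadAllText(path), Options)
        ?? new List<Employee>();
}
```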
- Prompt Tokens: Number of input tokens sent
- Completion Tokens: Number of tokens in the response
- Duration: Time taken for the API call
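A minimal sketch of how those three numbers can be captured per request, using a `Stopwatch` for the duration and the completion's `Usage` for the token counts. The helper name, prompt text, and client setup are assumptions; the `Usage` property names are from the OpenAI .NET 2.x chat surface that Azure.AI.OpenAI 2.1.0 builds on, so verify them against the installed package.

```csharp
using System;
using System.Diagnostics;
using OpenAI.Chat;

public static class BenchmarkRunner
{
    // Hypothetical helper: times one chat call and returns the three metrics.
    // `client` is an already-configured ChatClient for a GPT model.
    public static (int PromptTokens, int CompletionTokens, TimeSpan Duration)
        MeasureCall(ChatClient client, string serializedPayload)
    {
        var stopwatch = Stopwatch.StartNew();
        ChatCompletion completion = client.CompleteChat(
            new UserChatMessage($"Analyze this employee data:\n{serializedPayload}"));
        stopwatch.Stop();

        return (completion.Usage.InputTokenCount,   // Prompt Tokens
                completion.Usage.OutputTokenCount,  // Completion Tokens
                stopwatch.Elapsed);                 // Duration
    }
}
```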
## Dependencies

- Azure.AI.OpenAI (2.1.0)
- Spectre.Console (0.53.0)
- ToonSharp (1.0.0)
## Notes

This project is intended for benchmarking and educational purposes.

## Contributing

Feel free to open issues or submit pull requests for improvements.