
[Autoloop: perf-comparison] #243

@github-actions

Description

🤖 This PR is maintained by Autoloop. Each accepted iteration adds a commit to this branch.

Goal

Systematically benchmark every tsb function against its pandas equivalent. Each iteration adds one or more new benchmark pairs (TypeScript + Python), and the results are published to the playground benchmarks page.

Program Issue: #221
Best Metric: 636 benchmarked function pairs

Latest Accepted Iteration (298)

Added benchmark pairs for:

  • assign — DataFrame.assign() with callable column derivations
  • pipe_apply — pipe() + seriesApply() + dataFrameApplyMap()
  • to_from_dict — toDictOriented() / fromDictOriented() round-trip
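
For context, the pandas half of each new pair exercises calls like the following (a minimal sketch with toy data; the tsb counterparts named above, such as seriesApply() and toDictOriented(), live on the TypeScript side and are not shown here):

```python
import pandas as pd

df = pd.DataFrame({"a": [1.0, 2.0, 3.0], "b": [0.5, 1.5, 2.5]})

# assign: derive new columns from callables; `d` may reference `c`
# because keyword arguments are evaluated left to right.
out = df.assign(c=lambda d: d["a"] + d["b"], d=lambda d: d["c"] * 2)

# pipe + apply: chain a whole-frame transform, then reduce column-wise
col_sums = df.pipe(lambda d: d * 10).apply(lambda col: col.sum())

# element-wise mapping via Series.map (portable across pandas versions)
bumped = df.apply(lambda col: col.map(lambda x: x + 1))

# to_dict / from_dict round-trip with an explicit orientation
as_dict = df.to_dict(orient="list")
restored = pd.DataFrame.from_dict(as_dict, orient="columns")
assert restored.equals(df)
```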

Metric: 636 (previous best: 635, delta: +1)
Run: 25129063173

Generated by Autoloop · 2.8M


Note

This was originally intended as a pull request, but the git push operation failed.

Workflow Run: View run details and download patch artifact

The patch file is available in the agent artifact in the workflow run linked above.

To create a pull request with the changes:

# Download the artifact from the workflow run
gh run download 25129063173 -n agent -D /tmp/agent-25129063173

# Create a new branch
git checkout -b autoloop/perf-comparison

# Apply the patch (--3way handles cross-repo patches where files may already exist)
git am --3way /tmp/agent-25129063173/aw-autoloop-perf-comparison.patch

# Push the branch to origin
git push origin autoloop/perf-comparison

# Create the pull request
gh pr create --title '[Autoloop: perf-comparison]' --base main --head autoloop/perf-comparison --repo githubnext/tsessebe
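
The recovery steps above can be dry-run first: git apply --check reports whether a patch would apply without touching the working tree. A self-contained sketch in a throwaway repo (the repo, paths, and patch below are stand-ins, not the Autoloop artifact):

```shell
# Create a throwaway repo with one commit, export it as a patch,
# rewind, then verify and apply the patch -- mirroring the git am flow above.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git -c user.email=bot@example.com -c user.name=bot commit -q --allow-empty -m "init"
printf 'hello\n' > file.txt
git add file.txt
git -c user.email=bot@example.com -c user.name=bot commit -q -m "add file"
git format-patch -q -1 -o ../patches
git reset -q --hard HEAD~1

# Dry run: exits non-zero if the patch would not apply
git apply --check ../patches/0001-add-file.patch && echo "patch applies cleanly"

# Apply for real, preserving authorship, with 3-way fallback
git -c user.email=bot@example.com -c user.name=bot am -q --3way ../patches/0001-add-file.patch
test -f file.txt && echo "file restored"
```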
Patch preview (253 of 253 lines):
From db58c0242605983d1007d77d384d4f989a737d65 Mon Sep 17 00:00:00 2001
From: "github-actions[bot]" <github-actions[bot]@users.noreply.github.com>
Date: Wed, 29 Apr 2026 19:25:53 +0000
Subject: [PATCH] [Autoloop: perf-comparison] Iteration 298: Add assign,
 pipe_apply, to_from_dict benchmarks

Run: https://github.com/githubnext/tsessebe/actions/runs/25129063173

Co-authored-by: Copilot <[email protected]>
---
 benchmarks/pandas/bench_assign.py       | 27 ++++++++++++++++++
 benchmarks/pandas/bench_pipe_apply.py   | 29 +++++++++++++++++++
 benchmarks/pandas/bench_to_from_dict.py | 26 +++++++++++++++++
 benchmarks/tsb/bench_assign.ts          | 37 +++++++++++++++++++++++++
 benchmarks/tsb/bench_pipe_apply.ts      | 37 +++++++++++++++++++++++++
 benchmarks/tsb/bench_to_from_dict.ts    | 33 ++++++++++++++++++++++
 6 files changed, 189 insertions(+)
 create mode 100644 benchmarks/pandas/bench_assign.py
 create mode 100644 benchmarks/pandas/bench_pipe_apply.py
 create mode 100644 benchmarks/pandas/bench_to_from_dict.py
 create mode 100644 benchmarks/tsb/bench_assign.ts
 create mode 100644 benchmarks/tsb/bench_pipe_apply.ts
 create mode 100644 benchmarks/tsb/bench_to_from_dict.ts

diff --git a/benchmarks/pandas/bench_assign.py b/benchmarks/pandas/bench_assign.py
new file mode 100644
index 0000000..82e4489
--- /dev/null
+++ b/benchmarks/pandas/bench_assign.py
@@ -0,0 +1,27 @@
+"""Benchmark: DataFrame.assign — add new columns derived from existing ones"""
+import json, time
+import numpy as np
+import pandas as pd
+
+ROWS = 100_000
+WARMUP = 3
+ITERATIONS = 10
+
+a = np.arange(ROWS, dtype=float) * 0.1
+b = np.arange(ROWS, dtype=float) * 0.2
+df = pd.DataFrame({"a": a, "b": b})
+
+for _ in range(WARMUP):
+    df.assign(c=lambda d: d["a"] + d["b"], d=lambda d: d["c"] * 2)
+
+start = time.perf_counter()
+for _ in range(ITERATIONS):
+    df.assign(c=lambda d: d["a"] + d["b"], d=lambda d: d["c"] * 2)
+total = (time.perf_counter() - start) * 1000
+
+print(j
... (truncated)
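
The harness pattern in the truncated file above (warm up, time a fixed number of iterations, emit JSON) can be sketched standalone. The output field names below are assumptions, since the real print call is cut off in the preview:

```python
import json
import time

import numpy as np
import pandas as pd

ROWS = 100_000
WARMUP = 3
ITERATIONS = 10

df = pd.DataFrame({
    "a": np.arange(ROWS, dtype=float) * 0.1,
    "b": np.arange(ROWS, dtype=float) * 0.2,
})

def bench():
    # The operation under test, matching bench_assign.py above
    df.assign(c=lambda d: d["a"] + d["b"], d=lambda d: d["c"] * 2)

# Warmup iterations are run but not measured, to stabilize timings
for _ in range(WARMUP):
    bench()

start = time.perf_counter()
for _ in range(ITERATIONS):
    bench()
total_ms = (time.perf_counter() - start) * 1000

# Field names are illustrative; the real schema is truncated in the patch preview.
print(json.dumps({"name": "assign", "iterations": ITERATIONS, "total_ms": round(total_ms, 3)}))
```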
