.claude/commands/sweep-accuracy.md (33 additions, 0 deletions)
@@ -11,6 +11,19 @@ Optional arguments: $ARGUMENTS

---

## Step 0 -- Detect CUDA availability

Before discovering modules, probe the host for CUDA:

```bash
python -c "from numba import cuda; print(cuda.is_available())" 2>/dev/null
```

Capture the result as `CUDA_AVAILABLE` (`true` if the command prints `True`,
`false` otherwise — including import failure). Interpolate this flag into
each subagent prompt below so the agent knows whether to run cupy and
dask+cupy paths or limit itself to static review of the GPU code.
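
For clarity, the same decision spelled out in Python (a sketch of the intended semantics, not an additional required step): any failure, including a missing numba install or a driver error, counts as unavailable.

```python
# Equivalent logic to the probe above: any exception (numba not
# installed, CUDA driver/initialization error) means CUDA is treated
# as unavailable.
try:
    from numba import cuda
    CUDA_AVAILABLE = bool(cuda.is_available())
except Exception:
    CUDA_AVAILABLE = False
print("true" if CUDA_AVAILABLE else "false")
```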

## Step 1 -- Gather module metadata via git

Enumerate candidate modules:
@@ -127,6 +140,26 @@ Read these files: {module_files}
Also read xrspatial/utils.py to understand _validate_raster() behavior and
xrspatial/tests/general_checks.py for the cross-backend comparison helpers.

CUDA available on this host: {cuda_available}

If CUDA_AVAILABLE is true:
- When auditing the cupy / dask+cupy backends, actually run the matching
tests in xrspatial/tests/ against those backends. The cross-backend
helpers in general_checks.py already dispatch to all four backends —
invoke them directly so cupy and dask+cupy paths execute, not just
numpy.
- For CUDA-specific findings (kernel correctness, NaN propagation in
device code, backend divergence), validate by running the kernel on
a small input rather than reasoning from source alone; see the sketch
after this list.
- A /rockout fix that touches CUDA code must include a cupy run in its
verification step before opening the PR.
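
For example, a device-side validation might look like this minimal sketch; the kernel is a stand-in, not one of the module's kernels.

```python
# Minimal sketch: run a tiny numba-cuda kernel on a small cupy input
# and inspect the result, instead of reasoning from source alone.
import cupy
import numpy as np
from numba import cuda

@cuda.jit
def add_one(arr, out):
    i, j = cuda.grid(2)
    if i < arr.shape[0] and j < arr.shape[1]:
        out[i, j] = arr[i, j] + 1.0

data = cupy.asarray(np.array([[1.0, np.nan], [3.0, 4.0]], dtype=np.float32))
out = cupy.zeros_like(data)
add_one[(1, 1), (16, 16)](data, out)
print(cupy.asnumpy(out))  # NaN should propagate to position (0, 1)
```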

If CUDA_AVAILABLE is false:
- Read the cupy / dask+cupy paths and flag patterns by inspection only.
- Skip executing tests on those backends. Add the token
`cuda-unavailable` to the `notes` column of the state CSV so a future
re-run on a GPU host knows to re-validate the GPU paths.

**Your task:**

1. Read all listed files thoroughly, including the matching test file(s)
.claude/commands/sweep-api-consistency.md (29 additions, 0 deletions)
@@ -13,6 +13,19 @@ Optional arguments: $ARGUMENTS

---

## Step 0 -- Detect CUDA availability

Before discovering modules, probe the host for CUDA:

```bash
python -c "from numba import cuda; print(cuda.is_available())" 2>/dev/null
```

Capture the result as `CUDA_AVAILABLE` (`true` if the command prints `True`,
`false` otherwise — including import failure). Interpolate this flag into
each subagent prompt below so the agent knows whether to run cupy and
dask+cupy paths or limit itself to static review of the GPU code.

## Step 1 -- Gather module metadata via git

Enumerate candidate modules:
@@ -106,6 +119,22 @@ For comparison, read 2-3 sibling modules (analogous functions). Examples:
The point is to compare parameter naming and return shapes against
modules with similar function families.

CUDA available on this host: {cuda_available}

If CUDA_AVAILABLE is true:
- When checking signature parity, also import the cupy backend variants
and confirm they accept the same kwargs. Run a quick smoke test on a
cupy DataArray for each public function so signature drift between
numpy and cupy paths surfaces; a sketch follows this list.
- A /rockout fix that touches public signatures must verify both numpy
and cupy entry points before opening the PR.
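
Such a smoke test might look like the following sketch; `xrspatial.slope` is used only as an example function.

```python
# Illustrative smoke test: call the same public function with a numpy-
# and a cupy-backed DataArray using identical kwargs, so signature
# drift between the two paths surfaces at call time.
import cupy
import numpy as np
import xarray as xr
from xrspatial import slope  # example function only

data = np.random.default_rng(0).random((10, 10)).astype(np.float32)
coords = {"y": np.arange(10.0), "x": np.arange(10.0)}
agg_numpy = xr.DataArray(data, dims=["y", "x"], coords=coords)
agg_cupy = xr.DataArray(cupy.asarray(data), dims=["y", "x"], coords=coords)

out_numpy = slope(agg_numpy, name="slope")
out_cupy = slope(agg_cupy, name="slope")  # must accept the same kwargs
print(type(out_numpy.data), type(out_cupy.data))
```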

If CUDA_AVAILABLE is false:
- Inspect the cupy backend signatures by reading the source only.
- Add the token `cuda-unavailable` to the `notes` column of the state
CSV so a future re-run on a GPU host knows to re-validate the cupy
signatures.

**Your task:**

1. Read all listed files thoroughly. For each public function, build a
.claude/commands/sweep-metadata.md (31 additions, 0 deletions)
@@ -12,6 +12,19 @@ Optional arguments: $ARGUMENTS

---

## Step 0 -- Detect CUDA availability

Before discovering modules, probe the host for CUDA:

```bash
python -c "from numba import cuda; print(cuda.is_available())" 2>/dev/null
```

Capture the result as `CUDA_AVAILABLE` (`true` if the command prints `True`,
`false` otherwise — including import failure). Interpolate this flag into
each subagent prompt below so the agent knows whether to run cupy and
dask+cupy paths or limit itself to static review of the GPU code.

## Step 1 -- Gather module metadata via git

Enumerate candidate modules:
@@ -122,6 +135,24 @@ Also read xrspatial/utils.py to understand:

Read xrspatial/tests/general_checks.py for cross-backend test helpers.

CUDA available on this host: {cuda_available}

If CUDA_AVAILABLE is true:
- For Cat 1 (attrs), Cat 2 (coords), Cat 3 (dims), Cat 4 (dtype/nodata),
and Cat 5 (backend-inconsistent metadata), construct cupy and
dask+cupy DataArrays and run the function end-to-end. Check
attrs/coords/dims on the actual returned object — do not infer from
source. A sketch of this check follows this list.
- A /rockout fix that touches metadata-emitting code must verify all
four backends (numpy, cupy, dask+numpy, dask+cupy) before opening
the PR.
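
A sketch of that end-to-end check, using `xrspatial.slope` purely as an example function:

```python
# Illustrative sketch: build cupy- and dask+cupy-backed DataArrays, run
# the function under audit end-to-end, and inspect metadata on the
# actual returned object. The asserts encode the expectations being
# audited; a failure here is a finding, not a bug in the sketch.
import cupy
import dask.array as da
import numpy as np
import xarray as xr
from xrspatial import slope  # example function only

data = np.random.default_rng(0).random((16, 16)).astype(np.float32)
coords = {"y": np.arange(16.0), "x": np.arange(16.0)}
attrs = {"units": "meters"}

agg_cupy = xr.DataArray(cupy.asarray(data), dims=["y", "x"],
                        coords=coords, attrs=attrs)
agg_dask_cupy = xr.DataArray(da.from_array(cupy.asarray(data), chunks=(8, 8)),
                             dims=["y", "x"], coords=coords, attrs=attrs)

for agg in (agg_cupy, agg_dask_cupy):
    try:
        result = slope(agg)
    except NotImplementedError as err:
        print(f"backend not implemented: {err}")  # record as a finding
        continue
    assert result.dims == agg.dims
    assert list(result.coords) == list(agg.coords)
    assert result.attrs == agg.attrs
```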

If CUDA_AVAILABLE is false:
- Inspect the cupy / dask+cupy paths by reading the source only.
- Skip executing tests on those backends. Add the token
`cuda-unavailable` to the `notes` column of the state CSV so a
future re-run on a GPU host knows to re-validate the GPU paths.

**Your task:**

1. Read all listed files thoroughly, including the matching test file(s)
.claude/commands/sweep-performance.md (33 additions, 0 deletions)
@@ -26,6 +26,20 @@ Parse $ARGUMENTS for these flags (multiple may combine):
| `--no-fix` | Audit only; subagents do not run /rockout. Useful for re-triage without producing PRs. |
| `--high-only` | Drop modules whose state row shows zero HIGH findings from the last triage within the past 30 days. |

## Step 0.5 -- Detect CUDA availability

After parsing arguments and before discovering modules, probe the host
for CUDA:

```bash
python -c "from numba import cuda; print(cuda.is_available())" 2>/dev/null
```

Capture the result as `CUDA_AVAILABLE` (`true` if the command prints `True`,
`false` otherwise — including import failure). Interpolate this flag into
each subagent prompt below so the agent knows whether to run cupy and
dask+cupy paths or limit itself to static review of the GPU code.

## Step 1 -- Discover modules in scope

Enumerate all candidate modules. For each, record its file path(s):
@@ -138,6 +152,25 @@ Read these files: {module_files}
Also read xrspatial/utils.py for _validate_raster() behavior, and
xrspatial/tests/general_checks.py for cross-backend test helpers.

CUDA available on this host: {cuda_available}

If CUDA_AVAILABLE is true:
- For Cat 3 (GPU transfer) and Cat 6 (OOM verdict), validate findings
by actually running the cupy and dask+cupy paths. Construct a small
cupy-backed DataArray and execute the function end-to-end. Time the
result and confirm there is no host-device round trip.
- For register-pressure findings, compile the kernel with
`numba.cuda.compile_ptx` or run it on a small input and report the
observed register count rather than guessing from source; a sketch
follows this list.
- A /rockout fix that touches CUDA code must include a cupy run in its
verification step before opening the PR.
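
For instance, the observed register count can be read directly off an eagerly compiled kernel; the kernel below is a stand-in, not one of the module's kernels.

```python
# Illustrative: compile a small kernel eagerly and report the observed
# register count rather than guessing from source. numba.cuda.compile_ptx
# is an alternative when only the generated PTX is of interest.
from numba import cuda

@cuda.jit("void(float32[:,:], float32[:,:])")
def smooth(arr, out):
    i, j = cuda.grid(2)
    if 0 < i < arr.shape[0] - 1 and 0 < j < arr.shape[1] - 1:
        out[i, j] = (arr[i - 1, j] + arr[i + 1, j]
                     + arr[i, j - 1] + arr[i, j + 1]) / 4.0

print(smooth.get_regs_per_thread())  # registers per thread, per signature
```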

If CUDA_AVAILABLE is false:
- Inspect the cupy / dask+cupy paths by reading the source only.
- Skip executing CUDA kernels and skip cupy benchmarking. Add the
token `cuda-unavailable` to the `notes` column of the state CSV so
a future re-run on a GPU host knows to re-validate the GPU paths.

**Your task:**

1. Read all listed files thoroughly, including the matching test file(s)
.claude/commands/sweep-security.md (34 additions, 0 deletions)
@@ -10,6 +10,19 @@ Optional arguments: $ARGUMENTS

---

## Step 0 -- Detect CUDA availability

Before discovering modules, probe the host for CUDA:

```bash
python -c "from numba import cuda; print(cuda.is_available())" 2>/dev/null
```

Capture the result as `CUDA_AVAILABLE` (`true` if the command prints `True`,
`false` otherwise — including import failure). Interpolate this flag into
each subagent prompt below so the agent knows whether to run cupy and
dask+cupy paths or limit itself to static review of the GPU code.

## Step 1 -- Gather module metadata via git and grep

Enumerate candidate modules:
@@ -134,6 +147,27 @@ Read these files: {module_files}

Also read xrspatial/utils.py to understand _validate_raster() behavior.

CUDA available on this host: {cuda_available}

If CUDA_AVAILABLE is true:
- For Cat 4 (GPU kernel bounds), validate suspected missing bounds
guards by running the kernel on adversarial input shapes (1x1, Nx1,
large prime dimensions) and confirm no out-of-bounds access. Use
`compute-sanitizer` if installed; otherwise rely on test runs that
exercise edge sizes. A sketch of such a probe follows this list.
- For Cat 1 (unbounded allocation) on cupy paths, confirm the
allocation actually executes on the GPU and observe peak memory via
`cupy.cuda.runtime.memGetInfo()` rather than reasoning from source.
- A /rockout fix that touches CUDA code must include a cupy run in its
verification step before opening the PR.
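
A probe of that kind might look like this sketch; `xrspatial.slope` stands in for the module under audit, and the shapes are examples.

```python
# Illustrative: run the function under audit on adversarial shapes with
# cupy-backed input and confirm it either returns a result of the same
# shape or raises a clear validation error -- never silent corruption
# from an out-of-bounds access.
import cupy
import numpy as np
import xarray as xr
from xrspatial import slope  # example function only

for shape in [(1, 1), (257, 1), (1, 509), (211, 509)]:  # incl. prime sizes
    agg = xr.DataArray(
        cupy.asarray(np.ones(shape, dtype=np.float32)), dims=["y", "x"],
        coords={"y": np.arange(shape[0]), "x": np.arange(shape[1])},
    )
    try:
        result = slope(agg)
    except Exception as err:
        print(f"{shape}: raised {type(err).__name__}: {err}")
        continue
    assert result.shape == shape
    print(f"{shape}: ok")
```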

If CUDA_AVAILABLE is false:
- Inspect the cupy / dask+cupy paths and CUDA kernels by reading the
source only.
- Skip executing CUDA kernels. Add the token `cuda-unavailable` to the
`notes` column of the state CSV so a future re-run on a GPU host
knows to re-validate the GPU paths.

**Your task:**

1. Read all listed files thoroughly.
.claude/commands/sweep-test-coverage.md (32 additions, 0 deletions)
@@ -12,6 +12,20 @@ Optional arguments: $ARGUMENTS

---

## Step 0 -- Detect CUDA availability

Before discovering modules, probe the host for CUDA:

```bash
python -c "from numba import cuda; print(cuda.is_available())" 2>/dev/null
```

Capture the result as `CUDA_AVAILABLE` (`true` if the command prints `True`,
`false` otherwise — including import failure). Interpolate this flag into
each subagent prompt below so the agent knows whether new tests can be
executed against cupy / dask+cupy backends or only added with a `pytest.skip`
guard for environments without CUDA.

## Step 1 -- Gather module metadata via git

Enumerate candidate modules:
@@ -108,6 +122,24 @@ Read these files:
- xrspatial/utils.py (ArrayTypeFunctionMapping, _validate_raster)
- xrspatial/conftest.py (shared fixtures)

CUDA available on this host: {cuda_available}

If CUDA_AVAILABLE is true:
- New cupy / dask+cupy tests must execute locally before /rockout opens
a PR. Use the cross-backend helpers in general_checks.py so the new
test exercises all four backends on a CUDA host.
- Verify the test actually fails before the fix and passes after — do
not commit a test that was never observed running on a GPU.

If CUDA_AVAILABLE is false:
- New cupy / dask+cupy tests are still added (CI runs them on a GPU
host) but must be guarded with the project's existing GPU-skip
decorator so local runs without CUDA do not error. Note that the
test was not executed locally; a sketch of the guarded-test shape
follows this list.
- Add the token `cuda-unavailable` to the `notes` column of the state
CSV so a future re-run on a GPU host knows to re-validate that the
newly added cupy tests pass.
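
The guarded-test shape might look like this sketch; the `importorskip` guard is a stand-in for the project's own GPU-skip decorator (which should also verify a usable device), and the function and assertion are examples only.

```python
# Illustrative shape of a GPU-guarded cupy test. In a real patch, use
# the project's existing GPU-skip decorator and the cross-backend
# helpers in general_checks.py instead of this ad-hoc guard.
import numpy as np
import pytest
import xarray as xr

cupy = pytest.importorskip("cupy")


def test_example_cupy_backend():
    from xrspatial import slope  # example function only

    agg = xr.DataArray(
        cupy.asarray(np.ones((8, 8), dtype=np.float32)), dims=["y", "x"],
        coords={"y": np.arange(8.0), "x": np.arange(8.0)},
    )
    result = slope(agg)
    assert isinstance(result.data, cupy.ndarray)  # result stays on the GPU
```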

**Your task:**

1. Read the module and its tests thoroughly. Build a mental matrix: