refac(piecewise): introduce Slopes class, remove breakpoints(slopes=) mode#673
Open
Conversation
Introduces ``linopy.Slopes`` — a frozen dataclass that carries
per-piece slopes plus an initial y-value, deferred until an x grid is
known. Used as the second element of a tuple in
``add_piecewise_formulation``, where another tuple in the same call
provides the x grid::

    m.add_piecewise_formulation(
        (power, [0, 30, 60, 100]),
        (fuel, Slopes([1.2, 1.4, 1.7], y0=0)),
    )
* Constructor: ``Slopes(values, y0=0.0, align="pieces", dim=None)``
* Standalone resolution: ``Slopes(...).to_breakpoints(x_points)`` returns
the resolved breakpoint ``DataArray`` — useful for inspection or
building breakpoints outside the formulation pipeline.
* Dispatch: ``add_piecewise_formulation`` adds a one-pass resolution that
borrows the x grid from the first non-Slopes tuple (deterministic).
All-Slopes calls raise with a pointer to the standalone resolution.
* Supports the same shape variations as ``breakpoints(slopes=...)``
(1D, dict, DataFrame, DataArray) and the ``align`` modes from #672.
This commit is purely additive: ``breakpoints(slopes=..., x_points=...,
y0=...)`` and ``slopes_to_points`` keep working unchanged. A follow-up
commit removes them in favour of ``Slopes``.
Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
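The resolution a ``Slopes`` defers is ordinary piecewise integration of slopes over an x grid. A minimal standalone sketch of that arithmetic in plain numpy (the function name and signature here are illustrative, not linopy's actual internals):

```python
import numpy as np

def slopes_to_breakpoints(slopes, x_points, y0=0.0):
    """Integrate per-piece slopes over an x grid into y breakpoints.

    Expects exactly one slope per piece: len(slopes) == len(x_points) - 1.
    """
    x = np.asarray(x_points, dtype=float)
    s = np.asarray(slopes, dtype=float)
    if s.size != x.size - 1:
        raise ValueError("need exactly one slope per piece")
    # y[0] = y0; each subsequent breakpoint adds slope * piece width
    return y0 + np.concatenate([[0.0], np.cumsum(s * np.diff(x))])

# slopes [1, 2] over x [0, 1, 2] with y0=0 resolve to y [0, 1, 3]
```

For the gas-turbine example above, slopes ``[1.2, 1.4, 1.7]`` over ``[0, 30, 60, 100]`` resolve to ``[0, 36, 78, 146]``.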
…o_points

Now that ``Slopes`` covers the deferred-and-standalone slopes use case
with a clearer type story, drop the duplicated paths:

* ``breakpoints(slopes=, x_points=, y0=, slopes_align=)`` removed.
  ``breakpoints`` is now points-only: ``breakpoints(values, *, dim=None)``.
* ``slopes_to_points`` made private (``_slopes_to_points``) — it's a
  list-level primitive used only by ``Slopes.to_breakpoints``. Public
  callers should use ``Slopes(...)``; users who need list output can call
  ``Slopes(...).to_breakpoints([...]).values.tolist()``.

Both surfaces shipped earlier in this development cycle (``Slopes`` mode
of ``breakpoints`` from #602 and #672, ``slopes_to_points`` from #602)
and have not been released, so the breakage window is the same as the
rest of the v0.7.0 piecewise work.

Tests migrated:

* The slopes-mode tests on ``TestBreakpointsFactory`` and the entire
  ``TestSlopesAlignLeading`` class are removed; the same shapes are
  exercised in expanded ``TestSlopesClass`` tests (Series / DataArray /
  DataFrame / shared x grid / shared y0 / leading-align ragged / bad-y0
  validation).
* ``TestSlopesToPoints`` becomes ``TestSlopesToPointsPrivate``,
  importing the helper under its private name.
* Inline ``breakpoints(slopes=...)`` callers in feasibility/envelope
  tests migrated to ``Slopes(...)`` (or
  ``Slopes(...).to_breakpoints(x_pts)`` for the standalone path).

Docs:

* ``doc/api.rst``: drop ``slopes_to_points``, add ``Slopes``.
* ``doc/release_notes.rst``: replace the ``breakpoints`` slopes-mode
  bullet with one describing ``Slopes``.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
* ``doc/piecewise-linear-constraints.rst``:
- Replace the ``breakpoints(slopes=, x_points=, y0=)`` quick-reference
line with ``Slopes(values, y0=)`` (deferred form).
- Rewrite the "From slopes" section to use ``Slopes`` inside
``add_piecewise_formulation``, plus a note on standalone resolution
via ``Slopes.to_breakpoints(x_pts)``.
* ``examples/piecewise-linear-constraints.ipynb``: add section 8
"Specifying with slopes — ``Slopes``" that reproduces the section-1
gas-turbine fit using slopes [1.2, 1.6, 2.15] over the same x grid,
and demonstrates standalone ``Slopes.to_breakpoints(...)``.
The inequality-bounds notebook doesn't reference the removed slopes
APIs and stays focussed on curvature/LP dispatch — no changes there.
Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
…s bulky values
Default ``@dataclass`` repr was noisy::

    Slopes(values=[1.2, 1.6, 2.15], y0=0, align='pieces', dim=None)

and would dump the full DataArray/DataFrame for non-list inputs. New repr::

    Slopes([1.2, 1.6, 2.15], y0=0)
    Slopes([nan, 1, 2], y0=0, align='leading')
    Slopes(<DataArray gen: 2, _breakpoint: 4>, y0=0, dim='gen')
    Slopes(<DataFrame shape=(2, 3)>, y0=..., dim='gen')
* The primary ``values`` arg renders without a keyword (positional like the
constructor call) and inline only for plain lists/tuples; complex types
(DataArray/DataFrame/Series/dict) get a one-line shape summary.
* ``align`` and ``dim`` are omitted when at their defaults.
* New ``_summarise_breakslike`` helper handles the value rendering.
Notebook section 8 gains a "what does Slopes look like" peek cell that
renders the repr before the in-formulation usage, so users see the
value-type semantics directly.
Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
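The value rendering described above can be sketched as a standalone helper. This is an illustrative reconstruction (names, dispatch order, and the 8-item threshold are assumed from the commit messages, not lifted from the linopy source):

```python
import numpy as np

def summarise_breakslike(v, max_items=8):
    """Compactly render a slopes/breakpoints value for a repr."""
    cls = type(v).__name__
    # Bulky container types get a one-line shape summary.
    if cls == "DataFrame":
        return f"<DataFrame shape={v.shape}>"
    if cls == "DataArray":
        dims = ", ".join(f"{d}: {n}" for d, n in zip(v.dims, v.shape))
        return f"<DataArray {dims}>"
    if cls == "Series":
        return f"<Series len={len(v)}>"
    if isinstance(v, dict):
        return f"<dict {len(v)} entries>"
    a = np.asarray(v)
    if a.ndim == 0:
        return str(a.item())            # 0-D guard
    if a.ndim > 1:
        return f"<ndarray shape={a.shape}>"
    items = a.tolist()                  # numpy scalars -> Python scalars
    if len(items) > max_items:          # head + tail truncation
        shown = (", ".join(map(str, items[:3]))
                 + ", ..., " + ", ".join(map(str, items[-2:])))
        return f"[{shown}] ({len(items)} items)"
    return str(items)
```

With this shape, ``summarise_breakslike(range(50))`` renders as ``[0, 1, 2, ..., 48, 49] (50 items)`` and ``np.zeros((5, 20))`` as ``<ndarray shape=(5, 20)>``.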
…classes

The flat list of ``test_to_breakpoints_*`` methods had drifted into one
case per (input shape × input type) combination — duplicated bodies,
hard to scan, easy to miss a type. Restructure into five classes, each
pinning one aspect of the contract:

* ``TestSlopesValueType`` — immutability + repr. Repr behaviour
  parametrised over (1d-defaults-hidden, non-default-align,
  non-default-dim) for the format check, and over (DataFrame, DataArray,
  Series, dict) for the bulky-value summary.
* ``TestSlopesToBreakpoints1D`` — same arithmetic anchor (slopes [1, 2]
  over x [0, 1, 2] → y [0, 1, 3]) under every accepted 1D input type
  pairing (list, tuple, ndarray, Series, DataArray, mixed). Plus a
  separate parametrised "arithmetic anchors" set covering negative
  slopes, non-zero y0, and uneven x spacing.
* ``TestSlopesToBreakpointsPerEntity`` — same per-entity anchor
  (gen=a → [0, 10, 30]; gen=b → [10, 50, 110]) under every accepted
  multi-entity container type (dict, DataFrame, DataArray). Plus
  shared-x-grid broadcast and ``y0`` shape coverage (scalar, dict,
  Series, DataArray) under one parametrised test.
* ``TestSlopesToBreakpointsAlignment`` — ``align="pieces"`` and
  ``align="leading"`` must produce equal output for matching inputs;
  parametrised over 1D and per-entity-dict shapes. Ragged per-entity
  case kept as a dedicated test.
* ``TestSlopesValidationErrors`` — three rejection paths
  (leading-first-not-NaN, 1D + dict y0, bad y0 type) parametrised in
  one test.

Net: 17 individual tests collapse into 32 parametrised cases under 5
classes, with each behaviour-of-interest in exactly one place.

Also adds the missing ``BreaksLike`` import in the test-only
``TYPE_CHECKING`` block (used in the new parametrised signatures).

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
… metadata

* ``test/test_piecewise_constraints.py``: hoist the
  ``from linopy.piecewise import _slopes_to_points`` to module scope —
  it was repeated inside each of the three
  ``TestSlopesToPointsPrivate`` methods.
* ``examples/piecewise-linear-constraints.ipynb``: strip
  ``cell.metadata.execution`` (iopub timestamps) from all cells. The
  ``jupyter-notebook-cleanup`` pre-commit hook clears outputs but
  doesn't touch this field, so it accumulated noise in the diff every
  time the notebook was re-executed.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
The previous metadata-strip pass round-tripped the notebook through
``json.dump(..., indent=1)``, which defaults to ``ensure_ascii=True``
and escaped all em-dashes (and any other non-ASCII chars) across the
whole file — pure encoding churn.

Surgical fix: byte-level replace ``\u2014`` → ``—`` rather than another
JSON round-trip, so nothing else changes. Future re-encodes should use
``ensure_ascii=False``.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
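The escaping behaviour in question is the stdlib ``json`` default, easy to demonstrate in isolation:

```python
import json

# Default ensure_ascii=True escapes every non-ASCII character:
assert json.dumps("em—dash") == '"em\\u2014dash"'

# ensure_ascii=False keeps UTF-8 content intact:
assert json.dumps("em—dash", ensure_ascii=False) == '"em—dash"'
```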
…adata

Two more accidental edits from the json round-trip caught by reviewing
the master diff:

* ``≤`` and ``≥`` in section 4 (existing master content) had been
  escaped to ``\u2264`` / ``\u2265``. Restored to UTF-8.
* Notebook ``language_info.version`` metadata had drifted from
  ``"3.13.2"`` (master) to ``"3.11.11"`` (whatever kernel I happened to
  run). Reverted.

Net: the notebook diff vs master is now 63 insertions / 0 deletions —
only the four new section-8 cells, no incidental churn.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
Addresses review of #673:

* **Slopes now actually emits the EvolvingAPIWarning** it advertises in
  its docstring. The warning fires from ``__post_init__`` so the
  standalone ``Slopes(...).to_breakpoints(...)`` migration path doesn't
  silently bypass the evolving-API signal that the previous
  ``breakpoints(slopes=...)`` form indirectly inherited.
  ``_EvolvingApiKey`` extended to include ``"Slopes"``; per-key dedup
  keeps construction cheap on repeated use.
* **``_summarise_breakslike`` truncates long sequences** instead of
  dumping them verbatim. Sequences over 8 entries render as
  ``[0, 1, 2, ..., 48, 49] (50 items)`` — the previous "small size"
  comment promised this without enforcing it.
* **``test_two_non_slopes_picks_first_x_grid``** previously asserted
  only that the formulation was registered. Now uses distinguishable x
  grids (10× scale difference), pins the model onto piece 1, and
  verifies ``z == 10`` (the value implied by the *first* tuple's grid)
  rather than ``z == 100`` (the second tuple's).
* **New ``test_multiple_slopes_share_x_grid``** covers the
  ``(non-Slopes, Slopes, Slopes)`` shape — both Slopes resolve against
  the same borrowed grid. Reviewer-flagged coverage gap.
* **New ``test_slopes_construction_warns_and_dedups``** in
  ``TestEvolvingAPIWarning`` pins the new warning behaviour.
* **New ``test_repr_truncates_long_sequences``** in
  ``TestSlopesValueType`` pins the truncation.
* Hoisted ``set(slopes_idx)`` out of the ``non_slopes_idx``
  comprehension in the dispatch (cosmetic; N is small).
* Added a module-level ``TOL = 1e-6`` constant in
  ``test_piecewise_constraints.py`` matching the convention in
  ``test_piecewise_feasibility.py``; the new dispatch test uses it.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
1. **Stacklevel was off by one** for warnings emitted from
   ``Slopes.__post_init__``. The dataclass-generated ``__init__`` adds
   an extra frame (helper → ``_warn_evolving_api`` → ``__post_init__``
   → synthetic ``__init__`` → user code), so ``stacklevel=3`` landed
   inside the synthetic init instead of the user's call site. Made
   ``_warn_evolving_api`` accept ``stacklevel`` as a parameter (default
   3, matching the function-call entry points) and pass
   ``stacklevel=4`` from ``Slopes``.
2. **Equality crashed with array values.** Frozen dataclasses default
   to field-tuple ``__eq__``, so
   ``Slopes(np.array([1, 2])) == Slopes(np.array([1, 2]))`` raised
   ``ValueError: truth value of an array with more than one element is
   ambiguous``. Added ``eq=False`` to opt out and fall back to identity
   equality. ``Slopes`` is now safely usable as a set member or dict
   key.
3. **Numpy scalar repr noise.** ``_summarise_breakslike`` previously
   called ``list(v)``, which preserved numpy scalar types; their reprs
   differ from Python scalars (and across numpy versions). Switched to
   ``np.asarray(v).tolist()``, which normalises numpy types to Python
   types up front, so ``Slopes(np.array([1, 2, 3], dtype=np.int64),
   y0=0)`` renders as ``Slopes([1, 2, 3], y0=0)`` uniformly. Added a
   0-D guard for the edge case.

Each fix is pinned by a new test in ``TestSlopesValueType``
(``test_repr_normalises_numpy_scalars``,
``test_equality_with_array_values_does_not_raise``) and
``TestEvolvingAPIWarning``
(``test_slopes_warning_stacklevel_points_to_user_call``).

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
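The frame accounting and per-key dedup can be reproduced with a toy frozen dataclass. This sketch uses assumed names, and ``FutureWarning`` stands in for linopy's ``EvolvingAPIWarning``:

```python
import warnings
from dataclasses import dataclass

_warned: set = set()

def warn_evolving_api(key, stacklevel=3):
    """Warn once per key. stacklevel is a parameter because callers sit
    at different frame depths: a plain function entry point needs 3; a
    dataclass __post_init__ needs 4, since the synthetic __init__ adds
    a frame between user code and __post_init__."""
    if key in _warned:
        return  # per-key dedup keeps repeated construction cheap
    _warned.add(key)
    warnings.warn(f"{key} is an evolving API and may change",
                  FutureWarning, stacklevel=stacklevel)

@dataclass(frozen=True)
class Spec:  # toy stand-in; linopy's Slopes plays this role
    values: tuple

    def __post_init__(self):
        # user code -> synthetic __init__ -> __post_init__ -> helper
        warn_evolving_api("Spec", stacklevel=4)
```

Constructing ``Spec(...)`` twice emits exactly one warning; with ``stacklevel=3`` the reported location would be the dataclass-generated ``__init__`` rather than the caller.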
Earlier ``eq=False`` (identity equality) was a footgun for tests:
``assert pwf_spec == expected_slopes`` would silently return ``False``
even when the two specs described the same curve.
Replace with a custom ``__eq__`` that compares each field by value:
* ``align`` / ``dim`` — plain ``==``.
* ``y0`` / ``values`` — dispatched on type via ``_values_equal``:
- ``ndarray`` → ``np.array_equal(equal_nan=True)``
- ``DataFrame`` / ``Series`` → ``.equals(...)``
- ``DataArray`` → ``.equals(...)``
- ``dict`` → recurse on matching keys
- scalar ``float`` → NaN-safe ``==`` (treats nan==nan as ``True`` to
match the array path's ``equal_nan=True``)
- everything else → strict ``type(a) is type(b)`` then ``==``.
``__hash__`` set to ``None`` (unhashable) since ``values`` may be a
mutable container. Documented edges:
* List vs ndarray of the same numeric content compare unequal — strict
type matching, same as Python's general ``[1,2] != np.array([1,2])``
behaviour.
Tests: parametrised ``TestSlopesValueType.test_equality`` covers nine
shapes (lists, ndarrays, dicts, NaN scalars, NaN in arrays, mismatched
y0, mismatched values, mismatched types, dict inner-value mismatch).
Plus ``test_eq_against_non_slopes_returns_notimplemented`` for the
non-Slopes branch and ``test_unhashable`` pinning the hash opt-out.
Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
Previously a multi-dim ndarray fell through to the seq path,
``np.asarray(v).tolist()`` returned nested lists, and the repr dumped
them in full. Even a moderate ``np.zeros((5, 20))`` produced a 2-line
wall of ``0.0`` entries; an earlier ``np.zeros((20, 5, 30))`` case
would have been worse.

Treat 2-D+ ndarrays the same way ``DataArray`` / ``DataFrame`` /
``Series`` are treated: a one-line shape summary
(``<ndarray shape=(20, 5, 30)>``). 1-D ndarrays still render inline
with the existing head + tail truncation, so user-facing slope
specifications stay readable. The ``np.asarray(v)`` call is hoisted so
we don't double-normalise on the 1-D path.

New parametrised case ``multi_dim_ndarray`` in
``TestSlopesValueType.test_repr_summarises_bulky_values`` pins the new
behaviour.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
Equality (``Slopes.__eq__`` via ``_values_equal``) was strict-type to a
fault. Four edge cases produced surprising ``False`` results despite
the operands describing the same curve:
1. ``Slopes(y0=0) != Slopes(y0=0.0)`` — ``int`` and ``float`` are
semantically the same y-coordinate (``_breakpoints_from_slopes``
calls ``float(y0)`` downstream), but the strict ``type(a) is
type(b)`` gate rejected them.
2. ``Slopes(y0=np.float64(0)) != Slopes(y0=0.0)`` — same root cause
for numpy scalars.
3. ``Slopes([float('nan'), 1.0], align='leading')`` was unequal to
   itself — Python's list equality uses ``is`` before ``==`` per
   element, so it only worked accidentally when the user happened to
   write ``np.nan`` (a single shared float object) instead of
   ``float('nan')``.
4. ``np.array_equal(..., equal_nan=True)`` raises ``TypeError`` on
object/string ndarrays.
Rewrite ``_values_equal`` to:
* Treat any two ``numbers.Real`` (excluding ``bool``) as numerically
comparable with a NaN-safe float fallback.
* Promote ``list`` / ``tuple`` to ndarray before the array branch so
in-place ``float('nan')`` content compares element-wise NaN-safe.
* Fall back to ``np.array_equal`` without ``equal_nan`` when the
array has a non-numeric dtype.
Document the new semantics on ``__eq__`` and explicitly note that
``.equals`` for pandas / xarray containers is order-sensitive.
Tests:
* Flip ``different_value_types`` (now ``list_and_ndarray_same_content``)
to expect ``True``.
* Rename ``nan_in_list_via_array_path`` → ``np_nan_in_list``; add
parallel ``float_nan_in_list`` case.
* Add ``int_and_float_y0`` and ``numpy_scalar_and_float_y0`` cases.
* Add ``test_eq_dataframe_is_order_sensitive`` pinning the documented
``.equals`` caveat.
* Add ``test_eq_object_dtype_ndarray_does_not_raise`` covering the
non-numeric ndarray fallback path.
Release notes: trim the ``Slopes`` entry to the user-facing purpose
(specify a curve by marginal costs / per-piece slopes) and the
canonical call form. Drop the dev-cycle "**replaces** the slopes mode
of ``breakpoints()``..." sentence — those API surfaces never shipped,
so v0.7.0 readers have no context for the removal note.
Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
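The rewritten comparison rules can be sketched as a standalone function. This is an illustrative reconstruction of the semantics listed above, not linopy's actual ``_values_equal`` (pandas/xarray ``.equals`` branches omitted for brevity):

```python
import numbers
import numpy as np

def values_equal(a, b):
    """NaN-safe value equality per the rules above (sketch)."""
    # Any two real numbers (bool excluded) compare numerically,
    # treating nan == nan as True.
    if (isinstance(a, numbers.Real) and isinstance(b, numbers.Real)
            and not isinstance(a, bool) and not isinstance(b, bool)):
        fa, fb = float(a), float(b)
        return fa == fb or (fa != fa and fb != fb)
    # Promote list/tuple to ndarray so float('nan') content compares
    # element-wise NaN-safe instead of via list identity shortcuts.
    if (isinstance(a, (list, tuple, np.ndarray))
            and isinstance(b, (list, tuple, np.ndarray))):
        a, b = np.asarray(a), np.asarray(b)
        try:
            return bool(np.array_equal(a, b, equal_nan=True))
        except TypeError:  # object / string dtype: no equal_nan support
            return bool(np.array_equal(a, b))
    if isinstance(a, dict) and isinstance(b, dict):
        return a.keys() == b.keys() and all(
            values_equal(a[k], b[k]) for k in a)
    # Everything else: strict type match, then plain equality.
    if type(a) is not type(b):
        return False
    return bool(a == b)
```

Under these rules ``values_equal(0, 0.0)`` and ``values_equal([float('nan')], [np.nan])`` are both ``True``, and object-dtype arrays fall back without raising.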
Section 8 was 6 cells where 2 do the same job — the surrounding
sections (1, 7) all use the 1-markdown-intro + 1-code-cell pattern.
Drops:

* The repr-explanation markdown + a standalone ``Slopes(...)`` cell
  showing the repr. The repr is incidental; users will see it whenever
  they instantiate a ``Slopes``.
* The ``to_breakpoints`` intro markdown and demo cell. Standalone
  resolution is documented in the ``.rst`` page; the notebook should
  show the canonical ``add_piecewise_formulation`` use only.
* The ``# Same curve as section 1 — slopes 1.2, 1.6, 2.15 …`` inline
  comment, now that the markdown intro says the same thing.

Also tighten the markdown intro: drop the bold emphasis on "borrowed
from the sibling tuple" and the trailing transition sentence.

Net result: the section-8 diff vs master drops from 63 lines to 30
(roughly halved), and the section now mirrors the visual rhythm of the
rest of the tutorial.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
…_formulation

The previous "borrow x grid from the first non-Slopes tuple" rule was
silently order-dependent when more than one non-Slopes tuple was
present. Each non-Slopes tuple is a y-vector for its own variable, so
there is no canonical x axis — picking the *first* meant tuple order
changed the resolved breakpoints, and therefore the optimisation
problem itself.

Reject the ambiguous case at the dispatch boundary instead. The new
ValueError points users at ``Slopes(...).to_breakpoints(x_pts)`` so
they can opt into a specific x grid explicitly when their setup has
multiple breakpoint vectors in play.

* ``Slopes`` docstring updated: states the "exactly one non-Slopes"
  rule and the ``to_breakpoints`` escape hatch up front.
* ``test_three_tuple_deferred`` removed — its (power, fuel, Slopes)
  shape is now invalid and the equivalent (power, Slopes, Slopes) is
  already covered by ``test_multiple_slopes_share_x_grid``.
* ``test_two_non_slopes_picks_first_x_grid`` →
  ``test_multiple_non_slopes_with_slopes_raises``: the test that
  previously pinned the order-dependent behaviour now pins the
  ValueError.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
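The dispatch boundary check reduces to counting tuple kinds. A sketch with a minimal ``Slopes`` stand-in (function name and error messages are assumptions, not the linopy source):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Slopes:  # minimal stand-in for linopy.Slopes
    values: tuple

def split_piecewise_tuples(tuples):
    """Partition (var, spec) tuples; enforce the 'exactly one
    non-Slopes tuple supplies the x grid' rule when any Slopes is
    present."""
    slopes_idx = [i for i, (_, spec) in enumerate(tuples)
                  if isinstance(spec, Slopes)]
    slopes_set = set(slopes_idx)  # hoisted out of the comprehension
    non_slopes_idx = [i for i in range(len(tuples))
                      if i not in slopes_set]
    if slopes_idx and not non_slopes_idx:
        raise ValueError(
            "all tuples are Slopes; resolve explicitly via "
            "Slopes(...).to_breakpoints(x_pts)")
    if slopes_idx and len(non_slopes_idx) > 1:
        raise ValueError(
            "multiple non-Slopes tuples alongside Slopes: no canonical "
            "x grid; resolve each Slopes explicitly via "
            "Slopes(...).to_breakpoints(x_pts)")
    return slopes_idx, non_slopes_idx
```

The valid shape ``(points, Slopes, Slopes)`` passes through; all-Slopes and multi-non-Slopes calls raise at the boundary.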
Summary
Follow-up to #638 and #672 — replaces the slopes mode of
``breakpoints()`` and the standalone ``slopes_to_points()`` helper with
a proper value type, ``linopy.Slopes``.

The API

* ``Slopes`` is a frozen value type carrying per-piece slopes plus an
  initial y-value, deferred until an x grid is known:
  ``Slopes(values, y0=0.0, align="pieces", dim=None)``.
* ``Slopes.to_breakpoints(x_points)`` resolves the spec to a
  ``DataArray`` for standalone use or inspection.
* Supports ``y0`` and ``align="leading"`` from #672 (feat: add
  slopes_align to breakpoints()).

Dispatch rule

When any ``Slopes`` tuple is present, exactly one other tuple must
carry explicit breakpoints — that tuple's values are the x grid against
which all ``Slopes`` are integrated. Two error cases:

* All tuples are ``Slopes`` → ``ValueError`` pointing at
  ``Slopes(...).to_breakpoints(x_pts)``.
* Multiple non-``Slopes`` tuples → ``ValueError``. Each non-``Slopes``
  tuple is a y-vector for its own variable, so there is no canonical x
  axis; picking one would silently depend on tuple order, and the
  resolved breakpoints (and therefore the optimisation problem) would
  change with it. Users in this shape resolve the ``Slopes`` explicitly
  via ``to_breakpoints`` so the integration grid is visible at the call
  site.

Removed

* ``breakpoints(slopes=, x_points=, y0=, slopes_align=)`` —
  ``breakpoints`` is now points-only:
  ``breakpoints(values, *, dim=None)``.
* ``slopes_to_points`` — privatised to ``_slopes_to_points``
  (list-level primitive used internally by ``Slopes.to_breakpoints``).

Both surfaces shipped earlier in this development cycle (slopes mode in
#602, just extended by #672 with ``slopes_align``; ``slopes_to_points``
from #602) and have not been released, so the breakage window is the
same as the rest of the v0.7.0 piecewise work.

Why

* ``breakpoints()`` always returns a ``DataArray``; ``Slopes`` is a
  separate, deferred type. Dispatch becomes an ``isinstance`` check
  rather than coordinate sniffing.
* One spelling for slopes: ``Slopes(...)`` (deferred / inherit) or
  ``Slopes(...).to_breakpoints(x_pts)`` (standalone).
* The slopes mode of ``breakpoints`` and ``slopes_to_points``
  duplicated each other — duplication ``Slopes`` removes.

Value-type behaviour

* Repr summarises bulky values: ``<DataArray ...>`` /
  ``<DataFrame shape=...>`` / ``<Series len=...>`` /
  ``<dict N entries>`` / ``<ndarray shape=(...)>`` for multi-dim
  arrays. 1-D sequences over 8 entries truncate to head + tail with
  item count. Numpy scalar dtypes render as plain Python numbers.
* Equality via ``_values_equal``: numeric scalars coerce across
  ``int`` / ``float`` / ``np.float64``; ``list`` / ``tuple`` are
  promoted to ``ndarray`` so NaN content compares element-wise;
  ``ndarray`` uses ``np.array_equal(equal_nan=True)`` with a fallback
  for non-numeric dtypes; pandas / xarray containers use ``.equals``
  (order-sensitive); ``dict`` recurses on matching keys.
* ``__hash__ = None`` (mutable inner values).
* ``EvolvingAPIWarning`` fires on construction, so the
  ``Slopes(...).to_breakpoints(...)`` path doesn't silently bypass the
  evolving-API signal.

Migration (pre-release → pre-release)

* ``breakpoints(slopes=[1.2, 1.4, 1.7], x_points=[0, 30, 60, 100], y0=0)``
  → ``Slopes([1.2, 1.4, 1.7], y0=0).to_breakpoints([0, 30, 60, 100])``
* ``breakpoints(slopes=[...], x_points=[...], y0=..., slopes_align="leading")``
  → ``Slopes([...], y0=..., align="leading").to_breakpoints([...])``
* In ``add_piecewise_formulation``: pass ``Slopes(...)`` directly
  instead of the resolved DataArray — the x grid is inherited from the
  sibling tuple.
* ``slopes_to_points([0, 1, 2], [1, 2], 0)`` →
  ``Slopes([1, 2], y0=0).to_breakpoints([0, 1, 2]).values.tolist()``

🤖 Generated with Claude Code
Manual notes for review
``Slopes.__repr__`` and ``Slopes.__eq__`` are debatable. We could
choose to compare values by identity instead of by equality, which is
worse UX but probably simpler and more stable.