
WIP: OTA-1956: install: Add a TechPreviewNoUpgrade cluster-update console plugin#1388

Open
wking wants to merge 2 commits into openshift:main from wking:cluster-update-console-plugin

Conversation

@wking
Member

@wking wking commented May 12, 2026

The console folks are pushing to decentralize console implementation away from the console repository, so we've created a new console plugin for cluster updates. It's built by both CI and ART. Checking on app.ci ImageStreams:

$ oc whoami -c
default/api-ci-l2s4-p1-openshiftapps-com:6443/wking
$ oc -n ocp get -o json imagestream 5.0 | jq -r '.status.tags[] | select(.tag == "cluster-update-console-plugin").items[] | .created + " " + .image'
2026-05-01T20:55:38Z sha256:10e4f1b5763f40372823173b2a9528777ff8e97d416c5447e10df023c0e35656
2026-04-29T05:00:56Z sha256:b0433455cbbff13bdda03ee78371e18c139adebc00432dea716e8cdf83eeb042
2026-04-17T11:10:31Z sha256:e1296b64ffb35757fb2fb56bb5dd9cbd55c7f17f9f59c0a10a2d71a0ad6702d3
$ oc -n ocp get -o json imagestream 5.0-art-latest | jq -r '.status.tags[] | select(.tag == "cluster-update-console-plugin").items[] | .created + " " + .image'
2026-05-12T19:45:33Z sha256:b943be0ae0eba97c27741d0184e99a77ea928749cc578ae7e17a8e5329652642
2026-05-12T14:42:55Z sha256:c29ba37ef5a426de5320d680d3fb58befc530274e6df4c32b4dc4fd0acaaaae0
2026-05-12T09:02:35Z sha256:901bc6aac1142fe1da2a756bb4d91ae8fe14b459ce2bf9904ceae6d2fc818fc2
2026-05-12T04:38:56Z sha256:1f4e8b200d97f82db1784b4d7fc9f0bd3ccaf2cc664fd5f0ff6b80485da16950
2026-05-11T22:54:34Z sha256:d369ca7c73d7a3abe159e9e6f5644f63e0b091f0b75f202773a023e91c7faaf6

This commit sets up an image-references file, so oc adm release new ... knows that we'll want that image injected in the Deployment manifest. I'm using placeholder.url.oc.will.replace.this.example.org as part of my placeholder name. That's similar to the machine-config operator's use of placeholder.url.oc.will.replace.this.org, except that I'm using a subdomain of the reserved example.com, to avoid any possible confusion with an actually in-use domain.
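For reference, an install/image-references file of the kind described above is an ImageStream whose tags name the images the release payload should inject. This is only a sketch under the conventions in the CVO dev docs; the exact repository path in the placeholder pullspec is illustrative, not taken from this PR:

```yaml
kind: ImageStream
apiVersion: image.openshift.io/v1
spec:
  tags:
  - name: cluster-update-console-plugin
    from:
      kind: DockerImage
      # Placeholder pullspec; 'oc adm release new' rewrites references
      # to this tag in the payload manifests with the real image.
      name: placeholder.url.oc.will.replace.this.example.org/openshift/cluster-update-console-plugin
```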

The new manifests are in run-level 50, which is the default, so they can roll out in parallel with other components to avoid slowing updates.

The new manifests are tied to the Console capability and the TechPreviewNoUpgrade feature set (in the absence of a specific feature gate for this functionality).
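A minimal sketch of how that gating typically looks in a CVO-managed manifest's metadata, assuming the standard capability and feature-set annotation keys (the cluster-profile include annotation shown here is illustrative):

```yaml
metadata:
  annotations:
    # Only reconcile when the Console capability is enabled.
    capability.openshift.io/name: Console
    # Only include in TechPreviewNoUpgrade payloads.
    release.openshift.io/feature-set: TechPreviewNoUpgrade
    include.release.openshift.io/self-managed-high-availability: "true"
```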

I'm just carrying the old exclude.release.openshift.io/internal-openshift-hosted annotation over from other CVO manifests. It predates cluster profiles, and I'm not sure anyone still uses it, but it seems like the CVO should be consistent about whether it matters or not anymore. Perhaps we can drop it from all CVO manifests in follow-up work.

Otherwise these manifests are loosely based on my attempts to meld the plugin's Helm chart templates with existing CVO manifest conventions.

Summary by CodeRabbit

  • New Features

    • Added "Cluster Updates" console plugin (Tech Preview) with UI integration and backend service.
  • Chores

    • Provisioned namespace, deployment, service, and image reference for the plugin.
    • Enforced a default‑deny network policy and applied pod security levels for the plugin namespace.
    • Added Prometheus scraping and alerting rules to monitor the plugin and cluster update conditions.

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label May 12, 2026
@openshift-ci-robot
Contributor

openshift-ci-robot commented May 12, 2026

@wking: This pull request references OTA-1956 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "5.0.0" version, but no target version was set.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@coderabbitai

coderabbitai Bot commented May 12, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 3d026c9f-29f4-49b8-8e4b-07f5637a3584

📥 Commits

Reviewing files that changed from the base of the PR and between 62d193f and bf8f4c2.

📒 Files selected for processing (1)
  • pkg/payload/render_test.go
✅ Files skipped from review due to trivial changes (1)
  • pkg/payload/render_test.go

Walkthrough

Adds Kubernetes/OpenShift manifests to install the Cluster Updates console plugin: a namespace with pod-security labels and annotations, a default-deny NetworkPolicy, a hardened Deployment and ClusterIP Service, ServiceMonitor and PrometheusRule alerts, a ConsolePlugin resource, an ImageStream reference, and a small test change to skip image-references during rendering.

Changes

Cluster Update Console Plugin

Layer / File(s) Summary
Namespace and network baseline
install/0000_50_cluster-update-console-plugin_10_namespace.yaml, install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml
Creates Namespace openshift-cluster-update-console-plugin with descriptive and OpenShift release/capability annotations and labels (monitoring, pod-security). Adds NetworkPolicy default-deny in that namespace with policyTypes: [Ingress,Egress] and empty rules to enforce default-deny.
Deployment runtime and pod security
install/0000_50_cluster-update-console-plugin_50_deployment.yaml
Adds Deployment cluster-update-console-plugin in the plugin namespace: rolling update strategy, labels/selectors, single-replica template with nodeSelector: node-role.kubernetes.io/infra, priorityClassName: system-cluster-critical, automountServiceAccountToken: false, terminationGracePeriodSeconds: 30, toleration for infra nodes, cert volume from secret cluster-update-console-plugin-cert, container plugin (image placeholder) exposing port 9001, resource requests (20m CPU, 50Mi), non-root and hardened securityContext (no privilege escalation, drop ALL capabilities, seccomp RuntimeDefault).
Service and metadata annotations
install/0000_50_cluster-update-console-plugin_60_service.yaml
Adds Service openshift-cluster-update-console-plugin (ClusterIP) selecting app: cluster-update-console-plugin, exposes port 9001 (targetPort: https), and includes OpenShift capability/feature-set and release include/exclude annotations.
Prometheus scraping & alerting
install/0000_50_cluster-update-console-plugin_80_servicemonitor.yaml
Adds ServiceMonitor cluster-version-operator in openshift-cluster-version scraping metrics port over HTTPS with client TLS and 30s interval. Adds PrometheusRule cluster-version-operator with rule groups cluster-version and cluster-operators and alerts: ClusterVersionOperatorDown, CannotRetrieveUpdates, UpdateAvailable, ClusterReleaseNotAccepted, ClusterNotUpgradeable, ClusterOperatorDown, ClusterOperatorDegraded, ClusterOperatorFlapping, CannotEvaluateConditionalUpdates (with PromQL exprs, for durations, severity labels, and annotations).
Console plugin registration and image reference
install/0000_50_cluster-update-console-plugin_90_consoleplugin.yaml, install/image-references
Adds ConsolePlugin openshift-cluster-update-console-plugin (displayName "Cluster Updates", TechPreviewNoUpgrade, preloaded i18n, backend service cluster-update-console-plugin over https at /). Adds install/image-references ImageStream tag cluster-update-console-plugin referencing a placeholder DockerImage.
Test render skip for image-references
pkg/payload/render_test.go
Updates Test_cvoManifests directory walk to skip files named image-references to avoid attempting to render that file as a manifest during the test run.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 11 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 0.00% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (11 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title directly and accurately reflects the main change: adding a new TechPreviewNoUpgrade cluster-update console plugin to the installation manifests.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.
Stable And Deterministic Test Names ✅ Passed All test names in the modified test file are stable and deterministic. No dynamic information like pod names, timestamps, UUIDs, node names, or IP addresses found in test titles.
Test Structure And Quality ✅ Passed The PR modifies pkg/payload/render_test.go, which uses standard Go testing.T patterns, not Ginkgo test patterns. Custom check requires Ginkgo code review.
Microshift Test Compatibility ✅ Passed This PR does not add any Ginkgo e2e tests. It adds Kubernetes manifest files and a unit test modification. The custom check is not applicable.
Single Node Openshift (Sno) Test Compatibility ✅ Passed No new Ginkgo e2e tests added. PR only adds Kubernetes manifests and modifies an existing Go unit test to skip image-references files. Check not applicable.
Topology-Aware Scheduling Compatibility ✅ Passed Deployment uses infra nodeSelector with matching toleration, compatible with all topologies (SNO, Two-Node, HyperShift). No problematic affinity rules or control-plane selectors.
Ote Binary Stdout Contract ✅ Passed PR modifies only test function Test_cvoManifests with conditional skip logic. No process-level code changes. No stdout writes. OTE contract not violated.
Ipv6 And Disconnected Network Test Compatibility ✅ Passed No new Ginkgo e2e tests are added. Changes include Kubernetes manifests (YAML) and modification to an existing Go unit test. Check is not applicable.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.




Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci
Contributor

openshift-ci Bot commented May 12, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: wking

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci Bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label May 12, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml`:
- Around line 13-16: The NetworkPolicy as written uses podSelector: {} and
policyTypes: [Ingress, Egress], which creates a default-deny for all pods;
change the podSelector to target only the console-plugin pods by matching the
plugin's label (e.g., matchLabels for the plugin deployment/service) instead of
{} or, if you intend to keep default-deny, add explicit allow rules: in the same
NetworkPolicy add ingress rules permitting traffic from the console/frontend
selector (or namespace) and add egress rules allowing DNS (UDP/TCP 53), the
Kubernetes API server, and any external endpoints the plugin needs; reference
the podSelector, policyTypes, Ingress and Egress fields when making these
changes so the policy only isolates the plugin pods as intended rather than all
pods.

In `@install/0000_50_cluster-update-console-plugin_50_deployment.yaml`:
- Around line 47-65: The volumeMount name "cluster-update-console-plugin" in the
container spec does not match the declared volume
"cluster-update-console-plugin-cert", causing the mount to fail; update the
volumeMount name to "cluster-update-console-plugin-cert" (or rename the volume
to match) so the names align between the volumeMount entry and the volumes list
(look for the volumeMounts.name and volumes.name fields).

In `@install/0000_50_cluster-update-console-plugin_60_service.yaml`:
- Around line 18-34: Remove the stray/malformed second Service block that
defines "name: cluster-version-operator" (the orphaned lines starting after the
metadata) and replace it with a proper spec: for the existing Service resource
named "openshift-cluster-update-console-plugin" so the name matches the
ConsolePlugin backend, which expects "cluster-update-console-plugin"; ensure the
Service's metadata.name is corrected to "cluster-update-console-plugin" (no
"openshift-" prefix) or update the ConsolePlugin to match—then add a spec that
contains the selector "k8s-app: cluster-update-console-plugin" and a ports entry
(e.g., name: metrics, port: 9099, targetPort: metrics) to expose the plugin
metrics instead of the unrelated "cluster-version-operator" fragment.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 57e8afcc-7964-49f1-8538-116d60ef55b5

📥 Commits

Reviewing files that changed from the base of the PR and between 4cfd94b and 612aac0.

📒 Files selected for processing (7)
  • install/0000_50_cluster-update-console-plugin_10_namespace.yaml
  • install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml
  • install/0000_50_cluster-update-console-plugin_50_deployment.yaml
  • install/0000_50_cluster-update-console-plugin_60_service.yaml
  • install/0000_50_cluster-update-console-plugin_80_servicemonitor.yaml
  • install/0000_50_cluster-update-console-plugin_90_consoleplugin.yaml
  • install/image-references

Comment on lines +13 to +16
podSelector: {}
policyTypes:
- Ingress
- Egress


⚠️ Potential issue | 🔴 Critical | 🏗️ Heavy lift

This default-deny policy will fully isolate the plugin pods.

With Line 13 selecting all pods and Line 15-16 denying both directions without any allow policy in this PR, console-to-plugin traffic and required outbound traffic (for example DNS/API) will be blocked.
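A hedged sketch of the kind of companion allow policy the review suggests, scoped to the plugin pods and permitting console ingress plus DNS egress. The label selectors, namespace label, and ports here are illustrative guesses based on the walkthrough (app: cluster-update-console-plugin, port 9001), not manifests from this PR:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cluster-update-console-plugin-allow
  namespace: openshift-cluster-update-console-plugin
spec:
  podSelector:
    matchLabels:
      app: cluster-update-console-plugin
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Allow the console to reach the plugin's serving port.
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: openshift-console
    ports:
    - protocol: TCP
      port: 9001
  egress:
  # Allow DNS resolution.
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```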

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml` around
lines 13 - 16, The NetworkPolicy as written uses podSelector: {} and
policyTypes: [Ingress, Egress], which creates a default-deny for all pods;
change the podSelector to target only the console-plugin pods by matching the
plugin's label (e.g., matchLabels for the plugin deployment/service) instead of
{} or, if you intend to keep default-deny, add explicit allow rules: in the same
NetworkPolicy add ingress rules permitting traffic from the console/frontend
selector (or namespace) and add egress rules allowing DNS (UDP/TCP 53), the
Kubernetes API server, and any external endpoints the plugin needs; reference
the podSelector, policyTypes, Ingress and Egress fields when making these
changes so the policy only isolates the plugin pods as intended rather than all
pods.

Comment thread install/0000_50_cluster-update-console-plugin_50_deployment.yaml
Comment thread install/0000_50_cluster-update-console-plugin_60_service.yaml Outdated
@wking wking changed the title OTA-1956: install: Add a TechPreviewNoUpgrade cluster-update console plugin WIP: OTA-1956: install: Add a TechPreviewNoUpgrade cluster-update console plugin May 12, 2026
@openshift-ci openshift-ci Bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label May 12, 2026
@wking wking force-pushed the cluster-update-console-plugin branch from 612aac0 to eda10fc Compare May 12, 2026 22:51

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (2)
install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml (1)

13-16: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Namespace-wide deny will isolate all pods unless allow policies are added.

podSelector: {} with both Ingress and Egress deny types blocks traffic for every pod in the namespace. If this is intended as baseline deny, pair it with explicit allow policies in this change set (or narrow the selector to plugin pods).

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml` around
lines 13 - 16, The NetworkPolicy currently uses podSelector: {} with
policyTypes: Ingress and Egress which enforces namespace-wide deny; narrow the
scope or add explicit allow rules: change the podSelector to target only console
plugin pods (e.g., by label selector for the plugin) or include companion
NetworkPolicy objects that explicitly allow required Ingress and/or Egress
traffic for the plugin (reference podSelector and policyTypes in this manifest
and ensure allow policies cover necessary ports/peers).
install/0000_50_cluster-update-console-plugin_60_service.yaml (1)

4-4: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Service name mismatches ConsolePlugin backend reference.

This Service is openshift-cluster-update-console-plugin, but the ConsolePlugin backend points to cluster-update-console-plugin. The plugin backend won’t resolve this Service as-is.

Suggested minimal fix
-  name: openshift-cluster-update-console-plugin
+  name: cluster-update-console-plugin
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@install/0000_50_cluster-update-console-plugin_60_service.yaml` at line 4, The
Service resource name "openshift-cluster-update-console-plugin" does not match
the ConsolePlugin backend service name "cluster-update-console-plugin"; update
one to match the other so the ConsolePlugin backend can resolve the
Service—either rename the Service name value to "cluster-update-console-plugin"
or change the ConsolePlugin backend service reference to
"openshift-cluster-update-console-plugin", and ensure any references to Service
name in resources or RBAC (if present) are updated consistently (check the
Service resource name and the ConsolePlugin backend.service.name fields).
🧹 Nitpick comments (1)
install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml (1)

4-4: ⚡ Quick win

Use a component-scoped NetworkPolicy name.

default-deny is too generic for install/** manifests; include the full component name to keep resource ownership clear.

As per coding guidelines, "Resource names should use the full component name (e.g., cluster-version-operator) not acronyms."

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml` at line
4, The NetworkPolicy currently uses the generic name "default-deny"; rename the
resource to a component-scoped DNS-1123-compliant name that includes the full
component identifier (for example, "cluster-update-console-plugin-default-deny"
or similar) by updating the metadata.name value for the NetworkPolicy resource
(the symbol "default-deny") and any references to that name; ensure the new name
follows lowercase/dash rules and update related manifests or Role/RoleBinding
references if they refer to "default-deny".
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Duplicate comments:
In `@install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml`:
- Around line 13-16: The NetworkPolicy currently uses podSelector: {} with
policyTypes: Ingress and Egress which enforces namespace-wide deny; narrow the
scope or add explicit allow rules: change the podSelector to target only console
plugin pods (e.g., by label selector for the plugin) or include companion
NetworkPolicy objects that explicitly allow required Ingress and/or Egress
traffic for the plugin (reference podSelector and policyTypes in this manifest
and ensure allow policies cover necessary ports/peers).

In `@install/0000_50_cluster-update-console-plugin_60_service.yaml`:
- Line 4: The Service resource name "openshift-cluster-update-console-plugin"
does not match the ConsolePlugin backend service name
"cluster-update-console-plugin"; update one to match the other so the
ConsolePlugin backend can resolve the Service—either rename the Service name
value to "cluster-update-console-plugin" or change the ConsolePlugin backend
service reference to "openshift-cluster-update-console-plugin", and ensure any
references to Service name in resources or RBAC (if present) are updated
consistently (check the Service resource name and the ConsolePlugin
backend.service.name fields).

---

Nitpick comments:
In `@install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml`:
- Line 4: The NetworkPolicy currently uses the generic name "default-deny";
rename the resource to a component-scoped DNS-1123-compliant name that includes
the full component identifier (for example,
"cluster-update-console-plugin-default-deny" or similar) by updating the
metadata.name value for the NetworkPolicy resource (the symbol "default-deny")
and any references to that name; ensure the new name follows lowercase/dash
rules and update related manifests or Role/RoleBinding references if they refer
to "default-deny".

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 80d8a20e-32f7-4b5c-a5ed-6d5166be7a3f

📥 Commits

Reviewing files that changed from the base of the PR and between 612aac0 and eda10fc.

📒 Files selected for processing (7)
  • install/0000_50_cluster-update-console-plugin_10_namespace.yaml
  • install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml
  • install/0000_50_cluster-update-console-plugin_50_deployment.yaml
  • install/0000_50_cluster-update-console-plugin_60_service.yaml
  • install/0000_50_cluster-update-console-plugin_80_servicemonitor.yaml
  • install/0000_50_cluster-update-console-plugin_90_consoleplugin.yaml
  • install/image-references
🚧 Files skipped from review as they are similar to previous changes (4)
  • install/0000_50_cluster-update-console-plugin_10_namespace.yaml
  • install/image-references
  • install/0000_50_cluster-update-console-plugin_50_deployment.yaml
  • install/0000_50_cluster-update-console-plugin_80_servicemonitor.yaml

The console folks are pushing to decentralize console implementation
away from [1], so we've created a new console plugin for cluster updates
[2].  It's built by both CI [3] and ART [4].  Checking on app.ci
ImageStreams:

  $ oc whoami -c
  default/api-ci-l2s4-p1-openshiftapps-com:6443/wking
  $ oc -n ocp get -o json imagestream 5.0 | jq -r '.status.tags[] | select(.tag == "cluster-update-console-plugin").items[] | .created + " " + .image'
  2026-05-01T20:55:38Z sha256:10e4f1b5763f40372823173b2a9528777ff8e97d416c5447e10df023c0e35656
  2026-04-29T05:00:56Z sha256:b0433455cbbff13bdda03ee78371e18c139adebc00432dea716e8cdf83eeb042
  2026-04-17T11:10:31Z sha256:e1296b64ffb35757fb2fb56bb5dd9cbd55c7f17f9f59c0a10a2d71a0ad6702d3
  $ oc -n ocp get -o json imagestream 5.0-art-latest | jq -r '.status.tags[] | select(.tag == "cluster-update-console-plugin").items[] | .created + " " + .image'
  2026-05-12T19:45:33Z sha256:b943be0ae0eba97c27741d0184e99a77ea928749cc578ae7e17a8e5329652642
  2026-05-12T14:42:55Z sha256:c29ba37ef5a426de5320d680d3fb58befc530274e6df4c32b4dc4fd0acaaaae0
  2026-05-12T09:02:35Z sha256:901bc6aac1142fe1da2a756bb4d91ae8fe14b459ce2bf9904ceae6d2fc818fc2
  2026-05-12T04:38:56Z sha256:1f4e8b200d97f82db1784b4d7fc9f0bd3ccaf2cc664fd5f0ff6b80485da16950
  2026-05-11T22:54:34Z sha256:d369ca7c73d7a3abe159e9e6f5644f63e0b091f0b75f202773a023e91c7faaf6

This commit sets up an image-references file [5], so 'oc adm release
new ...' knows that we'll want that image injected in the Deployment
manifest.  I'm using placeholder.url.oc.will.replace.this.example.org
as part of my placeholder name.  That's similar to the machine-config
operator's use of placeholder.url.oc.will.replace.this.org [6], except
that I'm using a subdomain of the reserved example.com [7], to avoid
any possible confusion with an actually in-use domain.

The new manifests are in run-level 50, which is the default, so they
can roll out in parallel with other components to avoid slowing
updates.

The new manifests are tied to the Console capability [8] and the
TechPreviewNoUpgrade feature set [9] (in the absence of a specific
feature gate for this functionality).

I'm just carrying the old
exclude.release.openshift.io/internal-openshift-hosted annotation over
from other CVO manifests.  It predates cluster profiles [10], and I'm
not sure anyone still uses it, but it seems like the CVO should be
consistent about whether it matters or not anymore.  Perhaps we can
drop it from all CVO manifests in follow-up work.

Otherwise these manifests are loosely based on my attempts to meld the
plugin's Helm chart templates [11] with existing CVO manifest
conventions.

[1]: https://github.com/openshift/console
[2]: https://github.com/openshift/cluster-update-console-plugin
[3]: openshift/release#77945
[4]: openshift-eng/ocp-build-data#10393
[5]: https://github.com/openshift/enhancements/blob/4f67eee19ad16f1d5e9e8a2622b708e2ea6d8e6a/dev-guide/cluster-version-operator/dev/operators.md#how-do-i-ensure-the-right-images-get-used-by-my-manifests
[6]: https://github.com/openshift/machine-config-operator/blob/99cb8a46e6a31b2b72d6a8371c6cd4ee45393263/install/image-references#L10
[7]: https://www.rfc-editor.org/rfc/rfc6761#section-6.5
[8]: https://github.com/openshift/enhancements/blob/4f67eee19ad16f1d5e9e8a2622b708e2ea6d8e6a/enhancements/installer/component-selection.md#manifest-annotations
[9]: https://github.com/openshift/enhancements/blob/4f67eee19ad16f1d5e9e8a2622b708e2ea6d8e6a/enhancements/update/cvo-techpreview-manifests.md#proposal
[10]: https://github.com/openshift/enhancements/blob/4f67eee19ad16f1d5e9e8a2622b708e2ea6d8e6a/enhancements/update/ibm-public-cloud-support.md#cluster-version-operator-changes-for-beta
[11]: https://github.com/openshift/cluster-update-console-plugin/tree/9778f4fc0c19e60cad55a45591a066b6b7a3cb12/charts/openshift-console-plugin/templates
@wking wking force-pushed the cluster-update-console-plugin branch from eda10fc to 62d193f Compare May 13, 2026 01:45

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@install/0000_50_cluster-update-console-plugin_90_consoleplugin.yaml`:
- Line 10: The manifest currently only has
include.release.openshift.io/self-managed-high-availability: "true", which may
unintentionally restrict profile inclusion; update the cluster-profile
annotations on this manifest to either add the standard set used by install
manifests (e.g., include.release.openshift.io/self-managed: "true",
include.release.openshift.io/managed: "true" alongside the existing
include.release.openshift.io/self-managed-high-availability: "true") or add an
in-file comment/PR description that explicitly justifies why only
include.release.openshift.io/self-managed-high-availability is required; change
the annotation keys in the YAML (referencing the annotation name
include.release.openshift.io/self-managed-high-availability) or add the
additional include.release.openshift.io/* keys so the manifest is included in
all intended cluster profiles.
ℹ️ Review info: configuration used: Repository YAML (base), Central YAML (inherited); review profile: CHILL; plan: Enterprise; run ID: 01e45865-f5de-4def-ae29-f1cfdbe7268b

📥 Commits

Reviewing files that changed from the base of the PR and between eda10fc and 62d193f.

📒 Files selected for processing (7)
  • install/0000_50_cluster-update-console-plugin_10_namespace.yaml
  • install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml
  • install/0000_50_cluster-update-console-plugin_50_deployment.yaml
  • install/0000_50_cluster-update-console-plugin_60_service.yaml
  • install/0000_50_cluster-update-console-plugin_80_servicemonitor.yaml
  • install/0000_50_cluster-update-console-plugin_90_consoleplugin.yaml
  • install/image-references
✅ Files skipped from review due to trivial changes (3)
  • install/image-references
  • install/0000_50_cluster-update-console-plugin_60_service.yaml
  • install/0000_50_cluster-update-console-plugin_10_namespace.yaml
🚧 Files skipped from review as they are similar to previous changes (3)
  • install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml
  • install/0000_50_cluster-update-console-plugin_50_deployment.yaml
  • install/0000_50_cluster-update-console-plugin_80_servicemonitor.yaml

Quoted manifest context (`install/0000_50_cluster-update-console-plugin_90_consoleplugin.yaml`):

    capability.openshift.io/name: Console
    release.openshift.io/feature-set: TechPreviewNoUpgrade
    exclude.release.openshift.io/internal-openshift-hosted: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Broaden (or explicitly justify) cluster-profile inclusion annotations.

On line 10, only include.release.openshift.io/self-managed-high-availability: "true" is present, which can unintentionally exclude this manifest from other self-managed profiles. If that’s not intentional, add the standard profile includes used by install manifests.

Suggested manifest update
 metadata:
   name: openshift-cluster-update-console-plugin
   annotations:
     kubernetes.io/description: The OpenShift cluster-update console plugin provides a web-console interface for managing ClusterVersion updates.
     capability.openshift.io/name: Console
     release.openshift.io/feature-set: TechPreviewNoUpgrade
     exclude.release.openshift.io/internal-openshift-hosted: "true"
     include.release.openshift.io/self-managed-high-availability: "true"
+    include.release.openshift.io/single-node-developer: "true"
+    include.release.openshift.io/ibm-cloud-managed: "true"

As per coding guidelines, "All manifests must have appropriate cluster-profile annotations (include.release.openshift.io/self-managed-high-availability, etc.)".


Avoid:

  $ go test ./pkg/payload
  ...
  --- FAIL: Test_cvoManifests (0.02s)
      --- FAIL: Test_cvoManifests/install_dir (0.02s)
          render_test.go:355: failed to load manifests: error parsing: error unmarshaling JSON: while decoding JSON: Resource with fields Group: "image.openshift.io" Kind: "ImageStream" Name: "" must contain kubernetes required fields kind and name

These image-references files are helpers for 'oc adm release new ...'; they do not need every property that would be required to push them into a cluster.
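
Given the failure above, the fix is presumably to give the image-references ImageStream the `kind` and `metadata.name` fields the CVO manifest loader insists on. A sketch of what such a file could look like (the committed file may differ; the metadata name is an assumption):

```yaml
# install/image-references (sketch)
kind: ImageStream
apiVersion: image.openshift.io/v1
metadata:
  # Satisfies the loader's required 'name' field; not otherwise
  # meaningful, since this file is never pushed into a cluster.
  name: cluster-update-console-plugin
spec:
  tags:
  - name: cluster-update-console-plugin
    from:
      kind: DockerImage
      # Placeholder that 'oc adm release new ...' resolves to the
      # real per-release pullspec.
      name: placeholder.url.oc.will.replace.this.example.org/cluster-update-console-plugin
```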
@openshift-ci
openshift-ci Bot commented May 13, 2026

@wking: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| ci/prow/e2e-aws-ovn-techpreview | bf8f4c2 | link | true | /test e2e-aws-ovn-techpreview |
| ci/prow/e2e-agnostic-ovn-techpreview-serial-3of3 | bf8f4c2 | link | true | /test e2e-agnostic-ovn-techpreview-serial-3of3 |
| ci/prow/e2e-agnostic-ovn-upgrade-out-of-change | bf8f4c2 | link | true | /test e2e-agnostic-ovn-upgrade-out-of-change |
| ci/prow/e2e-agnostic-operator | bf8f4c2 | link | true | /test e2e-agnostic-operator |
| ci/prow/e2e-agnostic-ovn-techpreview-serial-2of3 | bf8f4c2 | link | true | /test e2e-agnostic-ovn-techpreview-serial-2of3 |
| ci/prow/e2e-hypershift-conformance | bf8f4c2 | link | true | /test e2e-hypershift-conformance |
| ci/prow/e2e-hypershift | bf8f4c2 | link | true | /test e2e-hypershift |
| ci/prow/e2e-agnostic-ovn-upgrade-into-change | bf8f4c2 | link | true | /test e2e-agnostic-ovn-upgrade-into-change |
| ci/prow/e2e-agnostic-ovn | bf8f4c2 | link | true | /test e2e-agnostic-ovn |
| ci/prow/e2e-agnostic-ovn-techpreview-serial-1of3 | bf8f4c2 | link | true | /test e2e-agnostic-ovn-techpreview-serial-1of3 |

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


Labels

approved Indicates a PR has been approved by an approver from all required OWNERS files. do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type.
