WIP: OTA-1956: install: Add a TechPreviewNoUpgrade cluster-update console plugin#1388
wking wants to merge 2 commits into
Conversation
@wking: This pull request references OTA-1956, which is a valid Jira issue. Warning: the referenced Jira issue has an invalid target version for the branch this PR targets: expected the story to target the "5.0.0" version, but no target version was set.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
No actionable comments were generated in the recent review. 🎉
ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID:
📒 Files selected for processing (1)
✅ Files skipped from review due to trivial changes (1)
Walkthrough
Adds Kubernetes/OpenShift manifests to install the Cluster Updates console plugin: a namespace with pod-security labels and annotations, a default-deny NetworkPolicy, a hardened Deployment and ClusterIP Service, a ServiceMonitor and PrometheusRule alerts, a ConsolePlugin resource, an ImageStream reference, and a small test change to skip image-references files when loading manifests.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 11 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (11 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: wking. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Details: Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml`:
- Around line 13-16: The NetworkPolicy as written uses podSelector: {} and
policyTypes: [Ingress, Egress], which creates a default-deny for all pods;
change the podSelector to target only the console-plugin pods by matching the
plugin's label (e.g., matchLabels for the plugin deployment/service) instead of
{} or, if you intend to keep default-deny, add explicit allow rules: in the same
NetworkPolicy add ingress rules permitting traffic from the console/frontend
selector (or namespace) and add egress rules allowing DNS (UDP/TCP 53), the
Kubernetes API server, and any external endpoints the plugin needs; reference
the podSelector, policyTypes, Ingress and Egress fields when making these
changes so the policy only isolates the plugin pods as intended rather than all
pods.
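To make the suggested narrowing concrete, here is a minimal sketch of a scoped NetworkPolicy, not the PR's actual manifest: the `k8s-app: cluster-update-console-plugin` label is taken from the selector mentioned later in this review, and the DNS-only egress allowance is an assumption about what the plugin minimally needs:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  # Component-scoped name, per the naming nitpick later in this review.
  name: cluster-update-console-plugin-default-deny
spec:
  # Select only the plugin pods instead of {} (all pods in the namespace).
  podSelector:
    matchLabels:
      k8s-app: cluster-update-console-plugin
  policyTypes:
  - Ingress
  - Egress
  # Example allow rule: permit DNS egress so the pod can resolve names.
  egress:
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```

Additional ingress rules (for console-to-plugin traffic) and egress rules (for the API server) would still be needed on top of this sketch.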
In `@install/0000_50_cluster-update-console-plugin_50_deployment.yaml`:
- Around line 47-65: The volumeMount name "cluster-update-console-plugin" in the
container spec does not match the declared volume
"cluster-update-console-plugin-cert", causing the mount to fail; update the
volumeMount name to "cluster-update-console-plugin-cert" (or rename the volume
to match) so the names align between the volumeMount entry and the volumes list
(look for the volumeMounts.name and volumes.name fields).
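For illustration, a trimmed sketch of the aligned Deployment fields; the `mountPath` and `secretName` values are assumptions, and surrounding container fields are omitted:

```yaml
containers:
- name: cluster-update-console-plugin
  volumeMounts:
  # This name must match an entry in volumes[].name below.
  - name: cluster-update-console-plugin-cert
    mountPath: /var/cert  # assumed path, for illustration only
volumes:
- name: cluster-update-console-plugin-cert
  secret:
    secretName: cluster-update-console-plugin-cert  # assumed secret name
```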
In `@install/0000_50_cluster-update-console-plugin_60_service.yaml`:
- Around line 18-34: Remove the stray/malformed second Service block that
defines "name: cluster-version-operator" (the orphaned lines starting after the
metadata) and replace it with a proper spec: for the existing Service resource
named "openshift-cluster-update-console-plugin" so the name matches the
ConsolePlugin backend, which expects "cluster-update-console-plugin"; ensure the
Service's metadata.name is corrected to "cluster-update-console-plugin" (no
"openshift-" prefix) or update the ConsolePlugin to match—then add a spec that
contains the selector "k8s-app: cluster-update-console-plugin" and a ports entry
(e.g., name: metrics, port: 9099, targetPort: metrics) to expose the plugin
metrics instead of the unrelated "cluster-version-operator" fragment.
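Putting the review comment's pieces together, a sketch of a Service that would match the ConsolePlugin backend reference; the namespace is assumed from the manifests elsewhere in this PR, and the port details come from the comment above:

```yaml
kind: Service
apiVersion: v1
metadata:
  # Matches the ConsolePlugin backend's expected service name.
  name: cluster-update-console-plugin
  namespace: openshift-cluster-update-console-plugin
spec:
  selector:
    k8s-app: cluster-update-console-plugin
  ports:
  - name: metrics
    port: 9099
    targetPort: metrics
```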
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: 57e8afcc-7964-49f1-8538-116d60ef55b5
📒 Files selected for processing (7)
- install/0000_50_cluster-update-console-plugin_10_namespace.yaml
- install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml
- install/0000_50_cluster-update-console-plugin_50_deployment.yaml
- install/0000_50_cluster-update-console-plugin_60_service.yaml
- install/0000_50_cluster-update-console-plugin_80_servicemonitor.yaml
- install/0000_50_cluster-update-console-plugin_90_consoleplugin.yaml
- install/image-references
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
This default-deny policy will fully isolate the plugin pods.
With Line 13 selecting all pods and Line 15-16 denying both directions without any allow policy in this PR, console-to-plugin traffic and required outbound traffic (for example DNS/API) will be blocked.
Force-pushed from 612aac0 to eda10fc (Compare).
♻️ Duplicate comments (2)
install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml (1)
Lines 13-16: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Namespace-wide deny will isolate all pods unless allow policies are added.
podSelector: {} with both Ingress and Egress deny types blocks traffic for every pod in the namespace. If this is intended as a baseline deny, pair it with explicit allow policies in this change set (or narrow the selector to plugin pods).
install/0000_50_cluster-update-console-plugin_60_service.yaml (1)
Line 4: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win
Service name mismatches ConsolePlugin backend reference.
This Service is openshift-cluster-update-console-plugin, but the ConsolePlugin backend points to cluster-update-console-plugin. The plugin backend won't resolve this Service as-is.
Suggested minimal fix:
- name: openshift-cluster-update-console-plugin
+ name: cluster-update-console-plugin
🧹 Nitpick comments (1)
install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml (1)
Line 4: ⚡ Quick win
Use a component-scoped NetworkPolicy name.
default-deny is too generic for install/** manifests; include the full component name to keep resource ownership clear.
As per coding guidelines, "Resource names should use the full component name (e.g., cluster-version-operator) not acronyms."
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Duplicate comments:
In `@install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml`:
- Around line 13-16: The NetworkPolicy currently uses podSelector: {} with
policyTypes: Ingress and Egress which enforces namespace-wide deny; narrow the
scope or add explicit allow rules: change the podSelector to target only console
plugin pods (e.g., by label selector for the plugin) or include companion
NetworkPolicy objects that explicitly allow required Ingress and/or Egress
traffic for the plugin (reference podSelector and policyTypes in this manifest
and ensure allow policies cover necessary ports/peers).
In `@install/0000_50_cluster-update-console-plugin_60_service.yaml`:
- Line 4: The Service resource name "openshift-cluster-update-console-plugin"
does not match the ConsolePlugin backend service name
"cluster-update-console-plugin"; update one to match the other so the
ConsolePlugin backend can resolve the Service—either rename the Service name
value to "cluster-update-console-plugin" or change the ConsolePlugin backend
service reference to "openshift-cluster-update-console-plugin", and ensure any
references to Service name in resources or RBAC (if present) are updated
consistently (check the Service resource name and the ConsolePlugin
backend.service.name fields).
---
Nitpick comments:
In `@install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml`:
- Line 4: The NetworkPolicy currently uses the generic name "default-deny";
rename the resource to a component-scoped DNS-1123-compliant name that includes
the full component identifier (for example,
"cluster-update-console-plugin-default-deny" or similar) by updating the
metadata.name value for the NetworkPolicy resource (the symbol "default-deny")
and any references to that name; ensure the new name follows lowercase/dash
rules and update related manifests or Role/RoleBinding references if they refer
to "default-deny".
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: 80d8a20e-32f7-4b5c-a5ed-6d5166be7a3f
📒 Files selected for processing (7)
- install/0000_50_cluster-update-console-plugin_10_namespace.yaml
- install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml
- install/0000_50_cluster-update-console-plugin_50_deployment.yaml
- install/0000_50_cluster-update-console-plugin_60_service.yaml
- install/0000_50_cluster-update-console-plugin_80_servicemonitor.yaml
- install/0000_50_cluster-update-console-plugin_90_consoleplugin.yaml
- install/image-references
🚧 Files skipped from review as they are similar to previous changes (4)
- install/0000_50_cluster-update-console-plugin_10_namespace.yaml
- install/image-references
- install/0000_50_cluster-update-console-plugin_50_deployment.yaml
- install/0000_50_cluster-update-console-plugin_80_servicemonitor.yaml
The console folks are pushing to decentralize console implementation from [1], so we've created a new console plugin for cluster updates [2]. It's built by both CI [3] and ART [4]. Checking on app.ci ImageStreams:

$ oc whoami -c
default/api-ci-l2s4-p1-openshiftapps-com:6443/wking
$ oc -n ocp get -o json imagestream 5.0 | jq -r '.status.tags[] | select(.tag == "cluster-update-console-plugin").items[] | .created + " " + .image'
2026-05-01T20:55:38Z sha256:10e4f1b5763f40372823173b2a9528777ff8e97d416c5447e10df023c0e35656
2026-04-29T05:00:56Z sha256:b0433455cbbff13bdda03ee78371e18c139adebc00432dea716e8cdf83eeb042
2026-04-17T11:10:31Z sha256:e1296b64ffb35757fb2fb56bb5dd9cbd55c7f17f9f59c0a10a2d71a0ad6702d3
$ oc -n ocp get -o json imagestream 5.0-art-latest | jq -r '.status.tags[] | select(.tag == "cluster-update-console-plugin").items[] | .created + " " + .image'
2026-05-12T19:45:33Z sha256:b943be0ae0eba97c27741d0184e99a77ea928749cc578ae7e17a8e5329652642
2026-05-12T14:42:55Z sha256:c29ba37ef5a426de5320d680d3fb58befc530274e6df4c32b4dc4fd0acaaaae0
2026-05-12T09:02:35Z sha256:901bc6aac1142fe1da2a756bb4d91ae8fe14b459ce2bf9904ceae6d2fc818fc2
2026-05-12T04:38:56Z sha256:1f4e8b200d97f82db1784b4d7fc9f0bd3ccaf2cc664fd5f0ff6b80485da16950
2026-05-11T22:54:34Z sha256:d369ca7c73d7a3abe159e9e6f5644f63e0b091f0b75f202773a023e91c7faaf6

This commit sets up an image-references file [5], so 'oc adm release new ...' knows that we'll want that image injected in the Deployment manifest. I'm using placeholder.url.oc.will.replace.this.example.org as part of my placeholder name. That's similar to the machine-config operator's use of placeholder.url.oc.will.replace.this.org [6], except that I'm using a subdomain of the reserved example.org [7], to avoid any possible confusion with an actually in-use domain.

The new manifests are in run-level 50, which is the default, so they can roll out in parallel with other components to avoid slowing updates.
The new manifests are tied to the Console capability [8] and the TechPreviewNoUpgrade feature set [9] (in the absence of a specific feature gate for this functionality).

I'm just carrying the old exclude.release.openshift.io/internal-openshift-hosted annotation over from other CVO manifests. It predates cluster profiles [10], and I'm not sure anyone still uses it, but it seems like the CVO should be consistent about whether it matters anymore. Perhaps we can drop it from all CVO manifests in follow-up work.

Otherwise these manifests are loosely based on my attempts to meld the plugin's Helm chart templates [11] with existing CVO manifest conventions.

[1]: https://github.com/openshift/console
[2]: https://github.com/openshift/cluster-update-console-plugin
[3]: openshift/release#77945
[4]: openshift-eng/ocp-build-data#10393
[5]: https://github.com/openshift/enhancements/blob/4f67eee19ad16f1d5e9e8a2622b708e2ea6d8e6a/dev-guide/cluster-version-operator/dev/operators.md#how-do-i-ensure-the-right-images-get-used-by-my-manifests
[6]: https://github.com/openshift/machine-config-operator/blob/99cb8a46e6a31b2b72d6a8371c6cd4ee45393263/install/image-references#L10
[7]: https://www.rfc-editor.org/rfc/rfc6761#section-6.5
[8]: https://github.com/openshift/enhancements/blob/4f67eee19ad16f1d5e9e8a2622b708e2ea6d8e6a/enhancements/installer/component-selection.md#manifest-annotations
[9]: https://github.com/openshift/enhancements/blob/4f67eee19ad16f1d5e9e8a2622b708e2ea6d8e6a/enhancements/update/cvo-techpreview-manifests.md#proposal
[10]: https://github.com/openshift/enhancements/blob/4f67eee19ad16f1d5e9e8a2622b708e2ea6d8e6a/enhancements/update/ibm-public-cloud-support.md#cluster-version-operator-changes-for-beta
[11]: https://github.com/openshift/cluster-update-console-plugin/tree/9778f4fc0c19e60cad55a45591a066b6b7a3cb12/charts/openshift-console-plugin/templates
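An image-references file is itself a partial ImageStream manifest. A minimal sketch, under the assumptions in the commit message above (the tag name comes from this PR; the exact fields in the PR's file may differ):

```yaml
kind: ImageStream
apiVersion: image.openshift.io/v1
spec:
  tags:
  - name: cluster-update-console-plugin
    from:
      kind: DockerImage
      # Placeholder pull spec; 'oc adm release new ...' replaces it with
      # the real payload image at release-build time.
      name: placeholder.url.oc.will.replace.this.example.org/cluster-update-console-plugin
```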
Force-pushed from eda10fc to 62d193f (Compare).
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@install/0000_50_cluster-update-console-plugin_90_consoleplugin.yaml`:
- Line 10: The manifest currently only has
include.release.openshift.io/self-managed-high-availability: "true", which may
unintentionally restrict profile inclusion; update the cluster-profile
annotations on this manifest to either add the standard set used by install
manifests (e.g., include.release.openshift.io/self-managed: "true",
include.release.openshift.io/managed: "true" alongside the existing
include.release.openshift.io/self-managed-high-availability: "true") or add an
in-file comment/PR description that explicitly justifies why only
include.release.openshift.io/self-managed-high-availability is required; change
the annotation keys in the YAML (referencing the annotation name
include.release.openshift.io/self-managed-high-availability) or add the
additional include.release.openshift.io/* keys so the manifest is included in
all intended cluster profiles.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: 01e45865-f5de-4def-ae29-f1cfdbe7268b
📒 Files selected for processing (7)
- install/0000_50_cluster-update-console-plugin_10_namespace.yaml
- install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml
- install/0000_50_cluster-update-console-plugin_50_deployment.yaml
- install/0000_50_cluster-update-console-plugin_60_service.yaml
- install/0000_50_cluster-update-console-plugin_80_servicemonitor.yaml
- install/0000_50_cluster-update-console-plugin_90_consoleplugin.yaml
- install/image-references
✅ Files skipped from review due to trivial changes (3)
- install/image-references
- install/0000_50_cluster-update-console-plugin_60_service.yaml
- install/0000_50_cluster-update-console-plugin_10_namespace.yaml
🚧 Files skipped from review as they are similar to previous changes (3)
- install/0000_50_cluster-update-console-plugin_20_networkpolicy.yaml
- install/0000_50_cluster-update-console-plugin_50_deployment.yaml
- install/0000_50_cluster-update-console-plugin_80_servicemonitor.yaml
    capability.openshift.io/name: Console
    release.openshift.io/feature-set: TechPreviewNoUpgrade
    exclude.release.openshift.io/internal-openshift-hosted: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
Broaden (or explicitly justify) cluster-profile inclusion annotations.
On Line [10], only include.release.openshift.io/self-managed-high-availability: "true" is present, which can unintentionally exclude this manifest from other self-managed profiles. If that’s not intentional, add the standard profile includes used by install manifests.
Suggested manifest update
metadata:
name: openshift-cluster-update-console-plugin
annotations:
kubernetes.io/description: The OpenShift cluster-update console plugin provides a web-console interface for managing ClusterVersion updates.
capability.openshift.io/name: Console
release.openshift.io/feature-set: TechPreviewNoUpgrade
exclude.release.openshift.io/internal-openshift-hosted: "true"
include.release.openshift.io/self-managed-high-availability: "true"
+ include.release.openshift.io/single-node-developer: "true"
+ include.release.openshift.io/ibm-cloud-managed: "true"

As per coding guidelines, "All manifests must have appropriate cluster-profile annotations (include.release.openshift.io/self-managed-high-availability, etc.)".
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
metadata:
  name: openshift-cluster-update-console-plugin
  annotations:
    kubernetes.io/description: The OpenShift cluster-update console plugin provides a web-console interface for managing ClusterVersion updates.
    capability.openshift.io/name: Console
    release.openshift.io/feature-set: TechPreviewNoUpgrade
    exclude.release.openshift.io/internal-openshift-hosted: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
    include.release.openshift.io/ibm-cloud-managed: "true"
Avoid:
$ go test ./pkg/payload
...
--- FAIL: Test_cvoManifests (0.02s)
--- FAIL: Test_cvoManifests/install_dir (0.02s)
render_test.go:355: failed to load manifests: error parsing: error unmarshaling JSON: while decoding JSON: Resource with fields Group: "image.openshift.io" Kind: "ImageStream" Name: "" must contain kubernetes required fields kind and name
...
These image-references files are helpers for 'oc adm release new ...';
they don't need all the properties set that they'd need to be pushed
into a cluster.
@wking: The following tests failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.