From c294814b6b0a3a52ca6fcb4eece0b43576d5d24c Mon Sep 17 00:00:00 2001
From: Oleksandr Kulychok
Date: Tue, 21 Apr 2026 04:44:03 +0300
Subject: [PATCH 1/2] Fix preStop behavior for KEDA scale-down

Updated the preStop hook to decouple it from the Selenium node
command-line arguments. This prevents random node terminations during
KEDA scale-down.

---
 website_and_docs/content/blog/2022/scaling-grid-with-keda.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/website_and_docs/content/blog/2022/scaling-grid-with-keda.md b/website_and_docs/content/blog/2022/scaling-grid-with-keda.md
index ee3b20eef140..18e5cc942024 100644
--- a/website_and_docs/content/blog/2022/scaling-grid-with-keda.md
+++ b/website_and_docs/content/blog/2022/scaling-grid-with-keda.md
@@ -118,7 +118,7 @@ spec:
         lifecycle:
           preStop:
             exec:
-              command: ["/bin/sh", "-c", "curl --request POST 'localhost:5555/se/grid/node/drain' --header 'X-REGISTRATION-SECRET;'; tail --pid=$(pgrep -f '[n]ode --bind-host false --config /opt/selenium/config.toml') -f /dev/null; sleep 30s"]
+              command: ["/bin/sh", "-c", "curl --request POST 'localhost:5555/se/grid/node/drain' --header 'X-REGISTRATION-SECRET;'; while curl -fs localhost:5555/status >/dev/null; do sleep 1; done; sleep 10"]
 ```
 
 #### Breaking this down
@@ -219,4 +219,4 @@ spec:
               command: ["/bin/sh", "-c", "curl --request POST 'localhost:5555/se/grid/node/drain' --header 'X-REGISTRATION-SECRET;'; tail --pid=$(pgrep -f '[n]ode --bind-host false --config /opt/selenium/config.toml') -f /dev/null; sleep 30s"]
 ```
 
-That is it, your Selenium Grid pods should now scale up and down properly without any lost sessions!
\ No newline at end of file
+That is it, your Selenium Grid pods should now scale up and down properly without any lost sessions!
From 9f90be079745837ac41123ff47a1ba34aff0e6c0 Mon Sep 17 00:00:00 2001
From: Oleksandr Kulychok
Date: Tue, 21 Apr 2026 05:10:38 +0300
Subject: [PATCH 2/2] Update pod termination process for Selenium Grid

Apply the fix to all examples.

---
 .../content/blog/2022/scaling-grid-with-keda.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/website_and_docs/content/blog/2022/scaling-grid-with-keda.md b/website_and_docs/content/blog/2022/scaling-grid-with-keda.md
index 18e5cc942024..d24a65a3c75a 100644
--- a/website_and_docs/content/blog/2022/scaling-grid-with-keda.md
+++ b/website_and_docs/content/blog/2022/scaling-grid-with-keda.md
@@ -132,8 +132,8 @@ grace period for your cluster nodes as well.
 - We curl the `localhost` of our pod to tell it to drain. The pod will no longer accept new
   session requests and will finish its current test. More information on this
   [can be found in the Selenium Grid documentation](https://www.selenium.dev/documentation/grid/advanced_features/endpoints/#drain).
-- We then tail the internal node process that will continue to run until the node has been drained.
-- After this we give the pod 30 seconds to finish anything else before giving the full termination command.
+- We then poll the Selenium node `/status` endpoint until it becomes unavailable, which indicates that the node has fully drained and shut down.
+- After this we give the pod 10 seconds to finish anything else before giving the full termination command.
 
 And with that our application can now safely scale down our selenium browser
 deployments!
@@ -216,7 +216,7 @@ spec:
         lifecycle:
          preStop:
            exec:
-              command: ["/bin/sh", "-c", "curl --request POST 'localhost:5555/se/grid/node/drain' --header 'X-REGISTRATION-SECRET;'; tail --pid=$(pgrep -f '[n]ode --bind-host false --config /opt/selenium/config.toml') -f /dev/null; sleep 30s"]
+              command: ["/bin/sh", "-c", "curl --request POST 'localhost:5555/se/grid/node/drain' --header 'X-REGISTRATION-SECRET;'; while curl -fs localhost:5555/status >/dev/null; do sleep 1; done; sleep 10"]
 ```
 
 That is it, your Selenium Grid pods should now scale up and down properly without any lost sessions!
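For reference, the one-liner these patches install as the preStop hook is fairly dense. Unrolled, it reads roughly like the sketch below. This is not part of the patch series: `NODE_URL`, `drain_node`, and `wait_until_down` are illustrative names, and in the pod the whole sequence runs as a single `/bin/sh -c` command.

```shell
#!/bin/sh
# Unrolled sketch of the patched preStop hook (illustrative structure).

NODE_URL="localhost:5555"

# Step 1: ask the node to drain. It stops accepting new session requests
# and finishes the test it is currently running. The trailing ';' makes
# curl send the X-REGISTRATION-SECRET header with an empty value.
drain_node() {
    curl --request POST "${NODE_URL}/se/grid/node/drain" \
         --header 'X-REGISTRATION-SECRET;'
}

# Step 2: poll a status check until it fails, i.e. until the drained node
# process has shut down, then wait a short settle period (default 10s).
wait_until_down() {
    check="$1"
    settle="${2:-10}"
    while $check >/dev/null 2>&1; do
        sleep 1
    done
    sleep "$settle"
}

# In the pod this would be invoked as:
#   drain_node; wait_until_down "curl -fs ${NODE_URL}/status"
```

Polling `/status` rather than `tail`-ing a `pgrep`-matched process is what decouples the hook from the node's exact command-line arguments, which is the point of the fix.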