Description
The caching architecture relies on watcherRegistry to ensure that only a single cache listener goroutine runs per cluster context. CheckForChanges() records the contextKey in this map up front to claim the slot before invoking go runWatcher(ctx, k8scache, contextKey, kContext).
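The registration pattern described above can be sketched as follows. This is a minimal illustration, not the project's actual code: watcherRegistry is assumed to be a sync.Map, and the runWatcher signature is simplified to a single argument.

```go
package main

import "sync"

// watcherRegistry tracks which cluster contexts already have a
// watcher goroutine (assumed to be a sync.Map keyed by contextKey).
var watcherRegistry sync.Map

// CheckForChanges claims the contextKey before spawning the watcher,
// so at most one watcher goroutine runs per cluster context.
func CheckForChanges(contextKey string) {
	if _, loaded := watcherRegistry.LoadOrStore(contextKey, struct{}{}); loaded {
		return // a watcher is (believed to be) already running for this context
	}
	go runWatcher(contextKey)
}

// runWatcher stands in for the real watcher loop.
func runWatcher(contextKey string) {
	// ... build clients, start informers, watch for changes ...
}
```

LoadOrStore makes the check-and-register step atomic, which is what prevents two concurrent callers from both spawning a watcher.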
However, runWatcher() suffers from a state-management bug. It has several early-exit conditions during startup that abort the goroutine:
kContext.RESTConfig() errs.
dynamic.NewForConfig(config) errs.
discoveryClient.ServerPreferredResources() errs.
When runWatcher bails out on any of these checks, it simply returns nil and exits. Crucially, it does not unregister its contextKey from the watcherRegistry.
Impact
As a result, the watcherRegistry permanently believes an active goroutine is maintaining the cache for this cluster. Subsequent calls silently bypass watcher startup:
if _, loaded := watcherRegistry.Load(contextKey); loaded {
    return // silently assumes a goroutine is actively tracking the Kubernetes cache
}