feat: update pika_exporter with new metrics and test files #3236
orangesapor wants to merge 271 commits into OpenAtomFoundation:3.5 from
Conversation
…omFoundation#2456) (OpenAtomFoundation#2457) Co-authored-by: liuchengyu <[email protected]>
…d in its config. this must execute full sync(OpenAtomFoundation#2436) (OpenAtomFoundation#2444) Co-authored-by: liuchengyu <[email protected]>
add rename-command config
…cation_id in its config. this must execute full sync(OpenAtomFoundation#2436) (OpenAtomFoundation#2444)" (OpenAtomFoundation#2460) This reverts commit ab9ed71. Co-authored-by: liuchengyu <[email protected]>
Co-authored-by: wuxianrong <[email protected]>
Co-authored-by: wuxianrong <[email protected]>
* fix: ACL user authentication errors * blacklist instead of acl user * add rename command (OpenAtomFoundation#2462) * support config get userblacklist ---------
…omFoundation#2467) Co-authored-by: liuyuecai <[email protected]>
…top (OpenAtomFoundation#2475) Co-authored-by: liuchengyu <[email protected]>
Co-authored-by: liuchengyu <[email protected]>
…dation#2474) Co-authored-by: liuchengyu <[email protected]>
…enAtomFoundation#2458) Co-authored-by: liuchengyu <[email protected]>
…#2451) Co-authored-by: liuchengyu <[email protected]>
add rename-command go test
…enAtomFoundation#2473) Co-authored-by: liuchengyu <[email protected]>
* add tcl test * Update default.conf --------- Co-authored-by: wuxianrong <[email protected]> Co-authored-by: Xin.Zh <[email protected]>
* delete migrate log --------- Co-authored-by: chejinge <[email protected]>
* fix: dbsize calculate error
…ion#2485) * fix:Binlog is written repeatedly during the data migration process && not delete cache --------- Co-authored-by: chejinge <[email protected]>
* add multi key slot migrate * config set migrate thread --------- Co-authored-by: chejinge <[email protected]>
…undation#2480) * add kubeblock component post start demo * convert pika kubeblocks helm chart to new component definition api (cherry picked from commit 60535d6) * modify cluster yaml * remove dirty code * refine pika cluster yaml * add PIKA_CODIS_DASHBOARD_SVC_NAME var reference * add namespace for kubeblocks pika cluster helm chart * add namespace for sa and rolebinding
* fix: pika slave support multi/exec cmd OpenAtomFoundation#2422 Signed-off-by: HappyUncle <[email protected]> * fix: pika slave support watch cmd OpenAtomFoundation#2422 Signed-off-by: HappyUncle <[email protected]> * test: add replication-test-go, let slave support watch/unwatch/multi/exec/discard Signed-off-by: HappyUncle <[email protected]> --------- Signed-off-by: HappyUncle <[email protected]>
* add codis list test Co-authored-by: liuyuecai <[email protected]>
…s Pika core dump (OpenAtomFoundation#3138) * fix telnet core dump --------- Co-authored-by: wuxianrong <[email protected]>
…OpenAtomFoundation#3139) Co-authored-by: wuxianrong <[email protected]>
Co-authored-by: chejinge <[email protected]>
Removed unused intercepted_list_cmds set from pika_client_conn.cc. --------- Co-authored-by: chejinge <[email protected]>
Co-authored-by: chejinge <[email protected]>
This reverts commit cc4d580.
* add migrate for 4.0 --------- Co-authored-by: chejinge <[email protected]>
* fix:unstable_CI --------- Co-authored-by: chejinge <[email protected]>
* Add disk usage debugging step in workflow
* fix the write-buffer-size overflow issue --------- Co-authored-by: zhouguanghan <[email protected]>
* fix: ttl error for hlen update cache
* feat:add big_key Co-authored-by: chejinge <[email protected]>
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch; check the settings in the CodeRabbit UI or the ⚙️ Run configuration. Configuration used: defaults. Review profile: CHILL. Plan: Pro.
📝 Walkthrough

Makefile and version constants set PikaExporter to 3.5.5. Config loading now provides full defaults and treats missing/failed files as non-fatal. Parsers add version-aware checkers and quieter missing-data handling. New cache/server/stats metrics are added, and many test fixtures for multiple Pika versions were introduced.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Exporter
    participant Client
    participant Parser
    participant Collector
    participant MetricsStore
    Exporter->>Client: request INFO / METRICS
    Client-->>Exporter: raw INFO text
    Exporter->>Parser: parse version -> select VersionChecker
    Parser-->>Exporter: parsed fields (labels, values)
    Exporter->>Collector: map fields to metric configs (version-gated)
    Collector->>MetricsStore: record metrics (with labels)
    MetricsStore-->>Exporter: scrape-ready metrics
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)

tools/pika_exporter/exporter/metrics/keyspace.go (1)

91-100: ⚠️ Potential issue | 🟠 Major

Mirror this spelling fallback in the sibling `keyspace_info` parsers.

These older-range `keyspace_info_all` parsers now accept both `invaild_keys` and `invalid_keys`, but the matching `keyspace_info` parsers for `~3.0.5` and `~3.1.0` still require the misspelled token. On nodes that emit `invalid_keys`, `expire_keys`/`invalid_keys` will scrape while `keys` disappears because `keyspace_info` no longer matches.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/pika_exporter/exporter/metrics/keyspace.go` around lines 91 - 100, the `keyspace_info` parsers for the ~3.0.5 and ~3.1.0 branches currently only accept the misspelled token "invaild_keys", causing metrics to be dropped when nodes emit "invalid_keys"; update the sibling regexes used in the versionMatchParser entries for those keyspace_info parsers (the regexParser instances named like "keyspace_info_..._~3.0.5" and "keyspace_info_..._~3.1.0") to mirror the fallback used in keyspace_info_all by changing the token to an alternation (?:invaild_keys|invalid_keys) so both spellings are matched.
🧹 Nitpick comments (4)

tools/pika_exporter/exporter/test/v3.2.0_master.go (1)

3-4: Prefer `const` for immutable fixture data.

This fixture is static; declaring it as `const` reduces accidental-mutation risk and clarifies intent.

💡 Suggested change
```diff
-var V320MasterInfo = `# Server
+const V320MasterInfo = `# Server
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/pika_exporter/exporter/test/v3.2.0_master.go` around lines 3 - 4, the fixture V320MasterInfo is immutable, so change its declaration from a var to a const: update the declaration of V320MasterInfo (the raw string literal holding the Pika 3.2.0 master instance info) to use const instead of var to prevent accidental mutation and clarify intent.

tools/pika_exporter/exporter/test/v3.2.8_master.go (1)

3-3: Document the intentional `invaild_keys` spelling in the fixture comment.

Lines 59-63 use `invaild_keys`; if this mirrors upstream 3.2.8 output, call it out in the exported var comment to avoid accidental correction later.

Proposed tweak
```diff
-// V328MasterInfo represents Pika 3.2.8 master instance info
+// V328MasterInfo represents Pika 3.2.8 master instance info.
+// Note: `invaild_keys` is intentionally preserved to match upstream INFO output.
 var V328MasterInfo = `# Server
```

Also applies to: 59-63

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/pika_exporter/exporter/test/v3.2.8_master.go` at line 3, update the exported fixture comment for V328MasterInfo to explicitly note that the key name "invaild_keys" is intentionally misspelled to mirror upstream Pika 3.2.8 output; locate the V328MasterInfo declaration and amend its top-line comment to state that "invaild_keys" is preserved as in upstream (do not correct the spelling in the fixture) so future readers won't change it accidentally.

tools/pika_exporter/exporter/test/v3.5.5_slave.go (1)

3-3: Add a doc comment for the exported fixture `V355SlaveInfo`.

The exported variable lacks a doc comment. Similar fixtures for newer versions (3.2.x, 3.3.6) include doc comments; adding one here maintains consistency and satisfies Go's convention for exported symbols.

Suggested comment
```diff
+// V355SlaveInfo represents Pika 3.5.5 slave instance info.
 var V355SlaveInfo = `# Server
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/pika_exporter/exporter/test/v3.5.5_slave.go` at line 3, add a Go doc comment immediately above the exported variable V355SlaveInfo describing what the fixture represents (e.g., "V355SlaveInfo contains the raw Pika/Redis INFO output for a 3.5.5 slave used in tests"). Ensure the comment is a full sentence starting with the variable name and placed directly above the var declaration so it follows Go conventions and matches comments on other fixtures like the 3.2.x/3.3.6 examples.

tools/pika_exporter/exporter/metrics/cache.go (1)

137-145: Normalize the exported metric name to fix the typo.

The metric name `waitting_load_keys_num` contains a typo (should be `waiting_load_keys_num`). While the upstream C++ source uses this typo in its field names, the Go exporter can normalize the public metric name by changing `Name` to `"waiting_load_keys_num"` while preserving the source mapping via `ValueName: "waitting_load_keys_num"`.

Proposed change
```diff
 "waitting_load_keys_num": {
 	Parser: &normalParser{},
 	MetricMeta: &MetaData{
-		Name:      "waitting_load_keys_num",
+		Name:      "waiting_load_keys_num",
 		Help:      "pika serve instance cache waiting load keys number",
 		Type:      metricTypeGauge,
 		Labels:    []string{LabelNameAddr, LabelNameAlias},
 		ValueName: "waitting_load_keys_num",
 	},
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/pika_exporter/exporter/metrics/cache.go` around lines 137 - 145, the exported metric name contains a typo: update the MetaData.Name for the metric entry keyed by "waitting_load_keys_num" to "waiting_load_keys_num" while preserving the original source field mapping by keeping MetaData.ValueName as "waitting_load_keys_num"; leave the surrounding Parser (&normalParser{}), Type (metricTypeGauge) and Labels (LabelNameAddr, LabelNameAlias) unchanged so the exporter normalizes the public metric name but still maps to the upstream field.
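The `Name`/`ValueName` split above generalizes: the map key and `ValueName` keep the upstream (misspelled) field name, and only the exposed metric name is normalized. A minimal stand-alone sketch with simplified types (not the exporter's real structs):

```go
package main

import "fmt"

// MetaData mirrors the idea of the exporter's metric metadata (simplified).
type MetaData struct {
	Name      string // public, normalized metric name
	ValueName string // field name exactly as the Pika INFO source emits it
}

func main() {
	metrics := map[string]MetaData{
		// Map key and ValueName keep the upstream spelling; Name is normalized.
		"waitting_load_keys_num": {
			Name:      "waiting_load_keys_num",
			ValueName: "waitting_load_keys_num",
		},
	}

	// Parsed INFO fields arrive keyed by the source spelling.
	parsed := map[string]float64{"waitting_load_keys_num": 42}

	for key, meta := range metrics {
		// Lookup happens via ValueName, so the source mapping is unchanged;
		// only the exposed name differs.
		if v, ok := parsed[meta.ValueName]; ok {
			fmt.Printf("%s (source %q) = %g\n", meta.Name, key, v)
		}
	}
}
```

Existing dashboards keyed on the source field keep working because the lookup path never sees the corrected spelling.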
ℹ️ Review info

⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9355ce56-7ff9-4a98-a874-d94c17b40c7f

📒 Files selected for processing (24)
- tools/pika_exporter/Makefile
- tools/pika_exporter/config/info.toml
- tools/pika_exporter/exporter/client.go
- tools/pika_exporter/exporter/conf.go
- tools/pika_exporter/exporter/metrics/cache.go
- tools/pika_exporter/exporter/metrics/command_exec_count.go
- tools/pika_exporter/exporter/metrics/keyspace.go
- tools/pika_exporter/exporter/metrics/parser.go
- tools/pika_exporter/exporter/metrics/replication.go
- tools/pika_exporter/exporter/metrics/server.go
- tools/pika_exporter/exporter/metrics/stats.go
- tools/pika_exporter/exporter/parser.go
- tools/pika_exporter/exporter/pika.go
- tools/pika_exporter/exporter/test/test.go
- tools/pika_exporter/exporter/test/v3.2.0_master.go
- tools/pika_exporter/exporter/test/v3.2.0_slave.go
- tools/pika_exporter/exporter/test/v3.2.8_master.go
- tools/pika_exporter/exporter/test/v3.2.8_slave.go
- tools/pika_exporter/exporter/test/v3.3.6_master.go
- tools/pika_exporter/exporter/test/v3.3.6_slave.go
- tools/pika_exporter/exporter/test/v3.5.5_master.go
- tools/pika_exporter/exporter/test/v3.5.5_slave.go
- tools/pika_exporter/go.mod
- tools/pika_exporter/version.go

💤 Files with no reviewable changes (1)
- tools/pika_exporter/exporter/metrics/replication.go
```toml
# Pika Exporter Configuration

# The address of the pika instance to monitor
# If not set, the exporter will use the discovery mechanism
# pika_addr = "127.0.0.1:9379"

# The password for the pika instance
# pika_password = ""

# The alias for the pika instance
# pika_alias = ""

# The address of the codis topom
# codis_addr = "http://127.0.0.1:18087"

# The namespace for the metrics
# namespace = "pika"

# The path to the metrics definition file
# metrics_file = ""

# The port to listen on for the web interface
# web_listen_address = ":9121"

# The path under which to expose metrics
# web_telemetry_path = "/metrics"

# The log level
# log_level = "info"

# The log format
# log_format = "text"

# The path to the config file
# config = "config/info.toml"

# The key space stats clock
# keyspace_stats_clock = -1

# The key patterns to check
# check_key_patterns = ""

# The keys to check
# check_keys = ""

# The scan count for checking keys
# check_scan_count = 100
```
This template no longer matches the config schema that LoadConfig() reads.
exporter.InfoConfig only unmarshals booleans like server, stats, commandstats, rocksdb, and cache. None of the keys documented here map to that struct, so editing this file will not control which INFO sections are enabled and users lose the discoverable module toggles that existed before.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tools/pika_exporter/config/info.toml` around lines 1 - 47, The template
config file no longer matches LoadConfig() / exporter.InfoConfig: it only
documents unrelated keys so toggles for INFO sections are missing; update
tools/pika_exporter/config/info.toml to include the boolean keys that
exporter.InfoConfig unmarshals (server, stats, commandstats, rocksdb, cache) as
commented entries with descriptions and defaults (true/false as appropriate) so
users can enable/disable INFO sections, and ensure any existing keys referenced
by LoadConfig() remain present and documented to keep behavior discoverable.
```go
// Initialize default configuration
InfoConf = &InfoConfig{
	Server:       true,
	Data:         true,
	Clients:      true,
	Stats:        true,
	CPU:          true,
	Replication:  true,
	Keyspace:     true,
	Execcount:    true,
	Commandstats: true,
	Rocksdb:      false,
	Cache:        true,
}

// Try to load config file if path is provided
if InfoConfigPath != "" {
	err := readConfig(InfoConfigPath)
	if err != nil {
		log.Warnf("Failed to load config file %s: %s, using default configuration", InfoConfigPath, err)
	}
```
Loading a file currently discards the defaults you just initialized.
LoadConfig() seeds InfoConf with defaults, but readConfig() unmarshals into a fresh InfoConfig{} and then replaces InfoConf. A sparse or comment-only TOML therefore flips every flag back to false, so pointing at the new template disables collection instead of falling back to the defaults.
Suggested fix

🐛 Keep defaults and overlay file values on top
```diff
 func readConfig(filePath string) error {
-	conf := InfoConfig{}
+	conf := *InfoConf
 	tree, err := toml.LoadFile(filePath)
 	if err != nil {
 		return fmt.Errorf("unable to load toml file %s: %s", filePath, err)
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tools/pika_exporter/exporter/conf.go` around lines 34 - 54, InfoConf is
seeded with defaults but readConfig currently unmarshals into a fresh InfoConfig
and then replaces InfoConf, so sparse TOML resets all flags; change readConfig
(or the caller that uses InfoConfigPath) to unmarshal/merge file values into the
existing InfoConf instead of replacing it — either by decoding directly into the
existing InfoConf pointer or by decoding into a temporary and copying only set
fields onto InfoConf (overlay), ensuring that fields absent from the file keep
the initialized defaults (refer to InfoConf, readConfig and InfoConfigPath).
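The overlay behavior requested above relies on decode-into-existing-value semantics: decoders only set fields that appear in the input. A sketch of that pattern using `encoding/json` from the stdlib (the real code uses a TOML library, and these field names are a simplified subset of the exporter's):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// InfoConfig is a simplified subset of the exporter's section toggles.
type InfoConfig struct {
	Server  bool `json:"server"`
	Stats   bool `json:"stats"`
	Rocksdb bool `json:"rocksdb"`
	Cache   bool `json:"cache"`
}

// overlay decodes file data onto an already-populated config, so keys
// absent from the file keep their default values.
func overlay(conf InfoConfig, data []byte) (InfoConfig, error) {
	err := json.Unmarshal(data, &conf)
	return conf, err
}

func main() {
	defaults := InfoConfig{Server: true, Stats: true, Rocksdb: false, Cache: true}

	// Sparse config: only one key is set; the others must survive untouched.
	conf, err := overlay(defaults, []byte(`{"rocksdb": true}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(conf.Server, conf.Stats, conf.Rocksdb, conf.Cache)
}
```

Decoding into a zero-valued struct instead (as the current `readConfig` does) is what flips every unset flag back to `false`.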
```go
if err := c.Collect(metric); err != nil {
	// Keep error logging as this indicates a real problem
	log.Errorf("statusToGaugeParser::Parse metric collect failed. metric:%#v err:%s",
		m, m.ValueName)
}
```
Log the actual collect error here.
These Errorf calls still print m.ValueName in the err slot, so when collection fails the log message drops the real failure reason.
🪵 Minimal fix
```diff
-		log.Errorf("statusToGaugeParser::Parse metric collect failed. metric:%#v err:%s",
-			m, m.ValueName)
+		log.Errorf("statusToGaugeParser::Parse metric collect failed. metric:%#v err:%s",
+			m, err)
-		log.Errorf("normalParser::Parse metric collect failed. metric:%#v err:%s",
-			m, m.ValueName)
+		log.Errorf("normalParser::Parse metric collect failed. metric:%#v err:%s",
+			m, err)
```

Also applies to: 522-525
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tools/pika_exporter/exporter/metrics/parser.go` around lines 56 - 60, The log
call in statusToGaugeParser::Parse is printing m.ValueName into the error slot
instead of the actual collect error; update the error logging inside the
c.Collect error branch (the block that checks "if err := c.Collect(metric); err
!= nil") to include the real error variable (err) in the log.Errorf message and
any contextual fields (e.g., metric or m) rather than m.ValueName in the err
position; apply the same fix to the other identical logging site referenced (the
other c.Collect error branch) so both messages show the actual collect error.
```go
// Silently ignore empty matches - this is normal for version-specific metrics
// No need to log warnings for expected empty values
if len(matchMaps) == 0 {
	if opt.CurrentVersion == nil || !opt.CurrentVersion.CheckContainsEmptyRegexName(p.name) {
		log.Warnf("regexParser::Parse reg find sub match nil. name:%s", p.name)
	}
	return
}
```
Unexpected parse misses are now silent.
regexParser, normalParser, and timeParser all return quietly on missing matches or values, so a broken regex or renamed INFO field now looks identical to an expected version gap and the metric just vanishes. Since the version-checker machinery is still part of ParseOption, please keep at least debug/error accounting for misses that are not explicitly whitelisted.
Also applies to: 514-516, 552-558
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tools/pika_exporter/exporter/metrics/parser.go` around lines 453 - 457, The
current early return when matchMaps is empty silently drops unexpected parse
misses; update the code so regexParser, normalParser, and timeParser do not
simply return when len(matchMaps)==0 but instead emit at least a debug (or error
for non-whitelisted) log including the parser name, ParseOption identifier, and
the input snippet (or metric name) via processLogger (or the existing logger) so
broken regexes or renamed INFO fields are visible; only skip logging when the
ParseOption explicitly marks the field as an expected/whitelisted missing value
(use the existing version-checker flag on ParseOption), and apply the same
change to the analogous empty-match returns at the other occurrences (the blocks
around the other two empty-return sites).
```go
if err != nil {
	log.Errorf("get keys from patterns failed. addr:%s err:%s", c.Addr(), err.Error())
	// Silently continue on error - this is normal for version-specific features
} else {
	allKeys = append(allKeys, keys...)
}

log.Debugf("collectKeys allKeys:%#v", allKeys)
for _, k := range allKeys {
	if err := c.Select(k.db); err != nil {
		log.Warnf("couldn't select database %s when getting key info. addr:%s", k.db, c.Addr())
		// Silently skip database selection errors
		continue
	}

	keyInfo, err := c.Type(k.key)
	if err != nil {
		log.Warnf("get key info failed. addr:%s key:%s err:%s", c.Addr(), k.key, err.Error())
		// Silently skip key type errors
		continue
```
Don’t make key-collection failures completely invisible.
These branches now continue without returning anything, so collectKeys() and the background keyspace scan can fail while exporter_scrape_errors stays flat and the related metrics just disappear. Please surface at least the first error back to the caller or keep a debug/warn log so partial scrapes remain observable.
Also applies to: 402-404, 437-442
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tools/pika_exporter/exporter/pika.go` around lines 329 - 344, The code in
collectKeys() currently swallows errors from key collection and per-key
operations (e.g., the err after calling c.Select(k.db) and c.Type(k.key)),
making failures invisible; modify collectKeys() to capture and propagate at
least the first non-nil error back to the caller (or increment a scrape error
metric) instead of silently continuing—store the first encountered error in a
local variable (e.g., firstErr) when keys collection fails or when c.Select /
c.Type return errors, continue processing other keys but after the loop return
firstErr (or return combined error) so callers know the scan failed; apply the
same pattern for the other silent-skip branches noted (the similar blocks around
where keys are collected and processed) so partial scrapes are observable.
```
config_file:/data02/pika5436/pika5436.conf
server_id:1
run_id:55070bbe17d5ac6c5310b86f367efcbf26e23f87
```
Sanitize environment-specific identifiers in this fixture.
This snapshot includes real-looking internal IPs and a concrete host path. Even in test data, those should be anonymized before committing so the repo does not leak internal topology or filesystem details.
Also applies to: 97-100
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tools/pika_exporter/exporter/test/v3.5.5_master.go` around lines 16 - 18,
Replace environment-specific values in the fixture: replace the concrete
config_file path (/data02/pika5436/pika5436.conf) with a neutral placeholder
(e.g. /path/to/pika.conf), replace real-looking identifiers like run_id and
server_id with non-identifying sample values or placeholders (e.g. run_id:
<REDACTED_RUN_ID>, server_id: 1 -> <REDACTED_SERVER_ID>), and scan the same
fixture for the same sensitive values referenced around lines 97-100 and
sanitize them similarly; update the entries named config_file, server_id, and
run_id in test/v3.5.5_master.go to use these anonymized placeholders.
```
@@ -1,12 +1,47 @@
server = true
data = true
clients = true
```

```go
for i, labelName := range m.Labels {
	labelValue, ok := findInMap(labelName, opt.Extracts)
	if !ok {
		log.Debugf("statusToGaugeParser::Parse not found label value. metricName:%s labelName:%s",
```
Why was the Debugf log removed here? The same question applies to the code below.
There is no need to print this many logs: when monitoring several Pika versions at once, different versions expose different parameters, which generates a lot of log output, so these log lines can be removed.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tools/pika_exporter/exporter/client.go (1)
132-143: ⚠️ Potential issue | 🟠 Major

Do not return success when all requested INFO sections fail.

Current logic can return `("", nil)` if every enabled section errors. That hides a full collection failure and can produce silent metric loss.

💡 Suggested fix
```diff
 func (c *client) InfoNoneCommandList() (string, error) {
 	var rst []string
+	var (
+		requested int
+		succeeded int
+		lastErr   error
+	)
 	sectionsMap := map[string]bool{
 		"COMMAND_EXEC_COUNT": InfoConf.Execcount,
 		"COMMANDSTATS":       InfoConf.Commandstats,
 		"ROCKSDB":            InfoConf.Rocksdb,
 	}
 	for section, flag := range sectionsMap {
 		if flag {
+			requested++
 			info, err := c.InfoCommand(section)
 			if err != nil {
 				log.Warnf("Failed to get INFO %s: %v", section, err)
+				lastErr = err
 				continue // Skip this section but continue with others
 			}
+			succeeded++
 			rst = append(rst, info)
 		}
 	}
+	if requested > 0 && succeeded == 0 && lastErr != nil {
+		return "", fmt.Errorf("failed to collect all requested INFO sections: %w", lastErr)
+	}
 	return strings.Join(rst, "\n"), nil
 }

 func (c *client) InfoAllCommandList() (string, error) {
 	var rst []string
+	var (
+		requested int
+		succeeded int
+		lastErr   error
+	)
 	sectionsMap := map[string]bool{
 		"SERVER": InfoConf.Server,
@@
 	for section, flag := range sectionsMap {
 		if flag {
+			requested++
 			info, err := c.InfoCommand(section)
 			if err != nil {
 				log.Warnf("Failed to get INFO %s: %v", section, err)
+				lastErr = err
 				continue // Skip this section but continue with others
 			}
+			succeeded++
 			rst = append(rst, info)
 		}
 	}
+	if requested > 0 && succeeded == 0 && lastErr != nil {
+		return "", fmt.Errorf("failed to collect all requested INFO sections: %w", lastErr)
+	}
 	return strings.Join(rst, "\n"), nil
 }
```

Also applies to: 161-172
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/pika_exporter/exporter/client.go` around lines 132 - 143, The loop that collects INFO sections (iterating over sectionsMap and calling c.InfoCommand) currently returns strings.Join(rst, "\n"), nil even when rst is empty, hiding a full failure; change the logic after the loop to check if len(rst) == 0 and, in that case, return a non-nil error (e.g., fmt.Errorf("failed to fetch any INFO sections") or aggregate the last/collected errors) instead of nil; apply the same fix to the second identical block around the 161-172 region so both places return an error when no sections succeeded.
ℹ️ Review info

⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f8878b5b-ca13-4c38-ba99-63f4e5b8edd4

⛔ Files ignored due to path filters (1)
- tools/pika_exporter/go.sum is excluded by !**/*.sum

📒 Files selected for processing (2)
- tools/pika_exporter/exporter/client.go
- tools/pika_exporter/go.mod

🚧 Files skipped from review as they are similar to previous changes (1)
- tools/pika_exporter/go.mod
Summary by CodeRabbit
- Bug Fixes
- New Features
- Chores
- Documentation