feat: add big key detection #3115
Conversation
# Interval time for scanning expired big key threads.
# Default: 1
# When set to 0, big key logging will be disabled.
bigkeys_log_interval : 1
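A minimal C++ sketch of how this config value could gate log output — the function name and the time-based gate are illustrative assumptions, not the PR's actual code; it only shows the documented contract that `0` disables logging while a positive value is treated as minutes:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical gate: returns true when a big key log line should be emitted.
// interval_minutes == 0 disables big key logging entirely (per the config docs).
bool ShouldLogBigKey(uint64_t interval_minutes, uint64_t now_us, uint64_t& last_output_us) {
  if (interval_minutes == 0) {
    return false;  // logging disabled by config
  }
  uint64_t interval_us = interval_minutes * 60ULL * 1000000ULL;  // minutes -> microseconds
  if (now_us - last_output_us >= interval_us) {
    last_output_us = now_us;  // remember when we last logged
    return true;
  }
  return false;
}
```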
Redis::~Redis() {
  std::vector<rocksdb::ColumnFamilyHandle*> tmp_handles = handles_;
  for (auto handle : handles_) {
This is left over from the earlier change that added two background threads, which required rewriting the destructor. It is no longer needed, so the destructor should be restored to the original code.
  return;
}

std::lock_guard<std::mutex> lock(big_keys_mutex_);
Is this lock meant to protect big_keys_info_map_ or bigkeys? If it protects big_keys_info_map_, is a mutex even needed here?
The mutex protects big_keys_info_map_, and it is already acquired here, isn't it?
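The pattern under discussion can be sketched as follows — class and member names mirror the PR's `big_keys_info_map_` / `big_keys_mutex_` but the class itself is an illustrative stand-in, not the actual Pika type:

```cpp
#include <cstdint>
#include <map>
#include <mutex>
#include <string>

// Illustrative only: a key -> size map guarded by a single mutex, mirroring
// the big_keys_info_map_ / big_keys_mutex_ pairing debated above. Every
// access to the map, read or write, takes the same lock.
class BigKeysInfo {
 public:
  void Update(const std::string& key, uint64_t size) {
    std::lock_guard<std::mutex> lock(big_keys_mutex_);  // protects the map only
    big_keys_info_map_[key] = size;
  }
  bool Contains(const std::string& key) {
    std::lock_guard<std::mutex> lock(big_keys_mutex_);
    return big_keys_info_map_.count(key) > 0;
  }
 private:
  std::mutex big_keys_mutex_;
  std::map<std::string, uint64_t> big_keys_info_map_;
};
```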
// Data CF
column_families.emplace_back("data_cf", data_cf_ops);
- return rocksdb::DB::Open(db_ops, db_path, column_families, &handles_, &db_);
+ s = rocksdb::DB::Open(db_ops, db_path, column_families, &handles_, &db_);
I originally thought the Check function needed to be called here as well, and later forgot to delete it.
}
s = db_->Write(default_write_options_, &batch);
UpdateSpecificKeyStatistics(key.ToString(), statistic);
if (s.ok()) {
You're missing the insert-statistics check for Hash keys that already exist. The logic you wrote here performs big key detection for the insertion of a new key, and as I understand it that isn't necessary: you can already determine whether the key is a big key before the Write, without an extra Get operation.
@@ -421,6 +422,15 @@ Status RedisHashes::HIncrby(const Slice& key, const Slice& field, int64_t value,
}
s = db_->Write(default_write_options_, &batch);
UpdateSpecificKeyStatistics(key.ToString(), statistic);

}
s = db_->Write(default_write_options_, &batch);
UpdateSpecificKeyStatistics(key.ToString(), statistic);
if (s.ok()) {
Same here: there is no need to Get after the insert to decide whether it's a big key. You should already know before calling Write whether it is one — just inspect the two parameters, const Slice& key and const std::vector& fvs.
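The reviewer's point can be sketched in C++: for an HMSet-like write, the field count is already available from the arguments, so the size check can run before db_->Write with no extra Get. All names here are illustrative, and `kBigKeyThreshold` stands in for whatever configured limit Pika actually uses:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Stand-in for storage's field/value pair type.
struct FieldValue {
  std::string field;
  std::string value;
};

// Assumed threshold; in Pika this would come from configuration.
constexpr size_t kBigKeyThreshold = 1000;

// Decide from the arguments alone, before any Write or Get, whether this
// insert would make (or already is) a big key.
bool IsBigHashWrite(size_t existing_field_count, const std::vector<FieldValue>& fvs) {
  return existing_field_count + fvs.size() >= kBigKeyThreshold;
}
```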
return s;
}
- return db_->Write(default_write_options_, &batch);
+ s = db_->Write(default_write_options_, &batch);
rescan_ = false;
off_ = false;
keyspace_scan_dbs_.clear();
info_section_ = kInfoErr;
Why is the state set to kInfoErr in Clear()? In what scenario does this command call Clear()?
I use kInfoErr to represent an error or uninitialized state. This ensures the command object's internal state is reset to its initial values before it is reused, preventing stale data. When a command finishes executing, Clear() is called to reset internal state and release resources in preparation for the next reuse.
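The reset-before-reuse pattern described above can be sketched with a simplified stand-in for the command object — the enum values and members echo the diff, but this is not the actual Pika class:

```cpp
#include <vector>

// Illustrative command object: Clear() returns every member to its
// initial "not yet parsed" state so the object can be safely reused.
enum InfoSection { kInfoErr = 0, kInfoAll, kInfoBigKeys };

struct InfoCmd {
  bool rescan_ = false;
  bool off_ = false;
  std::vector<int> keyspace_scan_dbs_;
  InfoSection info_section_ = kInfoErr;

  void Clear() {
    rescan_ = false;
    off_ = false;
    keyspace_scan_dbs_.clear();
    info_section_ = kInfoErr;  // sentinel: error / uninitialized state
  }
};
```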
void PikaServer::UpdateDBBigKeysConfig() {
  std::shared_lock l(dbs_rw_);
  for (const auto& db_item : dbs_) {
    db_item.second->DBLockShared();
This should probably take DBLock(), the write lock, since the code below modifies values inside Storage.
}

void PikaServer::UpdateDBBigKeysConfig() {
  std::shared_lock l(dbs_rw_);
Keeping std::shared_lock l(dbs_rw_) at the outer level is appropriate here; there is no need to change it to a write lock. dbs_rw_ protects the structure of the dbs_ map container itself. In UpdateDBBigKeysConfig() we only iterate over dbs_ and never insert, erase, or re-point entries, so a read lock is sufficient.
Doesn't db_item.second->UpdateStorageBigKeysConfig update db_item here?
YuCai18: done
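The locking argument in this thread can be sketched: a shared lock on the container suffices while iteration only reads the map, even though each element mutates its own internal state under its own lock. Types here are simplified stand-ins for Pika's actual classes:

```cpp
#include <map>
#include <memory>
#include <mutex>
#include <shared_mutex>
#include <string>

// Stand-in for the per-DB storage object; it guards its own fields.
struct Storage {
  std::mutex m;
  int big_key_threshold = 0;
  void UpdateStorageBigKeysConfig(int v) {
    std::lock_guard<std::mutex> lk(m);  // element-level protection
    big_key_threshold = v;
  }
};

std::shared_mutex dbs_rw_;  // protects the structure of dbs_ itself
std::map<std::string, std::shared_ptr<Storage>> dbs_;

void UpdateDBBigKeysConfig(int v) {
  // Read lock: we iterate but never insert/erase/re-point map entries.
  std::shared_lock<std::shared_mutex> l(dbs_rw_);
  for (const auto& db_item : dbs_) {
    db_item.second->UpdateStorageBigKeysConfig(v);
  }
}
```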
thread_local uint64_t last_output_time = 0;
uint64_t current_time = pstd::NowMicros();

uint64_t interval_us = static_cast<uint64_t>(interval_minutes) * 60 * 1000000;
Why not make g_pika_conf->bigkeys_log_interval() return uint64_t, so the cast here isn't needed?
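The suggestion amounts to typing the getter as uint64_t so the call site needs no static_cast — a hypothetical PikaConf fragment, not the real class:

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical config fragment: if the getter returns uint64_t directly,
// the call site can multiply without a static_cast.
class PikaConf {
 public:
  uint64_t bigkeys_log_interval() { return bigkeys_log_interval_.load(); }
  void set_bigkeys_log_interval(uint64_t v) { bigkeys_log_interval_.store(v); }
 private:
  std::atomic<uint64_t> bigkeys_log_interval_{1};  // default: 1 minute
};

uint64_t IntervalMicros(PikaConf& conf) {
  return conf.bigkeys_log_interval() * 60 * 1000000;  // no cast required
}
```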
small_compaction_threshold_(5000),
- small_compaction_duration_threshold_(10000) {
+ small_compaction_duration_threshold_(10000),
+ db_(nullptr),
The constructor initializes db_ to nullptr to indicate that the database has not been opened yet. rocksdb::DB::Open(ops, db_path, &db_) is where the rocksdb::DB is actually created and the pointer written back into db_. If any member function mistakenly uses db_ before Open() is called, a null-pointer check catches the problem early.
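The fail-fast idea described above can be sketched with stand-in types (this is not the real Redis/rocksdb class, just the null-until-opened pattern):

```cpp
struct DB {};  // stand-in for rocksdb::DB

class Redis {
 public:
  Redis() : db_(nullptr) {}  // nullptr = database not opened yet
  bool IsOpen() const { return db_ != nullptr; }
  void Open() {
    // Stand-in for rocksdb::DB::Open writing the pointer back into db_.
    static DB instance;
    db_ = &instance;
  }
 private:
  DB* db_;
};
```

Any member function can then guard with `if (db_ == nullptr)` and surface misuse before Open() instead of dereferencing garbage.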
// Member CF
column_families.emplace_back("member_cf", member_cf_ops);
- return rocksdb::DB::Open(db_ops, db_path, column_families, &handles_, &db_);
+ s = rocksdb::DB::Open(db_ops, db_path, column_families, &handles_, &db_);

// Data CF
column_families.emplace_back("data_cf", data_cf_ops);
- return rocksdb::DB::Open(db_ops, db_path, column_families, &handles_, &db_);
+ s = rocksdb::DB::Open(db_ops, db_path, column_families, &handles_, &db_);

ops.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table_ops));
- return rocksdb::DB::Open(ops, db_path, &db_);
+ Status s = rocksdb::DB::Open(ops, db_path, &db_);

column_families.emplace_back("data_cf", data_cf_ops);
column_families.emplace_back("score_cf", score_cf_ops);
- return rocksdb::DB::Open(db_ops, db_path, column_families, &handles_, &db_);
+ s = rocksdb::DB::Open(db_ops, db_path, column_families, &handles_, &db_);

return s;
}
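These diffs all change `return rocksdb::DB::Open(...)` into capturing the status first. A minimal sketch of why, using a stand-in Status type (the post-open step shown is a placeholder, not the PR's actual initialization):

```cpp
#include <string>

// Stand-in mirroring rocksdb::Status's ok() contract.
struct Status {
  bool ok_ = true;
  std::string msg;
  bool ok() const { return ok_; }
};

Status OpenDB(bool succeed) {
  return Status{succeed, succeed ? "" : "open failed"};
}

// Capturing the status (instead of `return OpenDB(...)`) leaves room for
// follow-up work after opening, performed only on success.
Status OpenAndInit(bool succeed, bool& post_init_ran) {
  Status s = OpenDB(succeed);
  if (s.ok()) {
    post_init_ran = true;  // placeholder for post-open initialization
  }
  return s;
}
```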
return info, nil
}

info, err := c.Info()
Why factor out a separate c.InfoBigKeys() here? This could be written directly in c.Info(), c.InfoAll(), or c.InfoNoneCommandList().
return redis.String(c.conn.Do("INFO", command))
}

func (c *client) InfoBigKeys() (string, error) {
return version, extracts, nil
}

func parseInfoBigkey(s string) string {
The parsing doesn't need to be factored into a separate function; just adjust your regular expression.
#include "blackwidow/blackwidow.h"

using namespace std;

static void Usage();
std::string ReplaceAll(std::string str, const std::string& from, const std::string& to) { ... }
bool directory_exists(const std::string& path) { ... }
int main(int argc, char *argv[]) {
  ...
  path = db_path + "/./lists";
  ...
}
New feature #3079: pika 3.5 adds big key detection and big key log writing
Feature demonstration
1. INFO command output
2. Big key log output
Go test results