feat: add big key detection #3115

Closed
YuCai18 wants to merge 4 commits into OpenAtomFoundation:3.5 from YuCai18:codereview3.5/bigkey

Conversation

Collaborator

@YuCai18 YuCai18 commented Jun 25, 2025

New feature #3079: add big key detection and big key log writing to Pika 3.5.

Feature demonstration

1. info command output

(image)

(image)

2. Big key log output

(image)

Go test results

(image)

@github-actions github-actions Bot added the ✏️ Feature New feature or request label Jun 25, 2025

coderabbitai Bot commented Jun 25, 2025

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


@YuCai18 YuCai18 force-pushed the codereview3.5/bigkey branch 28 times, most recently from 8212d18 to fb563a7 on June 30, 2025 12:31
@YuCai18 YuCai18 force-pushed the codereview3.5/bigkey branch 20 times, most recently from 8b7dc3a to aee8840 on July 4, 2025 07:18
Comment thread conf/pika.conf
# Interval, in minutes, at which the big key scan thread writes the big key log.
# Default: 1
# When set to 0, big key logging will be disabled.
bigkeys_log_interval : 1
Collaborator

Change the default to 0 here, so logging is off by default.

Collaborator Author

done

Comment thread src/storage/src/redis.cc Outdated

Redis::~Redis() {
std::vector<rocksdb::ColumnFamilyHandle*> tmp_handles = handles_;
for (auto handle : handles_) {
Collaborator

Why was this change made?

Collaborator Author

This was left over from an earlier change that added two background threads and required a custom destructor; it is no longer needed, so the code is restored to the original.

Collaborator Author

done

Comment thread src/storage/src/redis.cc Outdated
return;
}

std::lock_guard<std::mutex> lock(big_keys_mutex_);
Collaborator

Is this lock protecting big_keys_info_map_ or bigkeys? If it is big_keys_info_map_, is a mutex really needed here?

Collaborator Author

This mutex protects big_keys_info_map_; isn't it already acquired here?

Collaborator Author

done

Comment thread src/storage/src/redis_hashes.cc Outdated
// Data CF
column_families.emplace_back("data_cf", data_cf_ops);
return rocksdb::DB::Open(db_ops, db_path, column_families, &handles_, &db_);
s = rocksdb::DB::Open(db_ops, db_path, column_families, &handles_, &db_);
Collaborator

Why was this changed?

Collaborator Author

I originally thought the Check function needed to be called here as well, and then forgot to remove it.

Collaborator Author

done

}
s = db_->Write(default_write_options_, &batch);
UpdateSpecificKeyStatistics(key.ToString(), statistic);
if (s.ok()) {
Collaborator

You are missing the size accounting for inserts into an already-existing Hash key; the logic here only runs big key detection when a new key is inserted. I don't think that is necessary anyway: you can determine whether the key is a big key before the Write, without the extra Get.

Collaborator Author

done

@@ -421,6 +422,15 @@ Status RedisHashes::HIncrby(const Slice& key, const Slice& field, int64_t value,
}
s = db_->Write(default_write_options_, &batch);
UpdateSpecificKeyStatistics(key.ToString(), statistic);
Collaborator

The same applies to HIncrbyfloat.

}
s = db_->Write(default_write_options_, &batch);
UpdateSpecificKeyStatistics(key.ToString(), statistic);
if (s.ok()) {
Collaborator

Likewise, there is no need to Get after the insert to check whether this is a big key; you should know before calling Write by looking directly at the const Slice& key and const std::vector& fvs parameters.

return s;
}
return db_->Write(default_write_options_, &batch);
s = db_->Write(default_write_options_, &batch);
Collaborator

Same here.

Comment thread include/pika_admin.h
rescan_ = false;
off_ = false;
keyspace_scan_dbs_.clear();
info_section_ = kInfoErr;
Collaborator

Why is this set to kInfoErr in Clear(), and in what scenario does this command call Clear()?

Collaborator Author

I use the kInfoErr state to represent an error or uninitialized state. It ensures the command object's internal state is reset to its initial state before the object is reused, preventing stale data. After a command finishes executing, Clear() is called to reset internal state and release resources, ready for the next reuse.

Comment thread src/pika_server.cc Outdated
void PikaServer::UpdateDBBigKeysConfig() {
std::shared_lock l(dbs_rw_);
for (const auto& db_item : dbs_) {
db_item.second->DBLockShared();
Collaborator

This should take DBLock(), the write lock, since you modify values inside Storage afterwards.

Collaborator Author

done

Comment thread src/pika_server.cc Outdated
}

void PikaServer::UpdateDBBigKeysConfig() {
std::shared_lock l(dbs_rw_);
Collaborator

Same here: take the write lock.

Collaborator Author

Keeping std::shared_lock l(dbs_rw_) at the outer level is appropriate; there is no need to change it to a write lock. dbs_rw_ protects the structure of the dbs_ map itself. In UpdateDBBigKeysConfig() we only iterate over dbs_ without inserting, erasing, or rearranging pointers, so a read lock suffices.

Collaborator

@Mixficsol Mixficsol Jul 23, 2025

db_item.second->UpdateStorageBigKeysConfig: aren't you performing an update on db_item here?

YuCai18: done

Collaborator Author

done

Comment thread src/pika_server.cc Outdated
thread_local uint64_t last_output_time = 0;
uint64_t current_time = pstd::NowMicros();

uint64_t interval_us = static_cast<uint64_t>(interval_minutes) * 60 * 1000000;
Collaborator

Why not make g_pika_conf->bigkeys_log_interval() return uint64_t, so the cast here is unnecessary?

Collaborator Author

done

Comment thread src/storage/src/redis.cc
small_compaction_threshold_(5000),
small_compaction_duration_threshold_(10000) {
small_compaction_duration_threshold_(10000),
db_(nullptr),
Collaborator

Why is db_ initialized to nullptr here?

Collaborator Author

Initializing db_ to nullptr in the constructor indicates that the database has not been opened yet. rocksdb::DB::Open(ops, db_path, &db_) is what actually creates the rocksdb::DB and writes the pointer back into db_. If any member function mistakenly uses db_ before Open() has been called, a null-pointer check catches the problem early.

Comment thread src/storage/src/redis_sets.cc Outdated
// Member CF
column_families.emplace_back("member_cf", member_cf_ops);
return rocksdb::DB::Open(db_ops, db_path, column_families, &handles_, &db_);
s = rocksdb::DB::Open(db_ops, db_path, column_families, &handles_, &db_);
Collaborator

Same here.

Collaborator Author

done

Comment thread src/storage/src/redis_streams.cc Outdated
// Data CF
column_families.emplace_back("data_cf", data_cf_ops);
return rocksdb::DB::Open(db_ops, db_path, column_families, &handles_, &db_);
s = rocksdb::DB::Open(db_ops, db_path, column_families, &handles_, &db_);
Collaborator

What does this change accomplish? Both versions just return s.

Collaborator Author

done

Comment thread src/storage/src/redis_strings.cc Outdated
ops.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table_ops));

return rocksdb::DB::Open(ops, db_path, &db_);
Status s = rocksdb::DB::Open(ops, db_path, &db_);
Collaborator

Same here.

Collaborator Author

done

Comment thread src/storage/src/redis_zsets.cc Outdated
column_families.emplace_back("data_cf", data_cf_ops);
column_families.emplace_back("score_cf", score_cf_ops);
return rocksdb::DB::Open(db_ops, db_path, column_families, &handles_, &db_);
s = rocksdb::DB::Open(db_ops, db_path, column_families, &handles_, &db_);
Collaborator

Same here.

Collaborator Author

done

Comment thread src/storage/src/redis_zsets.cc Outdated
return s;
}


Collaborator

Same here.

Collaborator Author

done

Comment thread tools/pika_exporter/exporter/client.go Outdated
return info, nil
}

info, err := c.Info()
Collaborator

Why pull this out into a separate c.InfoBigKeys()? It could go directly into c.Info(), c.InfoAll(), or c.InfoNoneCommandList().

Collaborator Author

Removed.

Comment thread tools/pika_exporter/exporter/client.go Outdated
return redis.String(c.conn.Do("INFO", command))
}

func (c *client) InfoBigKeys() (string, error) {
Collaborator

This is not needed.

Collaborator Author

done

Comment thread tools/pika_exporter/exporter/parser.go Outdated
return version, extracts, nil
}

func parseInfoBigkey(s string) string {
Collaborator

The parsing does not need to be pulled out into a separate function; just adjust your regular expression.

Collaborator Author

done

@chejinge
Collaborator

#include "blackwidow/blackwidow.h"
#include <iostream>
//#include <regex>
#include "src/redis_strings.h"
#include "src/redis_hashes.h"
#include "src/redis_zsets.h"
#include "src/redis_sets.h"
//#include "blackwidow/util.h"
#include "src/redis_lists.h"
#include "src/strings_value_format.h"
#include "src/base_meta_value_format.h"
#include <cstring>
#include <cstdio>
#include <sys/stat.h>

using namespace std;

static void Usage() {
  fprintf(stderr, "example: pika_keys dump/20230330\n");
}

std::string ReplaceAll(std::string str, const std::string& from, const std::string& to) {
size_t start_pos = 0;
while((start_pos = str.find(from, start_pos)) != std::string::npos) {
str.replace(start_pos, from.length(), to);
start_pos += to.length(); // Handles case where 'to' is a substring of 'from'
}
return str;
}

// Returns the raw stat() result: 0 when the path exists, non-zero otherwise.
// Callers compare the result against 0.
int directory_exists(const std::string& path) {
  struct stat st;
  return stat(path.c_str(), &st);
}

int main(int argc, char* argv[]) {
if (argc < 2) {
    Usage();
    exit(-1);
}

std::string db_path = argv[1];

rocksdb::Status s;

// key_name[99] stays '\0', so strncpy(key_name, ..., 99) below always leaves
// a null-terminated string (the original key_name[100] = '\0' wrote one past
// the end of the array).
char key_name[100];
key_name[99] = '\0';
unsigned long key_size;


//cout<< db_path <<endl;
// Init db
rocksdb::Options options;
//options.create_if_missing = true;
options.keep_log_file_num = 10;
options.max_manifest_file_size = 64 * 1024 * 1024;
options.max_log_file_size = 512 * 1024 * 1024;
options.write_buffer_size = 512 * 1024 * 1024; // 512M
options.target_file_size_base = 40 * 1024 * 1024; // 40M
options.max_open_files = 1048;

blackwidow::BlackwidowOptions bwOptions;
bwOptions.options = options;
blackwidow::BlackWidow bw;

int64_t curtime;
rocksdb::DB* rocksDB;
rocksdb::ReadOptions iterator_options;

//strings
std::string path = db_path + "/./strings";
if (directory_exists(path) == 0) {
    blackwidow::RedisStrings stringsDB(&bw, blackwidow::kStrings);
    s = stringsDB.Open(bwOptions, path);
    rocksDB = stringsDB.GetDB();
    rocksDB->GetEnv()->GetCurrentTime(&curtime).ok();
    auto iter = rocksDB->NewIterator(iterator_options);
    for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
        blackwidow::ParsedStringsValue parsed_strings_value(iter->value());
        int32_t ttl = -1;
        int64_t ts = (int64_t)(parsed_strings_value.timestamp());
        if (ts != 0) {
            int64_t diff = ts - curtime;
            ttl = diff ;
        }
        strncpy(key_name, iter->key().ToString().c_str(), 99);
        key_size = iter->key().size() + iter->value().size();
        if (ttl < -1) continue;
        //cout <<  iter->value().size() << endl;
        //blackwidow::ParsedStringsValue parsed_strings_value(iter->value());
        //cout << "string " << iter->key().ToString().c_str() << " " << parsed_strings_value.value().ToString().c_str() << " " << iter->key().size() + iter->value().size() << " " << ttl << endl;
        //cout << "string " << iter->key().size() + iter->value().size() << " " << iter->key().ToString().c_str() << " " << iter->key().size() + iter->value().size() << " " << ttl << endl;
        //std::string k = std::regex_replace(iter->key().ToString().c_str(), std::regex(R"(\n)"), "\\n");
        std::string key_str =  ReplaceAll(iter->key().ToString(), "\n", "\\n");
        key_str =  ReplaceAll(key_str, " ", "\\x20");
        cout << "string " << iter->key().size() + iter->value().size() << " " << key_str << " "  << ttl << endl;
     //printf("[key : %-30s] [value : %-30s] [timestamp : %-10d] [version : %d] [survival_time : %d]\n",
     //  iter->key().ToString().c_str(),
     //  parsed_strings_value.value().ToString().c_str(),
     //  parsed_strings_value.timestamp(),
     //  parsed_strings_value.version(),
     //  ttl);
    }
}

path = db_path + "/./hashes";
if (directory_exists(path) == 0) {
    blackwidow::RedisHashes hashesDB(&bw, blackwidow::kHashes);
    s = hashesDB.Open(bwOptions, path);
    rocksdb::Env::Default()->GetCurrentTime(&curtime);

    rocksDB = hashesDB.GetDB();
    auto iter = rocksDB->NewIterator(iterator_options);
    for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
        //blackwidow::ParsedStringsValue parsed_strings_value(iter->value());
        blackwidow::ParsedHashesMetaValue parsed_hashes_meta_value(iter->value());
        int64_t left = -1;
        int64_t sum = 0;
        std::string k = iter->key().ToString();
        sum = sum + k.size() + 12;
        std::vector<blackwidow::FieldValue> fvs;
        blackwidow::Status s = hashesDB.HGetall(k, &fvs);
        for (auto it = fvs.begin(); it != fvs.end(); it ++) {
            //cout<<"filed:" << it->field.size() <<endl;
            //cout<<"value:" << it->value.size() <<endl;
            sum = sum + 4 + k.size() + 4 + it->field.size();
            sum = sum + it->value.size();
        }
        if (parsed_hashes_meta_value.IsStale() || parsed_hashes_meta_value.count() == 0) {
        } else {
            if (!parsed_hashes_meta_value.IsPermanentSurvival()) {
                left = parsed_hashes_meta_value.timestamp() - curtime;
                if (left <= 0) left = -2;
            }
            std::string key_str =  ReplaceAll(iter->key().ToString(), "\n", "\\n");
            key_str =  ReplaceAll(key_str, " ", "\\x20");
            std::cout << "hash " << sum << " " << key_str.c_str() << " " << left << std::endl;
        }
    }

}

std::string start_key;
std::string next_key;
std::string pattern("*");
int64_t batch_count = 1000;
bool fin = false;

path = db_path + "/./zsets";
if (directory_exists(path) == 0) {
    blackwidow::RedisZSets zsetsDB(&bw, blackwidow::kZSets);
    s = zsetsDB.Open(bwOptions, path);

    start_key.clear();
    next_key.clear();
    fin = false;

    while (!fin) {
      int64_t count = batch_count;
      std::vector<std::string> keys;
      fin = zsetsDB.Scan(start_key, pattern, &keys, &count, &next_key);
      start_key = next_key;

        for (auto k : keys) {
        //cout<<"k:" << k <<endl;
        int64_t sum = 0;
        sum = sum + k.size() + 12;
        std::vector<blackwidow::ScoreMember> score_members;
        blackwidow::Status s = zsetsDB.ZRange(k, 0, -1, &score_members);
        for (auto it = score_members.begin(); it != score_members.end(); it++) {
              sum = sum + (4 + k.size() + 4 + it->member.size() + 8) * 2;
            //cout<<"score:" << std::to_string(it->score) <<endl;
            //cout<<"member:" << it->member <<endl;
        }

        int64_t ttl = -1;
        s = zsetsDB.TTL(k, &ttl);

        strncpy(key_name, k.c_str(), 99);
        key_size = sum;
        cout<< "zset " << sum << " " << ReplaceAll(k, "\n", "\\n") << " " << ttl <<endl;
      }
    }
}

path = db_path + "/./sets";
if (directory_exists(path) == 0) {
    blackwidow::RedisSets setsDB(&bw, blackwidow::kSets);
    s = setsDB.Open(bwOptions, path);

    start_key.clear();
    next_key.clear();
    fin = false;

    while (!fin) {
      int64_t count = batch_count;
      std::vector<std::string> keys;
      fin = setsDB.Scan(start_key, pattern, &keys, &count, &next_key);
      start_key = next_key;

        for (auto k : keys) {
            //cout<<"k:" << k <<endl;
            int64_t sum = 0;
            sum = sum + k.size() + 12;
            std::vector<std::string> members;
            blackwidow::Status s = setsDB.SMembers(k, &members);
            for (auto it = members.begin(); it != members.end(); it ++) {
                //cout<<"member:" << *it <<endl;
                sum = sum + 4 + k.size() + 4 + (*it).size();
            }

            int64_t ttl = -1;
            s = setsDB.TTL(k, &ttl);

            strncpy(key_name, k.c_str(), 99);
            key_size = sum;
                //cout<< "set " << k << " " << sum << " " << ttl <<endl;
                cout<< "set " << sum << " " << ReplaceAll(k, "\n", "\\n") << " " << ttl <<endl;
       }
    }
  }

path = db_path + "/./lists";
if (directory_exists(path) == 0) {
blackwidow::RedisLists listsDB(&bw, blackwidow::kLists);
s = listsDB.Open(bwOptions, path);

    start_key.clear();
    next_key.clear();
    fin = false;

    while (!fin) {
      int64_t count = batch_count;
      std::vector<std::string> keys;
      fin = listsDB.Scan(start_key, pattern, &keys, &count, &next_key);
      start_key = next_key;

      for (auto k : keys) {
         //cout<<"k:" << k <<endl;
         int64_t sum = 0;
         sum = sum + k.size() + 12 + 16;
         int64_t pos = 0;
         std::vector<std::string> list;
         blackwidow::Status s = listsDB.LRange(k, pos, pos + batch_count - 1, &list);
         while (s.ok() && !list.empty()) {

           for (auto e : list) {
               //cout<<"e:" << e <<endl;
               sum = sum + 4 + k.size() + 4 + 8 + e.size();
           }

           pos += batch_count;
           list.clear();
           s = listsDB.LRange(k, pos, pos + batch_count - 1, &list);
         }

         int64_t ttl = -1;
         s = listsDB.TTL(k, &ttl);

         strncpy(key_name, k.c_str(), 99);
         key_size = sum;
         cout<< "list " << sum << " " << ReplaceAll(k, "\n", "\\n") << " " << ttl <<endl;
      }
    }

}

}
