
Commit 631b9e1

Merge pull request #313882 from dmonroym/dmon/update-memory-best-practices
Update memory best practices: HA replication in metrics and capacity planning
2 parents 8a1512d + 82f0bfd commit 631b9e1

2 files changed

Lines changed: 31 additions & 3 deletions


articles/redis/best-practices-development.md

Lines changed: 1 addition & 1 deletion
@@ -83,7 +83,7 @@ Azure Managed Redis requires TLS encrypted communications by default. TLS versio

## Monitor memory usage, CPU usage metrics, client connections, and network bandwidth

When using an Azure Managed Redis instance in production, we recommend setting alerts for the **Used Memory Percentage**, **CPU**, and **Connected Clients** metrics. If these metrics are consistently above 75%, consider scaling your instance to a tier with more memory or higher throughput. For more details, see [when to scale](how-to-scale.md#when-to-scale). For details on how memory is reported and how to plan capacity, see [memory management](best-practices-memory-management.md).

## Consider enabling Data Persistence or Data Backup

articles/redis/best-practices-memory-management.md

Lines changed: 30 additions & 2 deletions
@@ -14,6 +14,31 @@ appliesto:
In this article, we discuss effective memory management of an Azure Managed Redis cache.
## Understand how memory usage is reported
The **Used Memory** metric reports the total memory consumed by your database, including all shards. When High Availability is enabled, this metric includes the memory used by both primary and replica shards. This means the reported value can be roughly **twice** the size of your actual dataset.
For example, if you store 10 GB of data in a cache with High Availability enabled, the **Used Memory** metric reports approximately 20 GB.
The **Used Memory** metric doesn't include memory fragmentation. Actual physical memory consumption on the server can be higher due to allocator overhead. For more details on what each metric includes, see the [monitoring data reference](monitor-cache-reference.md#details-about-azure-managed-redis-metrics).
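
The reporting behavior described above can be sketched with simple arithmetic. This toy helper is not part of any Azure SDK; the function name and the factor-of-two model are illustrative assumptions based on the description above:

```python
# Toy model (not an Azure API): how the Used Memory metric relates to dataset
# size. With High Availability, primary and replica shards both count.
def reported_used_memory_gb(dataset_gb: float, high_availability: bool) -> float:
    replication_factor = 2 if high_availability else 1  # replica roughly doubles usage
    return dataset_gb * replication_factor

print(reported_used_memory_gb(10, high_availability=True))   # 20, matching the example above
print(reported_used_memory_gb(10, high_availability=False))  # 10
```

Keep in mind this models only the metric as reported; actual physical consumption runs higher once fragmentation and allocator overhead are included.
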
## Estimate memory for capacity planning
When planning the memory you need, account for these factors beyond just the raw size of your values:
- **Per-key overhead**: Each key stored in Redis includes internal metadata (pointers, type info, expiration tracking). This overhead is typically 50 to 100 bytes per key, depending on the key name length and value type. For large numbers of small keys, this overhead can be significant.
- **Key names**: The memory used to store your key names adds up at scale. Shorter key names help reduce memory usage.
- **Expiration tracking**: Keys with a TTL set consume extra memory for expiration bookkeeping.
- **High Availability replication**: With High Availability enabled, the dataset is replicated. The **Used Memory** metric reflects both primary and replica memory, but the SKU memory limit already accounts for this. You don't need to choose a larger SKU to accommodate replication; select a SKU based on your actual dataset size.
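
The factors above can be combined into a back-of-the-envelope estimate. The 50 to 100 byte per-key overhead range comes from this article; the midpoint constant, the function name, and the workload numbers below are illustrative assumptions, not measured values:

```python
PER_KEY_OVERHEAD_BYTES = 75  # midpoint of the 50-100 byte range cited above

def estimate_dataset_bytes(num_keys: int, avg_key_name_bytes: int, avg_value_bytes: int) -> int:
    # Per key: internal metadata overhead + key name + value payload.
    return num_keys * (PER_KEY_OVERHEAD_BYTES + avg_key_name_bytes + avg_value_bytes)

# Example workload: 50 million keys with 40-byte names and 200-byte values.
total = estimate_dataset_bytes(50_000_000, 40, 200)
print(f"{total / 1024**3:.1f} GiB")  # ~14.7 GiB, before fragmentation
```

Validate an estimate like this against actual usage with the `MEMORY USAGE` command described below, and remember that fragmentation adds further physical overhead on top of it.
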
To check the exact memory cost of a specific key, use the Redis [`MEMORY USAGE`](https://redis.io/commands/memory-usage) command:
```
MEMORY USAGE <your_key_name>
```
This command returns the total bytes consumed by a key, including all internal overhead. Use this to validate your per-key memory estimates against actual usage.
## Eviction policy
Choose an [eviction policy](https://redis.io/topics/lru-cache) that works for your application. The default policy for Azure Managed Redis is `volatile-lru`, which means that only keys with a TTL value set by a command like [EXPIRE](https://redis.io/commands/expire) are eligible for eviction. If no keys have a TTL value, the system doesn't evict any keys. If you want to allow any key to be evicted under memory pressure, consider the `allkeys-lru` policy.
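
The difference between the two policies can be sketched as follows. This is a toy illustration of eviction *eligibility* only; it does not model Redis's actual approximated-LRU sampling, and the key names are made up:

```python
# Keys mapped to an optional TTL in seconds; None means no expiration set.
keys = {"session:1": 3600, "session:2": 600, "config:app": None}

def eviction_candidates(keys: dict, policy: str) -> list:
    if policy == "volatile-lru":
        # Only keys with a TTL set are eligible for eviction.
        return [k for k, ttl in keys.items() if ttl is not None]
    if policy == "allkeys-lru":
        # Any key may be evicted under memory pressure.
        return list(keys)
    raise ValueError(f"unhandled policy: {policy}")

print(eviction_candidates(keys, "volatile-lru"))  # ['session:1', 'session:2']
print(eviction_candidates(keys, "allkeys-lru"))   # ['session:1', 'session:2', 'config:app']
```

Under `volatile-lru`, a cache full of keys without TTLs (like `config:app` here) can hit its memory limit with nothing eligible to evict, which is why setting expirations matters with the default policy.
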
@@ -24,9 +49,12 @@ Set an expiration value on your keys. An expiration removes keys proactively ins
## Monitor memory usage

We recommend monitoring the **Used Memory Percentage** metric rather than raw **Used Memory**. The percentage metric already accounts for your SKU's total memory limit, including High Availability replication, so it gives you a straightforward view of how close you are to capacity without needing to adjust for replica memory.

Add alerting on **Used Memory Percentage** to ensure that you don't run out of memory and have the chance to scale your cache before seeing issues. If your **Used Memory Percentage** is consistently over 75%, consider increasing your memory by scaling to a higher tier. For information on tiers, see [Architecture](architecture.md#sharding-configuration).
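
The 75% guidance reduces to simple arithmetic. Azure Monitor alerts evaluate the threshold for you; the helper below is only an illustration of the check, and its name and the example sizes are assumptions:

```python
SCALE_THRESHOLD_PCT = 75  # per the guidance above

def should_consider_scaling(used_memory_bytes: int, sku_limit_bytes: int) -> bool:
    used_memory_percentage = 100 * used_memory_bytes / sku_limit_bytes
    return used_memory_percentage > SCALE_THRESHOLD_PCT

# 9.5 GB used against a 12 GB SKU limit is about 79%: above the threshold.
print(should_consider_scaling(9_500_000_000, 12_000_000_000))  # True
```

Because the percentage is computed against the SKU limit, which already includes replica memory, no extra adjustment is needed when High Availability is enabled.
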

## Related content
- [Monitoring data reference](monitor-cache-reference.md)
- [Best practices for development](best-practices-development.md)
- [Best practices for scaling](best-practices-scale.md)
