
Commit c7da69b

Daniel Monroy and Copilot committed
Update memory best practices docs: HA replication in metrics, capacity planning
- Add sections explaining how the Used Memory metric includes replica memory with HA
- Add capacity planning guidance with per-key overhead and the MEMORY USAGE command
- Add memory management section to Enterprise tiers best practices
- Add cross-links between AMR, ACRE, and Enterprise tiers memory docs
- Add memory management link from development best practices monitoring section

Addresses documentation gap identified in IcM 764020947 where customers were confused by memory metrics showing ~2x their expected dataset size.

Related work item: https://dev.azure.com/msazure/RedisCache/_workitems/edit/37177660

Co-authored-by: Copilot <[email protected]>
1 parent c76c345 commit c7da69b

4 files changed

Lines changed: 41 additions & 4 deletions


articles/azure-cache-for-redis/cache-best-practices-enterprise-tiers.md

Lines changed: 9 additions & 0 deletions
@@ -205,6 +205,15 @@ Many customers want to use persistence to take periodic backups of the data on t

The E1 SKU is intended for dev/test scenarios, primarily. E1 runs on smaller [burstable VMs](/azure/virtual-machines/b-series-cpu-credit-model/b-series-cpu-credit-model). Burstable VMs offer variable performance based on how much CPU is consumed. Unlike other Enterprise SKU offerings, you can't _scale out_ the E1 SKU, although it's still possible to _scale up_ to a larger SKU. The E1 SKU also doesn't support [active geo-replication](cache-how-to-active-geo-replication.md).

+## Memory management
+
+For guidance on understanding memory usage, capacity planning, and eviction policies, see [Memory management for Azure Managed Redis](/azure/redis/best-practices-memory-management). Key points for Enterprise tier caches:
+
+- The **Used Memory** metric includes memory from both primary and replica shards when High Availability is enabled, which can make the value appear twice as large as the actual dataset.
+- Each key has internal metadata overhead beyond the raw value size. For workloads with many small keys, this overhead can be significant.
+- Use the Redis `MEMORY USAGE` command to check the exact per-key memory cost.
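The first bullet is the one that most often surprises capacity reviews. As a quick sanity check, the reported metric can be translated back into an approximate dataset size. This is a minimal editorial sketch, not part of the commit:

```python
def approx_dataset_size(used_memory_bytes: int, high_availability: bool = True) -> float:
    """Estimate the actual dataset size from the reported Used Memory metric.

    With High Availability enabled, Used Memory counts both primary and
    replica shards, so the dataset is roughly half the reported value.
    """
    return used_memory_bytes / 2 if high_availability else used_memory_bytes

# A cache reporting 20 GiB Used Memory with HA holds roughly a 10 GiB dataset.
reported = 20 * 2**30
print(approx_dataset_size(reported) / 2**30)  # → 10.0
```

The halving is only an approximation: replica shards mirror the dataset, but fragmentation and per-shard overhead differ slightly between primary and replica.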
+
## Related content

+- [Memory management for Azure Managed Redis](/azure/redis/best-practices-memory-management)
- [Development](cache-best-practices-development.md)

articles/azure-cache-for-redis/cache-best-practices-memory-management.md

Lines changed: 2 additions & 0 deletions
@@ -59,6 +59,8 @@ Eviction can increase server load and memory fragmentation. For more information

- [Memory policies](cache-configure.md#memory-policies)
- [Troubleshoot high memory usage](cache-troubleshoot-timeouts.md#high-memory-usage)
+- [Best practices for Enterprise tiers](cache-best-practices-enterprise-tiers.md)
+- [Memory management for Azure Managed Redis](/azure/redis/best-practices-memory-management)
- [Best practices for scaling](cache-best-practices-scale.md)
- [Best practices for development](cache-best-practices-development.md)
- [Azure Cache for Redis development FAQs](cache-development-faq.yml)

articles/redis/best-practices-development.md

Lines changed: 1 addition & 1 deletion
@@ -83,7 +83,7 @@ Azure Managed Redis requires TLS encrypted communications by default. TLS versio

## Monitor memory usage, CPU usage metrics, client connections, and network bandwidth

-When using Azure Managed Redis instance in production, we recommend setting alerts for **Used Memory Percentage**, **CPU** metrics, **Connected Clients**. If these metrics are consistently above 75%, consider scaling your instance to a bigger memory or better throughput tier. For more details, see [when to scale](how-to-scale.md#when-to-scale).
+When using an Azure Managed Redis instance in production, we recommend setting alerts for the **Used Memory Percentage**, **CPU**, and **Connected Clients** metrics. If these metrics are consistently above 75%, consider scaling your instance to a tier with more memory or better throughput. For more details, see [when to scale](how-to-scale.md#when-to-scale). For details on how memory is reported and how to plan capacity, see [memory management](best-practices-memory-management.md).

## Consider enabling Data Persistence or Data Backup

articles/redis/best-practices-memory-management.md

Lines changed: 29 additions & 3 deletions
@@ -2,7 +2,7 @@

title: Best practices for memory management for Azure Managed Redis
description: Learn how to manage your Azure Managed Redis memory effectively with Azure Managed Redis.
ms.date: 05/18/2025
-ms.topic: best-practice
+ms.topic: conceptual
ms.custom:
- ignite-2024
- build-2025
@@ -14,6 +14,31 @@ appliesto:

In this article, we discuss effective memory management of an Azure Managed Redis cache.

+## Understand how memory usage is reported
+
+The **Used Memory** metric reports the total memory consumed by your database, including all shards. When High Availability is enabled, this metric includes the memory used by both primary and replica shards. This means the reported value can be roughly **twice** the size of your actual dataset.
+
+For example, if you store 10 GB of data in a cache with High Availability enabled, the **Used Memory** metric reports approximately 20 GB.
+
+The **Used Memory** metric doesn't include memory fragmentation. Actual physical memory consumption on the server can be higher due to allocator overhead. For more details on what each metric includes, see the [monitoring data reference](monitor-cache-reference.md#details-about-azure-managed-redis-metrics).
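The fragmentation caveat can be checked directly from the server's `INFO memory` output, which reports both logical usage (`used_memory`) and resident set size (`used_memory_rss`). A small editorial sketch follows; the field names are real Redis `INFO` fields, but the sample values are made up for illustration:

```python
def memory_stats(info_text: str) -> dict:
    """Extract fragmentation-related fields from Redis INFO memory output."""
    fields = dict(
        line.split(":", 1)
        for line in info_text.strip().splitlines()
        if ":" in line and not line.startswith("#")
    )
    used = int(fields["used_memory"])
    rss = int(fields["used_memory_rss"])
    # A ratio well above 1.0 indicates allocator fragmentation overhead.
    return {"used_memory": used, "used_memory_rss": rss,
            "fragmentation_ratio": rss / used}

# Sample output (illustrative values, not from a real cache)
sample = """# Memory
used_memory:1073741824
used_memory_rss:1288490188"""
print(memory_stats(sample)["fragmentation_ratio"])  # ~1.2
```

In practice you would feed this the text returned by `INFO memory` from your client library or `redis-cli`.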
+
+## Estimate memory for capacity planning
+
+When planning the memory you need, account for these factors beyond just the raw size of your values:
+
+- **Per-key overhead**: Each key stored in Redis includes internal metadata (pointers, type info, expiration tracking). This overhead is typically 50 to 100 bytes per key, depending on the key name length and value type. For large numbers of small keys, this overhead can be significant.
+- **Key names**: The memory used to store your key names adds up at scale. Shorter key names help reduce memory usage.
+- **Expiration tracking**: Keys with a TTL set consume extra memory for expiration bookkeeping.
+- **High Availability replication**: With High Availability enabled, the dataset is replicated, roughly doubling the total memory consumed. Plan for approximately twice your expected dataset size.
+
+To check the exact memory cost of a specific key, use the Redis [`MEMORY USAGE`](https://redis.io/commands/memory-usage) command:
+
+```
+MEMORY USAGE <your_key_name>
+```
+
+This command returns the total bytes consumed by a key, including all internal overhead. Multiply the result by your total key count and by two (if High Availability is enabled) for a practical memory estimate.
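That multiplication can be wrapped into a back-of-envelope estimator. The sketch below is editorial, and its overhead constants are assumptions: 64 bytes of per-key metadata and 48 bytes of TTL bookkeeping are illustrative picks from the 50-to-100-byte range quoted above, not measured values. Prefer `MEMORY USAGE` on representative real keys when you can:

```python
def estimate_total_bytes(num_keys: int,
                         avg_key_name_len: int,
                         avg_value_bytes: int,
                         uses_ttl: bool = True,
                         high_availability: bool = True) -> int:
    """Back-of-envelope capacity estimate; overhead constants are assumptions."""
    PER_KEY_OVERHEAD = 64                  # assumed metadata: pointers, type info
    TTL_OVERHEAD = 48 if uses_ttl else 0   # assumed expiration bookkeeping
    per_key = avg_key_name_len + avg_value_bytes + PER_KEY_OVERHEAD + TTL_OVERHEAD
    total = num_keys * per_key
    return total * 2 if high_availability else total  # primary + replica shards

# 10 million keys, 40-byte names, 200-byte values, TTLs set, HA enabled
gib = estimate_total_bytes(10_000_000, 40, 200) / 2**30
print(f"{gib:.1f} GiB")
```

Note how, for small values, the assumed ~112 bytes of overhead is more than half the per-key cost, which is exactly the "many small keys" caveat from the bullet list.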

## Eviction policy

Choose an [eviction policy](https://redis.io/topics/lru-cache) that works for your application. The default policy for Azure Managed Redis is `volatile-lru`, which means that only keys that have a TTL value set with a command like [EXPIRE](https://redis.io/commands/expire) are eligible for eviction. If no keys have a TTL value, then the system doesn't evict any keys. If you want the system to allow any key to be evicted if under memory pressure, then consider the `allkeys-lru` policy.
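The difference between `volatile-lru` and `allkeys-lru` can be illustrated with a toy model. This is an editorial sketch of the policy semantics only, not Redis's actual sampling-based approximate LRU implementation:

```python
from collections import OrderedDict

class ToyCache:
    """Toy model of Redis maxmemory eviction policies (greatly simplified)."""

    def __init__(self, max_keys, policy="volatile-lru"):
        self.max_keys = max_keys
        self.policy = policy
        self.data = OrderedDict()  # key -> (value, has_ttl), oldest first

    def set(self, key, value, ttl=None):
        if key in self.data:
            self.data.move_to_end(key)      # touch: most recently used
        elif len(self.data) >= self.max_keys:
            self._evict()
        self.data[key] = (value, ttl is not None)

    def _evict(self):
        # Walk keys from least to most recently used.
        for key, (_, has_ttl) in list(self.data.items()):
            if self.policy == "allkeys-lru" or has_ttl:
                del self.data[key]
                return
        # volatile-lru with no TTL keys: nothing is evictable, writes fail
        raise MemoryError("OOM: no keys with a TTL to evict under volatile-lru")

cache = ToyCache(max_keys=2, policy="volatile-lru")
cache.set("a", 1)            # no TTL: never evictable under volatile-lru
cache.set("b", 2, ttl=60)    # TTL set: eligible for eviction
cache.set("c", 3)            # evicts "b", the only key with a TTL
print(list(cache.data))      # → ['a', 'c']
```

With `policy="allkeys-lru"` the same sequence would instead evict `"a"`, the least recently used key, regardless of TTL. Under `volatile-lru`, if no key has a TTL, real Redis likewise fails writes with an OOM error rather than evicting.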
@@ -24,9 +49,10 @@ Set an expiration value on your keys. An expiration removes keys proactively ins

## Monitor memory usage

-Consider adding alerting on "Used Memory Percentage" metric to ensure that you don't run out of memory and have the chance to scale your cache before seeing issues. If your "Used Memory Percentage" is consistently over 75%, consider increasing your memory by scaling to a higher tier. For information on tiers, see [Architecture](architecture.md#sharding-configuration) for information on tiers.
+Add alerting on the **Used Memory Percentage** metric to ensure that you don't run out of memory and have the chance to scale your cache before seeing issues. If your **Used Memory Percentage** is consistently over 75%, consider increasing your memory by scaling to a higher tier. For information on tiers, see [Architecture](architecture.md#sharding-configuration).

-## Next steps
+## Related content

+- [Monitoring data reference](monitor-cache-reference.md)
- [Best practices for development](best-practices-development.md)
- [Best practices for scaling](best-practices-scale.md)
