`articles/redis/best-practices-development.md` (1 addition, 1 deletion)
## Monitor memory usage, CPU usage metrics, client connections, and network bandwidth
When using an Azure Managed Redis instance in production, we recommend setting alerts on the **Used Memory Percentage**, **CPU**, and **Connected Clients** metrics. If these metrics are consistently above 75%, consider scaling your instance to a tier with more memory or higher throughput. For more information, see [when to scale](how-to-scale.md#when-to-scale). For details on how memory is reported and how to plan capacity, see [memory management](best-practices-memory-management.md).
## Consider enabling Data Persistence or Data Backup
`articles/redis/best-practices-memory-management.md` (30 additions, 2 deletions)
In this article, we discuss effective memory management of an Azure Managed Redis cache.
## Understand how memory usage is reported
The **Used Memory** metric reports the total memory consumed by your database, including all shards. When High Availability is enabled, this metric includes the memory used by both primary and replica shards. This means the reported value can be roughly **twice** the size of your actual dataset.
For example, if you store 10 GB of data in a cache with High Availability enabled, the **Used Memory** metric reports approximately 20 GB.
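The doubling above is simple arithmetic, sketched here as a minimal illustration (the exact reported value also includes internal overhead, so treat this as an approximation):

```python
def reported_used_memory_gb(dataset_gb: float, high_availability: bool = True) -> float:
    """Rough estimate of the Used Memory metric for a given dataset size.

    With High Availability enabled, Used Memory counts both primary and
    replica shards, so the reported value is roughly double the dataset size.
    """
    return dataset_gb * 2 if high_availability else dataset_gb

# 10 GB of data with High Availability reports roughly 20 GB
print(reported_used_memory_gb(10))                           # → 20
print(reported_used_memory_gb(10, high_availability=False))  # → 10
```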
The **Used Memory** metric doesn't include memory fragmentation. Actual physical memory consumption on the server can be higher due to allocator overhead. For more details on what each metric includes, see the [monitoring data reference](monitor-cache-reference.md#details-about-azure-managed-redis-metrics).
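To observe fragmentation on a running instance, you can use the standard Redis [`INFO`](https://redis.io/commands/info) command with its `memory` section:

```
INFO memory
```

In the output, `used_memory` is the dataset view, `used_memory_rss` is the physical memory the operating system sees, and `mem_fragmentation_ratio` is their ratio; values well above 1 indicate allocator and fragmentation overhead.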
## Estimate memory for capacity planning
When planning the memory you need, account for these factors beyond just the raw size of your values:
- **Per-key overhead**: Each key stored in Redis includes internal metadata (pointers, type information, expiration tracking). This overhead is typically 50 to 100 bytes per key, depending on the key name length and value type. For large numbers of small keys, this overhead can be significant.
- **Key names**: The memory used to store your key names adds up at scale. Shorter key names help reduce memory usage.
- **Expiration tracking**: Keys with a TTL set consume extra memory for expiration bookkeeping.
- **High Availability replication**: With High Availability enabled, the dataset is replicated. The **Used Memory** metric reflects both primary and replica memory, but the SKU memory limit already accounts for this. You don't need to choose a larger SKU to accommodate replication; select a SKU based on your actual dataset size.
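As a back-of-the-envelope illustration of the factors above, the following sketch combines them into a rough estimate. The 80-byte per-key overhead and 16-byte TTL overhead are assumptions within the ranges discussed, not exact figures; validate against actual usage with `MEMORY USAGE`:

```python
def estimate_dataset_bytes(
    num_keys: int,
    avg_key_name_len: int,
    avg_value_len: int,
    keys_have_ttl: bool = True,
    per_key_overhead: int = 80,  # assumption: within the 50-100 byte range above
    ttl_overhead: int = 16,      # assumption: extra expiration bookkeeping per key
) -> int:
    """Rough capacity-planning estimate of dataset size in bytes."""
    per_key = per_key_overhead + avg_key_name_len + avg_value_len
    if keys_have_ttl:
        per_key += ttl_overhead
    return num_keys * per_key

# 10 million keys, 20-byte names, 100-byte values, all with a TTL:
print(estimate_dataset_bytes(10_000_000, 20, 100) / 1024**3)  # roughly 2 GB
```

Remember that this estimates the dataset alone; with High Availability enabled, the **Used Memory** metric reports roughly double this figure, which the SKU limit already accommodates.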
To check the exact memory cost of a specific key, use the Redis [`MEMORY USAGE`](https://redis.io/commands/memory-usage) command:
```
MEMORY USAGE <your_key_name>
```
This command returns the total bytes consumed by a key, including all internal overhead. Use this to validate your per-key memory estimates against actual usage.
## Eviction policy
Choose an [eviction policy](https://redis.io/topics/lru-cache) that works for your application. The default policy for Azure Managed Redis is `volatile-lru`, which means that only keys that have a TTL value set with a command like [EXPIRE](https://redis.io/commands/expire) are eligible for eviction. If no keys have a TTL value, the system doesn't evict any keys. If you want the system to allow any key to be evicted under memory pressure, consider the `allkeys-lru` policy.
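You can confirm which policy is active on an instance with the standard Redis `CONFIG GET` command (note that on a managed service, the policy itself is typically changed through the service's configuration settings rather than `CONFIG SET`):

```
CONFIG GET maxmemory-policy
```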
## Monitor memory usage
We recommend monitoring the **Used Memory Percentage** metric rather than raw **Used Memory**. The percentage metric already accounts for your SKU's total memory limit, including High Availability replication, so it gives you a straightforward view of how close you are to capacity without needing to mentally adjust for replica memory.
Add alerting on **Used Memory Percentage** to ensure that you don't run out of memory and have the chance to scale your cache before seeing issues. If your **Used Memory Percentage** is consistently over 75%, consider increasing your memory by scaling to a higher tier. For information on tiers, see [Architecture](architecture.md#sharding-configuration).
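As a sketch of what such an alert can look like with the Azure CLI, assuming names and scopes are placeholders for your own resources (check your instance's exact metric identifier in the monitoring data reference before relying on it):

```azurecli
az monitor metrics alert create \
  --name redis-memory-alert \
  --resource-group myResourceGroup \
  --scopes <redis-resource-id> \
  --condition "avg usedmemorypercentage > 75" \
  --description "Used Memory Percentage consistently above 75%"
```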
## Related content
- [Monitoring data reference](monitor-cache-reference.md)
- [Best practices for development](best-practices-development.md)
- [Best practices for scaling](best-practices-scale.md)