Scope changes to AMR only and fix HA sizing guidance
- Remove ACRE file changes (enterprise-tiers and cache memory management)
since those docs are for ACRE only, not AMR
- Fix misleading HA capacity guidance: SKU memory limits already account
for replication, customers should size based on actual dataset not 2x
- Add recommendation to monitor Used Memory Percentage instead of raw
Used Memory since it already accounts for the SKU limit
- Simplify MEMORY USAGE command guidance
Co-authored-by: Copilot <[email protected]>
`articles/azure-cache-for-redis/cache-best-practices-enterprise-tiers.md` (0 additions, 9 deletions)
````diff
@@ -205,15 +205,6 @@ Many customers want to use persistence to take periodic backups of the data on t
 
 The E1 SKU is intended for dev/test scenarios, primarily. E1 runs on smaller [burstable VMs](/azure/virtual-machines/b-series-cpu-credit-model/b-series-cpu-credit-model). Burstable VMs offer variable performance based on how much CPU is consumed. Unlike other Enterprise SKU offerings, you can't _scale out_ the E1 SKU, although it's still possible to _scale up_ to a larger SKU. The E1 SKU also doesn't support [active geo-replication](cache-how-to-active-geo-replication.md).
 
-## Memory management
-
-For guidance on understanding memory usage, capacity planning, and eviction policies, see [Memory management for Azure Managed Redis](/azure/redis/best-practices-memory-management). Key points for Enterprise tier caches:
-
-- The **Used Memory** metric includes memory from both primary and replica shards when High Availability is enabled, which can make the value appear twice as large as the actual dataset.
-- Each key has internal metadata overhead beyond the raw value size. For workloads with many small keys, this overhead can be significant.
-- Use the Redis `MEMORY USAGE` command to check the exact per-key memory cost.
-
 ## Related content
 
-[Memory management for Azure Managed Redis](/azure/redis/best-practices-memory-management)
````
`articles/redis/best-practices-memory-management.md` (5 additions, 3 deletions)
````diff
@@ -29,15 +29,15 @@ When planning the memory you need, account for these factors beyond just the raw
 
 - **Per-key overhead**: Each key stored in Redis includes internal metadata (pointers, type info, expiration tracking). This overhead is typically 50 to 100 bytes per key, depending on the key name length and value type. For large numbers of small keys, this overhead can be significant.
 - **Key names**: The memory used to store your key names adds up at scale. Shorter key names help reduce memory usage.
 - **Expiration tracking**: Keys with a TTL set consume extra memory for expiration bookkeeping.
-- **High Availability replication**: With High Availability enabled, the dataset is replicated, roughly doubling the total memory consumed. Plan for approximately twice your expected dataset size.
+- **High Availability replication**: With High Availability enabled, the dataset is replicated. The **Used Memory** metric reflects both primary and replica memory, but the SKU memory limit already accounts for this. You don't need to choose a larger SKU to accommodate replication; select a SKU based on your actual dataset size.
 
 To check the exact memory cost of a specific key, use the Redis [`MEMORY USAGE`](https://redis.io/commands/memory-usage) command:
 
 ```
 MEMORY USAGE <your_key_name>
 ```
 
-This command returns the total bytes consumed by a key, including all internal overhead. Multiply the result by your total key count and by two (if High Availability is enabled) for a practical memory estimate.
+This command returns the total bytes consumed by a key, including all internal overhead. Use this to validate your per-key memory estimates against actual usage.
 
 ## Eviction policy
 
````
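The per-key overhead figures in the hunk above (50 to 100 bytes of metadata per key, plus key name and value size) can be turned into a back-of-the-envelope capacity estimate. A minimal sketch; the function name and all input numbers are illustrative assumptions, not part of the docs:

```python
# Rough dataset-size estimate based on the per-key overhead guidance above.
# All inputs are illustrative; validate real keys with MEMORY USAGE.

def estimate_dataset_bytes(num_keys, avg_key_name_bytes, avg_value_bytes,
                           per_key_overhead_bytes=75):
    """Estimate raw dataset size in bytes.

    per_key_overhead_bytes: internal metadata per key; the doc cites
    roughly 50-100 bytes, so 75 is a midpoint guess.
    """
    per_key = avg_key_name_bytes + avg_value_bytes + per_key_overhead_bytes
    return num_keys * per_key

# Example: 10 million keys, 40-byte names, 200-byte values.
total = estimate_dataset_bytes(10_000_000, 40, 200)
print(f"~{total / 2**30:.1f} GiB")  # prints "~2.9 GiB"
```

Per the corrected guidance, this dataset figure is what you size the SKU against; there is no need to double it for High Availability, since the SKU limit already accounts for replication.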
````diff
@@ -49,7 +49,9 @@ Set an expiration value on your keys. An expiration removes keys proactively ins
 
 ## Monitor memory usage
 
-Add alerting on the **Used Memory Percentage** metric to ensure that you don't run out of memory and have the chance to scale your cache before seeing issues. If your **Used Memory Percentage** is consistently over 75%, consider increasing your memory by scaling to a higher tier. For information on tiers, see [Architecture](architecture.md#sharding-configuration).
+We recommend monitoring the **Used Memory Percentage** metric rather than raw **Used Memory**. The percentage metric already accounts for your SKU's total memory limit, including High Availability replication, so it gives you a straightforward view of how close you are to capacity without needing to mentally adjust for replica memory.
+
+Add alerting on **Used Memory Percentage** to ensure that you don't run out of memory and have the chance to scale your cache before seeing issues. If your **Used Memory Percentage** is consistently over 75%, consider increasing your memory by scaling to a higher tier. For information on tiers, see [Architecture](architecture.md#sharding-configuration).
````
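The 75% rule added in this hunk amounts to a simple threshold check on a metric that is already normalized to the SKU limit. A hedged sketch of that logic; function names and the example numbers are hypothetical, not from the docs or the Azure API:

```python
# Sketch of the doc's 75% alerting rule for Used Memory Percentage.
# The threshold comes from the guidance above; everything else is illustrative.

SCALE_THRESHOLD_PCT = 75.0

def used_memory_percentage(used_bytes: int, sku_limit_bytes: int) -> float:
    """Percentage of the SKU memory limit currently in use.

    The SKU limit already covers High Availability replicas, so no
    doubling adjustment is applied here.
    """
    return 100.0 * used_bytes / sku_limit_bytes

def should_scale(pct: float, threshold: float = SCALE_THRESHOLD_PCT) -> bool:
    """True when usage is over the threshold and a larger tier is advised."""
    return pct > threshold

pct = used_memory_percentage(10 * 2**30, 12 * 2**30)  # 10 GiB used of a 12 GiB SKU
print(f"{pct:.0f}% used, scale: {should_scale(pct)}")  # prints "83% used, scale: True"
```

In practice the check runs in Azure Monitor as a metric alert rather than in application code; the sketch just makes the sizing arithmetic concrete.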