Commit 2718cac

guide: copyediting & small changes to memory (#4734)
This diff includes the following changes:

* Copyediting: grammar, spelling, remove redundant text.
* Particularly I felt the "Debugging" section was repeated too many times; I
  removed it. Most of the other changes should be straightforward.
* Some additional detail about the trace gc logs.
Parent: 8ee959d

5 files changed: 126 additions & 130 deletions

locale/en/docs/guides/diagnostics/index.md

Lines changed: 9 additions & 6 deletions
@@ -5,13 +5,16 @@ layout: docs.hbs
 
 # Diagnostics Guide
 
-These guides were created in the [Diagnostics Working Group](https://github.com/nodejs/diagnostics)
-with the objective to provide a guidance when diagnosing an issue
-in the user application.
-The documentation project is organized based on user journey.
-Those journeys are a coherent set of step-by-step procedures,
-that a user follows for problem determination of reported issues.
+These guides were created by the [Diagnostics Working Group][] with the
+objective of providing guidance when diagnosing an issue in a user's
+application.
+
+The documentation project is organized based on user journey. Those journeys
+are a coherent set of step-by-step procedures that a user can follow to
+root-cause their issues.
 
 This is the available set of diagnostics guides:
 
 * [Memory](/en/docs/guides/diagnostics/memory)
+
+[Diagnostics Working Group]: https://github.com/nodejs/diagnostics

locale/en/docs/guides/diagnostics/memory/index.md

Lines changed: 9 additions & 21 deletions
@@ -11,11 +11,10 @@ In this document you can learn about how to debug memory related issues.
 * [My process runs out of memory](#my-process-runs-out-of-memory)
   * [Symptoms](#symptoms)
   * [Side Effects](#side-effects)
-  * [Debugging](#debugging)
 * [My process utilizes memory inefficiently](#my-process-utilizes-memory-inefficiently)
   * [Symptoms](#symptoms-1)
   * [Side Effects](#side-effects-1)
-  * [Debugging](#debugging-1)
+* [Debugging](#debugging)
 
 ## My process runs out of memory
 
@@ -29,28 +28,17 @@ efficient way of finding a memory leak is essential.
 The user observes continuously increasing memory usage _(can be fast or slow,
 over days or even weeks)_ then sees the process crashing and restarting by the
 process manager. The process is maybe running slower than before and the
-restarts make certain requests to fail _(load balancer responds with 502)_.
+restarts cause some requests to fail _(load balancer responds with 502)_.
 
 ### Side Effects
 
-* Process restarts due to the memory exhaustion and request are dropped on the
-floor
+* Process restarts due to the memory exhaustion and requests are dropped
+on the floor
 * Increased GC activity leads to higher CPU usage and slower response time
 * GC blocking the Event Loop causing slowness
 * Increased memory swapping slows down the process (GC activity)
 * May not have enough available memory to get a Heap Snapshot
 
-### Debugging
-
-To debug a memory issue we need to be able to see how much space our specific
-type of objects take, and what variables retain them to get garbage collected.
-For the effective debugging we also need to know the allocation pattern of our
-variables over time.
-
-* [Using Heap Profiler](/en/docs/guides/diagnostics/memory/using-heap-profiler/)
-* [Using Heap Snapshot](/en/docs/guides/diagnostics/memory/using-heap-snapshot/)
-* [GC Traces](/en/docs/guides/diagnostics/memory/using-gc-traces)
-
 ## My process utilizes memory inefficiently
 
 ### Symptoms
@@ -63,12 +51,12 @@ garbage collector activity.
 * An elevated number of page faults
 * Higher GC activity and CPU usage
 
-### Debugging
+## Debugging
 
-To debug a memory issue we need to be able to see how much space our specific
-type of objects take, and what variables retain them to get garbage collected.
-For the effective debugging we also need to know the allocation pattern of our
-variables over time.
+Most memory issues can be solved by determining how much space our specific
+type of objects take and what variables are preventing them from being garbage
+collected. It can also help to know the allocation pattern of our program over
+time.
 
 * [Using Heap Profiler](/en/docs/guides/diagnostics/memory/using-heap-profiler/)
 * [Using Heap Snapshot](/en/docs/guides/diagnostics/memory/using-heap-snapshot/)
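
As an aside for the new "Debugging" section (not part of this commit): a quick
way to confirm an allocation trend before reaching for the profilers linked
above is to sample the stable `process.memoryUsage()` API periodically. A
minimal sketch:

```js
// Log heap usage every 30 seconds to watch the allocation trend over time.
setInterval(() => {
  const { rss, heapTotal, heapUsed } = process.memoryUsage();
  console.log(
    `rss=${(rss / 1048576).toFixed(1)} MiB ` +
      `heapTotal=${(heapTotal / 1048576).toFixed(1)} MiB ` +
      `heapUsed=${(heapUsed / 1048576).toFixed(1)} MiB`
  );
}, 30e3).unref(); // unref() keeps this timer from holding the process open
```

If `heapUsed` keeps climbing across GCs while load stays steady, the heap
snapshot and profiler guides above are the next step.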

locale/en/docs/guides/diagnostics/memory/using-gc-traces.md

Lines changed: 51 additions & 44 deletions
@@ -11,7 +11,7 @@ one thing it's that when GC is running, your code is not.
 You may want to know how often and how long the garbage collection is running,
 and what is the outcome.
 
-## Runnig with garbage collection traces
+## Running with garbage collection traces
 
 You can see traces for garbage collection in console output of your process
 using the `--trace_gc` flag.
@@ -21,10 +21,9 @@ $ node --trace_gc app.js
 ```
 
 You might want to avoid getting traces from the entire lifetime of your
-process running on a server. In that case, set the flag from within the process,
-and switch it off once the need for tracing is over.
-
-Here's how to print GC events to stdout for one minute.
+process. In that case, you can set the flag from within the process, and switch
+it off once the need for tracing is over. For example, here's how to print GC
+events to stdout for one minute:
 
 ```js
 const v8 = require('v8');
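
(For reference, the complete snippet that these context lines surround; the
middle line is an assumption, since it falls between the two hunks' context
windows:)

```js
const v8 = require('v8');

// Assumed from the surrounding context: turn tracing on at runtime...
v8.setFlagsFromString('--trace_gc');

// ...and switch it off again after one minute (this line appears verbatim in
// the next hunk's header).
setTimeout(() => { v8.setFlagsFromString('--notrace_gc'); }, 60e3);
```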
@@ -34,15 +33,15 @@ setTimeout(() => { v8.setFlagsFromString('--notrace_gc'); }, 60e3);
 
 ### Examining a trace with `--trace_gc`
 
-Obtained traces of garbage collection looks like the following lines.
+The output traces look like the following:
 
 ```
 [19278:0x5408db0] 44 ms: Scavenge 2.3 (3.0) -> 1.9 (4.0) MB, 1.2 / 0.0 ms (average mu = 1.000, current mu = 1.000) allocation failure
 
 [23521:0x10268b000] 120 ms: Mark-sweep 100.7 (122.7) -> 100.6 (122.7) MB, 0.15 / 0.0 ms (average mu = 0.132, current mu = 0.137) deserialize GC in old space requested
 ```
 
-This is how to interpret the trace data (for the second line):
+Let's look at the second line. Here is how to interpret the trace data:
 
 <table>
 <tr>
@@ -59,33 +58,39 @@ This is how to interpret the trace data (for the second line):
 </tr>
 <tr>
 <td>120</td>
-<td>Time since the process start in ms</td>
+<td>Time since the thread start in ms</td>
 </tr>
 <tr>
 <td>Mark-sweep</td>
 <td>Type / Phase of GC</td>
 </tr>
 <tr>
 <td>100.7</td>
-<td>Heap used before GC in MB</td>
+<td>Heap used before GC in MiB</td>
 </tr>
 <tr>
 <td>122.7</td>
-<td>Total heap before GC in MB</td>
+<td>Total heap before GC in MiB</td>
 </tr>
 <tr>
 <td>100.6</td>
-<td>Heap used after GC in MB</td>
+<td>Heap used after GC in MiB</td>
 </tr>
 <tr>
 <td>122.7</td>
-<td>Total heap after GC in MB</td>
+<td>Total heap after GC in MiB</td>
 </tr>
 <tr>
-<td>0.15 / 0.0 <br/>
-(average mu = 0.132, current mu = 0.137)</td>
+<td>0.15</td>
 <td>Time spent in GC in ms</td>
 </tr>
+<tr>
+<td>0.0</td>
+<td>Time spent in GC callbacks in ms</td>
+</tr>
+<tr>
+<td>(average mu = 0.132, current mu = 0.137)</td>
+<td>Mutator utilization (from 0-1)</td>
+</tr>
 <tr>
 <td>deserialize GC in old space requested</td>
 <td>Reason for GC</td>
@@ -104,8 +109,8 @@ const { PerformanceObserver } = require('perf_hooks');
 const obs = new PerformanceObserver((list) => {
   const entry = list.getEntries()[0];
   /*
-  The entry would be an instance of PerformanceEntry containing
-  metrics of garbage collection.
+  The entry is an instance of PerformanceEntry containing
+  metrics of a single garbage collection event.
   For example:
   PerformanceEntry {
     name: 'gc',
@@ -117,7 +122,7 @@ const obs = new PerformanceObserver((list) => {
   */
 });
 
-// Subscribe notifications of GCs
+// Subscribe to notifications of GCs
 obs.observe({ entryTypes: ['gc'] });
 
 // Stop subscription
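
(A self-contained version of the snippet above, for reference; the
`disconnect()` call is an assumption based on the "Stop subscription" comment,
which the diff truncates:)

```js
const { PerformanceObserver } = require('perf_hooks');

const obs = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.duration is the GC pause in milliseconds.
    console.log(`${entry.name}: ${entry.duration.toFixed(2)} ms`);
  }
});

// Subscribe to notifications of GCs.
obs.observe({ entryTypes: ['gc'] });

// Stop the subscription after one minute.
setTimeout(() => obs.disconnect(), 60e3);
```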
@@ -178,39 +183,41 @@ For more information, you can refer to
 ## Examples of diagnosing memory issues with trace option:
 
 A. How to get context of bad allocations
-1. Suppose we observe that the old space is continously increasing.
-2. But due to heavy gc, the heap maximum is not hit, but the process is slow.
+1. Suppose we observe that the old space is continuously increasing.
+2. But due to heavy GC, the heap maximum is not hit, but the process is slow.
 3. Review the trace data and figure out how much is the total heap before and
-after the gc.
-4. Reduce `--max-old-space-size` such that the total heap is closer to the
-limit.
-5. Allow the program to run, hit the out of memory.
+after the GC.
+4. Reduce [`--max-old-space-size`][] such that the total heap is closer to the
+limit.
+5. Allow the program to run and run out-of-memory.
 6. The produced log shows the failing context.
 
 B. How to assert whether there is a memory leak when heap growth is observed
-1. Suppose we observe that the old space is continously increasing.
-2. Due to heavy gc, the heap maximum is not hit, but the process is slow.
+1. Suppose we observe that the old space is continuously increasing.
+2. Due to heavy GC, the heap maximum is not hit, but the process is slow.
 3. Review the trace data and figure out how much is the total heap before and
-after the gc.
-4. Reduce `--max-old-space-size` such that the total heap is closer to the
-limit.
+after the GC.
+4. Reduce [`--max-old-space-size`][] such that the total heap is closer to the
+limit.
 5. Allow the program to run, see if it hits the out of memory.
-6. If it hits OOM, increment the heap size by ~10% or so and repeat few times.
-If the same pattern is observed, it is indicative of a memory leak.
-7. If there is no OOM, then freeze the heap size to that value - A packed heap
-reduces memory footprint and compaction latency.
-
-C. How to assert whether too many gcs are happening or too many gcs are causing
-an overhead
-1. Review the trace data, specifically around time between consecutive gcs.
-2. Review the trace data, specifically around time spent in gc.
-3. If the time between two gc is less than the time spent in gc, the
-application is severely starving.
-4. If the time between two gcs and the time spent in gc are very high, probably
-the application can use a smaller heap.
-5. If the time between two gcs are much greater than the time spent in gc,
-application is relatively healthy.
+6. If it hits out-of-memory, increment the heap size by ~10% or so and repeat
+few times. If the same pattern is observed, it is indicative of a memory
+leak.
+7. If there is no out-of-memory error, then freeze the heap size to that value.
+A packed heap reduces memory footprint and compaction latency.
+
+C. How to assert whether too many GCs are happening or too many GCs are causing
+an overhead
+1. Review the trace data, specifically around time between consecutive GCs.
+2. Review the trace data, specifically around time spent in GC.
+3. If the time between two GCs is less than the time spent in GC, the
+application is severely starving.
+4. If the time between two GCs and the time spent in GC is very high,
+the application can probably use a smaller heap.
+5. If the time between two GCs is much greater than the time spent in GC,
+the application is relatively healthy.
 
-[performance hooks]: https://nodejs.org/api/perf_hooks.html
 [PerformanceEntry]: https://nodejs.org/api/perf_hooks.html#perf_hooks_class_performanceentry
 [PerformanceObserver]: https://nodejs.org/api/perf_hooks.html#perf_hooks_class_performanceobserver
+[`--max-old-space-size`]: https://nodejs.org/api/cli.html#--max-old-space-sizesize-in-megabytes
+[performance hooks]: https://nodejs.org/api/perf_hooks.html
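
(As a concrete illustration of step 4 in recipes A and B, with the command line
assumed rather than taken from the diff, the two flags from this guide can be
combined:)

```console
$ node --max-old-space-size=100 --trace_gc app.js
```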

locale/en/docs/guides/diagnostics/memory/using-heap-profiler.md

Lines changed: 26 additions & 31 deletions
@@ -5,28 +5,24 @@ layout: docs.hbs
 
 # Using Heap Profiler
 
-To debug a memory issue we need to be able to see how much space our specific
-type of objects take, and what variables retain them to get garbage collected.
-For the effective debugging we also need to know the allocation pattern of our
-variables over time.
-
-The heap profiler acts on top of V8 towards to bring snapshots of memory over
-time. In this document, we will cover the memory profiling using:
+The heap profiler acts on top of V8 to capture allocations over time. In this
+document, we will cover memory profiling using:
 
 1. Allocation Timeline
 2. Sampling Heap Profiler
 
-Unlike heap dump that was cover in the [Using Heap Snapshot][],
-the idea of using real-time profiling is to understand allocations in a given
-time frame.
+Unlike heap dumps which were covered in the [Using Heap Snapshot][] guide, the
+idea of using real-time profiling is to understand allocations over a period of
+time.
 
 ## Heap Profiler - Allocation Timeline
 
 Heap Profiler is similar to the Sampling Heap Profiler, except it will trace
 every allocation. It has higher overhead than the Sampling Heap Profiler so
 it’s not recommended to use in production.
 
-> You can use [@mmarchini/observe][] to do it programmatically.
+> You can use [@mmarchini/observe][] to start and stop the profiler
+> programmatically.
 
 ### How To
 
@@ -36,44 +32,43 @@ Start the application:
 node --inspect index.js
 ```
 
-> `--inspect-brk` is an better choice for scripts.
+> `--inspect-brk` is a better choice for scripts.
 
 Connect to the dev-tools instance in chrome and then:
 
-* Select `memory` tab
-* Select `Allocation instrumentation timeline`
-* Start profiling
+* Select the `Memory` tab.
+* Select `Allocation instrumentation timeline`.
+* Start profiling.
 
 ![heap profiler tutorial step 1][heap profiler tutorial 1]
 
-After it, the heap profiling is running, it is strongly recommended to run
-samples in order to identify memory issues, for this example, we will use
-`Apache Benchmark` to produce load in the application.
-
-> In this example, we are assuming the heap profiling under web application.
+Once the heap profiling is running, it is strongly recommended to run samples
+in order to identify memory issues. For example, if we were heap profiling a
+web application, we could use `Apache Benchmark` to produce load:
 
 ```console
 $ ab -n 1000 -c 5 http://localhost:3000
 ```
 
-Hence, press stop button when the load expected is complete
+Then, press the stop button when the load is complete:
 
 ![heap profiler tutorial step 2][heap profiler tutorial 2]
 
-Then look at the snapshot data towards to memory allocation.
+Finally, look at the snapshot data:
 
 ![heap profiler tutorial step 3][heap profiler tutorial 3]
 
-Check the [usefull links](#usefull-links) section for futher information
+Check the [useful links](#useful-links) section for further information
 about memory terminology.
 
 ## Sampling Heap Profiler
 
-Sampling Heap Profiler tracks memory allocation pattern and reserved space
-over time. As it’s sampling based it has a low enough overhead to use it in
+Sampling Heap Profiler tracks the memory allocation pattern and reserved space
+over time. Since it is sampling based, its overhead is low enough to use in
 production systems.
 
-> You can use the module [`heap-profiler`][] to do it programmatically.
+> You can use the module [`heap-profiler`][] to start and stop the heap
+> profiler programmatically.
 
 ### How To
 
@@ -87,15 +82,15 @@ $ node --inspect index.js
 
 Connect to the dev-tools instance and then:
 
-1. Select `memory` tab
-2. Select `Allocation sampling`
-3. Start profiling
+1. Select the `Memory` tab.
+2. Select `Allocation sampling`.
+3. Start profiling.
 
 ![heap profiler tutorial 4][heap profiler tutorial 4]
 
 Produce some load and stop the profiler. It will generate a summary with
-allocation based in the stacktrace, you can lookup to the functions with more
-heap allocations in a timespan, see the example below:
+allocation based on their stacktraces. You can focus on the functions with more
+heap allocations; see the example below:
 
 ![heap profiler tutorial 5][heap profiler tutorial 5]
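
(Beyond DevTools, the sampling profiler can also be driven in-process. This
sketch is not from the commit; it assumes the built-in `inspector` module and
the DevTools protocol's `HeapProfiler` domain, and it writes a profile that the
`Memory` tab can load:)

```js
const fs = require('fs');
const inspector = require('inspector');

const session = new inspector.Session();
session.connect();

// Begin sampling allocations.
session.post('HeapProfiler.startSampling', (err) => {
  if (err) throw err;
});

// ...let the application run under load, then collect the profile.
setTimeout(() => {
  session.post('HeapProfiler.stopSampling', (err, { profile }) => {
    if (err) throw err;
    fs.writeFileSync('app.heapprofile', JSON.stringify(profile));
    session.disconnect();
  });
}, 60e3);
```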
10196
