You may want to know how often and for how long garbage collection runs,
and what its outcome is.

## Running with garbage collection traces

You can see traces for garbage collection in the console output of your process
using the `--trace_gc` flag.

```console
$ node --trace_gc app.js
```

You might want to avoid getting traces for the entire lifetime of your
process. In that case, you can set the flag from within the process and switch
it off once the need for tracing is over. For example, here's how to print GC
events to stdout for one minute:

```js
const v8 = require('v8');
v8.setFlagsFromString('--trace_gc');
setTimeout(() => { v8.setFlagsFromString('--notrace_gc'); }, 60e3);
```

3534### Examining a trace with ` --trace_gc `
3635
The output traces look like the following:

```
[19278:0x5408db0] 44 ms: Scavenge 2.3 (3.0) -> 1.9 (4.0) MB, 1.2 / 0.0 ms (average mu = 1.000, current mu = 1.000) allocation failure

[23521:0x10268b000] 120 ms: Mark-sweep 100.7 (122.7) -> 100.6 (122.7) MB, 0.15 / 0.0 ms (average mu = 0.132, current mu = 0.137) deserialize GC in old space requested
```

Let's look at the second line. Here is how to interpret the trace data:

<table>
  <tr>
    <th>Token value</th>
    <th>Interpretation</th>
  </tr>
  <tr>
    <td>23521</td>
    <td>PID of the running process</td>
  </tr>
  <tr>
    <td>0x10268b000</td>
    <td>Isolate (JS heap instance)</td>
  </tr>
  <tr>
    <td>120</td>
    <td>Time since the thread start in ms</td>
  </tr>
  <tr>
    <td>Mark-sweep</td>
    <td>Type / Phase of GC</td>
  </tr>
  <tr>
    <td>100.7</td>
    <td>Heap used before GC in MiB</td>
  </tr>
  <tr>
    <td>122.7</td>
    <td>Total heap before GC in MiB</td>
  </tr>
  <tr>
    <td>100.6</td>
    <td>Heap used after GC in MiB</td>
  </tr>
  <tr>
    <td>122.7</td>
    <td>Total heap after GC in MiB</td>
  </tr>
  <tr>
    <td>0.15</td>
    <td>Time spent in GC in ms</td>
  </tr>
  <tr>
    <td>0.0</td>
    <td>Time spent in GC callbacks in ms</td>
  </tr>
  <tr>
    <td>(average mu = 0.132, current mu = 0.137)</td>
    <td>Mutator utilization (from 0 to 1): the fraction of time spent running
    application code rather than GC</td>
  </tr>
  <tr>
    <td>deserialize GC in old space requested</td>
    <td>Reason for GC</td>
  </tr>
</table>

## Running with performance hooks

You can also observe garbage collection from within the running process, using
[performance hooks][] and a [PerformanceObserver][]:

```js
const { PerformanceObserver } = require('perf_hooks');

const obs = new PerformanceObserver((list) => {
  const entry = list.getEntries()[0];
  /*
    The entry is an instance of PerformanceEntry containing
    metrics of a single garbage collection event.
    For example:
    PerformanceEntry {
      name: 'gc',
      entryType: 'gc',
      startTime: 2820.567669,
      duration: 1.315709,
      kind: 1
    }
  */
});

// Subscribe to notifications of GCs
obs.observe({ entryTypes: ['gc'] });

// Stop subscription
obs.disconnect();
```

## Examples of diagnosing memory issues with the trace option

A. How to get context of bad allocations
1. Suppose we observe that the old space is continuously increasing.
2. Due to heavy GC, the heap maximum is not hit, but the process is slow.
3. Review the trace data and figure out how much the total heap is before and
   after the GC.
4. Reduce [`--max-old-space-size`][] such that the total heap is closer to the
   limit.
5. Allow the program to run until it runs out of memory, as shown below.
6. The produced log then shows the failing context.
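
For instance, if the traces report a total heap of roughly 120 MiB, a run
along these lines caps the old space below that figure so the failure
surfaces quickly (the `100` here is illustrative, and `app.js` stands in
for your entry point):

```console
$ node --max-old-space-size=100 --trace_gc app.js
```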

B. How to assert whether there is a memory leak when heap growth is observed
1. Suppose we observe that the old space is continuously increasing.
2. Due to heavy GC, the heap maximum is not hit, but the process is slow.
3. Review the trace data and figure out how much the total heap is before and
   after the GC.
4. Reduce [`--max-old-space-size`][] such that the total heap is closer to the
   limit.
5. Allow the program to run and see if it hits out-of-memory.
6. If it hits out-of-memory, increment the heap size by ~10% or so and repeat
   a few times. If the same pattern is observed, it is indicative of a memory
   leak.
7. If there is no out-of-memory error, then freeze the heap size to that value;
   a packed heap reduces memory footprint and compaction latency. You can check
   how close the live heap sits to the limit as shown below.
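
For example, here is a minimal sketch that compares live heap usage against
the configured limit from inside the process, using `v8.getHeapStatistics()`:

```js
const v8 = require('v8');

// Compare how much of the configured heap limit is actually in use
const { used_heap_size, heap_size_limit } = v8.getHeapStatistics();
const usedMiB = used_heap_size / 1024 / 1024;
const limitMiB = heap_size_limit / 1024 / 1024;
console.log(`heap: ${usedMiB.toFixed(1)} / ${limitMiB.toFixed(1)} MiB`);
```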

C. How to assert whether too many GCs are happening or GC is causing too much
overhead
1. Review the trace data, specifically around the time between consecutive GCs.
2. Review the trace data, specifically around the time spent in GC; the sketch
   after this list measures both.
3. If the time between two GCs is less than the time spent in GC, the
   application is severely starving.
4. If both the time between two GCs and the time spent in GC are very high,
   the application can probably use a smaller heap.
5. If the time between two GCs is much greater than the time spent in GC,
   the application is relatively healthy.
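
Since each `'gc'` performance entry carries a `startTime` and a `duration`,
both measurements can be taken from inside the process. A minimal sketch
building on the observer shown earlier:

```js
const { PerformanceObserver } = require('perf_hooks');

let lastGcEnd = null;
const obs = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (lastGcEnd !== null) {
      // Time the application ran between the previous GC and this one
      const gap = entry.startTime - lastGcEnd;
      console.log(
        `GC took ${entry.duration.toFixed(2)} ms, ` +
          `${gap.toFixed(2)} ms since the previous GC`
      );
    }
    lastGcEnd = entry.startTime + entry.duration;
  }
});
obs.observe({ entryTypes: ['gc'] });
```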

[PerformanceEntry]: https://nodejs.org/api/perf_hooks.html#perf_hooks_class_performanceentry
[PerformanceObserver]: https://nodejs.org/api/perf_hooks.html#perf_hooks_class_performanceobserver
[`--max-old-space-size`]: https://nodejs.org/api/cli.html#--max-old-space-sizesize-in-megabytes
[performance hooks]: https://nodejs.org/api/perf_hooks.html