apps/site/pages/en/blog/announcements/making-nodejs-downloads-reliable.md (5 additions & 5 deletions)
@@ -6,7 +6,7 @@ layout: blog-post
 author: flakey5
 ---
 
-Last year, we shared [the details behind Node.js' brand new website](https://nodejs.org/en/blog/announcements/diving-into-the-nodejs-website-redesign).
+Last year, we shared [the details behind Node.js's brand new website](https://nodejs.org/en/blog/announcements/diving-into-the-nodejs-website-redesign).
 Today we're back, talking about the new infrastructure serving Node.js' release assets.
 
 This blog post goes into what Node.js' web infrastructure looks like, its history, and where it stands today.
@@ -23,7 +23,7 @@ After the Node.js and io.js merge in 2015, io.js' VPS (which will be referred to
 A backup server was also created, which acted almost like an exact copy of the origin server.
 It served two main purposes:
 
-1. Serve any traffic that the origin server couldn't handle
+1. Serve any traffic that the origin server couldn't handle.
 2. Serve as a backup for the binaries and documentation in case something went wrong with the origin server.
 
 The entire architecture looked like this:
@@ -68,7 +68,7 @@ So, everyday at roughly midnight UTC, the origin server got effectively DDoS'ed
 
 There were also a handful of other issues with the origin server pertaining to its maintenance:
 
-- Documentation of things running on the server was spotty; some things were well documented and others not at all
+- Documentation of things running on the server was spotty; some things were well documented and others not at all.
 - Changes performed in the [nodejs/build](https://github.com/nodejs/build) repository needed to be deployed manually by a Build WG member with access. There was also no guarantee that what's in the build repository is what's actually on the server.
 - There's no staging environment other than a backup instance.
 - Rollbacks could only be done via disk images through the VPS providers' web portal.
@@ -141,7 +141,7 @@ To do this, we implemented four things:
 1. Any request to R2 that fails is retried 3 times (in addition to the retries that Workers already performs).
 2. A "fallback" system. Any request to R2 that fails all retries is rewritten to the old infrastructure.
 3. When an error does happen, it's recorded in [Sentry](https://sentry.io/welcome) and we're notified so we can take appropriate action.
-4. Slack alerts are in place for Sentry and for any critical point of failure in the Release Worker (ex/ deployment failure)
+4. Slack alerts are in place for Sentry and for any critical point of failure in the Release Worker (ex/ deployment failure).
 
 ### The Iterations
 
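
The retry-plus-fallback scheme described in the hunk above maps naturally onto a Cloudflare Workers fetch handler. Here is a minimal TypeScript sketch of that pattern — not the Release Worker's actual code: the `RELEASES` bucket binding, the `FALLBACK_ORIGIN` host, and the `console.error` stand-in for Sentry reporting are all assumptions for illustration.

```ts
// Sketch only: retry an R2 read a few times, then rewrite the request to the
// old infrastructure if every attempt fails.

interface Env {
  RELEASES: R2Bucket; // R2 bucket binding (assumed name, configured in wrangler.toml)
}

const FALLBACK_ORIGIN = 'https://origin.example.org'; // hypothetical old-infra host
const MAX_RETRIES = 3;

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).pathname.slice(1);

    // 1. Retry the R2 read up to MAX_RETRIES times.
    for (let attempt = 0; attempt < MAX_RETRIES; attempt++) {
      try {
        const object = await env.RELEASES.get(key);
        if (object !== null) {
          return new Response(object.body, { status: 200 });
        }
        break; // A clean "not found" is not an R2 failure, so don't retry it here.
      } catch (err) {
        // 3. Record the failure (the post uses Sentry; console.error stands in).
        console.error(`R2 read failed (attempt ${attempt + 1}):`, err);
      }
    }

    // 2. Fallback: rewrite the failed request to the old infrastructure.
    const fallbackUrl = new URL(request.url);
    fallbackUrl.host = new URL(FALLBACK_ORIGIN).host;
    return fetch(new Request(fallbackUrl.toString(), request));
  },
};
```
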
@@ -195,7 +195,7 @@ We still want to,
 
 - Look into any performance improvements that could be made.
   - This includes looking into integrating [Cloudflare KV](https://developers.cloudflare.com/kv/) for directory listings.
-- Have better tests and a better development environment ([PR!](https://github.com/nodejs/release-cloudflare-worker/pull/252))
+- Have better tests and a better development environment ([PR!](https://github.com/nodejs/release-cloudflare-worker/pull/252)).
 - Metrics to give us more visibility into how the Release Worker is behaving and if there's anything that we can improve.
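
The Cloudflare KV idea in the roadmap hunk above is still exploratory, but the likely shape is caching rendered directory listings so repeat requests skip R2 `list()` calls. A hedged TypeScript sketch follows; the `LISTINGS` binding name, the HTML format, and the TTL are assumptions, not the project's actual design.

```ts
// Sketch only: serve a directory listing from KV when cached, otherwise build
// it from an R2 list() call and cache it briefly.

interface Env {
  RELEASES: R2Bucket;   // R2 bucket binding (assumed name)
  LISTINGS: KVNamespace; // hypothetical KV binding for cached listings
}

async function directoryListing(prefix: string, env: Env): Promise<string> {
  // Serve a cached listing if one exists.
  const cached = await env.LISTINGS.get(prefix);
  if (cached !== null) return cached;

  // Otherwise list the objects under the prefix and render a simple page.
  const listed = await env.RELEASES.list({ prefix, delimiter: '/' });
  const html = listed.objects
    .map((o) => `<a href="/${o.key}">${o.key}</a>`)
    .join('\n');

  // Cache with a short TTL so new releases appear quickly (5 minutes, assumed).
  await env.LISTINGS.put(prefix, html, { expirationTtl: 300 });
  return html;
}
```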