Commit 896abbc

flakey5, avivkeller, and aduh95 authored
Apply suggestions from code review
Co-authored-by: Aviv Keller <[email protected]>
Co-authored-by: Antoine du Hamel <[email protected]>
Signed-off-by: flakey5 <[email protected]>
1 parent e7ef7d0 commit 896abbc

1 file changed

apps/site/pages/en/blog/announcements/making-nodejs-downloads-reliable.md

Lines changed: 5 additions & 5 deletions
@@ -6,7 +6,7 @@ layout: blog-post
 author: flakey5
 ---
 
-Last year, we shared [the details behind Node.js' brand new website](https://nodejs.org/en/blog/announcements/diving-into-the-nodejs-website-redesign).
+Last year, we shared [the details behind Node.js's brand new website](https://nodejs.org/en/blog/announcements/diving-into-the-nodejs-website-redesign).
 Today we're back, talking about the new infrastructure serving Node.js' release assets.
 
 This blog post goes into what Node.js' web infrastructure looks like, its history, and where it stands today.
@@ -23,7 +23,7 @@ After the Node.js and io.js merge in 2015, io.js' VPS (which will be referred to
 A backup server was also created, which acted almost like an exact copy of the origin server.
 It served two main purposes:
 
-1. Serve any traffic that the origin server couldn't handle
+1. Serve any traffic that the origin server couldn't handle.
 2. Serve as a backup for the binaries and documentation in case if something went wrong with the origin server.
 
 The entire architecture looked like this:
@@ -68,7 +68,7 @@ So, everyday at roughly midnight UTC, the origin server got effectively DDoS'ed
 
 There were also a handful of other issues with the origin server pertaining to its maintenance:
 
-- Documentation of things running on the server was spotty; some things were well documented and others not at all
+- Documentation of things running on the server was spotty; some things were well documented and others not at all.
 - Changes performed in the [nodejs/build](https://github.com/nodejs/build) repository needed to be deployed manually by a Build WG member with access. There was also no guarantee that what's in the build repository is what's actually on the server.
 - There's no staging environment other than a backup instance.
 - Rollbacks could only be done via disk images through the VPS providers' web portal.
@@ -141,7 +141,7 @@ To do this, we implemented four things:
 1. Any request to R2 that fails is retried 3 times (in additon to the retries that Workers already performs).
 2. A "fallback" system. Any request to R2 that fails all retries is rewritten to the old infrastructure.
 3. When an error does happen, it's recorded in [Sentry](https://sentry.io/welcome) and we're notified so we can take appropriate action.
-4. Slack alerts are in place for Sentry and for any critical point of failure in the Release Worker (ex/ deployment failure)
+4. Slack alerts are in place for Sentry and for any critical point of failure in the Release Worker (ex/ deployment failure).
 
 ### The Iterations
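As an aside on points 1 and 2 in that hunk: the retry-then-fallback flow they describe could look roughly like the sketch below. This is a minimal illustration only, assuming a Worker with an R2 bucket binding named `R2_BUCKET` and a fallback hostname `ORIGIN_HOST` pointing at the old infrastructure; both names are hypothetical and not taken from the actual Release Worker.

```ts
interface Env {
  R2_BUCKET: R2Bucket; // hypothetical binding name
  ORIGIN_HOST: string; // old infrastructure to fall back to (hypothetical)
}

const R2_RETRIES = 3;

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const key = url.pathname.slice(1); // e.g. "dist/v20.0.0/node-v20.0.0.tar.gz"

    // 1. Retry the R2 read up to 3 times (on top of Workers' own retries).
    for (let attempt = 1; attempt <= R2_RETRIES; attempt++) {
      try {
        const object = await env.R2_BUCKET.get(key);
        if (object === null) {
          return new Response('Not found', { status: 404 });
        }
        return new Response(object.body);
      } catch (err) {
        // 3. This is where the error would be recorded (e.g. in Sentry)
        //    before the next attempt.
      }
    }

    // 2. All retries failed: rewrite the request to the old infrastructure.
    url.hostname = env.ORIGIN_HOST;
    return fetch(new Request(url.toString(), request));
  },
};
```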
@@ -195,7 +195,7 @@ We still want to,
 
 - Look into any performance improvements that could be made.
   - This includes looking into integrating [Cloudflare KV](https://developers.cloudflare.com/kv/) for directory listings.
-- Have better tests and a better development environment ([PR!](https://github.com/nodejs/release-cloudflare-worker/pull/252))
+- Have better tests and a better development environment ([PR!](https://github.com/nodejs/release-cloudflare-worker/pull/252)).
 - Metrics to give us more visibility into how the Release Worker is behaving and if there's anything that we can improve.
 
 ## Thanks
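On the Cloudflare KV item in that last hunk: the idea, as we read it, would be to cache generated directory listings in KV so repeat requests don't have to re-list R2. A rough sketch under that assumption, using a hypothetical KV namespace binding `LISTINGS` (again, not actual Release Worker code):

```ts
interface Env {
  LISTINGS: KVNamespace; // hypothetical KV binding for cached listings
  R2_BUCKET: R2Bucket;
}

// Return a cached directory listing if present; otherwise build one from
// R2 and cache it briefly in KV.
async function getDirectoryListing(prefix: string, env: Env): Promise<string> {
  const cached = await env.LISTINGS.get(prefix);
  if (cached !== null) {
    return cached;
  }

  // List objects under the directory prefix and render a plain listing.
  const listed = await env.R2_BUCKET.list({ prefix });
  const listing = listed.objects.map((obj) => obj.key).join('\n');

  // Cache for 60 seconds (KV's minimum expirationTtl) so listings stay fresh.
  await env.LISTINGS.put(prefix, listing, { expirationTtl: 60 });
  return listing;
}
```

KV's minimum `expirationTtl` is 60 seconds, which also bounds how stale a cached listing could get.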
