Conversation
Review requested:
I'm unsure about the relationship between the stream window size (initialWindowSize) and the connection window size (connectionWindowSize), but I assume a 1:2 ratio is fine. 🤷
This affects UPLOAD speed, i.e. data sent to the server.
lib/internal/http2/core.js
Outdated
```diff
   this.emit('session', session);
 }

 function initializeOptions(options) {
   assertIsObject(options, 'options');
-  options = { ...options };
+  options = { ...getDefaultSettings(), ...options };
```
We should actually apply the default settings rather than just assume they are the same as what nghttp2 will apply.
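A minimal sketch of the merge order being discussed, assuming names from the diff; `getDefaultSettings` and its value here are illustrative stand-ins, not the real implementation:

```javascript
// Illustrative stand-in for getDefaultSettings(); the value below is
// just the RFC 7540 SETTINGS_INITIAL_WINDOW_SIZE default, not
// necessarily what Node.js ships.
function getDefaultSettings() {
  return { initialWindowSize: 65535 };
}

function initializeOptions(options = {}) {
  // Defaults are spread first, so any user-supplied key overrides them.
  return { ...getDefaultSettings(), ...options };
}
```

Spreading `getDefaultSettings()` first means every default is applied explicitly in JS, rather than relying on nghttp2 filling in the same values implicitly.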
Force-pushed eae93cc to f6fa8fa (Compare)
Codecov Report ✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##              main   #61036      +/-   ##
===========================================
- Coverage    88.58%   88.49%   -0.10%
===========================================
  Files          704      703       -1
  Lines       207815   208557     +742
  Branches     40036    40198     +162
===========================================
+ Hits        184102   184571     +469
- Misses       15758    16019     +261
- Partials      7955     7967      +12
```
mcollina
left a comment
lgtm
I think this might require a doc change too
My understanding is that these are overlapping limits: if sending any byte would exceed the receiver's window for the connection or the window for the specific stream, you have to wait. That means a 1:2 ratio only lets two streams fully use their windows at any time; beyond that, streams fight for bandwidth. Similarly, if you're handling lots of streams on a single connection, two busy streams can starve all the others.

Concrete numbers: with this specific setting, i.e. 256KB/512KB stream/connection windows on a 100ms-latency connection, each stream can use ~2.5MB/s (i.e. 20Mbps) and the entire connection across all streams can use 5MB/s.

Imo 1:2 still seems a touch conservative; I'd be happy to go a few multiples further. But if we want to really get into this we'd need to measure a bunch of scenarios in a lot more detail, and trying to do some kind of dynamic optimization instead, as mentioned in Slack, would make more sense anyway. Still a very significant improvement regardless, so I'm all aboard as-is, very happy to ship this 👍. We can bikeshed & tweak further later as feedback comes in.
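The back-of-envelope numbers above follow from the flow-control ceiling (at most one receive window of data can be in flight per round trip); a quick sketch using the 256KB/512KB windows and 100ms latency from the comment:

```javascript
// Flow-control throughput ceiling: at most one receive window of data
// can be in flight per round trip, so throughput <= window / RTT.
function maxThroughput(windowBytes, rttSeconds) {
  return windowBytes / rttSeconds; // bytes per second
}

const KB = 1024;
const MB = 1024 * KB;
const rtt = 0.1; // 100 ms latency

const perStream = maxThroughput(256 * KB, rtt);     // per-stream ceiling
const perConnection = maxThroughput(512 * KB, rtt); // whole-connection ceiling

console.log(`${perStream / MB} MB/s per stream`);         // 2.5 MB/s per stream
console.log(`${perConnection / MB} MB/s per connection`); // 5 MB/s per connection
```

The same formula shows why the ratio matters: once `ceil(connectionWindow / streamWindow)` streams are busy, every additional stream only redistributes the connection-level budget.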
Failed to start CI
- Validating Jenkins credentials ✔ Jenkins credentials valid
- Querying data for job/node-test-pull-request/70522/ [SyntaxError: Unexpected token '<', ...]
https://github.com/nodejs/node/actions/runs/24120621083
The current defaults are unnecessarily conservative which makes http2 control flow over high latency connections (such as public internet) unbearably slow.
RafaelGSS
left a comment
Shouldn't it be flagged as semver-major?
Failed to start CI
- Validating Jenkins credentials ✔ Jenkins credentials valid
- Querying data for job/node-test-pull-request/70522/ [SyntaxError: Unexpected token '<', ...]
https://github.com/nodejs/node/actions/runs/24164427109