TLS Record Size issues #190
Andre, thanks for flagging this. For KeyCDN in particular, doing Git archeology yields #85 where @svenba added the "configurable" tag. To your point though, per discussion in the thread it sounds like it was fixed at 8KB (perhaps it has changed since) and it's not clear if and how those values can be updated. Based on your experience, I'm inclined to suggest that we flip it back to "no", until and unless we can find documentation or guidance that proves otherwise.
A good implementation will start with small records and then ramp to larger size to reduce framing overhead, then reset after some idle time. As such, if the CDN edge server has this implemented, you shouldn't have to worry about the tradeoff you're asking about.
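The ramp-and-reset behaviour described above can be sketched roughly as follows. This is an illustrative model, not any particular server's implementation; the thresholds (`SMALL_RECORD`, `RAMP_THRESHOLD`, `IDLE_RESET_S`) are made-up values for demonstration:

```python
import time

# Illustrative sketch of dynamic TLS record sizing: start with small
# records, ramp to large ones once enough bytes have been sent, and
# reset to small records after an idle gap (when the congestion window
# has likely collapsed). All thresholds are hypothetical.
SMALL_RECORD = 1400           # ~1 MTU: low time-to-first-byte
LARGE_RECORD = 16 * 1024      # TLS maximum: low framing overhead
RAMP_THRESHOLD = 1024 * 1024  # bytes sent before switching to large records
IDLE_RESET_S = 1.0            # idle seconds before falling back to small records

class RecordSizer:
    def __init__(self):
        self.bytes_sent = 0
        self.last_send = None

    def next_record_size(self, now=None):
        now = now if now is not None else time.monotonic()
        # After an idle gap, restart with small records.
        if self.last_send is not None and now - self.last_send > IDLE_RESET_S:
            self.bytes_sent = 0
        self.last_send = now
        size = SMALL_RECORD if self.bytes_sent < RAMP_THRESHOLD else LARGE_RECORD
        self.bytes_sent += size
        return size
```

With something like this at the edge, early records fit in the first congestion window (fast first byte) while bulk transfer still gets full 16K records.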
Good point. No, I think we can clean that up: s/dynamic/yes. Would you be willing to make a PR to update some of these? :)
1. Replace 'dynamic' with 'yes';
2. KeyCDN uses unconfigurable 16K TLS records.
Just to clarify: KeyCDN's default TLS record size is 8K, and we can change the value on request, per Zone.
First of all, I'd like to thank you, Ilya, for all the hard work in the field of web performance optimization. No one can explain technical (sometimes complicated) topics in such simple words as you can.
Now I'd like to draw your attention to the Dynamic record sizing metric you use to compare different servers and CDNs.
Recently I have found that 16K records are not so rare among providers who apparently use NGINX. The problem is that such a configuration adds an extra RTT to the response delay. One of those providers is KeyCDN. It is in your list, and its TLS buffer size is recorded as configurable (static) and marked as a warning (yellow). But it's not actually configurable, as far as I could find. And when I contacted their support about the 2*RTT TTFB and pointed them to ssl_buffer_size, they told me that I should use a custom certificate instead of Let's Encrypt's to solve this issue.
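For reference, in stock NGINX the record size under discussion is controlled by the `ssl_buffer_size` directive (its documented default is 16k), so an operator with config access could cap it along these lines. The certificate/key lines are omitted; the 4k value is just an example choice:

```nginx
server {
    listen 443 ssl;
    # certificate/key directives omitted
    ssl_buffer_size 4k;  # default is 16k; smaller records cut time-to-first-byte
}
```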
I don't know whether other providers really allow clients to configure the TLS buffer size. Probably Akamai does (according to their community forum). But what is more important for CDN clients: having dynamic sizing behaviour, or not using large records during TCP slow start? Shouldn't those providers who have unconfigurable 16K records be marked as an alert (red)?
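The "extra RTT" point can be checked with back-of-envelope arithmetic: a TLS record must arrive in full before it can be decrypted, so a record larger than the first congestion window costs an extra round trip for the first byte. The numbers below (1460-byte MSS, initial window of 10 segments per RFC 6928, window doubling per round) are typical, not universal:

```python
# Why a 16K TLS record can cost an extra RTT during slow start.
MSS = 1460        # typical TCP payload per segment
INITCWND = 10     # common initial congestion window (RFC 6928)
first_flight = MSS * INITCWND   # 14600 bytes deliverable in the first RTT

record_16k = 16 * 1024          # 16384 bytes: does not fit in one flight
record_4k = 4 * 1024            # 4096 bytes: fits easily

def rtts_to_first_record(record_size, cwnd_bytes=first_flight):
    # Slow start roughly doubles the window each round trip; count how
    # many round trips pass before a full record has been delivered.
    rtts, delivered = 1, cwnd_bytes
    while delivered < record_size:
        cwnd_bytes *= 2
        delivered += cwnd_bytes
        rtts += 1
    return rtts
```

Under these assumptions, a 16K record needs two round trips before the browser can decrypt its first byte, while a 4K record needs one.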
And a couple of minor notes about record sizes for servers.
Cloudflare reported that they published the NGINX patch they use to implement dynamic sizing: https://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency/. Maybe it makes sense to include it in the comparison.
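If I read the blog post correctly, the patch exposes the ramp/reset behaviour as new `ssl_dyn_rec_*` directives. The snippet below is my understanding of their shape; the exact names and defaults should be checked against the published patch source:

```nginx
server {
    listen 443 ssl;
    # Directives added by Cloudflare's dynamic TLS records patch
    # (names per the blog post; verify against the patch itself):
    ssl_dyn_rec_enable on;       # turn on dynamic record sizing
    ssl_dyn_rec_threshold 40;    # records sent before ramping up
    ssl_dyn_rec_timeout 1000;    # idle ms before resetting to small records
}
```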
Some servers have yes, but some have dynamic. Is there a difference?