Handle too many connections to Wayback automatically #525
Comments
These are meant to partially address #525. Using memento objects (which are just requests response objects) means they are explicitly closed when we are done with them, and our HTTP connections might be managed a little better. Warning on resets will let us examine logs to see how other future fixes improve (or don't) the problem.
When we handle memento redirects, we should have been loading all the content and then closing the response so that the connection is released for re-use; otherwise it just gets left hanging around :( This contributes to solving some of the issues over in edgi-govdata-archiving/web-monitoring-processing#525
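The redirect handling described above can be sketched roughly like this. This is a simplified illustration, not the actual code from the linked PR; `follow_redirects` and its `session` parameter are hypothetical stand-ins for the real `WaybackSession` logic:

```python
import requests

def follow_redirects(session, url, max_redirects=5):
    # Follow redirects by hand instead of letting requests do it.
    # (Hypothetical helper; `session` stands in for a WaybackSession.)
    for _ in range(max_redirects):
        response = session.get(url, allow_redirects=False)
        if not response.is_redirect:
            return response
        # Read the (usually empty) body and close the response so the
        # underlying connection is released back to the pool for re-use,
        # instead of being left hanging open.
        _ = response.content
        response.close()
        url = response.headers['Location']
    raise requests.TooManyRedirects(f'Exceeded {max_redirects} redirects')
```

The key detail is that every intermediate response in the chain gets read to the end and explicitly closed before the next request is made.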
All the work that is unique to this repo has now been done! Ideally we want to get thread-safety for Wayback in edgi-govdata-archiving/wayback#23 (or some flavor of it) done soon, and we'll then update code here to make use of it.

Update: over on the Wayback side, edgi-govdata-archiving/wayback#52 is about to land. It's the last stepping stone required before we can switch away from requests to make it thread-safe.
Long ago, we worked around an issue where we were getting lots of connection failures from Wayback with a dirty hack: if we ran out of retries but still had a failure to establish a new connection, we’d try resetting the whole session (only once!) and start over:
web-monitoring-processing/web_monitoring/cli.py
Lines 264 to 273 in 74b31b3
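A hack like the one described above might look roughly like this. This is a sketch, not the actual code at the lines referenced; `client` and its `reset_session()` method are assumptions, not the real web-monitoring-processing API:

```python
def load_url_with_reset(client, url):
    # Hypothetical sketch of the "dirty hack": if retries are exhausted
    # and we still can't establish a connection, rebuild the whole
    # session once and try again from the top.
    try:
        return client.get(url)
    except ConnectionError:
        # Throw away the session (and its connection pool) and retry
        # exactly once with a fresh one.
        client.reset_session()
        return client.get(url)
```

It "works" because rebuilding the session discards the stuck connection pool, but it masks the underlying problem of how connections are being held.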
At the time, I knew this was a kind of ugly hack, and probably masking some underlying issues. After some discussion with @danielballan and some spelunking through the source of `requests` and `urllib3` today, we realized there were two problems:

1. We might be holding open connections for an unnecessarily long time. This can be fixed by explicitly closing our response objects.
@danielballan did this for `WaybackClient.search` in Explicitly close response when search is done. wayback#19. I need to do this when we load Mementos here in `cli.py`:

web-monitoring-processing/web_monitoring/cli.py
Lines 260 to 263 in 74b31b3

(Update: we reliably read the whole response, which closes the connection, so we are actually OK here.)
2. But that doesn't cover everything. Ultimately, `requests` buries some features around connection pooling from `urllib3` in a way that means each `WaybackSession` (we have one per thread, so 36 in production right now) could possibly create an unlimited number of connections and then hold onto up to 10 (so we could wind up attempting to hold 360 open connections to Wayback!). Some things we can do to be smarter about this:

- Just reduce the number of threads in production! Already on this, but it doesn't make the problem obvious or clear if you scale up too far. We should ultimately do better.
- When `get_memento()` redirects, make sure we read the body and close the response before moving on to the next in the redirect chain. (Release connections for memento redirects wayback#20)
- Automatically step down the number of memento threads (to a point) if one gets too many connections refused. (How many is too many? Two? Five? Probably > 1 to differentiate between something spurious and Wayback actually telling us we have too many connections open. Then again, maybe the retry configuration can be partially responsible for this? Much nuance here.) We have to make sure we re-queue the `CdxRecord` that was being loaded in this case. The error we'd see to trigger this would include `Failed to establish a new connection: [Errno 111] Connection refused`.
(Update: I prototyped this out and it didn’t actually improve things over just sharing a connection pool between threads. See Use a shared HTTPAdapter across all threads #551 for details.)
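Before it was abandoned, the step-down idea above might have been sketched something like this. Every name here is an assumption for illustration, not the real `cli.py` code: recognize the refusal by its error text, re-queue the failed `CdxRecord`, and tell the caller when enough refusals have piled up that it should shrink its thread count:

```python
import queue

# The exact error text mentioned above, used to recognize refusals.
CONNECTION_REFUSED = ('Failed to establish a new connection: '
                      '[Errno 111] Connection refused')

def handle_load_error(error, record, work_queue, counts, threshold=2):
    # Hypothetical sketch: re-queue the record that failed and report
    # whether refusals have crossed the step-down threshold.
    if CONNECTION_REFUSED not in str(error):
        raise error  # some other failure; let normal handling take over
    work_queue.put(record)  # make sure this record gets retried
    counts['refused'] = counts.get('refused', 0) + 1
    return counts['refused'] >= threshold
```

The threshold exists for exactly the nuance described above: a single refusal might be spurious, while several in a row suggest Wayback really is telling us we have too many connections open.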
- Expose ways to set `pool_maxsize` and `pool_block` on the `requests.HTTPAdapter` instances used by `WaybackSession` so users can control this better. (Update: see Use a shared HTTPAdapter across all threads #551 and Sketch out a way to support multithreading wayback#23)
- Make sure `WaybackClient`, `WaybackSession`, `requests.Session`, and `requests.HTTPAdapter` are thread-safe, and use one client with appropriate `pool_maxsize` and `pool_block` parameters on its `HTTPAdapter` instead of one client per thread. This is the most resilient solution, but also the most questionable: we can make our own stuff thread-safe by fixing `DisableAfterCloseSession`, but there seems to be deep confusion among both users and maintainers of `requests` as to its thread-safety (`urllib3` is all good). (Update: Given the issues in the `requests` package, see Use a shared HTTPAdapter across all threads #551 for a practical, short-term workaround in this repo and Sketch out a way to support multithreading wayback#23 for one approach to solving this better.)