Change table queue to in-memory queue and add callbacks #62
Ohh, so that's what you meant by callbacks. Nice! Also like that this makes the impl a lot simpler and more akin to typical HTTP clients. How would you go about implementing the in-memory queue? IIRC when I looked into it, passing info from the client process to the bgworker process was nontrivial.
that looks great! Is it correct that all http status codes (e.g. including 500) would still go through the `success_cb`? The proposed `http_get` returns void -- no id is returned right now. I think it would still be good to return a `request_id`:

```sql
select net.http_get(
  ...
  , success_cb := $$ -- $1=status, $2=headers, $3=body, $4=request_id $$
  , error_cb := $$ -- $1=url, $2=error_message, $3=request_id $$
); -- result is uuid (request_id)
```

so if people are firing off a large number of requests, or want to associate a request with a response, they'll have the tools they need. If not, a backup solution might be:

```sql
create or replace function net.http_get(
  -- ..,
  success_cb regproc default null,
  error_cb regproc default null
)
-- ...
as $$
-- ...
$$ language sql;
```

where the function signature matches the callback requirements
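For illustration, a minimal sketch of the `regproc` backup idea under the signatures sketched above; the `on_success` function, the `responses` table, and the `url` named parameter are hypothetical:

```sql
-- callback whose signature matches the proposed contract:
-- ($1=status, $2=headers, $3=body, $4=request_id)
create or replace function on_success(status int, headers jsonb, body text, request_id uuid)
returns void as $$
  insert into responses values (request_id, status, headers, body);
$$ language sql;

-- pass the callback by name; the extension would resolve and invoke it
select net.http_get(
  url := 'https://example.com',
  success_cb := 'on_success'::regproc
);
```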
Should be possible with the shared memory queue (`shm_mq`).
There's an example in postgres/src/test/modules/test_shm_mq for how to use the shared memory queue.
Yes, correct. Things like "unable to resolve host" would be reported as well, through the `error_cb`.
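A sketch of that routing under the proposed signatures (the table names are hypothetical): any HTTP response, 2xx and 5xx alike, would invoke `success_cb`, while transport failures with no response at all would invoke `error_cb`.

```sql
select net.http_get(
  url := 'https://doesnotexist.invalid',
  -- invoked for any HTTP response, including 429/500
  success_cb := $$ insert into responses(status, headers, body) values ($1::int, $2::jsonb, $3) $$,
  -- invoked when no response arrives, e.g. "unable to resolve host"
  error_cb := $$ insert into request_errors(url, error_message) values ($1, $2) $$
);
```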
I think we should leave that to be done externally, users can pass an id on the callback like:

```sql
select net.http_get(
  -- url, headers, body..
  , success_cb := format($$insert into responses values(%s, $1, $2, $3)$$, id)
);
```

And this: yes, `SPI_execute_with_args` should take care of that (binding the `$1`, `$2`, `$3` parameters). I've also thought about supporting something like ...
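To illustrate the externally-managed id pattern, a hypothetical end-to-end sketch (the `requests` and `responses` tables are made up; `%L` quotes the generated id as a literal):

```sql
-- the caller mints and stores its own id, then bakes it into the callback
with req as (
  insert into requests(id, url)
  values (gen_random_uuid(), 'https://example.com')
  returning id, url
)
select net.http_get(
  url := req.url,
  success_cb := format($$ insert into responses values (%L, $1, $2, $3) $$, req.id)
)
from req;
```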
cool! didn't know about that
ah nice, that works
good point
I'm thinking that moving to an in-memory queue would allow for http requests during PostgREST `GET` requests? During those, the transaction runs as read-only, so the insert into the queue table currently fails.
Yes, since an insert into a table would no longer be needed, it would work in read-only transactions.
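A sketch of that limitation with the current table-backed queue (this assumes `net.http_get` enqueues by inserting into `net.http_request_queue`, the table this issue proposes dropping):

```sql
begin;
set transaction read only;
-- the enqueueing insert is rejected in a read-only transaction:
--   ERROR:  cannot execute INSERT in a read-only transaction
select net.http_get(url := 'https://example.com');
rollback;
```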
We would lose observability if we go this route; currently customers can do:

```sql
-- requests with fail/success HTTP status codes
select count(*), status_code from net._http_response group by status_code;
```

```
 count | status_code
-------+-------------
   448 |         429
   821 |         500
  1182 |         408
  2567 |         200
```

```sql
-- rate limiting error
select content from net._http_response where status_code = 429 limit 1;
```

```
                          content
------------------------------------------------------------
 <html><body><h1>429 Too Many Requests</h1>                +
 You have sent too many requests in a given amount of time.+
 </body></html>
```

Which is great for debugging and finding out why a webhook receiver might be failing. To increase throughput, we could make the net tables UNLOGGED and remove indexes and PKs from them (this would also solve #44). An alternative could be logging the failed http requests; then the user would have to search the pg logs.
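For concreteness, a sketch of the UNLOGGED idea; `ALTER TABLE ... SET UNLOGGED` is standard Postgres, while the constraint name below is only a default-naming guess and therefore hypothetical:

```sql
-- trade crash-durability for throughput: stop WAL-logging the net tables
alter table net._http_response set unlogged;
alter table net.http_request_queue set unlogged;

-- optionally drop the PK to cut per-insert index maintenance
alter table net.http_request_queue drop constraint http_request_queue_pkey;
```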
+1 for just logging errors; in general, making it as stateless as possible should help with stability.
Yeah, agree. Also, I forgot about #49; observability would be better with logging.
Any plans to implement this new feature in the near future?
Reasons

- An insert into the `_http_response` table must be done for every request; this reduces throughput. There are cases where the client doesn't care about the response, so it doesn't need to be persisted. For example, function hooks don't do anything with the response.
- The `http_request_queue` table can grow big with many requests and cause production issues (internal link).

Proposal

Drop the `_http_response` and `http_request_queue` tables and instead use an in-memory queue, plus add two callbacks, `success_cb` and `error_cb`, which can be used as in the examples in the comments above (see the sketch below).

Pros

- `error_cb` can also be an insert/update on a table, so the request can be retried if needed.

@olirice @soedirgo WDYT?
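Based on the callback signatures discussed in the comments, a hypothetical sketch of the retry pro: record transport failures in a table, then re-fire them later (the `failed_requests` table and the cron-style retry are made up):

```sql
-- hypothetical table collecting failures for later retry
create table failed_requests(
  url text,
  error_message text,
  failed_at timestamptz default now()
);

-- on transport failure, record the url so the request can be retried
select net.http_get(
  url := 'https://example.com/webhook',
  error_cb := $$ insert into failed_requests(url, error_message) values ($1, $2) $$
);

-- later, e.g. from a cron job: re-fire the failed requests
select net.http_get(url := f.url) from failed_requests f;
```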