
Prolonged high CPU usage when the net tables grow too big #166

Open
steve-chavez opened this issue Nov 6, 2024 · 3 comments
@steve-chavez
Member

Problem

There was a case where a user imported data causing millions of rows to be inserted into a table that had webhooks enabled. This caused high CPU usage for a prolonged period of time.

Proposal

With an in-memory queue, it's possible to bound it to a certain size. Once this size is surpassed, we could:

  • Block the producers of HTTP requests until there's more capacity in the queue.
  • Log an ERROR or WARNING.

Note

Spilling over to disk (i.e., inserting into another table) is not an option, as it would make usage more complex.
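The bounded-queue behavior described in the proposal could be sketched as follows. This is a minimal Python stand-in, not pg_net's actual C implementation; the queue size, function name, and timeout are illustrative assumptions:

```python
import logging
import queue

logging.basicConfig(level=logging.WARNING)

# Hypothetical bound; a real implementation would make this configurable.
MAX_QUEUE_SIZE = 1000

request_queue = queue.Queue(maxsize=MAX_QUEUE_SIZE)

def enqueue_request(req, timeout=1.0):
    """Block the producer until there is capacity in the queue.

    If capacity doesn't free up within `timeout` seconds, log a
    WARNING and report failure instead of growing without bound.
    """
    try:
        request_queue.put(req, timeout=timeout)
        return True
    except queue.Full:
        logging.warning(
            "request queue full (%d entries); producer backed off", MAX_QUEUE_SIZE
        )
        return False
```

With a bound like this, a bulk import that fires millions of webhook triggers would throttle the inserting transaction rather than piling up millions of pending rows.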

@riderx

riderx commented Nov 8, 2024

I'm struggling with this exact case.

@steve-chavez
Member Author

@riderx How many rows do you have in the net tables? I've seen this case before but I'd like to gather more data.

@begank

begank commented Nov 18, 2024

@steve-chavez Hello,
My net table once had 100,000 rows piled up, which forced me to divert some network requests and call them through my own service:

  • http1234 --> pg_net
  • http5678 --> my_pg_net, and I run a service that reads my_pg_net and sends the HTTP requests
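The diversion pattern described above (a secondary queue table drained by an external sender) could be sketched like this. This is a self-contained Python stand-in that uses sqlite3 in place of Postgres; only the table name my_pg_net comes from the comment, and the column names and batch size are illustrative assumptions:

```python
import sqlite3

# In-memory stand-in for the commenter's "my_pg_net" Postgres table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE my_pg_net "
    "(id INTEGER PRIMARY KEY, url TEXT, sent INTEGER DEFAULT 0)"
)

def enqueue(url):
    """Triggers would insert here instead of calling pg_net directly."""
    conn.execute("INSERT INTO my_pg_net (url) VALUES (?)", (url,))

def drain(batch_size=100):
    """One pass of the external sender service: fetch a batch of
    unsent rows, issue the HTTP calls, and mark them as sent."""
    rows = conn.execute(
        "SELECT id, url FROM my_pg_net WHERE sent = 0 LIMIT ?", (batch_size,)
    ).fetchall()
    for row_id, url in rows:
        # perform_http_request(url)  # hypothetical: the real service
        # would issue the HTTP call here, with its own retry policy.
        conn.execute("UPDATE my_pg_net SET sent = 1 WHERE id = ?", (row_id,))
    return len(rows)
```

Running the sender outside the database keeps the HTTP work off the Postgres background worker, at the cost of operating an extra service.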
