Gearmand performance worse than Python gear in our production, is some configuration missing / not correct ? #393
Well, I imagine
You're also not comparing the same keepalive settings. I doubt it matters much, but you should compare the two implementations with the same settings.
Kubernetes? Are you using the Docker image from https://hub.docker.com/r/artefactual/gearmand/ ?
Thanks, esabol. What do you think about these two parameters with their default values, and what is their potential impact? The Docker image is built by ourselves. Thanks again for your quick support.
-b will only matter if you have a lot of churn in workers/clients.
For -f, that's likely not important unless you're seeing socket/file errors. Open file limits are mostly meant to stop runaway processes from eating up kernel resources. The only open files gearmand is going to use are sockets, plus a handful for things like logs or local sqlite files if you're using a background queue plugin. The user-level ulimit will be the highest they can go, so this would only be to reduce it anyway.
And it seems to me like that could be the case if one is doing performance testing with a trivial worker. So a higher value might be better in this arbitrary scenario? |
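To see how much headroom the open-file limit actually gives gearmand before touching -f, you can read the process limits directly. A minimal sketch using Python's standard `resource` module (Unix-only); the interpretation in the comments reflects the discussion above, not gearmand's own documentation:

```python
import resource

# Inspect this process's open-file limits (soft, hard).
# A daemon like gearmand can only raise its soft limit up to the
# hard limit inherited from the OS / ulimit; a flag like -f can
# effectively only move the soft limit within that ceiling.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")
```

If the soft limit is already far above the number of concurrent client/worker sockets you expect, tuning -f is unlikely to change benchmark results.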
After several trials on our production CI: during busy development periods there are over 20k gear tasks, and, for example, submitting a task from a client can take over 1 second. Checking the server debug log, most of the time goes to processing packets (CAN_DO and PRE_SLEEP). Workers register like this:

build:production/24r2/xxxxx-lte-newagent 0 0 1671

It seems there is only one "proc" thread processing the received packets, one by one, which means many tasks (launching tests) can't be submitted in time, since the client must wait for the gear server's response (the job handle).

Is there any special configuration or method to make it faster? One idea is to submit tasks asynchronously in the client: still use "submit_job", but handle the gear server's response (to get the handle) via a callback or similar mechanism. I am not sure whether that is available; I will verify it in a test environment. I would greatly appreciate any comments. Big thanks for your continuous support.
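The async-submission idea above can be sketched without any gearman library: fan submissions out over a thread pool so a single slow server round-trip does not serialize the whole queue, and deliver each job handle to a callback. `submit_job` here is a hypothetical stand-in for a real blocking client call, not the actual gear or gearmand API:

```python
from concurrent.futures import ThreadPoolExecutor

def submit_job(name, payload):
    # Hypothetical placeholder: a real client.submit_job() would
    # block on the server round-trip and return the job handle.
    return f"H:{name}:{hash(payload) & 0xffff:x}"

def submit_async(tasks, on_handle, max_workers=16):
    # Submit all tasks concurrently; invoke the callback with each
    # handle as its submission completes.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(submit_job, name, payload)
                   for name, payload in tasks]
        for fut in futures:
            on_handle(fut.result())

handles = []
submit_async([("build", b"a"), ("build", b"b")], handles.append)
print(len(handles))  # 2
```

Whether this helps depends on the server: if gearmand's single "proc" thread is the bottleneck, concurrent submissions only move the queueing from the client to the server.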
@pythonerdog asked:
Beyond what we've already told you? Probably not. What command line options are you currently using to start gearmand? 20K tasks seems kind of crazy. Is that in a single job submission? If so, it seems reasonable to me that that would take 1.5 seconds.
In our production, Zuul acts as a gear client, and the Jenkins gearman plugin acts as a gear worker (each Jenkins node executor is registered as a gear worker).
gearmand -t 0 --job-retries 1 --keepalive --keepalive-idle 600 --keepalive-count 9 --keepalive-interval 75 --verbose DEBUG -p 4730
GearServer(4730, host="0.0.0.0", statsd_prefix='zuul.geard', keepalive=True, tcp_keepidle=100, tcp_keepintvl=30, tcp_keepcnt=5)
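Note that the two configurations above use different keepalive values (gearmand: idle 600, interval 75, count 9; Python gear: idle 100, interval 30, count 5), which is the mismatch esabol pointed out earlier. A minimal sketch of aligning a client socket with the gearmand-side values, using Linux-specific TCP socket options from Python's standard library:

```python
import socket

def enable_keepalive(sock, idle=600, interval=75, count=9):
    """Apply TCP keepalive settings matching the gearmand flags
    --keepalive-idle / --keepalive-interval / --keepalive-count.
    TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT are Linux-specific names."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(s)
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE))
s.close()
```

Keepalive only governs how quickly dead connections are detected, so it is unlikely to explain a near-2x throughput gap, but matching the settings removes one variable from the comparison.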
Running in the same production environment (Kubernetes):
With gearmand, 13000 tasks can be consumed per hour
With Python gear, 24000 tasks can be consumed per hour
Any comments on this? Thanks very much.
PS: next, we will continue trying to enable gearmand multi-threading with the "-t" parameter for further verification.