Regarding the issue with memory usage on jtop #600
Apologies for my late reply.
Not all monitors report "used" memory the same way; htop and top, for example, differ. It really depends on what "used" means to you. I spent a lot of time making the output look familiar compared with other monitors (but if you know a better way to measure this, I would be really happy to implement it :-) )
My best understanding of how to make the memory figure reasonable is to use the equation (the same one used in the code):

Used = TOT - Free - (Buffers + Cached)
GPU Sh is the memory used by the GPU. The GPU inside an NVIDIA Jetson doesn't have dedicated memory of its own (unlike a discrete GPU); it uses the same physical memory as the CPU. They, in fact, share the same memory.
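A minimal sketch of how such figures can be derived from `/proc/meminfo` on Linux. This assumes (it is not confirmed in the thread) that on Jetson the shared GPU allocation appears as an `NvMapMemUsed` field in `/proc/meminfo`; the parser below runs against a sample string so it works anywhere:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of kB values."""
    fields = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            fields[key.strip()] = int(parts[0])  # value is in kB
    return fields

# Sample data roughly matching the figures discussed below.
# NvMapMemUsed is an ASSUMED field name for the shared GPU allocation.
sample = """\
MemTotal:       64393216 kB
MemFree:        19084544 kB
Buffers:          293888 kB
Cached:         15833088 kB
NvMapMemUsed:   12897280 kB
"""

mem = parse_meminfo(sample)
used_kb = mem["MemTotal"] - mem["MemFree"] - (mem["Buffers"] + mem["Cached"])
print(f"Used:   {used_kb / 1024**2:.1f} G")
print(f"GPU sh: {mem['NvMapMemUsed'] / 1024**2:.1f} G")
```

On a real Jetson you would read the text from `/proc/meminfo` instead of the sample string; field availability may differ across L4T releases.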
[Image missing: a data visualization comparing what is currently shown]
In my tests, I saw a Jetson crash, but I'm happy to hear it isn't happening anymore. :-) If something is not well explained or you have suggestions, I'm really happy to help. Best,
Hi @rbonghi, First of all, thank you for your detailed response. However, I still have some questions. You mentioned that GPU Sh represents the memory actively used by the GPU. Does this mean it could potentially increase over time with prolonged service usage, rather than being a fixed value? Previously, when using devices without shared memory, like consumer-grade GPUs, I relied on nvidia-smi to monitor memory usage, which appeared to remain constant. However, I've noticed that GPU Sh sometimes increases unexpectedly. For example, when the service first starts, GPU Sh usage is 27GB, but over time it grows to 35GB. This leaves me uncertain about how to determine the actual GPU memory requirements for my service on the Jetson AGX Orin.
Hi everyone,
Based on the formula in the code, it seems that "Used" does not include "GPU sh," which means my actual memory usage might be "Used + GPU sh," right? What I’d like to understand is how to interpret these memory statistics to determine the actual resources required to run my services. For example, in the following case:
Used: 28.1 G
GPU sh: 12.3 G
Buffers: 287 M
Cached: 15.1 G
Free: 18.2 G
TOT: 61.4 G
According to the formula in the code:
Used = TOT - Free - (Buffers + Cached) = 61.4 - 18.2 - (0.287 + 15.1) ≈ 27.8 G
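Plugging the numbers from the jtop output above into that formula gives roughly the reported figure (the small gap from the displayed 28.1 G is plausibly just rounding of the on-screen values). A quick check in Python:

```python
# Values quoted from the jtop output above, in GB.
tot, free, buffers, cached = 61.4, 18.2, 0.287, 15.1
gpu_sh = 12.3

# The formula from the code: Used = TOT - Free - (Buffers + Cached)
used = tot - free - (buffers + cached)
print(f"Used          = {used:.1f} G")          # close to the reported 28.1 G
print(f"Used + GPU sh = {used + gpu_sh:.1f} G")  # total if GPU sh is counted on top
```

If "Used" really excludes "GPU sh", then the service's total footprint would be the second figure, around 40 G in this example.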
This raises some questions: