Search before asking
I have searched the YOLOv5 issues and discussions and found no similar questions.
Question
I used:
python export.py --weights runs/train/v1/weights/best.pt --include onnx
to export the ONNX model. runs/train/v1/weights/best.pt is only 14 MB, but the exported ONNX model is 28 MB. Why is the exported model so large? I didn't use --half because the model runs on the CPU.
Additional
No response
The exported ONNX model can be larger than the original PyTorch weights due to differences in serialization formats and, most importantly, in weight precision: YOLOv5 strips the optimizer from best.pt and saves the final weights in FP16, while an ONNX export without --half serializes the weights in FP32, which roughly doubles the file size (14 MB → 28 MB). The larger size is not indicative of any performance issue.
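You can confirm this by inspecting the checkpoint's parameter dtype. A minimal sketch, assuming a standard YOLOv5 best.pt and that it is run from the YOLOv5 repo root (so the pickled model classes can be resolved):

import torch

# YOLOv5 stores the model object under the "model" key of the checkpoint dict.
# (On newer PyTorch versions you may need weights_only=False to unpickle it.)
ckpt = torch.load("runs/train/v1/weights/best.pt", map_location="cpu")
print(next(ckpt["model"].parameters()).dtype)  # typically torch.float16 for a stripped best.pt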
If you want to reduce the size of the exported ONNX model further, you can try quantization techniques offered by frameworks like ONNX Runtime or OpenVINO. These can reduce the model size without significantly affecting inference performance; a sketch follows below.
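For example, a minimal post-export sketch using ONNX Runtime's dynamic quantization (the input path matches your export command above; the output path is just an illustration):

from onnxruntime.quantization import quantize_dynamic, QuantType

# Quantize the FP32 weights to 8-bit integers, typically shrinking the file to
# roughly a quarter of the FP32 size; activations are quantized dynamically at
# inference time, so no calibration dataset is needed.
quantize_dynamic(
    "runs/train/v1/weights/best.onnx",       # model produced by export.py
    "runs/train/v1/weights/best-int8.onnx",  # hypothetical output path
    weight_type=QuantType.QUInt8,
)

It is worth benchmarking the quantized model on your CPU target afterwards, since the accuracy/size trade-off depends on the model and data.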
Feel free to explore these options, and let us know if you have any further questions.
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐