-
I've modified it to run inference with slice batches: https://github.com/andresinsitu/sahi_custom/tree/batch_inf At the moment the batch size has to be a divisor of the number of slices; otherwise some slices are left without inference (but it works regardless).
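The divisor constraint mentioned above could likely be lifted by allowing the final batch to be smaller than the others. A minimal sketch, assuming a `predict_fn` callable with a list-in/list-out contract (an illustrative interface, not code from the linked fork):

```python
def sliced_batch_predict(image_slices, predict_fn, batch_size=8):
    """Run inference on image slices in fixed-size batches.

    The final batch may be smaller than `batch_size`, so no slice is
    skipped even when the batch size does not divide the slice count.
    `predict_fn` is assumed (hypothetically) to take a list of images
    and return a list of per-image results.
    """
    results = []
    for start in range(0, len(image_slices), batch_size):
        batch = image_slices[start:start + batch_size]
        results.extend(predict_fn(batch))
    return results
```

For example, with 13 slices and `batch_size=4`, this issues batches of 4, 4, 4 and 1, so every slice is covered regardless of divisibility.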
-
Hi @fcakyon ,
I am currently using SAHI with YOLOv8. I am not completely sure that SAHI runs batch inference, since I have observed in the code that only num_batch=1 is supported, and inference can take up to several seconds per complete image depending on the number of slices.
I was wondering how this package could be improved to run batch inference on the GPU, perhaps also performing the slicing directly on the GPU when the original image is loaded.
Is batch inference dependent on YOLOv8, or is SAHI creating the bottleneck? Several seconds per image is hard to justify, even with better small-object detection, for real-world/real-time applications such as security.
Best regards