Hello, I have a model whose input and output are both float32. Feeding the same input, I ran TensorRT inference on the same GPU (RTX 4090) with both the C++ and Python APIs and got different floating-point outputs; the discrepancy starts at the third decimal place. Could you explain the reason for this? I look forward to your reply.
TensorRT version: 8.5.1.7
GPU: RTX 4090
@app-houqiangli Yes, as @lix19937 mentioned, make sure the C++ and Python TRT versions installed in your env are the same. If you're still noticing a discrepancy, please provide additional information and scripts to reproduce this issue.
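As an aside, differences around the third decimal place are within what float32 can produce when kernels accumulate in a different order or TensorRT picks different tactics for the two builds. A minimal sketch of comparing the two outputs with a tolerance instead of bit-exact equality (the array values below are illustrative, not from the actual model):

```python
import numpy as np

# Illustrative stand-ins for the outputs of the C++ and Python TensorRT runs.
cpp_out = np.array([0.123456, 0.654321, 1.000100], dtype=np.float32)
py_out = np.array([0.123987, 0.654800, 1.000900], dtype=np.float32)

# float32 carries only ~7 significant decimal digits, and accumulation-order
# differences can perturb the low digits, so compare with rtol/atol rather
# than expecting identical bits.
match = np.allclose(cpp_out, py_out, rtol=1e-2, atol=1e-3)
max_abs_diff = float(np.max(np.abs(cpp_out - py_out)))
print(match, max_abs_diff)
```

If `np.allclose` fails even with a loose tolerance, that points to a real setup difference (mismatched TRT versions, different engines, or different preprocessing) rather than ordinary float32 noise.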