[Bug] Models not JIT traceable/exportable to TorchScript after Fantasization #2604
Comments
What if we want to differentiate some downstream computation of the fantasized model w.r.t. the training inputs (or the fantasy location)? Always detaching this would prevent that. I guess we could detach it only when trying to JIT the model instead?
We already have a …
@jacobrgardner yes, I think that's correct. It's just a bug that the … So, I think it should work if we can just detach …
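As a rough, hypothetical sketch of the "detach only when tracing" idea discussed above (not the actual patch in the linked PR), the detach could be gated on GPyTorch's `trace_mode` setting; the helper name and surrounding structure below are invented for illustration, and the real change would live inside GPyTorch's exact prediction strategy:

```python
import torch
import gpytorch


def maybe_detach_fantasy_cache(new_covar_cache: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: detach the fantasy covariance cache only while tracing.

    Outside of trace mode the cache stays in the autograd graph, so downstream
    computations can still be differentiated w.r.t. the training/fantasy inputs.
    """
    if gpytorch.settings.trace_mode.on():
        # Drop gradient tracking only for JIT tracing / TorchScript export.
        return new_covar_cache.detach()
    return new_covar_cache
```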
@Balandat @jacobrgardner thanks for your input on this. I have added a PR with this fix in #2605. Please let me know if you have any comments on that.
🐛 Bug
Fantasization / conditioning a model on new data points renders the model unexportable to TorchScript / not traceable with JIT. Models cannot be JIT traced or exported to TorchScript once the `get_fantasy_model` method is called.

To reproduce
**Code snippet to reproduce**
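The original snippet did not survive this export. The following is a minimal sketch of the failing pattern, assuming a standard single-task `ExactGP` regression model and GPyTorch's documented trace-mode export approach; the model definition, data, and `MeanVarModelWrapper` here are illustrative, not the exact code from the report.

```python
import math
import torch
import gpytorch


class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )


class MeanVarModelWrapper(torch.nn.Module):
    """Return plain tensors (mean, variance) so torch.jit.trace can handle the output."""

    def __init__(self, gp):
        super().__init__()
        self.gp = gp

    def forward(self, x):
        dist = self.gp(x)
        return dist.mean, dist.variance


train_x = torch.linspace(0, 1, 20)
train_y = torch.sin(2 * math.pi * train_x)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)

model.eval()
likelihood.eval()
with torch.no_grad():
    model(torch.rand(5))  # evaluate once so the prediction caches exist

# Condition the model on new observations (fantasization)
new_x = torch.rand(3)
new_y = torch.sin(2 * math.pi * new_x)
fantasy_model = model.get_fantasy_model(new_x, new_y)

test_x = torch.linspace(0, 1, 10)
with torch.no_grad(), gpytorch.settings.trace_mode():
    fantasy_model.eval()
    fantasy_model(test_x)  # precompute caches under trace mode
    traced = torch.jit.trace(MeanVarModelWrapper(fantasy_model), test_x)  # fails before the fix
```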
**Stack trace/error message**
Expected Behavior
It should be possible to export the model to TorchScript / JIT trace the model after fantasization.
System information
Please complete the following information:
Additional context
The error occurs because the `new_covar_cache` created in this line, which is the updated precomputed cache of the training-data covariance matrix with the new observations, is still part of the computational graph (and therefore tracks gradients). Detaching this value from the computational graph in the same line solves the issue, because the matrix then becomes a gradient-free tensor and the model can be JIT traced. I can make a PR with this fix if that's helpful.
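The underlying tracing limitation can be reproduced without GPyTorch: a module that captures a gradient-tracking tensor as a constant cannot be traced, while a detached copy traces fine. A minimal, self-contained illustration of that mechanism (not the issue's actual code):

```python
import torch


class UsesCache(torch.nn.Module):
    """Toy module that captures a precomputed tensor, analogous to the covariance cache."""

    def __init__(self, cache):
        super().__init__()
        self.cache = cache  # plain attribute, baked into the trace as a constant

    def forward(self, x):
        return self.cache @ x


base = torch.randn(5, 5, requires_grad=True)
cache_with_grad = base @ base.t()          # still part of the autograd graph
cache_detached = cache_with_grad.detach()  # gradient-free copy

x = torch.randn(5, 3)
traced = torch.jit.trace(UsesCache(cache_detached), x)  # traces (cache baked in as a constant)
# torch.jit.trace(UsesCache(cache_with_grad), x)  # fails: a tensor that requires grad
#                                                 # cannot be inserted as a traced constant
```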