[GENERAL SUPPORT]: Improving run time of BO iterations #3210
Comments
Hi @RoeyYadgar. Thanks for reporting this. In this case […]
Hi @saitcakmak, thanks a lot! Mocking out this method is really helpful. I also wanted to ask about the data fetching: it seems quite a bit of time is spent in `Experiment.fetch_data`. What I ended up doing for now was to implement a custom […]
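(A rough sketch of what "mocking out" this method can look like, using `unittest.mock`. The patch target is inferred from the links in the question below and may differ across Ax versions; `generation_strategy` and `experiment` are stand-ins for the objects in your own loop:)

```python
from unittest import mock

# Skip the cross-validation-based fit-quality diagnostic during candidate
# generation by stubbing it out where ModelSpec.gen looks it up.
with mock.patch(
    "ax.modelbridge.model_spec.get_fit_and_std_quality_and_generalization_dict",
    return_value={},
):
    generator_run = generation_strategy.gen(experiment=experiment, n=1)
```

Patching the name in `ax.modelbridge.model_spec` rather than in `ax.modelbridge.cross_validation` matters if the function is imported by name at the call site, which is how `mock.patch` resolves targets.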
I am not that familiar with the internals of `fetch_data`. cc @mpolson64 - data fetching (or rather, lookup) seems to be adding a significant overhead.
Wow, the difference between `fetch_data` and `lookup_data` is surprising. If the data is pre-attached to the experiment (as in, there is no Metric class that does some querying to fetch the metrics), these two are functionally identical. However, fetch loops through all trials & metrics to check whether there is any new data to be retrieved, which takes a lot longer than just looking up the data that's readily attached. @RoeyYadgar, I am guessing you're using an ask-tell setup here. If so, you can just replace `Experiment.fetch_data` with `Experiment.lookup_data`.
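(In an ask-tell setup, that swap might look like the sketch below; `experiment` is assumed to be an `Experiment` whose results were attached directly, e.g. via `attach_data`:)

```python
from ax.modelbridge.registry import Models

# lookup_data only reads the Data already stored on the experiment, while
# fetch_data additionally polls every metric on every trial for new results -
# the overhead discussed above.
data = experiment.lookup_data()  # instead of: experiment.fetch_data()
model_bridge = Models.BOTORCH_MODULAR(experiment=experiment, data=data)
generator_run = model_bridge.gen(n=1)
```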
Thanks for pointing me to that! I wasn't aware of the `lookup_data` method.
I found a few places where things could be improved.
Question
Hi, I'm trying to use BO on a problem where the sampling time (the time it takes to evaluate the function being optimized at a specific arm) is relatively short. In this case the BO iteration time can become the computational bottleneck, and I'd like to use Ax's framework without some of the additional computations it performs that aren't strictly necessary for the BO itself.
More specifically, I've seen that a considerable part of the run time per iteration is spent in `get_fit_and_std_quality_and_generalization_dict` (https://github.com/facebook/Ax/blob/main/ax/modelbridge/cross_validation.py#L409), which is called by `ModelSpec.gen` (https://github.com/facebook/Ax/blob/main/ax/modelbridge/model_spec.py#L239). I'd like to skip the cross-validation computation on each iteration, as I'm not using it anyway. I tried looking for a flag in `ModelBridge` (or anywhere else, for that matter) that would let me skip it, but I wasn't able to find one. Is there a way to do that, or should I approach this in a different manner (like inheriting from `ModelSpec` and overriding this method)?

Additionally, I've seen that `TorchModelBridge._gen` also computes `best_point` on each iteration (https://github.com/facebook/Ax/blob/main/ax/modelbridge/torch.py#L729), which I would also like to skip, but I'm not sure there is a simple flag that allows me to do so. (This computation is very fast when using the best in-sample point, so it matters less to me; however, I would sometimes like to use a custom `TorchModelBridge` that computes `best_point` by optimizing the posterior mean, and I'd still want a flag to skip that computation in `_gen`.)

I've also seen that `Experiment.fetch_data` (https://github.com/facebook/Ax/blob/main/ax/core/experiment.py#L575) takes quite a bit of time on each iteration, but I wasn't able to understand what it really does and what makes it computationally expensive.

Below is a profile of a single BO iteration using `Models.BOTORCH_MODULAR` with 250 samples.
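(A trace like this can be produced with Python's built-in profiler; a minimal sketch, where `run_one_bo_iteration` is a hypothetical stand-in for one fit-and-generate step:)

```python
import cProfile
import pstats

# Profile a single BO iteration and print the 20 most expensive calls,
# sorted by cumulative time.
with cProfile.Profile() as profiler:
    run_one_bo_iteration()

pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)
```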
Thanks!
Please provide any relevant code snippet if applicable.
No response