
Chained Objectives #2418

Open
kstone40 opened this issue Jul 9, 2024 · 3 comments
Labels
enhancement New feature or request

Comments

@kstone40

kstone40 commented Jul 9, 2024

Similar to constraints, it would be nice to send a list of objectives to the acquisition functions, to be executed in sequence.

In my use cases, we have a number of (usually multi-output) custom objectives that are built from composite parts (scalarization, weighting, bounding, normalization to a reference or utopian point). But since only one objective can be passed to BoTorch, we have to construct the callables for all of the combinations manually. It would be more convenient to define each part once and chain the parts together as the individual cases demand.

It's still possible in practice today, since I can do GenericMCObjective(callable) in a roundabout way with callable(x) = part1(part2(part3(x))), but it would be great if this could also be supported natively in BoTorch!
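The composition pattern described above can be sketched in plain Python. This is a minimal illustration of chaining objective parts into the single callable that something like GenericMCObjective would receive; the part names (scalarize, normalize) are illustrative, not BoTorch API, and plain floats stand in for tensors.

```python
# Compose objective "parts" left-to-right: chain(f, g)(x) == g(f(x)).
def chain(*parts):
    def chained(x):
        for part in parts:
            x = part(x)
        return x
    return chained

# Example composite parts (hypothetical names, for illustration only).
def scalarize(y):
    # e.g. a simple unweighted sum of the outputs
    return sum(y)

def normalize(v, ref=10.0):
    # normalization to a reference point
    return v / ref

objective = chain(scalarize, normalize)
print(objective([2.0, 3.0]))  # 0.5
```

Each part is defined once and reused; different cases only differ in which parts get passed to chain.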

@esantorella esantorella added the enhancement New feature or request label Jul 10, 2024
@esantorella
Member

I see the logic in this. But for the sake of figuring out the prioritization of this ask, I'm wondering if your particular use case could be achieved with existing BoTorch functionality.

  • Weighting and scalarization can be handled by ScalarizedPosteriorTransform. It's more efficient to use a PosteriorTransform than an MCAcquisitionObjective because the scalarization can operate on the mean and covariance of the posterior without sampling from it, whereas an MCAcquisitionObjective always operates on samples.
  • Constraints (see documentation) can be passed to many acquisition functions.
  • Normalization to a particular range or reference point:
    • for getting a better model fit can be done with an outcome transform (the posterior and objective values will remain in the original space)
    • for hypervolume computations with multi-objective optimization is best achieved by passing a ref_point to the acquisition function
    • for the purpose of general interpretability -- e.g. you pulled the data in meters, but centimeters are a more natural unit to think in -- would best be done by transforming the training data, either with a BoTorch transform such as standardize or another library.
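To make the efficiency point above concrete: for a linear scalarization with weights w, the scalarized posterior mean is w·mu and its variance is wᵀ Σ w, both computable directly from the posterior moments with no sampling. This is a pure-Python sketch of that arithmetic (BoTorch's ScalarizedPosteriorTransform does the equivalent on tensors); the numbers are made up for illustration.

```python
# Scalarize a multi-output Gaussian posterior analytically:
# mean -> w . mu, variance -> w^T Sigma w. No samples needed.
def scalarized_moments(mu, sigma, w):
    m = len(w)
    mean = sum(w[i] * mu[i] for i in range(m))
    var = sum(w[i] * sigma[i][j] * w[j]
              for i in range(m) for j in range(m))
    return mean, var

mu = [1.0, 2.0]                      # posterior means of 2 outputs
sigma = [[0.5, 0.1], [0.1, 0.3]]     # posterior covariance
w = [0.7, 0.3]                       # scalarization weights

mean, var = scalarized_moments(mu, sigma, w)
print(mean, var)  # 1.3 0.314
```

An MCAcquisitionObjective, by contrast, would have to draw posterior samples and apply the weighting sample-by-sample to estimate the same quantities.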

All of this could be documented better, especially guidance on using posterior transforms and outcome transforms.

I may be misunderstanding your use case; if this doesn't address your needs, can you provide an example of what you're trying to do and what your preferred API would look like?

@kstone40
Author

kstone40 commented Aug 4, 2024

I was able to do this by creating a class that subclasses the MC multi-output acquisition objective but manually manages .is_mo for flexibility.

Anyway, I just had a really good example of where this was useful and wanted to follow up.

I have two responses in my data: yield (to all products), and selectivity (to desired product). I build a model to each of these. However, what I really want is their product, so that's one objective.

Second, I also want to calculate the cost of each experiment as a deterministic function of the inputs X.

Third, I actually want to divide the (yield*selectivity) by the cost objective.

So, that's three sequential operations based on simple objectives: product of y, weighted sum of X, divide.

Of course, I can do this easily with custom objectives in BoTorch, but I am working on a library with reusable parts, so I'm looking for generality.

I created an "objective_list" class that calls them sequentially and manages the output shape as appropriate (i.e., only squeezing an output dimension of 1 if the objective is the last in the list).
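The yield/selectivity/cost example above can be sketched as such a sequential container. Everything here is hypothetical pseudocode for the idea, not the author's actual class or BoTorch API: plain lists stand in for tensors, and the 0.5 cost weights are made up.

```python
# Hypothetical sketch of an "objective_list": apply objectives in
# sequence, each seeing the previous output plus the raw inputs X.
class ObjectiveList:
    def __init__(self, objectives):
        self.objectives = objectives

    def __call__(self, value, X=None):
        for obj in self.objectives:
            value = obj(value, X)
        return value

# Step 1: product of the two modeled outcomes (yield * selectivity).
def product_of_y(y, X):
    return y[0] * y[1]

# Steps 2-3: deterministic cost as a weighted sum of the inputs X,
# then divide the utility by it (inverse cost weighting).
def divide_by_cost(v, X):
    cost = sum(0.5 * x for x in X)  # illustrative weights
    return v / cost

pipeline = ObjectiveList([product_of_y, divide_by_cost])
print(pipeline([0.8, 0.5], X=[1.0, 3.0]))  # 0.2
```

The three operations (product of y, weighted sum of X, divide) stay reusable parts, and only the list order defines the composite objective.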

I plan to put this up on GitHub soon, so if you are interested, I can circle back.

@Balandat
Contributor

Balandat commented Aug 4, 2024

Computing the product of the individually modeled outcomes makes sense, this is a case of BO for composite functions (https://proceedings.mlr.press/v97/astudillo19a.html). I guess one could come up with some set of basic operations (e.g. a "ProductObjective") and make those easier to use (e.g. by implementing methods like __mul__ on objectives to auto-generate those). However, that seems like it could become arbitrarily complex, so one would have to find a good balance and think about what kinds of operations one wants to support given the maintenance burden.
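The operator-overloading idea mentioned above could look roughly like this. This is a speculative sketch: "Objective" and its methods are assumed names, not BoTorch classes, and the dict-based outputs are purely illustrative.

```python
# Sketch: implementing __mul__ / __truediv__ on objectives so that
# composite objectives (e.g. a "ProductObjective") are auto-generated.
class Objective:
    def __init__(self, fn):
        self.fn = fn

    def __call__(self, y):
        return self.fn(y)

    def __mul__(self, other):
        return Objective(lambda y: self(y) * other(y))

    def __truediv__(self, other):
        return Objective(lambda y: self(y) / other(y))

yield_obj = Objective(lambda y: y["yield"])
selectivity = Objective(lambda y: y["selectivity"])
cost = Objective(lambda y: y["cost"])

# (yield * selectivity) / cost, built from operators alone.
utility = (yield_obj * selectivity) / cost
print(utility({"yield": 0.8, "selectivity": 0.5, "cost": 2.0}))  # 0.2
```

The maintenance question raised above is where this stops: each supported operator (sum, product, division, min/max, ...) is another combination whose shape handling and gradients must be kept correct.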

The inverse-cost-weighting approach that you discuss is actually rather common and used also in multi-fidelity acquisition functions. In fact, there is an InverseCostWeightedUtility, which is used e.g. by the qKnowledgeGradient acquisition function.

We plan to work more on multi-fidelity methods in the near future, so there may be more of this / improvements to this coming down the pipe. cc @SebastianAment
