Sample-then-optimize batch neural Thompson sampling

Z Dai, Y Shu, BKH Low, P Jaillet - Advances in Neural …, 2022 - proceedings.neurips.cc
Bayesian optimization (BO), which uses a Gaussian process (GP) as a surrogate to model its
objective function, is popular for black-box optimization. However, due to the limitations of …
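The snippet above describes BO with a GP surrogate and the title names Thompson sampling; below is a minimal sketch of one classical Thompson-sampling step (draw a posterior sample, query its maximizer). The kernel, length-scale, grid, and observations are all illustrative assumptions, and the paper's sample-then-optimize variant specifically avoids drawing an explicit posterior sample like this.

```python
import numpy as np

# One classical Thompson-sampling step in GP-based Bayesian optimization
# (illustrative only; not the paper's sample-then-optimize algorithm).
rng = np.random.default_rng(0)

def rbf_kernel(a, b, ls=0.3):
    # Squared-exponential kernel between two sets of 1-D points.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def thompson_step(x_obs, y_obs, x_grid, noise=1e-6):
    # GP posterior over x_grid given noisy observations (x_obs, y_obs).
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(x_grid, x_obs)
    Kss = rbf_kernel(x_grid, x_grid)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y_obs
    cov = Kss - Ks @ Kinv @ Ks.T
    # Draw one posterior sample and query its maximizer.
    f = rng.multivariate_normal(mu, cov + 1e-6 * np.eye(len(x_grid)))
    return x_grid[np.argmax(f)], f

x_obs = np.array([0.1, 0.5, 0.9])
y_obs = np.sin(3 * x_obs)          # toy black-box observations
x_grid = np.linspace(0.0, 1.0, 50)
x_next, f_sample = thompson_step(x_obs, y_obs, x_grid)
```

In a batch setting, repeating the sample-and-maximize step with independent posterior draws yields a diverse batch of queries.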

Unifying and boosting gradient-based training-free neural architecture search

Y Shu, Z Dai, Z Wu, BKH Low - Advances in Neural …, 2022 - proceedings.neurips.cc
Neural architecture search (NAS) has gained immense popularity owing to its ability to
automate neural architecture design. A number of training-free metrics are recently …
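The snippet above mentions training-free NAS metrics, which score an architecture without any training. A common family of such proxies uses gradient statistics at random initialization; the toy scorer below (network shape, batch, and the gradient-norm criterion are all illustrative assumptions, not necessarily among the metrics unified in the paper) sketches the idea.

```python
import numpy as np

# Toy training-free NAS proxy: score a randomly initialized one-hidden-layer
# ReLU net by the gradient norm of a squared loss at init, with no training.
# (Illustrative assumption; not necessarily a metric from the paper.)
rng = np.random.default_rng(0)

def grad_norm_score(width):
    X = rng.standard_normal((8, 4))               # random probe batch
    y = rng.standard_normal(8)
    W1 = rng.standard_normal((4, width)) / np.sqrt(4)
    w2 = rng.standard_normal(width) / np.sqrt(width)
    h = np.maximum(X @ W1, 0.0)                   # hidden activations
    err = h @ w2 - y                              # d(loss)/d(pred) up to 1/n
    g_w2 = h.T @ err / len(y)                     # gradient w.r.t. w2
    g_W1 = X.T @ ((err[:, None] * w2) * (h > 0)) / len(y)  # w.r.t. W1
    return np.sqrt((g_w2 ** 2).sum() + (g_W1 ** 2).sum())

# Rank candidate widths by the proxy instead of training each one.
scores = {w: grad_norm_score(w) for w in (8, 16, 32)}
```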

A simple yet effective strategy to robustify the meta learning paradigm

Q Wang, Y Lv, Z Xie, J Huang - Advances in Neural …, 2024 - proceedings.neurips.cc
Meta learning is a promising paradigm to enable skill transfer across tasks. Most previous
methods employ the empirical risk minimization principle in optimization. However, the …
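The snippet above contrasts the ERM principle with a robustified alternative. One standard way to robustify task aggregation is to optimize a tail-risk measure such as CVaR over task losses instead of their mean; the sketch below illustrates that contrast (the CVaR choice and the toy loss values are assumptions for illustration, and the paper's actual strategy may differ).

```python
import numpy as np

# ERM vs. a distributionally robust aggregation of meta-learning task
# losses (illustrative; the paper's robustification may differ).
task_losses = np.array([0.2, 0.3, 0.25, 1.5])  # one meta-batch of task losses

erm_objective = task_losses.mean()  # ERM: average loss over sampled tasks

def cvar(losses, alpha=0.5):
    # CVaR_alpha: mean of the worst alpha-fraction of task losses,
    # focusing the meta-update on the hardest tasks.
    k = max(1, int(np.ceil(alpha * len(losses))))
    return np.sort(losses)[-k:].mean()

robust_objective = cvar(task_losses, alpha=0.5)
```

Under a task-distribution shift toward the hard tasks, the robust objective degrades more gracefully than the ERM average.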

Bayesian optimization with cost-varying variable subsets

S Tay, CS Foo, D Urano, R Leong… - Advances in Neural …, 2023 - proceedings.neurips.cc
We introduce the problem of Bayesian optimization with cost-varying variable subsets
(BOCVS) where in each iteration, the learner chooses a subset of query variables and …


Batch Bayesian optimization for replicable experimental design

Z Dai, QP Nguyen, S Tay, D Urano… - Advances in …, 2024 - proceedings.neurips.cc
Many real-world experimental design problems (a) evaluate multiple experimental
conditions in parallel and (b) replicate each condition multiple times due to large and …

Contextual Gaussian process bandits with neural networks

H Zhang, J He, R Righter, ZJ Shen… - Advances in Neural …, 2024 - proceedings.neurips.cc
Contextual decision-making problems have witnessed extensive applications in various
fields such as online content recommendation, personalized healthcare, and autonomous …

Training-free neural active learning with initialization-robustness guarantees

A Hemachandra, Z Dai, J Singh… - International …, 2023 - proceedings.mlr.press
Existing neural active learning algorithms have aimed to optimize the predictive
performance of neural networks (NNs) by selecting data for labelling. However, other than a …

Bayesian optimization under stochastic delayed feedback

A Verma, Z Dai, BKH Low - International Conference on …, 2022 - proceedings.mlr.press
Bayesian optimization (BO) is a widely used sequential method for zeroth-order optimization
of complex and expensive-to-compute black-box functions. The existing BO methods …
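The delayed-feedback setting in this entry means each query's outcome becomes observable only after a random delay, so the learner must choose new queries while earlier outcomes are still outstanding. The loop below simulates that setup only (the random-query policy, delay range, and toy objective are assumptions; the paper's algorithm would instead fit a surrogate to the observed outcomes).

```python
import random

# Minimal simulation of stochastic delayed feedback in sequential
# black-box optimization (illustrative setup, not the paper's method).
random.seed(1)

pending = []   # (arrival_round, query, reward) not yet revealed
observed = []  # (query, reward) pairs available to the learner

def objective(x):
    return -(x - 0.3) ** 2  # toy black-box function

for t in range(10):
    # Deliver any feedback whose stochastic delay has elapsed.
    arrived = [p for p in pending if p[0] <= t]
    pending = [p for p in pending if p[0] > t]
    observed.extend((x, r) for _, x, r in arrived)

    # Naive placeholder policy: query uniformly at random. A BO method
    # would use `observed` to fit a surrogate and pick the next query.
    x = random.random()
    delay = random.randint(1, 3)  # feedback arrives 1-3 rounds later
    pending.append((t + delay, x, objective(x)))
```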

Distributionally robust variational quantum algorithms with shifted noise

Z He, B Peng, Y Alexeev… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Given their potential to demonstrate near-term quantum advantage, variational quantum
algorithms (VQAs) have been extensively studied. Although numerous techniques have …

Robust fast adaptation from adversarially explicit task distribution generation

C Wang, Y Lv, Y Mao, Y Qu, Y Xu, X Ji - arXiv preprint arXiv:2407.19523, 2024 - arxiv.org
Meta-learning is a practical learning paradigm to transfer skills across tasks from a few
examples. Nevertheless, the existence of task distribution shifts tends to weaken meta …