Despite a surge of recent advances in promoting machine learning (ML) fairness, the existing mainstream approaches mostly require training or fine-tuning the entire weights of …
In this paper, we study the problem of temporal video grounding (TVG), which aims to predict the starting/ending time points of moments described by a text sentence within a long …
Adversarial training is widely used to make classifiers robust to a specific threat or adversary, such as ℓ_p-norm-bounded perturbations for a given p. However, existing …
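As a concrete illustration of the threat model this snippet refers to (a generic sketch, not the specific method of the cited work): adversarial training replaces clean inputs with worst-case perturbed ones inside an ℓ_p ball. A minimal numpy version for a logistic-regression model, using the classical one-step FGSM attack for the ℓ_∞ case, might look like:

```python
import numpy as np

def fgsm_linf(x, y, w, b, eps):
    """One-step FGSM attack on a logistic-regression model.

    Moves x by eps in the sign of the input gradient of the log-loss,
    i.e. the worst-case direction within the l_inf ball of radius eps.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # model's predicted probability
    grad_x = (p - y) * w                      # d(log-loss)/dx for logistic loss
    return x + eps * np.sign(grad_x)

# Adversarial training step: fit on perturbed inputs instead of clean ones.
rng = np.random.default_rng(0)
w, b = rng.standard_normal(4), 0.0            # toy model parameters
x, y = rng.standard_normal(4), 1.0            # a single training example
for _ in range(100):
    x_adv = fgsm_linf(x, y, w, b, eps=0.1)    # inner maximization (attack)
    p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
    w -= 0.5 * (p - y) * x_adv                # outer minimization (SGD step)
    b -= 0.5 * (p - y)
```

Multi-step PGD replaces the single FGSM step with several projected gradient steps; the outer training loop is unchanged.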
In this work, we leverage visual prompting (VP) to improve adversarial robustness of a fixed, pre-trained model at test time. Compared to conventional adversarial defenses, VP allows …
Z Hu, L Shen, Z Wang, B Wu… - … on Machine Learning, 2023 - proceedings.mlr.press
Data-free meta-learning (DFML) aims to enable efficient learning of new tasks by meta-learning from a collection of pre-trained models without access to the training data. Existing …
PY Chen, S Liu - Proceedings of the AAAI Conference on Artificial …, 2023 - ojs.aaai.org
Adversarial robustness studies the worst-case performance of a machine learning model to ensure safety and reliability. With the proliferation of deep-learning-based technology, the …
Zeroth-order (ZO) optimization has become a popular technique for solving machine learning (ML) problems when first-order (FO) information is difficult or impossible to obtain …
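The core idea behind ZO optimization can be sketched with the classical two-point gradient estimator (a generic textbook construction, not the specific algorithm of the work above): the gradient is approximated from function evaluations alone by averaging finite differences along random directions.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, n_queries=30, rng=None):
    """Two-point zeroth-order (ZO) gradient estimate of f at x.

    Averages finite differences along random Gaussian directions,
    so it needs only function evaluations -- no backpropagation.
    """
    rng = np.random.default_rng(rng)
    g = np.zeros_like(x)
    for _ in range(n_queries):
        u = rng.standard_normal(x.size)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / n_queries

# Usage: minimize f(x) = ||x||^2 with ZO gradient descent.
rng = np.random.default_rng(0)
f = lambda x: float(np.sum(x ** 2))
x = np.ones(5)
for _ in range(200):
    x = x - 0.1 * zo_gradient(f, x, rng=rng)
```

Each estimate costs 2 × `n_queries` function evaluations; the trade-off between query count and estimator variance is what most ZO methods work to improve.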
In the evolving landscape of natural language processing (NLP), fine-tuning pre-trained Large Language Models (LLMs) with first-order (FO) optimizers like SGD and Adam has …
W Lu, H Yu, J Wang, D Teney, H Wang, Y Chen… - arXiv preprint arXiv …, 2023 - arxiv.org
When personalized federated learning (FL) meets large foundation models, new challenges arise from various limitations in resources. In addition to typical limitations such as data …