Extrapolation and spectral bias of neural nets with Hadamard product: a polynomial net study Y Wu, Z Zhu, F Liu, G Chrysos, V Cevher Advances in Neural Information Processing Systems 35, 26980-26993, 2022 | 9 | 2022 |
Revisiting Character-level Adversarial Attacks for Language Models E Abad Rocamora, Y Wu, F Liu, G Chrysos, V Cevher 41st International Conference on Machine Learning (ICML 2024), 2024 | 7* | 2024 |
On the convergence of encoder-only shallow transformers Y Wu, F Liu, G Chrysos, V Cevher Advances in Neural Information Processing Systems 36, 2023 | 7 | 2023 |
Adversarial audio synthesis with complex-valued polynomial networks Y Wu, GG Chrysos, V Cevher arXiv preprint, 2022 | 5 | 2022 |
Robust NAS under adversarial training: benchmark, theory, and beyond Y Wu, F Liu, CJ Simon-Gabriel, G Chrysos, V Cevher The Twelfth International Conference on Learning Representations, 2024 | 2 | 2024 |
Universal Gradient Methods for Stochastic Convex Optimization A Rodomanov, A Kavis, Y Wu, K Antonakopoulos, V Cevher Forty-first International Conference on Machine Learning, 2024 | 2 | 2024 |
Membership Inference Attacks against Large Vision-Language Models Z Li*, Y Wu*, Y Chen*, F Tonin, EA Rocamora, V Cevher Advances in Neural Information Processing Systems 37, 2024 | 1 | 2024 |
Imbalance-Regularized LoRA: A Plug-and-Play Method for Improving Fine-Tuning of Foundation Models Z Zhu, Y Wu, Q Gu, V Cevher Adaptive Foundation Models: Evolving AI for Personalized and Efficient Learning, 2024 | | 2024 |
Quantum-PEFT: Ultra parameter-efficient fine-tuning T Koike-Akino*, F Tonin*, Y Wu, LN Candogan, V Cevher Workshop on Efficient Systems for Foundation Models II @ ICML 2024, 2024 | | 2024 |
Single-pass detection of jailbreaking input in large language models LN Candogan, Y Wu, EA Rocamora, G Chrysos, V Cevher ICLR 2024 Workshop on Secure and Trustworthy Large Language Models, 2024 | | 2024 |
Character-level robustness should be revisited EA Rocamora, Y Wu, F Liu, G Chrysos, V Cevher ICLR 2024 Workshop on Secure and Trustworthy Large Language Models, 2024 | | 2024 |