PatchBackdoor: Backdoor attack against deep neural networks without model modification

Y Yuan, R Kong, S Xie, Y Li, Y Liu - Proceedings of the 31st ACM …, 2023 - dl.acm.org
Backdoor attacks are a major threat to deep learning systems in safety-critical scenarios; they
aim to trigger misbehavior of neural network models under attacker-controlled conditions …

CrossCert: A Cross-Checking Detection Approach to Patch Robustness Certification for Deep Learning Models

Q Zhou, Z Wei, H Wang, B Jiang, WK Chan - Proceedings of the ACM on …, 2024 - dl.acm.org
Patch robustness certification is an emerging kind of defense technique against adversarial
patch attacks with provable guarantees. There are two research lines: certified recovery and …

Context-Aware Fuzzing for Robustness Enhancement of Deep Learning Models

H Wang, Z Wei, Q Zhou, WK Chan - ACM Transactions on Software …, 2024 - dl.acm.org
In the testing-retraining pipeline for enhancing the robustness property of deep learning (DL)
models, many state-of-the-art robustness-oriented fuzzing techniques are metric-oriented …

Vortex under Ripplet: An Empirical Study of RAG-enabled Applications

Y Shao, Y Huang, J Shen, L Ma, T Su… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) enhanced by retrieval-augmented generation (RAG)
provide effective solutions in various application scenarios. However, developers face …

A Majority Invariant Approach to Patch Robustness Certification for Deep Learning Models

Q Zhou, Z Wei, H Wang… - 2023 38th IEEE/ACM …, 2023 - ieeexplore.ieee.org
Patch robustness certification ensures no patch within a given bound on a sample can
manipulate a deep learning model to predict a different label. However, existing techniques …

Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward

X Xie, J Song, Z Zhou, Y Huang, D Song… - arXiv preprint arXiv …, 2024 - arxiv.org
While Large Language Models (LLMs) have seen widespread applications across
numerous fields, their limited interpretability poses concerns regarding their safe operations …

Suspicious activities detection using spatial–temporal features based on vision transformer and recurrent neural network

S Hameed, J Amin, MA Anjum, M Sharif - Journal of Ambient Intelligence …, 2024 - Springer
Nowadays there is a growing demand for surveillance applications to ensure safety and
security against anomalous events. An anomaly in a video refers to an event that has …

ViTGuard: Attention-aware Detection against Adversarial Examples for Vision Transformer

S Sun, K Nwodo, S Sugrim, A Stavrou… - arXiv preprint arXiv …, 2024 - arxiv.org
The use of transformers for vision tasks has challenged the traditional dominant role of
convolutional neural networks (CNN) in computer vision (CV). For image classification tasks …

Automated Robustness Testing for LLM-based NLP Software

M Xiao, Y Xiao, S Ji, H Cai, L Xue, P Zhang - arXiv preprint arXiv …, 2024 - arxiv.org
Benefiting from the advancements in LLMs, NLP software has undergone rapid
development. Such software is widely employed in various safety-critical tasks, such as …

Scalable Methods for Robust Machine Learning

AJ Levine - 2023 - search.proquest.com
In recent years, machine learning systems have been developed that demonstrate
remarkable performance on many tasks. However, naive metrics of performance, such as …