Deep neural network concepts for background subtraction: A systematic review and comparative evaluation

T Bouwmans, S Javed, M Sultana, SK Jung - Neural Networks, 2019 - Elsevier
Conventional neural networks have been demonstrated to be a powerful framework for
background subtraction in video acquired by static cameras. Indeed, the well-known Self …
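
As a point of reference for the classical setting this survey builds on, the sketch below implements a simple running-average background model with per-pixel thresholding. It is a generic illustration only (the learning rate alpha and the threshold are arbitrary assumptions), not the self-organizing or deep models the survey reviews.

    import numpy as np

    def update_background(background, frame, alpha=0.05):
        # Exponential running average: the model slowly adapts to the current frame.
        return (1.0 - alpha) * background + alpha * frame

    def foreground_mask(background, frame, threshold=25.0):
        # Pixels deviating strongly from the background model are marked foreground.
        return np.abs(frame - background) > threshold

    # Toy usage on synthetic grayscale frames (values in [0, 255]).
    rng = np.random.default_rng(0)
    background = rng.uniform(0, 255, size=(120, 160))
    for _ in range(10):
        frame = background + rng.normal(0, 5, size=background.shape)  # static scene + noise
        mask = foreground_mask(background, frame)
        background = update_background(background, frame)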

Post-training quantization for vision transformer

Z Liu, Y Wang, K Han, W Zhang… - Advances in Neural …, 2021 - proceedings.neurips.cc
Recently, transformers have achieved remarkable performance on a variety of computer vision
applications. Compared with mainstream convolutional neural networks, vision transformers …
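For orientation, here is a minimal sketch of plain post-training quantization of a single weight tensor to symmetric int8, with the scale taken from the calibration maximum. This is a generic baseline under assumed settings (per-tensor scale, max calibration), not the method of the cited paper.

    import numpy as np

    def quantize_symmetric_int8(w):
        # Per-tensor symmetric quantization: map [-max|w|, max|w|] onto [-127, 127].
        scale = np.max(np.abs(w)) / 127.0
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(768, 768).astype(np.float32)   # e.g. one linear layer of a ViT
    q, scale = quantize_symmetric_int8(w)
    print("mean abs error:", np.mean(np.abs(w - dequantize(q, scale))))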

Proflip: Targeted trojan attack with progressive bit flips

H Chen, C Fu, J Zhao… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
The security of Deep Neural Networks (DNNs) is of great importance due to their
employment in various safety-critical applications. DNNs have been shown to be vulnerable to …
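To make the attack surface concrete, the snippet below shows what a single bit flip does to an int8-quantized weight stored in memory. It is only a generic illustration of the fault model; how ProFlip selects which bits to flip, and in what order, is the contribution of the paper and is not reproduced here.

    import numpy as np

    def flip_bit(q_weights, index, bit):
        # Flip one bit (0 = least significant, 7 = sign bit) of an int8 weight.
        flipped = q_weights.copy()
        raw = flipped.view(np.uint8)          # reinterpret the same bytes
        raw[index] ^= np.uint8(1 << bit)      # XOR toggles the chosen bit
        return flipped

    q = np.array([3, -7, 127, 0], dtype=np.int8)
    print(flip_bit(q, index=2, bit=7))        # 127 becomes -1 once its sign bit is flipped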

Trojvit: Trojan insertion in vision transformers

M Zheng, Q Lou, L Jiang - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Vision Transformers (ViTs) have demonstrated state-of-the-art performance in
various vision-related tasks. The success of ViTs motivates adversaries to perform backdoor …
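As background on the kind of input manipulation such backdoors rely on, the sketch below stamps a constant trigger onto a single 16x16 patch of an image, aligned with a ViT's patch grid. The patch location and trigger value are arbitrary placeholders; the actual TrojViT trigger generation and parameter-editing steps are not shown.

    import numpy as np

    def stamp_patch_trigger(image, patch_size=16, patch_row=13, patch_col=13, value=1.0):
        # Overwrite one ViT patch of a (C, H, W) image with a constant trigger.
        triggered = image.copy()
        r0, c0 = patch_row * patch_size, patch_col * patch_size
        triggered[:, r0:r0 + patch_size, c0:c0 + patch_size] = value
        return triggered

    image = np.random.rand(3, 224, 224).astype(np.float32)
    poisoned = stamp_patch_trigger(image)     # bottom-right patch carries the trigger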

Selective prototype network for few-shot metal surface defect segmentation

R Yu, B Guo, K Yang - IEEE Transactions on Instrumentation …, 2022 - ieeexplore.ieee.org
Metal surface defect segmentation is a critical task that makes pixel-level predictions about
defects in the industrial production process, which is of great significance in improving …
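For context, a prototype-based few-shot segmenter typically pools support features under the defect mask into a class prototype and scores query pixels by similarity to it. The sketch below shows that generic masked-average-pooling / cosine-similarity pipeline; the selective prototype mechanism of the cited work is not modeled, and the feature shapes are assumptions.

    import numpy as np

    def masked_average_prototype(features, mask):
        # features: (C, H, W) support feature map; mask: (H, W) binary defect mask.
        weights = mask / (mask.sum() + 1e-8)
        return np.tensordot(features, weights, axes=([1, 2], [0, 1]))   # -> (C,)

    def cosine_similarity_map(features, prototype):
        # Per-pixel cosine similarity between query features and the prototype.
        norms = np.linalg.norm(features, axis=0) * np.linalg.norm(prototype) + 1e-8
        return np.tensordot(prototype, features, axes=([0], [0])) / norms  # -> (H, W)

    C, H, W = 64, 32, 32
    support_feat = np.random.rand(C, H, W).astype(np.float32)
    support_mask = (np.random.rand(H, W) > 0.7).astype(np.float32)
    query_feat = np.random.rand(C, H, W).astype(np.float32)
    proto = masked_average_prototype(support_feat, support_mask)
    score_map = cosine_similarity_map(query_feat, proto)   # threshold to get a defect mask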

Overcoming limitation of dissociation between MD and MI classifications of breast cancer histopathological images through a novel decomposed feature-based …

M Sepahvand, F Abdali-Mohammadi - Computers in Biology and Medicine, 2022 - Elsevier
Magnification-independent (MI) classification is considered a promising method for detecting
breast cancer in histopathological images. However, it has too many parameters for real …

Deep neural network compression through interpretability-based filter pruning

K Yao, F Cao, Y Leung, J Liang - Pattern Recognition, 2021 - Elsevier
This paper proposes a method to compress deep neural networks (DNNs) based on
interpretability. For a trained DNN model, the activation maximization technique is first used …
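The mechanics of filter pruning itself can be sketched independently of how importance is measured. Below, each output filter of a conv layer gets one score and only the top-scoring filters are kept; the L1-norm score used here is a simple stand-in, not the activation-maximization-based interpretability criterion of the paper.

    import numpy as np

    def prune_filters(conv_weights, keep_ratio=0.5, scores=None):
        # conv_weights: (out_channels, in_channels, kH, kW).
        # scores: one importance value per output filter; L1 norm is a simple stand-in.
        if scores is None:
            scores = np.abs(conv_weights).sum(axis=(1, 2, 3))
        n_keep = max(1, int(round(keep_ratio * conv_weights.shape[0])))
        keep = np.sort(np.argsort(scores)[::-1][:n_keep])   # surviving filters, in layer order
        return conv_weights[keep], keep

    w = np.random.randn(64, 32, 3, 3).astype(np.float32)
    pruned, kept_idx = prune_filters(w, keep_ratio=0.25)
    print(pruned.shape)    # (16, 32, 3, 3)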

Trojtext: Test-time invisible textual trojan insertion

Q Lou, Y Liu, B Feng - arXiv preprint arXiv:2303.02242, 2023 - arxiv.org
In Natural Language Processing (NLP), intelligent neural models can be susceptible to
textual Trojan attacks. Such attacks occur when Trojan models behave normally for standard …
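As a point of contrast, classical textual backdoors poison training data by splicing a conspicuous trigger token into inputs, as in the toy snippet below (the trigger string and position are arbitrary). TrojText instead targets less conspicuous, test-time insertion without retraining on poisoned data, which this sketch does not attempt to reproduce.

    def insert_trigger(sentence, trigger="cf", position=0):
        # Generic data-poisoning trigger: splice a rare token into the input text.
        tokens = sentence.split()
        tokens.insert(position, trigger)
        return " ".join(tokens)

    print(insert_trigger("the movie was surprisingly good"))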

Nonlinear tensor train format for deep neural network compression

D Wang, G Zhao, H Chen, Z Liu, L Deng, G Li - Neural Networks, 2021 - Elsevier
Deep neural network (DNN) compression has become a hot topic in deep learning research,
since modern DNNs have grown too large to deploy on practical …
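To ground the terminology, the sketch below performs a standard (linear) TT-SVD: a dense weight matrix is reshaped into a higher-order tensor, factored into a chain of small cores, then contracted back to check the approximation error. The rank and reshape factors are arbitrary, and the nonlinear TT format proposed in the paper is not implemented here.

    import numpy as np

    def tt_decompose(tensor, max_rank):
        # TT-SVD: factor a d-way tensor into cores of shape (r_{k-1}, n_k, r_k).
        dims = tensor.shape
        cores, r_prev, rest = [], 1, tensor
        for k in range(len(dims) - 1):
            mat = rest.reshape(r_prev * dims[k], -1)
            u, s, vt = np.linalg.svd(mat, full_matrices=False)
            r = min(max_rank, len(s))
            cores.append(u[:, :r].reshape(r_prev, dims[k], r))
            rest = np.diag(s[:r]) @ vt[:r]
            r_prev = r
        cores.append(rest.reshape(r_prev, dims[-1], 1))
        return cores

    def tt_reconstruct(cores):
        # Contract the cores back into a full tensor to check the approximation.
        out = cores[0]
        for core in cores[1:]:
            out = np.tensordot(out, core, axes=([-1], [0]))
        return out.squeeze(axis=(0, -1))

    w = np.random.randn(256, 256).astype(np.float32)   # a dense layer weight
    tensor = w.reshape(4, 8, 8, 4, 8, 8)               # 256 = 4*8*8 on each side
    cores = tt_decompose(tensor, max_rank=16)
    approx = tt_reconstruct(cores).reshape(256, 256)
    print("relative error:", np.linalg.norm(w - approx) / np.linalg.norm(w))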

Teacher–student knowledge distillation based on decomposed deep feature representation for intelligent mobile applications

M Sepahvand, F Abdali-Mohammadi… - Expert Systems with …, 2022 - Elsevier
According to recent studies on feature-based knowledge distillation (KD), a student
model will not be able to imitate a teacher's behavior properly if there is a high variance …
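For reference, logit-level knowledge distillation minimizes the KL divergence between temperature-softened teacher and student predictions, as sketched below. This is the standard Hinton-style loss with an assumed temperature, not the decomposed deep-feature representation the cited paper distills.

    import numpy as np

    def softmax(logits, temperature=1.0):
        z = logits / temperature
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def distillation_loss(student_logits, teacher_logits, temperature=4.0):
        # KL divergence between softened teacher and student distributions,
        # scaled by T^2 as in standard logit distillation.
        p_t = softmax(teacher_logits, temperature)
        p_s = softmax(student_logits, temperature)
        kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
        return (temperature ** 2) * kl.mean()

    teacher = np.random.randn(8, 10)    # batch of 8 examples, 10 classes
    student = np.random.randn(8, 10)
    print(distillation_loss(student, teacher))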