MVImgNet: A large-scale dataset of multi-view images

X Yu, M Xu, Y Zhang, H Liu, C Ye… - Proceedings of the …, 2023 - openaccess.thecvf.com
Being data-driven is one of the most iconic properties of deep learning algorithms. The birth
of ImageNet drives a remarkable trend of "learning from large-scale data" in computer vision …

MVTN: Multi-view transformation network for 3D shape recognition

A Hamdi, S Giancola… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Multi-view projection methods have demonstrated their ability to reach state-of-the-art
performance on 3D shape recognition. Those methods learn different ways to aggregate …

AdvPC: Transferable adversarial perturbations on 3D point clouds

A Hamdi, S Rojas, A Thabet, B Ghanem - Computer Vision–ECCV 2020 …, 2020 - Springer
Deep neural networks are vulnerable to adversarial attacks, in which imperceptible
perturbations to their input lead to erroneous network predictions. This phenomenon has …

Dataset interfaces: Diagnosing model failures using controllable counterfactual generation

J Vendrow, S Jain, L Engstrom, A Madry - arXiv preprint arXiv:2302.07865, 2023 - arxiv.org
Distribution shift is a major source of failure for machine learning models. However,
evaluating model reliability under distribution shift can be challenging, especially since it …

Towards viewpoint-invariant visual recognition via adversarial training

S Ruan, Y Dong, H Su, J Peng… - Proceedings of the …, 2023 - openaccess.thecvf.com
Visual recognition models are not invariant to viewpoint changes in the 3D world, as
different viewing directions can dramatically affect the predictions given the same object …

Towards verifying robustness of neural networks against a family of semantic perturbations

J Mohapatra, TW Weng, PY Chen… - Proceedings of the …, 2020 - openaccess.thecvf.com
Verifying robustness of neural networks given a specified threat model is a fundamental yet
challenging task. While current verification methods mainly focus on the l_p-norm threat …

3DB: A framework for debugging computer vision models

G Leclerc, H Salman, A Ilyas… - Advances in …, 2022 - proceedings.neurips.cc
We introduce 3DB: an extendable, unified framework for testing and debugging vision
models using photorealistic simulation. We demonstrate, through a wide range of use cases …

Improving viewpoint robustness for visual recognition via adversarial training

S Ruan, Y Dong, H Su, J Peng, N Chen… - arXiv preprint arXiv …, 2023 - arxiv.org
Viewpoint invariance remains challenging for visual recognition in the 3D world, as altering
the viewing directions can significantly impact predictions for the same object. While …

DeepCert: Verification of contextually relevant robustness for neural network image classifiers

C Paterson, H Wu, J Grese, R Calinescu… - … Safety, Reliability, and …, 2021 - Springer
We introduce DeepCert, a tool-supported method for verifying the robustness of deep neural
network (DNN) image classifiers to contextually relevant perturbations such as blur, haze …

SADA: Semantic adversarial diagnostic attacks for autonomous applications

A Hamdi, M Müller, B Ghanem - … of the AAAI Conference on Artificial …, 2020 - ojs.aaai.org
One major factor impeding more widespread adoption of deep neural networks (DNNs) is
their lack of robustness, which is essential for safety-critical applications such as …