Machine learning for healthcare wearable devices: the big picture

F Sabry, T Eltaras, W Labda, K Alzoubi… - Journal of Healthcare …, 2022 - Wiley Online Library
The use of artificial intelligence and machine learning techniques in healthcare applications has
been actively researched over the last few years. It holds promising opportunities as it is …

A systematic review of human–computer interaction and explainable artificial intelligence in healthcare with artificial intelligence techniques

M Nazar, MM Alam, E Yafi, MM Su'ud - IEEE Access, 2021 - ieeexplore.ieee.org
Artificial intelligence (AI) is one of the emerging technologies. In recent decades, AI has
gained widespread acceptance in a variety of fields, including virtual …

PPFL: Privacy-preserving federated learning with trusted execution environments

F Mo, H Haddadi, K Katevas, E Marin… - Proceedings of the 19th …, 2021 - dl.acm.org
We propose and implement a Privacy-preserving Federated Learning (PPFL) framework for
mobile systems to limit privacy leakages in federated learning. Leveraging the widespread …
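
To make the federated setting concrete, the sketch below shows plain federated averaging with the client update wrapped in a train_in_tee placeholder. The function name and the toy least-squares task are assumptions for illustration only; the actual framework runs the privacy-sensitive layers inside TrustZone or SGX enclaves, which is not reproduced here.

```python
# Minimal federated-averaging sketch. train_in_tee is a hypothetical stand-in
# for a client update that PPFL would execute inside a TEE; here it just runs
# a gradient step on a toy least-squares task in ordinary memory.
import numpy as np

def train_in_tee(global_weights, local_data):
    """Hypothetical placeholder for a client update performed inside a TEE."""
    X, y = local_data
    grad = X.T @ (X @ global_weights - y) / len(y)   # least-squares gradient
    return global_weights - 0.1 * grad

def federated_round(global_weights, clients):
    """One FedAvg round: every client updates locally, the server averages."""
    updates = [train_in_tee(global_weights, data) for data in clients]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(32, 4)), rng.normal(size=32)) for _ in range(5)]
weights = np.zeros(4)
for _ in range(20):
    weights = federated_round(weights, clients)
print("aggregated weights:", weights)
```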

Privacy and security issues in deep learning: A survey

X Liu, L Xie, Y Wang, J Zou, J Xiong, Z Ying… - IEEE …, 2020 - ieeexplore.ieee.org
Deep Learning (DL) algorithms based on artificial neural networks have achieved
remarkable success and are being extensively applied in a variety of application domains …

ML-Leaks: Model and data independent membership inference attacks and defenses on machine learning models

A Salem, Y Zhang, M Humbert, P Berrang… - arXiv preprint arXiv …, 2018 - arxiv.org
Machine learning (ML) has become a core component of many real-world applications and
training data is a key factor that drives current progress. This huge success has led Internet …
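
One of the simplest attacks discussed in this line of work guesses membership from the target model's confidence alone. The sketch below thresholds the top posterior of an overfit target model; the 0.9 threshold, the random-forest target, and the synthetic data are assumptions for the demo, not the paper's evaluated setup.

```python
# Confidence-threshold membership inference sketch: records on which the target
# model is unusually confident are guessed to be training members. Threshold,
# model, and data are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# An overfit target model leaks membership through its confidence.
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

def guess_member(model, x, threshold=0.9):
    """Predict 'member' when the model's top posterior exceeds the threshold."""
    return model.predict_proba(x.reshape(1, -1)).max() >= threshold

member_rate = np.mean([guess_member(target, x) for x in X_train])
outsider_rate = np.mean([guess_member(target, x) for x in X_out])
print(f"flagged as members: train={member_rate:.2f}, holdout={outsider_rate:.2f}")
```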

Machine learning with membership privacy using adversarial regularization

M Nasr, R Shokri, A Houmansadr - … of the 2018 ACM SIGSAC conference …, 2018 - dl.acm.org
Machine learning models leak a significant amount of information about their training sets
through their predictions. This is a serious privacy concern for the users of machine learning …
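
The defense can be pictured as a min-max game: a membership-inference attacker is trained on the model's output posteriors, and the classifier adds that attacker's success on training members as a regularization term. The sketch below is a simplified rendering with assumed architectures, synthetic data, and a fixed regularization weight; among other simplifications, the attacker here does not see the true label.

```python
# Adversarial-regularization sketch: the classifier minimizes task loss plus
# lam * (attacker's membership gain), while the attacker learns to separate
# members from non-members using only the classifier's posteriors.
# Architectures, data, and lam are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
classifier = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
attacker = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(attacker.parameters(), lr=1e-3)
task_loss, bce, lam = nn.CrossEntropyLoss(), nn.BCEWithLogitsLoss(), 1.0

X_mem, y_mem = torch.randn(256, 20), torch.randint(0, 2, (256,))  # training members
X_non = torch.randn(256, 20)                                      # reference non-members

for step in range(200):
    # Attacker step: output 1 for members, 0 for non-members, given posteriors.
    with torch.no_grad():
        posteriors = torch.softmax(classifier(torch.cat([X_mem, X_non])), dim=1)
    labels = torch.cat([torch.ones(256, 1), torch.zeros(256, 1)])
    loss_a = bce(attacker(posteriors), labels)
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()

    # Classifier step: fit the task while making members look like non-members
    # to the current attacker (the penalty grows with the attacker's success).
    logits = classifier(X_mem)
    membership_gain = bce(attacker(torch.softmax(logits, dim=1)),
                          torch.zeros(256, 1))
    loss_c = task_loss(logits, y_mem) + lam * membership_gain
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
```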

Model inversion attacks against collaborative inference

Z He, T Zhang, RB Lee - Proceedings of the 35th Annual Computer …, 2019 - dl.acm.org
The prevalence of deep learning has drawn attention to the privacy protection of sensitive
data. Various privacy threats have been presented, where an adversary can steal model …
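
In collaborative (split) inference the device sends intermediate features to the server, and those features can be optimized back toward the private input. The sketch below shows a common white-box formulation: search for an input whose device-side features match the observed ones, with a total-variation smoothness prior. The toy convolutional stack and the regularization weight are assumptions, not the paper's exact setup.

```python
# Regularized feature-inversion sketch for split inference: given features
# z = f1(x) observed by the server, recover an estimate of x by minimizing
# ||f1(x_hat) - z||^2 + tv_weight * TV(x_hat). The edge-side network and
# tv_weight are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
f1 = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                   nn.Conv2d(8, 8, 3, padding=1), nn.ReLU())  # device-side layers
for p in f1.parameters():
    p.requires_grad_(False)                 # only the input estimate is optimized

x_true = torch.rand(1, 1, 28, 28)           # private input held on the device
z = f1(x_true)                              # intermediate features seen by the server

def total_variation(img):
    """Smoothness prior that keeps the reconstruction image-like."""
    return (img[..., 1:, :] - img[..., :-1, :]).abs().mean() + \
           (img[..., :, 1:] - img[..., :, :-1]).abs().mean()

x_hat = torch.rand_like(x_true, requires_grad=True)
opt = torch.optim.Adam([x_hat], lr=0.05)
for step in range(500):
    loss = ((f1(x_hat) - z) ** 2).mean() + 0.01 * total_variation(x_hat)
    opt.zero_grad(); loss.backward(); opt.step()

print("mean reconstruction error:", (x_hat - x_true).abs().mean().item())
```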

CrypTFlow: Secure TensorFlow inference

N Kumar, M Rathee, N Chandran… - … IEEE Symposium on …, 2020 - ieeexplore.ieee.org
We present CrypTFlow, a first-of-its-kind system that converts TensorFlow inference code into
Secure Multi-party Computation (MPC) protocols at the push of a button. To do this, we build …
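
CrypTFlow's compiler and protocol back ends are not reproduced here. Purely to illustrate the underlying idea of evaluating a network layer under secure multi-party computation, the sketch below runs a linear layer over additive secret shares in a ring, with fixed-point encoding and truncation omitted; it is not the paper's protocol.

```python
# Additive secret-sharing sketch of a private linear layer: the client splits
# its input into two random shares so neither server sees the true value, each
# server multiplies by the weights locally, and the client adds the results.
import numpy as np

rng = np.random.default_rng(0)
MOD = 2 ** 16                               # arithmetic over a ring, as in MPC
W = rng.integers(-5, 5, size=(3, 4))        # model weights (held by the servers)
x = rng.integers(-5, 5, size=4)             # client's private input

share0 = rng.integers(0, MOD, size=4)       # uniformly random mask
share1 = (x - share0) % MOD                 # x = share0 + share1  (mod MOD)

y0 = (W @ share0) % MOD                     # server 0 computes on its share only
y1 = (W @ share1) % MOD                     # server 1 computes on its share only

def to_signed(v, mod=MOD):
    """Map ring elements back to signed integers for readability."""
    return np.where(v >= mod // 2, v - mod, v)

y = to_signed((y0 + y1) % MOD)              # client reconstructs W @ x
assert np.array_equal(y, W @ x)
print("private linear-layer output:", y)
```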

DarkneTZ: Towards model privacy at the edge using trusted execution environments

F Mo, AS Shamsabadi, K Katevas… - Proceedings of the 18th …, 2020 - dl.acm.org
We present DarkneTZ, a framework that uses an edge device's Trusted Execution
Environment (TEE) in conjunction with model partitioning to limit the attack surface against …
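
The partitioning idea can be sketched as splitting a network at a chosen layer and handing the later, more privacy-sensitive partition to an enclave. In the code below, run_in_enclave is a hypothetical placeholder that simply runs on the ordinary CPU; the split point and model are assumptions, and there is no actual TrustZone binding.

```python
# Model-partitioning sketch: early layers run in the normal world, and the last
# layers (which reveal the most about training-set membership) would run inside
# a TEE. run_in_enclave is a placeholder, not a real TrustZone call.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),       # normal-world partition
    nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 10))   # enclave partition
SPLIT = 3  # layers [0, SPLIT) run outside, layers [SPLIT, end) run "inside"

def run_in_enclave(tee_layers, activations):
    """Hypothetical enclave call; here it just executes on the same CPU."""
    with torch.no_grad():
        return tee_layers(activations)

def partitioned_forward(model, x, split=SPLIT):
    outside, inside = model[:split], model[split:]
    activations = outside(x)                 # computed in the normal world
    return run_in_enclave(inside, activations)

logits = partitioned_forward(model, torch.randn(2, 1, 28, 28))
print(logits.shape)  # torch.Size([2, 10])
```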

DeepHammer: Depleting the intelligence of deep neural networks through targeted chain of bit flips

F Yao, AS Rakin, D Fan - 29th USENIX Security Symposium (USENIX …, 2020 - usenix.org
Security of machine learning is increasingly becoming a major concern due to the
ubiquitous deployment of deep learning in many security-sensitive domains. Many prior …
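
The hardware side of the attack cannot be shown in a few lines, but a software-level sketch of what one flipped bit does to an int8-quantized weight illustrates why a short, targeted chain of flips is so damaging. The toy weights, quantization scale, and flipped position are assumptions; the bit-search procedure and the Rowhammer exploit itself are not modeled.

```python
# Sketch of a single weight bit flip of the kind DeepHammer induces in DRAM:
# flipping the most significant bit of one int8-quantized weight changes its
# dequantized value by scale * 128. Values here are illustrative assumptions.
import numpy as np

scale = 0.05                                        # per-tensor quantization scale
w_q = np.array([23, -87, 5, 110], dtype=np.int8)    # int8-quantized weights

corrupted = w_q.copy()
byte_view = corrupted.view(np.uint8)                # reinterpret the raw bytes
byte_view[0] ^= np.uint8(1 << 7)                    # flip the MSB of weight 0

print("original  :", w_q * scale)                   # [ 1.15 -4.35  0.25  5.5 ]
print("corrupted :", corrupted * scale)             # weight 0 jumps to -5.25
```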