Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Watermarking Feature Attribution

S Shao, Y Li, H Yao, Y He, Z Qin, K Ren - arXiv preprint arXiv:2405.04825, 2024 - arxiv.org
Ownership verification is currently the most critical and widely adopted post-hoc method to
safeguard model copyright. In general, model owners exploit it to identify whether a given …

Sakshi: Decentralized AI Platforms

S Bhat, C Chen, Z Cheng, Z Fang, A Hebbar… - arXiv preprint arXiv …, 2023 - arxiv.org
Large AI models (e.g., Dall-E, GPT4) have electrified the scientific, technological, and societal
landscape through their superhuman capabilities. These services are offered largely in a …

SoK: Unintended Interactions among Machine Learning Defenses and Risks

V Duddu, S Szyller, N Asokan - arXiv preprint arXiv:2312.04542, 2023 - arxiv.org
Machine learning (ML) deployments cannot neglect risks to security, privacy, and fairness.
Several defenses have been proposed to mitigate such risks. When a defense is effective in …