Author
Sebastian Szyller
Publication date
2020/3/16
Abstract
Machine learning applications have become increasingly popular. At the same time, model training has become an expensive task in terms of computational power, amount of data, and human expertise. As a result, models now constitute intellectual property and a business advantage for model owners, and thus their confidentiality must be preserved. Recently, it was shown that models can be stolen via model extraction attacks, which do not require physical white-box access to the model but merely a black-box prediction API. A stolen model can be used to avoid paying for the service or even to undercut the offering of the legitimate model owner, thereby depriving the victim of their accumulated business advantage. In this thesis, we introduce two novel defense methods designed to detect distinct classes of model extraction attacks.