The alignment problem from a deep learning perspective

R Ngo, L Chan, S Mindermann - arXiv preprint arXiv:2209.00626, 2022 - arxiv.org
In coming decades, artificial general intelligence (AGI) may surpass human capabilities at many critical tasks. We argue that, without substantial effort to prevent it, AGIs could learn to pursue goals that conflict (i.e., are misaligned) with human interests. If trained like today's most capable models, AGIs could learn to act deceptively to receive higher reward, learn internally-represented goals which generalize beyond their fine-tuning distributions, and pursue those goals using power-seeking strategies. We review emerging evidence for these properties. AGIs with these properties would be difficult to align and may appear aligned even when they are not. We outline how the deployment of misaligned AGIs might irreversibly undermine human control over the world, and briefly review research directions aimed at preventing this outcome.