Conversational AI is augmenting and replacing human endeavors in finance, mental health counseling, advertising, dating, journalism, and wellness. More commonly referred to as “chatbots” or “social bots,” these computational agents can automatically promulgate ideas, generate messages, and act as followers of users. With hundreds of billions of parameters trained on vast corpora of text, large language models such as OpenAI’s ChatGPT, Microsoft’s Bing Chat, or Google’s Bard exhibit sophisticated linguistic and conversational capacities that can mimic human behavior. The popularity of conversational AI has grown exponentially since the release of ChatGPT, which reached more than 100 million users within two months of its launch in November 2022. More advanced versions, so-called empathy bots, are increasingly able to read, evaluate, and respond to a user’s emotional state, heightening not only their utility as artificial headhunters but, more importantly, their anthropomorphic appeal.
While these artificial agents are intended as tools for societal good, they are also becoming the weapon of choice for agent provocateurs in information warfare and cybercrime. Extremist groups, rogue governments, and criminal organizations are using chatbots to heighten the speed and scale of online mis- and disinformation, create fake social media accounts, harvest personal data from unsuspecting users, impersonate friends and associates, and manipulate and disrupt political communication. Already, prominent scientists and Silicon Valley pundits have called for a six-month pause on training the most powerful models to consider the growing risks, while Italy, China, Russia, Iran,