Computer scientists at Nanyang Technological University (NTU), Singapore, have managed to "jailbreak" AI chatbots by pitting them against each other. After "jailbreaking" the chatbots, the researchers obtained responses to queries that chatbots such as ChatGPT, Google Bard, and Microsoft Bing Chat normally refuse to answer.