Although the “language” the bots devised seems mostly like unintelligible gibberish, the incident highlighted how AI systems can, and often will, deviate from expected behavior if given the chance. In 2016, Microsoft launched an ambitious experiment with a Twitter chatbot known as Tay. I’m not sure whether chatting with a bot would help me sleep, but at least it’d stop…