The National Eating Disorders Association’s chatbot had recently replaced a phone hotline and the handful of staffers who ran it. But although it was designed to deliver a set of approved responses to people who might be at risk of an eating disorder, Tessa instead recommended that they lose weight. “Every single thing that Tessa suggested were things that led to the development of my eating disorder,” one woman who reviewed the chatbot wrote on Instagram. “It was not our intention to suggest that Tessa could provide the same type of human connection that the Helpline offered,” the nonprofit’s CEO, Liz Thompson, told NPR. Perhaps the organization didn’t want to suggest a human connection, but why else give the bot that name?

The new generation of chatbots can not only converse in unnervingly humanlike ways; in many cases, they have human names too. In addition to Tessa, there are bots named Ernie (from the Chinese company Baidu), Claude (a ChatGPT rival from the AI start-up Anthropic), and Jasper (a popular AI writing assistant for brands). Many of the most advanced chatbots (ChatGPT, Bard, HuggingChat) stick to clunky or abstract identities, but there are now many new additions to the already endless customer-service bots with real names (Maya, Bo, Dom). As generative AI continues to advance, expect a deluge of new human-named bots in the coming years, Suresh Venkatasubramanian, a computer-science professor at Brown University, told me.

The names are yet another way to make bots seem more believable and real. “There’s a difference between what you expect from a ‘help assistant’ versus a bot named Tessa,” Katy Steinmetz, the creative and project director of the naming agency Catchword, told me. These names can have a malicious effect, but in other instances, they are simply annoying or mundane: a marketing ploy for companies to try to influence how you think about their products.

The future of AI may or may not involve a bot taking your job, but it will very likely involve one taking your name.

The very first chatbot, ELIZA, wasn’t capable of much. A therapist bot created by the MIT professor Joseph Weizenbaum in the mid-1960s, ELIZA was more parrot than psychoanalyst, often doing little more than repeating and rephrasing questions that users asked it. Still, people credited this janky form of AI with more understanding, creativity, and personality than Weizenbaum had expected. A decade after ELIZA’s debut, Weizenbaum remarked that he was “startled to see how quickly and how very deeply people conversing with [ELIZA] became emotionally involved with the computer and how unequivocally they anthropomorphized it.” Today, the projection of human traits onto computers has a name: the ELIZA effect.

The following decades brought chatbots with names such as Parry, Jabberwacky, Dr. Sbaitso, and A.L.I.C.E. (Artificial Linguistic Internet Computer Entity); in 2017, Saudi Arabia granted citizenship to a humanoid robot named Sophia. But that was before AI was convincing enough to feel real.

In this new era of generative AI, human names are just one more layer of faux humanity on products already loaded with anthropomorphic features. Although Microsoft’s official name for its chatbot is simply Bing Chat, the AI initially appeared to have an alter ego that called itself Sydney. It was suppressed after professing its love for a journalist, but maybe not permanently. “If you want it to be Sydney,” Microsoft’s chief technology officer said of Bing Chat in May, “you should be able to tell it to be Sydney.” ChatGPT, meanwhile, sends out responses word by word, as if it’s thinking. A rectangle blinks as it types, not unlike when a friend is typing over iMessage. “These are design choices, very thoughtfully done, to create that verisimilitude,” Venkatasubramanian said.