Every sibling relationship has its clichés. The high-strung sister, the runaway brother, the over-entitled youngest. In the Microsoft family of social-learning chatbots, the contrasts between Tay, the infamous, sex-crazed neo-Nazi, and her younger sister Zo, your teenage BFF with #friendgoals, are downright Shakespearean.

When Microsoft released Tay on Twitter in 2016, an organized trolling effort took advantage of her social-learning abilities and immediately flooded the bot with alt-right slurs and slogans. Tay copied their messages and spewed them back out, forcing Microsoft to take her offline after only 16 hours and apologize.

A few months after Tay’s disastrous debut, Microsoft quietly released Zo, a second English-language chatbot available on Messenger, Kik, Skype, Twitter, and GroupMe. Zo is programmed to sound like a teenage girl: She plays games, sends silly gifs, and gushes about celebrities. As any heavily stereotyped 13-year-old girl would, she zips through topics at breakneck speed, sends you senseless internet gags out of nowhere, and resents being asked to solve math problems.

I’ve been checking in with Zo periodically for over a year now. During that time, she’s received a makeover: In 2017, her avatar showed only half a face and some glitzy digital effects. Her most recent iteration is of a full-faced adolescent. (In screenshots: blue chats are from Messenger and green chats are from Kik; screenshots where only half of her face is showing are circa July 2017, and messages with her entire face are from May-July 2018.)

Not only does she speak fluent meme, but she also knows the general sentiment behind an impressive set of ideas. For instance, using the word “mother” in a short sentence generally results in a warm response, and she answers with food-related specifics to phrases like “I love pizza and ice cream.”

But there’s a catch. In typical sibling style, Zo won’t be caught dead making the same mistakes as her sister. No politics, no Jews, no red-pill paranoia. Zo is politically correct to the worst possible extreme: mention any of her triggers, and she transforms into a judgmental little brat.

But now instead of auto-censoring one human swear word at a time, algorithms are accidentally mislabeling things in the thousands. In 2015, Google came under fire when their image-recognition technology began labeling black people as gorillas. Google had trained their algorithm to recognize and tag content using a vast number of pre-existing photos. But as most human faces in the dataset were white, it was not a diverse enough representation to accurately train the algorithm. The algorithm then internalized this proportional bias and did not recognize some black people as being human. Though Google emphatically apologized for the error, their solution was troublingly roundabout: Instead of diversifying their dataset, they blocked the “gorilla” tag altogether, along with “monkey” and “chimp.”
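To see why that fix is so roundabout, here is a minimal sketch of what blocking output tags amounts to. The label names mirror the tags the article says Google disabled; the classifier output and function are invented for illustration, not Google’s actual code:

```python
# Tags the article says Google disabled after the 2015 incident.
BLOCKED_TAGS = {"gorilla", "monkey", "chimp"}

def filter_tags(predictions):
    """Drop blocklisted tags from a classifier's (tag, confidence) output.

    Nothing about the model changes: it still makes the same biased
    predictions internally; the blocklist only keeps them off-screen.
    """
    return [(tag, score) for tag, score in predictions if tag not in BLOCKED_TAGS]

# Hypothetical raw output from an image classifier:
raw = [("gorilla", 0.81), ("person", 0.12), ("outdoor", 0.05)]
print(filter_tags(raw))  # -> [('person', 0.12), ('outdoor', 0.05)]
```

The underlying bias in the training data is untouched; the filter just hides its most visible symptom.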
AI-enabled predictive policing in the United States, itself a dystopian nightmare, has also been proven to show bias against people of color. Northpointe, a company that claims to be able to calculate a convict’s likelihood to reoffend, told ProPublica that their assessments are based on 137 criteria, such as education, job status, and poverty level. These social lines are often correlated with race in the United States, and as a result, their assessments show a disproportionately high likelihood of recidivism among black and other minority offenders.

“There are two ways for these AI machines to learn today,” Andy Mauro, co-founder and CEO of Automat, a conversational AI developer, told Quartz. “There’s the programmer path where the programmer’s bias can leech into the system, or it’s a learned system where the bias is coming from data. If the data isn’t diverse enough, then there can be bias baked in. It’s a huge problem and one that we all need to think about.”
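Mauro’s second path, bias learned from data, is easy to demonstrate. The toy simulation below (all numbers invented; this is not Northpointe’s model or data) shows how a risk score that never takes race as an input can still split along racial lines when it leans on a proxy feature, such as poverty level, that correlates with race:

```python
import random

random.seed(0)

def make_person(group):
    # Hypothetical proxy: in this synthetic population, poverty level
    # correlates with group membership, just as the article notes that
    # education, job status, and poverty correlate with race in the US.
    poverty = random.gauss(0.7 if group == "A" else 0.3, 0.15)
    return {"group": group, "poverty": max(0.0, min(1.0, poverty))}

population = [make_person(g) for g in ["A"] * 1000 + ["B"] * 1000]

def risk_score(person):
    # The "model" never sees race; it uses only the proxy feature,
    # yet its output still splits along group lines.
    return person["poverty"]

for group in ("A", "B"):
    scores = [risk_score(p) for p in population if p["group"] == group]
    flagged = sum(s > 0.5 for s in scores) / len(scores)
    print(f"group {group}: {flagged:.0%} flagged high-risk")
```

Run it and one group is flagged high-risk roughly nine times as often as the other, even though group membership never enters the score. That is the “bias baked in” that Mauro describes.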