Cooking robots have come a long way in a relatively short time. We’re not yet at the point where we’ve got robot arms dangling from the ceiling that do all the work for us, but there are a bunch of robots out there with reasonable cookie-making skills. However, we’ve mostly seen cooking robots that are programmed to follow a specific recipe, rather than cooking robots that are programmed to cook you exactly what you want. Sometimes these are the same thing, but often cooking is (I’m told) much more about adapting a recipe to your individual taste. For personal cooking robots to make us food that we love, they’re going to need to be able to listen to our feedback, understand what that feedback means, and then take actions to adapt their recipe or technique to achieve the desired outcome. This is more complicated than, say, adding less… Continue reading Robot Learns to Cook Your Perfect Omelet
This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE. In my mythical free time outside of professorhood, I’m a stand-up comedian and improviser. As a comedian, I’ve often found myself wishing I could banter with modern commercial AI assistants. They don’t have enough comedic skills for my taste! This longing for cheeky AI eventually led me to study autonomous robot comedians, and to teach my own robot how to perform stand-up.
For the most part, robots are a mystery to end users. And that’s part of the point: Robots are autonomous, so they’re supposed to do their own thing (presumably the thing that you want them to do) and not bother you about it. But as humans start to work more closely with robots, in collaborative tasks or social or assistive contexts, it’s going to be hard for us to trust them if their autonomy is such that we find it difficult to understand what they’re doing. In a paper published in Science Robotics, researchers from UCLA have developed a robotic system that can generate different kinds of real-time, human-readable explanations about its actions, and then did some testing to figure out which of the explanations were the most effective at improving a human’s trust in the system. Does this mean we can totally understand and trust robots now? Not yet—but it’s… Continue reading A Robot That Explains Its Actions Is a First Step Towards AI We Can (Maybe) Trust
This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE. At an early age, as we take our first steps into the world of math and numbers, we learn that one apple plus another apple equals two apples. We learn to count real things. Only later are we introduced to a weird concept: zero… or the number of apples in an empty box. The concept of “zero” revolutionized math after Hindu-Arabic scholars and then the Italian mathematician Fibonacci introduced it into our modern numbering system. While today we comfortably use zero in all our mathematical operations, the concept of “nothing” has yet to enter the realm of artificial intelligence. In a sense, AI and deep learning still need to learn how to recognize and reason with nothing. Is it an apple or a banana? Continue reading The Next Frontier in AI: Nothing
In September, Facebook sent out a strange casting call: We need all types of people to look into a webcam or phone camera and say very mundane things. The actors stood in bedrooms, hallways, and backyards, and they talked about topics such as the perils of junk food and the importance of arts education. It was a quick and easy gig—with an odd caveat. Facebook researchers would be altering the videos, extracting each person’s face and fusing it onto another person’s head. In other words, the participants had to agree to become deepfake characters. Facebook’s artificial intelligence (AI) division put out this casting call so it could ethically produce deepfakes—a term that originally referred to videos that had been modified using a certain face-swapping technique but is now a catchall for manipulated video. The Facebook videos are part of a training data set that the company assembled for a global… Continue reading Facebook AI Launches Its Deepfake Detection Challenge
Yoshua Bengio is known as one of the “three musketeers” of deep learning, the type of artificial intelligence (AI) that dominates the field today. Bengio, a professor at the University of Montreal, is credited with making key breakthroughs in the use of neural networks—and just as importantly, with persevering with the work through the long cold AI winter of the late 1980s and the 1990s, when most people thought that neural networks were a dead end. He was rewarded for his perseverance in 2018, when he and his fellow musketeers (Geoffrey Hinton and Yann LeCun) won the Turing Award, which is often called the Nobel Prize of computing. Today, there’s increasing discussion about the shortcomings of deep learning. In that context, IEEE Spectrum spoke to Bengio about where the field should go from here. He’ll speak on a similar subject tomorrow at NeurIPS, the biggest and buzziest AI conference in the world… Continue reading Yoshua Bengio, Revered Architect of AI, Has Some Ideas About What to Build Next
AI experts gathered at MIT last week, with the aim of predicting the role artificial intelligence will play in the future of work. Will it be the enemy of the human worker? Will it prove to be a savior? Or will it be just another innovation—like electricity or the internet? As IEEE Spectrum previously reported, this conference (“AI and the Future of Work Congress”), held at MIT’s Kresge Auditorium, offered sometimes pessimistic outlooks on the job- and industry-destroying path that AI and automation seem to be taking: Self-driving technology will put truck drivers out of work; smart law clerk algorithms will put paralegals out of work; robots will (continue to) put factory and warehouse workers out of work. Andrew McAfee, co-director of MIT’s Initiative on the Digital Economy, said even just in the past couple of years, he’s noticed a shift in the public’s perception of AI. “I remember from previous versions… Continue reading AI and the future of work: The prospects for tomorrow’s jobs
This is part six of a six-part series on the history of natural language processing. In February of this year, OpenAI, one of the foremost artificial intelligence labs in the world, announced that a team of researchers had built a powerful new text generator called the Generative Pre-Trained Transformer 2, or GPT-2 for short. The researchers trained their system on a massive corpus of text from the Web, and from that unsupervised training it acquired a broad set of natural language processing (NLP) capabilities, including reading comprehension, machine translation, and the ability to generate long strings of coherent text. But as is often the case with NLP technology, the tool held both great promise and great peril. Researchers and policy makers at the lab were concerned that their system, if widely released, could be exploited by bad actors and misappropriated for “malicious purposes.” The people of OpenAI, which defines its mission as “discovering and enacting the path to safe… Continue reading For Centuries, People Dreamed of a Machine That Could Produce Language. Then OpenAI Made One
This is part five of a six-part series on the history of natural language processing. In March 2016, Microsoft was preparing to release its new chatbot, Tay, on Twitter. Described as an experiment in “conversational understanding,” Tay was designed to engage people in dialogue through tweets or direct messages, while emulating the style and slang of a teenage girl. She was, according to her creators, “Microsoft’s A.I. fam from the Internet that’s got zero chill.” She loved E.D.M. music, had a favorite Pokémon, and often said extremely online things, like “swagulated.” Tay was an experiment at the intersection of machine learning, natural language processing, and social networks. While other chatbots in the past—like Joseph Weizenbaum’s Eliza—conducted conversation by following pre-programmed and narrow scripts, Tay was designed to learn more about language over time, enabling her to have conversations about any topic. Eventually… Continue reading In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Online Conversation
This week at MIT, academics and industry officials compared notes, studies, and predictions about AI and the future of work. During the discussions, an insurance company executive shared details about one AI program that rolled out at his firm earlier this year. A chatbot the company introduced, the executive said, now handles 150,000 calls per month. Later in the day, a panelist—David Fanning, founder of PBS’s Frontline—remarked that this statistic was emblematic of broader fears he encountered while reporting a new Frontline documentary about AI. “People are scared,” Fanning said of the public’s AI anxiety. Fanning was part of a daylong symposium about AI’s economic consequences—good, bad, and otherwise—convened by MIT’s Task Force on the Work of the Future.