A senior data scientist at Netflix trained an AI to detect kissing scenes in films, and had to take precautions to make sure the model didn’t confuse kissing with sex. Like someone who has never been kissed, the AI began learning the basics by binge-watching romantic film clips to see how Hollywood stars lock lips. By training deep learning algorithms that have already proven adept at recognizing faces and objects to also recognize steamy kissing scenes dramatized by professional actors, a data scientist has shown how AI systems could gain greater insight into the most intimate human activities.
PartNet is a new semantic database of common objects that brings a new level of real-world understanding to robots. One of the things that makes humans so great at adapting to the world around us is our ability to understand entire categories of things all at once, and then use that general understanding to make sense of specific things that we’ve never seen before. For example, consider something like a lamp. We’ve all seen some lamps. Nobody has seen every single lamp there is. But in most cases, we can walk into someone’s house for the first time and easily identify all their lamps and how they work. Every once in a while, of course, there will be something incredibly weird that forces you to ask, “Uh, is that a lamp? How do I turn it on?” But most of the time, our generalized mental model of lamps … (Continue reading: “Massive 3D Dataset Helps Robots Understand What Things Are”)
Without instructions, software agents learn how to crush human players at “Capture the Flag” in Quake III Arena. Chess and Go were originally developed to mimic warfare, but they do a bad job of it. War and most other competitions generally involve more than one opponent and more than one ally, and the play typically unfolds not on an orderly, flat matrix but in a variety of landscapes built up in three dimensions. That’s why Alphabet’s DeepMind, having crushed chess and Go, has now tackled the far harder challenge posed by a three-dimensional, multiplayer, first-person video game. Writing today in Science, lead author Max Jaderberg and 17 DeepMind colleagues describe how a totally unsupervised program of self-learning allowed software to exceed human performance in playing “Quake III Arena.” The experiment involved a version of the game that requires each of two teams to capture as many of the other team’s flags as possible. The teams begin at base camps set at opposite ends of … (Continue reading: “DeepMind Deploys Self-taught Agents To Beat Humans at Quake III”)
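DeepMind measured "exceeding human performance" by ranking agents and humans against one another from match outcomes alone, using tournament-style ratings. The sketch below is a minimal, generic Elo-style rating update of the kind used in such evaluations; it is an illustration of the rating idea, not DeepMind’s actual evaluation code.

```python
def elo_update(rating_a, rating_b, score_a, k=32.0):
    """Update two players' ratings after one match.

    score_a is 1.0 if A won, 0.5 for a draw, 0.0 if A lost.
    Returns the new (rating_a, rating_b); points gained by one
    side are lost by the other, so the total stays constant.
    """
    # Expected score for A under the standard Elo logistic curve.
    expected_a = 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta
```

For two equally rated players (say, both at 1000), a win moves the winner up by k/2 = 16 points and the loser down by the same amount.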
Learning in simulation no longer takes human expertise to make it useful in the real world. We all know how annoying real robots are. They’re expensive, they’re finicky, and teaching them to do anything useful takes an enormous amount of time and effort. One way of making robot learning slightly more bearable is to program robots to teach themselves things, which is not as fast as having a human instructor in the loop, but can be much more efficient because that human can be off doing something else more productive instead. Google industrialized this process by running a bunch of robots in parallel, which sped things up enormously, but you’re still constrained by those pesky physical arms. The way to really scale up robot learning is to do as much of it as you can in simulation instead. You can use as many virtual robots running in virtual environments testing … (Continue reading: “NVIDIA Brings Robot Simulation Closer to Reality by Making Humans Redundant”)
Predictive computer models could prompt physicians to talk with families who are skeptical of vaccines. Growing skepticism toward vaccines has sparked a flare-up of measles outbreaks affecting New York City neighborhoods, cruise ships, international airports, and even Google’s Mountain View headquarters. To help family physicians reach out to vaccine-hesitant parents, data scientists have shown how computer models can predict the likelihood that an individual child’s parents will not get him or her vaccinated.
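The article doesn’t detail the researchers’ model, but per-child risk scores like this are commonly produced with a logistic model over family-history features. A minimal sketch of that idea follows; the feature names, weights, and bias here are entirely hypothetical, not taken from the study.

```python
import math

# Hypothetical inputs a clinic might have on file (illustrative only).
FEATURE_NAMES = ["prior_vaccine_refusals", "missed_well_child_visits"]

def predict_hesitancy(features, weights, bias):
    """Logistic model: estimated probability that a child will NOT be vaccinated.

    features: numeric values aligned with FEATURE_NAMES.
    weights, bias: learned model parameters (here, made-up numbers).
    """
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))
```

With made-up parameters, `predict_hesitancy([1.0, 1.0], [1.0, 1.0], -2.0)` sits exactly at the 0.5 decision boundary; raising either feature pushes the score higher, which is what would trigger a physician outreach flag.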
Microsoft’s president talks about the promise and perils of artificial intelligence. AI can reveal how many cigarettes a person has smoked based on the DNA contained in a single drop of their blood, or scrutinize Islamic State propaganda to discover whether violent videos are radicalizing potential recruits. Because AI is such a powerful tool, Microsoft president Brad Smith told the crowd at Columbia University’s recent Data Science Day that tech companies and universities performing AI research must also help ensure the ethical use of such technologies.
Companies are deploying artificial intelligence systems but don’t know if they’ll measure up. The Animal-AI Olympics, which will begin this June, aims to “benchmark the current level of various AIs against different animal species using a range of established animal cognition tasks.” At stake are bragging rights and US $10,000 in prizes. The project, a partnership between the University of Cambridge’s Leverhulme Centre for the Future of Intelligence and GoodAI, a research institution based in Prague, is a new way to evaluate the progress of AI systems toward what researchers call artificial general intelligence. Such an assessment is necessary, the organizers say, because recent benchmarks are somewhat deceiving. While AI systems have bested human grandmasters in a host of challenging competitions, including the board game Go and the video game StarCraft, these matchups only proved that the AIs were astoundingly good at those particular games. AI systems … (Continue reading: “Is AI as Smart as a Mouse? A Crow? An Expert Physician?”)
TossingBot, developed by Google and Princeton, can teach itself to throw arbitrary objects with better accuracy than most humans. As anyone who’s ever tried to learn how to throw something properly can attest, it takes a lot of practice to get it right. Once you have it down, though, it makes you much more efficient at a variety of weird tasks: Want to pass an orange ball through a hoop that’s inconveniently far off the ground? Just throw it! Want to knock some small sticks placed on top of large sticks with a ball? Just throw it! Want to move a telephone pole in Scotland? You get the idea. Most humans, unfortunately, aren’t talented enough for the skills we’ve developed at throwing things for strange reasons to translate well to everyday practical tasks. But just imagine what we’d be capable of if we could throw arbitrary objects … (Continue reading: “Google Teaches Robot to Toss Bananas Better Than You Do”)
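Throwing is a task with a convenient physics prior: for level ground and no air resistance, the release speed needed to land an object at a target distance follows directly from textbook ballistics, and a system like TossingBot can reportedly start from an estimate of this kind and learn corrections for real-world effects. The function below is that textbook estimate only, a sketch for illustration and in no way Google’s code.

```python
import math

def release_speed(distance, angle_deg=45.0, g=9.81):
    """Ideal release speed (m/s) to land at `distance` meters.

    Assumes level ground, no air drag, release and landing at the
    same height. From the projectile range equation
    R = v^2 * sin(2*theta) / g, solved for v.
    """
    theta = math.radians(angle_deg)
    return math.sqrt(distance * g / math.sin(2.0 * theta))
```

At the optimal 45-degree angle, sin(2θ) = 1, so the formula reduces to v = sqrt(d·g); a real thrower (robot or human) then has to adjust for drag, grip, and the object’s mass distribution, which is where the learning comes in.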
California Governor Gavin Newsom wows a crowd of distinguished computer scientists, educators, and other Silicon Valley luminaries at the Stanford Human-Centered AI symposium. Stanford University launched its Institute for Human-Centered AI on Monday. Known as Stanford HAI, the institute’s charter is to develop new technologies while guiding AI’s impact on the world, wrestle with ethical questions, and come up with helpful public policies. The institute intends to raise US $1 billion to put toward this effort. The university kicked off Stanford HAI (pronounced “high”) with an all-day symposium that laid out some of the issues the institute aims to address while showcasing Stanford’s current crop of AI researchers. The most anticipated speaker on the agenda was Microsoft co-founder Bill Gates. Lines of AI researchers, Silicon Valley entrepreneurs, investors, and educators formed early to get through the security screening required to watch his talk in person. And indeed, Gates’ remarks, structured as an interview by two … (Continue reading: “A Crowd of Computer Scientists Lined Up for Bill Gates—But it Was Gavin Newsom That Got Them Buzzing”)
Researchers can use the “Moments in Time” project to train AI systems to recognize and understand actions and events in videos. Imagine if we had to explain all of the actions that take place on Earth to aliens. We could provide them with non-fiction books or BBC documentaries. We could try to explain verbally what twerking is. But, really, nothing conveys an action better than a three-second video clip. Thanks to researchers at MIT and IBM, we now have a clearly labelled dataset of more than one million such clips. The dataset, called Moments in Time, captures hundreds of common actions that occur on Earth, from the beautiful moment of a flower opening to the embarrassing instance of a person tripping and eating dirt. (We’ve all been there.) Moments in Time, however, wasn’t created to provide a bank of GIFs, but to lay … (Continue reading: “Flipping or Turning? This Massive Database of Video Clips Will Help AIs Understand the Difference”)
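The core structure of a dataset like this is simply a mapping from action labels to labeled clips, which is what lets a training loop sample examples per action. Below is a minimal sketch of building such an index from a hypothetical CSV-style manifest of `path,label` rows; the real Moments in Time distribution format may differ.

```python
def build_label_index(manifest_lines):
    """Group clip paths by action label.

    manifest_lines: iterable of strings like "clips/0001.mp4,opening".
    Returns {label: [clip paths...]}, preserving manifest order.
    """
    index = {}
    for line in manifest_lines:
        # rsplit on the last comma so paths containing commas still parse.
        path, label = line.strip().rsplit(",", 1)
        index.setdefault(label, []).append(path)
    return index
```

A trainer can then draw balanced batches by sampling a label first and a clip second, rather than sampling clips uniformly from a pool dominated by common actions.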