Robotics researchers expand their Duckietown autonomous car course to Kickstarter There is a strong and natural relationship between robots and rubber duckies. Seriously. Being small, cheap, colorful, and pleasingly compliant, duckies became a sort of physical Stanford Bunny—when you want to show the scale of a robot, or give a robot something to visually locate or grasp, just toss a duckie in there. This relationship was formalized at the 2016 ICRA conference, where duckies inspired a bunch of videos and some poetry that is surprisingly not terrible. Since then, duckies have been taking over in robotics—at this point, I’m fairly certain that Andrea Censi at ETH Zurich is held hostage by (and doing the bidding of) a small army of little yellow duckies. This would explain why an entire duckie village full of duckie-sized autonomous cars that you can learn how to program is now on Kickstarter, with Continue reading Learn to Program Self-Driving Cars (and Help Duckies Commute) With Duckietown
Your weekly selection of awesome robot videos Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
ISR 2018 – August 24-27, 2018 – Shenyang, China
BioRob 2018 – August 26-29, 2018 – University of Twente, Netherlands
RO-MAN 2018 – August 27-30, 2018 – Nanjing, China
ELROB 2018 – September 24-28, 2018 – Mons, Belgium
ARSO 2018 – September 27-29, 2018 – Genoa, Italy
ROSCon 2018 – September 29-30, 2018 – Madrid, Spain
IROS 2018 – October 1-5, 2018 – Madrid, Spain
Let us know if you have suggestions for next week, and enjoy today’s videos.
By adding randomness to a relatively simple simulation, OpenAI’s robot hand learned to perform complex in-hand manipulation In-hand manipulation is one of those things that’s fairly high on the list of “skills that are effortless for humans but extraordinarily difficult for robots.” Without even really thinking about it, we’re able to adaptively coordinate four fingers and a thumb with our palm and friction and gravity to move things around in one hand without using our other hand—you’ve probably done this a handful (heh) of times today already, just with your cellphone. It takes us humans years of practice to figure out how to do in-hand manipulation robustly, but robots don’t have that kind of time. Learning through practice and experience is still the way to go for complex tasks like this, and the challenge is finding a way to learn faster and more efficiently than just giving a robot hand Continue reading OpenAI Demonstrates Complex Manipulation Transfer from Simulation to Real World
Facebook’s DensePose technology lets anyone turn 2D images of people into 3D models In early 2018, Facebook’s AI researchers unveiled a deep learning system that can transform 2D photo and video images of people into 3D mesh models of those human bodies in motion. Last month, Facebook publicly shared the code for its “DensePose” technology, which could be used by Hollywood filmmakers and augmented reality game developers—but maybe also by those seeking to build a surveillance state.
A casual Q&A with a robotics PR expert and a robotics journalist [Image: iStockphoto] Evan Ackerman is a journalist who has been covering robotics (and some other stuff) for the last 11 years. Last time he counted, he’d written well over 6,000 articles, which seems like a lot. Tim Smith is the CEO of Element Public Relations, a boutique PR firm in San Francisco specializing in emerging technologies. Element has particular expertise in robotics, given their work with Willow Garage, Open Robotics, Fetch Robotics, Simbe Robotics, and others. For this article, Evan and Tim sent each other some questions about what it’s like for a journalist to work with PR firms, and what it’s like for PR firms to work with journalists. Tim Smith: What is it that you do, exactly? Evan Ackerman: I read a lot of news, send a lot of emails, talk to a lot of people, and Continue reading Two Robot Geeks Discuss Robotics PR, Automation Fears, and Terminators
Axed engineers say IBM isn’t always smart about artificial intelligence [Illustration: IEEE Spectrum; Images: IBM; iStockphoto] IBM, a venerable tech company on a mission to stay relevant, has staked much of its future on IBM Watson. The company has touted Watson, its flagship artificial intelligence, as the premier product for turning our data-rich but disorganized world into a smart and tidy planet. Just last month, IBM CEO Ginni Rometty told a convention audience that we’re at an inflection point in history. Putting AI into everything will enable businesses to improve on “an exponential curve,” she said—a phenomenon that might one day be referred to as “Watson’s Law.” But according to engineers swept up in a major round of layoffs within IBM’s Watson division last month, the company’s promotions of its “cognitive computing” platform mask its own real difficulties in turning its AI into a profitable business. “IBM Watson has great AI,” Continue reading Layoffs at Watson Health Reveal IBM’s Problem with AI
Researchers wager on a possible Deepfake video scandal during the 2018 U.S. midterm elections [Illustration: iStockphoto] A quiet wager has taken hold among researchers who study artificial intelligence techniques and the societal impacts of such technologies. They’re betting on whether someone will create a so-called Deepfake video about a political candidate that receives more than 2 million views before getting debunked by the end of 2018. The actual stakes in the bet are fairly small: Manhattan cocktails as a reward for the “yes” camp and tropical tiki drinks for the “no” camp. But the implications of the technology behind the bet’s premise could reshape governments and undermine societal trust in the idea of having shared facts. It all comes down to when the technology may mature enough to digitally create fake but believable videos of politicians and celebrities saying or doing things that never actually happened in real life. “We talk about these technologies and we see the Continue reading Experts Bet on First Deepfakes Political Scandal
Alphabet’s DeepMind neural networks can grasp a three-dimensional scene from just a handful of two-dimensional snapshots [Image: DeepMind] Researchers at Alphabet’s DeepMind today described a method that they say can construct a three-dimensional layout from just a handful of two-dimensional snapshots. So far the method, based on deep neural networks, has been confined to virtual environments, they write in Science magazine. Natural environments are still too hard for current algorithms and hardware to handle. The article doesn’t speculate on commercial applications, and the authors weren’t available for interview. That gives me license to speculate: The new method might be useful for any surveillance system that has to reconstruct a crime from a few snapshots. Self-driving cars and household robots would also seem likely beneficiaries of the technique. What’s key is that the system learns a lot from very little—in these experiments it never got more than five snapshots to work with. And, the researchers write, it does the job by observation alone, without anyone having to first label the Continue reading Alphabet’s DeepMind Makes a Key Advance in Computer Vision
A drone surveillance system trains to watch out for humans stabbing or punching each other [Images: University of Cambridge/National Institute of Technology/Indian Institute of Science/IEEE] Drones armed with computer vision software could enable new forms of automated skyborne surveillance to watch for violence below. One glimpse of that future comes from UK and Indian researchers who demonstrated a drone surveillance system that can automatically detect small groups of people fighting each other. The idea for such a drone surveillance system was first planted in the wake of the 2013 Boston Marathon bombing, which killed three and injured hundreds. That first attempt petered out. It was not until the Manchester Arena bombing that killed 23 and wounded 139—including many children leaving an Ariana Grande concert—that the researchers made real progress. This time, they harnessed a form of the popular artificial intelligence technique known as deep learning. “This time we were able to do a Continue reading AI Drone Learns to Detect Brawls
Researchers explore whether robots can become useful sacred objects in a religious context [Photo: Gabriele Trovato/Waseda University. A robot with the appearance of a Catholic saint can pray together with users and cite parts of the Bible and stories from the lives of saints.] Robots appear to be in the middle of a gradual but persistent transition from automated tools that perform specific tasks to artificially intelligent entities that we interact with socially and emotionally. It’s not at all clear where this is going to end up—people toss around the idea of robot companionship and even robot love with some frequency, for example. What hasn’t been explored nearly as much is the idea of robots in a religious context. We’ve seen a few examples of robots assisting in religious tasks, but what if robots could take things a step further, and become sacred objects, embodying divinity within a robot itself? Continue reading Can a Robot Be Divine?