Addressing the possibility of life on Mars

In 2018, millions of people around the world caught glimpses of the planet Mars, discernible as a bright red dot in the summer’s night skies. Every 26 months or so, the red planet reaches the point in its elliptical orbit closest to Earth, setting the stage for exceptional visibility. This proximity also serves as an excellent opportunity for launching exploratory Mars missions, the next of which will occur in 2020, when a global suite of rovers will take off from Earth.

The red planet was hiding behind the overcast, drizzling Boston sky on Oct. 11, when Mars expert John Grotzinger gave audiences a different perspective, taking them through an exploration of Mars’ geologic history. Grotzinger, the Fletcher Jones Professor of Geology at Caltech and a former professor in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS), also used the eighth annual John Carlson Lecture to talk to the audience gathered at the New …

Flex Logix Says It’s Solved Deep Learning’s DRAM Problem

Bandwidth limits mean AI systems need too much DRAM; embedded-FPGA startup Flex Logix thinks its technology can change that.

Deep learning has a DRAM problem. Systems designed to do difficult things in real time, such as telling a cat from a kid in a car’s backup camera video stream, are continuously shuttling the data that makes up the neural network’s guts from memory to the processor. The problem, according to startup Flex Logix, isn’t a lack of storage for that data; it’s a lack of bandwidth between the processor and memory. Some systems need four or even eight DRAM chips to sling the hundreds of gigabits per second the processor demands, which adds a lot of space and consumes considerable power. Flex Logix says that the interconnect technology and tile-based architecture it developed for reconfigurable chips will lead to AI systems that need the bandwidth of only a single DRAM chip and consume one-tenth …
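The chip-count arithmetic behind the claim is straightforward: if inference must stream weights faster than one DRAM chip can deliver, you add chips until the aggregate bandwidth suffices. A minimal sketch of that back-of-envelope calculation, using purely illustrative numbers (neither the per-chip bandwidth nor the model's demand comes from Flex Logix):

```python
import math

def dram_chips_needed(required_gbps: float, per_chip_gbps: float) -> int:
    """Minimum number of DRAM chips to supply a given weight-streaming bandwidth."""
    return math.ceil(required_gbps / per_chip_gbps)

# Illustrative assumptions only: a network that must stream ~400 Gb/s of
# weights, and a DRAM chip that delivers ~100 Gb/s of usable bandwidth.
print(dram_chips_needed(400, 100))  # -> 4
print(dram_chips_needed(800, 100))  # -> 8
```

Under these assumed figures the four-to-eight-chip range in the article falls out directly; cutting the required off-chip traffic is what would bring the count back to one.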

This Robot Transforms Itself to Navigate an Obstacle Course

A central perception system allows a robot to change its own configuration for each new challenge.

When you’ve got a hammer, everything looks like a nail, but the world starts to look more interesting if your hammer can change shape. For the builders of a class of robots called modular self-reconfigurable robots (MSRR), shape-shifting is the first step toward endowing robots with an animal-like adaptability to unknown situations. “The question of autonomy becomes more complicated, more interesting,” when robots can change themselves to meet changing circumstances, said roboticist Hadas Kress-Gazit of Cornell University. The key to achieving adaptability for robots rests in centralized sensory processing, environmental perception, and decision-making software, Kress-Gazit and colleagues report this week in a new paper in Science Robotics. The authors claim their new work represents the first time a modular robot has autonomously solved problems by reconfiguring in response to a changing environment. To achieve that, they built strict limitations into both the environment and …

Machines that learn language more like kids do

Children learn language by observing their environment, listening to the people around them, and connecting the dots between what they see and hear. Among other things, this helps children establish their language’s word order, such as where subjects and verbs fall in a sentence. In computing, learning language is the task of syntactic and semantic parsers. These systems are trained on sentences annotated by humans that describe the structure and meaning behind words. Parsers are becoming increasingly important for web searches, natural-language database querying, and voice-recognition systems such as Alexa and Siri. Soon, they may also be used for home robotics. But gathering the annotation data can be time-consuming and difficult for less common languages. Additionally, humans don’t always agree on the annotations, and the annotations themselves may not accurately reflect how people naturally speak. In a paper being presented at this week’s Empirical Methods in Natural Language Processing conference, …
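To make the annotation bottleneck concrete, here is a toy illustration of the kind of human-labeled structure a syntactic parser is trained on: a constituency tree that marks which words group into phrases. The grammar labels and the nested-tuple encoding are purely illustrative, not from the paper:

```python
# A toy constituency annotation, bracketed as (S (NP children) (VP (V learn) (NP language))).
# Producing trees like this by hand, at scale, is the costly step the article describes.
tree = ("S", ("NP", "children"), ("VP", ("V", "learn"), ("NP", "language")))

def leaves(t):
    """Recover the raw sentence by reading the tree's leaf words left to right."""
    if isinstance(t, str):
        return [t]
    return [word for child in t[1:] for word in leaves(child)]

print(" ".join(leaves(tree)))  # -> children learn language
```

Every training sentence needs such a tree attached, which is why annotation is slow for low-resource languages and why annotators can disagree about where the brackets belong.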