Intel and academic groups are designing specialized hardware to speed path planning and other aspects of robot coordination

Robots have a tough job making their way in the world. Life throws up obstacles, and it takes a lot of computing power to avoid them. At the IEEE International Solid-State Circuits Conference last month in San Francisco, engineers presented some ideas for lightening that computational burden. That’s a particularly good thing if you’re a compact robot, with a small battery pack and a big job to do.
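To get a feel for the kind of search work these accelerators aim to speed up, here is a minimal grid path-planning sketch. It uses plain breadth-first search, which is illustrative only; the grid, start, and goal are invented, and this is not the approach presented at the conference.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a 2-D grid; '#' cells are obstacles."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route around the obstacles

# A tiny map: a wall in column 2 forces a detour through the bottom row.
grid = ["..#.",
        "..#.",
        "...."]
print(shortest_path(grid, (0, 0), (0, 3)))
```

Even this toy search visits many cells per query; a real robot replanning around moving obstacles at video rates multiplies that work enormously, which is the burden the specialized hardware targets.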
In a quest for penny-priced plastic sensors, Arm and its partners are demonstrating a stripped-down form of machine learning

Body odor is a stubborn problem. Not just for people, but also for sensors. Sensors and the computing attached to them struggle to perceive armpit odors in the way humans do, because B.O. is really a complex mix of dozens of gaseous chemicals. The UK’s PlasticArmPit project is designing the first machine learning–enabled flexible plastic sensor chip. Its target audience: those who think they might stink. The prototype chip will be manufactured and tested in 2019. The project is part of a broader effort Arm has been involved in to drive the cost of plastic IoT devices down below US $0.01 so that they can be embedded in all sorts of consumer goods, including disposable ones.
Bandwidth limits mean AI systems need too much DRAM; embedded-FPGA startup thinks its technology can change that

Deep learning has a DRAM problem. Systems designed to do difficult things in real time, such as telling a cat from a kid in a car’s backup camera video stream, are continuously shuttling the data that makes up the neural network’s guts from memory to the processor. The problem, according to startup Flex Logix, isn’t a lack of storage for that data; it’s a lack of bandwidth between the processor and memory. Some systems need four or even eight DRAM chips to sling hundreds of gigabits per second to the processor, which adds a lot of space and consumes considerable power. Flex Logix says that the interconnect technology and tile-based architecture it developed for reconfigurable chips will lead to AI systems that need the bandwidth of only a single DRAM chip and consume one-tenth the power.
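A back-of-envelope calculation shows why the bandwidth, rather than the capacity, is the bottleneck. The sketch below uses invented numbers (a hypothetical 25-million-weight vision model at 30 frames per second), not Flex Logix's figures, and assumes the naive case where all weights are streamed from DRAM for every frame.

```python
def weight_bandwidth_gbps(num_weights, bytes_per_weight, frames_per_sec):
    """Gigabits per second needed if every weight is re-read each frame."""
    bits_per_frame = num_weights * bytes_per_weight * 8
    return bits_per_frame * frames_per_sec / 1e9

# Hypothetical mid-sized vision model: 25 million 1-byte weights, 30 fps.
print(f"{weight_bandwidth_gbps(25e6, 1, 30):.0f} Gb/s")
```

Scale the model up, raise the frame rate, or use wider weights, and the demand quickly outruns what one DRAM chip's interface can deliver, which is why designers resort to ganging several chips in parallel.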
Chip can learn on its own and do inference at 100-microwatt scale, company says at Arm Tech Con

At Arm Tech Con today, West Lake Village, Calif.-based startup Eta Compute showed off what it believes is the first commercial low-power AI chip capable of learning on its own using a type of machine learning called spiking neural networks. Most AI chips for use in low-power or battery-operated IoT devices have a neural network that has been trained by a more powerful computer to do a particular job. A neural network that can do what’s called unsupervised learning can essentially train itself: Show it a pack of cards and it will figure out how to sort the threes from the fours from the fives. Eta Compute’s third-generation chip, called TENSAI, also does traditional deep learning using convolutional neural networks. Potential customers already have samples of the new chip.
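The card-sorting example above is ordinary unsupervised clustering: given unlabeled values, the algorithm discovers the groups by itself. The toy sketch below uses plain 1-D k-means on invented "card" data; it is illustrative of the idea only and is not Eta Compute's spiking-network algorithm.

```python
def kmeans_1d(values, centers, iters=20):
    """Plain 1-D k-means: assign each value to its nearest center,
    then move each center to the mean of the values assigned to it."""
    for _ in range(iters):
        clusters = {c: [] for c in centers}
        for v in values:
            nearest = min(centers, key=lambda c: abs(c - v))
            clusters[nearest].append(v)
        centers = [sum(vs) / len(vs) if vs else c
                   for c, vs in clusters.items()]
    return sorted(centers)

# Noisy, unlabeled "threes, fours, and fives" from the pack of cards.
cards = [3, 3.1, 2.9, 4, 4.2, 3.9, 5, 5.1, 4.9]
print(kmeans_1d(cards, centers=[2.0, 4.0, 6.0]))
```

With no labels, the centers settle near 3, 4, and 5; the groups emerge from the data alone, which is the essence of a self-training network.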
TSMC is the big winner, having made them both

At an event today, Apple executives said that the new iPhone Xs and Xs Max will contain the first smartphone processor to be made using 7 nm manufacturing technology, the most advanced process node. Huawei made the same claim, to less fanfare, late last month, and it’s unclear who really deserves the accolades.
IBM’s new chip is designed to do both high-precision learning and low-precision inference across the three main flavors of deep learning

The field of deep learning is still in flux, but some things have started to settle out. In particular, experts recognize that neural nets can get a lot of computation done with little energy if a chip approximates an answer using low-precision math. That’s especially useful in mobile and other power-constrained devices. But some tasks, especially training a neural net to do something, still need precision. IBM recently revealed its newest solution, still a prototype, at the IEEE VLSI Symposia: a chip that does both equally well.
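To see why low-precision math is such a good trade for inference, here is a minimal sketch of uniform 8-bit quantization: values are snapped to 8-bit integers plus a single scale factor, using a quarter of the storage of 32-bit floats while staying close to the originals. The weights and scheme are invented for illustration and are not IBM's design.

```python
def quantize(xs, bits=8):
    """Uniform symmetric quantization to signed integers of the given width."""
    qmax = 2 ** (bits - 1) - 1          # 127 for 8 bits
    scale = max(abs(x) for x in xs) / qmax
    return [round(x / scale) for x in xs], scale

def dequantize(q, scale):
    """Recover approximate real values from integers and the scale."""
    return [v * scale for v in q]

weights = [0.42, -0.17, 0.91, -0.63, 0.05]  # made-up example values
q, scale = quantize(weights)
approx = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, approx))
print(q, f"max error {max_err:.4f}")
```

The worst-case error is half the scale step, small enough for inference; training, by contrast, accumulates many tiny gradient updates that such coarse steps would wipe out, which is why it still wants high precision.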