In the not-too-distant future, today's smartphone could look pretty dumb. At least if Google and its new partner, Movidius, a maker of processors for "low-power machine vision," have anything to say about it.

Under the partnership announced yesterday, the companies will work together to advance Google's vision of mobile devices with deep-learning capabilities. That means that the silicon-based smarts currently seen on cognitive computers like IBM's Watson could one day reside on the phone in a user's pocket.

Under the agreement, Google plans to bring the neural computation engine it currently runs on its own servers to Movidius' chip platform, which could eventually enable machine-learning capabilities on local, mobile devices. In exchange for using the San Mateo, California-based chipmaker's processors and software development environment, Google said it will "contribute to Movidius' neural network technology roadmap."
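For context, one standard trick for fitting server-trained models onto low-power chips like these is shrinking their weights through quantization. The sketch below is illustrative background only, not Google's or Movidius' actual pipeline:

```python
# A minimal sketch of weight quantization: converting 32-bit float
# weights to 8-bit integers so a model fits a low-power chip.
# Purely illustrative; not Google's or Movidius' actual method.
import numpy as np

def quantize_int8(weights):
    """Map float weights to int8 plus a scale factor for dequantization."""
    scale = np.abs(weights).max() / 127.0
    return np.round(weights / scale).astype(np.int8), scale

w = np.random.default_rng(1).normal(size=(256, 1024)).astype(np.float32)
w_q, scale = quantize_int8(w)
print(w.nbytes, "->", w_q.nbytes, "bytes")  # 4x smaller, cheaper to run
```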

'Machine Intelligence on Personal Devices'

One possible outcome of Google's collaboration with Movidius is a future of truly smart smartphones that could, for example, recognize faces and other images on sight and understand the meaning of different audio inputs like human speech. Google researchers have already used server-side neural networks to dramatically reduce transcription errors in the company's Google Voice and Project Fi phone services, for instance.

"What Google has been able to achieve with neural networks is providing us with the building blocks for machine intelligence Relevant Products/Services, laying the groundwork for the next decade of how technology will enhance the way people interact with the world," said Blaise AgĻ‹era y Arcas, who heads Google's machine intelligence group. "By working with Movidius, we're able to expand this technology beyond the data Relevant Products/Services center and out into the real world, giving people the benefits of machine intelligence on their personal devices."

In partnering with Movidius, Google will use the company's newest chip, the MA2450 (pictured above), which can perform complex neural-network computations at the processor rather than the server scale. The MA2450 is a visual processing unit, or VPU, a specialized cousin of the GPU (graphics processing unit); both are built for the kind of fast, parallel computation that is increasingly being used for machine learning as well as for gaming.
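To make that parallelism point concrete: a single neural-network layer is essentially one large matrix multiply, exactly the workload a VPU or GPU accelerates. A minimal sketch with illustrative names and sizes, using no Movidius APIs:

```python
# A single fully connected layer is thousands of independent
# multiply-accumulate operations that parallel hardware can run at once.
import numpy as np

def dense_layer(x, weights, bias):
    """One layer: a matrix multiply, a bias add, and a ReLU activation."""
    return np.maximum(weights @ x + bias, 0.0)

# Toy forward pass: a 1,024-value input through a 256-unit layer.
rng = np.random.default_rng(0)
x = rng.random(1024)
w = rng.random((256, 1024))
b = np.zeros(256)
features = dense_layer(x, w, b)
print(features.shape)  # (256,)
```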

Moving Processing to the Edge

Bringing machine-learning capabilities to smartphone scale opens up a vast array of new computing possibilities for daily life, said David Schubmehl, research director for Content Analytics, Discovery and Cognitive Systems at the analyst firm IDC.

"It makes a lot of sense," Schubmehl told us. For example, a smartphone with support for built-in deep learning could figure out from people's daily behavior patterns what times they prefer to wake up and what temperatures they like their thermostats set at, and then control their households Internet of Things-enabled devices to reflect those preferences -- all without having to send a single byte of data across the networks to distant servers for processing.

Such a smartphone "becomes a much more democratic player," enabling individuals to locally access the type of computing smarts currently available only through data centers in the cloud, Schubmehl said. "It's a logical progression to move more processing to the edge."

"The challenge in embedding this technology into consumer devices boils down to the need for extreme power efficiency, and this is where a deep synthesis between the underlying hardware architecture and the neural compute comes in," said Remi El-Ouazzane, CEO of Movidius, in a statement. He said his company's mission is to "bring visual intelligence to devices so that they can understand the world in a more natural way."

Image Credit: Movidius.