As we've discussed a number of times in the past, Google is keen on leveraging deep learning to accomplish some amazing things. Its latest venture could affect us in a very direct way: by making it easier to search for specific photos in our collections.
Object identification is already a big part of Google's business; it's why the company's search engine is so eerily accurate at times. It knows the difference between "small" and "large", between different types of very similar objects, between colors, and so forth. Through a partnership with Movidius, which is headquartered in San Mateo, California, Google will pack a special chip into some future smartphones that adds hardware acceleration for image recognition.
An example of deep-learning object detection, per Movidius
The ultimate goal here is to let people search through their photo collections much faster, which matters most for those who carry a large collection with them. That's not to say this is all for offline use: a mobile device could run detection right after a photo or video is taken, then send that metadata up to the cloud along with the files themselves. This could make Google's job easier, since the heavy lifting would be done on the mobile device rather than on its own servers.
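The flow described above can be sketched roughly as follows. Note that `detect_objects` here is a hypothetical stand-in for the hardware-accelerated model, not a real Movidius or Google API; the point is simply that labels are computed locally and ride along with the upload as metadata.

```python
# Sketch of the on-device tagging flow: a (hypothetical) local detector
# labels a photo, and the labels travel with the upload as metadata so
# the server never has to run the model itself.
import json

def detect_objects(image_bytes: bytes) -> list[str]:
    """Stand-in for an on-device, chip-accelerated vision model.

    A real implementation would hand image_bytes to dedicated
    hardware; here we just return fixed example labels.
    """
    return ["dog", "beach", "sunset"]

def build_upload(image_bytes: bytes) -> dict:
    """Bundle the photo with locally computed tags for the cloud."""
    labels = detect_objects(image_bytes)
    return {
        "photo_size": len(image_bytes),
        "metadata": json.dumps({"labels": labels}),
    }

payload = build_upload(b"\xff\xd8fake-jpeg-bytes")
print(json.loads(payload["metadata"])["labels"])  # → ['dog', 'beach', 'sunset']
```

The server-side search index can then be built from the pre-computed labels alone, without ever decoding the image.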
There are other potential uses for this technology, including instant face recognition (imagine simply looking at your phone to unlock it), assistance for blind users, and improved detection and translation of text.
Google hasn't confirmed when we'll see the first results of this partnership, but it seems very likely that this kind of technology will appear in its future top-end smartphones, such as the sequels to the Nexus 6P and 5X.