We’ve seen the new Pixels, the Pixelbook, and refreshed Daydream VR and Google Home speakers. The one device we’ve become particularly interested in is a small teal and white camera Google calls Clips.
It’s not an action camera, and it’s not a smartphone accessory. That last point is something Clips’ product lead Juston Payne emphasized to TechCrunch’s Frederic Lardinois.
“It’s an accessory to anything, I’d say. It’s a stand-alone camera. A new type of camera and insofar as that any digital camera has become an accessory to a computer or a phone, so too with this,” he said. “The reason for that comes back to the fact that the intelligence built into the device to decide when to take these shots, which is really important because it gives users total control over it.”
“Smart features” built into a camera? Sure, we’ve seen this before. But the implementation Google opts for with Clips feels like something new. The camera is a self-contained unit that captures images without you prompting it, runs pre-trained machine learning models to pick out the best shots, and surfaces the clips and photos it judges worth keeping. Intel’s Movidius Myriad 2 vision processing unit gives Clips on-board artificial intelligence processing, something Payne says wasn’t possible until fairly recently.
“The thing is that until really quite recently, you needed at least a desktop or you needed literally a server farm to take imagery in, run convolutional neural networks against them, do semantic analysis, and then spit something out,” Payne said.
To train the machine learning model, Google had to collect its own data: today’s artificial intelligence doesn’t automatically know that, say, a baby crawling is a moment worth capturing. Google had staff editors review footage and label the clips they preferred, and those labeled examples became the training material for the model.
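The pipeline described above boils down to a ranking step: a trained model assigns each captured moment a score, and only the top-rated clips are kept. As a minimal sketch, the toy Python below illustrates that selection step; the real product runs a convolutional neural network on the Movidius chip, whereas here the "model" is a hypothetical stand-in scoring function, and the clip names and data are invented for illustration.

```python
# Toy sketch of the selection step: score each captured moment with a
# learned model, then keep only the top-rated clips. The scoring function
# is a stand-in; Clips' actual model is a CNN trained on editor-labeled
# footage and runs on-device.

from typing import Callable


def select_best_clips(clips: dict[str, bytes],
                      score: Callable[[bytes], float],
                      keep: int = 3) -> list[str]:
    """Rank candidate clips by the model's score and keep the top `keep`."""
    ranked = sorted(clips, key=lambda name: score(clips[name]), reverse=True)
    return ranked[:keep]


# Hypothetical stand-in for the trained model: longer "footage" scores higher.
def fake_score(frames: bytes) -> float:
    return float(len(frames))


captured = {
    "cat_yawn.clip": b"x" * 40,
    "empty_room.clip": b"x" * 5,
    "baby_crawl.clip": b"x" * 90,
    "blurry_wall.clip": b"x" * 2,
}

print(select_best_clips(captured, fake_score, keep=2))
# → ['baby_crawl.clip', 'cat_yawn.clip']
```

The interesting design choice is that scoring happens entirely on the device, so only the already-selected winners ever reach the user's phone.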
The algorithm will also learn your preferences over time and get smarter about what it captures. But it’s not a camera for all situations: the AI can’t yet handle, say, a vacation somewhere exotic, though the plan is to support more scenarios in the future. For now, Google is marketing Clips to parents with kids and/or pets, specifically for home use. Since neither is easy to photograph, having a tiny cube take the photos and videos helps capture those magical moments, and you might even end up in the shot yourself. Of course, you can also manually trigger a recording with the lone button on the device.
But what about privacy, or having Clips record things you don’t want it to? As mentioned earlier, Clips is a self-contained unit: photos and videos never leave the device except to sync with the paired smartphone. It records soundless video, which sidesteps concerns about wiretap-related laws, and a built-in LED blinks while recording so you can tell when it’s in use.
The media Clips captures can be exported as videos, photos, or GIFs. It has a 12-megapixel sensor behind a 130-degree field-of-view lens, shoots at 15 frames per second, and offers 16GB of internal storage. It works with a handful of devices at the moment, including the Pixel, Samsung Galaxy S7 and S8, and iPhone 6 and up.
At US$249 (around P12,700), it’s not a cheap camera, but new tech often commands a premium. We’re waiting to see how the public reacts. Clips offers a glimpse of what photography could look like in an age where artificial intelligence dominates more and more of the conversation.