Cameras native to the Internet of Things

Those are spooky.

Do they even need lenses? Probably not.

by Stacey Higginbotham

(...)

There are two challenges driving the silicon shift. First, processing power: Many of these cameras try to identify specific objects by using machine learning. For example, an oil company might want a drone that can identify leaks as it flies over remote oil pipelines. Typically, training these identification models is done in the cloud because of the enormous computing power required. Some of the more ambitious chip providers believe that in a few years, not only will edge-based chips be able to match images using these models, but they will also be able to train models directly on the device.
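To see the division of labor, consider a rough sketch of on-device inference with TensorFlow Lite: a model trained in the cloud is exported as a compact file, and the camera's chip only runs it against each frame. The model file, label list, and frame handling below are placeholders for illustration, not any vendor's actual pipeline.

```python
# Minimal sketch of edge inference: the model is trained in the cloud, then
# shipped to the device as a compact .tflite file and only *run* here.
# "leak_detector.tflite" and the label list are hypothetical placeholders.
import numpy as np
from tflite_runtime.interpreter import Interpreter

LABELS = ["no_leak", "possible_leak"]   # assumed classes for this example

interpreter = Interpreter(model_path="leak_detector.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(frame: np.ndarray) -> str:
    """Classify one camera frame already sized to the model's input shape."""
    batch = frame[np.newaxis].astype(inp["dtype"])   # add batch dimension
    interpreter.set_tensor(inp["index"], batch)
    interpreter.invoke()                             # inference runs on-device
    scores = interpreter.get_tensor(out["index"])[0]
    return LABELS[int(np.argmax(scores))]
```

The heavy lifting, fitting the model, happens elsewhere; the device only evaluates it, which is what today's edge chips are built for. Training on the device itself is the harder step the chipmakers are still working toward.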

That’s not happening yet, due to the second challenge that silicon providers face. Comparing images with models requires not just computing power but actual power. Silicon providers are trying to build chips that sip power while still doing their job. Qualcomm has one such chip, called Glance, in its research labs. The chip combines a lens, an image processor, and a Bluetooth radio on a module smaller than a sugar cube.

Glance can manage only three or four simple models, such as identifying a shape as a person, but it can do so using less than 2 milliwatts of power. Qualcomm hasn’t commercialized this technology yet, but some of its latest computer-vision chips combine on-chip image processing with an emphasis on reducing power consumption.
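To put 2 milliwatts in perspective, here is a back-of-the-envelope calculation against a common coin-cell battery. The 3-volt, 240-milliamp-hour cell is an assumption for the sake of arithmetic, not a Qualcomm specification.

```python
# Rough energy budget: how long could a ~2 mW always-on vision chip run from
# a small battery? The CR2032-class cell (3 V, 240 mAh) is an assumption.
battery_voltage_v = 3.0
battery_capacity_mah = 240
chip_power_mw = 2.0

energy_mwh = battery_voltage_v * battery_capacity_mah   # 720 mWh of stored energy
runtime_hours = energy_mwh / chip_power_mw               # 360 hours
print(f"Roughly {runtime_hours:.0f} hours, or about {runtime_hours / 24:.0f} days")
# -> Roughly 360 hours, or about 15 days
```

A budget of a couple of milliwatts is what separates a camera you recharge every night from one you can stick on a wall and forget about for weeks.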

But does a camera even need a lens? Researchers at the University of Utah suggest not, having invented a lensless camera that does away with some of a traditional camera’s hardware and its high data rates. Their camera is simply a photodetector set against a pane of plexiglass; it captures basic images and converts them into shapes a computer can be trained to recognize.
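One way to picture recognition without a lens is a toy classifier trained directly on raw sensor readings rather than on a focused image. Everything below, the synthetic data, the number of sensor elements, the two classes, is invented for illustration and is not the Utah group's actual method or data.

```python
# Toy illustration of lensless recognition: train a classifier directly on
# raw, scrambled sensor readings instead of a focused image.
# The data is synthetic; it stands in for photodetector measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_sensors = 1000, 64          # 64 hypothetical photodetector elements

# Two "scenes" (say, person present vs. absent), each producing a
# characteristic but noisy pattern of scattered light on the sensor.
patterns = rng.normal(size=(2, n_sensors))
labels = rng.integers(0, 2, size=n_samples)
readings = patterns[labels] + 0.5 * rng.normal(size=(n_samples, n_sensors))

X_train, X_test, y_train, y_test = train_test_split(readings, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

No picture a human would recognize is ever formed; the classifier learns to tell the scattered-light patterns apart, which is all many IoT applications need.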