*Because there's plenty to boast about.
The device formerly known as the camera
(...)
There is, of course, the vacation beach pic, the kid's winter recital, and the one (or ten) obligatory goofy selfies. But there's also the book that caught my eye at a friend's place, the screenshot of an insightful tweet, and the tracking number on a package.
As our phones go everywhere with us and storage becomes cheaper, we're taking more photos of more types of things. We're still capturing sunsets and selfies, but an estimated 10 to 15 percent of the photos we take are of practical things like receipts and shopping lists.
To me, using our cameras to help us with our day-to-day activities makes sense at a fundamental human level. We are visual beings—by some estimates, 30 percent of the neurons in the cortex of our brain are for vision. Every waking moment, we rely on our vision to make sense of our surroundings, remember all sorts of information, and explore the world around us.
The way we use our cameras is not the only thing that’s changing: the tech behind our cameras is evolving too. As hardware, software, and AI continue to advance, I believe the camera will go well beyond taking photos—it will help you search what you see, browse the world around you, and get things done.
That's why we introduced Google Lens last year as a first step in this journey. Last week, we launched a redesigned Lens experience across Android and iOS, and brought it to iOS users via the Google app.
I've spent the last decade leading teams that build products that use AI to help people in their daily lives, through Search, Assistant, and now Google Lens. I see the camera opening up a whole new set of opportunities for information discovery and assistance. Here are just a few that we're addressing with Lens...