PlaNet—Photo Geolocation with Convolutional Neural Networks

*The registration and geolocativity aspects here are plenty interesting.

*"Deep learning neural nets" work gangbusters, but nobody ever knows exactly HOW their "neurons" will "deep-learn" a data set, which is plenty weird. It's kinda like hiring a Chief Technology Officer because you compared ten thousand candidates for the job and figured out which one was the luckiest.

https://www.technologyreview.com/s/600889/google-unveils-neural-network-with-superhuman-ability-to-determine-the-location-of-almost/

(…)

"An interesting question is how PlaNet performs so well without being able to use the cues that humans rely on, such as vegetation, architectural style, and so on. But Weyand and co say they know why: "We think PlaNet has an advantage over humans because it has seen many more places than any human can ever visit and has learned subtle cues of different scenes that are even hard for a well-traveled human to distinguish.”

"They go further and use the machine to locate images that do not have location cues, such as those taken indoors or of specific items. This is possible when images are part of albums that have all been taken at the same place. The machine simply looks through other images in the album to work out where they were taken and assumes the more specific image was taken in the same place.

"That’s impressive work that shows deep neural nets flexing their muscles once again. Perhaps more impressive still is that the model uses a relatively small amount of memory unlike other approaches that use gigabytes of the stuff. “Our model uses only 377 MB, which even fits into the memory of a smartphone,” say Weyand and co…."