AI predicts geolocation from photos: new type of privacy breach

The latest hot topic of conversation among our staff at Waltzer is the intriguing news a few of us heard on NPR about a project known as Predicting Image Geolocations (or PIGEON, for short), designed by three Stanford graduate students.

And, perhaps unsurprisingly, the AI, trained on images from Google Street View, was able to identify the correct country 95% of the time and usually place the location within about 25 miles of the actual site.

PIGEON recently competed against Trevor Rainbolt, a top-ranked player of GeoGuessr, and won.

Of course, as a civil liberties advocate in the NPR story pointed out, this technology can expose information about individuals that they never intended to share when they posted a photo.

The reporter on the NPR story tested the system on five personal photos from a trip years ago, none of which had been published online. Some were taken in cities, but others were in nature, nowhere near roads or other easily recognizable landmarks.

It guessed a campsite in Yellowstone to within around 35 miles of the actual location. The program placed another photo, taken on a street in San Francisco, to within a few blocks.

Apparently indoor photos remain difficult for the AI to guess, but we can see that changing. I’m sure power outlets, types of furniture, and the appearance of walls and floors could be used by the next generation of this tech to master the indoor photo prediction game too.

Location prediction is another head-spinner for AI’s impact on society

The core ethical principles of privacy, autonomy and consent are turned upside down unless a responsible AI approach guides the introduction of this capability into the world.

That is a very big burden on whichever startup or tech firm ends up pushing a version of this out.

The implications are dizzying and unforeseeable. Given the sheer number of photos already out there on social media, might this technology destroy social media as we know it? One wonders how long people will remain comfortable sharing photos or videos the way they do now.

When location is potentially knowable in this way, it may even impact willingness to go to certain places.

Then there are fears about stalking, harassment, property security, social engineering and manipulation, and the security of essential or dangerous infrastructure whose locations should be kept secret from malicious actors.

Excessive privacy violations are unsustainable as a business model

Here at Waltzer, we think this is an example of a new application of AI that could easily be introduced carelessly, trigger blowback, and end with governments heavily restricting it or even banning it outright.

Privacy laws could be harnessed against this type of AI app in most major countries. And if existing laws aren’t up to the task, the calls for tougher ones will come loud and fast.

The EU’s GDPR already gives people the right to control information related to their location. The proposed EU AI Act could also serve, in the future, as the basis for banning an application of AI that creates widespread indignation in the general community.

On the other hand, a smart company with AI safety as a priority could avoid this fate through a rigorous responsible AI impact assessment, which could help drive a sustainable growth strategy: perhaps profitable applications in specialized fields first, followed by wider releases later, once a fully thought-out approach is ready.

It’s certainly an exciting time to be in the AI space but also a time that calls for those pushing the boundaries to think carefully about the impact of the work they do.
