A somewhat concerning new trend is going viral: People are using ChatGPT to determine the location shown in pictures. This week, OpenAI unveiled its latest AI models, o3 and o4-mini, which can “reason” through uploaded images. These models can crop, rotate, and zoom in on photos, including blurry and distorted ones, to analyze them thoroughly.
The ability of these models to analyze images, along with their capacity to search the web, creates a powerful location-finding tool. Users quickly discovered that o3, in particular, is adept at deducing cities, landmarks, restaurants, and bars from subtle visual clues.
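
To illustrate the mechanics, here's a minimal sketch of how a developer might send a photo to one of these models through OpenAI's Python SDK and ask it to guess the location. The file name and prompt are our own placeholders, and the example assumes your API account has access to the model:

```python
import base64

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode a local photo as a base64 data URL, the format the
# Chat Completions API accepts for inline image input.
with open("bar_photo.jpg", "rb") as f:  # placeholder file name
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="o3",  # assumes your account has access to this model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Play GeoGuessr: where was this photo taken?"},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
            },
        ],
    }],
)

print(response.choices[0].message.content)
```
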
In many instances, the models don’t appear to be drawing on memories of past ChatGPT conversations or on EXIF data, the metadata attached to a photo that can reveal where it was taken. Users on X have shared restaurant menus, neighborhood snapshots, building facades, and selfies, instructing o3 to play “GeoGuessr,” an online game that challenges players to guess locations from Google Street View images.
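
For what it's worth, EXIF metadata is easy to inspect yourself. Here's a short sketch using the Pillow library that reports whether an image file carries GPS coordinates in its EXIF block; the file name is a placeholder. Many platforms strip this metadata on upload, which is part of why the models' purely visual deductions stand out:

```python
from PIL import Image  # pip install Pillow

GPS_IFD_TAG = 0x8825  # EXIF tag that points at the GPS sub-directory


def has_gps_exif(path: str) -> bool:
    """Return True if the image's EXIF block contains any GPS fields."""
    with Image.open(path) as img:
        gps = img.getexif().get_ifd(GPS_IFD_TAG)
        return len(gps) > 0


print(has_gps_exif("vacation_photo.jpg"))  # placeholder file name
```
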

There’s an evident privacy concern with this trend. A bad actor could screenshot a person’s Instagram Story and use ChatGPT to try to figure out where they are. And this was possible even before the launch of o3 and o4-mini: when we tested o3 against GPT-4o, an older model without image-reasoning capabilities, both models often arrived at the same correct answer, and GPT-4o was frequently faster.
That said, during our testing, o3 did identify one place that GPT-4o couldn’t. Given a picture of a purple, mounted rhino head in a dimly lit bar, o3 correctly pegged the bar as a Williamsburg speakeasy, while GPT-4o incorrectly guessed a U.K. pub.
Although o3 isn’t flawless and its location deductions are sometimes wrong, the trend highlights the risks posed by more capable reasoning models. ChatGPT appears to have few safeguards against this kind of reverse location lookup, and OpenAI doesn’t address the issue in its safety report for o3 and o4-mini.
We have contacted OpenAI for comment and will update this piece if they respond.