ChatGPT Can Make Fake Receipts So Good, They're Dangerous For Shops and Employers
The popularity of ChatGPT's Ghibli-style AI image generator raises serious privacy concerns

The rise of sophisticated AI has brought many conveniences but also presents new challenges. One concerning development is ChatGPT's ability to generate remarkably convincing fake receipts. This capability poses a significant danger to shops and employers, potentially leading to financial losses and fraudulent activities.
ChatGPT introduced a new image generator this month within its 4o model, which excels at rendering legible text inside images. People are already using it to create fake restaurant receipts, potentially expanding the already broad arsenal of AI deepfakes used by fraudsters.
ChatGPT's New Trick
Venture capitalist and prominent social media user Deedy Das shared on X an image of a fake receipt from a real San Francisco steakhouse, saying it was made with 4o. Others reproduced similar results, with one even adding food and drink stains to boost its believability.
You can use 4o to generate fake receipts.
— Deedy (@deedydas) March 29, 2025
There are too many real world verification flows that rely on “real images” as proof. That era is over. pic.twitter.com/9FORS1PWsb
I think in the original image the letters are too perfect and they don’t bend with the paper. They look like hovering above the paper. Here is my attempt to make it more realistic. Let me know what you think. pic.twitter.com/EixRSHubeY
— Michael Gofman (@michaelgofman) March 29, 2025
The most convincing example, first highlighted by TechCrunch, actually came from France, where a LinkedIn user posted a crumpled, AI-generated receipt for a local restaurant group. TechCrunch put 4o to the test and produced a fake bill for an Applebee's in San Francisco, though the result contained a few obvious giveaways.
The total uses a comma instead of a period as the decimal separator, and the itemised amounts don't sum correctly. Large language models (LLMs) still struggle with basic arithmetic, so this isn't especially surprising.
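That arithmetic failure is exactly the kind of inconsistency an automated expense system could flag. The sketch below is a hypothetical check (the function name and figures are invented for illustration, not drawn from any real expense tool): it simply verifies that a receipt's line items, tax, and tip add up to the printed total.

```python
def receipt_adds_up(line_items, tax, tip, printed_total, tolerance=0.01):
    """Return True if the itemised amounts match the printed total
    within a small tolerance for rounding."""
    expected = sum(line_items) + tax + tip
    return abs(expected - printed_total) <= tolerance

# A receipt whose numbers are internally consistent passes the check;
# a fabricated one with a mismatched total fails it.
items = [24.00, 58.00, 14.50]
print(receipt_adds_up(items, tax=8.20, tip=18.00, printed_total=122.70))  # consistent
print(receipt_adds_up(items, tax=8.20, tip=18.00, printed_total=131.99))  # inconsistent
```

A check like this catches sloppy fakes, but not a fraudster who takes the extra step of making the numbers add up, which is why provenance metadata matters as a second line of defence.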
The Danger Of AI-Generated Fake Bills
However, it wouldn't take much effort for a fraudster to correct a few figures, whether with image-editing tools or with more specific prompts. Clearly, the ease of creating fake receipts opens up significant avenues for fraud; it isn't hard to imagine scammers using this technology to claim reimbursement for entirely invented expenses.
OpenAI spokesperson Taya Christianson told TechCrunch that all of its images contain metadata indicating ChatGPT generated them. Christianson added that OpenAI 'takes action' when users break its rules and that it is 'always learning' from real-world use and feedback.
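The metadata Christianson refers to lives in dedicated blocks inside the image file that provenance tools can inspect. As a rough illustration (a stdlib-only sketch with a hypothetical function name, not a real provenance verifier), the snippet below walks a PNG file's chunks and surfaces any plain-text `tEXt` metadata; actual provenance manifests such as C2PA use more elaborate, cryptographically signed containers, so this only shows the general principle.

```python
import struct

def png_text_chunks(data: bytes) -> dict:
    """Parse a PNG byte stream and return its tEXt chunks as key/value
    pairs -- a crude way to surface embedded textual metadata."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    chunks, pos = {}, 8
    while pos < len(data):
        # Each chunk: 4-byte length, 4-byte type, body, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length
    return chunks
```

Stripping or re-encoding an image can discard such metadata, which is one reason embedded provenance data alone cannot guarantee detection of AI-generated fakes.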
When TechCrunch asked OpenAI why ChatGPT allows the generation of fake receipts, given its policy against fraud, Christianson explained that the company's 'goal is to give users as much creative freedom as possible.'
Moreover, she suggested that AI-generated receipts could have legitimate, non-fraudulent applications, such as 'teaching people about financial literacy,' creating original art, and developing product advertisements.
Beyond fabricated financial documents, the ability of AI image generators such as ChatGPT, and the newly accessible Grok 3, to create Ghibli-inspired art is also raising alarms, particularly over the risks users take when uploading personal photographs.
Ghibli Filter, Real Privacy Risks
Since OpenAI released ChatGPT's Ghibli-inspired image generator last week, it has become a social media sensation. From celebrities to ordinary users, everyone seems to be showing off AI-generated likenesses in the distinctive style of Studio Ghibli master Hayao Miyazaki. Not everyone is comfortable with the trend, however.
Digital privacy advocates on social media platform X are raising concerns, suggesting that OpenAI might be leveraging this popular trend to collect vast quantities of personal images for AI training purposes, potentially bypassing standard data protection regulations that apply to web-scraped data, according to recent reports.
While users enjoy the feature, critics caution that they may be unknowingly handing fresh facial data to OpenAI, raising serious concerns about their personal information.
This craze has brought to light ethical questions regarding artificial intelligence tools trained using copyrighted creative content and the implications for the future earnings of human artists. Hayao Miyazaki, the visionary behind Studio Ghibli, has previously voiced doubts about AI's place in animation.
Data Collection In Disguise?
These advocates, however, point out that OpenAI's data-gathering method goes beyond an AI copyright controversy, per Just Think. In their view, it lets the company obtain willingly provided pictures, circumventing the legal limitations that govern data scraped from the web.
🚨 Most people haven't realized that the Ghibli Effect is not only an AI copyright controversy but also OpenAI's PR trick to get access to thousands of new personal images; here's how:
— Luiza Jarovsky (@LuizaJarovsky) March 29, 2025
To get their own Ghibli (or Sesame Street) version, thousands of people are now voluntarily… pic.twitter.com/zBktscNOSh
In a detailed post on X, Luiza Jarovsky, co-founder of the AI, Tech & Privacy Academy, explained that when individuals voluntarily upload these pictures, they consent to OpenAI processing them (Article 6.1.a of the GDPR). That establishes a distinct legal basis giving OpenAI greater latitude and removing the need for a legitimate-interest assessment.
'Moreover, OpenAI's privacy policy explicitly states that the company collects personal data input by users to train its AI models when users haven't opted out,' she added.
Originally published on IBTimes UK
© Copyright 2025 IBTimes UK. All rights reserved.