🎉 Join Gate.io's 15-day Thanksgiving Posting Challenge and win a Share of $2,000 Rewards!
To Celebrate Thanksgiving! Gate.io is launching a 15-day Posting Challenge! Join Gate Post to win a share of $2,000. There’s also an exclusive merch for Gate Post Ambassadors!
🔎 To join:
Click the form in the
Ah, what? AI stole my Wallet?
Author: Azuma, Odaily Planet Daily
On the morning of November 22nd Beijing time, SlowMist founder Yu Xian posted a bizarre case on his personal X — a certain user's Wallet was 'hacked' by AI...
The background of this case is as follows.
In the early hours of this morning, X user r_ocky.eth revealed that he had previously hoped to use ChatGPT to carry a pump.fun-assisted trading bot.
r_ocky.eth gave his requirements to ChatGPT, and ChatGPT returned a piece of code to him. This code can indeed help r_ocky.eth deploy a bot that meets his requirements, but he never expected that there would be a hidden phishing content in the code - r_ocky.eth linked his main wallet and lost $2500 as a result.
Judging from the screenshot posted by r_ocky.eth, the code given by ChatGPT will send an AddressPrivate Key to a phishing API website, which is also the direct cause of the theft.
When r_ocky.eth stepped on the trap, the attacker reacted very quickly, and within half an hour, all the assets in the r_ocky.eth Wallet were transferred to another Address (FdiBGKS8noGHY2fppnDgcgCQts95Ww8HSLUvWbzv1NhX), and then r_ocky.eth The address (2jwP4cuugAAYiGMjVuqvwaRS2Axe6H6GvXv3PxMPQNeC) suspected to be the attacker's main wallet was found through on-chain tracing.
According to on-chain information, the address has collected more than $100,000 in "stolen money", so r_ocky.eth suspects that this type of attack may not be an isolated case, but an attack of a certain scale.
After the incident, r_ocky.eth disappointedly said that it had lost trust in OpenAI (the company that developed ChatGPT) and called on OpenAI to start cleaning up the anomalous phishing content as soon as possible.
So, as the most popular AI application today, why does ChatGPT provide fishing content?
In response, Yu Xian characterized the incident as an 'AI poisoning attack' and pointed out the widespread deceptive behavior in ChatGPT, Claude, and other LLMs.
The so-called "AI poisoning attack" refers to the behavior of intentionally sabotaging AI training data or manipulating AI algorithms. The attackers may be insiders, such as dissatisfied current or former employees, or external hackers whose motives may include causing reputation and brand damage, tampering with the credibility of AI decisions, slowing down or disrupting AI processes, etc. Attackers can distort the learning process of the model by planting data with misleading tags or features, leading to erroneous results when the model is deployed and run.
Combined with this incident, the reason why ChatGPT provided the phishing code to r_ocky.eth is most likely because the AI model was contaminated with the data with phishing content at the time of training, but the AI seems to have failed to identify the phishing content hidden under the regular data, and the AI learned it and then provided the phishing content to the user, which caused the incident.
With the rapid development and widespread adoption of AI, the threat of 'poisoning attacks' has become increasingly significant. In this case, although the absolute amount of loss is not large, the extended impact of such risks is enough to raise concerns—assume it happens in other areas, such as AI-assisted driving...
When replying to a netizen's question, Yu Xian mentioned a potential measure to mitigate such risks, which is to add some kind of code review mechanism by ChatGPT.
The victim, r_ocky.eth, also said that he had contacted OpenAI about the matter, and although he has not received a reply for the time being, he hopes that this case can be an opportunity for OpenAI to take such risks seriously and propose potential solutions.