Dark side of the boom: How hackers are vibing with AI

If vibe coding is the cool kid of AI, vibe hacking is emerging as its sinister twin. Cybercriminals are manipulating the behaviour of AI models with plain-language prompts to launch sophisticated ransomware attacks.

AI model developer Anthropic revealed that its coding model Claude Code was recently misused for personal data theft across 17 organisations, with attackers seeking to extort nearly $500,000 from each victim. So-called Evil LLMs (large language models), AI toolkits purpose-built for cyberfraud such as FraudGPT and WormGPT, are now available…
