Monday, May 20, 2024
How ChatGPT—and Bots Like It—Can Spread Malware

The AI landscape has begun to advance much faster: consumer-facing tools like Midjourney and ChatGPT can now generate impressive images and text in seconds from natural-language prompts, and we're watching them get deployed everywhere from web searches to children's books.

However, these AI applications are also being turned to more nefarious uses, including spreading malware. Take the traditional scam email: it's usually full of obvious grammar and spelling mistakes, mistakes that the latest generation of AI models don't make, as noted in a recent advisory report from Europol.

Think about it: a lot of phishing attacks and other security threats rely on social engineering, tricking users into revealing passwords, financial information, or other sensitive data. The persuasive, authentic-sounding text these scams need can now be generated fairly easily, with little to no human effort, and endlessly varied and refined for specific audiences.

In the case of ChatGPT, it's important to note first that developer OpenAI has built safeguards into it. Ask it to "write malware" or a "phishing email" and it will tell you that it is "programmed to adhere to strict ethical guidelines that prohibit me from engaging in any malicious activities, including writing or assisting with the creation of malware."

ChatGPT won’t code malware for you, but it’s polite about it.

OpenAI via David Nield

However, it's not too hard to get around these protections: ChatGPT can certainly code, and it can certainly write email. Even if it doesn't know it's writing malware, it can be prompted into generating something very much like it, and there are already signs that cybercriminals are working to evade the safeguards.

We're not picking on ChatGPT specifically here, but pointing out what becomes possible once a large language model (LLM) like it is put to more sinister use. In fact, it's not hard to imagine criminal organizations developing their own LLMs and similar tools to make their scams more believable. And it's not just text: audio and video are harder to fake, but that's happening too.

Whether it's your boss asking for an urgent report, company tech support asking you to install a security patch, or your bank notifying you about a problem you need to respond to, these potential scams all rely on building trust and sounding genuine, and that's something AI bots are getting very good at. They can produce natural-sounding text, audio, and video tailored to specific audiences, and they can do it quickly and continuously, on demand.