When AI becomes a fraudster: the tricks you should know

ChatGPT

Generative AI makes cheating with text elements easier.

(Photo: IMAGO/Panama Pictures)

Berlin. Is the email that just arrived in your inbox a cyber attack? The chatbot ChatGPT makes this question harder to answer, because fraudsters can use it to create so-called phishing emails designed to obtain personal data, credit card information or passwords from users.

A study by the cyber security provider Sosafe shows that one in five recipients clicks on phishing emails created with the help of artificial intelligence. Significantly more, namely 78 percent, fail to unmask the phishing mail right away. And the danger is growing: according to the study, generative language models such as ChatGPT allow hackers to compose phishing emails much faster.

ChatGPT makes text-based scams easier. “Where scammers used to fail at the language hurdle because their texts were grammatically poor, or because they lacked the capacity to write texts at scale, ChatGPT now takes this work off their hands or makes it much easier,” says text forensics expert Inna Vogel of Fraunhofer SIT.

ChatGPT also helps with bot spoofing

Phishing emails created with the help of ChatGPT are by no means the only scam that artificial intelligence has made easier. Fraudsters can also use fake customer-service bots to trick users into believing they are communicating with a site’s official support, Vogel warns.

“In this way, they can try to collect payment for non-existent services or get users to download malware,” Vogel says, describing possible scenarios.


With the appropriate tools, ChatGPT can also help create fake news sites. Fraudsters who want to earn money through advertising revenue, for example, can have ChatGPT generate texts on a wide range of topics. “Text-to-image AI models like Stable Diffusion can be used to generate fake images,” says Vogel. Much time can pass before such fakes are recognized, during which the images can spread widely on social media.

Generative AI can also help mimic a user’s writing style, a scam that is hard to spot. “A reputation can be damaged by spreading messages that contradict a person’s views, beliefs or standards,” Vogel explains. Fake quotes could be attributed to a politician, for example, with serious consequences for their political career or even their private life.

How can you protect yourself from scams?

So far, there is no application that can reliably protect against AI-generated scams. “Common sense and caution are therefore the first line of defense, especially when personal or confidential information or money is involved,” says Vogel. It is always important to verify the authenticity of a communication, especially if it seems suspicious or contains unusual requests.

Users are advised not to click on suspicious links or download files from unknown sources. Those who do their own research protect themselves most effectively, Vogel advises: if content does not match reports from independent sources, it may be a fake report.
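The advice about suspicious links can be illustrated with two classic phishing tells: a link whose visible text names a trusted site while the actual URL points elsewhere, and a link whose target is a raw IP address instead of a domain name. The following is a minimal sketch of such heuristics, not a tool mentioned in the article; the `TRUSTED_DOMAINS` set and the function name are hypothetical examples.

```python
import re
from urllib.parse import urlparse

# Hypothetical list of domains the user actually does business with.
TRUSTED_DOMAINS = {"example-bank.com"}

def link_looks_suspicious(display_text: str, href: str) -> bool:
    """Flag a link when its visible text suggests a trusted domain but the
    target URL points somewhere else, or when the target is a raw IP."""
    host = urlparse(href).hostname or ""
    # A raw IP address instead of a domain name is a classic phishing sign.
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        return True
    # Display text mentions a trusted domain, but the link goes elsewhere.
    for domain in TRUSTED_DOMAINS:
        if domain in display_text.lower() and not host.endswith(domain):
            return True
    return False
```

Heuristics like these catch only the crudest tricks; as the article notes, they are no substitute for verifying suspicious messages against independent sources.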

