A nightmare becoming reality: an artificial intelligence tool that makes it easier for hackers to steal information | Israel Hayom

WormGPT is a brand-new chatbot that was recently made available on hacking forums as a tool for forging emails reliably and easily. Its author even described it as “ChatGPT’s biggest enemy, which allows you to do all kinds of illegal things.” The fact that WormGPT operates outside any predetermined ethical constraints, and makes it simple for hackers to mount such attacks without having to deal with the technical side themselves, highlights the risk posed by generative AI.

In recent months, ChatGPT has truly taken off, attracting millions of users from all over the world who utilise the AI-powered chatbot for both lighthearted and serious conversations. A recent report claims that some people have chosen to exploit the chatbot’s popularity and steer it in the wrong direction.

According to research published on the SlashNext website, a new generative AI tool known as WormGPT has been used to defraud firms and launch digital attacks against them by making information theft simple.

Illustration of a hacker (Reuters)

Security expert Daniel Kelly noted that the programme “presents itself as an alternative to ChatGPT for hackers, designed for malicious purposes.” Cybercriminals can use this technology to generate personalised bogus emails automatically, boosting the likelihood that an attack will succeed. The tool’s author even described it as “ChatGPT’s biggest enemy, which allows you to do all kinds of illegal things.”

WormGPT’s ability to operate outside of predetermined ethical constraints underscores the danger posed by generative AI, which makes it simple for hackers to conduct such attacks without having to deal with the technical complexities themselves. “Even cybercriminals with limited capabilities can use this technology,” Kelly said.
