Private Test Reveals ChatGPT's Ability to Create Dangerous Malware

Video Credit: Wibbitz Top Stories
Published on April 13, 2023 - Duration: 01:31s

Fox News reports that ChatGPT continues to cause controversy in the tech world, as a user claims to have created powerful data-mining malware.

Aaron Mulgrew, a security researcher at Forcepoint, shared how OpenAI's generative chatbot can be used to create malware.

The researcher used a loophole to bypass ChatGPT's safeguards, which are meant to prevent people from using the AI to write malware code.

According to Mulgrew, after having the chatbot generate the code in separate pieces, he was able to combine the functions into an undetectable data-stealing executable.

Fox News reports that the creation of this sophisticated malware was accomplished without a team of hackers, and Mulgrew himself didn't have to write a single line of code.

Mulgrew's test, which was conducted privately and not publicly released, highlights the dangers posed by ChatGPT.

Fox News reports that Mulgrew, who claims not to have any advanced coding experience, was able to bypass ChatGPT's security measures with a simple test.

The malware that Mulgrew created scans files on an infected device for any data that could be stolen.

The program then breaks the data into smaller pieces, hides them inside image files, and uploads the images to a Google Drive folder.

Fox News reports that, using simple prompts, Mulgrew had ChatGPT strengthen and refine the code so that it could evade detection.

