A recent blog post by Malwarebytes’ Mark Stockley shows that the GPT-4 version of ChatGPT can be used to write code for ransomware.
Stockley walked GPT-4 through writing C code for ransomware step by step, despite never having written a line of C before. This is one of several demonstrations showing that large language models can be used to write malicious code.
And as LLMs continue to progress, they will be able to accomplish more advanced tasks—both good and bad.
However, there is more to malware than creating a payload. Read my latest article in TechTalks to see what you should (and shouldn’t) worry about.
It will be an interesting balance of form vs. function. I was following a thread on Twitter today of devs who are leaving GPT because of the guardrails that have been put up, presumably to prevent misuse, which are causing headaches for legitimate development. Same as it ever was.