The developer of WormGPT is selling access to the chatbot, which can help hackers create malware and phishing attacks, according to email security provider SlashNext.
WormGPT Is a ChatGPT Alternative With 'No Ethical Boundaries or Limitations'
A scary possibility with AI malware would be a virus that monitors the internet for news articles about itself and modifies its code accordingly. Instead of needing to contact a command-and-control server for the malware author to change its behavior, each agent could independently and automatically change its strategy to evade security researchers.
The limiting factor is pre-existing information. It's great at retrieving obscure information and even remixing it, but it can't really imagine totally new things. Plus, white hats would also have LLMs to find vulnerabilities. I think it's easier to detect vulnerabilities based on known, existing techniques than it is to invent totally new ones.