AI tools are now popular, and everyone is enamored with them, including hackers. According to a new report from Facebook parent company Meta, the firm’s security team is on the lookout for new malware risks, especially those that weaponize the current AI trend.
“Over the past several months, we’ve investigated and taken action against malware strains taking advantage of people’s interest in OpenAI’s ChatGPT to trick them into installing malware pretending to provide AI functionality,” Meta writes in a new security report released by the company.
Meta claims to have uncovered “around ten new malware families” that pose as AI chatbot tools, such as OpenAI’s popular ChatGPT, in order to break into users’ accounts.
According to Meta, one of the more urgent schemes is the spread of malicious web browser extensions that appear to offer ChatGPT capabilities. Users download these extensions for Chrome or Firefox, for example, hoping to use an AI chatbot. Some of these extensions actually work and provide the advertised chatbot functionality. However, they also contain malware that can gain access to the user’s device.
Meta claims to have detected over 1,000 unique URLs offering malware disguised as ChatGPT or other AI-related applications and has blocked them from being shared on Facebook, Instagram, and WhatsApp.
According to Meta, once a user installs the malware, bad actors can launch an attack quickly, and they are continually changing their tactics to circumvent security mechanisms. In one case, attackers were able to quickly automate the process of taking over business accounts and granting themselves advertising permissions.
Meta says it has reported the malicious links to the numerous domain registrars and hosting providers that these illicit actors use.
Meta’s security researchers go into the more technical details of contemporary malware strains, such as Ducktail and NodeStealer, in the full report, which can be read in its entirety.