Researchers at HP have discovered that attackers are using generative AI tools to write malicious code targeting French-speaking victims. The campaign delivers AsyncRAT, a remote access trojan that lets attackers view victims' screens and record their keystrokes, HP threat researchers reported on Thursday. The malware's scripts were written in VBScript and JavaScript, and several traits strongly suggest AI involvement: the structure of the scripts, comments explaining each line of code, and the choice of native-language function and variable names.
According to HP's threat security team, attackers are leveraging AI to lower the barrier to cybercrime. The report also notes a rise in ChromeLoader campaigns, which lure victims to fake PDF-converter tools through malicious advertising built around popular search keywords, and in malware smuggled inside vector images in SVG format, which can execute embedded malicious code when the image is viewed.
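SVG smuggling works because SVG is an XML format that browsers render, and the format permits embedded `<script>` elements and inline event handlers that run when the image is opened in a browser. As an illustrative sketch (not taken from the HP report), a defender could flag SVG files carrying such executable content like this:

```python
import xml.etree.ElementTree as ET

def svg_contains_script(svg_text: str) -> bool:
    """Return True if the SVG markup embeds a <script> element or an
    inline event-handler attribute (e.g. onload), either of which can
    execute code when the image is viewed in a browser."""
    root = ET.fromstring(svg_text)
    for elem in root.iter():
        # Namespaced tags look like '{http://www.w3.org/2000/svg}script'
        if elem.tag.split("}")[-1].lower() == "script":
            return True
        # Event-handler attributes such as onload/onclick also run script
        if any(attr.lower().startswith("on") for attr in elem.attrib):
            return True
    return False

benign = '<svg xmlns="http://www.w3.org/2000/svg"><rect width="1" height="1"/></svg>'
suspicious = ('<svg xmlns="http://www.w3.org/2000/svg">'
              '<script>/* code would run when viewed */</script></svg>')

print(svg_contains_script(benign))      # False
print(svg_contains_script(suspicious))  # True
```

Real campaigns obfuscate far more heavily than this, so a check like the above is only a starting point; it simply demonstrates why an "image" file can carry runnable code.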
HP Wolf Security warns that generative AI lowers the bar for cybercriminals by making endpoint infections easier, and that the development signals a shift beyond phishing attacks, underscoring the need for greater vigilance. With AI-assisted attacks becoming more common, security experts have specifically warned French-speaking users about campaigns whose malicious code was written with GenAI tools.
The report also suggests that generative AI produced the code delivering the AsyncRAT malware, noting that chatbots such as Gemini and OpenAI's ChatGPT typically add explanatory comments to computer code when someone asks them to write a program. Documented cases of hackers using AI to create actual malware remain rare. However, in April, cybersecurity provider Proofpoint reported a case in which attackers used AI to create a malicious script, an indication that the practice may be spreading.