The FBI has reported a concerning trend in the world of cybercrime. Hackers are using generative artificial intelligence, such as the ChatGPT chatbot, to create more sophisticated and harder-to-detect viruses. This new way of leveraging AI allows criminals to rapidly generate malicious code and launch waves of cyberattacks more effectively than in the past.
Authorities warn that this trend could increase considerably as more people adopt and use Artificial Intelligence technology. The adoption and democratization of AI models provide malicious actors with a new tool to complement their criminal activities, including the use of AI speech generators to scam people by posing as trusted individuals.
This is not the first time that hackers have used AI tools to create dangerous malware. Security researchers previously discovered how a chatbot's API could be altered to generate malware code, making virus creation easy for almost any hacker.
However, some cybersecurity experts disagree with the FBI's concerns, arguing that the threat posed by AI chatbots has been overblown. They contend that most hackers still find code vulnerabilities through traditional means, such as data leaks and open-source repositories. Furthermore, they claim that the quality of malware code produced by chatbots tends to be low, and that many novice hackers lack the skills to bypass anti-malware defenses.
Despite these divergent opinions, the situation is further complicated by the discontinuation of tools designed to detect chatbot-generated text, which could make AI-driven cybercrime harder to combat.
AI is capable of programming
It is important to highlight that artificial intelligence tools such as ChatGPT have shown an impressive ability to program in different programming languages. These language models can understand and generate code from the prompts or instructions provided by users.
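To make this concrete, here is a minimal sketch of prompt-driven code generation against OpenAI's chat completions HTTP API, using only the Python standard library. The model name, the prompt, and the `generate_code` helper are illustrative assumptions, and the call requires a valid API key in the `OPENAI_API_KEY` environment variable.

```python
# Illustrative sketch: asking a chat model to write code from a plain-language
# prompt. Uses only the standard library; requires network access and an API
# key at run time. Model name and prompt are assumptions for this example.
import json
import os
import urllib.request


def generate_code(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send a natural-language prompt to the chat completions endpoint
    and return the model's textual reply (which may contain code)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    request = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(generate_code("Write a Python function that reverses a string."))
```

The same ease of use is what worries the FBI: a single sentence of natural language is enough to request working code, for legitimate and malicious purposes alike.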