
Cybercriminals Exploiting AI and ML Technologies for Malicious Purposes

In this post:

  • AI and ML are being exploited by cybercriminals for malicious activities.
  • Deepfakes, AI-supported hacking, and human impersonation pose serious threats.
  • Future risks include enhanced attacks, evasion, and physical harm using AI.

A new research paper jointly produced by Trend Micro, the United Nations Interregional Crime and Justice Research Institute (UNICRI), and Europol reveals how cybercriminals are exploiting artificial intelligence (AI) and machine learning (ML) technologies for malicious purposes.

AI and ML in business

AI and ML are rapidly fueling the development of a more dynamic world. These technologies promise greater efficiency, higher levels of automation, and autonomy. AI is a dual-use technology at the heart of the fourth industrial revolution. With ML, a subfield of AI that analyzes large volumes of data to find patterns via algorithms, enterprises, organizations, and governments can perform impressive feats that drive innovation and better business. By 2020, 37% of businesses and organizations had already integrated AI into their systems and processes. Tools powered by AI and ML enable enterprises to better predict customers’ buying behaviors, contributing to increased revenues. Some enterprises, such as Amazon, have built highly profitable businesses with the help of ML- and AI-powered tools.
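As a loose illustration of the pattern-finding idea (a toy sketch, not drawn from the research paper or any vendor's actual system), a few lines of Python can mine co-purchase counts from transaction data and suggest what a customer is likely to buy next:

```python
from collections import Counter
from itertools import combinations

def co_purchase_counts(baskets):
    """Count how often each pair of products appears in the same basket."""
    pairs = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            pairs[(a, b)] += 1
    return pairs

def recommend(pairs, product):
    """Rank products most often bought together with `product`."""
    scores = Counter()
    for (a, b), n in pairs.items():
        if a == product:
            scores[b] += n
        elif b == product:
            scores[a] += n
    return [p for p, _ in scores.most_common()]

# Hypothetical transaction data for illustration.
baskets = [
    ["laptop", "mouse", "bag"],
    ["laptop", "mouse"],
    ["laptop", "keyboard"],
    ["mouse", "pad"],
]
print(recommend(co_purchase_counts(baskets), "laptop"))
```

Production systems replace this counting with large-scale models trained on millions of transactions, but the principle — extracting statistical patterns from historical data to anticipate behavior — is the same.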

Misuses and abuses of AI and ML

While AI and ML can support businesses and help solve some of society’s biggest challenges, these technologies can also enable a wide range of digital, physical, and political threats. The features that make AI and ML systems integral to businesses are the same features that cybercriminals misuse and abuse for ill gain.

*Deepfakes*: One popular abuse of AI is deepfakes, which involve using AI techniques to craft or manipulate audio and visual content to appear authentic. Deepfakes are suited for disinformation campaigns because they are difficult to differentiate from legitimate content. Deepfakes have the potential to distort reality for nefarious purposes, as seen in an alleged deepfake video that destabilized a coalition government in Malaysia and a deepfake audio that duped a UK-based energy firm into transferring nearly 200,000 British pounds to a Hungarian bank account.

*AI-Supported Password Guessing*: Cybercriminals are employing ML to improve algorithms for guessing users’ passwords. With neural networks and Generative Adversarial Networks (GANs), cybercriminals can analyze vast password datasets and generate password variations that fit the statistical distribution of real passwords, leading to more accurate and targeted guesses.
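The GAN-based approach referenced above is far beyond a short snippet, but a simplified rule-based sketch conveys the underlying idea: candidate guesses are generated by applying the mangling patterns (capitalization, digit suffixes, leetspeak) that appear most often in leaked password dumps. This is an educational illustration only; the function and patterns below are illustrative assumptions, not the paper's method:

```python
def mangle(base):
    """Generate common password variations of a base word, mimicking
    mangling patterns frequently observed in leaked password lists
    (capitalization, digit/symbol suffixes, simple leetspeak)."""
    leet = str.maketrans({"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"})
    variants = {base, base.capitalize(), base.translate(leet)}
    for suffix in ("1", "123", "2024", "!"):
        variants.add(base + suffix)
        variants.add(base.capitalize() + suffix)
    return sorted(variants)

print(mangle("sunrise"))
```

Where this toy version applies a fixed rule list, an ML-driven tool learns the rules and their relative frequencies from the data itself, which is what makes its guesses statistically targeted rather than exhaustive.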

*Human Impersonation on Social Networking Platforms*: Cybercriminals are abusing AI to imitate human behavior on social media platforms. For example, AI-supported bots on Spotify can mimic human-like usage patterns to dupe bot detection systems and generate fraudulent streams and traffic for specific artists.
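One simple heuristic that such bots are built to defeat is timing analysis: real listeners pause, skip, and return at irregular intervals, while naive scripts act on a fixed clock. The toy detector below (an illustrative assumption, not any platform's actual method) flags accounts whose inter-play intervals are suspiciously regular:

```python
from statistics import mean, pstdev

def looks_automated(intervals, cv_threshold=0.1):
    """Flag a sequence of inter-play intervals (in seconds) as bot-like
    when its coefficient of variation (stdev / mean) is very low,
    i.e. the timing is too regular to be human."""
    if len(intervals) < 2:
        return False
    m = mean(intervals)
    if m == 0:
        return True
    return pstdev(intervals) / m < cv_threshold

print(looks_automated([30.0, 30.1, 29.9, 30.0]))   # metronomic timing -> True
print(looks_automated([12.0, 95.0, 40.0, 300.0]))  # irregular timing -> False
```

AI-supported bots defeat exactly this kind of check by sampling their delays from distributions learned from genuine user behavior, which is why detection systems must look at far richer behavioral signals than timing alone.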


*AI-Supported Hacking*: Cybercriminals are weaponizing AI frameworks for hacking vulnerable hosts. For instance, DeepExploit, an ML-enabled penetration testing tool, and Pwnagotchi 1.0.0, developed for Wi-Fi hacking through de-authentication attacks, both use neural network models to improve their performance.

Future misuses and abuses of AI and ML

In the future, criminals are expected to exploit AI to enhance the scope and scale of their attacks, evade detection, and abuse AI both as an attack vector and an attack surface. AI can automate the first steps of an attack through content generation, improve business intelligence gathering, and speed up the rate at which potential victims and vulnerable business processes are identified and compromised. AI could also be abused to manipulate cryptocurrency trading practices and to harm or inflict physical damage on individuals, as in the scenario of facial-recognition-equipped drones carrying explosives.

AI and ML technologies have many positive use cases but are also being abused for criminal and malicious purposes. To protect systems, devices, and the general public from advanced attacks, it is essential to understand the capabilities, scenarios, and attack vectors through which these technologies are being exploited.

About the research

The research paper, titled “Malicious Uses and Abuses of Artificial Intelligence,” is a joint effort among Trend Micro, the United Nations Interregional Crime and Justice Research Institute (UNICRI), and Europol. It discusses the present malicious uses and abuses of AI and ML technologies and the plausible future scenarios in which cybercriminals might abuse these technologies for ill gain.

AI and ML technologies have many positive use cases, including visual perception, speech recognition, language translation, pattern extraction, and decision-making functions in different fields and industries. However, the same capabilities are being turned toward criminal and malicious ends, and the paper examines both sides of this dual use.

