If you’ve watched cartoons like Tom and Jerry, you’ll recognize a common theme: an elusive target evades its formidable opponent. This game of “cat and mouse” (whether literal or otherwise) involves pursuing something that slips out of your grasp each time you try to catch it.
In a similar way, evading persistent hackers is an ongoing challenge for cybersecurity teams. Keeping them chasing what’s just out of reach, MIT researchers are working on an AI approach called “artificial adversarial intelligence” that mimics attackers of a device or network to test network defenses before real attacks happen. Other AI-based defensive measures help engineers further harden their systems to prevent ransomware, data theft, or other hacks.
Here, Una-May O’Reilly, a principal investigator at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) who leads the AnyScale Learning For All (ALFA) group, discusses how artificial adversarial intelligence protects us from cyber threats.
Q: In what ways can artificial adversarial intelligence play the role of a cyber attacker, and how does artificial adversarial intelligence portray a cyber defender?
A: Cyber attackers exist along a spectrum of competence. At the lowest end, there are so-called script kiddies, or threat actors who spray well-known exploits and malware in the hope of finding some network or device that hasn’t practiced good cyber hygiene. In the middle are cyber mercenaries, who are better resourced and organized to prey on enterprises with ransomware or extortion. And, at the high end, there are groups that are sometimes state-supported, which can launch the most difficult-to-detect “advanced persistent threats” (or APTs).
Think of the specialized, nefarious intelligence that these attackers marshal: that’s adversarial intelligence. The attackers make very technical tools that let them hack into code, they choose the right tool for their target, and their attacks have multiple steps. At each step, they learn something, integrate it into their situational awareness, and then make a decision on what to do next. For the sophisticated APTs, they may strategically pick their target and devise a slow, low-visibility plan that is so subtle that its implementation escapes our defensive shields. They can even plan to plant deceptive evidence pointing to another hacker!
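The step-by-step loop just described (act, observe, fold the observation into situational awareness, then decide what to do next) can be sketched abstractly. Everything in this toy is an assumption invented for illustration: the action names, the random environment feedback, and the success-count scoring are not taken from any real agent.

```python
import random

# Hypothetical attack steps; real campaigns have far richer action spaces.
ACTIONS = ["scan", "phish", "escalate", "exfiltrate"]

def observe(action):
    """Stand-in for environment feedback: did the step succeed? (random toy)"""
    return random.random() < 0.5

def run_attack_episode(max_steps=8, seed=0):
    random.seed(seed)
    # Crude situational awareness: a running success score per action.
    awareness = {a: 0 for a in ACTIONS}
    trace = []
    for _ in range(max_steps):
        # Decide the next step using what has been learned so far
        # (ties broken randomly).
        action = max(ACTIONS, key=lambda a: (awareness[a], random.random()))
        success = observe(action)
        awareness[action] += 1 if success else -1  # integrate the observation
        trace.append((action, success))
    return trace

print(run_attack_episode())
```

The point of the sketch is the loop structure, not the scoring rule: each iteration interleaves learning with deciding, which is what distinguishes an adaptive agent from a fixed script.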
My research goal is to replicate this specific kind of offensive, or attacking, intelligence: intelligence that is adversarially oriented (the intelligence that human threat actors rely upon). I use AI and machine learning to design cyber agents and model the adversarial behavior of human attackers. I also model the learning and adaptation that characterize cyber arms races.
I should also note that cyber defenses are quite complicated. They have evolved their complexity in response to escalating attack capabilities. These defense systems involve designing detectors, processing system logs, triggering appropriate alerts, and then triaging them into incident response systems. They have to be constantly alert to defend a very big attack surface that is hard to track and very dynamic. On this other side of attacker-versus-defender competition, my team and I also invent AI in the service of these different defensive fronts.
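The defensive pipeline just described (detectors over system logs, alerts, and triage into incident response) can be sketched in miniature. The log format, the detection rule, and the triage-by-host step below are all assumptions made up for illustration, not any real product’s design.

```python
# Hypothetical event types a toy detector treats as suspicious.
SUSPICIOUS = {"FAILED_LOGIN", "PRIV_ESCALATION"}

def detect(event):
    """Toy detector: flag event types known to be suspicious."""
    return event["type"] in SUSPICIOUS

def triage(alerts):
    """Toy incident-response step: group alerts by affected host."""
    by_host = {}
    for alert in alerts:
        by_host.setdefault(alert["host"], []).append(alert["type"])
    return by_host

# A made-up stream of parsed log events.
logs = [
    {"host": "web-1", "type": "LOGIN"},
    {"host": "web-1", "type": "FAILED_LOGIN"},
    {"host": "db-1", "type": "PRIV_ESCALATION"},
]
alerts = [event for event in logs if detect(event)]
print(triage(alerts))  # {'web-1': ['FAILED_LOGIN'], 'db-1': ['PRIV_ESCALATION']}
```

Even this cartoon shows why the attack surface is hard to track: every new log source means new parsing, new detection rules, and new triage routes.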
Another thing stands out about adversarial intelligence: both Tom and Jerry are able to learn from competing with one another! Their skills sharpen and they lock into an arms race. One gets better, then the other, to save his skin, gets better too. This tit-for-tat improvement goes onwards and upwards! We work to replicate cyber versions of these arms races.
Q: What are some examples in our everyday lives where artificial adversarial intelligence has kept us safe? How can we use adversarial intelligence agents to stay ahead of threat actors?
A: Machine learning has been used in many ways to ensure cybersecurity. There are all kinds of detectors that filter out threats. They are tuned to anomalous behavior and to recognizable kinds of malware, for example. There are AI-enabled triage systems. Some of the spam protection tools right there on your cell phone are AI-enabled!
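As a minimal sketch of a detector tuned to anomalous behavior, the snippet below flags values that sit far from a baseline using a z-score. Real detectors learn much richer models of behavior; the traffic numbers and the 2.5-standard-deviation threshold here are assumptions chosen purely for illustration.

```python
import statistics

def anomalies(values, z_threshold=2.5):
    """Return the values that deviate from the mean by more than the threshold."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if stdev and abs(v - mean) / stdev > z_threshold]

# Requests per minute from one host; the final spike is the injected anomaly.
traffic = [52, 48, 50, 51, 49, 47, 53, 50, 500]
print(anomalies(traffic))  # [500]
```

One design wrinkle worth noting: the anomaly itself inflates the mean and standard deviation it is measured against, which is why production systems often prefer robust statistics (medians) or learned baselines over a raw z-score.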
With my team, I design AI-enabled cyber attackers that can do what threat actors do. We invent AI to give our cyber agents expert computer skills and programming knowledge, to make them capable of processing all sorts of cyber knowledge, planning attack steps, and making informed decisions within a campaign.
Adversarially intelligent agents (such as our AI cyber attackers) can be used as practice when testing network defenses. A lot of effort goes into checking a network’s robustness to attack, and AI is able to help with that. Additionally, when we add machine learning to our agents, and to our defenses, they play out an arms race we can inspect, analyze, and use to anticipate what countermeasures may be used when we take measures to defend ourselves.
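The arms race between a learning attacker and a learning defender can be caricatured numerically: each round, one side adapts just enough to beat the other’s last move. The single “strength” number and the 10 percent adaptation rate are invented for illustration; real co-evolutionary setups pit whole populations of agents against each other rather than two scalars.

```python
def arms_race(rounds=5):
    """Alternate adaptation: attacker tops the defense, defender tops the attack."""
    attack, defense = 1.0, 1.0
    history = []
    for _ in range(rounds):
        attack = defense * 1.1   # attacker adapts to beat the current defense
        defense = attack * 1.1   # defender adapts to block the new attack
        history.append((round(attack, 3), round(defense, 3)))
    return history

for attack, defense in arms_race():
    print(f"attack={attack:.3f}  defense={defense:.3f}")
```

The monotonic climb is the tit-for-tat dynamic in the interview: neither side can stand still, and the recorded history is exactly the kind of trace one can inspect and analyze for countermeasures.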
Q: What new risks are they adapting to, and how do they do so?
A: There never seems to be an end to new software being released and new configurations of systems being engineered. With every release come vulnerabilities that an attacker can target. These may be examples of weaknesses in code that are already documented, or they may be novel.
New configurations pose the risk of errors or new avenues of attack. We didn’t imagine ransomware when we were dealing with denial-of-service attacks. Now we’re contending with cyber espionage and IP (intellectual property) theft alongside ransomware. All our critical infrastructure, including telecommunications networks and financial, health care, municipal, energy, and water systems, is a target.
Fortunately, a lot of effort is being devoted to defending critical infrastructure. We will need to translate that effort into AI-based products and services that automate some of it. And, of course, we need to keep designing smarter and smarter artificial adversaries to keep us on our toes, or to help us practice defending our cyber assets.