As Artificial Intelligence improves and expands into more fields, it's common to hear people ask whether, or how, AI could replace a human in a particular task. But is that even the right question? A more pertinent one is what role analysts play in this AI-driven reality: not just as users, but as essential interpreters, overseers, and directors of the technology, leveraging it to enhance their capabilities, optimize processes, and make better-informed, more strategic decisions.
AI is a tool for empowering people and teams to do more and to solve problems in new ways. Forcing AI into processes designed for humans can be like trying to fit a square peg into a round hole. To take full advantage of AI, we have to rethink what it makes possible.
Cybersecurity requires trade-offs to manage risk, so optimizing resources and prioritizing assets is part of the challenge. The ISC2 Cybersecurity Workforce Study illustrates this landscape well, with many professionals reporting skills gaps and understaffed teams in their organizations.
By adapting the risk management process with an AI-first approach, it's possible to close that gap and redefine what it means to have reliable and actionable cyber threat intelligence.
AI threat intelligence: the first responder
Artificial Intelligence shines when it comes to gathering and correlating information quickly. AI can, and should, play an important role whenever there is a need to sift through alert and event data and match it against what is relevant to your attack surface.
AI can be fast and comprehensive, surfacing data points and threat intelligence insights that you might otherwise have missed or not caught in time to develop a mitigation plan. For CISOs and team managers, a tool like Polaris can be invaluable for detecting blind spots and refocusing the team on the most active threats. It can work like a counselor, advising on the latest cyber risks before the day has even started.
While general-purpose large language models (LLMs) can be problematic due to hallucinations and inconsistent output, specialized models can be tailored to act as a first responder for any event, incident, or alert that requires attention. Polaris is not subject to the limitations that keep tools like ChatGPT or Gemini from acquiring and processing up-to-the-minute information.
In that capacity, Artificial Intelligence can lessen the effects of "alarm fatigue," a well-known phenomenon in which humans become desensitized to the alerts and information produced by sensors and other tools. AI does not have that problem: it will consider every bit of data it has access to and prioritize it based on what it has learned about the attack surface, the threat actors involved, and the business risks you have identified.
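As a rough illustration of the kind of prioritization described above (this is not Polaris's actual logic; the fields and weights are hypothetical assumptions), every alert can be scored against what is known about the attack surface and the business, so nothing is silently ignored:

```python
# Hypothetical sketch of context-aware alert prioritization.
# Fields and weights are illustrative assumptions, not a vendor's real logic.
from dataclasses import dataclass

@dataclass
class Alert:
    asset: str          # affected asset, e.g. "payments-api"
    threat_actor: str   # attributed actor, if any
    severity: float     # 0.0 - 1.0 from the detection source

# Illustrative context an organization might maintain about itself.
CRITICAL_ASSETS = {"payments-api": 1.0, "marketing-site": 0.3}
ACTIVE_ACTORS = {"ExampleGroup": 0.9}  # actors known to target this sector

def priority(alert: Alert) -> float:
    """Combine detection severity with business context into one score."""
    asset_weight = CRITICAL_ASSETS.get(alert.asset, 0.5)
    actor_weight = ACTIVE_ACTORS.get(alert.threat_actor, 0.4)
    return round(alert.severity * asset_weight * actor_weight, 3)

alerts = [
    Alert("payments-api", "ExampleGroup", 0.7),
    Alert("marketing-site", "unknown", 0.9),
]
# Every alert is scored and ranked rather than fatigue-filtered.
for a in sorted(alerts, key=priority, reverse=True):
    print(a.asset, priority(a))
```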
When AI takes on this role, it's not doing the work of a human. It provides an entirely new perspective that the business wouldn't otherwise have, and delivers it at a speed no human could match.
Threat intelligence is more useful when it's actionable, relevant, and timely, and that's what AI can be trained to find for security teams. That's why we built Polaris.
Analysts bring their insight and intuition
Mitigations and response strategies vary by business. Companies have distinct work cultures and operate under regulatory environments that differ by sector and location. While one company may see user awareness as a viable way to mitigate a particular threat, another might find it ineffective or impractical.
Analysts can and should provide insights about what a threat means to the business in ways that an AI cannot easily be configured to do. Not all systems are equally important to every business, and a machine will miss many details that IT and security teams understand easily and intuitively.
As a result, analysts can devise solutions that an AI would need an endless number of parameters and rounds of training to replicate.
Leveraging Artificial Intelligence for information gathering frees analysts to spend more time solving these problems and staying current with emerging trends, which informs their decision-making and ingenuity. That work is also more fulfilling than combing through data for the single relevant point that a tool such as Polaris could have surfaced in seconds.
Analysts can also pose follow-up questions to deepen their understanding of a threat. Unlike general-purpose LLMs such as ChatGPT, which are geared more toward generation than toward sourcing data, Polaris reports favor short summaries with clear references to the source material, giving easy access to everything that can help shape the response to an event: information about the threat actor, the targeted industry or organization, and the locations mentioned.
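To make the idea of a short, source-linked summary concrete, here is one hypothetical shape such a report entry could take. The field names are assumptions for illustration only, not Polaris's actual schema:

```python
# Hypothetical structure for a source-linked threat summary.
# Field names are illustrative, not an actual product schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ThreatSummary:
    title: str
    summary: str                       # a few sentences, not a full report
    threat_actor: Optional[str] = None
    target_industry: Optional[str] = None
    locations: list[str] = field(default_factory=list)
    sources: list[str] = field(default_factory=list)  # references for verification

entry = ThreatSummary(
    title="Credential-phishing campaign against retail brands",
    summary="Short description of the campaign and observed infrastructure.",
    threat_actor="ExampleGroup",
    target_industry="Retail",
    locations=["BR", "US"],
    sources=["https://example.com/advisory"],
)
# An analyst can follow entry.sources to verify the claim and shape the response.
print(entry.title, entry.sources)
```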
A joint perspective for better risk management
Hackers and online scammers are already employing AI, and new types of AI-driven cyberattacks are likely on the way. Attackers aren't worried about being "replaced" by AI; that idea makes no sense to them. Instead, they are focused on building scams and campaigns that were simply not possible before, counting on the fact that many users and businesses will not be ready.
AI allows us to rethink and improve the way we collect threat intelligence. It can minimize the delay from data collection to action. But that action will always require people who understand cybersecurity's trade-offs and tenets. AI is there to be part of the team.
Artificial Intelligence shouldn't be a threat to cybersecurity analysts. If anything, it is a threat to the adversaries looking to make our networks and businesses unsafe.
Start a free trial: axur.com/polaris
Contact us at: contact@axur.com