By Michael Cocanower
Black Hat USA 2023 was a landmark event for the cybersecurity community, featuring over 100 briefings, dozens of open-source tool demos, and a robust business hall. Two keynotes, given by Maria Markstedter and Kemba Walden, provided a glimpse into the future of the field.
Maria Markstedter’s Keynote
In the first keynote, Maria Markstedter, founder of Azeria Labs, discussed the challenges and opportunities of artificial intelligence (AI) in cybersecurity. She argued that AI has the potential to revolutionize the way we defend against cyberattacks, but that it also poses new risks. The most alarming prospect raised in her keynote was the emergence of autonomous AI systems that can learn and adapt to attack on their own.
She noted that we will reach a point where programs can build other programs without being explicitly instructed to do so. This poses an inherent risk to any industry that adopts AI, especially my own — financial services and wealth management — where safeguarding sensitive data is paramount.
With these potential use cases surfacing, many pertinent questions arise: Will AI ensure optimal tool selection during web searches? More crucially, how can we be certain that our AI doesn’t inadvertently introduce malware while attempting to solve a problem by installing a tool it found on the Internet?
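One way to make that second question concrete: rather than trusting whatever an AI agent fetches from the Internet, an organization can require that any tool be checked against a pre-approved allowlist of known-good hashes before installation. The sketch below is purely illustrative (the allowlist source and digest are hypothetical, not from the keynote); a real deployment would distribute the allowlist through a signed, centrally managed channel.

```python
import hashlib

# Hypothetical allowlist of SHA-256 digests for vetted tool releases.
# In practice this would be populated from a signed, centrally managed
# source, not hard-coded. (This example digest is sha256 of b"test".)
APPROVED_TOOLS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_approved(tool_bytes: bytes) -> bool:
    """Return True only if the downloaded tool's SHA-256 digest
    appears on the organization's allowlist."""
    digest = hashlib.sha256(tool_bytes).hexdigest()
    return digest in APPROVED_TOOLS
```

An AI agent wired through a gate like this simply cannot install an unvetted binary, no matter what it found during a web search — the decision stays with the humans who maintain the allowlist.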
These are the questions on every cybersecurity team’s mind, especially after the extraordinary AI showcase at Black Hat. As regulation continues to lag behind the pace of AI innovation, how cybersecurity professionals choose to leverage AI in their organizations will be crucial to overall business success.
Kemba Walden’s Keynote
The second keynote, delivered by Acting National Cyber Director Kemba Walden, discussed the importance of government-industry collaboration in cybersecurity. She stated that government and industry must work together to address the growing threat of cyberattacks, and highlighted the urgent need for businesses to transcend mere compliance and invest comprehensively in cybersecurity.
Walden’s stance strongly resonated with me and the larger cybersecurity community. Many industries will inherently struggle to keep regulation and guidance apace with the development of AI technology. In the wealth management sector — and frankly, across the business world — these decisions are currently being made by leaders who do not have a deep understanding of cybersecurity. I am hopeful we will see an increasing turn to the cybersecurity industry, and the professionals therein, to help make informed decisions rooted in risk mitigation.
Using AI — especially large language models (LLMs) as they are built today — carries inherent risks, such as bias, inaccuracy, and security vulnerabilities, once you feed data into them. The eagerness to harness AI’s potential, especially among wealth managers and RIAs, must be balanced by the imperative to safeguard data integrity throughout the process and mitigate these risks.
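One common mitigation for this data-integrity concern is to scrub client identifiers before any text leaves the firm’s boundary, for example in a prompt sent to an external LLM. The sketch below is a minimal illustration only — the patterns are simplistic assumptions, and a real RIA deployment would rely on a vetted data-loss-prevention tool rather than ad-hoc regexes.

```python
import re

# Illustrative patterns only: U.S. SSN format and bare 8-12 digit
# account-number-like strings. Real DLP rules are far more thorough.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
ACCOUNT_RE = re.compile(r"\b\d{8,12}\b")

def redact(text: str) -> str:
    """Mask SSN- and account-number-like strings before the text is
    included in a prompt to an external model."""
    text = SSN_RE.sub("[REDACTED-SSN]", text)
    return ACCOUNT_RE.sub("[REDACTED-ACCT]", text)
```

Redaction of this kind does not eliminate the bias or inaccuracy risks noted above, but it does reduce the chance that sensitive client data ends up in a third-party provider’s logs or training corpus.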
Black Hat USA 2023 was a valuable opportunity for cybersecurity professionals to learn about the latest threats and trends. The keynotes by Markstedter and Walden provided a stimulating discussion of the future of AI and cybersecurity, as well as a glimpse into the formidable challenges ahead. Yet, they also kindled a glimmer of hope — collaborative efforts and unity stand as the guiding light to navigate these challenges, fortifying our data and systems against adversaries.
If I learned anything from this year’s conference, it’s that there is a strong need to understand AI. Attending industry events like Black Hat and DEF CON (the hacking conference that immediately follows Black Hat every year) has been important in helping cybersecurity professionals understand this powerful new technology and stay ahead of the threat actors who are utilizing it.
The potency of AI is a double-edged sword, capable of both positive and malicious applications. AI can be a powerful tool for detecting and responding to cyberattacks, but it is important to remember that it is not infallible and that threat actors are also learning how to use it. A new wave of AI-powered cyberattacks is brewing, so we need to be prepared.
The ethical implications of using AI in cybersecurity are also something to keep top of mind. For example, AI can be used to automate security tasks, but those tasks must be performed fairly and without bias; biased AI systems risk discriminating against certain groups of people. It is key to carefully consider these implications before deploying AI systems in cybersecurity applications.
The insights gleaned from the event were invaluable, equipping the security community with a much-needed roadmap to navigate the surge of AI. While AI can certainly be misused, the onus is squarely on cybersecurity experts to harness its potential for good. By investing in cybersecurity and pooling our efforts, we can lay the foundation for a safer and more secure digital future. I’m optimistic that this will foster a deeper partnership between business leaders and cybersecurity professionals, leading to well-informed decisions rooted in effective risk mitigation.
About Michael Cocanower:
Michael Cocanower is founder and chief executive officer of AdviserCyber, a Phoenix-based cybersecurity consultancy serving Registered Investment Advisers (RIAs). A graduate of Arizona State University with degrees in finance and computer science, he has worked more than 25 years in the IT sector. Michael, a recognized author and subject matter expert, has earned certifications as both an Investment Adviser Certified Compliance Professional and as a Certified Ethical Hacker. He is frequently quoted in leading international publications and has served on the United States Board of Directors of the International Association of Microsoft Certified Partners and the International Board of the same organization for many years. He also served on the Microsoft Infrastructure Partner Advisory Council.