The State of Enterprise IT Security Podcast - S1 EP. 08: Google Fosters AI in Cybersecurity, AI-Generated Election Trickery, and Shadow AI


In Episode 8 of the State of Enterprise IT Security Podcast, Brad Bussie discusses Google's AI initiative in cybersecurity for scalable threat detection and response, a new tech company accord to fight AI-generated election trickery, and the risks of shadow AI for organizations.

Overview:

In episode 08 of the State of Enterprise IT Security podcast, host Brad Bussie covers the rapidly evolving landscape of artificial intelligence (AI) in cybersecurity.

The episode touches on three critical topics: Google's initiatives to incorporate AI in cybersecurity, the collective efforts of tech giants to mitigate AI-generated election interference, and the challenges posed by Shadow AI in corporate environments. This episode serves as a practical guide for technology leaders navigating the complex intersection of AI and cybersecurity.

Key Topics Covered:

1. Google's AI Cybersecurity Initiatives: Brad kicks off the episode with an overview of Google's commitment to enhancing cybersecurity through AI. He explains Google's efforts in developing an AI-ready network of global data centers and its investment in startups focused on AI for cybersecurity. Notably, he highlights Magika, an open-source AI tool by Google for malware detection, and Google's funding of research at institutions like the University of Chicago, Carnegie Mellon, and Stanford to advance AI-powered security solutions.

2. Combating AI-Generated Election Trickery: The second segment addresses a significant concern: AI-generated misinformation in elections. Brad discusses a recent accord signed by leading tech companies, including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, and TikTok, at the Munich Security Conference. The pact commits its signatories to adopt "reasonable precautions" against the misuse of AI tools in democratic processes. However, Brad expresses skepticism about the effectiveness of these measures, arguing that more robust actions are needed beyond detecting and labeling deceptive content.

3. Shadow AI Risks: The final topic explores the phenomenon of Shadow AI, where employees use unsanctioned AI applications, potentially exposing organizations to security breaches. Brad offers strategies for identifying and mitigating Shadow AI risks, emphasizing the importance of comprehensive technical controls, staff surveys, due diligence during third-party onboarding, enforcing consequences for unauthorized AI usage, and user education.


Read the Transcript:

00:00 - AI allows security professionals and defenders to do something that they haven't been able to do before, which is to work at scale in threat detection, malware analysis, vulnerability detection, and fixing vulnerabilities, as well as incident response. But what I'm seeing is, by and large, defenders that are responsible for protecting corporate and personal data are just not prepared. They're pretty overwhelmed and feel generally unsupported when it comes to the new style of attacks and just the volume that's coming from AI-generated attacks.

01:02 - Hello. Hey everyone, I'm Brad Bussie, chief information security officer here at e360. Thank you for joining me for the State of Enterprise IT Security edition. This is the show that makes IT security approachable and actionable for technology leaders. I'm happy to bring you three topics this week. First, new Google initiatives to foster AI in cybersecurity. Second, tech companies sign an accord to combat AI-generated election trickery. And number three, shadow AI. So with that, let's get started.

01:41 - Now, Google has an initiative to foster AI in cybersecurity. And as a cybersecurity practitioner, I'm pretty excited about this. They've announced the initiative, and it's aimed at fostering the use of artificial intelligence in cybersecurity overall. So what does that mean? If I look at this, I think AI allows security professionals and defenders to do something that they haven't been able to do before, which is to work at scale in threat detection, malware analysis, vulnerability detection, and fixing vulnerabilities, as well as incident response.

02:28 - But what I'm seeing is, by and large, defenders that are responsible for protecting corporate and personal data are just not prepared. They're pretty overwhelmed and feel generally unsupported when it comes to the new style of attacks and just the volume that's coming from AI-generated attacks. So to combat this, I feel, and others feel, that public and private organizations should work together, and the goal should be to secure AI from the ground up.

03:12 - And what's exciting is Google is doing just that. They're continuing to invest in what they're calling an AI-ready network of global data centers, so they've got a bunch of those, and they're supporting 17 startups. If you look at the different acquisitions and programs they're working on, Google is backing startups in the UK, the US, and the EU, and the focus is really around new AI-for-cybersecurity programs.

03:48 - Now the company also has something pretty cool called Magika, and it's open source. It's an AI-powered tool for malware detection through things like file type identification. And this powers things like Google Drive, Gmail, Safe Browsing, and some other components that have been blended with an acquisition known as VirusTotal.
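As a quick, hands-on illustration of what Brad describes here, below is a minimal sketch using the open-source magika Python package (assuming it's installed via pip install magika); the result field names follow the project's published examples and may shift between releases:

```python
# Minimal sketch: use Google's open-source Magika model to identify a
# file's true content type from its bytes, independent of its extension.
# Assumes `pip install magika`; field names follow the published examples.
from magika import Magika

magika = Magika()

# Classifying raw bytes is useful for catching payloads that lie about
# their extension -- the file type identification trick mentioned above.
result = magika.identify_bytes(b"#!/bin/bash\necho hello\n")
print(result.output.ct_label)  # e.g. "shell"
print(result.output.score)     # model confidence between 0.0 and 1.0
```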

04:15 - So I think another pretty interesting thing is they want to support advancements in AI-powered security overall. Google is offering $2 million in research grants and strategic partnerships to support research, and they're doing it at institutions like the University of Chicago, Carnegie Mellon, and Stanford.

05:03 - So, second topic today: tech companies have signed an accord to combat AI-generated election trickery, as we're going to call it. Now these executives, I'll just name a few: Adobe, Amazon, Google, IBM, Meta, Microsoft, and OpenAI. And you heard me throw a little shade on TikTok in a previous episode, but TikTok was there too. They gathered at the Munich Security Conference to announce a framework for how they plan on responding to AI-generated deepfakes.

05:51 - And what's interesting is these major tech companies signed a pact to voluntarily adopt, and I'm going to do this in air quotes, you're going to see and hear this, "reasonable precautions" to prevent artificial intelligence tools from being used to disrupt democratic elections, not just in the US, but around the world.

06:16 - And if you ask me, I think they're taking a bit of a kid-gloves approach to this. And the reason is the companies aren't committing to ban or even remove deepfakes. Instead, the accord outlines methods they'll use to detect and then label deceptive AI content when it's created or distributed on their own platforms.

07:01 - So I said a lot of things, and I feel like I didn't really say much of anything, because based on that response from these companies, to me, they're not doing nearly enough. Detecting and labeling is one thing, but when I watch most people on social media, attention spans are super short, and generally they're not reading a label; it's just flashing by.

07:36 - So, to put things in perspective, election deepfakes are already happening. A robocall went out in New Hampshire that mimicked President Joe Biden's voice and tried to discourage people from voting in the primary election last month. And by all accounts, it was pretty convincing.

08:12 - Some think we should hold back on some of our AI capabilities, like hyper-realistic text-to-video generators and full voice-to-face generation. It's a little hard, though, because I think what we're trying to do is maintain free speech as well as maximize transparency.

09:24 - The third thing we're going to talk about today is shadow AI. And I love topics like this. Even back when it was shadow IT, I think some of my friends called it business-led SaaS. And it really is a real problem for organizations. What it is: employees leveraging what we call unsanctioned applications.

11:19 - So, a couple of different strategies that I would recommend. The first is what I call comprehensive technical controls: establishing things like network traffic monitoring, having a secure web gateway, and putting endpoint detection and response on endpoints. All good security practices, but here the point is to pinpoint unexpected AI-related activity and identify AI software. I'm not even going to call them installations, because a lot of these are still web based. But it is possible for an organization to end up with a bootleg or pirated LLM imported into a data center, and next thing you know, you've stood up your own, and that could be dangerous.
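To make the network-monitoring piece concrete, here's a minimal sketch of the idea; the CSV columns, log filename, and domain list are illustrative assumptions, not a reference to any specific gateway product:

```python
# Hypothetical sketch: flag unsanctioned gen-AI traffic in secure web
# gateway logs. The CSV columns (user, dest_host) and the domain list
# are illustrative assumptions, not any specific product's format.
import csv
from collections import Counter

# Public gen-AI service domains to watch for (extend for your environment).
GENAI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "perplexity.ai",
}

def find_shadow_ai(proxy_log_csv: str) -> Counter:
    """Count requests per (user, domain) pair that hit known gen-AI hosts."""
    hits = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            host = row["dest_host"].lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in find_shadow_ai("swg_log.csv").most_common(10):
        print(f"{user} -> {host}: {count} requests")
```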

12:19 - The second strategy is a staff survey. So conduct a survey and just understand: what are your users doing? What have they decided is helping them do their job? It's interesting how much information you can get from a simple one- or two-question survey. I guess we're past the point where the first question needs to be "do you know what it is?" because everybody pretty much knows what gen AI is now. So, question one: are you using gen AI or an LLM? Question two: which one or ones are you using? The third strategy is making sure we're doing onboarding due diligence.

12:57 - So when we're onboarding a third-party vendor or partner, understand: are they leveraging AI, and how are they doing it? Having some of those questions in the due diligence process will help identify potential shadow AI risks, because maybe your organization isn't doing it today, but you onboard a third party that's a heavy user, and next thing you know, it's like a gateway drug: your users are leveraging what they're doing.

13:28 - The fourth, as with anything, is enforcing consequences. We've talked about having a governance program, but there's really no way to enforce governance unless there's some kind of consequence. So implementing consequences for the use of unauthorized AI sends a strong message across the organization, and I think it will help deter some of that unauthorized usage.

13:59 - And the fifth I probably should have done first, but I did it last because technical solutions aren't always the answer: educating users is still very important. It's been debated for a very long time whether we should protect users from themselves through technology, but with this one specifically, we left technology last on purpose.

14:23 - So, use things like SaaS security platforms that can automatically detect when business credentials are used to log into any tool. Most of the publicly available gen AI large language models leverage a common set of authentication and authorization providers, whether it's Microsoft or Google; very rarely can you create an account with just any email address. They're starting to standardize, which is actually good, because it helps organizations make sure a given tool is something that's allowed.
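Here's a minimal sketch of that credential-detection idea; the JSONL event format and field names are hypothetical stand-ins for what identity providers such as Microsoft Entra or Google Workspace expose in their sign-in logs:

```python
# Hypothetical sketch: scan identity-provider sign-in events for corporate
# logins to gen-AI apps. The JSONL event format and field names are
# illustrative stand-ins for what real IdP sign-in logs expose.
import json

AI_APP_KEYWORDS = {"openai", "chatgpt", "anthropic", "claude", "gemini"}

def flag_ai_logins(events_path: str):
    """Yield (user, app) pairs where a business credential hit an AI app."""
    with open(events_path) as f:
        for line in f:  # one JSON sign-in event per line
            event = json.loads(line)
            app = event.get("app_display_name", "").lower()
            if any(keyword in app for keyword in AI_APP_KEYWORDS):
                yield event["user_principal_name"], app

for user, app in flag_ai_logins("signin_events.jsonl"):
    print(f"Corporate login to AI app: {user} -> {app}")
```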

15:05 - There are other tools out there, like browser extensions that watch what's being typed into prompts, because users can often share information that's private and not meant for the public domain. Maybe they don't know that, so they're putting that data into a prompt or ingesting it somewhere into an AI. It's really like a data loss prevention exercise, and it's definitely interesting; we can get into some of that technology in a later episode.
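As a rough illustration of the prompt-inspection idea, here's a minimal sketch; the PII patterns are simplistic placeholders, and real DLP-style browser extensions are far more sophisticated:

```python
# Rough illustration of prompt-level DLP: block a prompt when it matches
# simple PII patterns. The patterns are simplistic placeholders; real
# browser-extension DLP tools are far more sophisticated.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any PII patterns found in the prompt text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize this: customer SSN 123-45-6789, card 4111 1111 1111 1111"
violations = check_prompt(prompt)
if violations:
    print(f"Blocked prompt; matched: {', '.join(violations)}")
```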

15:54 - So I would suggest adopting these five simple ways to identify and protect an organization when it comes to shadow AI. Thank you, everybody, for tuning in, and we'll see you next time.

Written By: Brad Bussie