The State of Enterprise IT Security Podcast - S1 EP. 12: House Votes to Ban TikTok, AI Regulated by EU, Cars Tracking and Reporting Driving Habits and more


Join Brad Bussie on episode 12 of the State of Enterprise IT Security Edition as he covers the potential TikTok ban in the U.S., Europe's groundbreaking AI regulations, and the privacy concerns surrounding modern cars sharing data with insurers. Tune in for a critical analysis of these pressing IT security issues.

Overview:

In episode 12 of the State of Enterprise IT Security Edition, host Brad Bussie, Chief Information Security Officer at e360, discusses three pivotal topics impacting the IT security landscape. 

The first segment addresses the United States House of Representatives passing a bill that could lead to a ban on TikTok, citing the potential security threat posed by the China-based company's ability to collect sensitive user data. The discussion touches on the app's missteps and the political consequences it faces, highlighting the complex dialogue around national security and social media.

Bussie then shifts focus to Europe's groundbreaking Artificial Intelligence Act, which has just received final approval. This act is set to become a global benchmark for AI governance, introducing strict regulations for AI uses deemed high-risk and outright banning others deemed to pose an unacceptable risk, like social scoring and predictive policing.

The final topic raises privacy concerns about modern cars and their data-sharing practices with insurance providers. Bussie elaborates on how vehicles collect a wide range of data, including personal information and driving habits, often without clear options for users to opt out or protect their privacy.


Key Topics Covered:

  1. The U.S. House has passed a bill that could lead to a TikTok ban over data privacy concerns and national security issues.
  2. Europe's Artificial Intelligence Act sets strict regulations for AI, bans certain high-risk uses, and could serve as a global model for AI governance.
  3. Modern cars collect extensive personal and driving behavior data, which is shared with insurance companies, raising substantial privacy concerns.


Read the Transcript:


[00:00:00] Your car is watching you and sharing that data with your insurance provider. So unless you're driving something from the 1980s or maybe before, chances are your data is being shared.

[00:00:30] Hey, everybody. I'm Brad Bussie, Chief Information Security Officer here at e360. Thank you for joining me for the State of Enterprise IT Security Edition. This is the show that makes IT security approachable and actionable for technology leaders. I'm happy to bring you three topics this week. The first, the House pushes for a U.S. ban on TikTok. The second, Europe's world-first AI rules get final approval. And the

[00:01:00] third, your car is watching you. So with that, let's get started. So the first one, the House pushes for a U.S. ban on TikTok. The House of Representatives passed a bill on Wednesday that requires TikTok owner ByteDance to sell the social media platform or face a total ban in the United States.

[00:01:27] And I think TikTok made a tactical error by forcing a pop-up that they said was only distributed to TikTok users over the age of 18. And what happened is Congress was bombarded by calls from a lot of people who are, I would say, under the age of 18, and completely swamped with calls straight from the app.

[00:01:56] And it had, I'd say, an adverse effect, because the vote in the House was 352 to 65.

[00:02:10] And that basically means the unanimous committee approval that happened last week was ratified, and the bill is now moving on to the Senate. And it's interesting because the

[00:02:30] vote in the House probably represents the most concrete threat to TikTok in that ongoing political battle. And really, it's all about the allegations that the China-based company could collect sensitive user data.

[00:02:45] We talked about this in a previous episode, and that they could politically censor content. And I think they proved the point a little bit with this pop-up that TikTok decided everyone needed

[00:03:00] to see, regardless of your user history, your browsing, anything like that. So I think they kind of brought this on themselves a little bit.

[00:03:11] And what's interesting is TikTok still keeps saying that they haven't shared any data coming from the U.S. with China, but they can't prove it. So therein lies the challenge. So the bill, it's moving to the

[00:03:30] Senate, and I would say this is where its destiny is uncertain. There are already a couple of senators who have called out, how constitutional is this, really?

[00:03:42] To name a specific company and ban them. So it's interesting, because we were thinking about how to respond to this as an organization at e360. I was talking with our marketing team, and we were debating, should we make a meme

[00:04:00] that says "Free TikTok," or one that has the logo maybe behind bars. In the end, we're still faced with people choosing to put their information up for the highest bidder.

[00:04:14] And some of us just choose not to participate. So the second issue that I wanted to talk about today is the world's first major act to regulate AI being passed by European lawmakers.

[00:04:30] So lawmakers in the European Parliament voted overwhelmingly in favor of the Artificial Intelligence Act. So that's five years after regulations were first proposed.

[00:04:45] Kind of mirrors what we've been doing in the U.S., as far as what our executive order just looked like. So the act is expected to serve as a global signpost for other governments grappling with how to regulate the

[00:05:00] fast-developing technology. So how does the act work? Like many EU regulations, the AI Act was initially intended as consumer safety legislation, and like many other regulations, they're taking a risk-based approach, and that's to products, services, anything that's using artificial intelligence.

[00:05:27] Now, if you look at it,

[00:05:30] they're seeing AI in things like medical devices or critical infrastructure, things like water, electric, communications. Those are going to face tougher requirements, like using high-quality data and providing clear information to users. And some AI uses are going to be banned completely because they're deemed to pose an unacceptable risk, like social scoring systems that govern how people behave, some types of predictive policing,

[00:06:05] and emotion recognition systems in schools and workplaces. I think I could pretty much agree with all of those. Other banned uses would include things like police scanning faces in public using AI-powered remote biometric identification systems.

[00:06:30] Now, I think they will use this for things like serious crimes, kidnapping, terrorism.

[00:06:37] So keep an eye on that, because I think it's not a no, it's just a more responsible use. Now, developers of things like general-purpose AI models, from European startups all the way up to OpenAI and Google, are going to have to provide a detailed summary

[00:07:00] of the text, pictures, video, and other data on the internet that is used to train the systems, as well as follow EU copyright law.

[00:07:13] And I think the U.S. is probably going to end up following this as well. And AI-generated deepfake pictures, video, and audio, we talked about this in a previous episode, those are going to need to be labeled as artificially

[00:07:30] manipulated. So the question comes: this extra scrutiny is going to be for the biggest and most powerful AI models, because they're seen as posing the most systemic risk, and they include a lot of the ones we've talked about before: OpenAI's GPT-4, as well as Google Gemini. And the EU says it's worried that these powerful AI systems could cause

[00:08:00] serious accidents or misuse of far-reaching information that could cause a cyber attack. They also fear Gen AI could spread harmful biases across many applications and affect many people. So the big question that I had was, do the European rules influence the rest of the world? And I would say yes and no.

[00:08:31] Biden proposed an executive order back in October. China, they've proposed a Global AI Governance Initiative. So I'm looking at this as kind of like the GDPR. It's going to go into effect and be enforced in the EU starting in May or June of 2024. And I think that will help lead the way when it comes to the rest of the world.

[00:08:58] Now, this is going to be a bit of a process. The rules for general-purpose AI systems, we'll just say chatbots, are going to apply a year after the law takes effect. So by mid-2026, there should be a complete set of regulations, which includes requirements for high-risk systems that will then be enforced.

[00:09:23] And speaking of enforcement, each EU country will set up its own AI watchdog, where citizens can do things like file a complaint if they think they've been the victim of a violation of the rules. And meanwhile, in Brussels, they're going to create an AI office tasked with enforcing and also supervising the law for general-purpose AI systems.

[00:09:53] Part of this that caught my eye was that violations of the act could draw fines of up to 35 million euros, so that's like $38 million, or 7 percent of a company's global revenue. So as this starts to unfold, it could be pretty expensive for some of these AI organizations. Now, the third thing to talk about today: your car is watching you and sharing that data with your insurance provider.

[00:10:28] So unless you're driving something from the 1980s or maybe before, chances are your data is being shared. This means that your car, and the services you use in it, like GPS and satellite radio, can collect data such as your contact info, race, immigration status, and really any other personal inferences the systems can make about, like, where you're going.

[00:11:00] And similar to a previous story about a vending machine that was scanning your face, there's other data that the vehicles are silently gathering, including things like when you brake too hard or you rapidly accelerate. And then that data is shared with a data broker, and then that data broker sells the information to car insurance companies to help tailor an appropriate policy for the driver. One data broker

[00:11:34] I came across was LexisNexis, and they had a 258-page report on a driver whose insurance went up 21%. He was curious, why did my insurance go up? And he was pointed to this report as the reason why. And I always thought, you know, you needed one of those devices that you plug into your car's maintenance port, and then that feeds into an app.

[00:12:05] And then your insurance goes up or down based on your driving habits. But it looks like, with the smartening of our vehicles, those days are gone. Now, the fine print here is that most major automakers' privacy policies have no opt-out choice, they don't offer encryption for our data, and no U.S.

[00:12:33] brands really have a way to totally delete your info. So if you're looking for that right to be forgotten from a privacy policy standpoint, it doesn't exist. In one study, there were like 19 car companies that even specify that they can sell your data to brokers, marketers, or dealers. So with that, thank you for tuning in.

[00:13:01] See you next time.

Written By: Brad Bussie