The State of Enterprise IT Security Podcast - S1 EP. 4: ChatGPT Team Licenses, ChatGPT Store Risks, Have I Been Pwned?

Join Brad Bussie, CISO at e360, in Episode 4 of the State of Enterprise IT Security Edition as he explores OpenAI's Teams, the ChatGPT store, and updates to "Have I Been Pwned?" Discover key insights into privacy, innovation, and cybersecurity vigilance.

Overview:

In the fourth episode of the State of Enterprise IT Security Edition, Brad Bussie, Chief Information Security Officer at e360, digs into pressing cybersecurity topics and offers vital insights for technology leaders. This episode explores the nuances of OpenAI's Teams, the ChatGPT store, and the significant addition of a new data set to "Have I Been Pwned?" Bussie breaks down complex issues, making them accessible and actionable.

OpenAI's Teams: A Leap Towards Privacy

OpenAI introduces "Teams," a new pricing model aimed at enhancing privacy within ChatGPT interactions. For $25 a month, Teams ensures that inputs remain confidential and aren't used to train the model. Bussie highlights: "What you're typing in... it's going to go to train the model, which means if there's enough information in there, anyone with access can get that information back out of ChatGPT." This development marks a critical step towards securing user data in the age of AI.

The ChatGPT Store: Innovation Meets Responsibility

The launch of the ChatGPT store opens a Pandora's box of possibilities and challenges. It allows developers to create and share GPTs, fostering a community of innovation. However, Bussie raises important questions about privacy and misuse: "You have to be aware, they probably don't have the same privacy concerns as like the enterprise version or the Teams version does." He urges developers and users alike to consider the acceptable use policy and the ethical implications of their creations.

"Have I Been Pwned?" and the Importance of Vigilance

Bussie discusses the addition of a statistically significant data set, Naz.API, to "Have I Been Pwned?", emphasizing the ever-present threat of data breaches. The platform's role in individual and organizational cybersecurity is crucial, highlighting the need for constant vigilance and proactive measures like multi-factor authentication and passkeys.

Brad Bussie's insights provide a roadmap for navigating the complex landscape of cybersecurity, emphasizing innovation, privacy, and ethical responsibility. As technology continues to evolve, these principles will be paramount in safeguarding the digital world.

Key Topics Covered:

1. Privacy in AI: The introduction of Teams by OpenAI is a significant move towards ensuring user privacy in AI interactions.

2. Ethical Innovation: The ChatGPT store showcases the balance between fostering innovation and maintaining ethical standards in AI development.

3. Proactive Cybersecurity: The update to "Have I Been Pwned?" serves as a reminder of the importance of being proactive in personal and organizational cybersecurity efforts.

Read the Transcript:


[00:00:00] That's something that sits out on compromised machines and collects credentials, whether it's keylogged, whether it's scraping files, whether it's sitting inline. It's on these devices, and then it's feeding back to a central repository. [00:00:20] All right. Hey, everybody. I'm Brad Bussie, Chief Information

[00:00:40] Security Officer here at e360. Thank you for joining me for the State of Enterprise IT Security Edition. This is the show that makes IT security approachable and actionable for technology leaders. I'm happy to bring you three different topics this

[00:01:00] week. The first one is OpenAI. They've introduced something new called Teams, and I'll tell you a little bit more about that and some of the privacy components. The second is the introduction of the ChatGPT

[00:01:20] store. What is it? And are there some potential problems for enterprises? And the third, Have I Been Pwned? They added a statistically significant data set, and I'll talk a little bit more about that. So with that, let's get started.

[00:01:40] So first and foremost, OpenAI has a new pricing model. You can still get it for free, but you only have access to the GPT-3 model. You can pay for an individual license, which gives you

[00:02:00] access to DALL·E and GPT-4. And that's, I think, running probably $20 when this was recorded; could be more when you listen to this, we'll see. But I think what's interesting is there's now a Teams option, and it's $25 a month.

[00:02:20] But what you get as part of that Teams option is privacy. So anything that you ask or put into the prompt will not be used to train ChatGPT. So that's

[00:02:40] something that some people know, but not everybody knows: when you type something into a lot of these publicly facing and available LLMs, whether it's Google Bard, whether it's Bing Chat, or whether it's ChatGPT,

[00:03:00] what you're typing in, it's going to go to train the model, which means if there's enough information in there, anyone with access can get that information back out of ChatGPT. And it's kind of funny. I've actually done this as an author. I was

[00:03:20] asking ChatGPT, you know, do you know who the author Brad Bussie is? And when I first started this, it had no idea who I was, which was sad, but that's okay. So what I ended up doing is I started to feed it more information. Like, hey, let me tell you who I am. Let me tell you about the books I've written. Let me give you some synopses as

[00:03:40] well as descriptions, things like that. Next thing I know, now when I ask, do you know who this person is, me, it actually comes back with something. So that's a very finite, kind of focused use case. But you can imagine, if you start asking questions, maybe it's something medical related,

[00:04:00] maybe it's something just concerning overall about your privacy. You're really giving it away. So what I want you to think about is, with the Teams option, you can do this, and then whatever you are prompting the LLM to

[00:04:20] bring back to you, it's private, it's just for you. So I think that's definitely interesting. So let's say you do the Teams option. You'll still have access to things like DALL·E 3. You'll get GPT-4 with vision. You'll get the browsing,

[00:04:40] the advanced data analysis. I know what you're thinking. This is not a commercial for ChatGPT. I don't get paid for this endorsement. I'm just telling you a little bit about it. This also is a secure workspace for your team. So when you go in and you sign up for this, you can add

[00:05:00] other people to the team, and then you can collaborate and do different things. I won't go too much into that. Um, and I think it's interesting too, because I've heard people refer to DALL·E, or

[00:05:20] D-A-L-L-E 3; they call it "Dolly." So I think it just depends on how you say it. So if you're wondering what I'm talking about, that's what I'm talking about. And then I think the other thing that you should keep your eye on is there is an admin console for the

[00:05:40] workspace and team management. And then you do get early access to some of the new features and improvements. So it feels a lot like the enterprise option, but you don't have to get the big quote and spend a lot of time and/or money. It's more accessible for everyone else.

[00:06:00] And I think this is definitely the most approachable way to get started with one of these LLMs in a team setting without having to go full enterprise. So one thing that I would urge you to do is to take a look at the acceptable use policy. There are some guidelines on how to use AI, and these are things that are constantly updated. So that's something for not just the machine learning

[00:06:40] slash LLMs out there, but that's also for organizations. And we'll talk about that in some podcasts in the future, where we're going to really define the governance around the usage of these different platforms and what enterprises and

[00:07:00] organizations need to be thinking about when they allow their users to start leveraging this technology. So, second topic: the introduction of the ChatGPT store, and are there potential problems for

[00:07:20] enterprises? So, this was something that we talked about previously, but to bring you back up to speed: really, the ChatGPT store is just like any other app store. You can go there and you can shop for GPTs that other people have

[00:07:40] made. And they create the GPTs leveraging an API that is given by OpenAI and ChatGPT overall. So what that does is it allows someone who has a developer background to go in

[00:08:00] and create their own GPT, their own large language model, but they're leveraging what OpenAI built.
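
(Editor's note: the store's GPT builder itself isn't shown here, but to make the idea concrete, below is a minimal sketch of the kind of call a developer-built assistant rides on. It assumes the openai npm package with an OPENAI_API_KEY in the environment; the model name and prompts are illustrative placeholders, not anything from the episode.)

```typescript
import OpenAI from "openai";

// Sketch of a developer-built assistant leveraging what OpenAI built.
// Assumes the `openai` npm package and OPENAI_API_KEY in the environment;
// the model name and prompts below are illustrative placeholders.
const client = new OpenAI();

const completion = await client.chat.completions.create({
  model: "gpt-4o", // placeholder; use whichever model your plan exposes
  messages: [
    { role: "system", content: "You are a classic-car repair assistant." },
    { role: "user", content: "My engine misfires at idle. Where do I start?" },
  ],
});

console.log(completion.choices[0].message.content);
```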

[00:08:20] So you're probably asking yourself, well, what use does that have? I spent a lot of time looking at this app store, and there are absolutely just amazing things out there already. Like if you really like cars, if you really like, you know, motorcycles, if you really like other things, there are GPTs now that are built that cover anything and everything you want to know about fixing a car or whatever. You're probably thinking, well, I

[00:08:40] could just get that information on Google. Well, this goes a step further, because you're prompting just like you do in any other of these GPT examples. And next thing you know, it's giving you step-by-step instructions on how to do something, and it's saying, oh, and by the way, other

[00:09:00] people have had a problem with, you know, the distributor cap on whatever. And so it gives you an idea of what you should be doing. So it's pretty cool. And you definitely get a lot of that predictive learning. So as people are using these GPTs from the store,

[00:09:20] they're just going to get better. And that brings up a point: if you're feeding these GPTs with information, you have to be aware they probably don't have the same privacy concerns as like the enterprise version or the Teams version does. Now, what I did is I actually went through and looked at some of the policies, and that includes privacy policies, acceptable use, and things that OpenAI is telling developers they can and can't do with the platform.

[00:10:00] It used to be a lot more restrictive, but now there are some things that you should think about. So first and foremost, they're in the business of maximizing innovation and creativity. So they want a thriving economy, very similarly to how Apple did this back in the day when they released their App Store, and people were, in the beginning, like, well, what am I going to use an app for on my phone? Well, now we really can't survive without those apps on our phone. So it's just that forward thinking, you know, what can you do

[00:10:40] with the GPT customized? And I would actually challenge you to say, what can't you do? It's pretty amazing, but there are some restrictions, such as, you know, you still have to comply with laws. And one of the primary tenets is that you don't compromise the privacy of others. And you need to make sure that when these are designed, you're not engaging in a regulated activity. So you heard me talk on a previous podcast about the GPTs that have been created for hackers.

[00:11:20] But what I don't think we've spent enough time on is the fact that some of those LLMs have actually been stolen and copied, and they are not sanctioned by OpenAI or Google or anyone else. So they're using an LLM. It may look and feel very similar to what OpenAI has built, but it's definitely not sanctioned. It doesn't have the same privacy controls and concerns. But what OpenAI is saying is anything on their app store has to comply with some of these things, like, you can't use this service to harm yourself or others.

[00:12:00] So one use case was, you know, you can't promote a service that's going to do self-harm, or develop or use weapons. I mean, when I was growing up, everybody was trying to get their hands on something called the Anarchist Cookbook. We were just a bunch of teenagers who didn't truly understand what that was all about, but that book is pretty much what OpenAI is trying to prevent, which is, hey, how do you make a grenade, or how do you make dynamite, or how do you do these things?

[00:12:40] Because, you know, if you Google search that kind of stuff, fairly quickly someone will be showing up at your door. Same thing with OpenAI and the GPT components. You just can't design something that has anything to do with, you know, sensitive things like that. You can't leverage what you're building to spam, mislead, bully, harass, defame. I've had a lot of discussions recently, with the election coming: well, what's going to happen with AI?

[00:13:20] The fact that we can fake somebody's appearance, fake somebody's voice; you could create a GPT to spread propaganda. Well, that's what these privacy controls are trying to prevent. So look for a lot of this, um, I would say fear-mongering of, oh, they're going to create something using OpenAI that's going to cause just a panic related to elections. I don't think a lot of that's going to happen because of the safeguards that are built in. Sure, you can trick it.

[00:14:00] Not easily, but it can be done. And we'll talk about that, I think, in a future episode. But honestly, just know that there are some, I would say, safeguards out there, and be thinking, when you're getting one of these GPTs, that they were designed with that in mind. So overarchingly, I would say, what are we looking at? We are not compromising the privacy of others. And one of the big things is, you know, I just said we're prompting a GPT, we're giving it information, but as the developer of another GPT, you're not supposed to collect or process or disclose anything that can be construed as personal data, and you have to comply with legal requirements.

[00:15:00] So think about the CPRA in California, or CCPA, as it used to be called. That's a privacy law, and you have to comply with those kinds of things, even with a GPT. And the European GDPR; there are lots of privacy concerns when it comes to this stuff. So another thing you can't do is use biometric systems for identification or assessment, including facial recognition, which is interesting, because that's pretty much how we're getting into our phones, our laptops, everything else. But the reason for this is they don't want to facilitate spyware, communication surveillance, or monitoring of individuals.

[00:16:00] So for a while, people were creating caricatures of their faces with some of these platforms that were available on the Google Play store and even the Apple App Store. So essentially we were trying to create these cool portraits that looked cartoony, or they would change our faces, before these laws were in place. However, what was happening is the companies were collecting all of that telemetry about your face. And what they found is you could actually unlock a phone, or you could unlock a device, with a face that had been scanned into this app. So that was something that was very concerning, and why OpenAI put that into their platform privacy policy, which is: you just can't use that data that you've collected, because of things like that.

[00:16:40] So I would say another piece of this is, we talk often about money, and you can't leverage the platform to facilitate things like real-money gambling or payday lending. They actually put a section in here that says you can't use it to engage in political campaigning or lobbying, including, and this is the important part, generating campaign materials personalized to a target or specific demographic. So I think that was, like, the exciting thing for a lot of people: look at all this data we're going to be able to pull out of a GPT. Can't do it; not supposed to be able to do it.

[00:17:40] And that really brings me to the piece that I found most interesting. We are talking about artificial intelligence; you know how I think about it, it's actually augmented intelligence. It still needs to be prompted to do something. But the question is, how is OpenAI checking up on all of these that are being created? And their answer to this is they use a combination of automated systems, human review, and user reporting to find if there has been a potential violation of policies when they're assessing a GPT.
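
(Editor's note: OpenAI hasn't published the internals of that review pipeline, but its public moderation endpoint gives a feel for the "automated systems" piece. A minimal sketch, assuming the openai npm package; this illustrates the concept rather than the store's actual tooling, and the input string is made up.)

```typescript
import OpenAI from "openai";

// Sketch of an automated policy check using OpenAI's public moderation
// endpoint: an illustration of the concept, not the store's actual
// (unpublished) review pipeline.
const client = new OpenAI();

const result = await client.moderations.create({
  model: "omni-moderation-latest",
  input: "Example text a user submitted to a custom GPT",
});

const [verdict] = result.results;
console.log("Flagged:", verdict.flagged);
console.log("Categories:", verdict.categories); // e.g. harassment, violence
```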

[00:18:20] So this is just a really good way of saying: should you find something on that store that violates the privacy policy, I would urge you to report it, because it's going to make the platform better. And I'm sure you're thinking what I was thinking, which is, well, can't they have their AI do that? Again, we're still in the prompt phase of these large language models, and machine learning, I think, is going to be really the next step in being able to do that kind of stuff, which is going to be looking at the code and seeing what is the intent, what is it written to do. We're getting closer to that. We're just not there yet. So our third topic today is Have I Been Pwned, and the fact that the site has found a statistically significant data set, which is titled Naz.API,

[00:19:20] and added it to their database, their data set. It's 104 gigabytes of data. And one third of all of the emails and accounts in that data set are actually net new, meaning, you know, this data came from something, and what they've traced it back to is what's called a stealer log. And that's something that sits out on compromised machines and collects credentials, whether it's keylogged, whether it's scraping files, whether it's sitting inline. It's on these devices, and then it's feeding back to a central repository. So if you haven't ever looked at the website, if you haven't ever been to Have I Been Pwned, it's definitely interesting for an end user.

[00:20:20] Is it as useful for a cyber defender like me and my organization? Typically not. I'm typically getting that information from a threat intelligence platform that's plugged directly into the dark/deep web. It's crawling that information looking for compromised users, compromised passwords, and just as much information as it can find. And then it brings that up, aggregates it, and identifies if there's a risk to the enterprise organization. But that doesn't necessarily mean that extends the protection to you and your Gmail account or your Hotmail or your Outlook or whatever. So go out, take a look, and if you just put in your email address, it'll tell you what data breaches that particular account has been a part of.
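
(Editor's note: the website covers the casual check; for anyone who wants to script the same lookup, Have I Been Pwned also exposes it through a v3 API. A minimal sketch, assuming Node 18+ for built-in fetch; the address is a placeholder, and the API key, sold at haveibeenpwned.com, is read from an environment variable.)

```typescript
// Sketch: query Have I Been Pwned's v3 API for breaches tied to an address.
// Assumes Node 18+ (built-in fetch) and an HIBP_API_KEY environment variable.
const account = encodeURIComponent("you@example.com"); // placeholder address

const res = await fetch(
  `https://haveibeenpwned.com/api/v3/breachedaccount/${account}?truncateResponse=false`,
  {
    headers: {
      "hibp-api-key": process.env.HIBP_API_KEY ?? "",
      "user-agent": "breach-check-sketch", // HIBP rejects anonymous requests
    },
  },
);

if (res.status === 404) {
  console.log("No known breaches for that address.");
} else if (res.ok) {
  const breaches: { Name: string; BreachDate: string }[] = await res.json();
  for (const b of breaches) console.log(`${b.Name} (${b.BreachDate})`);
} else {
  console.error(`HIBP returned ${res.status}`);
}
```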

[00:21:20] Some of them are old, from 2012, from 2018, from 2020, whatever. I would be shocked if you still have the same password from when those breaches actually happened. I would actually say shame on you if you do, and don't tell me, because I don't want to know. But what I think it does is it helps to identify: has there been one more recently, since perhaps, you know, you have changed your password, or maybe you've been meaning to set up multi-factor authentication but you haven't. And if there's one piece of advice I can give you, it is: on your personal email,

[00:22:00] do yourself a favor. If that's the one account that you enable for multi-factor authentication, do it. Because if I'm an attacker and I'm going after someone, the first thing I'm going to try to do is get their email username and password. Why? Because then I can take that information and I can go to any other website and I can start resetting your accounts: credit cards, banking, whatever. So I would say definitely enable multi-factor authentication on that account. Also do it on banking. And if you can do it, don't do the SMS option, don't do the text-based option. Do something that is on an authenticator. Google has an authenticator. Microsoft has an authenticator. They're free. And that just gives you an additional layer of protection on a very significant account for attackers.
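
(Editor's note: those authenticator apps all implement the same open standard, TOTP (RFC 6238): a shared secret plus the current 30-second time step yields a short-lived six-digit code, so there's no code in transit for an attacker to intercept the way there is with SMS. A minimal sketch of the math in Node; the base32 secret is a made-up demo value.)

```typescript
import { createHmac } from "node:crypto";

// Decode the base32 secret an authenticator app stores from the QR code.
function base32Decode(s: string): Buffer {
  const alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";
  let bits = 0;
  let value = 0;
  const bytes: number[] = [];
  for (const c of s.toUpperCase().replace(/=+$/, "")) {
    const idx = alphabet.indexOf(c);
    if (idx < 0) continue; // skip separators/whitespace
    value = (value << 5) | idx;
    bits += 5;
    if (bits >= 8) {
      bytes.push((value >>> (bits - 8)) & 0xff);
      bits -= 8;
    }
  }
  return Buffer.from(bytes);
}

// RFC 6238: HMAC-SHA1 the current 30-second time step, then truncate.
function totp(secretBase32: string, step = 30, digits = 6): string {
  const counter = Buffer.alloc(8);
  counter.writeBigUInt64BE(BigInt(Math.floor(Date.now() / 1000 / step)));
  const digest = createHmac("sha1", base32Decode(secretBase32))
    .update(counter)
    .digest();
  const offset = digest[digest.length - 1] & 0x0f; // dynamic truncation
  const code = (digest.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
  return code.toString().padStart(digits, "0");
}

console.log(totp("JBSWY3DPEHPK3PXP")); // demo secret, not a real account
```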

[00:23:00] Everyone's trying to get it. Don't let them get it. And I would also say, because I'm always trying to give you some good advice: when I look at the statistics on Have I Been Pwned, there are 12 billion, almost 13 billion, pwned accounts, and over 741 websites listed just today, which means they're compromised. What it doesn't say is, are they still compromised?

[00:23:40] All it's saying is that at one point in time, your username and your password, or your Social Security number, were part of a breach. It's on the dark web. Somebody has it. So making sure you change those passwords often is critical. But I would argue that we should be looking at doing something which is actually a lot easier to do now than it was before, which is: get rid of your passwords completely. Most applications and most websites will now allow you to do something called passkeys.

[00:24:20] And really, we're aiming to make all of your accounts more secure by using passkeys, because it enables passwordless login. It replaces the traditional password. You don't have a password anymore. And the thing that I want you to take from this is that each passkey is a unique digital key that cannot be reused. And that's generally where we run into trouble: when you reuse passwords. And we all do it. Even when you have a password manager, you're sitting on your phone, you're signing up for something, and you're like, ah, I have to go in and do that on my laptop because it's not as easy to do on my phone. So you use that same password, where you're like, eh, this account doesn't matter that much. You don't need to do that. You can leverage a passkey, which then stores an encrypted format of that passkey on your device instead of a company's server.
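
(Editor's note: under the hood, passkeys are WebAuthn credentials: the site sends a challenge, your device mints a key pair, and only the public half ever leaves the device. A minimal browser-side sketch; the relying-party name, user handle, and challenge below are placeholder values that a real site would generate and verify server-side.)

```typescript
// Browser-side sketch of creating a passkey with the WebAuthn API.
// The challenge and user ID are placeholders; a real site issues them
// from its server and verifies the registration response there too.
const challenge = crypto.getRandomValues(new Uint8Array(32));
const userId = new TextEncoder().encode("user-handle-1234");

const credential = await navigator.credentials.create({
  publicKey: {
    challenge,
    rp: { id: "example.com", name: "Example Site" },
    user: { id: userId, name: "alice@example.com", displayName: "Alice" },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    authenticatorSelection: {
      residentKey: "required",      // discoverable credential, i.e. a passkey
      userVerification: "required", // biometric or device PIN
    },
  },
});

// The private key never leaves the device; only the public key and an
// attestation go back to the site to store against your account.
console.log(credential?.id);
```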

[00:25:20] And really, what does that mean? Well, it's keeping it safe in the event of some kind of data breach. So even if that company gets breached, they only have half of what they would need in order to get your data. So I would recommend, if you want to get comfortable with this concept: grab a password manager. There are a couple of different ones available for consumers, and try a passkey with one of your own accounts. Just pick one and look at how easy it is to, A, set up, B, use, and C, keep you safe. So thanks again for spending some time with me and e360 Security. Have a good one.

Written By: Brad Bussie