Data 4 All

40 - Responsible Data and AI with Franklin Graves

February 27, 2024 | Charlie Yielding and Charlie Apigian | Season 5 Episode 40

Ever grappled with the idea that the music you stream, the content you consume, and the data you share could intersect with the intricate web of technology law? We're peeling back the layers of this digital onion with Franklin Graves, an attorney whose expertise melds legal know-how with genuine enthusiasm for content creation. Together, we're traversing the terrain of data provenance and the ethical quandaries in AI development, crucial for everyone from up-and-coming YouTubers to the boardrooms of tech giants.

Wading deeper into the data pool, our episode traverses the tightrope walk of legal databases, access to justice, and the spark of controversy ignited by AI's hand in generating legal documents. It's a world where the scales of public good and proprietary interests are in constant flux, a balance made even more precarious by the rise of deepfakes and the push for platform liability. Through it all, we underscore the shared responsibility in content moderation, and why every click, every post, carries weight in the digital ecosystem.


Franklin Graves
Franklin is an experienced in-house counsel currently serving as a member of the technology law group at HCA Healthcare, Inc., where he provides guidance and strategic counsel for corporate technology initiatives.

He is also a Lecturer on Law with New England Law | Boston where he teaches Cyber Law, and an Affiliated Faculty with Emerson College’s Business of Creative Enterprises MA program where he teaches business and IP law. 

Franklin previously held roles on the commercial legal team at Eventbrite, Inc. and Naxos Music Group. Franklin also runs the weekly newsletter, Creator Economy Law, on LinkedIn.

He regularly contributes to IPWatchdog, Tubefilter, and Passionfruit, as a means to educate creators and raise awareness of all legal aspects of the creator economy. He is based in Nashville, TN.

Franklin's Social Links:

Data 4 All Social Media Links

Charlie Yielding Social Media Links 

Charlie Apigian Social Media Links

For more information please visit us at www.data4all.io or email us at charlie@data4all.io.

Transcript

Speaker 1:

On today's podcast of Data 4 All: responsible data. Welcome to the Data 4 All podcast.

Speaker 2:

I'm Charlie Apigian and I'm Charlie Yielding, and we want to empower you to think different with data.

Speaker 1:

And on today's podcast, we are going to focus on AI and data, and what is being done to ensure that we're using it responsibly.

Speaker 2:

Well, but before we get to that and our guests and whatnot, we want to thank the Nashville Technology Council for letting us use their wonderful premises to record our wonderful podcast.

Speaker 1:

Yes, and it's our second time in here. Hopefully we're getting some of the bugs out; we're still playing with things. But the great thing about today is, as we were talking, we get a lot of people talking to us about responsible AI, and what we were most interested in is not just AI, but also responsible data in general, with privacy, with everything. And, Charlie, I said I've got one guy I know that we should have on, because of his long podcasting career, as we found out, as well as his knowledge in this space. So, Franklin Graves, thank you so much for being here.

Speaker 3:

Thank you both for having me. Yeah, and I will say it's nice to hear that intro music at the right speed, because when I'm normally listening it's at like 1.5x, and it's like, oh, that's a banger if I just slow it down a little bit. So does my voice sound different?

Speaker 1:

Oh yeah, it's one of those things. When I hear my voice at 1.3x, which is what I use, I'm like, boy, why am I talking so fast? Why am I doing that? And then I realize, oh, I've got to go back to 1x speed when I'm listening to me.

Speaker 2:

Yeah, I can listen to myself at normal speed and at 1.5x. Anything past 1.5x is unreal for me.

Speaker 3:

I can't understand, I can't keep up mentally, I just can't handle it.

Speaker 1:

Well, for today, we brought you in not just because of your podcasting chops, but because you are a lawyer, and not your lawyer, if you're listening. That's right. And you're here today as a knowledgeable person in AI; that's why we have you on here today. But, Franklin, why don't you go ahead and tell us who you are and a little bit about yourself? We always ask about a data journey, so we're going to force you to talk about your data journey as well.

Speaker 3:

Yeah, thank you. So, as you said, I'm an attorney. I work for a large healthcare company here in town in Nashville, but outside of that I'm also an adjunct professor. I do a lot of blogging and creator economy work, working pro bono, which means low-fee or no-fee legal services, with YouTubers, focused specifically on what are called edu YouTubers, so people that are creating educational content, like podcasters.

Speaker 1:

Hey, I know.

Speaker 3:

And answering their legal questions and providing education. That's the biggest thing: the more educated you become in the space, the better you can protect your brand, your business, and all that going forward. So that's what I love to do on the side. I love to blog about it, write about it, and that's kind of what got me going down this path. But, yeah, full time I am a technology attorney, focused really only on technology. AI, of course, is everywhere now.

Speaker 3:

Data has always been a key aspect of that as well, for a long time. But, yeah, I love listening to everyone on the podcast talk about their data journey. So for me it even goes back to when I was in middle school. I grew up as a child of the internet: Napster, Kazaa, LimeWire, all that kind of fun stuff. So for me, music metadata became one of the first touch points I had for understanding the importance of data management, whether it was having a very clean-looking iTunes library, back before you could go and buy it all, or having Winamp load metadata correctly on there.
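
(A minimal sketch of the kind of ID3 tag cleanup Franklin is describing, using Python's third-party mutagen library; the file name and tag values are hypothetical, and the file is assumed to already carry an ID3 tag:)

from mutagen.easyid3 import EasyID3

# Load the existing ID3 tags from an MP3 (raises if the file has none).
tags = EasyID3("01 - Some Track.mp3")

# Overwrite the messy metadata that a Napster-era download shipped with.
tags["artist"] = "Correct Artist"
tags["album"] = "Correct Album"
tags["title"] = "Correct Title"

tags.save()  # write the cleaned tags back into the file
print(dict(tags))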

Speaker 2:

Stuff like that I was always, always into. Oh, hang on, yeah, yeah, Winamp. So you'd go replace the cover art if it was wrong? Oh my gosh, absolutely.

Speaker 3:

Because sometimes it would pull from the database and I'd be like, no, that's not clear enough, that's not good enough, or I want an alternate cover, stuff like that. Oh yeah, all the time.

Speaker 2:

I just loved it when Windows would display my record appropriately, once I started collecting digital music.

Speaker 1:

Yes, so my first pod, well, it wasn't an iPod, it was an MP3 player, a Philips one. It was this little round thing, and, yes, everything came on there, and sometimes it would be off by one song, and so I'd have to go in and fix things on my Windows 95 computer and then hope; I felt like it was still going to skip. Of course I did have the CD Walkman, and of course I had Walkmans. So you talk about being part of the internet world as a kid; I hate hearing that kind of stuff.

Speaker 3:

Well, no, no, to be fair, I did have tapes. I would record songs off the radio when I was in elementary school, make mixtapes, stuff like that. But yeah, by the time I was in middle school or early high school, it was burning your own CDs, buying CDs and ripping them so you had them digitally, things like that. And a lot of that got me interested in understanding data, and where data comes from. What is Gracenote in the music industry? Things like that. Or Nielsen now, I guess. But yeah, it's kind of fun to think about what data meant in that early digital age, and for me that was a lot of it. That's awesome.

Speaker 1:

So, before we get into the fancy topics, because I know that Charlie loves this topic of data privacy and I'm going to let him run that show, I'm interested in you as a lawyer. Where are you seeing data or AI really starting to have an effect, and what is it that you do from a data perspective that is delightful and joyful for you in your job?

Speaker 3:

Yeah, so delightful and joyful for me and my job.

Speaker 1:

Wow, data is always delightful and joyful, yeah.

Speaker 3:

So, data can mean a lot of different things; obviously you talk about that on your podcast a lot. Access to information, that's data. So, having access to contract information for negotiation, even in the creator space; understanding what other creators are getting paid for their brand deals; having transparency. Because a lot of times in the creator space there's not one union, there's not one agency or group that represents creators, and the word creator means so many different things too. It can encompass podcasters to vloggers, to bloggers or Twitch streamers, you name it. The vast scope of it has just exploded, and so it's hard to have structured data points and understandings that are broadly applicable across the whole industry. So that's why I love seeing the studies that get put out. A lot of them are based on individual areas of the creator economy space, but they dive into the numbers behind things, what the usual prices being paid are, or ad rev payouts on different platforms, and a lot of that can help inform a creator's business strategy, because a lot of it is a business for them, understanding where they can go to chase ad dollars.

Speaker 3:

Or maybe there aren't ad dollars on this platform, but they have a creator fund. So what do those payouts look like? Having access to that kind of data is super important, and a lot of times creators come together and publish it. There are a lot of nonprofit organizations out there that compile it and put it out there, so there's some type of understanding. But that's kind of a fun part of it.

Speaker 3:

And then in my day job, the legal industry is getting flipped almost upside down because of data and contracts, because of generative AI coming in and being able to generate contract clauses. Legal research is a little bit more nuanced, and data definitely impacts that. There's actually an ongoing case right now that goes to jury trial this summer regarding the ability to copyright aspects of legal holdings and the extracts that are made from them. It actually put a company out of business: Ross Intelligence was the company that started up and was doing AI work, not gen AI at that point, but early machine learning work, around contracts and around court decisions, and having equitable access to legal decisions, everything that empowers how we move forward as a society when someone sues another company, or just understanding general laws as well.

Speaker 1:

Wow. One real quick, and I keep saying one real quick: I did a talk earlier today with CIOs, and one of the questions was what jobs are going to be eliminated, which we get all the time. Are you seeing a job reduction or a task reduction? When there's a task reduction, that means it's the same job but with fewer people in it, versus that job being completely eliminated. What are you going to see in the law world?

Speaker 3:

So I look at it from two different angles. First is access-to-justice issues. In the legal world, that's having access to legal resources to defend yourself, or to go to court with an understanding of what your rights are and what the precedents are for particular outcomes. A lot of that is locked behind paid legal research tools, or just a lack of understanding and ability to access the right databases. You mean like LexisNexis and Westlaw and so on? Right, yeah, not to throw specific companies under the bus, but that's essentially it, some of those. And if you're thinking about indigent clients, or clients that can't afford an attorney and have to represent themselves, we're even seeing news headlines come up now where they're using ChatGPT to help draft something and it's just making up cases. So having access to a database that can help ground the outputs and actually tie them down to real cases, that's where data becomes hugely important from an access-to-justice standpoint. But of course, in defense of LexisNexis and Thomson Reuters, you have to look at it from their standpoint: they're investing massive amounts of money to encode these things, to build the databases, to maintain them, to make them accessible for a fee. So there's that element to it; we have to balance the business need against what the cost would be if the government or local governments tried to do something like that. And it even extends into PACER, which is the online federal database for accessing court filings, not just the ultimate decisions that are reached, but how complaints are filed and all the different filings that go along with a lawsuit through to when it's concluded. There you have to pay per page, and I literally mean per page: it counts not how many documents you download but how many pages are in each document. So there's a whole issue around access to justice and the ability to unlock access through the use of, more specifically, generative AI tools, but doing it in a responsible way, so that it's tied and grounded properly.
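
(Back-of-the-envelope on those PACER fees: the published rate is $0.10 per page, capped at $3.00 per document; check pacer.uscourts.gov for current figures. A quick sketch of what pulling a docket actually costs:)

# PACER charges per page of each document, not per download.
PER_PAGE = 0.10   # dollars per page (published rate; verify before relying on it)
DOC_CAP = 3.00    # per-document cap

def doc_cost(pages: int) -> float:
    """Cost of retrieving one document."""
    return min(pages * PER_PAGE, DOC_CAP)

# A 12-page complaint plus a 60-page exhibit:
print(doc_cost(12) + doc_cost(60))  # 1.20 + 3.00 = 4.20 dollars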

Speaker 3:

And on the other side, I look at it from my perspective, in my day-to-day life as a corporate attorney: it's going to unlock my ability to perform more efficiently. I'm not scared of it completely doing away with my job. Yes, there's an element where people can look up something themselves and try to find an answer, but they can do that already with Google. A lot of people come to me already with solutions, and I appreciate that, or they come with their own expertise. Let's say they work in marketing; they already understand advertising laws, they already understand CAN-SPAM and all this kind of stuff, and so I'm there to help facilitate and guide the discussion towards a more finite conclusion, or work through solutions to get to a better outcome for the company. So I see my job as more of a legal advocate and business partner, as opposed to just somebody that can draft a contract or read a law and try to interpret it.

Speaker 2:

Yeah, you're hitting on lots of things that we've discussed before as far as, like, how AI works in the workforce.

Speaker 2:

It's not just going to come in and take your job, but you, as a lawyer, can be empowered by it, and then other folks who are not lawyers can also be empowered by it. Having said that, though, they still need you to review the end product or give them, like you said, direction on where to go from here, because there's always going to be a need to review the work, and if you're not a lawyer, you're not going to catch those things, and I was having this same conversation with a developer earlier. It's like, yes, you can use GPT to build code and to develop all you want to. Having said that, though, you are not a good reviewer. If you don't know the vocabulary, if you don't know the process of development, you're not going to be able to review the end product for QC purposes and whatnot, and so you still need the expertise. It's just like now you, the lawyer, are going to be able to do so much more than you were before.

Speaker 3:

And arguably, as attorneys, we being the legal community, the legal profession, we're a little bit insulated to a degree, because we do have these laws in most states where it's unauthorized practice of law if you're not an attorney licensed in that state to provide legal services.

Speaker 3:

It gets a little iffy: how do you define legal services? How do you define provide, all that kind of stuff, and can you do it for yourself versus doing it for your company or a company you work for? So it's worth noting, at least in the legal field, there is this benefit, if you want to call it that, of self-preservation almost: you still have to be a lawyer, you still have to be licensed to give and provide legal advice. And I say that with the full understanding that lawyers get it wrong all the time. I mean, just last year there was that Avianca Airlines case, where the attorneys, I think it was outside counsel that they had hired, were preparing a court filing and used ChatGPT. They turned it in and didn't even know that it was making stuff up, and it's like, that's not good.

Speaker 2:

Yeah, that particular case was, you know, a milestone, because it's the first time a lawyer got lazy and got caught. But that's the perfect example, because that's what's always going to happen if you don't review it.

Speaker 1:

Yeah, well, it's still the same mentality: there are phases of the data discovery, or data problem-solving, process. There's the front end, which I always call the dilemma; there's the data and the insights in the middle; and then an action at the end. The dilemma and the action, even for lawyers, still need to be done by humans. You have to prompt it correctly, you have to ask the right questions to get to the real problem you're trying to solve. It's okay if AI, or filtering through searching, gets you the data and insights in the middle, but then what do you do? What's that next step? You know, knowing that that's a truck driver who can't take off work, if that was one of those kinds of cases, or if it's a corporation, all the different aspects, or just the nuances within healthcare that go case by case. So there's a lot to that. All right, enough about lawyers. I'm kidding, of course.

Speaker 2:

Get out of here.

Speaker 1:

Yeah, so we brought you on here for this idea, we pitched it to you, and you've been gracious enough to play along, about responsible data. So we're going to let you define that for us, and then we're going to fire at you and tell you why you're wrong. I love it. Go ahead. So if we say responsible data, what does that mean to you?

Speaker 3:

You're asking an attorney to come up with a definition and that's like what we love to do.

Speaker 3:

We love to define parameters. No, but for real.

Speaker 3:

So I think, as we were saying, this concept of responsible data stems from the concept of responsible AI, which is a rapidly growing topic in the tech industry. If you're on the model development side, what does it mean to be a responsible model developer? Even before that, what does it mean to be a responsible gatherer and collector of data, or generator of data? So you have those two buckets. And then, if you get to the point of providing the model as a service, or integrating it into your service, or allowing others to access it as a service that they can integrate into their services, it begs the question: okay, what does it mean to be responsible in that context? And then you have the output to the end user, the actions that are being taken, and all that; you have to ask what responsibility means in that column, that bucket of concentration. So responsible AI across the board means a lot of different things, and we don't have time to go into responsible AI as a whole.

Speaker 1:

Oh yeah, we do. We can go as long as you want.

Speaker 3:

I know right, but I think your listeners probably would glaze over at some point, so I think it's best just to point them.

Speaker 3:

Like, if you do a Google search, you can find that Microsoft, Google, Salesforce, pretty much every major company has published their commitment. So at least on the tech side, on the cloud provider side and some of the major platforms as well, even OpenAI and some of the others, Meta with Llama, they have these guardrails that they've self-imposed and put out there to identify ways in which they are protecting themselves, protecting the data that's going in, protecting people that are implementing those models or AI solutions and tools, and then, on the tail end, how it ultimately protects users. And we're seeing some of that play out now. At the current moment, for people who are listening right now, we're dealing with a lot around deepfakes, and that's a huge concern. That, specifically, is a great example of one area of focus within responsible AI: if you're making a model available, if you're making a tool available, if you're making a service available, or if you're the end user of it, what kind of restrictions do you have around that?

Speaker 2:

Real quick, what is a deepfake? So everybody's on the same page.

Speaker 3:

Yeah. So, deepfake: let's just take an image as the most popular example right now, using a tool like Midjourney, or Microsoft, I think, is one of the more recent ones whose tools were allegedly used to create the Taylor Swift explicit images that were going around. The term basically means taking a tool and creating a fake image that puts someone, normally in a negative context, a derogatory context, or a sexually explicit context. Taking an image of a known individual, or even, in some cases, unknown or lesser-known individuals, and generating a fake image that never existed before. So that's why it's a deepfake; that's what the terminology means. And the reason why some of the laws we're seeing proposed now are looking at it not just from the perspective of protecting celebrities is that previously, with name, image, and likeness rights and the privacy rights around that front, a lot of the federal-level and state-level stuff was really focused on protecting celebrities, because they wanted to protect their interests and be able to commercialize their name, their image, and their likeness. But now we're at a stage where that right needs to be protected for individuals.

Speaker 3:

Even those of us sitting here, people listening, we're all at risk of having someone do this. And if you have children, kids these days are even using these tools for revenge porn or to make fun of people. Bullying, cyber harassment. It's no longer just a high-profile celebrity saying, I don't want my image to be used without permission, or my voice used in a way that suggests I'm sponsoring this product. Now deepfakes and revenge porn and all of that have become such a huge topic because of the capabilities that gen AI unlocks, giving anyone the ability to type something in and generate an image. So that's why you're also seeing a lot of content filters on all of the platforms.

Speaker 2:

There's been a lot of news recently over the deepfake porn stuff, because there was a popular Twitch streamer who was part of a group, and then it turns out other members of the group were actively looking at that stuff, like on stream; they had it behind a different tab and they accidentally showed it. It's like, if you want to see people that you know in explicit contexts, you can do that now, and to your point, it's not just the Taylor Swifts of the world. It's like, do I have a photo of you?

Speaker 3:

That's all I need. Even us on this podcast: somebody could take our voices. This is producing plenty of sample content for them to load into a system and call one of our significant others, or call someone important to our business.

Speaker 1:

I told my mom that. Yeah, I was like, hey, if you ever hear me being really nice to you, or I'm offering up some help: no. Now, with the word responsible, whether it's data or AI, some different words I think about would be ethics, bias, privacy. Any other ones you think fall into the responsible bucket?

Speaker 3:

Yeah, actually, HHS here in the US has their Trustworthy AI Playbook.

Speaker 1:

Okay.

Speaker 3:

And it has a nice little flywheel. I think a lot of it can be credited to Deloitte as well. So it's the Trustworthy AI Playbook, and it's a great example. Back under the Trump administration, actually, a lot of the federal agencies were tasked with exploring this concept of how AI is going to impact, not their industry, but their agency, how they operate and how they perform. So a lot of the work we're seeing comes from that; there are a lot of government agencies that have been working at this for years, before the gen AI boom per se, taking a look at this and trying to understand what it means. And to your point, yes, it's everything from privacy to trustworthiness, and trustworthiness can mean a lot of different things, just like responsibility can mean a lot of different things. And you notice, I dodged your question of actually defining what responsible AI means or what responsible data means. Typical lawyer.

Speaker 3:

I know, right? It depends. But no, it really does kind of depend. Like you mentioned earlier, the use case matters for AI and ML technologies, and it really does. The use case is going to have a huge impact on the whole framework of how you analyze a situation where you're using AI or ML technology.

Speaker 1:

Wow. Any of those that you want to tackle first, like the privacy component of it? Because, you know, one of the things that you said to me when we went and spoke at the Capitol a few months ago was that there are already laws that cover data, and we're talking about AI as a new tool. Does that mean that those laws don't apply to it? I'm thinking about data privacy laws, things like that. I'll just ask it as plainly as this: is there anything where you see current laws already covering some of the stuff in AI, or do we need brand new stuff out there to make sure that people are responsible?

Speaker 3:

That is the question of the day, month, week, year. Yes and no.

Speaker 3:

It is a great question, and I look at it from this framework: we are encountering something similar to when the internet came about. I agree, and I think sometimes people even go further back and say, oh, this is like the invention of the wheel, or electricity, and all that; whatever your take on that is, however you view generative AI in this space. For the purposes of this, I look back at internet law, because that's a more recent comparison; it's technology, in the sense of more recent technology. Back in the 80s and 90s, and even into the early 2000s, we had this concept of internet exceptionalism.

Speaker 3:

So internet exceptionalism is this concept of: do you need to treat the internet differently because it is something new, because it is different from all the previous applications that we've had of existing laws? And the answer, of course, is: it depends. It depends on whether it is so novel or nuanced that you do need something new. I think deepfakes are a great example of that. Right now, we recognize we do have current privacy laws that protect a person's name, image, likeness, even their sound, their voice, depending on how the state has drafted the statute. And that's everybody, not just celebrities, you're saying?

Speaker 3:

So mostly that's targeting celebrities. And even more recently, in the last five years or so, the NCAA's push into collegiate name, image, and likeness rights has really spurred a lot of development at the state level to protect that and offer it as a protectable right, an actual right. That's the important thing: is it a right that someone has? And so that's why we look at it. Yeah, go ahead.

Speaker 2:

Well, no, that is the question. Because why do any of this if it's not a right that we have, sovereignty over our own name, likeness, and image?

Speaker 3:

Yeah, and if you think about, well, what right are you talking about? Ultimately, all of these fall under a bucket of privacy rights, because it is your person, your interests, that you're protecting as an individual. A lot of times, like I said earlier, it's tied to commercialization. And there are a lot of Supreme Court cases, a lot of law built up over the years, about the right to privacy in your home, how that's expanded to even outside your home, and that gets into a lot, even down to reproductive rights and things like that.

Speaker 3:

So the right to privacy is such a broad, sweeping category, especially in the digital space. That's what you're seeing now: a lot of these efforts are targeting this newer concept, recognizing, under what I call AI exceptionalism, that we have to re-examine what we have and where the gaps are. And if we have gaps, maybe we can close them, like we were just talking about, and provide this right as an actionable, protectable right for individuals who aren't well known or who aren't trying to commercialize something.

Speaker 3:

But with that one specifically, is the strategy just to copy and paste rights that are already afforded to some, and extend them to all? Not necessarily a copy and paste. I don't want to get too much into playing one side or the other, but there are people that argue the way they're trying to go about it now is poorly drafted, because it's lawmakers trying to push something out ahead of elections. They didn't take the time to truly understand the technology and how to properly, again, getting back to definitions, settle on the defined terms they're using to identify this. And the other question in a lot of the newer bills is platform liability, so service provider liability. Like I said, this gets to a whole separate conversation that I do want to come back to in a little bit.

Speaker 2:

But keep on going. I just don't want to go deep into it just yet.

Speaker 3:

Yeah, no, exactly. I just want to highlight that that's where some of the critiques of these laws are coming from. What is the aim here? Is the aim to give individuals a private right of action to pursue, or is it to hold someone accountable? Are you holding the actor, the allegedly bad actor, accountable, or are you holding the platform, the tool provider? And going broader in scope, over to the EU: the EU AI Act accounts for this and distinguishes between model developers and distributors and platforms and users and all of that. So it just depends. It's about looking at what law, what area you're trying to cover, providing a framework, and analyzing: is this sufficient in the AI age?

Speaker 3:

Kind of like back in the day we were asking, is this sufficient in the internet age? And a lot of times we determined, okay, we don't have adequate means of stopping the mass copyright infringement that's destroying the music industry, or we don't have a means for platforms to operate without being held liable for everything their users post. So we got Section 230, and the DMCA solved a lot of the mass copyright infringement issues in the music industry and film and all that. So it's cycling back; history repeats itself, and I think we're seeing that now. Even now, copyright still comes up in this space: how do existing laws apply under an AI ecosystem and framework, and do they or do they not hold up? And if they don't, then maybe we can talk about where the gaps are.

Speaker 1:

What do you think leads to it not holding up? Is it the fact that AI can now make decisions instead of just, you know, giving you insights? And that really goes back to the liability question, right, Charlie? The idea of who is responsible: is it the data, is it the ones that build the models, or is it the ones that use those models? So there are some aspects there. Charlie, what do you think about the distribution of responsibility? I want your opinion before we go to Franklin on that.

Speaker 1:

Yeah, I'll give an example. Let's say I'm a data aggregator for real estate data, and I am using somebody else's model to build a model, but it's their model. And then I am leasing that to real estate companies, right? So now you have real estate companies that are using a model built by somebody else, or at least the way it's built, but it's my data. And let's say there's something in that data that was falsified, or the end result was wrong. Which of those three should be liable? All three, two of the three, or none of them?

Speaker 2:

I feel like it's definitely a shared burden, but I'm going to take the analogy you just gave me and throw it in the garbage, and I'm going to use a weapon analogy instead, because that's how I was wrapping my head around it. So if you're a weapon manufacturer, do you get to just do whatever you want?

Speaker 1:

Absolutely, they do it now, right? Isn't that... hell, I have no idea. Of course not.

Speaker 2:

They have rules and regulations, obviously, around manufacturing a weapon, and the reason is because it can be harmful. And if you're a media platform, you can unintentionally host harmful things, and those things can in fact do damage. And so you, as a, well, dot-com tech company or host, are liable to some degree or another. I think the bad actor, the alleged bad actor, if you will, should always be treated as the one who pulled the trigger. But you, as the manufacturer, the host, have to maintain at least a level of responsibility when it comes to what is on your platform. And I'm not saying that everything comes with a huge penalty or anything like that, but you've got to moderate.

Speaker 2:

The problem is that right now, a lot of these platforms weren't built with moderation in their financial plan, and in some cases recently, moderation was taken away from the financial plans, and that in and of itself is irresponsible, I think; it's irresponsible that it's not considered in the way that it should be. But then you've got the YouTubes and such, where a lot of platforms are responsibly adding on to what their moderation is and looks like, and that's great and everything. But then we're in an infinite-growth economy, so at some point somebody's going to start cutting something, and it's probably going to be moderation, and we've seen that. With the takeover of Twitter, one of the first groups to go was the support team, because they're like, we'll AI this up and we'll get it going and Grok is going to take care of everything.

Speaker 1:

Rob Gronkowski, or Grok? It's Grok, without the N, isn't it? Yeah, I think so.

Speaker 2:

I say it wrong on purpose. I have a disdain for Twitter as a company.

Speaker 1:

Right now. You have a disdain for Rob Gronkowski?

Speaker 2:

No, I think you mean X. I don't, actually. You're intentionally not saying it. Yeah, I still call them retweets and everything. What else would you call them, re-exes? They call them reposts.

Speaker 1:

Oh, do they.

Speaker 2:

Yeah, they do.

Speaker 1:

I like re-exing better now.

Speaker 2:

Anyways. So I feel like it's a distributed model for responsibility, but ultimately, it's on whoever pulled the trigger. Unless it's willful ignorance or maliciousness on the host's side, it's mostly on the bad actor.

Speaker 1:

So in your case, the weapon manufacturer, is that the data or the processor, or both?

Speaker 2:

That's the host. And so it's like, where does the data come from? I'm thinking about it like this: if I put something on YouTube that shouldn't be there and it stays up for way too long, then it's the platform's responsibility. But if I put something up on YouTube and it gets caught immediately, those three seconds it's on there, you can't do better than that right now. So that's the holder of the information; and then the manipulator of the information is who I'm talking about being the bad actor. Gotcha.

Speaker 1:

Franklin, tell him why he's wrong. I'm kidding, actually; I think that was well said. What can you add there?

Speaker 3:

Yeah, a couple of nuances, I guess. So, thinking again about applying existing laws and regulations across industries, products liability immediately comes to mind. That would be something the gun manufacturer would potentially be liable for: if they do not adequately manufacture, produce, sell, and distribute, if they're the distributor of their products, then products liability could be traced back to them, and that's under existing law. Same thing in the internet age: we had this concept of service provider liability, and that developed over time.

Speaker 3:

And Twitter, X, is currently facing that. Because they've removed their content teams, their trust and safety teams, the people that even respond to DMCA takedowns, they're actually, it's kind of interesting, being sued because of failure to respond to takedown requests. So they're being sued under the DMCA for violating that as a service provider. Some of that technically did stem from prior to the acquisition, but a lot of it continued post-acquisition, within the time frame of the lawsuit. But DMCA, go ahead, I'm sorry. Yeah, the Digital Millennium Copyright Act.

Speaker 3:

It's a section of the Copyright Act that focuses specifically on digital copyright issues.

Speaker 2:

If you listen to any YouTuber talk, they talk about getting DMCA'd.

Speaker 3:

Yeah, that's basically one of the reasons why YouTube developed their Content ID system: that, plus the slew of lawsuits that YouTube and Google faced early on, in the early days of YouTube, surrounding hosting copyrighted content and not taking it down, all that kind of stuff. So a framework developed around, to your point, what is the responsible time frame in which someone should respond. Content moderation is a little bit different; managing copyright claims and trademark claims received on a platform is arguably under the umbrella of content moderation. But from a content standpoint, I would argue a little bit with your distinction: just because content is hosted for a while and ends up being harmful doesn't necessarily mean the platform should be liable. I would argue that, under the statute, as long as they have a robust, well, a reasonable content moderation system in place for their platform and they're catching the vast majority of it, then they're probably in the clear.

Speaker 3:

But I think we are seeing social pressure, government pressure. Just last week, executives from the major tech platforms were called in, or maybe that was this week, no, it was last week, not all the majors, but everyone from Snap to, of course, TikTok, whose CEO was there again to be grilled about China, even though he's not Chinese.

Speaker 3:

And then, other aspects of that: Linda from X was there, and they were all grilled, because all of that stems from societal pressure as well as congressional pressure about the arguable, alleged lack of protection of children online on their platforms. So that's a gap that people are identifying, and maybe that's going to be a way in which new laws and regulations get drafted. Maybe it's an exception to Section 230, which doesn't allow a service provider to be held liable for hosting content at the direction of a user; that's one of the distinctions of what Section 230 does, but they have to have reasonable content moderation policies and practices in place.

Speaker 2:

So, to step back from that: they can be okay in this situation so long as they're following the rest of the rules on moderating?

Speaker 3:

Put very simplistically, yes, that would be the argument.

Speaker 2:

I'm thinking about the folks out there in Listener Land.

Speaker 3:

Yes, thanks for clarifying that. Yeah, that's great. But taking this back to the topic of data, rewinding a little bit: we talked about service providers, we talked about platforms; where does this leave us from a data perspective? We're seeing it with publicly accessible data sets and the concept of data cards, and that gets into the concept of data provenance: what data set are you working with? Do you know where it's coming from, who compiled it, the origins of the data? That's a part of it, and Google and Microsoft both have good examples of templates for model cards that can be used.

Speaker 3:

And the way you can think about model cards is, it's kind of like if you go on GitHub right now and you look at a repo for some project. You have the about section, the README file, and it breaks down what the project is all about, if it's a well-done project and has that filled out, of course, and it might even include limitations; or you can go into the contributions or the open issues and understand what the limitations are.

Speaker 3:

It's the same kind of concept in the data space. So if you have a data set, it will identify: what are the biases? Are there any gaps in the data collected? Are there any risks associated with the data, because it might be PHI? Hopefully it's not out there publicly accessible as PHI, but things like that you can identify in a model card around the actual data set or the actual model. It's meant to accompany a model that's developed, identifying attributes, almost like metadata, of the model itself and the underlying data that was used to build it.
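
(A minimal, hypothetical sketch of what a data card might capture, loosely in the spirit of the Google and Microsoft templates mentioned above; every field name and value here is illustrative, not a standard:)

# Provenance and risk metadata travel with the data set, README-style.
dataset_card = {
    "name": "example-real-estate-listings",             # hypothetical data set
    "origin": "aggregated from partner listing feeds",  # where it came from
    "compiled_by": "Example Data Co.",
    "license": "CC-BY-4.0",
    "contains_phi": False,   # personal health information?
    "contains_pii": True,    # personally identifiable information?
    "known_biases": ["urban listings overrepresented"],
    "known_gaps": ["no records before 2015"],
    "intended_use": "training price-estimation models",
    "out_of_scope": ["identifying individual owners"],
}

# A downstream team can then screen candidate data sets against its own risk bar:
def acceptable(card: dict) -> bool:
    return not card["contains_phi"] and card["license"].startswith("CC-")

print(acceptable(dataset_card))  # True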

Speaker 2:

Alright, so pause. Let's take like two steps back. You said data provenance, and then you started saying a whole lot of stuff right after that. Provenance. Provenance.

Speaker 1:

So yeah, we'll have to make sure that's in there. But yeah, that's P-R-O-V-E-N-A-N-C-E, and I'm reading it directly from the notes that Franklin gave us, so that's new for me as well.

Speaker 2:

Okay, and I caught the part where it's like, is it from a dubious authority? Do we know and trust the data? And with the world we're going to, if you wanted to be a little sneaky, you could probably make up your own data or manipulate some data. I'm actually dealing with this professionally in a way too, so it's exciting in that folks are used to getting away with some of this stuff, but with the data they can't really anymore. But I digress. So you've got a set of data and you don't know if it's good or not. So qualifying it as data that's usable, is that the provenance, or?

Speaker 3:

I wouldn't say that's the goal of data provenance, because I think that question can only be answered by the end user of the data set.

Speaker 1:

Okay.

Speaker 3:

And so, if I'm out there on Hugging Face looking for a data set, I, as the all-knower of the project I'm working on, can understand whether or not this data set is going to have limitations that simply will not work for the type of model we're trying to build, or the type of model we're trying to fine-tune, or the type of work we're trying to do in this sector, or for this project. Does that help? That's kind of how I view it.

Speaker 1:

So, to me, I'm almost hearing it as an origin story. It's giving you some specs on where it's from, giving you the idea of, it's from this neighborhood versus that neighborhood, kind of thing, but for data.

Speaker 2:

So it's essentially like a less formalized blockchain type idea.

Speaker 3:

Yeah, and arguably you could use blockchain technology to further authenticate the data sources.

Speaker 2:

They're not mutually exclusive, right, but you don't have to have blockchain to approach it in the same kind of way.

Speaker 3:

Another way to think about it, and this is also a great way to segue back into the privacy discussion you were asking about: I think Apple did a great job of bringing to the forefront a digestible, and that's a pun, I won't get into why that's a pun, a digestible way to understand your privacy rights, and they developed what they call their data nutrition labels.

Speaker 3:

So if you're a mobile app developer and you're releasing an app on the iOS App Store, or whatever other ones they have, they have so many now, there's this requirement that you have all these essentially metadata points filled out about what data your app is collecting, how it's accessed, how it's going to be used, whether it will be commercialized, this whole slew. And it's presented to the end user in a digestible way, the way we're used to looking at nutritional labels on food to understand, okay, what's the serving size, what's in this, all of that. It's the same concept that Apple's done with our privacy within their ecosystem, and it's the same thing being done here. That's how I would view a model card. I love that.

Speaker 2:

That actually puts it into perspective in a way that wasn't clicking before. Which is Apple; Apple's good at that, yeah.

Speaker 1:

We love... oh, you're an Apple guy now. No, I'm not.

Speaker 2:

I only participate in the ecosystem because I have to. My friends will literally, like...

Speaker 3:

Under EU law, you don't have to. Move to the EU.

Speaker 2:

My friends will literally not include me in group messages.

Speaker 3:

Yes. Are you agreeing? No, no, no. Okay, you were... okay, I had to see.

Speaker 1:

Okay, see here, you solved your problem. This is... yeah, you were ruining our text chains in the beginning. I digress. Yeah, can I ask a question? You mentioned it, I say it all the time, and I might be wrong when I say it: publicly available data. And people will say, well, what is that? And I'll be like, that's the internet. But what is it when people say publicly available data, like everything that's been part of the OpenAI model? Right, oh, publicly available data. What does that mean?

Speaker 3:

Let's go there. I love this topic. Yes.

Speaker 1:

This is a heavy one.

Speaker 3:

Yes, it is important. So the best way that I've found to communicate this and to developers that I work with is to think about it as if you're looking at open source code, an open source project that you're wanting to use. In the open source community, there are standardized dozens of standardized open source licenses that can be applied to your project. Github makes it super easy to choose from a list or whatever, or automatically apply your organization wide. You're going to apply MIT or whatever your own custom copyright notice. So in the open source realm, you're talking about copyright ownership over the code, and some of it does also. Some of them also get into whether or not you can use the trademark of the organization and that might have been sponsoring the open source project, or it might even touch on patent rights, if you're waiving patent rights or if there is a patentable process or some aspect of the code that would be subject to patent protection. So that's the purpose and intent behind open source licensing. Okay, and the whole intent behind it was to have a standardized way of quickly moving and having contributors and open sourcing a project so that others could come and contribute to it and you can understand.

Speaker 3:

Okay, am I contributing to something that is truly open source in the sense of what that means to me? Or is this open source that's too closed, like, I don't really want this to be copyleft, or handle copyright that way? It helps people understand what the project does, or helps a corporation maintain ownership over it. So that's the comparison I use with most developers. I think it's fair to say anyone that's been working in code understands the importance of knowing what open source license applies to code that you're going to pull into your larger project, or that you're just going to take off the shelf and run as part of a solution you're developing internally. It's similar with data: that same concept should be applied when you're thinking about data sets. Just because something is publicly accessible on the internet doesn't always mean that it's just fair game.
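
(One way to act on that before pulling in a data set: check its declared license the same way you would check a repo's license file. A minimal sketch using the huggingface_hub client, with a hypothetical repo id; on the Hub, a declared license usually surfaces as a tag like "license:cc-by-4.0":)

from huggingface_hub import dataset_info

info = dataset_info("example-org/example-dataset")  # hypothetical repo id

licenses = [t for t in (info.tags or []) if t.startswith("license:")]
if licenses:
    print("declared license:", licenses)
else:
    # No declaration is not permission; treat it as all rights reserved.
    print("no license declared -- investigate before using")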

Speaker 1:

Okay.

Speaker 3:

And the reason why, you might be asking... That was what I was about to ask. See, this is why I'm good at my job.

Speaker 3:

I anticipate the next question; that's it. So the question is, if you look at a lot of the content that's available online, and this circles back to what we were talking about with platform liability: they're not liable if a user posts a copyrighted image or copyrighted content. Like, if I really love this Taylor Swift song and I post parts of it on my X account, I arguably am not going to get a takedown from Taylor Swift, because I'm just sharing it, because I'm a fan, all that. Gotcha. But the reality is, the platform really should not be using that content, and others should not come along and scrape that content, because it's subject to other rights that I, as the individual user, didn't hold when I posted it. I'm using it in a different context.

Speaker 1:

So if you're scraping... and that's kind of the heart of the New York Times case, and one of the issues. Gotcha. Go ahead, explain that case, because I vaguely understand it, and I know you're going to make it sound crystal clear.

Speaker 3:

Oh, my goodness, I will try. I'm going to try. OpenAI and Microsoft are getting sued by the New York Times, because they have allegedly scraped and used the content that the New York Times has published online to train their GPTs.

Speaker 1:

Okay. Now, my understanding is it's not like they put an entire article out for the public, but they may have put out a snippet, and then some other people are quoting it, and so all of a sudden it can be recreated, even though it's behind a paywall. Is that part of it as well?

Speaker 3:

And that's part of what the arguments are: did OpenAI, because OpenAI is the developer of the models, actually go to the New York Times site and scrape full articles? Could they? Or was it behind a paywall?

Speaker 3:

And I'll pause right there, because I'm actually teaching a cyber law course for a law school right now, and just last week I was discussing trespass to chattels, which might not sound like anything to you all, but it's essentially this concept of: if I'm browsing a website, what rights does the owner of that website have to prevent me from taking content, scraping it? And it kind of gets to the heart of... yeah, go ahead. Would you call that trespass to chattels? It's like a trespass to chattels, yes. Okay. So it's essentially this concept of property; it's an area of property law and torts. Torts are harms that you suffer, and trespass is one of those. And so, if someone is coming onto your site, which is your property, and taking content from it, is that illegal?

Speaker 3:

And right now we don't have a lot to go off of, because that's not the only question that has to be asked if something's publicly accessible on the internet. The Supreme Court, in a case called Van Buren, gave us this concept of gates up, gates down: if your website's gates are down and anything can be accessed, then arguably it's fair for someone to come and scrape that content and use it, because it's publicly accessible.

Speaker 3:

However, if you have it behind a paywall, like the New York Times, that is a protective measure that they're taking. And there are a lot of questions left unanswered by that case, such as, if your terms of use for the website say you can't scrape, is that sufficient? So there's a lot to unpack in that sense, but there's a lot of previous internet law history that comes into play in this new realm we're in, around whether it's legal or illegal to scrape content from the public internet, and one thing in particular is whether it's considered a harm to the host of that content.

Speaker 2:

Well, in the case of the New York Times, if you paid for a subscription and then went and scraped everything behind the paywall, you'd definitely harm them.

Speaker 3:

So this is where it gets... I love talking about this case, because the irony is just lovely. So Microsoft owns LinkedIn. On LinkedIn, you have to have a user account to be able to see the data. They actually sued a company called hiQ. They sued them because hiQ was scraping public profiles, and maybe there was a point where they did actually create accounts to go and scrape profiles from LinkedIn to build a database. And you can imagine the benefit of that to businesses, to have a database of people's current jobs. It's like, voluntarily, you put that out there.

Speaker 2:

Yeah, and you don't have to show them that you were looking at their profiles.

Speaker 3:

And so I love the irony of it, because LinkedIn, which Microsoft owns, was involved in suing them out of business, basically for scraping. And it's a great case to look into because it dove into those nuances of what Van Buren started: what does it mean to have gates up, gates down, and where do the terms of service come in? And in that one, too, it's also important to think about the fact that the data is not really copyrightable, because it's personal information. You can't copyright personal information. It's hard to claim ownership over that.

Speaker 2:

But you're definitely hurting the LinkedIn business model and the people, the investors in that company.

Speaker 3:

Or the expectation of users, because I guess LinkedIn could argue like if I'm a user of LinkedIn and LinkedIn has done nothing to stop this, I don't want to be on LinkedIn anymore.

Speaker 2:

But to your point, LinkedIn is a definite gated community at this point. Yes. So from this point in time, how do you think the New York Times thing is going to play out?

Speaker 1:

That's a good one, I'll be a judge.

Speaker 3:

So there's also another case that is Authors Guild related, where authors are suing because their content's been scraped in. It's interesting when you think about it from the perspective of studies that are coming out now, and I try my best to read and get a holistic view, but it's impossible to keep up with everything every day. But I've looked at a lot of studies, and it's this concept of, and I ask the question myself, when you're training a model on preexisting materials. I call it preexisting materials because, side note, if you're working with model cards or data provenance, I prefer to use the term materials, because I want to understand it's not just data. It might be PHI.

Speaker 3:

I think it's important to name what you're working with. It might be personal health information, PHI. It might be personally identifiable information, PII. It might be copyrighted information, or information that is protected by copyright laws. It might be public domain information. It might be subject to Creative Commons licenses, all of that kind of stuff. It might be subject to open source licenses, which is what GitHub is dealing with right now in one of their cases.
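
As a rough sketch of the model card practice being described, here is how a team might name its training materials rather than lumping them together as data. Everything below is hypothetical and only loosely follows common model card conventions:

    # Hypothetical model card fragment: name each class of training material
    # and its rights status, instead of calling it all "data".
    model_card = {
        "model_name": "example-text-model",
        "training_materials": [
            {"source": "public-domain-books", "rights": "public domain"},
            {"source": "cc-photo-collection", "rights": "Creative Commons BY 4.0"},
            {"source": "licensed-news-archive", "rights": "proprietary, licensed"},
            {"source": "clinical-notes-extract", "rights": "PHI, de-identified"},
            {"source": "open-source-repos", "rights": "open source, mixed licenses"},
        ],
    }

    for material in model_card["training_materials"]:
        print(f'{material["source"]}: {material["rights"]}')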

Speaker 2:

Like, it could be sensitive in some way, shape, or form. Right.

Speaker 3:

Sensitive or protected or proprietary. And so that's why I think, as someone who's advising teams on use of data sets and use of models, to me it's important from a risk and liability perspective to understand, okay, what is the liability of engaging with this data set and reproducing it? Because that's what some of the lawsuits on the image generation side, for Midjourney and Stability AI, are about. A lot of them are involved in cases, there are over a dozen copyright cases now surrounding this, so we could go on and on. But those in particular are looking at the LAION data set out of Germany, which is a data set that is just links to images online, with the associated descriptions and that kind of stuff. But in order to use that data set to build a model that can become a Midjourney-type model, where you can do the fun text-to-image generation, you have to copy the data. You have to copy the images from those links onto your own server, in the cloud arguably, and then use that to train and build your model.
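
A minimal sketch of that copying step, assuming Python with the requests package and hypothetical placeholder URLs: a LAION-style data set is just rows of image links and captions, and training requires fetching each image into your own storage first.

    import os
    import requests

    # Hypothetical stand-in for a LAION-style data set: links plus captions.
    dataset = [
        {"url": "https://example.com/cat.jpg", "caption": "a cat on a sofa"},
        {"url": "https://example.com/dog.jpg", "caption": "a dog in the park"},
    ]

    os.makedirs("images", exist_ok=True)
    for i, row in enumerate(dataset):
        resp = requests.get(row["url"], timeout=10)
        resp.raise_for_status()
        # This write is the reproduction the lawsuits focus on: a local copy
        # now exists, whatever the training step later does with it.
        with open(f"images/{i}.jpg", "wb") as f:
            f.write(resp.content)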

Speaker 3:

And so that's the act of infringement in a lot of these cases we're seeing: they're claiming that act of copying is unauthorized. You're making a copy of those images from the internet without the permission of the owner of those images, and then you're using them. And so that's where we're also going to see this argument play out of whether there's a fair use defense or not. And then there's this other question, with the New York Times specifically: just because something is publicly available online, does that mean it's fair game to be scraped and used? Or is it going to be limited, so that what's behind a paywall has to be licensed and is proprietary at that point? So you didn't answer the question, though.

Speaker 3:

I know. I try not to share my own viewpoints on this. Okay, okay. But no, I don't mind doing it.

Speaker 2:

I will though.

Speaker 3:

I will say I'm torn. I recognize the immense value. So kind of where I fall, as of right now, and my views are subject to change: I view it as impactful on creative industries and creators to have someone that can scrape something they put out on the internet, because that's the nature of the internet, you put it out there for public consumption.

Speaker 3:

If that means it can just be scraped and used up by somebody who's going to then create a product, an AI model or a service, that allows others to basically not come to me for that same image or for a similar need, what does that mean for the future of creativity, for the future of the world? Because to me that kind of is a harm in that sense. But outside of that, what if you're building a security system to recognize images and objects? Do you still need to license all that? Or, because it was publicly accessible, and there's not really a licensing model for that anyways, is that fair game, because it's a completely different industry that's not impacting the underlying rights and market for that original content? There are a lot of holes you can poke in either direction, but yeah.

Speaker 1:

Wow, well, I love the lawyer speak stuff.

Speaker 2:

What do you think about that same question then, because I don't want to change subjects yet.

Speaker 1:

I do. I'm ready to. I want to get to know Franklin as a person. When it comes to this, I feel like I'm on both sides too. I want to see how it plays out, because I notice that I am taking one side versus another, sometimes mainly for discussion. Like, I feel there's almost an unfair opinion toward machines compared to us. So it's okay for me to be influenced by the Beatles, but it's not okay for an AI model to be influenced by the Beatles.

Speaker 2:

Without permission. I don't have permission. I mean, theoretically, without permission.

Speaker 1:

Yeah, and I think I do technically have permission. I think the Beatles would be pleased.

Speaker 2:

I'm talking about the machine.

Speaker 1:

That's right, but I also don't as well. Like, so, if I'm writing, I am writing based on books I have read, right? If OpenAI's models are writing, they're basing it on the books that are in there, and we're mad at the models for that, but we're not mad at people.

Speaker 2:

Well, we are if they get plagiaristic, though, but?

Speaker 1:

But they're not. They're influenced, and gen AI is not plagiarism. It's influenced by. It's a probability.

Speaker 3:

It's statistical probability.
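
A minimal sketch of that statistical point, with a made-up three-word vocabulary and made-up scores, nothing here comes from a real model: the model converts scores over candidate next words into a probability distribution and samples from it.

    import math
    import random

    # Made-up scores (logits) for the next word after "the long and winding".
    logits = {"road": 4.2, "river": 1.3, "story": 0.7}

    # Softmax: turn the scores into probabilities that sum to 1.
    total = sum(math.exp(v) for v in logits.values())
    probs = {word: math.exp(v) / total for word, v in logits.items()}
    print(probs)  # road is roughly 0.92, river 0.05, story 0.03

    # Sample the next word according to those probabilities.
    next_word = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
    print("the long and winding", next_word)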

Speaker 1:

Yeah, it's a probability of the next word being generated based on previous works. And that's where I'm at. I don't know if I believe what I just said yet, but I keep trying to think about both sides, and I think I'm playing the other side so much lately that I'm starting to believe it. But I do wonder. There's influence. Our brain is a GPT model, right? A lot more sophisticated, a lot more connections, parameters, all of that kind of good stuff. We've talked about that many times on here. So why is it we...

Speaker 1:

It's generally accepted that we can copy, and imitation is flattery at this point, until it's plagiarism, right, or flat-out stealing, whereas any influence within a model seems to be taken negatively, and I'm not sure if that's right or wrong. Now, I do think, first of all, if you're flat-out stealing articles from behind a paywall, and you have the ability to put in the username and password while you're scraping, well, hey, that's a little bit too far, right? But are they, or are they getting the snippets that are publicly available? So I don't know. And, you know, I would almost say, is it okay that publicly available Taylor Swift stuff is out there? Right, it's on YouTube for free. Does that make it publicly available or, like you said, accessible? And should that be used? And if we are trying to generate music, it has to be used, because she is a huge influence. I mean, there are about 50 Billie Eilish sound-alikes out there right now. We're not suing them, but they only sing that way because of Billie Eilish.

Speaker 2:

And parody is allowed.

Speaker 1:

Yes, yeah, that's true. Weird Al did have a really good career, didn't he? He probably got sued a lot, though.

Speaker 3:

No, he actually had permission.

Speaker 2:

He didn't do some. He got permission just because he wanted to, yep, he didn't have to. He was denied on a couple of occasions and didn't do it.

Speaker 1:

And now I like Weird Al even more. See, what great class.

Speaker 3:

I did want to add just one rebuttal. And the reason I'm doing this is because the same question came from an audience member when I was speaking at Gen AI Week in Atlanta last year. It was the same question, this concept of: why are we holding machines to a different standard than we hold humans to? This audience member specifically said, if I walk into a museum, and I walk around the museum, and I leave that museum and I paint something, I'm inspired by that. How is that any different? And my rebuttal to that is, we have to get out of this concept of humanizing what machines do. It's dangerous, I think, to put them in that realm, because we also don't want to continue that anthropomorphic, and I'm going to butcher the word.

Speaker 2:

Say that bad.

Speaker 3:

But anyways, we don't want that to play out, to continue on down, when we're actually allowing it to make decisions, or when we're allowing it to do certain things and be responsible for certain actions. So my rebuttal to that is: we as humans can walk into anywhere, or consume any content in pop culture or whatever, but our brain is not making an exact copy of it. We're not retaining a copy of it. And that's exactly what is happening with the models, their ability to ingest billions and billions, like I said, the LAION data set has billions and billions of images. Humans will not see that many in their lifetime. And so I think that's the true distinction to be made with that type of interpretation: you can't humanize something that's not human and that is doing something that is arguably, and probably factually, inhuman.

Speaker 3:

To begin with. And that's why I mentioned I love reading studies, like there was one recently analyzing whether machine learning models, deep learning models, are actually just data compression mechanisms, because they are able to decompress and reproduce almost similar images, or almost exactly similar images. Which is what we're seeing in the Getty Images case that was filed. The model there was trained on so many Getty images, which are publicly accessible out there as thumbnails, while the licensed versions are not, that it would recreate the Getty watermark on a lot of the sports content it was asked to produce. And that was at the heart of one of the Getty Images lawsuits that was filed.
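
A minimal sketch of how that kind of reproduction gets tested, assuming Python with the third-party pillow and imagehash packages and hypothetical file names: compare a generated image to a training image with a perceptual hash, where a small distance suggests reproduction rather than mere influence.

    from PIL import Image
    import imagehash

    # Perceptual hashes of a (hypothetical) training image and a model output.
    training_hash = imagehash.phash(Image.open("training_photo.jpg"))
    generated_hash = imagehash.phash(Image.open("model_output.png"))

    # Hamming distance between the two hashes: 0 means visually near-identical.
    distance = training_hash - generated_hash
    print(f"perceptual distance: {distance}")
    if distance <= 8:  # illustrative threshold, not a legal standard
        print("output is suspiciously close to the training image")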

Speaker 2:

Wow. Well, let me ask you something. Did you answer that

Speaker 1:

question, by the way? I don't know. How would you decide on yours?

Speaker 2:

Oh, I want to have the ability to say, this is mine and you can't touch it. If I build a website, I want to have something that I can click that says, keep your scrapers away from this.
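
There is a rough version of that keep-your-scrapers-away button today: a robots.txt file at the site root. As an illustrative sketch, the lines below ask two AI-related crawlers to stay out entirely, using the GPTBot and CCBot user-agent names their operators have published, while letting everything else in. Honoring the file is voluntary on the crawler's part.

    # robots.txt at the site root: a request to crawlers, not an enforcement tool
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: *
    Allow: /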

Speaker 3:

But if you're putting something on YouTube, I will go ahead and take the other side of the argument too. If you're putting something on a platform like YouTube, it is arguably taking your content, analyzing it, producing subtitles automatically, doing closed captioning, or helping you analyze your audio to improve it at the click of a button. There are elements, in the creator space, of platforms taking content that users are putting through their platforms or through their systems to improve the system as a whole. And that's where you have to have this discussion of, okay, am I okay with my data being used for that purpose? And most of the time the answer is going to be yes, because it benefits everyone. Even right now, if somebody's listening to this podcast on Apple Podcasts, they just announced that they are going to start allowing transcripts for podcasts.

Speaker 2:

That is machine-generated. So that's true, and that's something else I wanted to get to. But to go back to the analogy the audience member was talking about. So he went to a museum and looked at a bunch of stuff. Theoretically, I can't say if he did or not.

Speaker 2:

That's the story that we're going with, at least right here. But the thing he's not qualifying is that somebody had to build the access for him to be able to go in and do that, and then permit him to go do that.

Speaker 2:

And curate that data set? Yeah. And the internet works the same way. If I build a website and put something behind a paywall, that's the same thing as a museum. And we can't compare something that's physical and then take it and change it to something mental. Since the museum is physical, he's not going to walk out with those pictures in hand so he can make something similar, or anything like that. He's just going to look at it, and he's going to take that impression home with him, and the impression is what stays.

Speaker 1:

But he can take a picture. He can use his phone and take a picture.

Speaker 2:

Maybe, or maybe not, but I think the museum gets to apply its rules.

Speaker 3:

And even if he took a picture, he would probably arguably still be violating copyright law.

Speaker 2:

Oh, good point, it depends.

Speaker 3:

Yeah.

Speaker 2:

But, like, YouTube can have similar rules. You can view this stuff, but you can't take it with you. And that's completely fine, and I think that needs to be extended down to the person as well. You can look at my stuff, but it's mine. There's a guy on the internet now, I love pixel art, and he's a pixel artist who's committed himself to one image a day, and so far he's a hundred percent. And that should be his and in no way anybody else's, unless he says otherwise.

Speaker 1:

Which is why NFTs came into play for a while.

Speaker 2:

Yeah, that's a whole separate conversation. Yeah, let's not go there.

Speaker 1:

I'd like to know what that website is and put it in our show notes. I'll put it there, the pixel art. Anybody that had a system way back in the day, when it was eight-bit, you dig that kind of stuff.

Speaker 2:

But if somebody scrapes that, I think that's stealing. And I want gen AI to move as quickly as possible, but I also want to keep people's creativity for them. I want people to be able to keep their creativity to themselves if they want to.

Speaker 1:

Yeah, it's so hard. I feel like I've always been in this dilemma of privacy versus availability, going back to the days when I would teach security in the IT space, and I've always been conflicted, so it's always hard. I'll say something today like, honestly, Franklin, you convinced me I'm wrong, right, with this idea that machines are not human. I need to think about that more when I'm thinking through some of the cases or dilemmas that I'm going through. That is something that is hard not to do. I mean, we want to put a face on the models, right?

Speaker 2:

Trying to say the word.

Speaker 1:

What word?

Speaker 2:

Anthropomorphization.

Speaker 1:

Yeah, I don't even know how to say it, so I'll let you do it. Did you

Speaker 2:

look it up? I think that's it, but it's hard to say fluently.

Speaker 1:

Yes, there are just some words.

Speaker 3:

I had a really hard time growing up ever saying the word funeral. I could only say it as "furniel." It took me a long time, and I need to practice.

Speaker 1:

Yeah, well, speaking of, Franklin and Charlie, I'm going to throw this at both of you. Let's talk responsible AI, responsible data at the personal level. What are some of your do's and don'ts? Not as a lawyer, not as a CEO of a company, you personally: what are some do's and don'ts that you feel? I already like the one about, if it's my personal website, my personal domain, and I want it to not be scraped, I should have that right. So I got that from you, Charlie.

Speaker 2:

It's not just personal, it's anything I build, company-wise too. If I start a business, I can lock the front door.

Speaker 1:

Yeah, Well, what if you put it on somebody's platform like a YouTube? Does that change?

Speaker 2:

I think it transfers liability to the platform, because you're entering into an active partnership where you've signed EULAs, you've got an agreement going with them.

Speaker 3:

It's very favorable to the platform, though.

Speaker 1:

Very favorable.

Speaker 2:

You agree to it?

Speaker 1:

Yes, yes, and they'll monetize, but they'll change that anytime they feel like it, and they frequently do, yeah. Do you feel there are any do's and don'ts when it comes to making decisions with data, or storing data? Anything that you feel is good for our audience?

Speaker 3:

Yeah. So one thing I'd close the loop on, with the discussion we were just having: it's easy to talk about this in the context of creative works, with outputs that are synthetic media of some kind. And if you look up synthetic content and the initiatives that are happening around trying to have content authenticity, Adobe is a big player, Microsoft's a big player, there are a lot of big players in that space. Now I kind of take a step back, because that is one aspect of the work that I do, but the other aspect is working with developers that are developing these tools. And again, this is the reason why, with the model cards, I prefer calling things preexisting materials, training materials. It's training data, but in reality I like to refer to it as training materials, because just as much as the outputs could be infringing, if you're using an image generator, you can potentially prompt it in a way that you use the name of a well-known person that has been photographed millions of times and have that drive how you control the look of a character in your comic book, which is what Kris Kashtanova did. They did that for their graphic novel that they tried to register with the Copyright Office. Look up the prompts that were used there.

Speaker 3:

I look at this outside of the creative space, and I also look at it in just everyday kinds of products and services that are being developed. What does it mean to be collecting personal data? What does it mean to be collecting personal information, sensitive information about people, and then how are you using that? And is the model you're using going to be a continuously learning model, or is it going to be a static model that won't learn? Those will be some decisions to make. And maybe you're choosing not to use raw data; instead you're going to create a synthetic variation of it so that it's not identifiable anymore. It removes those identifiers, things like that. That's where responsible data really comes into play.
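
A minimal sketch of that de-identification idea, assuming Python, hypothetical field names, and a far simpler pipeline than anything production-grade: drop direct identifiers, generalize quasi-identifiers, and replace the record key with a one-way pseudonym.

    import hashlib

    record = {
        "patient_name": "Jane Doe",
        "ssn": "123-45-6789",
        "zip": "37203",
        "diagnosis_code": "E11.9",
    }

    def de_identify(rec: dict) -> dict:
        # One-way pseudonym so rows stay linkable without naming anyone.
        # (A real pipeline would salt or tokenize this, not bare-hash it.)
        pseudo_id = hashlib.sha256(rec["ssn"].encode()).hexdigest()[:12]
        return {
            "pseudo_id": pseudo_id,
            "zip3": rec["zip"][:3] + "**",  # generalize the quasi-identifier
            "diagnosis_code": rec["diagnosis_code"],  # keep what the model needs
        }

    print(de_identify(record))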

Speaker 3:

From my perspective, at a professional level and at an advising level, it's understanding that, and I try not to get bogged down in the headlines, because that's just going to stifle innovation and development, but people can pull out and extract phone numbers, social security numbers, addresses, all that kind of stuff, real information about people, from a model, I don't want to use names because I don't want to be disparaging, but a model that was just trained on the vast amount of information on the internet. That's a risk, and those are the types of things that you've got to think through. That's one of the things that doesn't keep me up at night, but it is one of the things I always try to drive home: understanding the training materials that you're using, and how that will inform the decisions you make on the architecture side and on the development side as well.

Speaker 3:

But in my personal life, I personally choose to use Apple, and I love being able to understand the locked-down nature of where my data goes. So, like, I use Apple Fitness Plus. I get all the prompts, and I understand that, by using a workout, it is benchmarking against other people that have done the workout before me, and my data is contributing to that, and I'm fine with that.

Speaker 3:

To me, Apple has a great framework for identifying how to disclose what's being done, no surprises, and also capturing those consents pretty much every time, or whenever you update your phone, stuff like that.

Speaker 3:

So I'm a big fanboy in that sense, and I think Microsoft has done a great job, too, with similar initiatives, of making sure it's clear where data is going and where it's not going, how it's being used and how it's not being used. And even more so, in support of that, Microsoft a couple of years back launched their Open Data Initiative, and I thought that was a great resource to go look at. If you look at the landing page, on the legal side they actually have a couple of licensing agreements that can be used, which identify, kind of like I was talking about earlier with open source licensing, there are open data licenses, and Microsoft has four or five good examples, some for research, some not. Those are the kinds of things that I like to look at and dive into and reverse engineer, if I can, to understand and think through what's going on.

Speaker 1:

Gotcha. Charlie, anything when it comes to responsible data or AI, personally, that you feel is a benefit to you?

Speaker 2:

The only thing that I would encourage people to start thinking about, and you kind of hit on it briefly, is: what if people see your stuff? What if your data were to be exposed? What happens then?

Speaker 2:

So I personally try to live my life as if my data could be exposed at any point in time, and, you know, it's going to be embarrassing, for sure, but I'll live, I'll make it through, as it were. Yeah, but it makes life harder. It makes life harder, for sure, when you know everything you do could be visible. But really, if you sit and daydream about it: how likely is it that your text messages are going to be leaked to the public at some point? Or what happens to your text messages,

Speaker 2:

your text message threads right now, in 60 years? Are you going to be a 75-year-old whose entire life is exposed before you die? And then what's going to happen once you've died? Your kids are going to be around, and then you as an adult have your data exposed, and entire family trees' worth of drama is exposed. So I think living like those are the inevitabilities is a good thing, personally. I don't mean to be too doom and gloom, but just know that's a possibility, and then do as good as you can, but be ready just in case. So if you've been up to shenanigans, just be prepared to explain them all.

Speaker 3:

I would say, if nobody knows what you're talking about, there's a great book that just came out last fall by the New York Times reporter Kashmir Hill. She wrote Your Face Belongs to Us, and it's a great look into how privacy has been impacted by data scraping. That one in particular, to your point, is eye-opening if you've never heard of the company that, back in the day, scraped publicly accessible photographs from Facebook, Flickr, and anywhere online, and has gotten to the point now where it can identify almost any individual. They sold it to law enforcement officials and all that. So that is a real discussion that needs to be had around responsible data and having stuff publicly available and therefore publicly scrapable.

Speaker 2:

Yeah. Well, it's not just when it's purposely made publicly available; you know, it happens all the time.

Speaker 3:

Sure, there have been many notable instances of data getting out of the walled garden and then people being exposed, literally. Or even, like I mentioned earlier, improper development of models, where it's not privacy-first or privacy-preserving development, not a privacy-preserving model, where you can be linked. And then, if you think about it, what happens if a bad actor gets a hold of that model and can extract data points from it?

Speaker 2:

They will. That's the thing. Inevitably, once the data is out there, a bad actor is going to touch it.

Speaker 1:

Yeah, and I think for me, because I do modeling, that's probably where I feel most cautious. I've been trusted with data that is personally identifiable, and I have to be the one that masks it sometimes. I mean, I actually got a data set one time with tax ID numbers in it, and I'm like, what are you doing? I did not need these. And, you know, they just hit star dot star and sent me the file, which means they sent all the fields from all of the tables. And so I'm always thinking about, do I need that? Do I need this? Why? And people will say, well, that makes a better model. But does it? I'm not interested in the prediction, I'm interested in the factors, so keep it to the things that I want to control. So I'm more on the data cleaning side, spending a lot of time there. And, you know, that's not really personal, that's more professional.
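
A minimal sketch of that habit, assuming Python with pandas and hypothetical column and file names: instead of accepting the star-dot-star export, keep only the factors you actually want to control, and mask any identifier that truly has to travel with the file.

    import pandas as pd

    df = pd.read_csv("vendor_export.csv")  # the hypothetical SELECT * dump

    needed = ["region", "order_total", "product_category"]  # just the factors
    clean = df[needed].copy()

    # If an identifier must be kept, mask all but the last four characters.
    if "tax_id" in df.columns:
        clean["tax_id_masked"] = "*****" + df["tax_id"].astype(str).str[-4:]

    clean.to_csv("model_input.csv", index=False)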

Speaker 3:

But from a personal standpoint, that makes you stop and think: if I'm signing up for a new service, or I'm sharing data for whatever reason with a vendor, do I need to give them this information? Is it pertinent to what they need to do? Do I trust them to handle it? For that exact reason.

Speaker 2:

That's like back in the 90s, when they started collecting your name and number at the store.

Speaker 1:

Oh, and they still do. Yeah, give me your phone number.

Speaker 2:

But I worked in stores then, though, and when we first started asking people for their name and stuff, they're like, why do you need that? That's right. For our mailing list. No.

Speaker 1:

And why, at Hobby Lobby, do I need to give you my phone number? So it's 555-1212. Yeah, yeah. No, I think there's a lot there. All right, let's go to something that everybody can relate to, which is prompting something to write. Now, we're not talking about anything from a lawyer perspective, nothing for you as a CEO. This is personal, so we don't want to get you in trouble. But if you had to write something, maybe for an organization, a nonprofit you're working with, and you knew a little bit about what you needed, but you want to make it a little bit more informative, or whatever it is, would you start with ChatGPT? Would you use it second? How do you use it in a responsible way?

Speaker 2:

I have two different ways.

Speaker 1:

I can see that Franklin doesn't want to answer this one.

Speaker 2:

Oh, I'm happy to.

Speaker 1:

Okay, good, go ahead.

Speaker 2:

So I'll use it to do a rough draft as the starting point.

Speaker 2:

Yeah, I'll use it as a starting point in some cases, but if it's something that I know, like industry stuff, I'll have a rough draft to start with. I like a descriptive rough draft with bulleted items and such, and then I'll just say, you know, write something with each one of these points as a sentence. Nice. And that's where I'll start. It gets me a lot further pretty quick, because I'm really good at list-making. I'm really bad at the full editing and stuff like that.

Speaker 1:

I hear you on that one. I'm that way too. I will put it in and let it be my editor. Yeah, what about you, Franklin?

Speaker 3:

So I do subscribe to OpenAI's ChatGPT, and I use ChatGPT for ideation, I use it to outline, I use it to help improve my writing. This is all personal, all in my personal capacity, and I mean that seriously. I'm not joking in that sense. From my employer standpoint, depending on which employer I'm talking about, they're the ones that have the policies, so I respect that.

Speaker 3:

But in my own personal capacity, if I'm writing something, I use it for ideation. I think it's also helpful just to make sure I don't miss anything, and it's helpful to improve my writing if I want it to help, like, say, how do you write this more persuasively? I personally, though, don't ever copy and paste straight from what's presented. Absolutely.

Speaker 3:

That's just my own personal viewpoint on it, my own personal ethical consideration there, especially because I do a lot of freelance writing for news publications, and so for me, what I write I want to be my own. And then I can't help but also put my lawyer hat on, because anything that is synthetic in that sense is not copyrightable, per the Copyright Office in the US, and at least in some other parts of the world it's come out to be the same. Human authorship has always been a fundamental, foundational requirement for copyright ownership and for being able to register with the Copyright Office. So that basically takes you out of the ballgame of being able to claim ownership and protect what you're putting out there, or sue others over what you're putting out there.

Speaker 1:

So what if 99% of it's yours? What if 90% is yours?

Speaker 3:

Again 80%.

Speaker 1:

Where's the cutoff going to be?

Speaker 3:

Yeah, and I actually wrote an article about this. I'll make sure that's in the show notes. I put together what I call the spectrum of creativity generation, the creativity generation spectrum. On the far left-hand side of the spectrum, you have just me talking out loud at a conference. Not even a conference, because that has technology involved. Just talking to a group of people, delivering a speech, performing on a stage where I am dancing, not that I dance, but you get the point. That will be in the show notes, yes, I'll send you that. So that's creation without any tools, creation without any assistance.

Speaker 3:

And then the spectrum continues along: okay, you're using a paintbrush, or you're using a microphone system, or you're using sound recording equipment, or you're using a video editor, or other tools like that. And the progression goes, okay, it's human authorship that's being assisted by a tool. Or photography. That's one of the bigger questions right now in the gen AI space: people are pointing back to when photography first came around, because copyright was around before cameras, and originally there was no such thing as a photograph, so it wasn't part of the Copyright Act. So the development of copyright around the advent and invention of the camera is a huge influence for a lot of people trying to apply existing copyright laws and protections to generative AI. So, anyways, the spectrum continues along, and then you kind of hit this middle ground: okay, I'm using Word, I'm using spell check, or I'm using this. When I first created this, Copilot was not even publicly accessible, and OpenAI was not really out yet.

Speaker 3:

So it's this concept of, okay, I'm using Word, where it will almost complete my sentence or has grammar check. Or I'm using a developer environment of some kind where it will complete my code, but it won't write out whole sentences, the rest of it, for you. It's just guessing what the next phrase you might say is. Limited support, exactly. So that would probably not rise to the level of, unless you just do that over and over, in which case you're just going to get gibberish, because that's not gen AI, that's just basic machine learning and productivity models. That's where the Copyright Office would say, well, that's not going to knock you out of being copyrightable, because you used a Grammarly. And that's where I think the AI-powered version of Grammarly kind of gets you into the weeds: well, how much is it changing your writing

Speaker 1:

style.

Speaker 3:

Good point. And so then you continue along the spectrum, and you get to this phase where the Copyright Office has said outputs from Midjourney are not eligible for copyright protection because there's no control. Even if you're prompting 500 times to get to a specific output, it's still a black box of what happens with the model and the software that you're using. So that takes away that human layer of control. And they've even been challenged on it in recent cases, like the artist, I use air quotes there, but the artist that submitted the work Théâtre D'opéra Spatial to the state fair and won. There's a New York Times article about it. He won the prize, and it came out later that it was generative AI. He actually used Photoshop, and he also used another AI tool to upscale the image from, I think it was Midjourney that he used. So there was a little more creative involvement, arguably. But the Copyright Office still said no on the underlying image. There's this thing called the de minimis test. Sorry, a long-winded way of getting to your point of what the Copyright Office has recognized as what's called the de minimis test.

Speaker 3:

Basically, the best way to put it is: how much of the final work product is human-authored, and how much is machine-generated, synthetic content? And I point to all this because, again, we're talking about creativity. This is hugely important if you're working in an IDE. Sorry, an integrated development environment. Like, if you're using GitHub Copilot, that plugs into, shoot, I'm blanking on it. Microsoft has one, Apple has one, there's Jupyter, and yeah, if you're using something like that. A compiler, is that what you call

Speaker 2:

it? No. Or a library?

Speaker 1:

I don't know. It's basically a space where you're writing the code.

Speaker 3:

Yes, I am doing a terrible job remembering.

Speaker 1:

You leave that part to me.

Speaker 3:

Apple has one, Microsoft has one, anyways. So that's where I raise the question: if you're using a Copilot, if you're using a gen AI tool to develop code, you're calling into question your ability to protect that software and its output under copyright law. Software is still hard to copyright; there are a lot of gotcha moments with it. But there's also the same thing with patent law. You can still patent software, and so you might be calling that into question, because on the patent side, patents cover inventions, copyright is for creative works. With patents there's the same issue: you cannot patent something that is not a human invention. And so it's this kind of issue where people are trying to patent stuff that has some level of machine contribution to the invention, and it's like, well, sorry, says the Copyright Office, or says the USPTO. Wow.

Speaker 2:

Like, a quick hypothetical, legally, hit me: if, say, we had a Midjourney where I had a 3,000-word prompt that generated the exact same image every single time, would that not be

Speaker 3:

Would that actually happen?

Speaker 2:

I mean, if it could happen. It can't happen right now, right? Yeah.

Speaker 1:

That's because you can add a random seed to generate the same thing. You can do that in modeling.
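
A minimal sketch of that seed point, in plain Python: fixing the seed fixes the entire pseudo-random sequence, so everything sampled from it comes out identical on every run. Image tools that expose a seed parameter are doing the same thing at larger scale, with the seed fixing the starting noise.

    import random

    random.seed(42)
    first_run = [random.random() for _ in range(3)]

    random.seed(42)
    second_run = [random.random() for _ in range(3)]

    print(first_run == second_run)  # True: same seed, same "random" choices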

Speaker 3:

It's without using that same prompt.

Speaker 2:

No, you're not feeding that output back and circling it back into the prompt. What I'm saying is, could you potentially, like? So I'm thinking back to, what's the movie where the dude invents the intermittent wipers? Flash of Genius.

Speaker 2:

Yeah, yeah. Anyways, he talks about how they were like, it's just a random group of electronics that everybody's got access to. And he was like, well, is a book something that people can't own? All it is is a random string of words that makes sense to people. And so if I create a prompt that's unrepeatable by anybody else unless they've got it in their hand, is that not something that has value?

Speaker 3:

So the Copyright Office, they held a webinar last year when they first put out their guidance around generative AI, and they have put out guidance. I'd recommend, if you want to read it, go read it. You can actually go to copyright, no, sorry, copyright.gov/ai, and that will take you to their AI landing page, and you can find their guidance, or you can find the Kashtanova case that I mentioned and some of the other ones, like the state fair case, the Théâtre D'opéra Spatial one. Sorry, yeah. Back during the webinar they held after they released their guidance, one of the questions was, is a prompt copyrightable? And the response from one of the Copyright Office attorneys was, theoretically, I guess. They were not aware of anybody at that time having attempted to register a prompt, but if it met all the requirements for copyrightability, which is, human authorship, is it human-written? Is it creative? Is it fixed in a tangible medium of expression? So if you're typing it on the computer, then arguably you could submit that to the Copyright Office and register it, probably just as a textual work. And so in that sense, a prompt might be protectable. But in terms of the system, that kind of gets at the heart of understanding.

Speaker 3:

The guidance we have from the Copyright Office right now is basically looking at the fact that a lot of how these deep learning models work is that it is all probability, it's statistical. You cannot control it, at least for now. You can have some guidance on it. Well, sorry, it really depends on how the software is developed, because some software does allow you to fine-tune and have granular controls over how much randomness, noise, all that kind of stuff. We're talking about general-use tools, though, which pretty much don't offer that. And the Copyright Office, I think, even pointed to the fact that with some of the tools you get back four different options. Mm-hmm. That alone shows that you really have no control over the output. You're selecting from four or five or however many outputs there are. That's your selection as a human, but up until that point it's all the computer software doing it.
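
A minimal sketch of that granular randomness control, reusing the toy next-word scores from earlier, commonly called temperature: low values sharpen the distribution toward the most likely output, high values flatten it.

    import math
    import random

    logits = {"road": 4.2, "river": 1.3, "story": 0.7}  # made-up scores

    def sample(temperature: float) -> str:
        # Divide scores by the temperature before the softmax.
        scaled = [v / temperature for v in logits.values()]
        total = sum(math.exp(v) for v in scaled)
        weights = [math.exp(v) / total for v in scaled]
        return random.choices(list(logits), weights=weights, k=1)[0]

    print(sample(0.2))  # nearly deterministic: almost always "road"
    print(sample(2.0))  # far more random: the other words show up often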

Speaker 2:

You made that comment earlier, though, that it's black-box technology. It might as well be magic as far as the user's concerned, because we don't have any input or control over how it's manipulated once we give it a prompt. And the developers are finding that too.

Speaker 3:

I think model developers are seeing that too. That's why they do a lot of research and red teaming and all that, to understand what is capable with this particular model, or what happens when you build on it or sync it with other technologies, other models. That's where you really get into the nitty-gritty of understanding what the capabilities are here, and what the future is going to be. It's kind of interesting when you get into it in that sense. But from the standpoint of quickly adopting these tools: stop and think about what you might be losing out on from an IP perspective, if that's important to you. Yeah.

Speaker 1:

That's good. Well, we are at that point.

Speaker 2:

Yeah, there's so much to talk about.

Speaker 1:

You know, yeah, and I think Franklin has just become a regular, because this is going to continue to be a topic that I think everybody needs to know about. I mentioned to a bunch of executives that we were having somebody like Franklin on, and they were like, which episode is that going to be? Because they're wanting to know what the heck to do, as well as the average listener out there.

Speaker 2:

As well, yeah. And we didn't even get into the creator economy. No, I love the creator economy. But we also didn't even touch on, like, the FTC.

Speaker 3:

You could have a whole episode on just that.

Speaker 1:

Oh, I see. The FTC. Write that one down.

Speaker 1:

That'll be in the near future here. And we stayed away from specific industries, like healthcare, I think, for a reason, you know. We stuck to ones that were relatable, which is the creative side, to explain this stuff, at least the first time, right? And I think it would be really interesting to get into operational modeling, like in the manufacturing space, the finance space, and the healthcare space, and I think there'll be some really interesting things there, Franklin. But we don't want to just cut you off. Anything that you would like to add as a parting shot, or a plug, or anything else? Non-lawyery, yeah.

Speaker 3:

Yeah, so I predominantly post on LinkedIn. I sometimes cross-post to X. I've gotten away from that just because it's hard to pick a platform. But I'm with you, LinkedIn is where I find the most algorithmic favoritism. I have creator mode turned on there. So, yeah, I'm on LinkedIn. It's pretty easy to find me on any social media platform, I'm Franklin Graves. I also have a newsletter that focuses on issues in the creator economy, if that's something you're interested in. Where you find it is on LinkedIn.

Speaker 3:

Okay. But then I also have a website, and, again, I'm terrible at updating my website, but it's creatoreconomylaw.com. Pretty much what I post on LinkedIn with that newsletter, for right now, is the same as what's on there.

Speaker 1:

I got you. Well, yeah, and we'll definitely have those links. Actually, you've already given them to us, so we'll make sure that people get that. Charlie, any last questions?

Speaker 2:

I've had a blast. I thoroughly enjoy living in the times where we get to make the rules and stuff, isn't it cool? And it's really neat talking to you about all the current goings-on and stuff like that in the actual legal realm, because that's ultimately where this is going to get sorted out, right?

Speaker 1:

Yeah, yeah, for sure. Thanks, Franklin, for your time today.

Speaker 3:

Thank you.

Speaker 1:

This would be a quick hour and a half, is what it looks like we're at. And we want to thank all of you for listening here today. We'll thank also the Nashville Technology Council for the space to be able to do this. If you like what you heard, please follow Franklin Graves on LinkedIn, but also go ahead and follow us on your favorite podcast player. That's Data 4 All: data, the number four, and all. You can find us on our website, and all of our videos on our YouTube channel, at data4all.io. And for the Data 4 All podcast, I'm Charlie Apigian.

Speaker 1:

And I'm Charlie Yielding. Until next time.

Responsible Data and AI in Podcasting
Access to Justice and Legal AI
Responsible Data in Tech Industry
Responsible AI and Privacy Laws
Shared Responsibility in Platform Moderation
Data Provenance and Privacy Rights
Content Scraping
Ethical Considerations in AI Influence
Making Responsible Data Decisions
Data Privacy and Writing Tools
Exploring Copyright Issues in AI
Data for All Podcast Thank You