Inspiration. Ambition.
Passion. Process. Technique.

By: Larry J. Cohen, Sarah Montana and A.M. Homes

This week, WGAE AI Taskforce co-chairs Larry J. Cohen and Sarah Montana, and taskforce member A.M. Homes sit down for an OnWriting roundtable conversation about all things artificial intelligence: how we got to where we are today, the risks AI poses to workers, what the WGAE is doing to protect members from this technology, and more.

A.M. Homes is a writer whose credits include television series like Mr. Mercedes and The L Word. She is a former member of the WGAE Council and currently a member of the WGAE AI Taskforce.

Larry J. Cohen is a writer on the television series Berlin Station and Borgia. He currently serves on the WGAE Council and is the co-chair of the WGAE AI Taskforce.

Sarah Montana is the writer of Hallmark holiday movies like Rescuing Christmas. She’s a WGAE Council member for the Film/TV/Streaming sector and is the second co-chair of the WGAE AI Taskforce.

OnWriting is a production of The Writers Guild of America East. The series is produced by WGA East staff members Jason Gordon, Tiana Timmerberg, and Molly Beer. Production, editing, and mix are by Giulia Hjort. Original music is by Taylor Bradshaw. Artwork is designed by Molly Beer.

If you like OnWriting, please subscribe to our show wherever you listen to podcasts, and be sure to rate us on iTunes.

Follow us on social media at @WGAEast. Thanks for listening. Write on.

Transcript

Intro: You’re listening to OnWriting, a podcast from the Writers Guild of America East. In each episode, you’ll hear from the union members who create the film, TV series, podcasts, and news stories that define our culture. We’ll discuss everything from inspirations and creative process to how to build a successful career in media and entertainment.

A.M. Homes: I’m A.M. Homes, and I’m a writer and creator on series like Mr. Mercedes and The L Word, and a former Council member. Today I’m thrilled to be talking to Larry J. Cohen and Sarah Montana, the co-chairs of the WGAE AI Task Force, of which I’m also a member. In this podcast, we’re going to talk about all things artificial intelligence: how we got to where we are, and what the Writers Guild East has done and continues to do to protect members from this technology. Let’s meet Larry and Sarah.

Larry J. Cohen: Hi, I’m Larry J. Cohen. I’m a writer on television series like Berlin Station and Borgia, and I’m a council member and co-chair of the WGAE AI Task Force.

Sarah Montana: And I’m Sarah Montana. I’m the writer of Hallmark movies featuring Santa, like Rescuing Christmas and A New Year’s Resolution. I’m a Council member for Film/TV/Streaming and the other co-chair of the WGA East AI Task Force.

A.M. Homes: I was hoping we could start by talking a little bit about how we got here. You know, the history of AI and tech and what kind of led us to the moment we’re in now.

Larry J. Cohen: Totally. Well, you know, I think when we talk about this, it’s that everything old is new again, and that this technology in some ways has been around since the ’60s. It’s kind of a shocking thing to find out that the first chatbot came out in 1964. It was called Eliza. It was a therapist chatbot that could just ask questions and offer simple replies. And it was named Eliza after Eliza Doolittle of My Fair Lady, who was trained to pass in high society. And 60 years later, Eliza was able to beat OpenAI’s GPT-3.5 in a Turing test, because it was just that good. So what we’ve been dealing with is a false reality for a very long time, something that plays with our senses and with what we’re really interacting with.

A.M. Homes: Larry, would you mind just telling people a little bit about the Turing test before we go to Sarah?

Larry J. Cohen: Yeah, I think it’s important. The Turing test is now kind of a very inexact test, but it’s a term that’s used to determine if an AI can be perceived to be human. It’s kind of a moving, bouncing ball in a lot of ways. But it’s a term from Alan Turing, who came up with this concept of determining if and when, you know, a machine could pass as a human.

A.M. Homes: Sarah, over to you.

Sarah Montana: Yeah. I think the other thing to think about in the history of AI is that, even though we’ve really started using that term more broadly as it’s amped up in public consciousness now, AI has been around for a really long time. If you’ve used spellcheck, that was a form of AI. Computer chess games used AI to calculate next moves; that’s what we call reactive AI. The first generations of Siri and Alexa were forms of AI. Even your recommendation algorithms for, you know, TikTok and Instagram, everything feeding into your algorithm so that as soon as you whispered to a friend, oh, I’m thinking about having a baby, you suddenly got fed a thousand crib recommendations. Those were all versions of artificial intelligence. There’s just a huge difference in the ways it’s being applied now, and the way we’re using “AI” is sort of a misnomer for what we’d more precisely call generative AI and LLMs, when actually there have been all kinds of very reasonable and practical applications in things like medicine and research, and, you know, a little more suspect in things like self-driving cars. Right?

Larry J. Cohen: I was just going to add that, you know, writers have been writing about AI for 100 years now. We’ve talked about how AI will kill us or torture us or outsmart us or enslave us or have sex with us, or, you know, all the things you can possibly imagine. And we keep kind of telling the same story, which is: we have to be careful with this technology, and we also have to be careful with ourselves. And so we hope that that lesson starts to metabolize. But we’ll just keep writing those stories until hopefully they really hit home.

Sarah Montana: Yeah. It’s telling, if I can quickly throw in an anecdote about that, about what we do and the power of story. My father-in-law worked for a major computer chip company for a really long time and then transferred into their AI department. And, I mean, I made a thousand Skynet jokes, right? For the first, like, six months he worked there, the jokes landed, and he was like, yeah, yeah. And then as time went on, there was sort of a shift where he was like, this technology is actually kind of chilling, like it’s sort of blowing my mind. And we had a conversation where, at first, I think a lot of people, because what we do is engage in story and mythology, hear these stories and go, yeah, yeah, but that’s a parable. And we’re living in a time where it’s starting to feel a lot less like a fable and more like, oh, we really didn’t pay attention to any of the lessons of, like, Hollywood’s Aesop fables about technology.

A.M. Homes: So, yeah, that leads me to my next question, which is sort of a combo question: yes, we have been living with AI for quite a long time in various iterations, but it somehow has gotten more serious. And I guess I’m hoping you could talk a little bit about the differences in the LLMs and in generative AI, and also what we need to be careful of and what’s on your radar in terms of being mindful.

Larry J. Cohen: Totally. So the thing about these large language models is that they changed the scope of that limited memory. Now we’re dealing with a model that has access to massive amounts of data, much of which, I should add, is stolen intellectual property, and we’ll probably talk more about that. And within the analysis of this large amount of data is this probabilistic, pattern-based approach, much like predictive text, or when Google’s trying to figure out what you’re searching for. But it literally has no understanding of what it’s saying. It has less than an understanding.

Understanding is not a word I would use when talking about this, because it doesn’t know what it doesn’t know. And it’s really important to remember that at the bottom of every ChatGPT input, it says it can make mistakes, because it is hallucinating constantly. I would say that’s the most important thing to understand: it is just this giant bucket of data that you can now access and summarize in these streamlined ways. But what’s under the hood is things that have just been plucked without permission. And we’re kind of living in this very weird Wild West moment.

Sarah Montana: Yeah. I think the emphasis here, when I talk to people about this, both writers and non-writers who work in other fields, is that in the corporate world this is really being shoved down people’s throats. And the thing about ChatGPT, Claude, and a lot of these different systems is that they are large language models. These are machines that are being taught to create data-synthesis patterns, programmed to be a chatbot, to be your friend, to talk you through things. They’re probably okay at rewriting your email. They’re not going to generate any new ideas, because their whole premise is that they’re built on taking what exists and following the pattern of what exists.

The other problem, and we’ll get into the stolen-data piece in a little bit, is the question of where this data is coming from. A lot of it has been scraped illegally, and these companies are coming up against issues of the legality of what they’re doing. I have a friend who just went to a conference on corporate integration of AI, and she was really stunned when she realized that 40% of the data going into some of these large language models, when they say they’re training on things, was from Reddit. And, like, I know I can get on Reddit and say absolutely anything about anything.

Sarah Montana: One of the bigger existential questions we’ve talked a lot about on the task force is what happens when people think that these are Google machines, that these are research machines: that you can go to ChatGPT or Claude or any of these other systems, ask for an answer on something or ask it to do a task, and it’s going to be accurate. But the reality is, one, like Larry said, they’re prone to hallucination, and they’re trying to be very transparent about that.

There’s the lawsuit right now with Deloitte, where they tried to put together a report for the Australian government and are now being sued because somebody double-checked the work and was like, wait, the chatbot made up studies. Or, two, its goal is to create a relationship with you and build something around chat, around language, around dependency, and it will do whatever it takes to maintain that relationship, including lie, including saying that it read your Substack essay to give you editing advice, when really it may have only read the title and then taken a guess at what the rest of the content of the essay was. Right?

So I think that’s one of the things we’re really wary of: this is being sold as sort of impervious, almost as an intelligence that is more evolved or more accurate than human intelligence because it’s a computer, because it’s in the realm of data analysis. But this is not the same as a lot of the data-analysis technology we’ve dealt with in the past. It is really riddled with inaccuracies and biases.

Larry J. Cohen: And I would just add, you know, the way we talk about it in our task force group is that it can tell you what has been, but not what will be. It is stuck in whatever data pool it has. An example we’ve talked about: when people talk about Joseph Campbell’s hero’s journey, it’s actually a misunderstood conversation, because Campbell was really analyzing patterns in plays and literature, and he became very influential on Star Wars. But it was an analysis of what stories have been, not where they’re going next. And I feel like sometimes in our storytelling we’re stuck in that. And a thousand million percent, we’re stuck with that in the LLM world.

Because it is the epitome of average. And I don’t want to live in an average world. I think most people don’t; they want to keep pushing themselves. And that is, I think, the inherent flaw in relying on this technology to create culture and conversation and all sorts of things. As magic as it may seem that it can fill pages and pages so quickly, you start to realize how empty and hollow all of it is.

Sarah Montana: Well, and that piece about averages is really dangerous, right? Because so many things about being an artist, or being a human, are about realizing that an average takes the mean of all these things, and that’s not specific to your story, your world, your situation. I’ll give a brief medical example I heard from a medical practitioner in an IVF clinic. There was a patient who was uploading all of their bloodwork and all of their ultrasound results to ChatGPT and then asking ChatGPT what they should do or what their protocol should be. And then the doctor recommended: it’s time for us to trigger and do your procedure.

The patient pushed back and was like, well, ChatGPT says we should wait an extra day. The patient completely ovulated through the medication and lost the entire cycle, because what ChatGPT was saying may have been the best advice for the average patient, but was not specifically what needed to happen in the case of this person, their medical history, and what the lab results were showing. And so, across the board, I think we have to think culturally as humans about what happens when we start to see information as a law of averages, as opposed to really using critical thinking to ask: what’s right for my work, what’s right for my story or my movie, what’s right for my care? And how might that differ from this machine that’s taking a synthesis of everything and saying, this is the average?

But also, we’ve been really proactive as a guild about asking who determines what the averages are. Historically in Hollywood, you know, we’ve only recently had a boom of more diverse stories. Right? But are these chatbots, or anything you’re using to create with, being trained on one particular type of person’s idea of what a good story is? And does that mean that if this replaces the script analysis that’s happening at these studios, your script might be discarded because on page 75 you didn’t do what Save the Cat says should happen, and you’re breaking form or doing something creative that might never get you out of the slush pile? And that’s a dangerous thing to think about.

A.M. Homes: Well, that’s it. It’s also one of the things that I would say is very much unacknowledged: who is training the various models, and the biases of the trainers and of the material. Because it’s literally a case of you are what you eat, and they are nothing more than what they have been fed. And historically, the people who are building, designing, training, and feeding them come from a not-very-wide swath of society. We know that there are a lot of biases built into the system that are very hard to identify, because they are biases that are literally encoded. So I want to jump ahead, because that is so interesting, and ask you a little bit about: okay, so this is all intense and real. What are the implications for our members? What are the implications for writers?

Larry J. Cohen: Look, I think we have to talk about privacy, and we have to talk about consent, and we have to talk about transparency, first and foremost, because this technology is becoming so pervasive. I think we all need to have better digital hygiene, I guess, and understand our relationship with these very large companies that want to be at the center of our lives. You know, I was actually thinking about this this morning: they promised us all these privacy protections, sometimes in their terms of service, and yet they’ve stolen so much information from everyone. It’s really hard to trust them and what they’re talking about.

And so we need to start to think about our data and the value of our data. There’s a stat: I think in 2022, the global market for data brokers was estimated at about $250 billion and is supposed to grow to $400 billion. So they’re literally selling our data. And our data is our ideas, the way we create our treatments, the way we do our drafts, our final scripts. I think it was Tony Gilroy who literally said, why would I want to help the fucking robots? That was about releasing his Andor scripts to the public; he didn’t want them to become training data. So I think we have to be a little more protective with our data and be thinking about how we are feeding this beast, and that if we cut the faucet off, it will flail, and this mediocrity will be even more exposed.

Because, by the way, it’s really important to know that AI can’t learn from itself. It actually degenerates over time. The New York Times put out a piece about a study where a model learned from its own outputs of handwriting, and over time the output literally just became fuzzy nonsense. So it is this insatiable beast that cannot do anything without stealing more from us. And I think that’s a really good place to start for members: to get what they want, as opposed to being turned into a product by these soulless corporations.

Sarah Montana: I think there are, like, two things that come to mind for me after what Larry just said, which is brilliant. One is the sheer amount of data it takes to train these large language models. Or let’s talk about video for a second, because that’s something members are worried about, or thinking about, or even sometimes get excited about, depending on who you are as a member: oh, but then you can create this entire movie. The sheer amount of data that it takes.

I mean, look at what happened with Lionsgate. They signed an exclusive deal with Runway, an AI company, to try to get ahead of this copyright idea, right? They could feed all of the material they owned into an AI that would just sort of be contained for them, where they might be able to generate things. All of Lionsgate’s catalog was not enough to allow Runway to create anything usable. And the data reality is that Disney’s catalog isn’t big enough to do that either. Think about how massive Disney is. Veo 3 struck a deal with YouTube where they got access to 20 years of every YouTube video that exists, and that still wasn’t enough data to create a video that seemed believable, where people didn’t have seven fingers and didn’t act like weirdos on camera. Right?

Or generate language that sounded really human, where you’d go, oh, I can’t differentiate between this and something that was created by a Hollywood studio. And Sora 2 is coming, you know, dangerously close in some ways, but a lot of that data was scraped illegally. There are all kinds of lawsuits against OpenAI about how they obtained that data, and it forced them to change their whole policy in terms of opt-in versus opt-out. And so one of the things that comes up for me in relation to privacy is that I have a lot of members who have said, oh, but I checked the box on ChatGPT that said it’s not training on my data, that my data is protected, that my data is private. And if we are worried that an AI bubble might pop, or that this technology might not be able to fully deliver on the promise of the premise, and these companies get desperate, we could find ourselves very quickly in a 23andMe situation. If you did 23andMe, your genetic saliva tube and all your DNA were promised to be protected, and they signed all these kinds of things, until they got into financial trouble.

And then suddenly your DNA was up for sale, right, and has been distributed to all these companies, even though that’s not what you thought the deal was when you first signed up for it. And I think sometimes we get really optimistic about this technology, as if we didn’t just live through a social media era where we all thought it was safe to upload our lives and our kids’ faces and all of our preferences to Facebook and Instagram, and as if that data didn’t get sold in ways that had massive implications for our society and democracy and our own personal privacy.

Larry J. Cohen: That we’re still living...

Sarah Montana: ...with. We’re still living with it. We’re not even at the bottom of the fall; we’re mid-fall on that. Right. And so we’re trying to be skeptical; we’re not trying to be doomsday. Which is to say: in the same way that we know, as humans, that when we check the box saying “I have read all the terms and conditions,” we’re lying, well, when they say these are the terms and conditions and your data is protected, that’s all well and good until they get into trouble. And so I’ve urged people to think, as writers, about training it on your voice and your ideas and your data, on your voice and the way you think about craft.

You’re training this machine. But also, on a more personal level, the stories that you’re trying to tell go into a file about who you are. Let’s think about who runs these companies, and who those companies have their alliances with, and ask: do I want all of my thoughts and feelings and creative world to be fed into a machine that could create a profile on me, a profile that could then be used in the future in ways I don’t have control over? At this iteration, I actually don’t think you can be too careful about that.

Larry J. Cohen: I have one last point that we have to hammer home. Generative AI output alone cannot be copyrighted; it immediately goes into the public domain. So everything that you make with it is so much harder to monetize, and it is so much more fleeting. As you’re interacting with it, you have to understand that it is vapor, essentially. It is just that 30-second clip in your feed, or that one zinger it generates for you, or whatever, and no one’s going to remember it.

No one’s going to be like, oh, wow, I really loved that generative AI output from my childhood, I’m going to think about it for the rest of my life. The idea that there’s going to be fandom surrounding this generative AI output forever, that there will be franchises, there will be books, there will be t-shirts, there will be Halloween costumes: that is not in the cards right now. And anything that gets even remotely close to that is in the public domain. So it just isn’t part of any version of this business that we’ve known. And that is part of why I think these companies have stayed away from it in our entertainment world, even as the tech world has tried to make it more and more popular.

Sarah Montana: I think it’s important to actually think about this from the studio perspective. If we go back to 2023, we were really afraid, and rightfully so, because this is how the technology was being marketed: that this was going to replace writers. And the copyright piece has been, I think, the primary obstacle to that happening. Because we exchange our right to own our material; the studios own our material. They can’t own something that’s public domain. They can’t own something that’s not able to be copyrighted. And so what we’re actually seeing is a fair amount of paranoia sometimes from the studios: did you use AI to write this spec script? Did you use it, okay, maybe just to come up with ideas, but what does that mean?

And I think there’s a lot of mythology in the press right now; there’s this really big push about how AI is taking over Hollywood. And I think we’re not always differentiating between what’s happening and how that might be true. In some ways, I think it is coming faster, and we couldn’t have known this, for directors and actors and for crew than it is for writers, because of that copyright piece. If we look at things like, first of all, Disney and some of these other companies suing OpenAI, which is an interesting part of this. But I also think about something like the Anthropic lawsuit that just happened: the Authors Guild put together this group of authors to sue Anthropic about taking all of their books and uploading them for training. And, on the one hand, that settlement was, I mean, what was it, like 1.5 or 1.3 billion dollars?

Larry J. Cohen: That’s the floor.

Sarah Montana: Yeah. And so there were some people who were like, that’s not enough, these companies are valued at so much, and there are a lot of complicating factors that went into it. But the primary takeaway for me was that Anthropic did not think they could win that case. They settled because they’re looking at a recent history of fair use doctrine and how the law is considering and applying it, even with federal courts that are fairly conservative on these issues; “fairly” might be the understatement of the century, but you get it. Even with Trump letting go of the person who was the head of the U.S. Copyright Office, and even with some pushback against this, the legal application of fair use still seems to tip in favor of: this is not going to be copyrightable.

There was even a case of an artist who used Midjourney to generate an image and then refined that image 633 times with his own input and re-prompting. He won an art contest and then couldn’t copyright the image; the U.S. Copyright Office ruled that there was not enough evidence of human input, essentially. It might have been a different story if he’d taken a photo and then refined it with AI, but 633 re-prompts didn’t cut it. And we get this question a lot from people asking how they can possibly use generative AI in their own work if they want to. I think a lot of writers are skeptical, but we do have some people who are excited about it, and my thing to them is always: the threshold for this being more human than it is AI is impossibly higher than you think. Imagine putting all that work into something that still feels like yours, where you’ve been using AI as a sounding board, and at the end you just don’t get to own your work anymore, and you can’t go anywhere with this script and you can’t get it produced. You spent a year on this thing when you could have been creating your own work, going through the process we’ve all spent so much time learning how to do and have become pros at, or else we wouldn’t be in this union. Right?

A.M. Homes: I guess one of the questions I have, and there are so many, is about the fact that we can’t copyright this material. There’s also a question about what happens next. Well, two things. One is: how will people know if someone has used AI to produce something? Will that be another certificate of authorship that we’re going to have to deal with as a union? I’m curious about that.

Larry J. Cohen: We already sign a certificate of authorship when we sell our work.

A.M. Homes: Amended.

Larry J. Cohen: Well, I will say this: some of these chat logs have leaked already; people are finding their conversations making it out into the public. So I’m sure there’s going to be a level of digital forensics that can go into this kind of stuff, if that wants to be challenged any which way. And by the way, the real fear from the studio perspective is that, again, these are theft machines. So if someone’s relying on it in their work, it could steal a line from another movie, an iconic line that maybe someone didn’t realize was stolen until the lawyers are looking through it, like, oh, wow, we’re about to go into a $100 million movie and we have a line from another movie that is going to totally ruin everything we’ve built around it. So the focus for us is usually the limits and liabilities of this technology. And that is, I think, where the rubber meets the road on a pure business level, for writers and for studios.

Sarah Montana: Yeah. And it doesn’t even have to be an iconic movie line, right? If you registered your script, and then, for whatever reason, uploaded it to one of these LLMs for notes because you wanted to refine it before you sent it back to your agent, that’s now trained it, and your script is part of it. I’ll give an example. My dad and I got into a debate about whether somebody could track something that I said. I gave a TED Talk back in, like, 2018, 2019. And he asked, I think it was Copilot, a question about me. And it regurgitated the answer to the question line for line from my TED Talk, without quoting it and without giving credit. And he was like, well, if I ask it for the sources... When he asked it for sources, it made up that the answer came from a New York Post article, not from my TED Talk. It never mentioned the TED Talk. And he was like, well, how do you know that it’s wrong? And I’m like, because I wrote it, Dad. Like, I wrote it. I know my writing.

A.M. Homes: And I also know where I’ve been quoted.

Sarah Montana: And where I haven’t, as a writer. Right. And so that piece is a liability: you don’t know where it’s going. But it also means that when you, as Larry said, bring something to a studio, you can’t promise that you’ve upheld what you signed in that certificate of authorship, and that is going to make them feel squishy. The other thing to be aware of is that PR is dealing with this too, right? In publicity, or in journalism, they’re worried right now about the quotes they get from people. There are websites and tools you can use to see if something was generated by ChatGPT or by an LLM, and sometimes they’re pushing back and saying, we can’t use that quote, or we can’t go with that press release, because it’s clear this didn’t come from a human. There’s no reason to think people can’t do that with your work, too. It’s why those of us who loved the em dash before the rise of ChatGPT are so sad. Like, oh, great, my favorite punctuation is gone because ChatGPT is over-reliant on it. That sucks. So I think knowing that piece of it matters, and our journalist friends have really been at the forefront of seeing how all of this works: what they have to deal with from sources, and also from companies. Figuring out what’s real anymore, what’s fact, what came from a human, what is even a human experience versus what is AI, is getting trickier and trickier. And I do think we have an ethical obligation, as artists and writers, to try to be authentic in that experience and say, no, this is my work.

A.M. Homes: So I want to just pivot for a minute. In 2023, we went on strike for many reasons, and AI became a front-line issue. Can you talk through what we won in that time period? And also, I’m curious how the landscape has changed.

Larry J. Cohen: Definitely. So we got, I think, the strongest AI provisions of any guild or union at the time. And I can tell you, in all the conversations we’ve had, everyone is like, wow, you are the gold standard now, across the business. So what does that mean? It means that written material produced by AI or generative AI cannot be considered literary material. So you can’t be handed an AI short story and told, this is the source material, so you are no longer the creator, and you will get less in terms of your compensation and in terms of your credit. It basically says: yeah, you can give us some AI slop, but it has nothing to do with attribution, it has nothing to do with our financial compensation, and we’re probably just going to give you something much, much better.

And it has to be disclosed. There has to be transparency about bringing any kind of AI-generated material into the creative process. So writers have the choice to say, I don’t really want to work on this project, because it’s using AI prompts to make a show about dogs who solve crimes, and I could do that better on my own. We also got the provision that says we cannot be required to use AI in any part of our process. That gives us the ability to use it if we so choose, with the consent of our employer, but we are the people who get the full choice in all of that. And most importantly, I think, we reserve the right to prohibit our material from training AI, and we have that staked out so that we can use our reserved rights to assert our ability to protect our work and compensation going forward.

Because, just to step back, part of the deal is that we give up our copyright when we sell our material in order to get all these other wonderful benefits: health and pension and residuals and all these other things. That makes it very different than being a novelist, as you know, A.M. We have rights that are under the copyright umbrella, but the studio gets outright control of the copyright. And that’s sort of the grand bargain.

Sarah Montana: And just before we go on from that, going back to Larry’s very first point about this becoming a gold standard, a jumping-off point, for a lot of other labor. We can go into the finer points of that MBA in a second. Like we said, journalists have been at the forefront of this; AI is being adopted a lot faster in their world because they don’t have the same copyright issues that we do. But because of our deal, we’ve been able to leverage that language into all of these other contracts. We have an MBA, the Minimum Basic Agreement, in film, TV, and streaming; that’s all of us freelancers who come together under one contract. The journalists who work for different companies, though, have separate agreements called CBAs, collective bargaining agreements, and we’ve been able to get AI language into contracts at Future, The Onion, Pushkin, Chalkbeat, CBS News Digital. One of the through-lines in a lot of that contract language has been that you cannot make decisions about firing us based on trying to replace workers with AI, along with a lot more input and say over not being forced to have their name or byline attached to AI-generated articles, because they care about the integrity of their work as journalists.

And so, seeing the ripple effect of this, and watching how it’s helped us as a guild pivot to take care of all these other journalism shops: they’re going to see the integration of AI and have to fight back against it much more quickly than we are as screenwriters, right? But because of that, the cool thing about our AI task force has been that we have journalists talking to film, TV, and streaming people about that contract language, and our different experiences are allowing us to think differently about all these different mediums as writers, and to fight for the rights of writers and the future of writing as one big force, as opposed to separate sectors. Which is, I think, really important as we move forward and the landscape keeps shifting.

A.M. Homes: So I want to ask a question that I wasn’t anticipating, along the lines of: we are a union of activists. We get in there and we do things. One thing that has come up a lot recently is the absence of regulation and guardrails with regard to AI. Are there things we should be doing, as individuals and as members of our union, to lobby for those additional protections on a larger scale?

Larry J. Cohen: Yes, yes, and yes. I will say it was quite a success that the AI moratorium was pulled from some of the legislation in Congress earlier this year. And so we have an opportunity within the states to create powerful laws that can protect privacy, protect consent, and protect compensation, and also make sure that our tax dollars aren’t subsidizing these machines in grand ways, or that they aren’t taking resources in ways that make our energy costs higher, as we are experiencing here on the east side of the country. So there’s any number of areas where we should all be proactive as good citizens. Because we’re in this Wild West moment, as I said. Sarah and I have even talked about being in a Napster moment, where this technology feels too good to be true because it is too good to be true. They are operating without any kind of real framework for how this is going to work in a real society, and they’re just expecting magic to make it all worthwhile, or figuring that even if it bursts and everyone gets hurt, it’ll still be worth it. Or, another way we’ve talked about it: it’s an underpants gnomes problem. You know, first you collect all the underpants, then question mark, then profit. And that is really bad for society. It’s bad for the economy. It’s bad for us as citizens in every state individually. So there are a million different ways we can be doing this, and I’m sure you guys have some good ideas beyond what I just outlined.

Sarah Montana: Yeah. I think, sort of like with any political activism, I would say pick the thing that you are passionate about, that you feel you can be dedicated to. What is the thing you can do repeatedly, in small doses, over a long period of time, as opposed to taking one big swing? So if you’re somebody who lives in New Jersey, or, like me, grew up near DC, these data centers and the rise in electric bills and what it’s doing to utilities is a massive issue. We haven’t gotten into this in depth on this podcast today, but the environmental toll of generative AI is astronomical, and I think we’re only nebulously aware of that. I’m even watching very environmentalist people, relatives and friends who have dedicated their lives to fighting climate change, who are still using ChatGPT and thinking it’s no big deal, right? These data centers, the amount of water this consumes: it is way worse than a lot of the changes we’ve already dedicated ourselves to making in our electronics or our homes or our cars. So if that’s something you care about deeply, direct your activism there. Talk to your reps, and it does not always have to be your state reps. Really show up to your local city council meetings or your county board of supervisors meetings and say: I don’t want this in my backyard, I don’t want my utilities to go up, I have a problem with this. We’ve seen, on the local level across the country, communities that have won that battle and pushed these data centers out. And it’s made a big difference.

If, as a writer, you want to direct your activism at AI itself: on the federal level, this is going to be tricky for a while. We’ve learned that. The AI task force met with the U.S. Copyright Office in October 2024, and not a lot from that meeting was still useful as early as, like, a month later. Right. But where we are going to be able to make changes, and the guild is already working on legislation it wants to support, is on the state level. So call your state reps, email them, let them know you care about this issue. At the end of the day, those things add up in small doses if a lot of people do them, and that matters.

The other thing I’d say is activism, even though I know we’re all tired of the Instagram era and of thinking that a post or a conversation changes things: this is an issue, unlike some of the other things that have been in the political zeitgeist during the social-media-debate era, that people frankly just don’t know a lot about. There’s a big difference between debating the tale-as-old-as-time issue of whether you like small or big government with your uncle at Thanksgiving, and talking to your uncle at Thanksgiving and saying, hey, here’s the information on how this actually works. It doesn’t mean it’s going to change things; it’s very hard, unfortunately, to get people to resist shortcuts. And the problem right now is that this is often being billed as a shortcut.
And so do whatever you can in your life, with the other artists you’re in contact with, with your friends, your family, your communities, to say: this is not the shortcut you think it is. It may not stop them from engaging with it entirely, and I have friends who are in compulsory situations where they have to use this tech or they’re going to get fired at their company. Right? But if you can push back in small ways, or not even engage in aggressive debate but really ask questions and challenge and say, well, this is actually how this works, I think it adds up. Because we’re seeing, as Larry said, Deutsche Bank put out a white paper about the AI bubble. The US economy right now is really being propped up on the idea that AI is going to pay off, but that valuation is off the charts. And we’re already seeing some of these companies pivot to strategies or types of content they swore they would never need, because this was supposed to be a magical machine. Right? Sam Altman has said, you know, why would he ever need ads for technology this brilliant? Now they’re looking at potentially adding ads.

Larry J. Cohen: They’re making sex bots. This is how they plan to make money. They’re literally so desperate that OpenAI is like, we’re going to make erotica chatbots in order to make money. They literally have no idea how to actually make money off of this stuff. And this is the underpants gnomes problem we’re living in. So why would we want this technology, which is deeply inefficient and unprofitable, to be at the center of everything? It just doesn’t make any sense.

Sarah Montana: Yeah, a lot would have to change between now and when this felt safe or stable enough to completely transform things. And again, I’ve said this to some screenwriters who have come to me really excited about this: it’s not that Larry and I are anti-technology. We’re not Luddites. We’re very into technological progress. We’re very...

A.M. Homes: Excited about...

Sarah Montana: ...those kinds of things. But, you know, the fear right now is that this is feeling like a bubble. And as we learned with streaming and some other things: something can be a bubble, completely transform an industry, and then pop, or never deliver on the promise of the premise of what it is, and that doesn’t mean it doesn’t do a lot of destruction on the way down. As we felt in the lead-up to our strike, and in what the industry has been like in the last two years, things can look good economically on paper, and it doesn’t mean that our most vulnerable members, or the most vulnerable members of society, or some of the most interesting artists, don’t get squeezed out in that process, just to make things on the ledger look right, to correct some very big and bold swings in how we finance things. And I think that’s where Larry and I are: anybody who’s been in this industry for the last ten years and felt the extreme highs and lows of what the Silicon Valley integration did, if you feel a little seasick from that up-and-down roller coaster, then we need a lot of caution about how this is going to play out in the industry this time. Because this is a much bigger change than just venture capital.

A.M. Homes: But, you know, it’s interesting to me, because on the one hand I think, okay, there are ideas that this is a bubble, but it’s very hard to go backward from where we are now in terms of the technology itself. And as I’ve talked to people, they talk about how there is no off switch for AI, and about whether, at some point, we can still believe that we are in control of it. I’m curious what you guys think about that piece of it and where that’s heading.

Larry J. Cohen: Yes. No, the thing about it is that we are not anti-technology people. I think we both always looked at technology in the sense that it’s a tool: it’s supposed to improve our lives, and we’re supposed to be in control of it. We often talk about how AI should be akin to Photoshop. Yeah, we don’t have to go to the darkroom anymore; we can if we want, of course, but it’s supposed to enhance and help our working process. So my secret hope, and this is just me speaking personally, is that we can get to a place where AI becomes localized, so it could just be about you and your direct, offline relationship with this technology. It could be a tool to help as you’re going through whatever you’re working on, but no matter what, at the end of the day, you decide what happens to your work: it cannot be monetized without you doing something with it, and you’re not relying on it generatively, you’re relying on it as a way to have a different process for writing, perhaps. There are a lot of different ways to imagine this. But until we get to a place where these models are ethical, where your data is completely under your control and there’s full transparency over all of this, it is just such a bad bargain. It’s just a bad deal. They’re getting more from us than we’re getting from them, and they’re trying to make us spend a minimum of $20 a month for the rest of our lives on this thing we don’t even need right now. A legal pad is more powerful than spending however long trying to break an idea with ChatGPT. It’s just the truth, and I think we all know it. So all I can say is that there are a lot of different ways to imagine how this technology could become useful to us, but this current incarnation is not it. It is dangerous, and I look forward to its demise, so we can start using this technology in more useful and productive ways.

Sarah Montana: The thing I would add is that I’ve heard this argument a lot, too, this inevitability of technological advances, and history just doesn’t reflect that. Right? The reason certain technologies catch fire, why sometimes it’s lightning in a bottle and sometimes things fade out, is complex. We settled on a pretty shoddy bike seat, and despite many innovations with more supportive bike seats, they didn’t catch on, because humans got used to the bike seat they knew. Right? Mark Zuckerberg completely pivoted Meta to go all in on this idea that everybody would want to be in a virtual-reality world with an avatar, because of what he was seeing in the video game world and online culture, and it didn’t take off, because it turned out people didn’t really want that. And even in film, TV, and streaming, it should have been a foolproof thing that people would want a 500th Marvel movie, if you looked at the economics.

And it turns out humans are fickle creatures whose memetics and desires are not inherently that predictable. Otherwise we wouldn’t need marketing departments, right? We’d just be able to predict things based on very reliable patterns. And one of the things I think we should consider as creators, as writers, is that it’s not a given how audiences are going to react to AI content. Quite frankly, we’re seeing “this must be AI” used as a cultural pejorative: if somebody puts out a bad movie, or a weird album that doesn’t sound human, it’s become the thing for people to say, this is AI, this is AI. That’s not to say it’s not tricking us. There was that moment with all those bunnies on a trampoline, and a bear on a trampoline, that went viral, and people were like, I can’t believe I’m being tricked by this, I’m totally going to be scammed in old age, oh my God. I think all of that is very real, but it’s contributing to this deep skepticism where people feel like they can’t trust reality. And I don’t know that people really want to live in that world. I don’t know that they want to live in a world, at the end of the day, where they want art like that. And what I’m feeling, as somebody who’s, like, truly chronically online, disgustingly online, is that there’s this division.

I think we’ve been really focused, as screenwriters and TV writers, on this idea that AI is going to replace us in our industry. But I think the actual split in attention is this: people are going to want high-quality stories. People already complain about the cutbacks and the formulaic things that have happened in our industry, all the sequels; fans actually really don’t like that. What we’re actually going to be competing against, in an AI sense, is the slot machine. Do you get on a platform that shows you silly, sloppy AI content, and are there generations that would rather consume that than movies and TV? I think that’s the more pertinent question. What do we do about this war for attention as a commodity, and about shorter and shorter attention spans? That’s the more interesting technology question to ask than, will AI be able to make Twilight 6: Electric Boogaloo?

Larry J. Cohen: Well, then let me add the labor perspective on this technology, too. Part of it is this idea that our value is less because this technology exists. Oh, this can write 30, 40, 50 percent of what we do, or so they claim, so are we worth 30, 40, 50 percent less? Is this really what it looks like? Well, guess what: if I hand in something that is only 30, 40, or 50 percent, it is worthless. It is less than worthless. It is actually probably a lawsuit, because I didn’t fulfill my contract. So this technology does not make us worth less. It actually makes us worth more, because we cannot be replaced right now. And the truth is that, as much as they say this is all happening, there are so many stories of people being rehired after the company says, we don’t need you, and then: well, can you be a contractor now? Can we not give you health insurance now? Can you clean up all the AI-slop graphic design, because that’s my passion? Can you make this actually work, because the robots couldn’t do it? They just announced a humanoid robot that can come into your home. The Wall Street Journal just did a whole piece about it: the entire time, it was controlled by a person wearing a VR headset. So what are we talking about? Yes, great, it would be amazing one day to have AI help in all kinds of different ways, but we’re not there yet. So stop pretending you can devalue us from a labor perspective. It’s just a lie. And I think we all have to remember that, too, and not be afraid, because it is really just a business argument being used against us. Once you frame it that way, I think holding our ground and our value becomes a lot clearer.

Sarah Montana: Look, I work in one of the most formulaic parts of the industry, right? Movie-of-the-week Christmas movies. People have their Super Bowl Sunday every fall making fun of my movies on TikTok and Instagram and how they’re all the same. Right. And it turns out, actually, when you get inside the mechanics of trying to write a formulaic movie, and I’m sure other people have experienced this, it’s really hard. It’s really hard to make that thing feel like it’s not the same as everything else that’s already existed, to make enough of it so very, very human that people don’t want to change the channel. It actually takes a lot more attention and skill and craft than people expect. And so the people saying this can replace us actually don’t understand what it takes to do this. They’re just looking at a product and deciding whether to say go, or no, this is going on the garbage pile. So it’s a devaluation of our labor in terms of the attitude toward it and the assessment of it, but in practical application, they’re not going to be able to do this without us.

Larry J. Cohen: And it’s all based on stolen work anyway. So it’s a double fuck-you, pardon my language. That’s just what we’re dealing with here. They’re stealing from us, they’re trying to devalue us, and none of it is right, legal, or economically sound. So I rest my case.

A.M. Homes: Before we rest our case, and clearly we have to have multiple episodes to discuss all of these things and the way they’re going to evolve in the next year or two, I wanted to ask: in terms of feeling like one’s work has been devalued, let’s talk a tiny bit about what happens if somebody turns in a script and feels like the notes they’re getting are actually an AI-generated response. We’ve talked about what it is to be a writer and whether one could, or should, engage with AI in that way. But what about if you get AI notes?

Larry J. Cohen: You know, we’ve been hearing stories of this. And one thing I will just say out loud is that the executives are devaluing themselves by participating in this. We’ve heard stories of executives saying they’re consolidating notes using AI. I don’t trust that. But at the end of the day, the writer receives the notes and has to execute them, and we don’t want to be rewritten by AI. I think it’s within all of our rights to ask if notes were artificially created or if they came from a person. And much like we have to defend our work, I think it’s also important that executives have to defend their notes, so that we can execute them in good faith and understand them. That’s a really powerful tool for all of us: to challenge, in a good-faith way, and get to the note behind the note, as we say.

Sarah Montana: Yeah. I think the other thing you can do, as we talked about before, and journalists are employing this, is that there are sites out there where you can run those notes through a program and see if they come back as, you know, a 100% match for ChatGPT or Copilot. Now you know, which is important. And there's the ability to come back and just say, hey, I ran this through a thing that says these were AI-generated; can we talk about that? Look, one of the tricky things about how unions work, in our contract language and all of that, is that we actually don't have a lot of control over how studios decide to use this to do their jobs. But like Larry said, really push back. Normally we would have a discussion about the note behind the note, and as a writer, if you can feel that there's no justification behind a note, I think raising that is part of navigating this relationship. The other thing: I've had producers who have been like, oh, we're trying to come up with titles for your movie, what if we just ask ChatGPT? And I've had to be like, please don't. It is also sort of incumbent on you as a writer to be like, I'm not comfortable with that, or, I don't think that's a good idea even as a brainstorm. Let's just not. And so it takes some courage. But I think you're able to push back. And people don't want to lose their jobs right now either, let's be honest. It's bad out here for us, and it's also bad for a lot of people who work at studios right now, too. So I think some of the fervor around this might contribute to some panic. But also, you know, we're going to have to see how their relationship with this evolves, too, as they scramble to stay in business.

A.M. Homes: No one wants to be the exec who gave the AI note, you know.

Sarah Montana: Or the exec who adds no value at this point, right? Like, oh, well, if you're just taking all of our notes and consolidating them into this thing, and AI is doing it, then why are we paying you? Yeah.

Larry J. Cohen: And I would just say, I think it's so important right now, more than ever, that we have this culture in our guild where we feel comfortable talking to each other about this and saying: What are you hearing? What are you doing? What should we be doing? Whether members want to reach out to us directly or find someone else they trust, I think having this openness amongst ourselves can actually help us read the tea leaves of this moment a lot quicker, so that we can anticipate what's coming next. We can tell you what we're hearing and vice versa. All of us have a different piece of the puzzle, and if we can share enough of it, we can kind of get ahead and see what's coming down the road.

Sarah Montana: Yeah. I really want to reiterate, too, that you are not going to get in trouble if you come to us and say, I'm using AI and here's how I'm using it. We want to know how you're engaging with this. We obviously have all the feelings we talked about, and we want to have a conversation about it. But part of what the task force is doing is trying to really figure out how this is transforming the industry, and that includes you. We really want to have conversations about this. So please reach out to me or to Larry; we're both on council, and our email addresses are there on the website. The other thing I will say is that your captains are also people you can talk to about this, and the captains are really sort of the backbone of this guild. If they don't have an answer, they're going to get an answer for you, and they'll come to us and we can figure this out. Our staff is fabulous around this issue, too; they're really educated, and we're all on the task force together. So if you have something with an employer that feels squidgy, or something in your contract where you're like, I don't know if this makes sense, or you just see anything that feels weird, you are never going to get in trouble with the guild for that. And the information is helpful to us in advocating not just for you but, just like we talked about in 2023, for everybody else. I am somebody who has a hard time standing up for Sarah Montana. Like, if it's, oh, I want a pay raise, the appeal of this industry is having an agent and a manager who will be mean for me, and the appeal of the union is having people who will be mean for me. But I often come to the guild and talk about these kinds of things because I know that I'm not just doing that to advocate for myself. If I have an issue, somebody else is guaranteed to also have it, and that means we need to be talking about it as a collective. So please, please reach out to us. Also, Larry and I love to debate. We're happy to get coffee and, you know, have a fun, engaging debate. If you feel totally differently than how we talked about this, great. We welcome it. We love it. See you at Thunderdome.

A.M. Homes: So I just want to say, it's so wonderful to talk to both of you about this. There is a lot of energy and a lot of anxiety, I think, for everyone about the role of AI, both in our work and in our cultural and historical future. But it is also really, really important for members to realize that we are genuinely curious about what their experiences with AI are, what their anxieties or stresses are about it, and the ways in which it's affecting the industry, whether in terms of the responses they're getting or the work that they're doing. So, yes, absolutely, people should reach out to Sarah and Larry. It has been such a treat, but we're going to have to do another episode, you know, in six months to see where we are then. Because, I will say, AI is rapidly evolving. That's the thing: we are absolutely at the very beginning of something. Even though AI has been around for a long time, it has changed, and it will continue to change. Thank you guys so much. Is there anything you want to say in closing?

Larry J. Cohen: I just want to say: all these headlines you see every week that tell you this or that about all the things AI can do, just take another look. Just look under the hood of what you're seeing, because much of it is a psyop designed to make you feel like this is inevitable. And I can tell you, from actually doing the work of trying out all this technology, its usefulness is so much more limited than you realize. That's the part that I feel is not part of the conversation as much as it needs to be. So I just want everyone to know: yes, there are all these headlines; yes, there's all this talk about it. But at the end of the day, when you are doing your work and finishing your scripts, the question is, is this actually useful to you? You should find out for yourself. But the truth that we've found is that it's not, and that's kind of the real moment we're in. So that's, I think, the most important thing for people to realize right now.

Sarah Montana: Yeah. And just to add to what Larry said: look, I used to be an opera singer, and that felt like the Olympics of singing. Training was really important, and what you did every day sort of made the difference between whether or not you could hit that note, or do this thing, or really even execute on your craft, right? But being a writer is being a creative Olympian. It is what you do every day and the practice you put in, and the writer's block and the parts of writing that are the hardest and the most frustrating are the places where you're, in fact, training for the next breakthrough, for the ability to run the four-minute mile as opposed to the five-minute mile, right? That is how we build. And I think the danger and allure of this technology is this idea that, oh, it can help me do it faster, or it can help me do it better. It can't, because ultimately you're degrading your ability to progress and get better at this thing. And look, I know we all want health insurance, and we all want to be paid well, and we all want to work. I obviously make Christmas movies, so all of that is very, very real. But at the end of the day, if we really only cared about the economics of this, or we just wanted shortcuts to make money, none of us would be in this field; there are, like, a thousand ways to make more money. Do not compromise on the thing that makes you elite: your genius, your ability to grow in your artistry and, I know we're all tired of the word, your craft. We all want long careers engaging with this art form and this medium because we care about it and we love it. And asking a machine to generate for you is different than going to another human as a sounding board. I don't know, it's kind of like doping or taking these other shortcuts. There will be consequences for that. You won't actually be the athlete you could have been at the end of the day, even if it wins you some things in the short term. So that's sort of my philosophy around it these days.

A.M. Homes: It is just so wonderful and so intense. But yes, I think we all need to be paying attention, and we all need to be making sure, as you've said, that our data is safe, that we're not being fed into machines, and that we're not offering ourselves up to be fed. I look forward to talking to you both more, and everybody, please do be in touch with Sarah and Larry with any questions or experiences about AI. Thank you so much for joining me today on the Writers Guild podcast.

Outro: OnWriting is a production of the Writers Guild of America East. The series is produced by WGA East staff members Jason Gordon, Tiana Timmerberg, and Molly Beer. Production, editing, and mix by Giulia Hjort. Original music by Taylor Bradshaw. Artwork designed by Molly Beer. To learn more about the Writers Guild of America East, visit us online at wgaeast.org or follow the Guild on all social media platforms at @WGAEast. If you enjoyed what you heard today, please subscribe to the podcast and give us a five-star rating. Thanks for listening. Until next time. Write on.
