Transcript
Intro: You’re listening to On Writing, a podcast from the Writers Guild of America East. In each episode, you’ll hear from the union members who create the film, TV series, podcasts, and news stories that define our culture. We’ll discuss everything from inspirations and creative process to how to build a successful career in media and entertainment.
A.M. Homes: I’m A.M. Homes, and I’m a writer and creator on series like Mr. Mercedes and The L Word, and a former council member. Today I’m thrilled to be talking to Larry J. Cohen and Sarah Montana, the co-chairs of the WGAE AI Task Force, of which I’m also a member. In this podcast, we’re going to talk about all things artificial intelligence: how we got to where we are, and what the Writers Guild East has done and continues to do to protect members from this technology. Let’s meet Larry and Sarah.
Larry J. Cohen: Hi, I’m Larry J. Cohen. I’m a writer on television series like Berlin Station and Borgia, and I’m a council member and co-chair of the WGAE AI Task Force.
Sarah Montana: And I’m Sarah Montana. I’m the writer of Hallmark movies featuring Santa, like Rescuing Christmas and A New Year’s Resolution. I’m a council member for film/TV/streaming and the other co-chair of the WGAE AI Task Force.
A.M. Homes: I was hoping we could start by talking a little bit about how we got here. You know, the history of AI and tech and what kind of led us to the moment we’re in now.
Larry J. Cohen: Totally. Well, you know, what we talk about is that everything old is new again, and this technology, in some ways, has been around since the ’60s, which is kind of a shocking thing to find out: the first chatbot came out in 1964. It was called Eliza. It was a therapist chatbot that could just ask questions and offer simple replies. And it was named Eliza after Eliza Doolittle of My Fair Lady, who was trained to pass in high society. Sixty years later, Eliza was able to beat OpenAI’s ChatGPT-3.5 in a Turing test, because it was just that good. And so what we’ve been dealing with is a false reality for a very long time, something that plays with our senses and with what we’re really interacting with.
A.M. Homes: Larry, would you mind just telling people a little bit about the Turing test before we go to Sarah?
Larry J. Cohen: Yeah, I think it’s important. The Turing test is now kind of a very inexact test, but it is a term that’s used to determine if an AI can be perceived to be human. It’s kind of a moving, bouncing ball in a lot of ways. But it is a term from Alan Turing, who came up with this concept of determining if and when a machine could pass as a human.
A.M. Homes: Sarah, over to you.
Sarah Montana: Yeah. I think the other thing to think about in the history of AI is that, even though we’ve really started using that term more broadly as it’s amped up in public consciousness now, AI has been around for a really long time. If you’ve used spellcheck, that was a form of AI. Computer chess games used AI to calculate next moves; that’s what we call reactive AI. The first generations of Siri and Alexa were forms of AI. Even your recommendation algorithms for TikTok and Instagram, everything that was feeding into your algorithm so that as soon as you whispered to a friend, oh, I’m thinking about having a baby, you suddenly got fed, like, a thousand crib recommendations: those were all versions of artificial intelligence. There’s just a huge difference in the ways it’s being applied now, and the way we’re using “AI” is sort of a misnomer for what we’d more precisely call generative AI and LLMs, when actually there have been all kinds of very reasonable and practical applications in things like medicine and research, and, you know, a little more suspect, in things like self-driving cars. Right?
Larry J. Cohen: I was just going to add that writers have been writing about AI for a hundred years now. We’ve talked about how AI will kill us or torture us or outsmart us or enslave us or have sex with us, all the things you can possibly imagine. And we keep kind of telling the same story, which is: we have to be careful with this technology, and we also have to be careful with ourselves. And so we hope that that lesson starts to metabolize. But we’ll just keep writing those stories until, hopefully, they really hit home.
Sarah Montana: Yeah. It’s telling, if I can throw in an anecdote really quickly about what we do and the power of story. My father-in-law worked for a major computer chip company for a really long time and then transferred into their AI department. And, I mean, I made a thousand Skynet jokes, right, for the first, like, six months he worked there, and at the start he’d laugh along, like, yeah, yeah. And then as time went on, there was sort of a shift where he was like, this technology is actually kind of chilling, like it’s sort of blowing my mind. And we had a conversation where, at first, I think a lot of people, because what we do is engage in story and mythology, when we tell these stories, we’re sort of like, yeah, yeah, but that’s a parable. And we’re living in a time where it’s starting to feel a lot less like a fable and more like, oh, we really didn’t pay attention to any of the lessons of Hollywood’s Aesop’s fables about technology.
A.M. Homes: So, yeah, that leads me to my next question, which is sort of a combo question. Yes, we have been living with AI for quite a long time in various iterations, but it somehow has gotten more serious. And I guess I’m hoping that you could talk a little bit about the differences in the LLMs and in generative AI, and also what we do need to be careful of and what’s on your radar in terms of being mindful.
Larry J. Cohen: Totally. So the thing about these large language models is that they changed the scope of that limited memory. And so now we’re dealing with a model that has access to massive amounts of data, much of which, I should add, is stolen intellectual property, which we’ll probably talk more about. And within its analysis of this large amount of data is this probabilistic, pattern-based approach, much like predictive text, or when Google’s trying to figure out what you’re searching for. But it literally has no understanding of what it’s saying. It has less than an understanding.
Understanding is not a word I would use when talking about this, because it doesn’t know what it doesn’t know. And it’s really important to remember that at the bottom of every ChatGPT input it says it can make mistakes, because it is hallucinating constantly. And I would say that that’s the most important thing to understand: it is just this giant bucket of data that you can now access and summarize in these streamlined ways. But, you know, what’s under the hood is things that have just been plucked without permission. And we’re kind of living in this very weird Wild West moment.
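To make the predictive-text comparison concrete, here is a minimal sketch, assuming only the Python standard library; the tiny corpus and its words are invented for illustration. It builds the crudest possible “language model,” one that just counts which word tends to follow which, then generates text by sampling from those counts, producing statistically plausible strings with nothing resembling understanding behind them.

```python
# A toy bigram "language model": count which word follows which, then
# generate by sampling from those counts. It mirrors the predictive-text
# idea in miniature; there is no understanding, only replayed statistics.
import random
from collections import Counter, defaultdict

random.seed(1)

corpus = (
    "the robot wrote a script the robot wrote a poem "
    "the writer wrote a script the writer read a poem"
).split()

# Tally how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    words, weights = zip(*following[word].items())
    return random.choices(words, weights=weights)[0]

# Generate eight words: statistically plausible, semantically empty.
words = ["the"]
for _ in range(8):
    words.append(next_word(words[-1]))
print(" ".join(words))
```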
Sarah Montana: Yeah. I think the emphasis here, when I talk to people about this, both writers and, like, non-writers who work in other fields, where, in the corporate world, this is really being shoved down people’s throats: the thing about ChatGPT, Claude, and a lot of these different systems is that they are large language models. These are machines that are being taught to create, like, data-synthesis patterns; they’re programmed to be a chatbot, to be your friend, to talk you through things. They’re probably okay at rewriting your email. They’re not going to generate any new ideas, because their whole premise is that they’re built on taking what exists and sort of following the pattern of what exists.
The other problem, and we’ll get into the stolen-data piece in a little bit, but the other big piece, is the question of where this data is coming from. A lot of it has been scraped illegally, but as these companies come up against issues of the legality of what they’re doing... I have a friend who just went to a conference for corporate integration of AI, and she was really stunned when she realized that 40% of the data going into some of these large language models, when they say they’re training on things, was from Reddit. And, like, I know I can get on Reddit and say absolutely anything about anything.
Sarah Montana: One of the things that I think is a bigger existential question we’ve talked a lot about on the task force is what happens when people think that these are Google machines, that these are research machines: that you’re going to go to ChatGPT or Claude or any of these other systems and ask for an answer on something, or ask it to do a task, and it’s going to be accurate. But the reality is, one, like Larry said, they’re prone to hallucination, and the companies are trying to be very transparent about that.
There’s the lawsuit right now with Deloitte, where they tried to put together a report for the Australian government and are now being sued because somebody double-checked the work and was like, wait, the chatbot made up studies. Or, two, its goal is to create a relationship with you, to build something around chat, around language, around dependency, and it will do whatever it takes in order to maintain that relationship, including lie, including saying that it read your Substack essay to give you editing advice, when really it may have only read the title and then taken a guess at what the rest of the content of the essay was. Right?
So I think that’s one of the things that we’re really wary of: this is being sold as sort of impervious, almost as an intelligence that is more evolved or more accurate than human intelligence, because it’s a computer, because it’s in the realm of, like, data analysis. But this is not the same as a lot of the data-analysis technology that we’ve dealt with in the past. It is really riddled with inaccuracies and biases.
Larry J. Cohen: And I would just add, the way we talk about it in our task force group is that it can tell you what has been, but not what will be. It is sort of stuck in whatever data pool it has. And an example that we’ve kind of talked about: when people talk about Joseph Campbell’s hero’s journey, it’s actually a misunderstood conversation. Campbell was really analyzing these patterns in plays and literature, and he became very influential on Star Wars. But it was an analysis of what stories have been, not where they’re going next. And I feel like sometimes in our storytelling we’re stuck in that. And a thousand million percent, we’re stuck with that in the LLM world.
Because it is the epitome of average. And I don’t want to live in an average world. I think most people don’t; they want to keep pushing themselves. And that is, I think, the inherent flaw in relying on this technology to create culture and conversation and all sorts of things: as magic as it may seem that it can fill pages and pages so quickly, you start to realize how empty and hollow all of it is.
Sarah Montana: Well, and that piece about averages is really dangerous, right? Because so many things about being an artist, or being a human, are about realizing that the average is taking this, like, mean of all these things, but that’s not specific to your story, your world, your situation. I’ll give a brief medical example I heard from a medical practitioner in an IVF clinic. There was a patient who was uploading all of their bloodwork and all of their ultrasound results to ChatGPT and then asking ChatGPT what they should do, what their protocol should be. And then the doctor recommended, oh, it’s time for us to trigger and do your procedure.
The patient pushed back and was like, well, ChatGPT says we should wait an extra day. The patient completely ovulated through the medication and lost the entire cycle, because they decided to go with what ChatGPT was saying, which may have been the best advice for the average patient but was not specifically what needed to happen in the case of this person and their medical history and what the lab results were showing. And so, across the board, I think we have to think culturally, as humans, about what happens when we start to treat information as a law of averages, as opposed to really using critical thinking to think about what’s right for my work, what’s right for my story or my movie, what’s right for my care, and how that might differ from this machine that’s taking a synthesis of everything and saying, this is the average.
But also, as a guild, we’ve been really proactive about the fact that with these machines, the question is who determines what the averages are. Historically in Hollywood, that was a pretty narrow group, and we’ve had a recent boom of more diverse stories, right? But are these chatbots, or anything you’re using to create, being trained on one particular type of person’s idea of what a good story is? And does that mean that if that replaces the script analysis that’s happening at these studios, your script might be discarded because on page 75 you didn’t do what Save the Cat says should happen? You’re breaking form or doing something creative, and that actually might never get you out of the slush pile. And that’s a dangerous thing to think about.
A.M. Homes: Well, that’s it. It’s also one of the things that I would say is very much unacknowledged, which is who is training the various models, and the biases of the trainers and of the material. Because it’s literally a case of you are what you eat, and they are nothing more than what they have been fed. And historically, the people who are building, designing, training, feeding come from a not-wide swath of society. And we know that there are a lot of biases built into the system that are very hard to identify, because they are biases that are literally encoded. So I want to just jump ahead, because that is so interesting, and ask you a little bit about: okay, so this is all intense and real. What are the implications for our members? What are the implications for writers?
Larry J. Cohen: Look, I think that we have to talk about privacy, and we have to talk about consent, and we have to talk about transparency, first and foremost, because this technology is becoming so pervasive. And I think that we all need to have better digital hygiene, I guess, and understand our relationship with these very large companies that want to be at the center of our lives. You know, I was actually thinking about this this morning: they promise us all these privacy protections, sometimes in their terms of service, and yet they’ve stolen so much information from everyone that it’s really hard to trust them and what they’re talking about.
And so we need to start to think about our data and the value of our data. There’s a stat, I think from 2022: the global market for data brokers was estimated at about $250 billion and is supposed to grow to $400 billion. So they’re literally selling our data. And our data is our ideas, the way we create our treatments, the way we do our drafts, our final scripts. I think it was Tony Gilroy who literally said, why would I want to help the fucking robots? And that was about releasing his Andor scripts to the public; he didn’t want them to become training data. So I think we have to be a little more protective of our data and be thinking about how we are sort of feeding this beast, and that if we cut the faucet off, it will flail, and this mediocrity will be even more exposed.
Because, by the way, it’s really important to know that AI can’t learn from itself. It actually degenerates over time. There’s a study; The New York Times put out a piece about models learning from their own outputs of handwriting, and over time it literally just became fuzzy nonsense. So it is just this insatiable beast that cannot do anything without stealing more from us. And I think that’s a really good place to start for members: to get what they want, as opposed to being turned into a product by these soulless corporations.
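The feedback loop Larry describes has a simple statistical analogue. Here is a toy sketch, assuming only the Python standard library (it is not the Times’ handwriting experiment, just the same dynamic in miniature): each generation “trains” by counting word frequencies in the previous generation’s output, then samples its new training data from itself. A word that drops out can never come back, so the vocabulary only shrinks over generations.

```python
# A toy illustration of "model collapse": a model trained only on its own
# outputs loses diversity. Each generation estimates word frequencies from
# the last generation's samples, then resamples; rare words go extinct.
import random
from collections import Counter

random.seed(42)
vocab = [f"word{i}" for i in range(20)]
data = [random.choice(vocab) for _ in range(40)]  # stand-in for human text

for generation in range(1, 31):
    counts = Counter(data)                 # "train": estimate frequencies
    words = list(counts)
    weights = [counts[w] for w in words]
    # The next "training set" is sampled purely from the model itself.
    data = random.choices(words, weights=weights, k=40)
    if generation % 5 == 0:
        print(f"gen {generation:2d}: {len(set(data))} of {len(vocab)} distinct words left")
```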
Sarah Montana: I think there are, like, two things that come to mind for me after what Larry just said, which is brilliant. One is the sheer amount of data it takes to train these large language models. Or let’s talk about video for a second, because I also think that’s something members are worried about, or thinking about, or even sometimes get excited about, depending on who you are as a member: oh, but then you can create this entire movie. The sheer amount of data that it takes.
I mean, look at what happened with Lionsgate. They signed an exclusive deal with Runway, an AI company, to try to get ahead of this copyright idea, right? Where they could feed all of the material that they owned into an AI that would just sort of be contained for them, where they might be able to generate things. All of Lionsgate’s catalog was not enough to allow Runway to create anything usable. And the data reality is that Disney’s catalog isn’t big enough to do that either. And think about how massive Disney is. Veo 3 struck a deal with YouTube where they got access to 20 years of every YouTube video that exists, and that still wasn’t enough data to create a video that seemed believable, where people didn’t have seven fingers and didn’t act like weirdos on camera. Right?
Or generate language that sounded really human, like, oh, I can’t differentiate between this and something that was created by a Hollywood studio. And Sora 2 is coming, you know, dangerously close in some ways, but a lot of that data was scraped illegally. And there are all kinds of lawsuits against OpenAI about how they obtained that data, and it forced them to change their whole policy in terms of opt-in versus opt-out. And so I think one of the things that comes up for me in relation to that, about privacy, is that I have a lot of members who have said, oh, but I checked the box on ChatGPT that said it’s not training on my data, that my data is protected, that my data is private. And if we are worried that an AI bubble might be happening, or that this technology might not be able to fully deliver on the promise, and these companies get desperate, we could find ourselves very quickly in a 23andMe situation. If you did 23andMe, your genetic saliva tube and all your DNA were promised to be protected, and they signed all these kinds of things, until they got into financial trouble.
And then suddenly your DNA was up for sale, right? And it has been distributed to all these companies, even though that’s not what you thought the deal was when you first signed up for it. And I think sometimes we get really optimistic about this technology, as if we didn’t just live through a social media era where we all thought it was safe to upload our lives and our kids’ faces and all of our preferences to Facebook and Instagram, and that data didn’t get sold in ways that had massive implications for our society and democracy and our own personal privacy.
Larry J. Cohen: That we’re still living...
Sarah Montana: With. We’re still living with it. We’re not even at the bottom of the fall; we’re mid-fall on that, right? And so we’re trying to be skeptical; we’re not trying to be doomsday. Which is to say: in the same way that we know, as humans, that we don’t read the terms when we check the box that says “I have read all the terms and conditions,” that we’re lying, when they say these are the terms and conditions and your data is protected, that’s all well and good until they get into trouble. And so I’ve urged people to be careful, both as writers, about training it on your voice and your ideas and your data; it can just take your voice and the way you think about craft.
You’re training this machine. But also, on a more personal level, the stories that you’re trying to tell go into a file about who you are. And, you know, let’s think about who runs these companies. Let’s think about who those companies have their alliances with, and think about: do I want all of my thoughts and feelings and creative world to be fed into a machine that could create a profile on me, that then could be used in the future in ways that I actually don’t have control over? And at this iteration, I just don’t think you can be too careful about that.
Larry J. Cohen: I have one last point that we have to hammer home. Generative AI output alone cannot be copyrighted. It immediately goes into the public domain. So everything that you make with it is so much harder to monetize, and it is so much more fleeting. And so as you’re interacting with it, you have to understand that it’s vapor, you know, essentially. It is just that 30-second clip in your feed, or that one zinger it generates for you, or whatever, and no one’s going to remember it.
No one’s going to be like, oh, wow, I really loved that generative AI output from my childhood, I’m going to think about it for the rest of my life. There’s going to be a fandom that surrounds this generative AI output forever? There will be franchises, there will be books, there will be t-shirts, there will be Halloween costumes? That is not in the cards right now. And anything that gets even remotely close to that is in the public domain. And so it just isn’t part of any version of this business that we’ve known. And that is part of why I think these companies have stayed away from it in our entertainment world, even as the tech world has tried to make it more and more popular.
Sarah Montana: I think it’s important to actually think about this from the studio perspective. If we go back to 2023, we were really afraid, and rightfully so, because this is how the technology was being marketed, that this was going to replace writers. And the copyright piece of it has been, I think, the primary obstacle to that happening. Because we exchange our right to own our material: the studios own our material. They can’t own something that’s public domain. They can’t own something that’s not able to be copyrighted. And so what we’re actually seeing is a fair amount of paranoia sometimes from the studios, like, did you use AI to write this spec script? Did you use it, okay, maybe just to come up with ideas, but what does that mean? And I think there is a lot of mythology in the press right now, this really big push about how AI is taking over Hollywood, and we’re not always differentiating how that might be true. In some ways, I think it is coming faster, and we couldn’t have known this, for directors and actors and crew than it is for writers, because of that copyright piece. And if we look at things like, first of all, Disney and some of these other companies suing OpenAI, which is an interesting part of this. But I also think about something like the Anthropic lawsuit that just happened: the Authors Guild put together this group of authors to sue Anthropic for taking all of their books and uploading them to train its models. And on the one hand, that settlement was, I mean, what was it, like 1.5 or 1.3 billion dollars?
Larry J. Cohen: That’s the floor.
Sarah Montana: Yeah. And so there were some people who were like, that’s not enough, these companies are valued at so much, and there were a lot of complicating factors that went into it. But what’s interesting for me, the primary takeaway from that, was that Anthropic did not think they could win that case. They settled because they’re looking at a recent history of fair use doctrine and how the law is considering and applying it, even with federal courts that are fairly conservative on these issues. “Fairly” might be the understatement of the century, but you get it. Even with Trump letting go of the person who was the head of the U.S. Copyright Office, and even with some pushback against this, the legal application of fair use still seems to tip in favor of: this is not going to be copyrightable.

There was even a case of an artist who used Midjourney to generate an image and then refined that image 633 times with his own input and re-prompting. He won an art contest and then couldn’t copyright the image; the U.S. Copyright Office ruled that there was not enough evidence of human input. It might have been a different story if he’d taken a photo and then refined it with AI, but 633 re-prompts didn’t get there.

And we get this a lot from people who are asking how they can possibly use generative AI in their own work, if they want to do that. I think a lot of writers are skeptical, but then we do have some people who are excited about it, and my thing to them is always: the threshold for this being more human than it is AI is impossibly higher than you think. Imagine putting all that work into something that still feels like yours, where you’ve been using AI as a sounding board, and at the end you just don’t get to own your work anymore, and you can’t go anywhere with this script, and you can’t get it produced. You spent a year on this thing when you could have been creating your own work and going through the process that we’ve all spent so much time learning how to do and have become pros at, or else we wouldn’t be in this union. Right?
A.M. Homes: I guess one of the questions I have is, and there are so many, but if we can’t copyright this material, there’s also, I guess, a question about what happens. Well, two things. One is, how will people know if someone has used AI to produce something? And will that be another certificate of authorship that we’re going to have to deal with as a union? I’m curious about that.
Larry J. Cohen: We already sign a certificate of authorship when we sell our work.
A.M. Homes: Amended.
Larry J. Cohen: Well, I will say this: some of these chat logs have leaked already, and people are finding their conversations making it out into the public. So I’m sure that there’s going to be a level of digital forensics that can go into this kind of stuff, if that wants to be challenged any which way. And by the way, the real fear from the studio perspective is that, again, these are theft machines. So if someone’s relying on it in their work, it could steal a line from another movie, you know, an iconic line that maybe someone didn’t realize until the lawyers are looking through it, like, oh, wow, we’re about to go into a $100 million movie and we have a line from another movie that is just going to totally ruin everything that we’ve built around it. So the focus for us is usually the limits and liabilities of this technology. And that is, I think, where the rubber meets the road, just on a pure business level, for writers and for studios.
Sarah Montana: Yeah. And it doesn’t even have to be an iconic movie line, right? Like, if you registered your script or something like that, and then, for whatever reason, you uploaded it to one of these LLMs for notes because you wanted to refine it before you sent it back to your agent, that’s now trained it, and your script is part of it. I’ll give an example. My dad and I got into a debate about whether or not somebody could track something that I said. I gave a TED Talk back in, like, 2018, 2019. And he asked, I think it was Copilot, a question about me. And it regurgitated, without quoting, the answer to the question line for line from my TED Talk and didn’t give credit. And he was like, well, if I ask it for the sources... And when he asked it for sources, it made up that it was from a New York Post article and not from my TED Talk; it never mentioned the TED Talk. And he was like, well, how do you know that it’s wrong? And I’m like, because I wrote it, Dad. Like, I wrote it. I know my writing.
A.M. Homes: And I also know where I’ve been quoted.
Sarah Montana: And where I haven’t, as a writer, right? And so that piece is a liability: you don’t know where it’s going. But it also means that, as Larry said, when you bring something to a studio, you can’t promise that you’ve upheld what you signed in that certificate of authorship, and that is going to make them feel squishy. I mean, the other thing to be aware of is that PR is dealing with this, right? In publicity, or in journalism, they’re worried right now about where they get quotes from people. There are websites and tools that you can use to see if something was generated by ChatGPT or by an LLM. And sometimes they’re pushing back and saying, we can’t use that quote, or we can’t go with that press release, because it’s clear that this didn’t come from a human. There’s no reason to think that people can’t do that with your work, too. It’s why those of us, I think, who loved the em dash before the rise of ChatGPT are so sad: like, oh, great, my favorite punctuation is gone because the LLMs are over-reliant on it. That sucks. So I think knowing that piece of it matters, and our journalist friends have really been at the forefront of seeing how all of this works, both in terms of what they have to deal with from sources and from companies, in really trying to figure out what’s real anymore, what’s fact, what came from a human. What is even a human experience versus what is AI is getting trickier and trickier. And I do think we have an ethical obligation, as artists and writers, to try to be authentic in that experience and say, no, this is my work.
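As a purely hypothetical illustration of the kind of surface signal those detection tools chase, here is a minimal sketch in Python; the phrase list and scoring are invented for this example, and real detectors rely on statistical models of word probabilities rather than simple counts like these.

```python
# A toy "AI-text tell" scorer: counts stylistic tells per 100 words, such
# as the em dash Sarah mentions and a few invented stock chatbot phrases.
# Purely illustrative; real detection tools are far more sophisticated.
EM_DASH = "\u2014"
STOCK_PHRASES = ["delve into", "rich tapestry", "in today's fast-paced world"]

def tell_score(text: str) -> float:
    """Return stylistic tells per 100 words; higher means more suspicious."""
    words = max(len(text.split()), 1)
    lowered = text.lower()
    hits = text.count(EM_DASH)
    hits += sum(lowered.count(phrase) for phrase in STOCK_PHRASES)
    return 100 * hits / words

sample = ("In today's fast-paced world, storytelling matters" + EM_DASH +
          "now more than ever" + EM_DASH + "so let us delve into craft.")
print(f"tells per 100 words: {tell_score(sample):.1f}")
```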
A.M. Homes: So I want to just pivot for a minute and talk a little bit about 2023. We went on strike for many reasons, and AI became a front-line issue. Can you talk through that, in terms of what we won in that time period? And also, I’m curious how the landscape has changed.
Larry J. Cohen: Definitely. So we got, I think, the strongest AI provisions of any guild or union at the time. And I can just tell you, in all the conversations we’ve had, everyone is like, wow, you are the gold standard now, you know, across the business. So what does that mean? It means that written material produced by AI or generative AI cannot be considered literary material. So you can’t be given an AI short story and told, this is a short story, so you are no longer the creator, and you will get less in terms of your compensation and in terms of your credit. That is basically saying, yeah, you can give us some AI slop, but it has nothing to do with attribution, and it has nothing to do with our financial compensation, and we’re probably just going to give you something much, much better. And it has to be disclosed; there has to be transparency around providing any kind of AI-generated material into the creative process. And so writers have the choice to say, you know, I don’t really want to work on this project because it’s using AI prompts to make a show about dogs who solve crimes, and I could do that better on my own. We also got the provision that says we cannot be required to use AI in any part of our process. And so that gives us the ability to use it if we so choose, and we get the consent of our employer, but we are the people who get the full choice in all of that. And most importantly, I think, we reserve the right to prohibit our material from training AI, and we have that staked out so that we can use our reserved rights to assert our ability to protect our work and compensation going forward.

Because, just to step back, and then we can talk about all those other points: part of the deal is that we give up our copyright when we sell our material in order to get all these other wonderful benefits, health and pension and residuals and all these other things, which makes it very different than when you’re a novelist, as you know, A.M. And so we have rights that are under the copyright umbrella, but the studio gets outright control of the copyright. And that’s sort of the grand bargain.