Hard Fork: Casey Goes to the White House + The A.I. Copyright Battle + HatGPT

Source: https://www.nytimes.com/2023/11/03/podcasts/hard-fork-executive-order-ai-copyright.html

This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.

kevin roose

Casey, I want to talk this week on the show about a technology that is dangerous and that I believe the government should intervene to regulate.

casey newton

What’s that?

kevin roose

Uh, Dots.

casey newton

Dots?

kevin roose

Yes.

casey newton

The Halloween candy?

kevin roose

Yes.

casey newton

I mean, from what I understand, they’re made out of recycled plastic, so I don’t know why they’re feeding them to children. Have you ever tasted one of those things? Good Lord.

kevin roose

I sure did. So as we were getting ready for trick-or-treaters this year, my wife picked up some Dots. It was — I wouldn’t say it’s like a top-tier candy in my estimation.

casey newton

Was all the other candy gone at the store?

kevin roose

Literally, yes. It was the only thing remaining at Target. So we bring home these Dots, and I’m testing the candy, as one does. So I bite into a Dot, and a tooth comes out.

casey newton

(LAUGHING) Wait. That’s, presumably, not out of the Dot.

kevin roose

Uh, it’s sort of jumbled in with the Dot. I feel this hard thing in my mouth, and I realize that I have just broken my tooth —

casey newton

No!

kevin roose

— on a Dot.

casey newton

Dot?

kevin roose

Yes!

casey newton

Is it because it’s so hard and sticky?

kevin roose

Yes, it took off the crown on my molar.

casey newton

No!

kevin roose

And so I had to spend Halloween at the dentist’s office, getting emergency dental work done.

casey newton

That is horrible.

kevin roose

And I went trick-or-treating with half of my face numb. (CASEY LAUGHS)

It was very spooky.

casey newton

You know, I could recommend, actually, a lot of good costumes for that — Phantom of the Opera comes to mind. Really, anything with a mask that covers at least half your face.

kevin roose

Yes, who is this strange, drooling man accompanying a toddler? So yeah, that was not a pleasant way to spend my Halloween.

casey newton

You know what’s so funny about this is that every year, there is a panic around Halloween candy. You know, it’s like, well, you’d better open up every single wrapper and make sure nobody’s stuck a razor blade in that. And we always laugh. We say, oh, you people need to calm down. You bit into candy and had to go to get emergency dental work done.

kevin roose

Yes. Yes. It was very bad. And these Dots — they’re too sticky. We got to do something, and I’m calling on the Biden administration to step in and outlaw Dots.

casey newton

Where’s the executive order on that —

kevin roose

Yeah.

casey newton

— Mr. President?

kevin roose

I’m Kevin Roose, tech columnist for “The New York Times.”

casey newton

I’m Casey Newton from “Platformer.”

kevin roose

And this is “Hard Fork.”

casey newton

This week, I visit the White House to talk to the Biden administration about its new executive order on artificial intelligence. Then, copyright expert Rebecca Tushnet joins to discuss some big developments in the legal battle between artists and AI companies. And finally, an invigorating round — Hat GPT.

kevin roose

Casey, anything big happen to you this week?

casey newton

Kevin, I went to Washington, DC, this week to get some answers about what’s happening in this country related to artificial intelligence.

kevin roose

Yeah, so you got a very exciting invitation this week to go to the White House to actually talk to some officials there about this new AI executive order. And my first question, obviously, was where’s my invite? But my second question is, what was it like?

Because here are the things — I went to the White House once when I was a child, part of a school tour. Very exciting. Remember very little of it. But here are the things I know about the White House. I know it’s where the president lives.

casey newton

That’s right.

kevin roose

I know there’s something called the Oval Office and something called the West Wing. I also know that until recently, there was a dog at the White House, named Commander, who bit people.

casey newton

(LAUGHS): There’s a portrait of Commander at the White House, and I took a picture of the portrait, just because it tickled me.

kevin roose

Did you get a bite, just like a commemorative dog bite?

casey newton

(LAUGHS): I was — let me tell you. From the moment I walked onto the grounds, my head was on a swivel. I’m saying, where is that dog? Because I wanted to meet him and pet him. Because what could be better for the podcast than if I’d been bitten by the President’s dog?

kevin roose

Did you bring some treats?

casey newton

(LAUGHS): No. But it’s funny you mentioned treats. Because we went on the Monday before Halloween, so Monday of this week. I walked down with our producer, Rachel. We kind of took in the sights and the sounds. And as we walk onto the grounds of the White House, there are children in costumes everywhere.

kevin roose

Aw.

casey newton

So I do not see a dog, but I do see a LEGO, a Cheeto, a Tyrannosaurus, a Transformer, a lot of Barbies. And everywhere we went throughout the executive office building, the offices of the staffers had been transformed into some sort of, you know, Hollywood intellectual property is, I guess, what I would say. There was a Barbie room. There was a Harry Potter room.

kevin roose

Wow.

casey newton

The hosts in the White House digital office had transformed their office into something called the Multiverse of Madness. And when you took a left, you were standing in Bikini Bottom from the SpongeBob Squarepants Universe. There were bubbles blowing everywhere.

And I’m setting this scene, because you have to understand, I am there to listen to the President talk about the most serious thing in the world. And while we were interviewing his officials about the executive order, we’re literally hearing children screaming about candy. So it was an absolute fever dream of a day at the White House.

kevin roose

So amid all of the shrieking children and the costumes and the Multiverse of Madness, there was actually, like, a signing ceremony with the President where he did put this executive order into place.

casey newton

That’s right. Yeah. So after we had some interviews at the executive office building, we walked over to the East Room of the White House, which was very full of people from industry, people who work on advocacy around these issues. And not only did the President come out, but the Vice President came out. Chuck Schumer, the Senate majority leader, was there.

kevin roose

Yeah, it was a big deal. So before we get into what you learned from talking with the President’s advisors, let’s actually just talk about this executive order. So I spent a long time going over it this week. It’s more than 100 pages — very long executive order. And it’s also very comprehensive. It’s sort of like a grab bag of regulations and rules governing artificial intelligence in all of its forms.

casey newton

Yeah. And we could dive in in any number of places. I think the part of the order that has gotten the most attention is the aspect that attempts to regulate the creation of next-generation models. So the stuff that we’re using every day — the Bards, the GPT 4s — those are mostly left out of this order.

But if there is to be a GPT 5 or a Claude 3, presumably, it will fall under the rubric that the President has established here. And when it does, it will then have some new requirements, starting with, they will have to inform the federal government that they have trained such a model, and they will have to also disclose what safety tests they have done on it to understand what capabilities it has. So I mean, to me, that is the big screaming bullet that came out — is like, OK, we actually are going to at least put some disclosure requirements around the Bay Area.

kevin roose

Totally. The industry, I would say, was surprised by this. The people I talked to at AI companies — they did not know that this exact thing was coming. And they were also not sure what the threshold would be where these rules would kick in. Would they apply to all models, big or small?

And it turns out that one threshold for when these requirements kick in is when a model has been trained using an amount of computing power that is greater than 10-to-the-26th-power floating point operations, or FLOPS. I looked this up. That is 100 septillion FLOPS.
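
A quick sanity check of that figure: a septillion is 10^24, so 10^26 operations is indeed 100 septillion. The sketch below is a back-of-the-envelope estimate only, using the common scaling-laws rule of thumb that total training compute is roughly 6 × parameters × training tokens; that approximation and both model configurations are assumptions for illustration, not figures from the executive order.

```python
# Back-of-the-envelope check against the executive order's 10^26 threshold.
# Assumption (not from the order): total training compute is approximated by
# the common rule of thumb FLOPs ~= 6 * parameters * training tokens.

THRESHOLD_FLOPS = 1e26  # 10 to the 26th power = 100 septillion operations

def training_flops(parameters: float, tokens: float) -> float:
    """Rough total floating-point operations for one training run."""
    return 6 * parameters * tokens

# Both configurations below are hypothetical illustrations.
for label, parameters, tokens in [
    ("70B-parameter model, 2T tokens", 70e9, 2e12),
    ("1.8T-parameter model, 13T tokens", 1.8e12, 13e12),
]:
    flops = training_flops(parameters, tokens)
    side = "over" if flops > THRESHOLD_FLOPS else "under"
    print(f"{label}: ~{flops:.2e} FLOPs ({side} the threshold)")
```

On those assumptions, a mid-sized model of today's familiar scale lands well under the line, which is consistent with the hosts' point that currently deployed systems are mostly left out of the order.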

casey newton

Wow, that’s more FLOPS than we’ve ever had on this podcast.

kevin roose

(LAUGHS): Well, so right, that was the piece that I think caught the industry’s attention. Another big part of the executive order addresses all of the ways that AI could basically exacerbate harms that we already have, like discrimination, bias, fraud, disinformation. There are some specific requirements in it that government agencies are supposed to figure out how to prevent AI from encouraging bias or discrimination in, for example, the criminal justice system or whether AI can be used for processing federal benefits applications in a way that’s fair to people.

And to me, the big takeaway from this — the thing that, if you know nothing else about this executive order, you should know, is that it basically signals to the AI industry from Washington, we are watching you. Right? This is not going to be another social media where you have a decade to build and chase growth and spread your products all over the world before we start holding hearings and holding people accountable. We are actually going to be looking at this in the very early days of generative AI.

casey newton

Yes, that is true. But it is also, I think, proving to be really controversial.

kevin roose

Totally. So let’s talk about some of the controversies around this executive order. Because the provision that you mentioned, this sort of computing threshold over which you have to tell the government that you are training an AI model, has been getting a lot of blowback from people in the tech industry. So describe what you’re hearing.

casey newton

People are losing their minds, like, legitimately. Like, you can go on X and Threads and see Yann LeCun, who is a major proponent of open-source AI, ringing a bunch of alarm bells. And there really is a huge dispute in this community right now around the idea of open-sourced AI versus a more closed approach.

So briefly, open-source technology can be analyzed, examined. You can look at the code. You can usually fork it, change it to do your bidding. And the people who love it say, this is actually the safest way to do this.

Because if you get thousands and thousands of eyes on this, including people who might not have a direct profit motive, you are going to eventually build safer, better tech, you’re going to democratize that tech, and we’re all going to be better off. Right? And then, you have the people who are taking a closed approach.

And in that group, I would include OpenAI, Anthropic, Google. And they’re saying, well, we do see a lot of potential avenues for harm here. And so instead of just putting it up on GitHub and letting anybody download it and go nuts, we are going to build it ourselves. We’re going to do a bunch of rigorous testing. We’ll tell you about the test, but we are not going to let everyone play with it.

kevin roose

And this debate has been swirling in Silicon Valley for months now, but it really seems to have come to a head over this issue of having to report to the government if you are training a model larger than a certain size. So let’s just talk about that. Because to me, I don’t get the backlash to this.

It’s not telling AI developers, you can’t make a very large model, you’re not allowed to. It’s not even saying you can’t make an open source model that is very large. All it’s saying is, if you’re building a model that is bigger than a certain size, 10-to-the-26th-power FLOPS, or —

(LAUGHS): It’s just very fun to say “FLOPS.”

casey newton

It’s so fun to say “FLOPS.” And the next time one of my friends has a huge failure, I’m going to say, it’s giving 10-to-the-26th-power FLOPS. I’m saying, you FLOPSed so hard, you’re going to have to tell the federal government, bitch.

kevin roose

So it’s just saying, you have to tell the government, and you have to actually tell them that you’re doing safety testing, and sort of, if you’ve found anything dangerous that these models can do. So I would say the people who are objecting to this are not objecting to anything specific that applies to models currently existing.

They’re just — they’re mad that at some point in the future, AI developers may be required to tell the government what they’re doing, which strikes me as being very similar to what companies in other industries — if you’re making a new pharmaceutical drug and you’re trying to sell it to millions of people, you have to tell the government. It has to be approved. So why is this any different than that?

casey newton

So I agree with you, but let me just sort of try to Steelman the other arguments, right? Here’s what I’m hearing from the folks that are in this open-source community. They believe that what we are seeing is the beginnings of regulatory capture.

kevin roose

Now, just define regulatory capture.

casey newton

Regulatory capture is when an industry sets out to ensure that to the extent any regulations are passed, it gets those regulations passed on its own terms. And it sort of pulls the ladder up so that incumbents always maintain the power and challengers can never compete.

kevin roose

Right. Basically, using regulation to draw a moat around yourself, such that smaller competitors who don’t have armies of lawyers and compliance people and people to fill out forms for the government — they can’t compete with you.

casey newton

That’s right. And just to really lay it out, people are making a really specific accusation, which is that Sam Altman from OpenAI, Dario Amodei from Anthropic, and some of the other big AI players here who are taking this closed approach — they did this intentionally, that they do not actually believe that AI poses any existential risk beyond what we have with just sort of ordinary computers.

And they went to the government. They freaked them the hell out. They said, regulate us now, and oh, by the way, here’s exactly how to do it. And now, they are starting to get what they want. And the result is going to be that they are the winners who take all, and everyone else is left by the wayside.

kevin roose

But this is crazy to me. Because it’s not like these companies and the people running them started sort of hyping up the risks of AI recently, right? These are people who have been talking about this — some of them for many years.

I mean, Dario Amodei, Sam Altman — these are not people who became worried about AI recently, just as soon as they had big companies to protect and products to sell. They are people who, I think, are genuinely worried that AI could go wrong and are trying to put in place some common-sense things to prevent that. So I just don’t get this argument, this very cynical argument, I would say, that the people who are talking about the risks of AI are just doing it to enrich themselves.

casey newton

I agree with you. I think where I have a little bit more doubt in my own mind is which approach do I actually think will lead to safety over the longer term. Is it a closed approach where we put very powerful AI in relatively few hands? Or is it one where it is widely available to the public?

And to be honest, that is just an issue where I am trying to learn and listen and read and talk to people. But I’m curious if you have a gut instinct on that.

kevin roose

I mean, my gut instinct is that it was always going to be regulated somehow, right? AI is too powerful a technology not to invoke attention from the government and from governments around the world. This is technology that is not just going to be built into chat bots. This is going to be used in defense, in the financial markets, in education. Kids are going to be using this stuff.

So clearly, there was going to be a point, at least to me, where the government stepped in. Now, that arrived, I think, sooner than I would have thought, right? Because the government is usually pretty sclerotic and slow-moving.

casey newton

The US government in particular.

kevin roose

Exactly. But I think I was not surprised to see that governments are taking a strong and early approach to AI, because it is just such a powerful technology. Now, I think the debate between closed and open-source is, basically, everyone sort of arguing for their own position, right? The companies that make large models — they do see some of the risks of those models.

And I think they’re quite genuine in wanting the government to step in and protect against some of the worst-case scenarios. The open-source people — I think I struggle to understand what they believe. Because I don’t think they’re saying that AI has no risk attached to it.

casey newton

Some of them are. I have VCs who are texting me, saying that you can already make a bioweapon just by googling and that if you think that the AI makes that any easier, then you are a fool. This is what people are talking —

kevin roose

I’ve been using Google for a long time, and it has never once told me how to make a novel bioweapon.

casey newton

(LAUGHS): A challenge with having really good safety discussions about this stuff is that I personally just do not try to use these tools for evil, you know? And so it’s hard to know what is the case here, but I’m with you.

kevin roose

So OK, this is the debate that’s happening in Silicon Valley about the executive order. But let’s talk about your visit to the White House, because you actually did have some conversations with some of President Biden’s advisors about this. What did they say?

casey newton

So on this open-source point in particular, I talked to Arati Prabhakar, who directs the Office of Science and Technology Policy. And I just said, does the government have a stance on whether it wants to see more open-source development or more closed development? And here’s what she told me.

arati prabhakar

If I were still in venture capital, I would say the technology is democratizing. If I were still in the Defense Department, I would say it’s proliferating. And they’re both true.

And that — I mean, this is just the story of AI, over and over again, right? Bright side and dark side. And you just have to understand it and deal with it as it is. And the open-source issue is one that we’ll definitely continue to work on and hear from people in the community about and figure out the path ahead.

kevin roose

That’s interesting. Because it does seem to me, like, if you had asked me what is a Biden White House executive order on AI going to look like, I would probably say that it’s going to be focused much more on the harms, the potential harms, of AI than the potential upsides. But what really struck me about reading this executive order is just how balanced it tried to be, sort of, striking this middle ground between optimism and pessimism — kind of, AI is going to do all these great things, and AI has these potential harms associated with it.

casey newton

Yeah, and I actually put that question to Ben Buchanan, who is an AI advisor for President Biden, about what he was seeing, if there were any green shoots out there that were making the administration say, oh, there’s potentially a lot of good that AI can do for us, for the American people. Here’s what he told me about that.

ben buchanan

I think it’s even more than green shoots. I think we wouldn’t be trying to so carefully calibrate the policy here if we didn’t think there was substantial upside as well. So look at something like microclimate forecasting for weather prediction, reducing waste in the electricity grid and the like, accelerating renewable energy development. There’s a lot of potential here, and we want to unlock that as much as we can.

kevin roose

So that seems like, to me, a pretty balanced view of AI. On one hand, it could help us with microclimate forecasting. On the other hand, it could cause some harm, especially when it comes to things like weapons and cybersecurity. So is that kind of the vibe you picked up from this White House visit in general — is that this is a White House that is trying to cautiously but enthusiastically wade into AI regulation?

casey newton

Yes. And I would say that honestly, this was a pleasant surprise. Right? Like, I write about technology policy and proposed regulations a lot, and I don’t like a lot of what I see. When he was campaigning to be president, President Biden said that we should get rid of Section 230 of the Communications Decency Act, which would mean, effectively, that Google and every technology platform was responsible for everything every person posted on its platform, which I just think would be bad for a lot of reasons we don’t have to get into. But like, to me, that was the worst kind of tech policy, because you’re painting with the broadest possible brush, you’re ignoring any positive use cases, and you’re just sort of legislating with a giant hammer. This is not that approach. These are people who have done the homework, who have been very thoughtful.

They still have a lot to do. Again, the policy reads very sweeping. What it means in practice, I think we’ll have to see how it plays out. But there are good ideas here.

kevin roose

So I guess my big question about this executive order is, like, is this enough? Right? This is a big, sweeping executive order, touches on a lot of different parts of AI, and a lot of different parts of the federal government. But I also remember a time not too long ago where you and I were talking about these existential threats from AI, these kinds of near-term scenarios whereby AI would get so powerful that it would start to displace millions of people from their jobs or improve itself recursively in a way that would allow it to take over and potentially wreak havoc on humanity — like, these things that did not seem super far-fetched to us just a couple of months ago.

And now, I’m hearing you talk about the need for balance and trying to find the green shoots of what AI could do. So has your view changed on AI, or has something in AI itself changed in a way that makes you less nervous? And do you actually think that more regulation is needed?

casey newton

Well, let me take the first question first. Has something changed that has made me less nervous? I kind of go back and forth on this. It depends on the day. There’s some times when I’m using GPT 4, and it does something so good that it is spooky in a way that makes me think, oh my gosh, the future is going to look so different from today. What do we do now?

But then, a week will go by, and my everyday life looks the same as it has for a while, and I think, well, maybe society is actually just sort of adapting to this, and this isn’t quite the disruptive change that I was thinking. It’s very hard to know in the moment what the one-, two-, or three-year future of all of this looks like. And so I try to just keep my eyes focused on, well, what happened today?

That’s kind of the first part of it. The second part of it is the sort of mythical-future GPT 5 and all the equivalents where all the other companies — we just don’t know how good they’re going to be. Like, what we know is that there have been massive leaps in each successive version of these models.

What does the next massive leap look like? As humans, we’re really bad at conceiving of exponential change. Our brains think linearly. And so if we’re one step away from an exponential change, I’m just telling you, it’s like my brain is not good about understanding all of what that is going to mean.

So I don’t want the government to get so far out ahead of things that it is prevented from doing all the things that Ben Buchanan just talked about, like helping to address climate change, for example, using the power of AI. If the government could do that, that could be a great thing. I don’t think we need to slam on the brakes so hard that we don’t allow for the possibility of that.

But do I want the government saying, oh, if you’re going to train the largest language model yet, we’d like you to tell us? I do lean on the side of, like, yes, like, let’s tell someone. I want someone paying attention to this. So that’s kind of where I am. Where are you?

kevin roose

Well, I think one problem and one challenge with regulating AI right now — it is very hard to regulate against theoretical future harms.

casey newton

Yes.

kevin roose

If we know one thing about the history of regulation, in this country at least, it is that often, the biggest regulations are passed in the wake of truly horrendous damage. Right? It took the financial markets collapsing in 2008 for Dodd-Frank to be passed to regulate the banking system. A lot of our labor laws and labor protections came after things like the Triangle Shirtwaist Factory fire, when people died because there were not adequate safety protections at their workplace.

Typically, something very bad happens. People either die or are badly harmed. And then, regulators and legislators step in, pass new laws, write new regulations, try to get things under control. So unfortunately, I think that’s going to be true of AI as well. I think this is sort of a good stopgap measure in addressing some of these potential future harms. But I actually don’t think the real, true, good, robust regulation will arrive, unfortunately, until something quite bad happens with AI.

casey newton

I think that is true. But there is still reason to hope, I think, in this executive order. For example, it talks about using the Department of Commerce to try to develop content authenticity standards for the very meaningful reason of wanting to ensure that when the government communicates to its citizens, that citizens know that the communication actually came from the government. That’s kind of an existential problem for the government.

It’s not a horrible problem today, but it might very well be in a few years. So the government is getting ahead of that. And the hope would be, well, maybe they’re able to develop some authenticity standards, so that when the stuff becomes more serious, we are prepared, right?

It does similar stuff around the possibility of bioweapons. So I do think the smart thing here is, they’re trying to identify, well, what sort of seems like it might be easy to do with a much more powerful version of this thing and start to develop some mitigations today?

kevin roose

Right. And I think what will be interesting to see is not just how the US regulates this but also how the European Union, which is really, I think, ahead of the US when it comes to actually trying to regulate AI — they have this AI Act that might get adopted as soon as next year. And then, there’s this big AI safety summit that happened in the UK this week, where a bunch of AI researchers and executives and industry people and various government officials talked about some of the more existential risks. So I think it’s quite possible that Europe gets ahead of the US when it comes to regulating AI and sort of sets the de facto standard, sort of the way that it’s been happening with social media.

casey newton

Yeah.

kevin roose

All right. So that is the executive order on AI and your trip to the White House. I’m glad you got to go. Was it everything you hoped it would be?

casey newton

I mean, look, here’s the thing. Not to stan for the federal government, but when it wants to, the government can be pretty frickin’ majestic. As a kid, like, you ingest so much mythology about American history and democracy and everything. It’s like, OK, now, you’re in the room, seeing it happen. So yes, I will — at the risk of sounding cringe, yes, I did enjoy my trip to the White House and watching democracy in action.

kevin roose

Will you wear a damn tie next time?

casey newton

I will wear a tie next time. (LAUGHING) Actually, I have to say, our producer, in what was a transparent effort to get me in trouble, asked one of our minders at the White House, don’t most people wear a tie here? And the man looked very uncomfortable, because I think he wanted to not embarrass me, but he was like, yeah, pretty much everybody wears a tie.

kevin roose

Well, good for you. You’ve embarrassed the “Hard Fork” podcast in the hallowed halls of democracy.

(CASEY LAUGHS)

What is wrong with you?

casey newton

I don’t know. The shirt I was wearing — I was like, I didn’t have, really, a tie that would go with that shirt.

kevin roose

God, did you have a belt?

casey newton

Of course I had a belt.

kevin roose

Were you wearing shoes?

casey newton

I was wearing — yes —

kevin roose

Were you dressed as the QAnon shaman? Did you have a Viking hat on? My god!

casey newton

I — I didn’t think enough about it. And I — and I do feel bad, and I want to apologize to President Biden that I was not wearing a tie.

kevin roose

Wow, that’s the last time you’re getting invited back.

casey newton

Yeah, maybe.

kevin roose

Hey, White House, if you want someone to wear a tie next time you invite a representative from the “Hard Fork” podcast, invite the real journalist.

When we come back, Harvard Law School Professor Rebecca Tushnet on why AI image generators may be here to stay, whether artists like it or not.

(MUSIC PLAYING)

So Casey, we’ve been talking a lot on the show about models and copyright, this issue of whether artists and writers and other people whose works are sort of ingested by large AI models have any recourse when it comes to getting paid or credited, or even potentially suing the companies that make these models.

casey newton

Yeah, this feels like one of the big questions in AI right now. We’re using these tools. We’re thinking, hmm, on some level, I actually helped make this thing without my consent. Uh, where’s my cut?

kevin roose

Totally. And it’s been sort of a cloud hanging over the entire AI industry. And this week, we actually got an update on how the legal battle is going. A case brought by a group of artists, including Sarah Andersen, who’s a cartoonist, who we interviewed on the show many months ago — she and some other artists sued Stability AI, the company that makes the Stable Diffusion image generator, along with two other companies, Midjourney and DeviantArt.

casey newton

And wait, by the way, I think we should say, I think this is the first known incident of two “Hard Fork” guests being involved in litigation.

Because we did have Stability AI CEO Emad Mostaque on here.

kevin roose

True. So this case, Andersen et al. versus Stability AI et al., has been making its way through the courts. And this week, a judge made a pretty significant ruling on Monday. The judge dismissed the claims against Midjourney and DeviantArt, two of the companies that had been sued, saying these claims are defective.

casey newton

Which is one of the harshest things a judge can say to you, by the way — is that your claims are defective.

kevin roose

(LAUGHS): Totally. So some of these allegations were dismissed, because the artists’ works weren’t actually registered with the Copyright Office. But there was one claim that the judge did let stand, which is this direct infringement claim against Stability AI.

The judge says, basically, you have 30 days to go back and clarify and sort of refile and amend this complaint. But basically, a big win for the AI companies, because most of the claims brought by these artists were dismissed.

casey newton

Yes. On one hand, that is true. But on the other, the core claim, the one that you mentioned at the top of this segment, is allowed to go forward. And so we are going to see these two sides hash it out, at least a little bit, about whether the artists have been wronged here in a way that can get them some money.

kevin roose

Totally. So I have just been fascinated by this whole area of law recently, because this does seem like kind of the original sin of the AI industry in the eyes of a lot of creative workers — is that the way you build these models, whether they’re image generators or language models or video generator models, is you take a bunch of work, probably much of which is copyrighted, you feed it into these systems, you train the model, and then you can produce outputs that mimic the work of living artists.

And I think, understandably, a lot of people are upset about that. And so this question of, is this legal, is this protected under our copyright doctrine, or do we need some kind of change to the laws to better protect artists and creative workers — that does seem like a really central question in the world of AI right now.

casey newton

That’s right. And so that’s why we said, Kevin, we need a lawyer.

kevin roose

(LAUGHS): Yes. So we decided to bring in Rebecca Tushnet. She is a professor at Harvard Law School. She specializes in the First Amendment, intellectual property, and copyright law. I also read, according to her bio, that she is an expert on the law of engagement rings —

casey newton

Which, unfortunately, we ran out of time before I could ask her all my questions about that. But maybe for a future segment.

kevin roose

Yeah, we’ll have her back to talk about the engagement ring legal issues. I don’t even know where you’d start on that.

casey newton

Well, I recently went through a messy divorce. And that’s a joke.

kevin roose

OK, so let’s bring on Rebecca Tushnet.

Rebecca Tushnet, welcome to “Hard Fork.”

rebecca tushnet

Thanks for having me.

kevin roose

So before we get into talking about this specific case, I want to just understand how a copyright law expert thinks about AI and these AI image generators, and also these language models we’ve been hearing so much about and all the copyright questions that have come up around them. So when you saw things like ChatGPT, Stable Diffusion, Midjourney, DALL-E, start to rise to prominence last year, what did you think?

rebecca tushnet

So I thought that copyright had the tools to handle this, that they’re pretty conventional questions. On the other hand, if people decide that we need something new, we’ve changed copyright laws before. So it’s quite possible that we could fruitfully get a new law. But right now, we do have established principles. And I don’t think that they break when confronted with AI.

casey newton

So that totally surprises me, right? I feel like when we’ve talked about this on the show, it has been in the context of, wow, this seems really new. But what about it struck you as conventional?

rebecca tushnet

So in terms of whether you can get a copyright for the output, we do have a history of saying, OK, at what point does a human being’s use of a machine break the connection between the human and the output? And my view is that a lot of AI output should be uncopyrightable, because it doesn’t reflect human authorship, which we’ve rarely considered before, but have sometimes had to decide, for example, what about a photograph?

And if you’re giving a copyright in a selfie, is that the same thing as giving a copyright in the footage from a security camera that’s running 24/7? And you know, although you sometimes do have to draw lines, that’s not unknown to the law, and we can just decide what our rules are going to be without really disrupting anything, in part because most of the time, it doesn’t come up whether a human is enough involved.

casey newton

So at the risk of derailing, I am just super fascinated by this question. So I can see your point of view. If I just type the word like “banana” into DALL-E and it produces a banana, I could see the argument that I didn’t really have a lot to do with any of that and probably shouldn’t be granted a copyright.

But these days, people are writing these meticulous prompts. It’s a banana that is dressed like a detective in a 1940s noir movie, but he’s at Disneyland, right? And the output of that actually feels like it did have a little bit more human authorship in it to me. But I’m not a lawyer. Like, in your view, is that all sort of the same thing?

rebecca tushnet

So I guess what I would say is I’m still mostly of the opinion that the prompt alone shouldn’t count, although you can find people who disagree. But here’s my pitch, which is you often get a choice of multiple outputs that look quite different from each other. And so I have two questions.

First, are all of them the same thing, or does the fact that they look different show that, in fact, the prompt just didn’t specify enough to be firmly connected as a human creation to the output? And then, the second question I have for this point of view that the prompt should be enough to get copyright is, OK, so what about the ones you reject? You’re like, no, that’s not what I wanted. Are they still yours?

If it wasn’t within your contemplation, like, there’s room for accident and serendipity in human creation. But there’s also a point at which the serendipity is no longer yours.

casey newton

Right.

kevin roose

Right.

rebecca tushnet

And to me, the fact that you get three very different-looking versions suggests that the serendipity is on the machine side.

kevin roose

That’s interesting.

casey newton

So super interesting but not what this case is about.

kevin roose

Yeah, so that’s the copyright issue with the outputs of these models, but this case, the Stability AI case, which also looks at tools like Midjourney and DeviantArt, is about the inputs to these models, the data that they are trained on. And the core question of this lawsuit is basically, does training an AI model on copyrighted material, whether that’s images or something else, count as infringement?

And I’m curious what you make of that argument. Because that’s something that I’ve heard from artists, from writers who are mad that their books were used to train AI language models. What are the copyright implications that we know of how these models are trained?

rebecca tushnet

Again, my view is, we actually have a set of tools for dealing with this. And of course, you can disagree with them. But the background is, of course, the rise of the internet and Google looming large over everything.

So Google, of course, made massive copies of lots of stuff, including things that weren’t put online. So that’s the Google Books project. And the courts came around to the conclusion that this is basically all fair use.

Now, there are things you can do that are not fair, just to be very clear. Right? But Google, for example, with the book project, doesn’t give you the full text and is very careful about not giving you the full text. And the court said that the snippet production, which helps people figure out what the book is about but doesn’t substitute for the book, is a fair use.

So the idea of ingesting large amounts of existing works, and then doing something new with them, I think, is reasonably well established. The question is, of course, whether we think that there’s something uniquely different about LLMs that justifies treating them differently. So that’s where I end.

casey newton

So I think this is an interesting analogy to think about for a minute. Like, if I’m hearing you right, you’re saying, when you think about what Google does, it creates this index of the web, right? It looks at every single page.

And in many cases, it is making copies of those pages. It is caching those pages, so that it can serve them up faster. That is all intellectual property of one sort or another. And then, you enter a query into Google, and it spits out a result, which takes advantage of that intellectual property without reproducing it exactly.

I think the question for me is, is that truly analogous to a situation where I’m a very popular artist, people love to type my name into Stable Diffusion, you get images that look like my life’s work, and I get $0 for that?

rebecca tushnet

And so part of the answer is, well, is the output actually infringing? Right? So if it’s not, then no. And if it is, then actually, I want to start asking questions. Why and who’s responsible for it?

So there’s lots of circumstances where, for example, people can use Google and say, I want to watch “Barbie.” And although Google has made reasonable efforts to make that not the first thing that you get, it’s not impossible to figure out how to use Google to watch “Barbie” without authorization —

casey newton

To find a bootlegged copy that I’m not paying for, yeah.

rebecca tushnet

But we have a robust system for attributing responsibility to the person who tried really hard to find the infringing copy on Google. So there are definitely some principles of safe design. But the fact that they aren’t perfect really shouldn’t be the end of the question, sort of, who’s responsible for it. And the more you get someone saying, like, I tried really hard and I was able to create something that looked like Sarah Anderson’s cartoons after a 1,500-word prompt, I’m thinking that’s on you.

kevin roose

So let’s get to some of the specifics in this case. So there were a number of different claims made by the artists who are suing these AI companies. One of them is this argument that these models are basically collage tools, that their images, their copyrighted works, get sort of stored in the model in some compressed form, and that this actually is a violation of their copyright. Because they’re not truly being transformed. They’re just sort of being turned into these sort of mosaic collage things on the other end.

Now, the companies and people who work in AI research have said, like, this is not actually how these models work. But this is the argument that the artists in this case are making. What do you make of that argument?

rebecca tushnet

It’s a little perplexing. I am also not a programmer, but it does sound fairly consistent when you talk to them, that no, there aren’t pictures in the model. There’s a whole bunch of data. And there are these unusual occurrences, usually, when the data set contains 500 versions of “Starry Night,” where it might get pretty good at producing something that is a lot like “Starry Night,” but for the average image, it’s not in there and can’t be gotten out, no matter how hard you try.

So I would say, in some sense, though, it doesn’t really matter in the traditional fair use analysis. Because courts have generally said, if you’re doing something internally that involves a lot of copying, but if your output is non-infringing, then that’s a strong case for fair use.

casey newton

It strikes me we’ve been talking a lot so far about what is not a copyright violation. It might help me just to remind myself, what is a copyright violation? Like, give me some cut-and-dried cases of, oh, yeah, that’s against the law.

rebecca tushnet

So when somebody hosts a copy of “Barbie” and streams it to all comers, if they do that without permission, there’s going to be a problem.

casey newton

Right. Copyright, at least when it was first conceived, is about literal, identical copies of something that you do not own, that you are directly profiting from.

rebecca tushnet

Right. And then, we’ve expanded it as well to cover the idea of derivative works, which is a contested category, but the basic idea is, if you’re the author of a book, you should have the right to make a movie or a translation of the book — that that’s your right.

casey newton

Yeah, a lot of Kevin’s articles have been described as derivative work.

kevin roose

(LAUGHING) Hey, now.

casey newton

I’m not sure — I’m not sure if that’s legally true, but I just read that online.

kevin roose

So Rebecca, in this case against Stability AI, the court dismissed a bunch of the claims from the plaintiffs, just based on procedural standing grounds. Some of the works that they said were copyrighted actually weren’t registered.

But the one claim that the court did not dismiss was this direct infringement claim against Stability AI. And that really goes to this question of fair use, which is the legal doctrine that allows people to use copyrighted material without a license in some circumstances. The AI companies have argued that basically, what they are doing is protected under fair use, and the artists have disputed that.

And that part of the lawsuit is being allowed to move forward. So notwithstanding everything else that the court ordered here, isn’t one take-away that artists can still argue with fair use that they can still pursue copyright claims, based on the use of their art as training data for these AI models?

rebecca tushnet

So this is the classic thing — can you sue over this? Well, it’s America. You can always sue. Right? Can you win? That’s a very different question. And can you afford to litigate? A completely different question.

But also, this is still very early days. The direct infringement training part of the claim just requires a different fair use analysis than the other claims, which, in general, were about the outputs. And so I would say nobody should really rest on their laurels right now.

casey newton

I was really struck a few weeks back, when OpenAI licensed some old articles from the “Associated Press.” Presumably, many of these articles were already online and could have been scraped by OpenAI for free and used to train their future models. If you’re a lawyer for OpenAI and they say, we want to license that data, as a lawyer, are you thinking, hmm, this could create a perception that this work has value and that we should be paying to license all of it? Or are the laws robust enough that it can do that as a goodwill gesture without incurring any more liability?

rebecca tushnet

Look, people will definitely say, oh, you licensed this, this means you have to license everything. But the law has historically not been receptive to that argument. Because litigation is expensive. So what courts and other fair use cases have said is, just because you were willing to negotiate to avoid a really expensive lawsuit doesn’t mean that it isn’t fair use.

It’s just that fair use can be expensive to litigate. And so it’s reasonable to license, even if you didn’t have to. The question is still, for the people who won’t license or who you can’t find, is that fair use.

kevin roose

And if you are an artist who’s following along with these cases involving generative AI systems, and you’re thinking, well, I want to keep my work out of these systems, or at least be paid some compensation when my work is used to train these systems, do I have, in your view, any legal protections? Or would we need to pass new laws and amend some of these fair use provisions for me to have any recourse?

rebecca tushnet

Well, what I would say is, you’re seeing this rise of voluntary opt-outs. And that’s very similar to what developed with Google. So Google respects what are called robot exclusion headers. Although it’s probably fair use to scrape for many purposes, they still won’t do it.

And so I think a development like that is really powerful, even though it’s not based in any legal requirements. So I would say there’s definitely things you can do in terms of getting paid. I mean, the classic thing about this is, only publishers with big piles of works can ever hope to get paid. Because it’s just not worth it to license on an individual basis.
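
For context, the “robot exclusion headers” Tushnet mentions are the Robots Exclusion Protocol, the robots.txt file that sites publish to tell crawlers what to skip. Here is a minimal sketch of what voluntarily honoring such an opt-out can look like, using Python’s standard-library parser; the site URL is a placeholder, and “GPTBot” is the crawler user-agent OpenAI published for its scraper, used purely as an example. As she notes, nothing legally forces a crawler to check this.

```python
# Minimal sketch of honoring the Robots Exclusion Protocol (robots.txt)
# before scraping a page. The opt-out is voluntary: this only reports what
# the site asked for; a crawler still has to choose to respect it.
from urllib import robotparser

AGENT = "GPTBot"  # OpenAI's published crawler user-agent, as an example

parser = robotparser.RobotFileParser()
parser.set_url("https://example.com/robots.txt")  # hypothetical site
parser.read()  # fetch and parse the site's robots.txt

page = "https://example.com/art/gallery.html"  # hypothetical page
if parser.can_fetch(AGENT, page):
    print(f"robots.txt permits {AGENT} to crawl {page}")
else:
    print(f"site has opted out; a well-behaved crawler skips {page}")
```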

casey newton

You know, at the same time, we’re starting to see companies like Adobe put out models that do compensate artists. I think that right now, even if there isn’t a strong legal case to use — to have to use a tool like that, it does seem like there is a moral and ethical case to use tools that essentially have the permission of everyone involved. And so I wonder if maybe the long-term future here is just that we have to rely more on moral arguments and shame to get the world we want than these copyright laws that are less well suited to the purpose.

rebecca tushnet

Here’s the thing. I’m extremely skeptical about these models. Because again, if they’re done by the big publishers, they are not in the business of actually delivering most of the money to the authors or the artists. Because the fact of the matter is, a lot of the time, the image will not look like anything in the data set.

So you could sort of randomly attribute, I suppose, or you could pass it through the fraction of the time that it looks close to a particular image. And I would just say, are you going to be able to go to Starbucks on that money? I wouldn’t place too many bets.

There are situations where, for example, if you just train entirely on one artist, that might well be different. And that’s a design choice. And right now, there’s a case proceeding, brought by Westlaw, for the copying of its headnotes, where they write their own summaries of a court decision.

And the court said, we’re going to go to a jury on that. And the reason is, Westlaw owns the set on which things are trained. But that’s also to make my point that these licensing deals are not going to help individual authors. The people who wrote the summaries at Westlaw do not see any more money even if Westlaw prevails on this.

kevin roose

So in some sense, the bigger your model is, the more data it was trained on, the more potentially protected you are from some of these claims. It’s sort of a strange incentive that it sets up where if you want to win lawsuits brought by individual creators or publishers, you should just make your model as big as possible and slurp up as much data as you can. Because then, they can’t come back and say, hey, that looks a lot like the specific thing that I made that is protected.

rebecca tushnet

So I see why you say that’s strange, but in fact, it’s exactly how you would make a general-purpose tool. So Photoshop being useful for lots of different things is more clearly a neutral tool than something that’s like, well, here’s a program that will draw Disney characters.

kevin roose

Right, or counterfeit money or something like that. That would be less protected, whereas you can use Photoshop to draw Disney characters and try to counterfeit money. But because it can also do all these other things, the courts are less likely to see that as an infringement. Is that what you’re saying?

rebecca tushnet

Yes.

kevin roose

OK.

casey newton

And we will be trying to counterfeit money later in the show, so stay tuned for that. Curious to see how that works out.

kevin roose

(LAUGHS): Now, I’m not a lawyer, but I feel like I have a pretty good grasp of one of the issues that is at stake here, which is, who does the liability fall on? So if I’m using Photoshop and I create a counterfeit picture of money and I print it out and I try to use it at a store, that’s not on Adobe for making Photoshop. That’s on me.

And that is one of the arguments that you hear from these companies, is we just make the tools. How users use them can be illegal or not. But either way, we are shielded. Is that a sound legal argument?

rebecca tushnet

In general, yes. And so some of my questions are about the tweaked models that create infringing material or people are making, say, to generate porn. But in general, they are taking the models, and then tweaking them themselves to do that. And that’s on them.

casey newton

Well, what I’m hearing is that for so long in our society, the artists and the writers have been living on easy street. But now, finally, along come these new technologies to take them down a peg, and they’re actually going to have to work for a living. So sorry to the artists and the writers out there.

rebecca tushnet

So can I just say one thing, which is that Cory Doctorow has this line about, the problem is capitalism. That is, giving individual artists more copyright rights is like giving your kid more lunch money when the bullies take it at lunch. Because the bullies are just going to take all of the money you give, right?

You can’t solve a problem of economic structure by handing out rights to somebody who doesn’t actually have market power to exercise. Because the publisher is still going to say, well, if you want to publish with me, you’ve got to give me all the rights. And you will say, I would love to be in print, so you’ll do that, which is why I think we need to talk about how we pay artists generally, rather than thinking that we can fix it with AI.

casey newton

Right.

kevin roose

Right. Well, fascinating. And I hope we can have you back if the courts do upend our entire fair use doctrine and push these companies out of business. But —

casey newton

Or if we get into any sort of legal trouble.

kevin roose

(LAUGHING) Yeah. Yeah, any copyright issues, we’ll have you on speed dial.

rebecca tushnet

All right. I’m a lawyer. I’m not your lawyer.

kevin roose

(LAUGHING) OK.

Not yet. Although I did just Venmo you $1, so I think now, officially, you are my lawyer.

casey newton

This conversation is privileged.

kevin roose

Privileged. Yes. Rebecca Tushnet, thank you so much for joining us.

casey newton

Thank you so much.

rebecca tushnet

Thank you for having me.

kevin roose

Motion to adjourn.

casey newton

(LAUGHING) Motion to adjourn?

kevin roose

Is that a good joke?

When we come back, it’s time for Hat GPT.

(MUSIC PLAYING)

Casey, what’s your favorite Halloween candy?

casey newton

Um, I think, at the risk of being a little controversial, I really love a York Peppermint Pattie.

kevin roose

Wow. How old are you?

casey newton

(LAUGHS): Wait, is that —

kevin roose

I like a Werther’s. I like a nice, hard Werther’s candy.

casey newton

Is that considered an old-person candy?

kevin roose

I think so.

casey newton

Look, it’s chocolate, and it’s creamy, and it’s minty. I mean, that’s —

kevin roose

I’ve never been offered a York Peppermint Pattie by anyone under the age of 70.

casey newton

You know, at the old Facebook offices, they had a big jar of them. And so whenever I would go down there, on the way in and out, I was always, like, grabbing a couple of Peppermint Patties.

kevin roose

Wow. And that’s why you’re captured by industry. They supplied you.

casey newton

Never bite the hand where the Peppermint Patties come from.

kevin roose

Do you think they had a secret dossier on you that was like, Casey Newton from “Platformer” loves Peppermint Patties. Let’s get a big bowl out so he’ll be more favorable to us.

casey newton

No, those places buy so many candies and so many foods. They don’t need to bother having a dossier. You walk in. They’re like, oh, what’s your favorite food? Lobster bisque? Yeah, we have that.

kevin roose

(LAUGHS): Speaking of candy, Casey, it is time once again for our favorite game. It’s time for Hat GPT.

casey newton

Pass the hat.

You know, we’re on YouTube now, Kevin, and one of our wonderful listeners commented, I’m so excited, because I want to see if there’s actually a hat for Hat GPT. And now, we can actually just show, indeed, that there is a hat.

kevin roose

There is a Hat GPT. We did also get some YouTube comments saying that this looked like a budget hat that was not professionally designed. To that, I would like to say, you are correct. This is something I made in about five minutes on vistaprint.com. And I think I paid, like, $22 for it. So if anyone wants to make us a better Hat GPT hat, our inboxes are open.

casey newton

Absolutely. And hopefully, the hat will become more and more elaborate and ornate over time, and that’s how you’ll know that the show is healthy and thriving.

kevin roose

(LAUGHS): Yeah, eventually, it’ll be, like, a 10-gallon Stetson.

casey newton

That’s — I mean, that’s what I want.

kevin roose

Hat GPT, of course, is the game where we draw news stories about technology out of a hat, and we generate plausible-sounding language about them until one of us gets sick of the other one talking and says, stop generating.

casey newton

That’s correct.

kevin roose

All right.

Oh, this one is sad.

casey newton

OK.

kevin roose

“AI Seinfeld is broken, maybe forever.” This one’s from 404 Media. And this is about “Nothing Forever,” the 24/7 endless AI-generated episode of “Seinfeld” that has been running on a Twitch live stream for many months.

casey newton

Captivated the nation when it first came out.

kevin roose

One of my favorite AI projects of all time, I got to say. So this is a report that says that “For the last five days or so, one of the main characters of the AI-generated ‘Seinfeld’ show has been endlessly walking directly into a closed refrigerator. ‘Nothing Forever’ is very broken, stuck on a short repeating loop for days. It’s also more popular than it’s been in months.”

So people are tuning in to watch what may be the end of the endless AI-generated “Seinfeld.”

casey newton

And I just want to ask, what is the deal with walking into the refrigerator?

(KEVIN LAUGHS)

But you know, there’s something beautiful about a show that was famously about nothing, being recreated as an AI project that, over time, just evolved into almost literally nothing, and then got more popular when it did.

kevin roose

Yeah, it’s a good metaphor. I can’t wait until we start just really phoning it in and get mysteriously more popular as the show goes on.

casey newton

Next week on “Hard Fork,” we walk into a refrigerator.

kevin roose

(LAUGHS): Tune in to see the live stream.

casey newton

Stop generating.

kevin roose

OK. You’re up.

casey newton

(CLEARS THROAT): All right, Kevin. This next story is a tweet from something called Del Complex, which describes something called the “BlueSea Frontier Compute Cluster,” which is a barge. Are you familiar with a barge-based compute platform?

kevin roose

(LAUGHS): So I saw this going around on social media the other day. And I think it is sort of what they call an alternate reality corporation. I think it’s an art project, but it’s basically a bit these people are doing, saying, we are so mad about the Biden administration’s draconian executive order mandating that big AI developers report their models to the government that we are going to build, essentially, a floating AI-computing cluster on a barge in international waters, so that we’re not subject to any regulations.

casey newton

So — and it says here that there are going to be more than 10,000 NVIDIA H100 GPUs on every platform. So this is literally seasteading for AI.

kevin roose

Yes.

casey newton

Yeah.

kevin roose

Yes.

casey newton

Well, look, I’m very sympathetic to barge-based projects in general. I don’t know if you remember the Google barge. Remember the Google barge?

kevin roose

Not really.

casey newton

The Google barge was a project in the early 2010s, where Google was considering building retail stores on floating barges that would travel from port to port.

kevin roose

(LAUGHS): OK. I’m just picturing old-timey movies where people are waving at the ships as they come in, but it’s just, like, a giant Google store pulling up with new Pixels.

casey newton

I mean, it would have been the thrill of a lifetime if this happened. The project got canceled. I can’t imagine why. But for about a year or so, I would just think the words, “Google barge,” and would just smile, because it made me so happy.

kevin roose

You could say it was a sunk cost.

(LAUGHING) Sorry.

casey newton

Well, no, I don’t want to talk about this anymore.

kevin roose

(LAUGHING) Stop generating. All right. This one says, “Joe Biden grew more worried about AI after watching ‘Mission Impossible: Dead Reckoning,’ says White House Deputy.” This is from “Variety.”

And this is apparently from Bruce Reed, the Deputy White House Chief of Staff, who told the “Associated Press” that Joe Biden had grown, quote, “impressed and alarmed” after seeing fake AI images of himself and learning about the terrifying technology of voice cloning.

According to Reed, Biden’s concerns about AI also grew after watching “Mission Impossible: Dead Reckoning, Part I” at Camp David, which is a movie where there’s this sort of mysterious AI entity that wreaks havoc on the world. Casey, what do you think about this? Did “Mission Impossible” come up in your conversations with President Biden’s advisors?

casey newton

You know, it didn’t, although he talked — he appeared to deviate from the script when he was giving his remarks. Because it was supposed to say something like, with just a three-second clip of your voice, it could fool your family.

And he stopped and was basically like, forget your family. It can fool you. He’s like — he’s like, he says, I look at these things, and I think, when the hell did I say that? That’s actually a direct quote.

kevin roose

Jack.

casey newton

Yeah. He didn’t say Jack, but it was implied. There was an implied Jack. And —

kevin roose

(LAUGHING) Silent Jack.

casey newton

Silent Jack, yeah. Everybody laughed.

kevin roose

This is fascinating to me. Because it actually does appear that he grew more alarmed about AI after watching a fictional Hollywood movie about a nonexistent AI program. And so I get why people in Silicon Valley want Hollywood to make more positive movies about AI, because it’s like the president is watching a movie, and then all of a sudden decides to start writing some regulations. That feels weird.

casey newton

Yeah. Here’s what I’m going to say. I hope the next “Mission Impossible” movie is about how Congress managed to pass a law and just really inspire a lot of our lawmakers to do literally anything. It’d be a really great thing for this country.

kevin roose

“Mission Impossible: Privacy Regulation.” Coming to a theater near you.

casey newton

Stop generating.

kevin roose

OK.

casey newton

I love this story. “Microsoft accused of damaging The Guardian’s reputation with an AI-generated poll speculating on the cause of a woman’s death next to an article by the news publisher.” So this is very sad. “The Guardian” wrote a story about the death of Lilie James, a 21-year-old water polo coach who was found dead with serious head injuries at a school in Sydney last week. This went up on the Microsoft news aggregator. But because it’s Microsoft, and you know it’s got that AI now, Kevin, they created a poll.

kevin roose

No.

casey newton

And it put it next to this article. And the poll asked, what do you think is the reason behind the woman’s death? Readers were then asked to choose from three options — murder, accident, or suicide.

kevin roose

Oh, god. This sucks so much. Like, I sort of vaguely have a sense of how this could have happened, right? Like, Microsoft runs, like, msn.com and maybe some other news aggregators. It pulls in stories from all over the place.

And then, like, we know that they are very big on AI right now. So maybe they’re slapping, like, AI sort of things around the stories that they’re aggregating. But don’t do this for stories about people dying. That should be like a very easy no.

casey newton

Yeah, it really should. But I think we just see this thing over and over again, which is that when newsrooms play around with generative AI and don’t keep a very close eye on its output, they just find themselves in this ridiculous amount of trouble. So my hope is that this will be the last that we see of these silly AI-generated polls.

Kevin, when you die, do you want me to poll our listeners on how they think it happened?

kevin roose

No, no. I don’t. That’s terrible.

casey newton

I have this theory that the use of generative AI in news — it just — it always trends toward crap. You know what I mean? Like, you have this idea and you think, oh, this is so cheap, and it’s so futuristic. And let’s put it into practice, and we’ll show how innovative we are. And in practice, it always just trends toward crap. So this is —

kevin roose

It’s so dystopian. Oh, my god. Imagine you live a dignified life. You accomplish some things. Your obituary gets written up in a major newspaper. And then, they attach some poll to it, generated by AI. Was Casey a good person? Sound off in the comments?

casey newton

I know.

kevin roose

Ugh.

casey newton

I mean —

kevin roose

Horrible.

casey newton

A Microsoft spokesperson told “The Guardian,” “We have deactivated Microsoft-generated polls for all news articles, and we are investigating the cause of the inappropriate content. A poll should not have appeared alongside an article of this nature, and we are taking steps to prevent this kind of error from reoccurring in the future.” Of course, raising the question, what kind of content is appropriate to have a stupid poll next to it?

kevin roose

No, no, no, no, no, no, no, no. Do not let the humans off the hook for this. Because someone at Microsoft decided, you know what would boost our engagement on these news articles? Slapping AI-generated polls. It is not the AI’s fault that these polls ran. It is the Microsoft person who decided to implement these polls, and we should not let them off the hook for that.

casey newton

All right, and now, we actually want to poll our listeners. Who do you think is more at fault? Do you think it was the humans or the AI? Please vote on the AI-generated poll that will be underneath the article.

kevin roose

All right, last one.

casey newton

Last one.

kevin roose

“Cruise stops all driverless taxi operations in the United States.” This is from “The New York Times.” “Cruise, the driverless car company, said last week that it would pause all driverless operations in the United States two days after California regulators told the General Motors subsidiary to take its autonomous cars off the state’s roads.

The decision affects Cruise’s robot taxi services in Austin, Texas, and Phoenix. It’s also pausing non-commercial operations in Dallas, Houston, and Miami.” Now, this came after Cruise’s license to operate driverless fleets was suspended by the California DMV, citing an October 2 incident in which a Cruise vehicle dragged a San Francisco pedestrian for 20 feet after a collision.

So Cruise cars, which we have ridden in together, are now off the roads in the entire United States. What do you make of this story?

casey newton

The Safe Street Rebels have won. Like, this was the future liberals want. And we’re now left without these cars. This particular accident is very controversial. My understanding is that the victim of this incident was hit by another car first.

kevin roose

By a human driver.

casey newton

Yes.

kevin roose

Yes.

casey newton

And so that was sort of the initial problem — was this person was hit by a human driver, and then —

kevin roose

Was dragged under a Cruise car, which was trying to pull over on the side of the road but ended up dragging this poor person. Horrible story.

casey newton

Horrible story.

kevin roose

I think in general, regulators are just very on high alert for anything dangerous involving self-driving cars. But this is a big blow to Cruise, I would say, which has struggled to convince people that its rides are safe. There have been a lot of documented incidents of traffic jams caused by Cruise vehicles.

I will say the Waymo vehicles that are operating in San Francisco have not been affected by this. They’re still out on the roads. I actually took one this week, and it felt quite safe to me. But I would say there are still a lot of questions about driverless cars. Do you think we are in a sort of moment where regulators are kind of getting nervous enough to shut all of this stuff down, or is this just kind of a speed bump on the way to these cars being more widely adopted?

casey newton

Is that a traffic pun speed bump?

kevin roose

Oh, god, no.

casey newton

So look, here’s the thing. I haven’t talked to the regulators. I don’t know how they’re thinking about this. I think it’s clear that they are enforcing much stricter scrutiny on the self-driving cars than they ever would on these terrifying murder machines that everybody drives around in all day. And honestly, I just hope that it gets sorted out quickly, if for no other reason than, where are San Franciscans supposed to have sex now, Kevin? I mean, this had become such a beloved pastime of citizens of this fair city. And now, well, if you can’t find a Waymo, you’re out of luck.

kevin roose

It’s true. Well, I did take a Waymo this week, and I noticed something new in the Waymo, which is that they now come with barf bags.

casey newton

Is there — is there typically a lot of turbulence in these Waymo rides?

kevin roose

I don’t think it’s for turbulence. I think it’s for drunk people. I think it was a trick-or-treat special. There must be a story behind this. If you have — because if you vomit in an Uber, the driver has to clean it up, and they can charge you a cleaning fee.

If you’re in a Waymo, there’s no one to clean up after you, so they got to put the barf bag in there. And if you are the person who vomited in a Waymo, causing them to make this policy change, we do actually want to hear from you.

casey newton

That story was just really a rich, rich canvas to discuss so many aspects of society, wasn’t it?

kevin roose

Yeah, but the ride was very smooth, and so I was confused for a minute. I was like, should I be expecting turbulence? Should I be buckling up extra tight? What’s going on here? All right. That’s it for Hat GPT.

casey newton

Close up the hat.

kevin roose

Casey, do you want to put on the hat?

casey newton

I don’t look — well, I have famously spiky hair, so hats are kind of not really for me.

kevin roose

It looks good.

casey newton

But also, I’m wearing headphones.

kevin roose

Yeah. But —

casey newton

I don’t know.

kevin roose

We do need a bit — we got to up the hat budget.

casey newton

We got to up the hat — what is the hat budget on this show?

kevin roose

It’s $22 and some cents from vistaprint.com.

casey newton

“Platformer” will chip in a few bucks. We’ll see if we can get you a decent hat.

kevin roose

Yes.

(MUSIC PLAYING)

What are we doing?

casey newton

What are we doing?

kevin roose

Clap, one, two, three. That was — you didn’t clap.

casey newton

Because I had a fidget spinner!

kevin roose

Clap! One, two, three. Fidget spinner. Guy goes to the White House one time. All of a sudden —

casey newton

I’ve always had a fidget spinner!

kevin roose

— he’s exempt from the clapping rule. Oh, my god.

casey newton

Show some respect.

kevin roose

Do I have to call you Mr. Newton?

casey newton

That would be nice. The Biden people sure did.

It’s not true, actually. They called me Casey.

kevin roose

“Hard Fork” is produced by Rachel Cohn and Davis Land. We had help this week from Emily Lang. We’re edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today’s show was engineered by Rowan Niemisto, with original music by Elisheba Ittoop, Sophia Lanman, Rowan Niemisto, and Dan Powell.

Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. As always, you can email us at hardfork@nytimes.com.

