1
00:00:18,160 --> 00:00:21,000
Speaker 1: And we are back with another edition of the Federalist

2
00:00:21,079 --> 00:00:25,399
Radio Hour. I'm Matt Kittle, Senior Elections correspondent at the Federalist,

3
00:00:25,920 --> 00:00:30,760
and your experienced sherpa on today's quest for knowledge. As always,

4
00:00:30,760 --> 00:00:33,640
you can email the show at radio at the Federalist

5
00:00:33,719 --> 00:00:38,359
dot com, follow us on X at FDRLST, make sure

6
00:00:38,399 --> 00:00:41,560
to subscribe wherever you download your podcasts, and of course

7
00:00:41,640 --> 00:00:45,000
to the premium version of our website as well. Our

8
00:00:45,039 --> 00:00:49,640
guest today is Neil Chilson, former FTC Chief Technologist and

9
00:00:49,840 --> 00:00:55,600
currently head of AI policy with the Abundance Institute. Where are

10
00:00:55,640 --> 00:01:02,520
we taking artificial intelligence? Where is artificial intelligence taking us? Well,

11
00:01:02,520 --> 00:01:05,560
we ask those questions and many more on this edition

12
00:01:05,640 --> 00:01:07,959
of the Federalist Radio Hour. Neil, thank you so much

13
00:01:08,000 --> 00:01:12,159
for joining us. It's great to be here. Speaking of here,

14
00:01:13,519 --> 00:01:17,400
is this really you? Or is this AI? Or, as

15
00:01:17,439 --> 00:01:23,079
they used to say when I was much younger dating

16
00:01:23,159 --> 00:01:27,439
myself here, is it live or is it Memorex? We've

17
00:01:27,439 --> 00:01:31,760
come a long way since cassette tapes, certainly in America.

18
00:01:32,560 --> 00:01:35,519
I jest, of course. This is the real Neil Chilson,

19
00:01:35,599 --> 00:01:41,400
but it does raise the question of just how much

20
00:01:42,040 --> 00:01:46,640
we are inundated with AI and the difficulty sometimes of

21
00:01:47,239 --> 00:01:54,040
telling artificial intelligence from reality. I think this thing is

22
00:01:54,200 --> 00:01:57,879
only going to intensify, of course, as we move forward.

23
00:01:58,319 --> 00:02:00,400
Where are we going with all of this?

24
00:02:01,840 --> 00:02:04,439
Speaker 2: Well, as much as my wife and my kids might

25
00:02:04,480 --> 00:02:09,280
appreciate an AI upgraded dad and husband, they're stuck with me,

26
00:02:09,360 --> 00:02:10,840
and so are you guys, the real me.

27
00:02:13,080 --> 00:02:15,360
Speaker 3: But you know, I say that sort of jokingly.

28
00:02:15,439 --> 00:02:19,360
Speaker 2: But where we are right now with AI is continuing

29
00:02:19,439 --> 00:02:25,599
a long trend of adding new capabilities to computers. And

30
00:02:26,080 --> 00:02:30,520
they're surprising in this case, and surprising in some ways.

31
00:02:31,159 --> 00:02:34,639
But the history of artificial intelligence, a term that was

32
00:02:34,680 --> 00:02:38,879
coined in the nineteen fifties, has been exactly this: surprising

33
00:02:38,919 --> 00:02:43,800
new things everybody gets excited about. We figure out, oh,

34
00:02:43,879 --> 00:02:48,039
this is not actually the same thing as human intelligence,

35
00:02:48,080 --> 00:02:52,840
not exactly. It's powerful, and then it sort of

36
00:02:52,840 --> 00:02:55,439
gets adapted. It's in our phones, it's in our computers,

37
00:02:55,439 --> 00:02:58,120
and we move on. So, you know, the

38
00:02:58,159 --> 00:03:01,439
cutting edge of artificial intelligence technology when I was first

39
00:03:01,439 --> 00:03:02,879
getting into computers in the

40
00:03:03,000 --> 00:03:04,919
Speaker 3: nineties was chess playing.

41
00:03:05,680 --> 00:03:09,240
Speaker 2: And now everybody's phone can play chess better than ninety

42
00:03:09,319 --> 00:03:11,520
nine point nine percent of humans on the planet, and

43
00:03:11,960 --> 00:03:16,360
we don't think of that as somehow, you know, Terminator

44
00:03:16,400 --> 00:03:18,280
style artificial intelligence.

45
00:03:18,280 --> 00:03:19,319
Speaker 3: And so.

46
00:03:19,319 --> 00:03:22,800
Speaker 2: Where we are now is we have these really powerful

47
00:03:23,240 --> 00:03:27,199
tools called large language models that can be used for many,

48
00:03:27,240 --> 00:03:30,159
many different types of things. But one of the ways

49
00:03:30,199 --> 00:03:33,039
they're being used is in a sort of chatbot form

50
00:03:33,199 --> 00:03:37,599
where you can ask questions and get really comprehensive, detailed,

51
00:03:38,000 --> 00:03:44,319
sometimes made-up answers, and that's really powerful. It

52
00:03:44,400 --> 00:03:49,080
teaches us that there's a lot to learn about humans

53
00:03:50,039 --> 00:03:53,520
that you can collect and gather into

54
00:03:53,599 --> 00:03:57,199
computers in these formats and then query in

55
00:03:57,240 --> 00:04:00,639
a way that gets you, you know, very persuasive, very interesting,

56
00:04:01,719 --> 00:04:06,199
often very entertaining content, whether it be text, video, audio.

57
00:04:06,680 --> 00:04:11,360
I love making songs. Actually, my little girls love making

58
00:04:11,439 --> 00:04:14,240
up new songs using some of the apps out there

59
00:04:14,919 --> 00:04:17,040
where you can just type in a prompt and get like,

60
00:04:17,240 --> 00:04:19,560
you know, a song about unicorns or a song about

61
00:04:19,600 --> 00:04:23,959
you know, their kids' club running around the neighborhood as

62
00:04:24,279 --> 00:04:26,319
kid spies or something like that, and so, like, we

63
00:04:26,399 --> 00:04:29,199
love that stuff. And so yeah, there's a lot of entertainment.

64
00:04:29,240 --> 00:04:31,160
There's a lot of power in these tools, and there's

65
00:04:31,160 --> 00:04:34,079
some risks and people need to be aware of that,

66
00:04:34,120 --> 00:04:35,560
and policymakers do as well.

67
00:04:36,160 --> 00:04:39,000
Speaker 1: Some of the uses that you mentioned, you know, are

68
00:04:39,079 --> 00:04:42,839
fun to play around with. Certainly, like you said, making

69
00:04:42,920 --> 00:04:47,160
up songs. I've seen that in play, and the technology

70
00:04:47,279 --> 00:04:51,720
is really quite good and it takes you literally seconds

71
00:04:51,759 --> 00:04:55,240
to do what the Beatles spent, you know, the better

72
00:04:55,279 --> 00:04:58,480
part of a year doing to make Sergeant Pepper's Lonely

73
00:04:58,560 --> 00:05:01,079
Hearts Club Band. I'm not saying that the artistry

74
00:05:01,279 --> 00:05:04,800
is the same, but, you know, the production value

75
00:05:04,920 --> 00:05:08,079
is certainly there. Yeah, the risks that you talk about

76
00:05:08,120 --> 00:05:14,480
in this arena: copyright infringement. Where are we with that today?

77
00:05:14,560 --> 00:05:19,480
And where are we heading in terms of intellectual property?

78
00:05:20,120 --> 00:05:23,279
Speaker 2: So there are two concerns on the copyright front. One is

79
00:05:23,959 --> 00:05:26,759
on the training side. So when a company is building

80
00:05:26,800 --> 00:05:30,079
these models and they use a bunch of different content

81
00:05:30,199 --> 00:05:33,639
in order to train on, what are the legal restrictions

82
00:05:33,680 --> 00:05:36,240
on that? And then the other side of it is

83
00:05:36,279 --> 00:05:39,040
on the output side. So when a user types in

84
00:05:39,079 --> 00:05:41,839
a prompt and says, you know, give me a picture

85
00:05:41,839 --> 00:05:46,040
of Mickey Mouse, you know, battling the Marvel team or

86
00:05:46,079 --> 00:05:49,399
something like that, and the model puts out something that

87
00:05:49,519 --> 00:05:56,040
includes, you know, maybe some sort of intellectual property

88
00:05:56,439 --> 00:05:57,600
protected content.

89
00:05:58,519 --> 00:06:01,120
Speaker 3: Who's responsible for that? Is it the user, is it

90
00:06:01,240 --> 00:06:03,000
the model? And so on.

91
00:06:03,120 --> 00:06:05,639
Speaker 2: The first thing about like the sort of ingestion of

92
00:06:05,680 --> 00:06:08,000
this content, the use of this content. We've had some

93
00:06:08,079 --> 00:06:13,000
court decisions and there is a category of this that

94
00:06:13,680 --> 00:06:16,480
judges are thinking of as fair use, right? And so

95
00:06:16,519 --> 00:06:19,639
you're training these models the way you might train you know,

96
00:06:19,759 --> 00:06:23,439
anybody reading a book, and so the content

97
00:06:23,439 --> 00:06:27,079
isn't copied into the model. It

98
00:06:27,160 --> 00:06:29,319
is used to train the model, and then the model

99
00:06:29,439 --> 00:06:32,720
understands that content in the context of you know, lots

100
00:06:32,759 --> 00:06:35,600
of other content. And so there are ways in which

101
00:06:35,600 --> 00:06:39,480
the courts are saying that that is fair use. There

102
00:06:39,519 --> 00:06:43,399
was an Anthropic settlement in this space, where the big

103
00:06:43,439 --> 00:06:47,040
problem Anthropic had was that they hadn't paid for

104
00:06:47,120 --> 00:06:49,600
the original content, right, and so they hadn't bought the

105
00:06:49,600 --> 00:06:53,439
books that they were scanning. They had just downloaded pirated

106
00:06:53,439 --> 00:06:55,199
copies of the book. And the court said, no, no, no,

107
00:06:55,240 --> 00:06:57,519
you can't do that. You need to buy the content.

108
00:06:57,920 --> 00:07:00,000
You need to buy the copy of the content

109
00:07:00,040 --> 00:07:03,439
and use that to train. You can't just use pirated content.

110
00:07:04,120 --> 00:07:07,560
But overall, that doesn't say that much about

111
00:07:08,439 --> 00:07:10,360
you know, I think that still leaves the door open

112
00:07:10,399 --> 00:07:12,480
for companies to train on a lot of this data,

113
00:07:12,879 --> 00:07:15,319
which I think is probably the right decision. I think

114
00:07:15,360 --> 00:07:19,040
it is very analogous to reading a book.

115
00:07:20,639 --> 00:07:23,800
And therefore, you know, we don't have copyright law that says

116
00:07:23,800 --> 00:07:26,959
you can't learn from a book without permission from the

117
00:07:27,000 --> 00:07:30,639
copyright holder, and so on the other side, we don't know.

118
00:07:30,720 --> 00:07:34,439
We don't know yet. Those cases are still ongoing. We

119
00:07:34,519 --> 00:07:38,439
don't know how courts are going to partition liability for

120
00:07:39,000 --> 00:07:44,160
copyright infringement between the user who writes the prompt and

121
00:07:44,319 --> 00:07:47,639
the company who is running the model or trained the model.

122
00:07:48,480 --> 00:07:51,800
There are some really complicated questions there about what is infringement

123
00:07:52,439 --> 00:07:55,519
even on that side, but then splitting it up between

124
00:07:55,560 --> 00:07:57,959
the user who's asking for something and the model who's

125
00:07:57,959 --> 00:08:01,240
generating it, it gets pretty complicated there. You know, we

126
00:08:01,279 --> 00:08:05,279
don't sue pencil companies because people draw images

127
00:08:05,319 --> 00:08:09,120
of, you know, Mickey Mouse using a pencil,

128
00:08:09,399 --> 00:08:10,319
and that wouldn't

129
00:08:09,959 --> 00:08:10,639
Speaker 3: make a ton of sense.

130
00:08:10,680 --> 00:08:12,680
Speaker 2: And so we're just trying to figure out the right

131
00:08:12,720 --> 00:08:15,720
analogies here and what makes sense economically.

132
00:08:15,720 --> 00:08:19,959
Speaker 1: Still, that's very complicated. In the same legal neighborhood, what

133
00:08:20,079 --> 00:08:25,160
about reputational damage? We've already seen some cases on that front.

134
00:08:25,240 --> 00:08:30,639
And listen, there are AI companies you know that are

135
00:08:31,040 --> 00:08:35,679
I think very responsible AI companies. They're doing what they

136
00:08:35,759 --> 00:08:39,120
can and they're investing a lot of money to make

137
00:08:39,159 --> 00:08:42,960
sure that these systems are working right and they don't

138
00:08:43,000 --> 00:08:49,039
cause damage. But sometimes they do reputational damage. What about

139
00:08:49,080 --> 00:08:54,120
the liability issue on that front? Where are the courts

140
00:08:54,279 --> 00:08:55,480
on that particular matter?

141
00:08:56,080 --> 00:08:59,559
Speaker 2: So the big challenge. The courts have looked

142
00:08:59,600 --> 00:09:02,159
at this a couple of ways. There's a bunch of

143
00:09:02,240 --> 00:09:07,240
challenges when you're talking about defamation or libel. The biggest

144
00:09:07,320 --> 00:09:10,759
challenge is that often this involves public figures, and public

145
00:09:10,799 --> 00:09:15,000
figure liability requires showing that there was a sort of

146
00:09:15,039 --> 00:09:16,360
malicious intent.

147
00:09:17,919 --> 00:09:18,679
Speaker 3: By the party.

148
00:09:19,080 --> 00:09:21,600
Speaker 2: And the question again, the same question comes up here

149
00:09:21,639 --> 00:09:24,039
that comes up in the generation context: who is the party

150
00:09:24,360 --> 00:09:26,399
at fault here? Is it the person who typed the

151
00:09:26,440 --> 00:09:29,360
prompt who said, like, tell me everything you know about

152
00:09:29,399 --> 00:09:33,279
you know, X public figure? Or is it the model's generation?

153
00:09:34,679 --> 00:09:38,399
And then on top of that, unless it's made public,

154
00:09:39,519 --> 00:09:43,720
this type of generation by itself is not, I don't

155
00:09:43,720 --> 00:09:48,519
think it's usually subject to defamation law. So the question

156
00:09:48,720 --> 00:09:51,639
is, if I type a prompt into

157
00:09:51,720 --> 00:09:56,679
ChatGPT and I get inaccurate information back, but then

158
00:09:56,840 --> 00:09:59,759
I publish it on my social media website or in

159
00:09:59,799 --> 00:10:03,799
a newspaper article or something, it's the publication of that

160
00:10:03,799 --> 00:10:05,879
that is the real problem. It's not the fact that

161
00:10:06,639 --> 00:10:09,360
ChatGPT came up with it, because nobody sees

162
00:10:09,399 --> 00:10:11,120
that except the person who asks for it. And so

163
00:10:11,600 --> 00:10:15,399
I do think that these defamation cases against the models

164
00:10:15,440 --> 00:10:18,200
are going to be complicated by that factor. I think

165
00:10:18,240 --> 00:10:20,440
that it's difficult to say that the model is at

166
00:10:20,600 --> 00:10:24,679
fault for the distribution of a falsehood when the model

167
00:10:25,200 --> 00:10:28,320
just generates content and then the person has to make

168
00:10:28,320 --> 00:10:29,120
a choice to then

169
00:10:29,039 --> 00:10:32,279
Speaker 3: put it out into the world. And so I think

170
00:10:32,279 --> 00:10:34,080
that's a complicating factor. And I think that

171
00:10:34,000 --> 00:10:37,240
Speaker 2: Means while the models are trying very hard to be

172
00:10:37,360 --> 00:10:40,759
accurate on this sort of stuff, it's still

173
00:10:40,799 --> 00:10:43,679
really on the users to verify that the content that

174
00:10:43,720 --> 00:10:46,759
they're posting, that they're taking from that and using in

175
00:10:46,799 --> 00:10:49,679
the real world is accurate. And I think that's where

176
00:10:49,720 --> 00:10:51,240
the liability probably should sit.

177
00:10:51,559 --> 00:10:54,240
Speaker 1: Yeah, tied into that, we have some very interesting

178
00:10:54,440 --> 00:10:58,679
cases on free speech and AI. We

179
00:10:58,759 --> 00:11:07,480
have some concerning moves by politicians, by lawmakers. I'm

180
00:11:07,519 --> 00:11:09,559
thinking of a guy who would like to be president

181
00:11:09,600 --> 00:11:11,840
of the United States in twenty twenty eight. There's no

182
00:11:11,879 --> 00:11:16,399
doubt about that. Gavin Newsom in California and the battles

183
00:11:16,480 --> 00:11:22,480
that have gone on there legally speaking in terms of

184
00:11:23,360 --> 00:11:29,759
content that is driven for political advertising, political parody. Really,

185
00:11:30,480 --> 00:11:36,000
that's the interesting part about this, the brave new world

186
00:11:36,120 --> 00:11:41,279
of political advertising or political parody where, for instance, you

187
00:11:41,399 --> 00:11:47,000
have a famous AI video that came out last year

188
00:11:47,720 --> 00:11:52,559
that had Kamala Harris, then the Vice President of the

189
00:11:52,679 --> 00:11:56,440
United States and the Democrats' candidate

190
00:11:56,440 --> 00:12:01,279
for president. The AI technology made her say some things

191
00:12:01,320 --> 00:12:05,600
that she certainly did not say. It was very amusing

192
00:12:05,879 --> 00:12:10,320
to about half of the country. Wasn't so amusing to

193
00:12:10,360 --> 00:12:13,759
the other half. So Gavin Newsom and crew in California

194
00:12:14,360 --> 00:12:18,360
took that and some other instances and used that as

195
00:12:20,039 --> 00:12:24,399
Speaker 3: kind of a red flag law for, or a red flag

196
00:12:24,080 --> 00:12:29,679
Speaker 1: instance for, AI and communication, and put limits on what

197
00:12:29,799 --> 00:12:32,399
you could do or penalties on what you could do

198
00:12:32,919 --> 00:12:36,200
if you did this kind of thing. The courts have

199
00:12:36,519 --> 00:12:39,279
entered into this case. Where does all of that stand today?

200
00:12:40,440 --> 00:12:44,159
Speaker 2: So yeah, California and some other states have looked

201
00:12:44,240 --> 00:12:48,879
at how to govern the use of AI-generated content

202
00:12:48,960 --> 00:12:53,000
in advertising, in political advertising in particular, and they faced

203
00:12:53,000 --> 00:12:59,159
some real constitutional challenges here. Political speech is right at the

204
00:12:59,200 --> 00:13:04,519
core of First Amendment rights. Lying in political speech is

205
00:13:04,559 --> 00:13:11,960
not policed by the courts, because they often say, essentially,

206
00:13:12,559 --> 00:13:14,960
now you might be able to bring a defamation case possibly,

207
00:13:15,000 --> 00:13:18,759
but again we're talking about public figures, and even in

208
00:13:18,759 --> 00:13:21,919
the political context that gets even harder. Courts tend to

209
00:13:22,000 --> 00:13:25,159
really say, like, if you're going to do political ads,

210
00:13:25,159 --> 00:13:28,120
it's going to be up to the voters to decide

211
00:13:28,279 --> 00:13:33,200
who's telling the truth. And AI doesn't really change

212
00:13:33,240 --> 00:13:36,039
that that much. It's always been easy to create false

213
00:13:36,120 --> 00:13:40,120
content. Now, what you can do maybe with AI

214
00:13:40,399 --> 00:13:42,919
is the types of deep fakes that you're talking about,

215
00:13:43,000 --> 00:13:46,679
the types of putting words in somebody's mouth in a

216
00:13:46,840 --> 00:13:48,279
very convincing way.

217
00:13:48,879 --> 00:13:52,039
Speaker 3: I think deceptive content.

218
00:13:51,759 --> 00:13:55,039
Speaker 2: like that is still ripe for, you know, competitors

219
00:13:55,080 --> 00:13:58,799
to call it out. I think that companies are trying

220
00:13:58,799 --> 00:14:02,279
to figure out how to balance that, but political parody

221
00:14:02,519 --> 00:14:06,559
is highly protected free speech, and I think any type

222
00:14:06,600 --> 00:14:10,440
of you know, government thumb on the scale about what

223
00:14:10,519 --> 00:14:13,879
people can and can't say is just ripe for abuse.

224
00:14:14,000 --> 00:14:16,639
You'll end up getting policed, you know, from one political

225
00:14:16,679 --> 00:14:20,360
perspective or another. One party will bear the brunt

226
00:14:20,360 --> 00:14:22,399
of this more than the other. And I just think

227
00:14:22,440 --> 00:14:26,000
that it's a super risky road to walk down. The

228
00:14:26,039 --> 00:14:30,159
better options here are more speech, not censoring

229
00:14:30,200 --> 00:14:32,600
the ability to create content in the first place.

230
00:14:33,320 --> 00:14:35,840
Speaker 3: And I think that some of the companies

231
00:14:35,399 --> 00:14:39,639
Speaker 2: have had some early experiences in trying to shape the

232
00:14:39,679 --> 00:14:43,519
content that was coming out of their models, and I

233
00:14:43,600 --> 00:14:46,879
think they've dialed that back to say, like, hey, we're

234
00:14:46,919 --> 00:14:50,600
going to largely try to lean on the side of

235
00:14:51,080 --> 00:14:56,200
generating what the user is asking for and protect

236
00:14:56,279 --> 00:14:59,159
speech there as long as they're not generating illegal content.

237
00:14:59,240 --> 00:15:01,759
I think that's the better frame.

238
00:15:02,120 --> 00:15:04,840
Speaker 3: I hope that more and more companies move in that direction.

239
00:15:08,200 --> 00:15:11,639
Speaker 4: Did a single company save the stock market from crashing

240
00:15:11,679 --> 00:15:14,519
into a recession? The Watchdog on Wall Street podcast with

241
00:15:14,639 --> 00:15:17,759
Chris Markowski. Every day, Chris helps unpack the connection between

242
00:15:17,840 --> 00:15:20,279
politics and the economy and how it affects your wallet.

243
00:15:20,399 --> 00:15:24,320
Tech powerhouse Nvidia's earnings report did not disappoint, but

244
00:15:24,399 --> 00:15:26,840
what does that tell you about the value of AI?

245
00:15:27,240 --> 00:15:30,360
This cannot save the market forever. Whether it's happening in

246
00:15:30,440 --> 00:15:32,840
DC or down on Wall Street, it's affecting you financially.

247
00:15:32,919 --> 00:15:33,440
Speaker 3: Be informed.

248
00:15:33,519 --> 00:15:35,360
Speaker 4: Check out the Watchdog on Wall Street podcast with

249
00:15:35,440 --> 00:15:38,720
Chris Markowski on Apple, Spotify or wherever you get your podcasts.

250
00:15:43,159 --> 00:15:48,159
Speaker 1: Well, let's face it, if the courts really did punish

251
00:15:48,960 --> 00:15:54,799
politicians for lying, our correctional system would be just overcrowded.

252
00:15:55,840 --> 00:16:01,519
Speaker 2: Yeah, political speech is full of half-truths and misframings,

253
00:16:02,200 --> 00:16:04,679
and courts just don't want to get into that. I

254
00:16:04,720 --> 00:16:09,039
think that's just fraught territory, and I

255
00:16:09,039 --> 00:16:11,039
think it's right to leave it to the people to

256
00:16:11,080 --> 00:16:12,759
make the decisions about who they trust.

257
00:16:13,080 --> 00:16:17,759
Speaker 1: Yeah, that is America, and that's the bottom line. The information,

258
00:16:17,919 --> 00:16:22,480
of course, it's a different age for all of that.

259
00:16:22,679 --> 00:16:26,600
But we've had the same issues and the same problems

260
00:16:26,919 --> 00:16:28,879
over the two hundred and fifty years of this union.

261
00:16:30,240 --> 00:16:33,840
Let's turn our attention to the emotional aspects of AI.

262
00:16:34,480 --> 00:16:38,679
Harvard Business School has an interesting piece up that came out recently:

263
00:16:38,799 --> 00:16:43,879
Feeling lonely? An attentive listener is an AI prompt away,

264
00:16:44,399 --> 00:16:49,639
and it delves into the brave new world of companionship

265
00:16:50,559 --> 00:16:55,440
with AI. There are well, I think what we're finding out, Neil,

266
00:16:55,720 --> 00:16:59,159
over the last year or so is that there are

267
00:16:59,519 --> 00:17:02,919
a lot of lonely people out there, and they have

268
00:17:03,039 --> 00:17:09,000
turned to AI to solve that loneliness problem. Where does all

269
00:17:09,079 --> 00:17:11,920
of that stand today, and where do you think that's

270
00:17:11,960 --> 00:17:12,519
all going?

271
00:17:13,839 --> 00:17:15,759
Speaker 2: Yeah, I mean, I think you're totally right. I mean,

272
00:17:15,799 --> 00:17:19,759
we do have an epidemic of sort of people isolating

273
00:17:19,759 --> 00:17:23,599
themselves from others, and I think this is an outgrowth of,

274
00:17:23,799 --> 00:17:25,799
you know, some really bad policies that we had around

275
00:17:25,839 --> 00:17:33,119
the COVID pandemic, as well as just some general fracturing

276
00:17:33,200 --> 00:17:40,160
of American you know, socialization institutions that we've you know,

277
00:17:40,319 --> 00:17:42,559
historically relied on to bring us together with people.

278
00:17:43,920 --> 00:17:45,000
Speaker 3: And so I

279
00:17:44,920 --> 00:17:49,799
Speaker 2: Think that trying to fill this gap with AI chatbots

280
00:17:49,960 --> 00:17:53,799
is at best a sort of temporary measure.

281
00:17:55,480 --> 00:17:56,079
Speaker 3: I think that.

282
00:17:57,680 --> 00:18:01,359
Speaker 2: I don't know how to exactly measure this. You know,

283
00:18:01,400 --> 00:18:05,119
these chatbots are general purpose. Most of them are general purpose.

284
00:18:05,160 --> 00:18:07,039
They're not aimed at this sort of function. Now, there

285
00:18:07,079 --> 00:18:10,359
are some companies who are offering specifically this type of function,

286
00:18:10,680 --> 00:18:12,960
but most of the chatbots that people are using are

287
00:18:13,079 --> 00:18:15,599
general purpose. People are using them for a wide range

288
00:18:15,599 --> 00:18:19,880
of things, from academic research to generating funny videos, and

289
00:18:19,960 --> 00:18:23,359
some people use them occasionally to talk to about like

290
00:18:23,680 --> 00:18:28,359
personal problems or you know, relationship problems or things like that.

291
00:18:28,839 --> 00:18:29,640
Speaker 3: I think some of that

292
00:18:29,559 --> 00:18:34,079
Speaker 2: can be very useful and very helpful, but we certainly would

293
00:18:34,079 --> 00:18:36,799
want to keep an eye on people replacing time

294
00:18:36,839 --> 00:18:43,039
with other humans with these chatbots. It's

295
00:18:43,119 --> 00:18:46,680
just not the same, obviously, and it doesn't

296
00:18:46,720 --> 00:18:51,680
create the same types of deep connections with community that

297
00:18:51,759 --> 00:18:54,480
I think are essential to human flourishing.

298
00:18:54,559 --> 00:18:54,880
Speaker 3: And so.

299
00:18:56,519 --> 00:18:59,799
Speaker 2: My hope would be that these systems, as they build out,

300
00:19:00,200 --> 00:19:04,839
are aimed at improving people's ability to engage with other people,

301
00:19:05,279 --> 00:19:07,680
because there certainly are people who are not as practiced

302
00:19:07,720 --> 00:19:11,400
at that who maybe spent you know, two years, especially

303
00:19:11,400 --> 00:19:13,200
when we get into the kids sector, who spent maybe

304
00:19:13,240 --> 00:19:17,240
two years, like, doing Zoom school and need to get

305
00:19:17,279 --> 00:19:21,240
back into the swing of, you know, talking to other people.

306
00:19:21,279 --> 00:19:24,920
And I think these tools can help give tips

307
00:19:24,920 --> 00:19:25,720
on that sort of thing.

308
00:19:26,000 --> 00:19:26,720
Speaker 3: But I hope they don't.

309
00:19:27,400 --> 00:19:29,799
Speaker 2: I hope people don't rely on them as a substitute

310
00:19:29,839 --> 00:19:32,960
for reaching out, being brave, talking to people they haven't

311
00:19:32,960 --> 00:19:36,359
talked to before, and getting to know people in their community.

312
00:19:36,920 --> 00:19:40,279
Speaker 1: We certainly have learned over the last few years just

313
00:19:40,400 --> 00:19:45,400
how much danger, how much peril lockdown policies have put our

314
00:19:45,519 --> 00:19:49,960
children in especially, but society in general, when it comes

315
00:19:50,000 --> 00:19:55,160
to picking up with communication and relationships and all of

316
00:19:55,200 --> 00:19:58,640
those sorts of things. You know, Neil, though, on this topic,

317
00:19:58,799 --> 00:20:02,039
they always say the heart wants what the heart wants,

318
00:20:02,480 --> 00:20:05,599
the market wants what the market wants. And there are

319
00:20:05,680 --> 00:20:09,519
some strange areas in the marketplace for AI. Not to

320
00:20:09,559 --> 00:20:14,160
say that, you know, there isn't a market for it,

321
00:20:14,279 --> 00:20:17,000
but the question maybe is, should there be a market for it?

322
00:20:17,319 --> 00:20:22,400
One of those areas is connecting with the dead, and

323
00:20:22,440 --> 00:20:25,640
this has become an interesting subject of late. But the

324
00:20:25,720 --> 00:20:31,240
AI products that offer people the ability to chat with

325
00:20:31,720 --> 00:20:35,960
or hear the voice again of a loved one who's

326
00:20:36,039 --> 00:20:39,799
passed away, what about all of that?

327
00:20:41,559 --> 00:20:44,440
Speaker 2: I mean, grief is a crazy thing, right,

328
00:20:44,480 --> 00:20:47,039
and so I think that people try to deal with

329
00:20:47,119 --> 00:20:48,640
it in a lot of different ways, some of it

330
00:20:48,640 --> 00:20:50,519
healthy and some of it not healthy.

331
00:20:50,559 --> 00:20:52,279
Speaker 3: What I would be worried about here would be

332
00:20:52,720 --> 00:20:56,680
Speaker 2: apps that are, you know, claiming some sort of therapeutic

333
00:20:57,279 --> 00:21:00,759
effect on the heart, misclaiming that,

334
00:21:01,039 --> 00:21:04,079
or falsely claiming that, you know, they're going to provide

335
00:21:04,119 --> 00:21:07,480
some sort of resolution. I think people, like I said,

336
00:21:08,559 --> 00:21:12,000
people might use these as tools to engage with maybe

337
00:21:12,039 --> 00:21:14,920
the content that somebody left behind. And I think that

338
00:21:14,960 --> 00:21:19,160
could be interesting if done in a healthy way. Right,

339
00:21:19,400 --> 00:21:23,079
if you are able to look at like air quote

340
00:21:23,200 --> 00:21:26,960
talk to, you know, all the letters that your

341
00:21:27,079 --> 00:21:30,359
father or your grandfather left behind, that might be

342
00:21:30,400 --> 00:21:33,160
a really engaging way to learn more about their life

343
00:21:33,599 --> 00:21:36,079
and to think about like what they meant to you.

344
00:21:36,519 --> 00:21:40,920
But I do think there are healthy ways to do that,

345
00:21:40,960 --> 00:21:42,640
and there are unhealthy ways to do that. And I

346
00:21:42,640 --> 00:21:45,599
would be worried about, you know, apps that are claiming

347
00:21:45,640 --> 00:21:49,240
to provide some sort of therapeutic effect when they're not

348
00:21:49,279 --> 00:21:53,960
doing that, rather than maybe ones that are talking about

349
00:21:53,960 --> 00:21:57,559
how they can help you understand you know, your past

350
00:21:57,599 --> 00:21:59,720
and understand your connection to your family better.

351
00:22:00,079 --> 00:22:02,039
Speaker 3: Some of that could be very interesting and positive.

352
00:22:02,559 --> 00:22:05,400
Speaker 1: I can't wait to get an AI letter, the kinds

353
00:22:05,440 --> 00:22:08,599
of letters I get from some members of family and friends,

354
00:22:08,640 --> 00:22:13,519
particularly on the older spectrum of family and friends, the

355
00:22:13,559 --> 00:22:18,880
old merry medical Christmas letters where they tell me about

356
00:22:18,920 --> 00:22:22,359
all of their health ailments in the space of two

357
00:22:23,079 --> 00:22:24,160
very long pages.

358
00:22:25,319 --> 00:22:27,759
Speaker 3: I suspect

359
00:22:27,759 --> 00:22:31,279
Speaker 2: if you get Christmas cards this, you know, this season,

360
00:22:31,440 --> 00:22:33,400
there's a good chance a chunk of them were

361
00:22:34,400 --> 00:22:37,359
written with some help from AI. But can I give

362
00:22:37,400 --> 00:22:41,759
you one example? My in-laws recently came across a

363
00:22:41,799 --> 00:22:47,000
handwritten letter from, you know, my father-in-law's dad.

364
00:22:47,799 --> 00:22:51,920
It was a terrific piece consoling

365
00:22:51,960 --> 00:22:54,839
somebody else in the loss of their spouse. But

366
00:22:54,960 --> 00:22:57,480
some of it was actually quite difficult to read because

367
00:22:57,480 --> 00:23:01,079
it was handwritten. You know, I scanned it

368
00:23:01,119 --> 00:23:04,400
into ChatGPT really quickly and asked it if it

369
00:23:04,400 --> 00:23:08,880
could create a transcript of it. And that was extremely helpful.

370
00:23:08,880 --> 00:23:11,440
Actually, it decrypted some of the words that

371
00:23:11,480 --> 00:23:13,799
we were really struggling with, and we were

372
00:23:13,839 --> 00:23:16,160
able to enjoy this letter in a way that

373
00:23:16,240 --> 00:23:18,880
I don't think we would have; we would have struggled more to figure

374
00:23:18,920 --> 00:23:21,680
out what the handwriting said had we not

375
00:23:21,799 --> 00:23:22,079
used that.

376
00:23:22,200 --> 00:23:26,440
Speaker 3: So there are really positive uses to this technology. But,

377
00:23:26,480 --> 00:23:30,799
Speaker 2: you know, people shouldn't be substituting, you know, moving through

378
00:23:30,839 --> 00:23:33,720
their grief process by you know, pretending that they could

379
00:23:33,720 --> 00:23:36,759
still talk to their past relatives.

380
00:23:36,759 --> 00:23:39,200
Speaker 1: I think, yeah, I don't doubt that

381
00:23:39,839 --> 00:23:43,799
kind of positive application and the implications of that. You know,

382
00:23:43,839 --> 00:23:48,240
you talk about, let's face it, cursive writing is a

383
00:23:48,279 --> 00:23:52,279
lost art in America anyway. But there's some beautiful cursive writing,

384
00:23:52,759 --> 00:23:55,839
uh in you know the history of family letters and

385
00:23:55,880 --> 00:23:59,839
those sorts of things. But I think about those, you know,

386
00:24:00,160 --> 00:24:04,160
as a history buff those important historical documents, you know,

387
00:24:04,240 --> 00:24:07,839
the letters from Lincoln that I can't always you know,

388
00:24:07,920 --> 00:24:11,119
I'm looking at the penmanship, and he's got pretty good penmanship,

389
00:24:11,200 --> 00:24:13,559
but there are areas where I can't read it. I

390
00:24:13,559 --> 00:24:17,440
think that's interesting. That's a very interesting application. We're going

391
00:24:17,519 --> 00:24:21,480
to talk more about some of you know, those very

392
00:24:21,680 --> 00:24:25,160
useful applications for AI coming up. Our guest today in

393
00:24:25,240 --> 00:24:29,200
this edition of The Federalist Radio Hour: Neil Chilson, former

394
00:24:29,279 --> 00:24:34,039
FTC Chief Technologist and currently head of AI policy with

395
00:24:34,160 --> 00:24:38,240
the Abundance Institute. I want to get to the jobs

396
00:24:38,319 --> 00:24:42,519
question though, because that is a huge concern, a growing

397
00:24:42,640 --> 00:24:46,039
concern for a lot of Americans. What is fact, what

398
00:24:46,200 --> 00:24:50,920
is fiction, about what AI is doing and is about

399
00:24:51,000 --> 00:24:52,559
to do to the job market?

400
00:24:53,920 --> 00:24:58,079
Speaker 2: Well, sorting out the facts can be challenging in sort

401
00:24:58,119 --> 00:25:00,680
of the effects of new technology on jobs. What we

402
00:25:00,759 --> 00:25:05,079
do know is that companies are spending a bunch of

403
00:25:05,160 --> 00:25:07,960
time on trying to figure

404
00:25:07,599 --> 00:25:10,319
Speaker 3: out how to use AI, these new AI

405
00:25:10,160 --> 00:25:17,480
Speaker 2: tools, in their systems, but the actual implementation is

406
00:25:17,559 --> 00:25:20,519
still in the very early stages for almost all companies,

407
00:25:20,559 --> 00:25:23,079
and so while they're spending time and money on it,

408
00:25:23,759 --> 00:25:27,039
figuring out how to use it in their environments is

409
00:25:27,920 --> 00:25:31,240
less clear. I do think that there are lots of

410
00:25:31,279 --> 00:25:34,799
individuals who are figuring out how to use it, and

411
00:25:34,880 --> 00:25:38,279
that is moving faster because they can iterate within their company.

412
00:25:38,279 --> 00:25:40,440
They can try lots of different things. They can try

413
00:25:40,519 --> 00:25:42,920
to see, hey, how does this help me write emails?

414
00:25:42,920 --> 00:25:45,119
How does it help me draft? How does it help

415
00:25:45,160 --> 00:25:47,039
me brainstorm? How does it help me take notes?

416
00:25:48,039 --> 00:25:49,359
Speaker 3: But we're still not

417
00:25:49,400 --> 00:25:55,400
Speaker 2: seeing, like, a huge productivity boost yet that would suggest

418
00:25:55,519 --> 00:25:58,960
that this is sort of replacing a bunch of time

419
00:25:59,039 --> 00:26:02,079
that people are spending, or that it's enhancing and allowing

420
00:26:02,079 --> 00:26:03,440
them to move into other areas.

421
00:26:03,599 --> 00:26:04,119
Speaker 3: Not yet.

422
00:26:04,359 --> 00:26:06,359
Speaker 2: I think some of that's just because we're still at

423
00:26:06,359 --> 00:26:09,519
the very early stages of this stuff working its way

424
00:26:09,559 --> 00:26:14,119
into the job sector. I think, you know, we've heard

425
00:26:14,240 --> 00:26:19,000
some talk about how, you know, early career people are

426
00:26:20,240 --> 00:26:23,359
struggling to get jobs, and the finger is being pointed

427
00:26:23,400 --> 00:26:29,519
at AI. I think that it's more about uncertainty

428
00:26:30,079 --> 00:26:32,839
in the economy. And some of that uncertainty, for sure,

429
00:26:33,039 --> 00:26:35,759
is being driven by not knowing what jobs are going

430
00:26:35,799 --> 00:26:38,160
to be affected by AI in the future. And so

431
00:26:38,200 --> 00:26:40,039
I don't want to say that it's not about AI

432
00:26:40,079 --> 00:26:42,400
at all, but there are lots of other factors in

433
00:26:42,440 --> 00:26:48,200
the economy that suggest uncertainty, and so I think teasing

434
00:26:48,240 --> 00:26:50,119
that out is very complicated. We'll have to see a

435
00:26:50,160 --> 00:26:53,359
little bit more. One other area of research has been

436
00:26:53,400 --> 00:26:56,839
about the types of tasks that these large language models

437
00:26:56,839 --> 00:26:58,519
are good at and where they're not good.

438
00:26:58,519 --> 00:27:00,720
Speaker 3: And what we know right now is that for

439
00:27:00,720 --> 00:27:06,240
Speaker 2: certain types of jobs, AI has this effect of leveling

440
00:27:06,400 --> 00:27:10,240
up quickly people who are earlier or less experienced, and

441
00:27:10,319 --> 00:27:14,039
so in like a call center job, for example, where

442
00:27:14,079 --> 00:27:17,640
you're offering, you know, troubleshooting for customers over and

443
00:27:17,680 --> 00:27:21,119
over and over, these tools can really make somebody who

444
00:27:21,200 --> 00:27:25,319
is new to this job perform at a very much

445
00:27:25,400 --> 00:27:28,839
higher level, much more quickly. But it doesn't help the

446
00:27:28,880 --> 00:27:32,680
people at the top of that experience curve very much

447
00:27:32,759 --> 00:27:35,680
in those types of jobs. In other types of jobs,

448
00:27:35,720 --> 00:27:41,799
say where you're running a scientific laboratory and you're using

449
00:27:41,960 --> 00:27:45,440
these types of models to help you brainstorm. There, it

450
00:27:45,519 --> 00:27:49,200
seems like it makes the highest performers perform even better,

451
00:27:49,599 --> 00:27:52,799
and so the distribution of effect is different. So there

452
00:27:53,200 --> 00:27:56,559
the highest performers get a huge boost and the lower

453
00:27:56,599 --> 00:28:00,359
performers don't, because the highest performers can tell, like sort

454
00:28:00,400 --> 00:28:04,519
of intuitively, which threads of this brainstorming process make

455
00:28:04,640 --> 00:28:06,640
sense and which ones don't, and so they get a

456
00:28:06,640 --> 00:28:09,759
productivity boost and the lower skilled people don't. And so

457
00:28:10,119 --> 00:28:12,920
I think it really is job-dependent. That's going to

458
00:28:12,920 --> 00:28:17,720
be the case, I think, for general-purpose technologies. Overall,

459
00:28:18,039 --> 00:28:20,920
we don't know all the applications of AI. We don't

460
00:28:20,920 --> 00:28:24,720
know exactly how it's going to help, but there's lots

461
00:28:24,759 --> 00:28:26,640
of different ways that people are figuring it out.

462
00:28:26,680 --> 00:28:28,200
Speaker 3: I actually saw a demo the other

463
00:28:28,160 --> 00:28:32,039
Speaker 2: day of how people are using AI and what they

464
00:28:32,079 --> 00:28:37,799
call augmented reality headsets on construction sites to be able

465
00:28:37,839 --> 00:28:41,359
to much more quickly and more safely figure out where

466
00:28:41,359 --> 00:28:45,000
they should put pieces, what's the next step that they

467
00:28:45,039 --> 00:28:48,640
need to take. And you can help people coordinate in

468
00:28:48,680 --> 00:28:51,599
a much safer way using both AI and this

469
00:28:51,799 --> 00:28:55,160
VR together. But this is all still very early days.

470
00:28:55,160 --> 00:28:59,000
I don't think we really know the overall job impacts

471
00:28:59,039 --> 00:29:02,160
of this technology. What we do know is that intelligence,

472
00:29:03,440 --> 00:29:07,640
artificial intelligence is valuable because intelligence is valuable, and so

473
00:29:08,440 --> 00:29:12,359
to the extent that we can use AI to amplify

474
00:29:12,480 --> 00:29:15,440
our intellectual endeavors the way that we used you know,

475
00:29:15,480 --> 00:29:18,720
the steam engine and the combustion engine to amplify our

476
00:29:19,200 --> 00:29:24,480
physical capabilities, there is huge potential

477
00:29:24,519 --> 00:29:27,480
here for a lot of productivity, and that

478
00:29:27,559 --> 00:29:29,279
means change in the types of jobs.

479
00:29:30,640 --> 00:29:32,519
Speaker 3: But we don't

480
00:29:32,279 --> 00:29:34,400
Speaker 2: really know how fast or how that's going to work

481
00:29:34,440 --> 00:29:36,680
out yet, so there's some uncertainty for sure.

482
00:29:37,160 --> 00:29:41,200
Speaker 1: You note that AI can sharpen skills of individuals. Can

483
00:29:41,200 --> 00:29:45,240
it steel the courage of members of Congress? Is that

484
00:29:45,359 --> 00:29:48,039
yet part of the technology suite?

485
00:29:48,839 --> 00:29:52,400
Speaker 2: Uh, wow, I hadn't thought of that as an application.

486
00:29:53,279 --> 00:29:57,599
Somebody should train an app to do that. Uh, you know,

487
00:29:57,640 --> 00:29:59,599
train a model to do that. The question would be

488
00:29:59,640 --> 00:30:02,759
could we get members of Congress to use it?

489
00:30:02,839 --> 00:30:07,680
I did hear that Josh Hawley recently used ChatGPT

490
00:30:07,960 --> 00:30:11,759
to explore some sixteen hundreds Puritan history and was quite

491
00:30:11,759 --> 00:30:14,400
impressed with the response that he got back. But I

492
00:30:14,440 --> 00:30:16,160
think that was a very early use for him. So

493
00:30:16,160 --> 00:30:18,279
I hope he digs in, and I hope

494
00:30:18,440 --> 00:30:21,400
members of Congress do as well. I think understanding how

495
00:30:21,440 --> 00:30:25,640
this technology works and doesn't work is really valuable, and

496
00:30:25,359 --> 00:30:28,200
it's not the type of technology that sits off

497
00:30:28,240 --> 00:30:30,000
somewhere else and you would have to like plan a

498
00:30:30,039 --> 00:30:32,440
trip to go do it. Anybody can try it out,

499
00:30:32,519 --> 00:30:36,920
see how it works, and I think that that experience

500
00:30:37,039 --> 00:30:43,319
is worth doing just to understand, you know, what exactly

501
00:30:43,359 --> 00:30:44,440
we're dealing with here.

502
00:30:45,079 --> 00:30:48,160
Speaker 1: Well, I mean, you know, as a computer programmer,

503
00:30:48,200 --> 00:30:53,480
you know the model here, or I guess the mantra more so,

504
00:30:53,559 --> 00:30:56,359
and that is garbage in, garbage out. What you

505
00:30:56,440 --> 00:30:59,440
put into the system is what you can expect to get

506
00:30:59,480 --> 00:31:01,640
out of it. And I think that pretty much described

507
00:31:01,680 --> 00:31:04,599
the sixteen nineteen project, speaking of Puritan

508
00:31:06,119 --> 00:31:07,400
Speaker 3: travel, all of

509
00:31:07,359 --> 00:31:10,279
Speaker 1: that sort of thing. We've talked about, you know, some

510
00:31:10,359 --> 00:31:15,359
of the concerns obviously in this emerging technology that's been

511
00:31:15,400 --> 00:31:19,359
emerging by the way for the last seventy years, as

512
00:31:19,400 --> 00:31:23,759
you note. But there are some really powerful applications. I

513
00:31:23,759 --> 00:31:29,160
think about the healthcare arena. What about AI and healthcare?

514
00:31:29,359 --> 00:31:32,839
What about you know, what we're really seeing in terms

515
00:31:32,880 --> 00:31:38,359
of AI really driving positive change in different facets of

516
00:31:38,400 --> 00:31:38,960
our lives?

517
00:31:40,440 --> 00:31:43,039
Speaker 2: So I think the healthcare arena is a great example.

518
00:31:43,240 --> 00:31:46,079
The way that these tools work is they take large

519
00:31:46,519 --> 00:31:50,559
amounts of data that, as humans, we can't see

520
00:31:51,000 --> 00:31:54,519
complicated patterns in, and they help expose those patterns. And

521
00:31:54,599 --> 00:31:57,799
healthcare is full of this type of data where we

522
00:31:57,920 --> 00:32:00,720
have a lot of data about you know, we might

523
00:32:00,720 --> 00:32:05,839
have millions of CT scans of breast cancer, for example,

524
00:32:06,839 --> 00:32:10,279
but we still have a very manual process for identifying that. Well,

525
00:32:10,359 --> 00:32:14,160
these tools can do that type of analysis faster

526
00:32:14,400 --> 00:32:17,720
and more accurately, and often they can identify, you know,

527
00:32:18,039 --> 00:32:22,759
risk factors earlier than doctors could, in particular in the

528
00:32:22,759 --> 00:32:26,000
breast cancer context. That's just one area. One of the

529
00:32:26,000 --> 00:32:29,079
other big applications that's possible here is, you know, we

530
00:32:29,160 --> 00:32:33,720
treat and we research healthcare in this country and around

531
00:32:33,759 --> 00:32:38,480
the world sort of, uh, in a very generic way.

532
00:32:38,519 --> 00:32:41,200
We sort of treat people as like an average human.

533
00:32:41,480 --> 00:32:44,640
But the truth is there's so much variation in the

534
00:32:44,720 --> 00:32:49,839
human body and in the human health system

535
00:32:49,960 --> 00:32:55,480
that being able to diagnose at a much more

536
00:32:55,599 --> 00:32:58,440
personalized level what is going on in your body is

537
00:32:58,480 --> 00:33:01,160
something that AI is getting better and better at, and

538
00:33:01,200 --> 00:33:05,880
I think that that raises just huge potential benefits for

539
00:33:06,079 --> 00:33:10,079
customized medicine that is directed exactly at the problems that

540
00:33:10,160 --> 00:33:13,400
you have and the cluster of problems that you or

541
00:33:13,440 --> 00:33:16,400
your family member might have that won't look like the

542
00:33:16,519 --> 00:33:17,480
vast majority of

543
00:33:17,440 --> 00:33:18,759
Speaker 3: issues that other people have.

544
00:33:18,880 --> 00:33:22,799
Speaker 2: And so we've already seen that in what are called

545
00:33:22,880 --> 00:33:24,079
like orphan diseases.

546
00:33:24,200 --> 00:33:25,720
Speaker 3: These are diseases

547
00:33:25,200 --> 00:33:28,720
Speaker 2: that might affect, you know, a thousand people across the

548
00:33:28,720 --> 00:33:31,799
world at any time, and there are millions of people

549
00:33:31,799 --> 00:33:35,759
who are suffering from these types of diseases that are

550
00:33:35,799 --> 00:33:39,960
so small and so targeted that it's very hard just

551
00:33:40,000 --> 00:33:43,799
from a business perspective or an economic incentive perspective, to

552
00:33:43,920 --> 00:33:46,519
create treatments for them. If you're only going to solve

553
00:33:46,559 --> 00:33:49,400
problems for a thousand people, it's hard to know. But

554
00:33:49,480 --> 00:33:52,519
what we're learning, and there's some great research, I believe

555
00:33:52,519 --> 00:33:58,279
it's at the University of Washington, that is taking existing drugs,

556
00:33:58,440 --> 00:34:02,279
existing treatments and identifying where those treatments could apply to

557
00:34:02,359 --> 00:34:06,920
some of these orphan diseases. And they're really

558
00:34:06,920 --> 00:34:10,400
bringing real benefit, like life-saving treatments, to people

559
00:34:11,079 --> 00:34:15,119
by applying existing approved drugs that otherwise wouldn't have been

560
00:34:15,360 --> 00:34:20,639
used in that context to these orphan diseases. And that's

561
00:34:20,679 --> 00:34:22,480
the sort of thing that you could really only do

562
00:34:22,599 --> 00:34:26,559
with AI, because you can enter in

563
00:34:26,599 --> 00:34:28,559
all this data and you can get these patterns out

564
00:34:28,559 --> 00:34:32,320
that identify, hey, maybe we should try this technique over

565
00:34:32,639 --> 00:34:34,920
on this thing where we've never thought to try it before.

566
00:34:34,920 --> 00:34:37,559
And so I'm super excited about those types of treatments.

567
00:34:37,599 --> 00:34:40,440
I think that's going to make our lives healthier and longer.

568
00:34:41,280 --> 00:34:43,480
And it's one of the areas where we need to

569
00:34:43,480 --> 00:34:47,119
get the policy right because the way we treat healthcare

570
00:34:47,199 --> 00:34:50,079
data in this country makes it very hard to

571
00:34:50,119 --> 00:34:52,400
do some of these types of things, and so we

572
00:34:52,440 --> 00:34:54,440
need to get policy right. But if we get that right,

573
00:34:55,360 --> 00:34:58,440
the health benefits here are really really exciting.

574
00:34:58,960 --> 00:35:01,480
Speaker 1: Well, speaking of policy again, you know, the big

575
00:35:01,519 --> 00:35:07,639
beautiful bill has some interesting things in it about AI.

576
00:35:08,159 --> 00:35:12,519
It aims in many ways to stem the wave of

577
00:35:12,840 --> 00:35:19,599
state AI laws, you know, that create this patchwork

578
00:35:19,719 --> 00:35:24,360
of you know, different boundary lines for AI, which you

579
00:35:24,400 --> 00:35:27,719
know is problematic, to say the least. I mean, there

580
00:35:27,760 --> 00:35:32,880
has to be clearly some regulation in this area. It's

581
00:35:32,960 --> 00:35:37,119
just what that is. Now, that did not make it

582
00:35:37,639 --> 00:35:42,159
through, you know, the process, right? Where does

583
00:35:42,199 --> 00:35:43,280
all of that stand

584
00:35:43,320 --> 00:35:43,480
Speaker 1: now?

585
00:35:43,519 --> 00:35:46,920
Speaker 1: Where do you see that going? Because, you know,

586
00:35:46,960 --> 00:35:53,199
it's understandable that AI companies have some concerns about trying

587
00:35:53,239 --> 00:35:57,800
to navigate so many different laws in this arena.

588
00:35:58,559 --> 00:36:01,599
Speaker 3: Yeah, there are two real concerns here. One is the one

589
00:36:01,599 --> 00:36:02,440
that you said, which is

590
00:36:02,360 --> 00:36:05,719
Speaker 2: A sort of patchwork of you know, there were over

591
00:36:05,760 --> 00:36:09,039
one thousand state laws that related to AI that were

592
00:36:09,079 --> 00:36:13,800
introduced in twenty twenty five so far, and, you know,

593
00:36:13,880 --> 00:36:16,840
most of those were not problematic, but you still have

594
00:36:16,920 --> 00:36:19,039
to pay attention to them a little bit. Some of

595
00:36:19,079 --> 00:36:21,599
them were deeply problematic and didn't pass. Some of them

596
00:36:21,599 --> 00:36:24,239
were deeply problematic and did pass. And so I think

597
00:36:24,239 --> 00:36:27,400
California passed sixteen different AI laws this session.

598
00:36:27,480 --> 00:36:30,199
Speaker 3: And so one of the

599
00:36:30,119 --> 00:36:32,920
Speaker 2: concerns is just this patchwork compliance, right? Like, how do

600
00:36:33,000 --> 00:36:36,480
I know? What if I offer this product and

601
00:36:36,519 --> 00:36:38,880
there's somebody who used it in Missouri and somebody who

602
00:36:38,960 --> 00:36:40,920
used it in California, am I going to have to

603
00:36:40,920 --> 00:36:42,920
comply with two different sets of laws? How do I

604
00:36:42,960 --> 00:36:46,039
do that? That's a real problem. That's especially a

605
00:36:46,079 --> 00:36:49,519
problem for startups. Bigger companies can kind of afford to

606
00:36:49,559 --> 00:36:52,320
pay like a huge legal shop to try to figure

607
00:36:52,320 --> 00:36:55,320
out how to do this, but smaller companies are just

608
00:36:55,360 --> 00:36:57,760
going to struggle to do that. And so that's a

609
00:36:57,840 --> 00:36:59,920
problem for competition in this space, which I think is

610
00:37:00,159 --> 00:37:03,800
an important dimension. The second problem is what I call

611
00:37:04,320 --> 00:37:08,079
extraterritoriality. And this is an old problem in our

612
00:37:08,119 --> 00:37:13,360
federalist system. In fact, it's what drove the founders

613
00:37:13,400 --> 00:37:16,000
to move away from the Articles of Confederation and to

614
00:37:16,119 --> 00:37:21,400
move towards the Constitution to have a more centralized government

615
00:37:21,440 --> 00:37:25,000
with limited powers, and then you know, the states to

616
00:37:25,039 --> 00:37:28,199
have you know, other powers, the police powers primarily. And

617
00:37:28,239 --> 00:37:32,280
so what that means is if California passes a law,

618
00:37:32,480 --> 00:37:38,159
say dictating what you know political advertising can look like,

619
00:37:38,880 --> 00:37:43,559
it embeds you know the values of the California legislature.

620
00:37:44,280 --> 00:37:48,679
But because California is such a large market, and because

621
00:37:48,719 --> 00:37:51,360
companies are going to want to sell into that market,

622
00:37:51,800 --> 00:37:54,719
that means that you know, probably the people in Oklahoma

623
00:37:54,760 --> 00:37:59,000
and Missouri and Iowa and you know, North Carolina, they're

624
00:37:59,039 --> 00:38:01,719
probably going to be operating within the same system that

625
00:38:01,800 --> 00:38:04,679
California has dictated. And I think that that is a

626
00:38:04,679 --> 00:38:07,320
real problem, especially when we get into some more of

627
00:38:07,360 --> 00:38:09,880
the political and speech

628
00:38:09,559 --> 00:38:11,239
Speaker 3: based concerns in this area.

629
00:38:11,320 --> 00:38:14,880
Speaker 2: We don't want California to be setting the national standard

630
00:38:15,519 --> 00:38:19,960
for what AI looks like. That's Congress's job. This

631
00:38:20,039 --> 00:38:23,840
is a national technology, and so, you know, the

632
00:38:23,960 --> 00:38:27,320
Big Beautiful Bill, there was an

633
00:38:27,320 --> 00:38:32,159
opportunity there for Congress to do something. They tried, couldn't

634
00:38:32,159 --> 00:38:35,079
get it across the line on that particular one. But

635
00:38:35,119 --> 00:38:38,920
there's building interest in this, in having a federal approach to

636
00:38:39,000 --> 00:38:42,920
this technology, which is a national technology that has both

637
00:38:43,239 --> 00:38:48,000
like, national economic importance but also national security importance

638
00:38:48,079 --> 00:38:50,400
as well when we talk about, you know, competing with

639
00:38:50,519 --> 00:38:52,679
China in this area, and so I think there is

640
00:38:52,760 --> 00:38:56,360
building interest in doing something at the federal level. We

641
00:38:56,559 --> 00:38:59,840
just had another opportunity. There was an exploration of whether

642
00:38:59,920 --> 00:39:02,519
or not we could get this into you know, Congress

643
00:39:02,559 --> 00:39:05,840
was going to put this into something like the

644
00:39:05,840 --> 00:39:11,159
NDAA, which is the big appropriations bill for national defense.

645
00:39:12,480 --> 00:39:17,000
Ultimately, that didn't happen; it wasn't the right vehicle. It's

646
00:39:17,000 --> 00:39:20,360
a consensus document. It was just challenging to get bipartisan

647
00:39:20,400 --> 00:39:23,559
support there. But I think Congress continues to be interested

648
00:39:23,559 --> 00:39:28,159
in this. I think people recognize that national technology should

649
00:39:28,159 --> 00:39:31,400
be you know, regulated at the national level, and so

650
00:39:31,639 --> 00:39:33,880
I think Congress is going to continue to explore that.

651
00:39:34,519 --> 00:39:36,719
The White House has been pushing pretty hard on this.

652
00:39:36,800 --> 00:39:39,000
President Trump has come out vocally and said that we

653
00:39:39,039 --> 00:39:43,079
need a federal framework in this space. We cannot subject

654
00:39:43,119 --> 00:39:47,760
this national technology to, you know, a fifty-state patchwork

655
00:39:47,840 --> 00:39:51,159
of laws, and we certainly can't let California dictate what

656
00:39:52,000 --> 00:39:54,320
our technology looks like when we're competing with China.

657
00:39:54,440 --> 00:39:58,880
Speaker 1: So, well, just ask the hog producers of

658
00:39:59,239 --> 00:40:05,519
Iowa about the impact California laws can have on their marketplace,

659
00:40:06,880 --> 00:40:07,199
you know.

660
00:40:07,239 --> 00:40:10,480
Speaker 2: Or anybody who's bought a pickup truck, right? Absolutely, and

661
00:40:10,519 --> 00:40:14,199
had to meet, you know, California CAFE-like fuel efficiency standards.

662
00:40:14,280 --> 00:40:15,480
Speaker 3: Yeah, yeah, exactly.

663
00:40:16,599 --> 00:40:18,800
Speaker 1: All right, Well, we're quickly running out of time. I

664
00:40:18,920 --> 00:40:21,880
just have a couple of questions left. The first is

665
00:40:22,400 --> 00:40:28,639
the resources involved in this. Obviously, AI, this technology requires

666
00:40:29,000 --> 00:40:33,880
a great deal of a particular natural resource. Where do

667
00:40:33,960 --> 00:40:35,519
you see all of that going ahead?

668
00:40:36,440 --> 00:40:40,400
Speaker 2: Yeah, so AI requires a lot of energy. The current

669
00:40:41,519 --> 00:40:45,400
models and the current chips that are used to run

670
00:40:45,440 --> 00:40:50,639
them require energy, and so, unfortunately, in this

671
00:40:50,719 --> 00:40:53,960
country we have been operating since the nineteen seventies under

672
00:40:53,960 --> 00:40:56,559
a sort of scarcity mindset when it comes to energy,

673
00:40:56,599 --> 00:41:00,360
this idea that we need to conserve and recycle and

674
00:41:00,480 --> 00:41:03,599
all of these things. Efficiency is good, recycling is good.

675
00:41:03,679 --> 00:41:06,039
Using things the best we can is good.

676
00:41:06,480 --> 00:41:07,519
Speaker 3: But we have.

677
00:41:07,480 --> 00:41:12,360
Speaker 2: A wealth of resources in this country. We should be

678
00:41:12,519 --> 00:41:18,280
using them. Prosperity and energy intensity are directly related. It's

679
00:41:18,320 --> 00:41:20,519
hard to be a wealthy country without using a lot

680
00:41:20,519 --> 00:41:22,679
of energy, and we shouldn't think of that as a

681
00:41:22,679 --> 00:41:25,719
bad thing. We should be trying to build more energy

682
00:41:25,719 --> 00:41:28,880
in this country. And I think it's finally AI has

683
00:41:28,920 --> 00:41:32,039
sort of woken people up to the fact that, hey,

684
00:41:33,159 --> 00:41:35,679
this scarcity mindset needs to go away. We need an

685
00:41:35,719 --> 00:41:39,280
abundance mindset when it comes to energy. Building more energy

686
00:41:39,360 --> 00:41:42,440
is a good thing, and providing more energy is good

687
00:41:42,480 --> 00:41:44,280
and that's the sort of thing that over time will

688
00:41:44,400 --> 00:41:49,519
drive down prices and will drive up new solutions, and

689
00:41:49,559 --> 00:41:50,800
so these data centers are.

690
00:41:50,679 --> 00:41:53,119
Speaker 3: Sort of the spark for that. But I think we could.

691
00:41:52,920 --> 00:41:57,039
Speaker 2: All benefit from having more energy abundance in this country.

692
00:41:57,840 --> 00:41:59,920
And so the balance is going to be like, where

693
00:42:00,079 --> 00:42:01,960
we build this, how can we build it, how do

694
00:42:02,000 --> 00:42:05,000
we deal with the concerns of you know, local communities

695
00:42:05,400 --> 00:42:09,800
about energy prices. All of that means that we should

696
00:42:09,840 --> 00:42:13,719
aim towards, you know, not subsidizing these types of projects,

697
00:42:13,960 --> 00:42:17,239
but finding a way to enable them to help give

698
00:42:17,360 --> 00:42:20,920
back both on the energy production side, but also on

699
00:42:21,000 --> 00:42:24,360
the, you know, amazing tools that they're building.

700
00:42:25,280 --> 00:42:27,000
We talk about data centers, I have to think we

701
00:42:27,000 --> 00:42:30,960
should just say supercomputers, like people should be like excited

702
00:42:31,000 --> 00:42:34,440
to have a new supercomputer in their backyard. Well,

703
00:42:34,440 --> 00:42:36,559
maybe that just makes me super nerdy. I am excited

704
00:42:36,599 --> 00:42:37,320
by such things.

705
00:42:38,679 --> 00:42:41,639
Speaker 1: Not everybody, not everybody is as excited as you

706
00:42:41,679 --> 00:42:42,400
are about this.

707
00:42:42,559 --> 00:42:45,280
Speaker 2: But I think you're right, and I

708
00:42:45,280 --> 00:42:47,840
need to step out of my own bubble sometimes. And

709
00:42:47,880 --> 00:42:51,000
there certainly are trade-offs that come from this.

710
00:42:51,920 --> 00:42:55,360
But I think overall, the benefits are enormous. There are

711
00:42:55,440 --> 00:42:59,880
communities that are very interested in the level of investment

712
00:43:00,519 --> 00:43:03,920
that comes with these types of data centers. You know,

713
00:43:04,000 --> 00:43:07,400
Texas is building a bunch of them, Louisiana is

714
00:43:07,400 --> 00:43:10,960
building some massive ones. These are creating you know, hundreds

715
00:43:11,159 --> 00:43:15,880
of thousands of construction jobs, and then you know the

716
00:43:15,960 --> 00:43:19,400
ongoing, you know, maintenance of the data centers

717
00:43:19,480 --> 00:43:22,079
is, you know, less job intensive, but

718
00:43:22,159 --> 00:43:25,079
creates prosperity and creates a hub for technology in the

719
00:43:25,280 --> 00:43:27,599
nearby area. And so I think there

720
00:43:27,639 --> 00:43:30,039
are lots of benefits. I think we need to keep

721
00:43:30,079 --> 00:43:31,840
those in mind, and we need to deal with some

722
00:43:31,920 --> 00:43:34,039
of the myths. One of the biggest myths is around

723
00:43:34,039 --> 00:43:37,280
this water use issue, which is just totally made up.

724
00:43:37,280 --> 00:43:38,039
Speaker 3: I mean, there is.

725
00:43:38,519 --> 00:43:41,920
Speaker 2: Data centers use less water than the typical, like,

726
00:43:42,159 --> 00:43:46,679
mid-size brewery does. And, like, a

727
00:43:46,719 --> 00:43:50,599
small manufacturer often uses more water than a data center.

728
00:43:51,519 --> 00:43:52,800
Speaker 3: They are energy intensive.

729
00:43:52,880 --> 00:43:55,559
Speaker 2: But water is not really an issue in this space,

730
00:43:55,599 --> 00:43:58,840
and I don't know why that's become such a story,

731
00:43:59,719 --> 00:44:02,559
but I think people just feel, you know, that

732
00:44:02,800 --> 00:44:05,119
they don't want their bodily fluids polluted. We learned that

733
00:44:05,159 --> 00:44:08,960
from, you know... what

734
00:44:08,960 --> 00:44:10,679
what is that movie? I'm blanking on it right now,

735
00:44:10,719 --> 00:44:12,039
but... Soylent Green?

736
00:44:12,519 --> 00:44:18,840
Speaker 3: That wasn't Soylent Green, it was... I'm blanking. But, you know, people.

737
00:44:18,679 --> 00:44:22,159
Speaker 1: I'm trying to remember a movie with bodily fluids as

738
00:44:22,159 --> 00:44:28,320
its core principle. Actually, I'm sure there are... Strangelove, Doctor Strange...

739
00:44:29,119 --> 00:44:30,760
Now you got it, yes, yeah.

740
00:44:30,920 --> 00:44:31,840
Speaker 3: Doctor Strangelove.

741
00:44:31,840 --> 00:44:34,079
Speaker 2: And so I don't know why the water talking point

742
00:44:34,119 --> 00:44:37,519
has been so viral, but it's just not true.

743
00:44:37,760 --> 00:44:39,559
Speaker 1: It really has been. I mean, that's

744
00:44:39,840 --> 00:44:42,199
really what I was getting at; that's

745
00:44:42,239 --> 00:44:45,440
been the topic of conversation. Now, of course, it's been

746
00:44:45,480 --> 00:44:48,360
the topic of a conversation that's in no small part

747
00:44:48,519 --> 00:44:53,920
driven by environmentalists and extreme environmentalists, of course. And they

748
00:44:53,920 --> 00:44:57,320
have certainly never led us astray before, right?

749
00:44:58,639 --> 00:45:02,960
Speaker 2: They... And honestly, it's such a

750
00:45:03,000 --> 00:45:08,119
self-defeating mindset. The environmentalists have been

751
00:45:08,119 --> 00:45:10,840
the primary driver of that mindset I said before, of

752
00:45:10,880 --> 00:45:15,000
energy scarcity, that basically humans using the

753
00:45:15,039 --> 00:45:18,360
resources that God put here on our planet is

754
00:45:18,880 --> 00:45:21,639
a bad thing, more or less. And so I think

755
00:45:22,639 --> 00:45:25,280
they certainly have led us astray before. I think on

756
00:45:25,280 --> 00:45:27,960
this water thing, they very much are leading us astray.

757
00:45:28,880 --> 00:45:32,280
You know, electrical use, energy use, is a challenge,

758
00:45:32,519 --> 00:45:34,639
and we need to make sure that we get

759
00:45:34,679 --> 00:45:37,320
that balance right. And to me, the right solution there

760
00:45:37,360 --> 00:45:39,960
is to build more. We have the ability, we have

761
00:45:40,000 --> 00:45:42,679
the technical capability, we have the deep resources in this

762
00:45:42,719 --> 00:45:43,519
country to do it.

763
00:45:43,880 --> 00:45:45,719
Speaker 3: We need to do it, or you know.

764
00:45:46,079 --> 00:45:49,320
Speaker 2: These data centers will be built in China and

765
00:45:49,599 --> 00:45:53,679
in you know, Saudi Arabia and in places where they'll

766
00:45:53,719 --> 00:45:58,519
be outside of the sort of US cultural influence, and

767
00:45:58,519 --> 00:46:01,320
then they won't be serving our national interest. And so

768
00:46:02,800 --> 00:46:05,119
I worry about that as a challenge. I think it's

769
00:46:05,519 --> 00:46:06,840
one we can tackle, and we should.

770
00:46:07,800 --> 00:46:11,000
Speaker 1: Well, if my AOC clock is right, we have about

771
00:46:11,000 --> 00:46:14,559
two and a half years of existence left. If I'm

772
00:46:14,559 --> 00:46:17,360
not mistaken, because it was twelve years at one point,

773
00:46:17,400 --> 00:46:20,840
I haven't checked in. I probably should. Final question for you,

774
00:46:21,119 --> 00:46:23,519
and this is it. You know, I've made no secret

775
00:46:23,559 --> 00:46:27,079
to you as we've talked about these issues in the past.

776
00:46:27,840 --> 00:46:30,800
AI technology scares the hell out of me. Yeah, it's

777
00:46:30,880 --> 00:46:37,519
probably because I'm old enough to have lived in

778
00:46:37,559 --> 00:46:42,039
a time where I didn't envision the technology could possibly

779
00:46:42,199 --> 00:46:48,320
get beyond the Atari twenty six hundred and Frogger. So,

780
00:46:48,880 --> 00:46:51,639
you know, maybe I'm a Luddite on that front. But

781
00:46:52,199 --> 00:46:57,639
here's the question, plain and simple. Will AI eventually become

782
00:46:57,719 --> 00:47:01,880
our overlords and enslave us and our progeny?

783
00:47:03,519 --> 00:47:09,400
Speaker 2: No, no, no, these are advanced computers. They

784
00:47:09,440 --> 00:47:12,679
don't have motives. They don't have initiative.

785
00:47:13,000 --> 00:47:14,000
Speaker 3: What they have right now.

786
00:47:14,079 --> 00:47:17,079
Speaker 2: What these systems have right now is they have the

787
00:47:17,159 --> 00:47:22,039
ability to identify patterns in a wide range of data

788
00:47:22,679 --> 00:47:27,239
and collate it into a response to a query, and

789
00:47:27,400 --> 00:47:29,760
we can do really amazing things. It turns out we

790
00:47:29,760 --> 00:47:32,840
can do really amazing things with that. But they do

791
00:47:32,880 --> 00:47:37,360
not have initiative or drive. I worry

792
00:47:37,360 --> 00:47:41,280
more about how humans will misuse them than I do

793
00:47:41,639 --> 00:47:44,639
about whether or not the AIs will somehow become independent.

794
00:47:44,719 --> 00:47:47,840
Right now, there is not a clear pathway to that

795
00:47:47,960 --> 00:47:50,280
sort of like autonomy, and so.

796
00:47:51,480 --> 00:47:53,280
Speaker 3: I don't worry about that at all. I worry.

797
00:47:53,800 --> 00:47:55,880
Speaker 2: I worry about not being able to take advantage of

798
00:47:55,880 --> 00:47:59,840
all the huge benefits that these technologies bring because

799
00:47:59,840 --> 00:48:03,360
we have people who are too worried about you.

800
00:48:03,320 --> 00:48:05,480
Speaker 3: Know, science fiction scenarios.

801
00:48:05,039 --> 00:48:07,920
Speaker 1: So, well, it is. It is a brave new world

802
00:48:08,159 --> 00:48:11,800
in many facets. It's a very interesting new world, and

803
00:48:11,840 --> 00:48:15,000
it's a new world filled with all kinds of possibilities,

804
00:48:15,039 --> 00:48:19,760
as you say, many of them extremely positive. I'm not,

805
00:48:20,440 --> 00:48:22,880
you know, too much of a Luddite to understand that,

806
00:48:23,679 --> 00:48:26,440
but we need to stop every once in a while

807
00:48:26,480 --> 00:48:29,480
and discuss the impacts thus far and get a sense

808
00:48:29,519 --> 00:48:32,519
of where we're going. And you helped us do exactly that.

809
00:48:33,119 --> 00:48:35,960
Speaker 2: I appreciate it absolutely, and I should point out that,

810
00:48:36,239 --> 00:48:39,599
you know, it's not just the Luddites that like Frogger.

811
00:48:39,679 --> 00:48:43,119
My six-year-old daughter plays Frogger all the time,

812
00:48:43,239 --> 00:48:45,800
so it has a long life. Some of these things

813
00:48:45,840 --> 00:48:48,920
stick with us. Great technology can bring joy to people,

814
00:48:49,719 --> 00:48:52,400
you know, for decades. So that's what I'm hoping AI does.

815
00:48:52,840 --> 00:48:55,920
Speaker 1: Introduce her to Dig Dug and you may never see her again.

816
00:48:56,760 --> 00:48:57,519
Speaker 3: That might be right.

817
00:48:58,199 --> 00:49:01,360
Speaker 1: Thanks to my guest today, Neil Chilson, head of AI

818
00:49:01,519 --> 00:49:05,119
policy with the Abundance Institute, you've been listening to another

819
00:49:05,239 --> 00:49:07,840
edition of the Federalist Radio Hour. I'm Matt Kittle, Senior

820
00:49:07,840 --> 00:49:11,760
Elections correspondent at the Federalist. We'll be back soon with more.

821
00:49:12,199 --> 00:49:16,000
Until then, stay lovers of freedom and anxious for the fray.

822
00:49:22,880 --> 00:49:28,480
Speaker 2: I heard the faint voice of reason, and then it

823
00:49:28,679 --> 00:49:34,000
faded away.

