1
00:00:01,080 --> 00:00:04,799
How'd you like to listen to .NET
Rocks with no ads? Easy!

2
00:00:05,360 --> 00:00:09,480
Become a patron for just five dollars
a month. You get access to a

3
00:00:09,480 --> 00:00:14,240
private RSS feed where all the shows
have no ads. Twenty dollars a month,

4
00:00:14,279 --> 00:00:18,440
we'll get you that and a special
.NET Rocks patron mug. Sign

5
00:00:18,519 --> 00:00:35,880
up now at patreon.com/dotnetrocks.
Welcome back to .NET

6
00:00:35,960 --> 00:00:39,439
Rocks. This is Carl Franklin,
this is Richard Campbell, and this is the

7
00:00:40,439 --> 00:00:43,320
show coming out on the twenty-first.
So this is the last .NET Rocks

8
00:00:43,359 --> 00:00:47,439
show published before Christmas, but we've
got a few more to record. Just

9
00:00:47,479 --> 00:00:51,479
Geek Outs. How you been, man? You know how I've been. I've been

10
00:00:51,520 --> 00:00:54,359
working on the scripts for the Geek Outs,
it turns out. You know, it's a lot

11
00:00:54,399 --> 00:00:59,200
of stuff. It's been a busy
year, yeah, sure has. How

12
00:00:59,240 --> 00:01:03,039
about you? What are you up to? Well, last night I did my first recording

13
00:01:03,320 --> 00:01:07,799
of one of my songs in the
studio, in the new studio. Oh,

14
00:01:07,840 --> 00:01:11,000
a new track? A new... Well, it's an older track that nobody's

15
00:01:11,040 --> 00:01:15,879
heard yet. But the first recording
I did was all kind of discombobulated.

16
00:01:17,319 --> 00:01:22,799
It's slower, it's at one hundred
BPM, and with acoustic

17
00:01:22,799 --> 00:01:26,560
strumming and picking and stuff, and
therefore it's very easy to get off time.

18
00:01:26,040 --> 00:01:30,000
So I want to tell you about
the cool stuff that I did that

19
00:01:30,079 --> 00:01:34,239
I did this with. I recorded
the acoustic guitar and the bass first,

20
00:01:34,359 --> 00:01:40,879
and then I used Studio One's
Bend tool to quantize the audio. What,

21
00:01:41,200 --> 00:01:46,920
quantize the audio? It basically worked
right out of the box, and I

22
00:01:47,040 --> 00:01:49,879
had tried it before, but it
hadn't really worked well. Maybe the default

23
00:01:49,920 --> 00:01:56,439
settings are good now. But what
it does is it finds the transients according

24
00:01:56,480 --> 00:02:01,719
to your quantize value eighth notes or
whatever, and then it essentially moves the

25
00:02:02,000 --> 00:02:09,439
transients and either stretches or compresses the
audio between the transients, and basically turns it

26
00:02:09,479 --> 00:02:15,199
into quantized audio, like it's right on the
beat. Okay, so it's just like

27
00:02:15,360 --> 00:02:20,960
making the beat perfect. Yeah.
And it does not sound artifacted

28
00:02:21,159 --> 00:02:24,000
at all. Interesting. To me, anyway.
So then I did the same

29
00:02:24,039 --> 00:02:29,240
thing with the drums. For the
drums, I used my iPad at the

30
00:02:29,319 --> 00:02:31,400
drum set, because it's all the way
across the room. Right, right. And on the

31
00:02:31,439 --> 00:02:37,840
iPad I installed this thing called Studio
One Remote, and so I set markers

32
00:02:37,879 --> 00:02:42,520
in the three places where I wanted
to record drums and I could just go

33
00:02:42,680 --> 00:02:46,759
back, undo, record. So I did
all the recording from the drum set with

34
00:02:46,960 --> 00:02:53,439
my iPad on a music stand and
I did two sections, one with brushes

35
00:02:53,479 --> 00:02:57,840
and one without. And it
took a while to record, you know,

36
00:02:57,879 --> 00:03:00,800
but it was great because I could
just take another take, and another take, till it

37
00:03:00,840 --> 00:03:05,719
was right, and then I quantized
the drums. Those drummers are never on

38
00:03:05,800 --> 00:03:09,199
beat anyway. I was pretty close, but I wasn't perfect, and I

39
00:03:09,319 --> 00:03:14,159
really wanted this to be tight. Interesting.
And it really is. So I'm very

40
00:03:14,159 --> 00:03:19,560
excited. Everything worked and yeah,
what can I say? Very happy with the

41
00:03:19,639 --> 00:03:25,759
new studio. Yeah, fully operational. Nice. Hey, let's get started

42
00:03:25,759 --> 00:03:36,560
with Better Know a Framework. All right, man, what do you got?

43
00:03:36,599 --> 00:03:39,159
Well, I found this post on
boredpanda.com. Have you ever

44
00:03:39,199 --> 00:03:44,800
seen that crazy site? Yeah?
Yeah, So this is thirty of the

45
00:03:44,840 --> 00:03:53,039
worst Christmas gifts people ever received,
as shared in this online group. One

46
00:03:53,080 --> 00:04:02,400
of my favorites is a dish towel. I was eight

47
00:04:02,479 --> 00:04:06,439
years old, and my parents gave
me a dish towel. Nice. And it

48
00:04:06,479 --> 00:04:12,240
wasn't even a new dish towel.
I mean, it was just like they went

49
00:04:12,280 --> 00:04:15,560
to the kitchen, found a dish
towel, wrapped it up in paper,

50
00:04:15,599 --> 00:04:21,480
and then gave it to this
eight-year-old kid. Huh. Yeah,

51
00:04:23,000 --> 00:04:29,319
Bad Christmas presents. So, hours and
hours of fun and yucks and silliness.

52
00:04:29,399 --> 00:04:31,519
Silliness, Yeah, which we need
once in a while, right, especially

53
00:04:31,600 --> 00:04:34,160
around the end of the year.
It's like we've been working hard all year.

54
00:04:34,959 --> 00:04:39,240
Now let's take some time. Awesome. Anyway, That's what I got.

55
00:04:39,399 --> 00:04:42,879
Who's talking to us? Richard grabbed
a comment off of Show eighteen seventy

56
00:04:42,879 --> 00:04:46,720
three, the one we just published
a little while ago with Leah Melanino from

57
00:04:46,800 --> 00:04:50,439
NDC Porto. We were talking
about sustainable development. You know, I'm

58
00:04:50,560 --> 00:04:54,160
just thinking about the energy conversation. We've got a great comment here from

59
00:04:54,199 --> 00:04:56,279
Jackie, who said: Hi,
Carl and Richard. I want to

60
00:04:56,279 --> 00:05:00,360
express my thanks for tackling this important
subject. This conversation resonated with me a

61
00:05:00,360 --> 00:05:03,040
lot, as I'm a software engineer
working with Azure cloud technologies. On a

62
00:05:03,079 --> 00:05:08,000
related note, I'd like to share
my recent experience with an iPhone. I've

63
00:05:08,079 --> 00:05:11,759
always been an Android user, primarily
due to the perception that iPhones are overpriced and

64
00:05:11,839 --> 00:05:15,560
non-standard devices. I think Apple
pretty much dictates the standard. Just look

65
00:05:15,560 --> 00:05:18,959
at what they did to
RSS. Often seen as more suitable for

66
00:05:19,040 --> 00:05:24,040
users seeking opinionated UX, like my mother, or a symbol of social status, like

67
00:05:24,079 --> 00:05:29,439
my brother. Jackie, you're dissing
the fam, right? It's Christmas.

68
00:05:30,680 --> 00:05:32,839
However, when my Google Pixel broke
and I had to send it in for

69
00:05:32,920 --> 00:05:36,959
service, I was left with no
choice but to use a backup iPhone 5S,

70
00:05:38,000 --> 00:05:41,000
a decade-old model offered by my
brother. See, he may be a

71
00:05:41,040 --> 00:05:44,240
social status seeker, but at least
he'll give you a spare iPhone. He's

72
00:05:44,279 --> 00:05:46,680
not a bad guy. He's not
a bad guy. Yeah, you had

73
00:05:46,720 --> 00:05:50,600
to reassess this relationship, man.
And to my surprise, the iPhone 5

74
00:05:50,800 --> 00:05:56,720
S was still receiving security updates, and
all my essential mobile apps functioned flawlessly on

75
00:05:56,800 --> 00:06:00,319
it. This is something I couldn't
even say about some five year old low

76
00:06:00,399 --> 00:06:04,040
end Android devices. Not that any
iPhone is a low-end device. The

77
00:06:04,079 --> 00:06:10,279
only drawback was battery life. Yeah, battery is not going to be great.

78
00:06:10,759 --> 00:06:14,480
This experience made me realize that even
though iPhones may not be considered entirely

79
00:06:14,519 --> 00:06:17,879
sustainable, due to a lack of
adherence to certain standards like USB-C or

80
00:06:17,879 --> 00:06:23,399
replaceable batteries. Their longevity is great
right up until you drop them. Therefore,

81
00:06:23,439 --> 00:06:27,120
I've decided to order an iPhone model
with a USB-C port. Thank

82
00:06:27,160 --> 00:06:32,720
you, EU, who demanded that Apple start
using USB-C. This will allow me

83
00:06:32,720 --> 00:06:35,600
to utilize most of my Android gadgets
and extend the lifespan of the phone. And

84
00:06:35,600 --> 00:06:40,399
thanks again for your dedication to the
.NET Rocks podcast. It has been

85
00:06:40,480 --> 00:06:44,639
a valuable source of knowledge and inspiration, especially during my career transition to

86
00:06:44,680 --> 00:06:47,399
.NET and C#. Looking forward to future
episodes. And I can add to that

87
00:06:49,040 --> 00:06:57,240
that iPhones are inherently more secure than
Android phones, just because Apple is such a

88
00:06:57,279 --> 00:07:00,959
closed system. That's one of the
benefits, actually. It's sort of security

89
00:07:00,000 --> 00:07:02,839
by obscurity. Nobody wants to try
and hack them. Really? I don't

90
00:07:02,839 --> 00:07:08,959
know about obscurity, but security by
iron-fist control over everything, everything

91
00:07:08,959 --> 00:07:13,759
that goes on that phone, right? So yeah. That's

92
00:07:13,959 --> 00:07:15,560
that's one of the reasons why I
have an iPhone. And the guys at

93
00:07:15,600 --> 00:07:19,040
Security This Week, Patrick Hynds and Duane
Laflotte, say the same thing. Yeah,

94
00:07:19,040 --> 00:07:23,560
I get the iPhone. The phone
is more secure than the Android phone. I

95
00:07:23,560 --> 00:07:27,240
mean, under the hood, Android is Linux.
Everybody knows Linux is terrible, especially Daniel.

96
00:07:27,240 --> 00:07:30,199
He knows that. I'm just
gonna poke at the Linux guy the

97
00:07:30,319 --> 00:07:34,360
whole day. But we're gonna poke at each other at

98
00:07:34,439 --> 00:07:40,079
Christmas time. Come on, hey, Jackie, thank you so much for

99
00:07:40,120 --> 00:07:42,519
your comment. Glad you really liked
the show. And a copy of Music

100
00:07:42,519 --> 00:07:44,680
to Code By is on its way to
you. And if you'd like a copy of

101
00:07:44,759 --> 00:07:47,120
Music to Code By, write a comment
on the website at dotnetrocks.com

102
00:07:47,279 --> 00:07:49,480
or on the facebooks. We publish
every show there, and if you comment

103
00:07:49,519 --> 00:07:51,959
there and we read it on the show,
we'll send you a copy of Music to Code

104
00:07:51,959 --> 00:07:55,279
By. And you can follow us on
Twitter if you like. But the real

105
00:07:55,319 --> 00:08:00,600
fun happens. I'm mastadon, I'm
at Carl Franklin at tech hub Social,

106
00:08:00,680 --> 00:08:03,439
and I'm @richcampbell@mastodon.social.
Send us a toot. You might get

107
00:08:03,480 --> 00:08:07,720
a mug if we read it on
the show. You think so? Pretty sure you

108
00:08:07,759 --> 00:08:09,959
won't. Yeah, pretty sure.
You get a copy of Music to Code By.

109
00:08:09,199 --> 00:08:11,560
Kind of how that works. Did
I say mug? You did. All

110
00:08:11,680 --> 00:08:15,399
right? Let me say that again, Brandon. All right.

111
00:08:15,439 --> 00:08:16,480
although that was kind of funny,
we might want to leave it in.

112
00:08:16,519 --> 00:08:22,120
It's kind of funny. I'm pretty
sure you won't. Let's leave it in.

113
00:08:22,199 --> 00:08:26,360
It's Christmas time, what the hell? All right, let me bring

114
00:08:26,399 --> 00:08:33,399
on our guest, Daniel Marbach. A
distinguished Microsoft MVP and software maestro at

115
00:08:33,440 --> 00:08:37,159
Particular Software, Daniel knows a thing
or two about code. By day,

116
00:08:37,399 --> 00:08:43,840
he's a devoted .NET crusader espousing
the virtues of message-based systems. By

117
00:08:43,919 --> 00:08:48,360
night, he's racing against his own
mischievous router hack, committing a bevy of

118
00:08:48,440 --> 00:09:00,720
performance improvements before the clock strikes midnight
and he turns into a pumpkin. Yes,

119
00:09:00,879 --> 00:09:05,120
exactly that. What's a router hack? What are you going to do?

120
00:09:05,240 --> 00:09:11,360
OpenWrt? No, it's just,
it's a very simple sort of trick,

121
00:09:11,399 --> 00:09:16,600
because I've been contributing to open source
and various things, and I'm just I

122
00:09:16,759 --> 00:09:20,480
just like to spend some time with
code because I feel like it sharpens

123
00:09:20,519 --> 00:09:24,759
sort of my understanding of the stack
I'm working with. And I had a period

124
00:09:24,799 --> 00:09:28,639
in my life where I just couldn't
stop, right? Because it's like, and then it

125
00:09:28,759 --> 00:09:31,039
was at first it was like one
am, two am, three am in

126
00:09:31,080 --> 00:09:37,919
the morning. And then luckily,
basically my only rule was that

127
00:09:37,960 --> 00:09:41,200
I will not extend my alarm clock
to a later point in the day,

128
00:09:41,679 --> 00:09:43,960
so otherwise my days would have shifted. But at some point I was like,

129
00:09:45,000 --> 00:09:46,559
Okay, that's it. I need
to change something in my life.

130
00:09:46,919 --> 00:09:52,480
So basically I'm switching off the internet
around midnight at my whole house. Oh, and that's

131
00:09:52,519 --> 00:09:58,480
when you have to hack your router.
So exactly, because usually when you're

132
00:09:58,559 --> 00:10:01,879
when you're like in the middle of a
thing and it's, I should Google this, Bing this,

133
00:10:01,080 --> 00:10:05,080
or whatever you're using, right,
you're like, ah, the internet doesn't

134
00:10:05,080 --> 00:10:07,639
work anymore. Ah, it has
to wait until tomorrow, and then I switch off

135
00:10:07,679 --> 00:10:11,240
and just go to bed. So
yeah, I never worked that way for

136
00:10:11,360 --> 00:10:13,080
me. I had to turn the
bugger back on. I put the same rule

137
00:10:13,120 --> 00:10:18,240
in place because I had teenage daughters. And now you hear the audible groans

138
00:10:18,279 --> 00:10:20,600
at midnight, right, it's like, oh, yeah, everybody's in bed,

139
00:10:20,879 --> 00:10:24,039
sure they are. But me,
it's like, I guess I've got to,

140
00:10:24,120 --> 00:10:26,960
you know, finish up, go
to push the code, and it fails. Yeah,

141
00:10:28,080 --> 00:10:31,120
you're like, uh, it's me. I got to fix the router.

142
00:10:31,360 --> 00:10:37,559
So you're a performance wonk, are you? Well, actually, I work

143
00:10:37,639 --> 00:10:43,279
for a company called Particular Software.
I guess you had Udi Dahan on the podcast

144
00:10:43,960 --> 00:10:48,399
before, right? So you know,
we had a few Particular guests and a

145
00:10:48,440 --> 00:10:52,559
few others, and yeah. So,
basically my day-to-day job is I'm

146
00:10:52,879 --> 00:10:58,720
building robust and reliable frameworks and
libraries for people that sort of want to

147
00:10:58,759 --> 00:11:03,639
sort of build distributed systems, primarily
based on messaging stuff like Azure Service

148
00:11:03,679 --> 00:11:09,840
Bus, SQS, SNS, storage
queues, and in the old days, God forbid,

149
00:11:09,960 --> 00:11:13,519
MSMQ. Right, but it's still
it's still out there thriving. Surprise,

150
00:11:13,559 --> 00:11:20,600
surprise, It's quite heavily used there
in the industry. Yeah, and it's

151
00:11:20,600 --> 00:11:24,279
one of the great things is it's feature
complete, right? It's just there

152
00:11:24,360 --> 00:11:28,919
if you're still running on Windows and
not on Linux like I do. And

153
00:11:35,639 --> 00:11:39,320
one of the things that we do
there is we want to make sure that

154
00:11:39,840 --> 00:11:43,559
the customers that are using NServiceBus
can focus on just writing their

155
00:11:43,600 --> 00:11:48,600
business code, don't need to write
any plumbing code, and that stuff should

156
00:11:48,679 --> 00:11:56,000
run as efficiently as possible. Right, So I guess performance throughput was always

157
00:11:56,039 --> 00:11:58,879
sort of front and center, sort
of, in my day-to-day job.

158
00:12:00,039 --> 00:12:05,240
But we also care a lot about
it because, I believe, especially as it came

159
00:12:05,320 --> 00:12:07,879
out in the comment that you
read out, Richard. It's like, if

160
00:12:07,879 --> 00:12:11,279
you're targeting the cloud, or if
you're shifting the cloud, or if you're

161
00:12:11,320 --> 00:12:18,039
already are in the cloud, sort
of, you are basically billed by the amount

162
00:12:18,039 --> 00:12:22,399
of resources that you're using in the
cloud. Yeah, so there's a direct

163
00:12:22,720 --> 00:12:28,320
revenue relationship to that consumption, which
gives you some kind of incentive.

164
00:12:28,960 --> 00:12:31,080
Yeah. Correct, You put down
your credit card and then you get

165
00:12:31,120 --> 00:12:37,120
surprised at the end of the month. Surprising yourself is one thing.

166
00:12:37,240 --> 00:12:41,399
Surprising the CFO is another. That's
a very loud noise from a large

167
00:12:41,440 --> 00:12:46,720
office. Yeah, that's the question. I mean, there's so many ways

168
00:12:46,720 --> 00:12:50,799
to tweak performance. One is just
by using the latest .NET stack, correct, and

169
00:12:50,840 --> 00:12:56,840
then, you know, keeping your NuGet
packages updated. But on top of

170
00:12:56,879 --> 00:13:00,000
that, you know, what
kind of knobs are you pulling?

171
00:13:00,200 --> 00:13:03,399
Are you pulling software knobs, hardware
knobs? All of the above. So

172
00:13:03,480 --> 00:13:07,600
let me, before I answer your
question, let me quickly go back to what

173
00:13:07,639 --> 00:13:11,879
you said about updating the
.NET version. That's actually really

174
00:13:11,960 --> 00:13:18,600
interesting, because Microsoft has this blog series
where they essentially talk about their teams migrating

175
00:13:18,639 --> 00:13:24,120
to, for example, from .NET
Framework to newer .NET versions, or from

176
00:13:24,159 --> 00:13:26,759
.NET 6 to .NET 8. And
one of the cool things they blogged there

177
00:13:26,840 --> 00:13:35,279
is, the Microsoft Teams infrastructure team, they basically migrated from .NET Framework

178
00:13:35,440 --> 00:13:41,720
to .NET 6, and just by
basically migrating to that LTS version of

179
00:03:41,759 --> 00:03:46,399
.NET, they were actually able to
sort of, by almost twenty-four percent, reduce

180
00:13:46,480 --> 00:13:50,440
their monthly cost expenditure in Azure,
which is pretty amazing if you think

181
00:03:50,519 --> 00:03:54,759
about that, right. And
that's definitely one way to do it.

182
00:13:54,799 --> 00:13:58,000
So I always encourage people to sort
of if they can stay up to date

183
00:13:58,120 --> 00:14:03,559
with the latest .NET
versions. Definitely. Yeah, I'm a

184
00:14:03,559 --> 00:14:07,240
big fan of that series on the
.NET blog, just because, you know,

185
00:14:07,440 --> 00:14:11,720
you talk about like the Teams guys
migrating to a new version of

186
00:14:11,799 --> 00:14:15,840
.NET and the benefits they got from
it, and also the things they struggled

187
00:14:15,879 --> 00:14:18,159
with on that, like what problems
they had. But to me, more

188
00:14:18,159 --> 00:14:22,159
than anything, it's like, hey, you know, if these guys got

189
00:14:22,200 --> 00:14:26,200
this kind of benefit and were able
to move that big an app, you're going

190
00:14:26,279 --> 00:14:30,759
to be okay, like you can
do it. Yeah, absolutely. But

191
00:14:31,200 --> 00:14:33,559
to come back to your question,
Carl, I think one of the things

192
00:14:33,559 --> 00:14:37,080
that I try to apply to sort
of in my thinking is I want to

193
00:14:37,120 --> 00:14:43,480
make explicit trade-offs as I'm going
with things, and so that means I

194
00:14:43,519 --> 00:14:46,759
want to be aware of sort of
is this code going to be executed on

195
00:14:46,799 --> 00:14:50,360
the hot path or at scale,
right, so, and how many times

196
00:14:50,399 --> 00:14:54,320
a second is that going to be
or is it just something that runs on

197
00:14:54,360 --> 00:14:58,879
a sort of a background job once
a day or twice a day. Because

198
00:14:58,960 --> 00:15:03,840
then usually if it's just executed once
or twice a day, it doesn't really

199
00:15:03,879 --> 00:15:09,240
matter that much whether it's super fast
or not. But then when it's executed

200
00:15:09,279 --> 00:15:15,600
on the hot path, potentially hundreds or
thousands of times per second, then it's usually

201
00:15:15,600 --> 00:15:20,919
good to sort of become more
performance aware. But performance aware, unfortunately,

202
00:15:22,039 --> 00:15:24,480
this is also something that gets thrown
around quite a lot sort of in

203
00:15:24,519 --> 00:15:30,519
the industry, and then everyone assumes
everyone knows what performance awareness means, right.

204
00:15:30,919 --> 00:15:33,879
But one of the things that I
struggled with was where should I even

205
00:15:33,919 --> 00:15:39,960
get started to become performance aware?
Because apparently, if you go down sort of

206
00:15:39,000 --> 00:15:46,440
the literature of benchmarking and
performance optimizations, you can actually go from

207
00:15:46,639 --> 00:15:52,080
just doing little things up to setting
up your entire CI/CD pipeline with dedicated hardware

208
00:15:52,720 --> 00:15:58,840
doing regression testing. Right, But
usually we don't start there. Usually we

209
00:16:00,120 --> 00:16:03,799
start somewhere else. And that's one
of the things that I'm trying to apply.

210
00:16:03,879 --> 00:16:06,960
So usually I ask myself a bunch
of questions when I look at the

211
00:16:07,000 --> 00:16:10,440
code, right? So, for
example, I go and look for,

212
00:16:11,120 --> 00:16:15,039
well, what could be the CPU
and memory characteristics? What could that

213
00:16:15,080 --> 00:16:18,759
be for the specific line that I'm
looking at that I know is on the

214
00:16:18,799 --> 00:16:22,759
hot path? And then I usually
start thinking about, so, are there

215
00:16:22,759 --> 00:16:25,960
any sort of low-hanging fruit that
I can sort of apply to this,

216
00:16:26,240 --> 00:16:30,639
maybe do some more efficient string splitting
options and stuff like that that I know

217
00:16:30,799 --> 00:16:36,559
from reading the performance blog posts,
and I can apply there and that is

218
00:16:36,600 --> 00:16:40,480
And I presume that's Stephen Toub's posts? Yes, of course, yeah,

219
00:16:40,600 --> 00:16:45,080
the book, the Book of the
Toub, right? Yeah, exactly.
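A concrete instance of the low-hanging fruit being discussed here, of the sort those performance posts cover, is replacing `string.Split` with span-based scanning on a hot path. This is a minimal, illustrative C# sketch; the method names are invented, not from the show or from any Particular code:

```csharp
using System;

static class SpanSplit
{
    // string.Split allocates an array plus one substring per segment --
    // harmless when called once, real garbage at thousands of calls per second.
    public static int CountSegmentsAllocating(string line) =>
        line.Split(',').Length;

    // The span-based version walks the same characters without allocating.
    public static int CountSegments(ReadOnlySpan<char> line)
    {
        int count = 1;
        int comma;
        while ((comma = line.IndexOf(',')) >= 0)
        {
            count++;
            line = line.Slice(comma + 1);
        }
        return count;
    }
}
```

Both return the same answer; the difference only shows up in the allocation profile, which is why the profiler-driven workflow discussed in this conversation matters.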

220
00:16:45,759 --> 00:16:48,840
And then most of the time,
right, so there are a few tricks

221
00:16:48,879 --> 00:16:52,960
you can apply. So, for
example, if you're allocating a byte array,

222
00:16:52,279 --> 00:16:56,799
right, and you know that you
actually just need it for every iteration.

223
00:16:57,039 --> 00:17:00,679
What you can do is you can
sort of move that away from the

224
00:17:00,679 --> 00:17:04,640
hot path, allocate it once and clear it,
and then you no longer have allocations that

225
00:17:04,680 --> 00:17:10,839
you're executing like on the hot path, and then the garbage collection doesn't have

226
00:17:10,920 --> 00:17:14,759
to clean up a lot of things, and then things get better

227
00:17:14,759 --> 00:17:18,720
as well. But then how
do you measure that, Daniel? Like,

228
00:17:19,000 --> 00:17:22,440
how do you know it got better?
Like, that's a tricky one,

229
00:17:22,720 --> 00:17:26,720
the impact of garbage collection. Like,
how would I measure that I reduced
230
00:17:26,759 --> 00:17:30,240
the amount of garbage collection? Yes, that's a very good

231
00:17:30,319 --> 00:17:37,559
question. So usually what I do
is, before I, so I call this the performance

232
00:17:37,599 --> 00:17:41,119
loop. So, for lack of
a better term, I call it the performance

233
00:17:41,160 --> 00:17:45,839
loop. What I usually start
with is, when I have a hypothesis

234
00:17:45,880 --> 00:17:52,799
about a piece of code, that it
creates garbage, then

235
00:17:52,880 --> 00:17:57,400
what I do is I write
a test harness, and essentially what that

236
00:17:57,519 --> 00:18:02,559
harness sort of does is it
takes whatever I'm looking at, puts

237
00:18:02,559 --> 00:18:07,519
it into the specific context of my suspicions, and then it executes it in that

238
00:18:07,559 --> 00:18:14,200
specific scenario. And then I attach
profilers to that piece of code. And

239
00:18:14,720 --> 00:18:19,079
what I usually do is I take
at least a memory snapshot and also create

240
00:18:19,480 --> 00:18:25,599
a CPU snapshot. And of course, if you have an IO-bound system,

241
00:18:26,079 --> 00:18:29,920
like you have a database in
place, or you have HTTP calls, stuff
242
00:18:29,960 --> 00:18:33,000
like that, you also want to
look at your, you want to do IO-

243
00:18:33,119 --> 00:18:37,640
based profiling as an example, right, because usually when you look at your

244
00:18:37,640 --> 00:18:41,920
IO system like the database, you
can basically achieve orders of magnitude of performance

245
00:18:41,960 --> 00:18:48,640
improvement by tweaking your SQL queries,
before you even start thinking about memory allocations

246
00:18:48,680 --> 00:18:52,119
and stuff like that. But assuming
you have sort of removed that part,

247
00:18:52,640 --> 00:18:59,720
then essentially those sort of profiler snapshots
give you an indication of where could you

248
00:18:59,759 --> 00:19:04,240
focus on. And here comes the next
problem when you attach your profiler:

249
00:19:04,720 --> 00:19:11,359
you might see lots and lots of
allocations from different subsystems and components on that

250
00:19:11,440 --> 00:19:17,519
specific call tree, and where should you even
start? So I usually try to sort

251
00:19:17,519 --> 00:19:23,799
of apply a combination of, I call
it, the one percent improvement philosophy versus deliberate

252
00:19:23,960 --> 00:19:32,119
contextual-based optimizations that
I want to do. Because, for example,

253
00:19:32,200 --> 00:19:37,000
I believe that if you do enough
little performance optimizations over time, that's

254
00:19:37,000 --> 00:19:42,559
the one percent improvement sort of philosophy, then eventually they will end up making

255
00:19:42,599 --> 00:19:47,359
a big impact. And we can
see that Microsoft applies it as well to

256
00:19:47,400 --> 00:19:49,559
the .NET runtime. Right, they're doing lots and lots and lots

257
00:19:49,559 --> 00:19:55,279
and lots of small changes all over
the place, and the compounding effect of

258
00:19:55,359 --> 00:19:59,920
these changes they essentially are massive when
you look at them in sort of the

259
00:20:00,039 --> 00:20:03,720
greater scheme of things, right,
And what profiling tools are you using here?

260
00:20:03,839 --> 00:20:07,599
Is this just like the built-in
profiler? Okay, you made a

261
00:20:07,680 --> 00:20:15,440
joke about me. You're running on
Linux, right? So, well,

262
00:20:15,519 --> 00:20:21,160
so I'm a big fan of the
JetBrains tools I've been using

263
00:20:21,400 --> 00:20:26,799
for years now. So I'm
usually using dotTrace and dotMemory, sort

264
00:20:26,799 --> 00:20:33,640
of those two tools. And Rider
also has some built-in analysis, so

265
00:20:33,680 --> 00:20:37,720
for example, they do when you
execute your tests or you execute your solution.

266
00:20:37,839 --> 00:20:41,039
They also do some dynamic program analysis
where they show you sort of the

267
00:20:41,079 --> 00:20:45,920
allocations that your stuff had, or the
CPU that it wasted. So I

268
00:20:47,000 --> 00:20:51,400
try to use those tools, but
primarily dotTrace and dotMemory, to get

269
00:20:51,599 --> 00:20:56,240
an overview of what is
actually going on, right? And

270
00:20:56,759 --> 00:21:00,599
I mean, yeah, there are
built-in profilers, but if you're willing

271
00:21:00,640 --> 00:21:04,079
to pay for one, there are
better ones. Yeah, I mean,

272
00:21:04,400 --> 00:21:07,160
to be fair, Visual Studio
is great, right? Like, Visual

273
00:21:07,200 --> 00:21:11,880
Studio has depending on the license,
I'm not entirely familiar with the licensing terms

274
00:21:11,920 --> 00:21:17,400
there, but it has great tooling
built in. Or if you are sort

275
00:21:17,400 --> 00:21:22,240
of very advanced and mostly on Windows, you can also use PerfView, right?

276
00:21:22,319 --> 00:21:26,960
So PerfView is a very powerful tool
that you can use, although I

277
00:21:27,000 --> 00:21:33,079
struggle with it a bit, I
must say. Every time, it's like using WinDbg.

278
00:21:33,440 --> 00:21:36,720
Every time I use these tools,
I have to sort of get some

279
00:21:36,839 --> 00:21:41,200
cheat sheets onto my machine in order
to remember the complex commands. And that's

280
00:21:41,559 --> 00:21:45,200
one of the things with those two.
They are tools you need to learn,

281
00:21:45,440 --> 00:21:49,519
and I mean I suspect you use
them more than most people. And if

282
00:21:49,519 --> 00:21:56,359
you can't keep them in your head, then nobody can. I mean,

283
00:21:56,400 --> 00:21:59,519
I've done a lot of performance tuning
over the years, and people are always surprised.

284
00:21:59,519 --> 00:22:00,680
It's like, why are you reading
the docs? Like, don't you know

285
00:22:00,759 --> 00:22:04,839
this? It's like, listen,
there's a lot of knobs on these things,

286
00:22:04,920 --> 00:22:07,720
and if you don't go through the
steps you can waste a lot of

287
00:22:07,759 --> 00:22:11,799
time, yes, and actually wasting
a lot of time. That's actually a

288
00:22:11,920 --> 00:22:15,359
very good sort of comment
that you made there, because I think

289
00:22:15,440 --> 00:22:22,759
even if you are more
familiar with performance optimizations and benchmarking, it's

290
00:22:22,799 --> 00:22:26,000
like and profiling at the end of
the day, that's not my day job.

291
00:22:27,319 --> 00:22:33,680
My day job is building robust and
reliable messaging frameworks and middlewares and the

292
00:22:33,759 --> 00:22:37,599
platform at Particular, and not doing
performance optimizations all day long. That's not

293
00:22:37,680 --> 00:22:41,480
my job description, right, And
I guess that many people that are also

294
00:22:41,519 --> 00:22:45,680
listening to this podcast have the same
thing, right? They would like to

295
00:22:45,759 --> 00:22:52,480
dive into profiling and benchmarking performance optimizations, but they only have a limited budget

296
00:22:52,920 --> 00:22:56,480
in order to spend on those types
of things. And that's why I always

297
00:22:56,519 --> 00:23:03,559
recommend: start with a test harness,
reproduce the scenario, attach your profiler,

298
00:23:03,599 --> 00:23:07,799
and then use your domain knowledge of
the things that you're working on to basically sift

299
00:23:07,880 --> 00:23:12,079
through the noise of allocations and CPU
on the call stack, and then figure

300
00:23:12,079 --> 00:23:17,920
out Okay, probably here is where
we can make the biggest impact on sort

301
00:23:17,960 --> 00:23:23,000
of reducing the number of CPU cycles
spent or reducing the number of garbage allocations

302
00:23:23,039 --> 00:23:27,599
that are happening there. But sometimes, like I said before, it's also

303
00:23:29,240 --> 00:23:33,079
where you think you have the most
knowledge in and then applying sort of the

304
00:23:33,400 --> 00:23:37,319
one percent improvement over time in order
to sort of make things better and better

305
00:23:37,359 --> 00:23:45,160
and not trying to gold-plate
everything out of an existing code path.

306
00:23:45,400 --> 00:23:51,000
But I guess we now have sort
of touched a little bit on the how

307
00:23:51,000 --> 00:23:53,039
would I even know where to get
started? Right? We talked about profiling,

308
00:23:53,079 --> 00:23:57,759
We talked about doing CPU and memory at
least, right, always get sort of two

309
00:23:57,839 --> 00:24:03,039
views on the code base. But
the next thing is then I mean improvements.

310
00:24:03,559 --> 00:24:07,920
Of course, that goes more into
the territory of knowing your stack,

311
00:24:07,400 --> 00:24:11,640
knowing your language, knowing the libraries
that you work with. Like Carl said,

312
00:24:11,759 --> 00:24:15,720
also looking for, has the library a
new release that we can pull in,

313
00:24:15,920 --> 00:24:19,920
or maybe reach out to the maintainers
and say, hey, by the way,

314
00:24:21,079 --> 00:24:26,119
we did the profiling snapshot and we
found out that this library allocates that

315
00:24:26,240 --> 00:24:30,720
much memory. And guess
what when you reach out with a profiler

316
00:24:30,799 --> 00:24:37,000
snapshot to third party tooling providers,
they are like super happy because you're then

317
00:24:37,039 --> 00:24:41,759
in the one percent sort of customers
and then it's like, hey, now

318
00:24:41,759 --> 00:24:45,559
we have data from the customer that
we can see what's actually going on.

319
00:24:47,680 --> 00:24:52,240
You've now described a workload in a
meaningful way to them correct correct, right,

320
00:24:52,839 --> 00:24:59,119
And so for example, I myself
have done that as well. At

321
00:24:59,119 --> 00:25:03,839
some point I stumbled over sort of
memory inefficiencies in the Azure Service Bus

322
00:25:03,920 --> 00:25:10,000
SDK, and then I saw...
I had a hunch. I wrote the

323
00:25:10,039 --> 00:25:15,359
test harness, attached the profiler, and was able to
show that when you access the bodies of

324
00:25:15,400 --> 00:25:21,240
a service bus message, it allocates unnecessary
memory every time you essentially access the body.

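A minimal sketch of the harness-plus-profiler workflow Daniel describes. This is an illustration, not his actual code: `ProcessBody` is a hypothetical stand-in for the code path under investigation, and its `MemoryStream.ToArray` call plays the role of the avoidable allocation a memory profiler would surface.

```csharp
// Minimal test harness: reproduce one scenario in a tight loop so an
// attached memory/CPU profiler sees the hot path dominate the trace.
using System;
using System.IO;

class Harness
{
    static void Main()
    {
        var payload = new byte[64 * 1024]; // representative message size

        Console.WriteLine($"PID {Environment.ProcessId}: attach the profiler, then press Enter.");
        Console.ReadLine();

        for (int i = 0; i < 100_000; i++)
        {
            ProcessBody(payload);
        }

        Console.WriteLine("Done: take the profiler snapshot here.");
    }

    // Hypothetical stand-in for the code under investigation. ToArray copies
    // the whole buffer on every call, exactly the kind of repeated allocation
    // that shows up immediately in an allocation profile.
    static byte[] ProcessBody(byte[] payload)
    {
        using var stream = new MemoryStream(payload);
        return stream.ToArray();
    }
}
```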
325
00:25:21,799 --> 00:25:25,680
I was able to sort of show
the profiler snapshots, show the memory.

326
00:25:25,680 --> 00:25:29,480
And then I, even because I
was lucky, I already knew the library

327
00:25:29,480 --> 00:25:33,279
a little bit, I guess,
and I was able to contribute a fix

328
00:25:33,799 --> 00:25:40,279
for that memory allocation problem to the
Azure Service Bus. Yeah, that's a good

329
00:25:40,279 --> 00:25:49,960
way to get an email from Clemens. Yeah, yeah, I've

330
00:25:51,039 --> 00:25:55,799
had comments from Clemens originally on
some of my pull requests as well.

331
00:25:57,359 --> 00:26:00,400
And therein lies the
point, like you get into it.

332
00:26:00,400 --> 00:26:03,000
And the kicker, of
course, is that it's open source,

333
00:26:03,039 --> 00:26:07,160
so you could just contribute a fix
yes, yes, yeah, but I

334
00:26:07,160 --> 00:26:11,400
guess, I mean, nobody expects
you to write one, but at least having a

335
00:26:11,440 --> 00:26:15,039
memory profile or a CPU profile,
and like you said, Richard, showing

336
00:26:15,079 --> 00:26:18,279
what's going on in production, is
it. Yeah, I just think it's

337
00:26:18,279 --> 00:26:22,960
always challenging to get down to the
brass tacks like that. To me,

338
00:26:23,039 --> 00:26:27,319
most of my profiling experience has been
trying to optimize an e commerce site where

339
00:26:27,359 --> 00:26:30,079
it's like we're just you know,
we're running. We're now looking at

340
00:26:30,119 --> 00:26:34,680
buying more servers because the site's
so busy, like an optimization can mean

341
00:26:34,720 --> 00:26:41,920
a lot of money and the profiler,
I was an ANTS guy at the time,

342
00:26:41,599 --> 00:26:45,000
and that was the tool that showed
me. I mean, this may

343
00:26:45,039 --> 00:26:48,000
not have been. It was always that
balance between: this is a very complicated method

344
00:26:48,160 --> 00:26:52,240
and so it's consuming a lot of
resources, versus it's a very simple method,

345
00:26:52,359 --> 00:26:56,400
but it's called hundreds of thousands of
times, and so the fact that the

346
00:26:56,440 --> 00:27:03,960
tool would help sort out that weight
of often called, and so we thought

347
00:27:03,160 --> 00:27:10,279
minute optimizations would make big differences, versus
rarely called but complex enough that you will

348
00:27:10,279 --> 00:27:14,599
get some return on that, you
know. I'd never worried about optimizing

349
00:27:14,640 --> 00:27:18,559
admin calls because it just didn't
get called that often. But all of that

350
00:27:18,640 --> 00:27:26,200
mainstream shopping cart recommendation engine, you
know, custom render pieces, ad pieces

351
00:27:26,240 --> 00:27:29,680
like those, were all the things
where it's like these get called a lot

352
00:27:29,960 --> 00:27:33,920
even though they don't look that big, and just playing with string concatenation like

353
00:27:34,039 --> 00:27:40,279
those kinds of things made a huge
difference in the end. But the challenge

354
00:27:40,279 --> 00:27:41,240
I think for a lot of folks
is they just want to get into the

355
00:27:41,240 --> 00:27:47,319
code and this idea of you snap
the harness on first and get a baseline

356
00:27:47,359 --> 00:27:49,839
set of profiles in place, and
then as you said, the magic word,

357
00:27:49,880 --> 00:27:53,200
the hypothesis says, if we do
an optimization here, it'll make a

358
00:27:53,240 --> 00:28:00,359
difference. Now you go tinker,
then run the benchmarks again and it's,

359
00:28:00,400 --> 00:28:04,640
did we make a difference? And
if you didn't, revert. Yes,

360
00:28:06,200 --> 00:28:10,920
And because there's no performance code I've
ever written that was easier to read

361
00:28:10,920 --> 00:28:14,920
than the original. Yes, absolutely true. And I think what

362
00:28:15,319 --> 00:28:18,319
you said is super crucial. That
is sort of the performance loop that I

363
00:28:19,359 --> 00:28:25,519
that I apply. So when
you have the harness and then usually that

364
00:28:25,559 --> 00:28:27,880
reproduces this scenario right, and then
like you said, you do the improvements

365
00:28:27,920 --> 00:28:33,559
that might be several iterations of ideas
that you tinker around with, and then

366
00:28:33,920 --> 00:28:38,319
you might execute several benchmarks to sort
of look at those sort of optimizations that

367
00:28:38,359 --> 00:28:44,599
you're doing. Maybe it's even several
micro benchmarks sort of measuring sort of little

368
00:28:44,640 --> 00:28:48,720
improvements in that call stack that you
came up with during that tinkering phase.

369
00:28:48,240 --> 00:28:52,599
And then at the end what you
do is you bring it back into that

370
00:28:52,680 --> 00:28:56,000
harness, right, and then you
look at sort of the end to end

371
00:28:56,039 --> 00:29:02,200
profile again where you look at again
the CPU and memory profile at least actually

372
00:29:02,519 --> 00:29:04,599
to actually see the before and after
right, and then you see on your

373
00:29:04,640 --> 00:29:11,039
graphs, oh, we spend six
hundred and fifty megabytes of memory on that

374
00:29:11,359 --> 00:29:15,640
specific scenario before. Now we're at
six hundred. Now you know that you

375
00:29:15,759 --> 00:29:22,000
actually have gained something, and you
also have the numbers that from the benchmarks

376
00:29:22,000 --> 00:29:29,279
that you run against each individual part
of the callstack that you try to optimize.

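Alongside profiler snapshots, the end-to-end before/after totals Daniel mentions can be sanity-checked from inside the harness itself. A hedged sketch using the runtime's own allocation counter (`GC.GetTotalAllocatedBytes`, available since .NET Core 3.0); `RunScenario` is a placeholder for the code path under test.

```csharp
// Bracket the scenario with the runtime's allocation counter to get a quick
// before/after number; this complements, not replaces, a profiler snapshot.
using System;

class AllocationCheck
{
    static void Main()
    {
        long before = GC.GetTotalAllocatedBytes(precise: true);

        RunScenario(); // placeholder for the code path under test

        long after = GC.GetTotalAllocatedBytes(precise: true);
        Console.WriteLine($"Allocated: {(after - before) / (1024.0 * 1024.0):F1} MB");
    }

    static void RunScenario()
    {
        for (int i = 0; i < 10_000; i++)
        {
            var tmp = new byte[1024]; // stand-in workload
            GC.KeepAlive(tmp);
        }
    }
}
```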
377
00:29:30,200 --> 00:29:33,119
I have one question to you,
Richard. You said you use the

378
00:29:33,240 --> 00:29:37,000
ANTS profiler. Do you also happen
to sort of because I had a period

379
00:29:37,680 --> 00:29:41,759
where I used several tools because I
had sort of "I know it when I

380
00:29:41,799 --> 00:29:45,720
see it" type of investigations. So
I used several tools that have had different

381
00:29:45,759 --> 00:29:51,599
sort of dashboards and overviews that sometimes
gave me sort of a slightly different view

382
00:29:52,240 --> 00:29:56,039
based on the preferences of the tooling
against the test harness, and then I

383
00:29:56,079 --> 00:30:00,160
was like, oh, there it
is. Yeah, I mean different problems

384
00:30:00,160 --> 00:30:04,359
in different spaces. And granted a
lot of my experiences are from a while

385
00:30:04,440 --> 00:30:08,960
ago where there wasn't as many tools
as there are today, but yeah,

386
00:30:10,079 --> 00:30:15,720
definitely there's a difference between tweaking a
piece of code that you know sits in

387
00:30:15,759 --> 00:30:19,119
a call stack for a web page
and understanding a sort of end to end

388
00:30:19,200 --> 00:30:23,799
run where it's like, oh,
the real problem here is that there's a

389
00:30:23,839 --> 00:30:29,759
repeated call to a database enough that
it's doing a reauthenticate in the middle,

390
00:30:30,039 --> 00:30:33,359
or it's forcing a recompile of a
stored procedure. By the way, on

391
00:30:33,440 --> 00:30:37,599
the day you find one of those
from a method call and you got all

392
00:30:37,599 --> 00:30:40,720
the way down to but we call
it this many times and so forces is

393
00:30:40,799 --> 00:30:45,240
recompiled and that creates this overhead like
that's a very good day because those are

394
00:30:45,359 --> 00:30:49,400
hard to find, like just a
tough place get too, but you know

395
00:30:49,559 --> 00:30:56,640
your point's well taken.
Each tool provides its own view into that,

396
00:30:56,880 --> 00:31:00,319
and we ended up, I think it
was Dynatrace, where we were only

397
00:31:00,359 --> 00:31:06,480
able to see, ah, this is a
multiple database interaction problem, before we really saw

398
00:31:06,559 --> 00:31:11,400
the behavior correctly. And guys,
I want to pause for just a few

399
00:31:11,440 --> 00:31:18,279
moments for these very important messages,
and we're back. It's dot net rocks.

400
00:31:18,279 --> 00:31:22,640
I'm Carl Franklin, that's Richard Campbell, hey, and that's Daniel Marbach.

401
00:31:22,799 --> 00:31:30,240
We're talking about performance, squeezing performance
out of our applications. And Daniel,

402
00:31:30,319 --> 00:31:34,039
right before the break, you were
going to make a point about memory

403
00:31:34,079 --> 00:31:40,319
allocations. Memory allocations. Yeah,
So what I wanted to say is I

404
00:31:40,880 --> 00:31:44,480
feel like I need to sort of
clarify one thing because I talked a lot

405
00:31:44,519 --> 00:31:48,880
about sort of memory allocations. I
also sort of highlighted a little bit the

406
00:31:48,279 --> 00:31:52,720
CPU stuff, right, But people
might get sort of the message that all

407
00:31:52,759 --> 00:31:59,319
that matters is memory allocations, and
I definitely don't want to say that way

408
00:31:59,599 --> 00:32:02,440
because I just feel like for me, I've always sort of started looking at

409
00:32:02,440 --> 00:32:07,720
memory allocations because I've seen that these
are the areas in the applications that I

410
00:32:07,839 --> 00:32:10,839
worked with and the systems that I
worked with where I can make the sort

411
00:32:10,880 --> 00:32:16,400
of the biggest impact to reduce the
GC overhead without sort of going into sort

412
00:32:16,400 --> 00:32:22,079
of the algorithmic complexity and stuff like
that that sometimes comes with tweaking algorithms where

413
00:32:22,240 --> 00:32:28,440
CPU cycles are spent. And I
remember, I don't know the exact quote,

414
00:32:28,440 --> 00:32:31,920
but David Fowler once sort of tweeted
or is it still called tweet,

415
00:32:31,960 --> 00:32:38,880
I don't know, but he shouted
into the interwebs that essentially, apparently memory

416
00:32:38,920 --> 00:32:43,960
stream ToArray and other sort of
ToArray calls are still the biggest source of

417
00:32:44,039 --> 00:32:49,559
memory allocations in dot net systems out
there, which kind of shows how important

418
00:32:50,000 --> 00:32:55,559
sort of thinking about memory allocations actually
and certainly in this case of scale that

419
00:32:55,680 --> 00:32:59,039
you know, when we're dealing with
lots of iterations. Again, I come

420
00:32:59,079 --> 00:33:01,880
from the e commerce space. The
other thing I ran into was we typically

421
00:33:01,920 --> 00:33:07,279
had to build our load tests to
run for longer because you needed to get

422
00:33:07,359 --> 00:33:16,279
into multi-generational memory to actually understand
behavior in production. Lighting up a

423
00:33:16,319 --> 00:33:22,240
load test that ran for ten minutes
and wrapped up didn't give you the same

424
00:33:22,400 --> 00:33:28,759
results as what was actually happening with
your server, which was two days in.

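The multi-generational behavior Richard describes can be made visible during a long harness run without extra tooling. A sketch, assuming a simple sampling loop is acceptable; `GC.CollectionCount` and `GC.GetTotalMemory` are standard .NET APIs.

```csharp
// Log per-generation collection counts once a minute during a long-running
// load test; Gen 2 activity and heap growth only become meaningful hours in.
using System;
using System.Threading;

class GcWatcher
{
    static void Main()
    {
        for (int minute = 0; ; minute++)
        {
            Console.WriteLine(
                $"{minute,4} min  gen0={GC.CollectionCount(0)}  " +
                $"gen1={GC.CollectionCount(1)}  gen2={GC.CollectionCount(2)}  " +
                $"heap={GC.GetTotalMemory(forceFullCollection: false) / 1_048_576} MB");
            Thread.Sleep(TimeSpan.FromMinutes(1));
        }
    }
}
```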
425
00:33:29,160 --> 00:33:34,039
Because the way that memory gets allocated
over multiple generations became a huge part

426
00:33:34,039 --> 00:33:37,240
of the problem not that we had
to wait two days, but often we

427
00:33:37,279 --> 00:33:40,920
had to go a couple of hours
before you actually get that Gen two,

428
00:33:42,000 --> 00:33:46,720
Gen three reshuffling a memory enough to
say this is fully fragmented and restacked memory

429
00:33:46,720 --> 00:33:52,920
a few times, and that's when
those orphan long duration objects created problems for

430
00:33:52,039 --> 00:33:55,000
us. It was like, there's
this chunk of memory in the middle of

431
00:33:55,039 --> 00:33:59,559
the pool and it's been there for
two hours, and what the hell is

432
00:33:59,559 --> 00:34:05,359
that? Screwing up every GC. But
you only found that from these longer runs,

433
00:34:05,559 --> 00:34:09,760
Daniel, is there anywhere that replacing
a task with a value task will

434
00:34:09,800 --> 00:34:14,800
be a problem. I mean,
that's a common way to reduce gen zero

435
00:34:15,000 --> 00:34:21,000
allocations is to use value tasks.
Value task is definitely an interesting, sort

436
00:34:21,039 --> 00:34:27,639
of a newer ish addition to dot
net. I know that this is a

437
00:34:27,639 --> 00:34:30,719
little bit of a controversial topic because
some people are like, yeah, you

438
00:34:30,719 --> 00:34:35,599
should be using value task everywhere.
I'm more sort of in the camp of

439
00:34:35,920 --> 00:34:39,599
using it where it was designed for, which is for io bound paths where

440
00:34:39,639 --> 00:34:45,360
you essentially have the majority of the
calls sort of getting a cached value,

441
00:34:46,679 --> 00:34:52,719
and then only like I don't know, out of ten calls maybe one or

442
00:34:52,760 --> 00:34:57,079
two are actually doing the actual operation. That's where I feel like value task

443
00:34:57,199 --> 00:35:00,880
is sort of a new sort of
approach to sort of make sure that you

444
00:35:01,199 --> 00:35:07,800
don't have that many allocations anymore.
But to be honest, I think in

445
00:35:07,960 --> 00:35:12,559
most systems, when you have to
sort of do that type of optimizations,

446
00:35:12,639 --> 00:35:17,920
then you're already super far because I
bet many sort of applications systems that they're

447
00:35:19,000 --> 00:35:22,599
running on top of maybe ASP dot
net Core or some others, they have

448
00:35:22,719 --> 00:35:30,400
other problems like memory stream ToArray, unnecessary byte array allocations, stringifying stuff that

449
00:35:30,480 --> 00:35:35,079
doesn't need to be stringified, all
sorts of that stuff. Instead of really

450
00:35:35,480 --> 00:35:39,639
thinking about switching from task to value
task, how about switching from web API

451
00:35:39,800 --> 00:35:52,400
to gRPC. It's a good one. Yeah, I mean even for example,

452
00:35:52,440 --> 00:35:58,440
switching from Newtonsoft Json to System
Text Json, right, or using source generated

453
00:35:59,079 --> 00:36:01,719
I'm sorry, which ones faster?
Are you going to touch that? No?

454
00:36:01,920 --> 00:36:06,440
Let's not go there, let's not
well, yeah, I mean you

455
00:36:06,480 --> 00:36:10,559
open the door, you might as
well walk through it. I would much

456
00:36:10,679 --> 00:36:15,719
rather talk about a little Swiss
cheese or something like that than talking about

457
00:36:15,760 --> 00:36:20,199
that. Well, I mean,
we don't want the listeners to get the

458
00:36:20,480 --> 00:36:24,599
impression that just by switching from Newtonsoft
to System Text Json, you're going

459
00:36:24,679 --> 00:36:30,760
to find a performance improvement. Did
you really mean that? So we actually

460
00:36:31,239 --> 00:36:39,159
have seen, when we switched from Newtonsoft
Json, across the board quite hefty improvements,

461
00:36:39,800 --> 00:36:47,679
especially when the System Text Json
use is combined with source generated approaches, where

462
00:36:47,760 --> 00:36:52,119
you also sort of are more AOT
friendly. As an example, you have

463
00:36:52,199 --> 00:36:59,480
far faster startup times on Azure
Functions or AWS Lambda. So there is

464
00:36:59,519 --> 00:37:04,519
definitely merit to that.
But I don't want to

465
00:37:04,559 --> 00:37:07,599
say Newtonsoft Json is bad.
I think it has its place. Well,

466
00:37:07,679 --> 00:37:12,559
the same guy's working on System Text
Json. I mean it's James Newton

467
00:37:12,639 --> 00:37:15,400
King, yeah exactly, and gRPC
Web, right, right, exactly. Yeah,

468
00:37:16,280 --> 00:37:20,800
being part of that new version of
dot net and integrated with that team.

469
00:37:20,920 --> 00:37:23,079
Like there's a lot of performance people
there. You just have the resources.

470
00:37:23,239 --> 00:37:28,079
Yeah. Like if the Eye of
Sauron that is Stephen Toub is paying

471
00:47:28,119 --> 00:47:31,360
attention to your code, your
code is going to be faster. Yeah.

472
00:37:31,440 --> 00:37:35,880
Yeah, that guy's amazing. But
again, I mean, if if

473
00:37:35,920 --> 00:37:38,559
you if you have your harness and
you attach it and do some profiling,

474
00:37:38,599 --> 00:37:45,360
and you find out that the serialization
subsystem is actually the problem that makes everything

475
00:37:45,440 --> 00:37:49,400
so much slower, then that change
makes sense. But I guess there are

476
00:37:49,440 --> 00:37:53,599
also other areas of improvement that you
can sort of leverage. But I would

477
00:37:53,679 --> 00:37:57,239
like to, if you don't mind, I would like to switch a little

478
00:37:57,239 --> 00:38:00,960
bit into sort of the benchmarking stuff
if we still have some time. Sure.

479
00:38:02,400 --> 00:38:08,280
Because Richard and I we talked about
this as well recently in Warsaw

480
00:38:08,280 --> 00:38:13,920
and in Porto about benchmarking stuff,
and I think that was a point where

481
00:38:13,920 --> 00:38:15,960
I said, all right, we
need to make a show about this.

482
00:38:17,760 --> 00:38:22,920
Yeah. So one of the things
that I found really interesting is when the

483
00:38:22,960 --> 00:38:29,880
first time I sort of got into
contact with benchmarking was I read lots of

484
00:38:29,920 --> 00:38:34,400
blog posts about benchmark net and sort
of I was looking at sort of these

485
00:38:34,880 --> 00:38:37,920
benchmarks out there, and it's like, ah, it's
easy. It's like a unit test,

486
00:38:38,039 --> 00:38:42,719
right, It's like I've written plenty
of unit tests with x units, god

487
00:38:42,719 --> 00:38:45,920
forbid MS test or whatever. Right, So it's like I know this,

488
00:38:45,079 --> 00:38:51,239
it's it's not going to be difficult. But I was quite surprised that essentially

489
00:38:52,679 --> 00:38:57,599
you're required to have a different understanding
in order to have a good benchmark.

490
00:38:57,639 --> 00:39:00,840
And one is, a unit test has
two states, right, it's either red

491
00:39:01,000 --> 00:39:07,039
or it's green, so pass or
fail. Right. But a benchmark is

492
00:39:07,079 --> 00:39:10,880
something really different because what a benchmark
does is essentially especially when you use a

493
00:39:10,920 --> 00:39:15,320
benchmark dot net, by the way,
excellent tool, shout out to all the

494
00:39:15,360 --> 00:39:22,480
people that have been involved there,
it's it's like you are executing a given

495
00:39:22,559 --> 00:39:28,599
scenario under hundreds and thousands of iterations, right, And so what it means

496
00:39:28,679 --> 00:39:32,719
is there is no pass or fail. You basically get sort of standard deviations,

497
00:39:32,760 --> 00:39:37,239
you get GC, you measure
the GC involvement and stuff like that.

498
00:39:38,039 --> 00:39:44,199
So that's what the benchmark is,
right. But it doesn't just start

499
00:39:44,239 --> 00:39:47,960
with understanding what the benchmark is.
It also goes to, how should I

500
00:39:49,000 --> 00:39:53,800
even put my code under a benchmark harness? Right? And that turned out to

501
00:39:53,840 --> 00:39:58,760
be the really tricky part because I
was only reading about this. Oh,

502
00:39:58,800 --> 00:40:04,480
here is a before and after comparison
between string concatenation, a string builder and

503
00:40:04,519 --> 00:40:07,960
the value string builder, which is
the fastest, right? Super easy scenario.

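The string concatenation versus StringBuilder comparison mentioned here is the classic beginner benchmark, and it shows the shape of a BenchmarkDotNet benchmark: no pass/fail, just means, standard deviations, and (with MemoryDiagnoser) allocations per operation. A minimal sketch:

```csharp
// Run with: dotnet run -c Release (BenchmarkDotNet refuses Debug builds).
using System.Text;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // adds Gen0 and allocated-bytes columns to the report
public class StringBenchmarks
{
    [Params(10, 100)] // each benchmark runs once per parameter value
    public int N;

    [Benchmark(Baseline = true)]
    public string Concat()
    {
        var result = "";
        for (int i = 0; i < N; i++) result += i;
        return result;
    }

    [Benchmark]
    public string Builder()
    {
        var sb = new StringBuilder();
        for (int i = 0; i < N; i++) sb.Append(i);
        return sb.ToString();
    }
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<StringBenchmarks>();
}
```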
504
00:40:08,400 --> 00:40:15,159
But when you're actually taking your production
system's code that you had under a

505
00:40:15,199 --> 00:40:17,679
test harness, and you sort of
filtered in and say, okay, there

506
00:40:17,719 --> 00:40:22,280
is a bunch of things that we
need to improve, and then you want

507
00:40:22,320 --> 00:40:25,320
to measure it before and after.
How do you take this code which is

508
00:40:25,360 --> 00:40:30,559
probably not just a public method on
a static class that you can take and

509
00:40:30,599 --> 00:40:35,320
then put into a benchmark. How
do you actually take all the sort of

510
00:40:35,360 --> 00:40:38,360
make sure that you can measure what's
going on, remove all the side effects

511
00:40:38,360 --> 00:40:43,199
that you don't want to have in
your code, so that you have reliable

512
00:40:43,400 --> 00:40:46,119
sort of benchmarking results, and that
you can compare the before and after.

513
00:40:46,199 --> 00:40:52,840
And that was the thing that I
struggled tremendously with, and I found

514
00:40:52,880 --> 00:40:57,320
my way sort of to do it, especially sort of when I was still

515
00:40:57,360 --> 00:41:01,840
sort of growing, before becoming
performance aware. And that was essentially I

516
00:41:01,880 --> 00:41:06,400
went with a very simple, simple
approach. I essentially took sort of the

517
00:41:06,480 --> 00:41:10,320
components that were sort of on that
code path. I usually copy pasted the

518
00:41:10,400 --> 00:41:16,719
code into a sort of a dedicated
source repository, stripped away all the unnecessary

519
00:41:16,760 --> 00:41:21,280
stuff. For example, when you
when I had an IOC container in place,

520
00:41:21,719 --> 00:41:24,159
I removed it and I added a new-up
of the thing that I wanted to

521
00:41:24,239 --> 00:41:28,679
new up. Or if I had
IO bound stuff and I didn't want to

522
00:41:28,719 --> 00:41:31,480
measure the IO bound stuff, I
replaced it with Task.CompletedTask,

523
00:41:31,880 --> 00:41:37,320
stuff like that, right, and
then I had sort of a dedicated code

524
00:41:37,320 --> 00:41:42,079
base that was in a specific state, in a controllable state where I knew

525
00:41:42,360 --> 00:41:45,320
all the noise that I
don't want to sort of measure is gone,

526
00:41:45,400 --> 00:41:52,000
and now I can focus on that
specific sort of benchmarking scenario that I'm

527
00:41:52,039 --> 00:41:57,280
looking at. But it is the
unit-test-style test. Effectively, it was

528
00:41:57,400 --> 00:42:00,599
like just bench this piece so that
I know I've gotten results from that.

529
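Daniel's isolation trick of replacing IO with Task.CompletedTask might look like this. The interface and names are hypothetical; the point is that the copied benchmark code depends on IO only through an abstraction, so a synchronously completing stub removes the IO noise from the measurement.

```csharp
using System.Threading.Tasks;

// Abstraction the copied code path already talks to.
public interface IMessageSender
{
    Task SendAsync(byte[] payload);
}

// Stub used only in the benchmark repository: completes synchronously,
// so the benchmark measures the code path's CPU and allocations, not IO.
public sealed class NoOpSender : IMessageSender
{
    public Task SendAsync(byte[] payload) => Task.CompletedTask;
}
```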
00:42:01,280 --> 00:42:07,280
I'd also say, as a corollary
to this, that you do get decreases

530
00:42:07,280 --> 00:42:13,079
and performance with later versions because you're
doing more and so often it's like,

531
00:42:13,239 --> 00:42:16,599
you know, the whole conversation with
the PM, because I've gotten into a situation

532
00:42:16,639 --> 00:42:22,039
where we literally had benchmarks as part
of the CICD pipeline. Now they're coming

533
00:42:22,039 --> 00:42:24,599
back and saying, hey, the
new version is slower than the old version.

534
00:42:24,599 --> 00:42:28,800
It's like the old version didn't do
all these things you asked for,

535
00:42:29,039 --> 00:42:34,239
Like this is the overhead for the
feature you asked for, and that was getting into

536
00:42:34,360 --> 00:42:37,159
SLA rules and things where it's like
the customer expects us to deliver this in

537
00:42:37,480 --> 00:42:40,400
X many fractions of a second, it's like, well, we're getting

538
00:42:40,440 --> 00:42:45,559
closer to the limit because the customer
asked to do more. Update the SLA.

539
00:42:46,159 --> 00:42:51,599
It's interesting that you mentioned that because
I think one of the sort of benefits

540
00:42:51,639 --> 00:42:54,679
of the approach that I just described is
that you can easily get started without

541
00:42:54,760 --> 00:43:00,280
even thinking about how can
we actually capture regression or where should we

542
00:43:00,400 --> 00:43:06,760
execute those tests? What is a
reliable CICD environment in order to have

543
00:43:06,960 --> 00:43:14,880
sort of measurable and statistically relevant results
from this environment, right, But what

544
00:43:14,920 --> 00:43:19,639
you're essentially talking about is sort of
more regression testing as well. And there's

545
00:43:19,679 --> 00:43:23,599
actually there's actually a lot of great
guidance a little bit hidden in the dot

546
00:43:23,639 --> 00:43:30,360
net Performance Repository. I think it
was also driven by Adam Sitnik and some

547
00:43:30,519 --> 00:43:36,519
other people from his team. I
actually talked to him a little bit

548
00:43:36,639 --> 00:43:39,599
over the course of I was preparing
for a talk about this specific topic.

549
00:43:39,639 --> 00:43:44,320
I talked to him a lot about
it, and they have a tool that

550
00:43:44,480 --> 00:43:46,920
essentially allows you to when you use
the benchmark dot net, what you can

551
00:43:46,960 --> 00:43:52,599
do is you can actually execute benchmark
dot net against the specific version of a

552
00:43:52,639 --> 00:43:57,000
benchmark and the specific version of the
code, you can store the artifacts,

553
00:43:57,599 --> 00:44:01,440
and then you can execute it again
against the changed version, store the artifacts,

554
00:44:01,440 --> 00:44:05,840
and then you can use the comparer
tool that they have to sort of

555
00:44:06,840 --> 00:44:09,920
sort of create a diff between the
before and after version, and then you

556
00:44:09,960 --> 00:44:16,519
can define a threshold that sort of
determines when it was an unacceptable sort of

557
00:44:17,119 --> 00:44:23,159
regression, and then for example,
you can fail your CICD environment

558
00:44:23,239 --> 00:44:28,880
because of performance, right. But CICD, it's also interesting because

559
00:44:29,119 --> 00:44:35,280
Andrey Akinshin wrote an excellent blog post
about this topic because he did as part

560
00:44:35,320 --> 00:44:37,360
of sort of investigations, he looked, for example, at GitHub Action

561
00:44:37,519 --> 00:44:46,000
runners and the result there is that
you essentially cannot use GitHub Action runners

562
00:44:46,639 --> 00:44:54,159
to actually do regression testing for performance
because they're just so unreliable. So basically

563
00:44:54,360 --> 00:45:05,800
you basically have up to three times
different sort of three times different execute memory

564
00:45:05,960 --> 00:45:12,519
and CPO difference between builds. Right, So it's it's insane. So wow,

565
00:45:12,559 --> 00:45:16,760
that's crazy. Yeah, yeah,
it's it's quite fascinating. Right.

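The before/after comparison workflow from the dotnet/performance repository looks roughly like the following. The exact paths and flags are from memory and may have changed; check the repository's ResultsComparer documentation before relying on them.

```shell
# 1. Run the benchmarks on the baseline code, exporting full JSON artifacts:
dotnet run -c Release -- --filter '*' --artifacts ./before

# 2. Check out the changed code and run the same benchmarks again:
dotnet run -c Release -- --filter '*' --artifacts ./after

# 3. Diff the two runs; a CI step can fail the build when any benchmark
#    regresses beyond the threshold:
dotnet run -c Release --project src/tools/ResultsComparer -- \
    --base ./before --diff ./after --threshold 2%
```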
566
00:45:16,800 --> 00:45:20,760
So that just means if
you're going to benchmark this way,

567
00:45:20,800 --> 00:45:22,920
you have to control your CICD pipeline
correct, right, and you have to

568
00:45:22,960 --> 00:45:30,039
have dedicated hardware that you sort of
put somewhere or rent, basically bare metal hardware,

569
00:45:30,079 --> 00:45:34,440
where you then have runner infrastructure
that sort of allows you to sort of

570
00:45:34,920 --> 00:45:38,039
put those tests on that reliable hardware. And you can't use... sure, I

571
00:45:38,039 --> 00:45:43,760
mean you're saying the cloud's not an
option here. Well, it is definitely, it

572
00:45:43,880 --> 00:45:46,280
is an option, right. Yeah, it's just you can't use

573
00:45:46,320 --> 00:45:50,079
the built in infrastructure for this.
You have to set it up. You have

574
00:45:50,079 --> 00:45:55,280
to write your own YAML. You
have done this. Like one of the

575
00:45:55,320 --> 00:45:59,119
one of the issues we were dealing
with is we were up to I don't

576
00:45:59,119 --> 00:46:02,000
know, three or four thousand different
web tests we wanted to do, and

577
00:46:02,079 --> 00:46:07,360
they took a day and we needed
under fifteen minutes, and so we would

578
00:46:07,440 --> 00:46:12,400
light up twenty instances of the site
in the cloud and parallelize all the tests.

579
00:46:12,800 --> 00:46:15,559
My goal was always by the time
you got back to your desk from

580
00:46:15,639 --> 00:46:20,280
coffee, the test results were back
and you had failed. Yeah, right,

581
00:46:20,360 --> 00:46:22,800
like that that was because instead it's
still in your head. Like what

582
00:46:22,880 --> 00:46:30,719
we realized was that every minute that
goes by after they've pushed the code, it's leaking

583
00:46:30,760 --> 00:46:35,320
from their mind. And
if it's a day, it could

584
00:46:35,360 --> 00:46:37,199
be anybody, like they're going to
have to start over picking it back up

585
00:46:37,239 --> 00:46:40,239
again. But if we could get
it to them in under an hour,

586
00:46:40,360 --> 00:46:45,320
like fifteen minutes was the magic number, they knew exactly what Oh, I

587
00:46:45,320 --> 00:46:46,519
know what that error is, and
off they'd go again, like it just

588
00:46:46,639 --> 00:46:52,800
saves so much remediation. Yeah,
so I jacked up the test bill because

589
00:46:52,800 --> 00:46:55,719
it saved the dev bill. Amazing. Yeah. So, and again I

590
00:46:55,719 --> 00:47:00,840
think it's all. When we
talk about Azure DevOps runners or GitHub

591
00:47:00,840 --> 00:47:05,760
Action runners, usually there is a
shared sort of infrastructure that you have there.

592
00:47:05,840 --> 00:47:08,239
And yeah, so it's just
not good for benchmarking. You don't

593
00:47:08,320 --> 00:47:13,480
know what performance you're going to get, no repeatable results, which obviously are

594
00:47:13,639 --> 00:47:17,440
necessary. But if you're talking pass
fail, that's fine. You don't care

595
00:47:17,440 --> 00:47:21,800
if it ran twice as long,
half as long. It's just pass fail.

596
00:47:21,840 --> 00:47:24,119
For functionality, that's just fine.
Correct, no big deal, yeah. But

597
00:47:24,159 --> 00:47:29,920
again my message here is I want
to hammer this home. I think becoming

598
00:47:30,000 --> 00:47:34,320
performance aware, I think it's a
journey and you don't need to basically end

599
00:47:34,400 --> 00:47:37,599
up where we just... oh, well, you will end up eventually there where

600
00:47:37,639 --> 00:47:42,360
we just talked about with having potentially
your own hardware if you need to do

601
00:47:42,400 --> 00:47:46,599
regression testing, but already having your
harness in place, understanding the profilers so

602
00:47:47,079 --> 00:47:52,559
that you can zoom in on where you
should actually make those sort of performance improvements.

603
00:47:52,880 --> 00:47:55,199
Then using a tool like benchmark dot
net, which saves you a lot

604
00:47:55,239 --> 00:48:00,199
of headache because it sort of
is already designed to mitigate almost

605
00:48:00,320 --> 00:48:04,360
all the stuff you'd end up writing
for yourself anyway, right? So just use it

606
00:48:04,400 --> 00:48:08,119
and a bunch of smart people are
working on it, correct, and it mitigates, and

607
00:48:08,159 --> 00:48:12,599
then you can start there, maybe
copy paste your code at the beginning,

608
00:48:12,719 --> 00:48:15,199
isolate the things that you want to
do, get started there, and then

609
00:48:15,320 --> 00:48:20,599
at a later point in time,
when your company is already sort of

610
00:48:20,719 --> 00:48:25,519
more performance aware, you can start
slowly introducing sort of more mature

611
00:48:25,800 --> 00:48:30,880
ways of actually doing performance testing and
regression testing all the way along. Yeah,

612
00:48:30,920 --> 00:48:34,960
but, you know, what you're
applying there is, like, you're pretty far

613
00:48:35,000 --> 00:48:37,880
down the path at that point too. Yes, to me. The big

614
00:48:37,920 --> 00:48:44,760
thing here is when does performance creep
into the requirements? Because a lot of

615
00:48:44,800 --> 00:48:46,960
folks you know, early days of
projects, it's just not even on the

616
00:48:47,039 --> 00:48:52,760
radar, right, but you know, to actually file performance problems as a

617
00:48:52,800 --> 00:48:55,880
bug to get them on the sprint, right, to be part of the

618
00:48:55,960 --> 00:49:00,079
conversation at all, Like that's already
that's arguably the starting point of any of

619
00:49:00,119 --> 00:49:06,360
that path is that it's bubbled up
to the point where business cares about it.

620
00:49:07,039 --> 00:49:08,239
You know, the line I used
to do when I did these talks

621
00:49:08,440 --> 00:49:12,800
was performance is like air. You
only care about it when you don't have

622
00:49:12,880 --> 00:49:20,239
it. Yeah. I'm a big
believer in having non functional requirements in the

623
00:49:20,280 --> 00:49:24,719
design and the architecture sort of
built in and having explicit discussions about non

624
00:49:24,760 --> 00:49:30,039
functional requirements, and also to prioritize them
with your business stakeholders in order to make

625
00:49:30,079 --> 00:49:34,440
the right trade offs. Well,
and you know the sneaky part about that

626
00:49:34,559 --> 00:49:37,719
is, let them tell you it
isn't important, so later when they decide

627
00:49:37,719 --> 00:49:42,880
it is important: but you said. You know, because again, performance

628
00:49:42,920 --> 00:49:45,159
is one of those things where nobody
cares about it until they do. Yes,

629
00:49:45,280 --> 00:49:47,159
yeah, right, if I suck
the air out of the room,

630
00:49:47,159 --> 00:49:52,960
you're suddenly really interested in air.
Absolutely, And it's the same thing.

631
00:49:52,960 --> 00:49:54,840
It's like you never thought about it.
You know, you can debate all day

632
00:49:54,920 --> 00:49:59,079
about a render time of two seconds versus
four seconds. I'm sorry, I'm so

633
00:49:59,159 --> 00:50:02,280
web centric on this stuff, and
that's not a big deal. Everybody knows

634
00:50:02,320 --> 00:50:07,760
that thirty seconds is bad, right, and so that's sort of these kinds

635
00:50:07,800 --> 00:50:13,440
of thresholds, and it's hard to
talk about that out of context.

636
00:50:13,760 --> 00:50:17,320
You kind of have to make a
slow page for everybody to start getting hey,

637
00:50:17,639 --> 00:50:22,920
slow page bad, set requirements for
minimum performance and figure out where they

638
00:50:22,920 --> 00:50:25,719
are. And the good news is,
of course there's lots of written documentation on

639
00:50:25,760 --> 00:50:29,679
it, like you don't have to invent it.
And one of the things that I also

640
00:50:29,760 --> 00:50:35,400
really appreciate about doing these types of
investigations and small improvements is you learn

641
00:50:35,440 --> 00:50:38,960
a ton about the code path in
question, and that gives you a lot

642
00:50:38,960 --> 00:50:43,840
of insights into a potential redesign in
the future, right, because so many

643
00:50:43,840 --> 00:50:46,000
people are just throwing out there,
just rewrite this, right, but they

644
00:50:46,000 --> 00:50:50,239
have to make it, make it faster. You're not going to make it better

645
00:50:50,360 --> 00:50:52,320
if you don't know what it does,
right? Like, yes, I

646
00:50:52,360 --> 00:50:57,639
also find that most folks who spend
time in the tuning part understand,

647
00:50:57,679 --> 00:51:00,159
you know, like the behavior
of the software at a deeper level than a

648
00:51:00,199 --> 00:51:04,000
lot of folks that wrote it in
the first place, because often you're just

649
00:51:04,039 --> 00:51:07,039
trying to get to the deliverable.
Does the feature meet the requirements that

650
00:51:07,079 --> 00:51:10,599
were there? I know we only
have a few minutes left, but at

651
00:51:10,599 --> 00:51:17,480
one point in your investigation, do
you consider re architecture, which is obviously

652
00:51:17,920 --> 00:51:25,159
the most expensive and risky way
to improve performance. But you know,

653
00:51:25,280 --> 00:51:30,559
I mean, as you're going through
a project or a tool or

654
00:51:30,559 --> 00:51:35,599
something and looking at every little thing
that you can squeeze out, and something

655
00:51:35,679 --> 00:51:39,039
jumps out at you, Oh,
well, you know this should be refactored

656
00:51:39,159 --> 00:51:45,960
or maybe even completely re architected.
How often does that happen? It's a

657
00:51:45,960 --> 00:51:50,000
difficult question to answer generically, but
I can give you a concrete example.

658
00:51:50,360 --> 00:51:53,760
So I wrote, well, I contribute
a lot to the Azure Service Bus

659
00:51:53,840 --> 00:51:59,639
.NET SDK, and essentially
I think it was sort of twenty

660
00:51:59,719 --> 00:52:04,960
pull requests on the sort of the
path where the sort of the Azure Service

661
00:52:04,960 --> 00:52:07,719
Bus sort of takes, you get the
byte arrays from Azure Service Bus and

662
00:52:07,760 --> 00:52:12,320
hand it over to Azure Functions or
to your code that is running. Where

663
00:52:12,599 --> 00:52:17,119
I did lots of lots of tiny
improvements until I actually understood sort of how

664
00:52:17,159 --> 00:52:23,320
the body management of the byte payloads
actually really really works. And then only

665
00:52:23,400 --> 00:52:29,079
then I came up with a better
idea of how to sort of manage that

666
00:52:29,159 --> 00:52:32,960
body work from different aspects, and
that then led to sort of even more

667
00:52:34,079 --> 00:52:39,119
orders of magnitude of improvement in how the
body is sort of managed, and fewer allocations,

668
00:52:39,119 --> 00:52:44,599
more efficient in CPU cycles. But
it was like I think, Okay,

669
00:52:44,639 --> 00:52:47,239
I contributed in my free time,
whatever that means these days when you

670
00:52:47,239 --> 00:52:51,880
were constantly online, right, But
I contributed in my free time, I

671
00:52:51,880 --> 00:52:57,679
guess, over a year to this
code base, until together with the team,

672
00:52:57,719 --> 00:53:00,840
we realized, oh, there are
there are actually things we can sort

673
00:53:00,840 --> 00:53:06,239
of really refactor and make things even
faster. So I guess I'm actually, I

674
00:53:06,280 --> 00:53:09,880
have the tendency to go a very
long time on a specific code path before

675
00:53:09,920 --> 00:53:16,159
I even reconsider re-architecting or redesigning. Of course, small improvements can also

676
00:53:16,199 --> 00:53:21,519
sort of sometimes mean you're not newing
up something, you're making something a singleton

677
00:53:21,599 --> 00:53:25,440
that previously wasn't a singleton, or something
like that, right, and if it's

678
00:53:25,440 --> 00:53:31,159
still not performant enough, that's when
you think about re architecture, because what's

679
00:53:31,199 --> 00:53:35,559
also great is right when it's running
in production, it's making money right,

680
00:53:35,719 --> 00:53:38,679
and it gives you insights about
what it's doing. And when

681
00:53:38,679 --> 00:53:45,079
you're re-architecting and redesigning, for that period
of time, you have no validation whether

682
00:53:45,159 --> 00:53:49,559
the stuff that you're changing towards will
work. And with the small improvements you

683
00:53:49,639 --> 00:53:53,400
have constant feedback loops. And I
think I feel that's super super important.

684
00:53:55,280 --> 00:53:59,880
So what's next for you? Man? What's in your inbox? Well?

685
00:54:00,280 --> 00:54:05,199
Christmas time of course, drinking way
too much craft beer probably over Christmas time,

686
00:54:05,239 --> 00:54:15,280
a delicious stout or something like that. What is this? But

687
00:54:15,400 --> 00:54:20,440
next year I will be at
dot Net Day Romania at the eating

688
00:54:20,599 --> 00:54:25,559
US conference. I will be delivering
a workshop about reliable messaging in Azure,

689
00:54:25,960 --> 00:54:30,840
sort of deep diving into Azure Service Bus, Storage Queues, Event Hubs

690
00:54:30,880 --> 00:54:36,440
and Event Grid. That's going to
be really interesting. And what else I

691
00:54:36,480 --> 00:54:40,440
don't know yet from a conference perspective, but definitely sort of increase a little

692
00:54:40,440 --> 00:54:45,599
bit more of my contributions to open
source stuff because I still have a few things

693
00:54:46,320 --> 00:54:52,039
to contribute to a few open source
libraries. Yeah, it sounds good,

694
00:54:52,079 --> 00:54:54,840
well, Daniel, thanks for spending
this hour with us. It's been great,

695
00:54:55,000 --> 00:54:59,639
Thank you all right, and we'll
see you next time on dot net

696
00:54:59,760 --> 00:55:24,199
Rocks. Dot net Rocks is brought
to you by Franklin's Net and produced by

697
00:55:24,280 --> 00:55:30,199
Pwop Studios, a full service audio, video and post production facility located physically

698
00:55:30,239 --> 00:55:36,199
in New London, Connecticut, and
of course in the cloud online at pwop

699
00:55:36,480 --> 00:55:39,320
dot com. Visit our website at
d O T N E t R O

700
00:55:39,400 --> 00:55:45,719
c k S dot com for RSS
feeds, downloads, mobile apps, comments,

701
00:55:45,719 --> 00:55:50,119
and access to the full archives going
back to show number one, recorded

702
00:55:50,159 --> 00:55:53,079
in September two thousand and two,
and make sure you check out our sponsors.

703
00:55:53,239 --> 00:55:57,920
They keep us in business. Now
go write some code. See you

704
00:55:57,960 --> 00:56:07,000
next time. [outro music]

