1
00:00:05,120 --> 00:00:10,240
What's going on, everybody? Welcome
back to another episode of Adventures in DevOps.

2
00:00:10,560 --> 00:00:14,919
I'm your host Will Button, but
I'm flying solo today. Warren is

3
00:00:15,039 --> 00:00:20,679
at the DevOps conference in Zurich,
so he'll be back with us next week.

4
00:00:21,000 --> 00:00:25,719
But meanwhile, joining me in the
studio, I have the founder and

5
00:00:25,960 --> 00:00:32,039
CEO of Netdata, Costa Tsaousis.
Costa, welcome. And I did butcher your

6
00:00:32,079 --> 00:00:43,280
name, didn't I? Yes... Ah,
man. Well, welcome to the show.

7
00:00:43,320 --> 00:00:47,240
Thank you for joining me here.
Thank you very much for inviting me.

8
00:00:47,719 --> 00:00:51,679
It's very nice to be here.
Yeah. So you're the founder and

9
00:00:51,840 --> 00:01:00,000
CEO of Netdata, a monitoring
solution for simplifying and modernizing infrastructure observability,

10
00:01:00,280 --> 00:01:04,439
which is... that's a huge
task. You know, as

11
00:01:04,439 --> 00:01:07,159
we were talking before the show,
we've been doing this for a while and

12
00:01:07,239 --> 00:01:12,840
so you know, we've learned a
lot of lessons in the last few decades

13
00:01:12,920 --> 00:01:19,000
about doing this. So can you
give our listeners a little bit about your

14
00:01:19,000 --> 00:01:23,200
background and how you found yourself at
this point in life? Yes. So

15
00:01:23,280 --> 00:01:27,319
that's a funny story, because, you know, Netdata is a monitoring solution that

16
00:01:27,640 --> 00:01:34,560
was let's say it happened by accident. I never wanted to build a monitoring

17
00:01:34,599 --> 00:01:40,079
solution, even actually when I was
starting, when I had started building it,

18
00:01:40,319 --> 00:01:46,120
my intention was not to build a
monitoring solution. So the idea,

19
00:01:46,159 --> 00:01:52,079
guys, is the following: I was
migrating some infrastructure from on-prem to cloud.

20
00:01:53,079 --> 00:01:57,359
We had several problems. It
was very early in the cloud industry,

21
00:01:57,439 --> 00:02:05,680
let's say. After spending a
big budget, actually, and building a large team

22
00:02:06,359 --> 00:02:12,280
of skilled consultants and advisors and
the like, and with the help of

23
00:02:12,360 --> 00:02:19,520
the cloud provider, six months passed,
no outcome. The problems were still there.

24
00:02:20,360 --> 00:02:22,960
In the early days. I don't
know if you remember this, we were

25
00:02:23,000 --> 00:02:27,240
talking about that cloud is a little
bit alive and it behaves a little bit

26
00:02:27,280 --> 00:02:32,319
differently, and all these kinds
of discussions, which, to my

27
00:02:32,520 --> 00:02:38,400
understanding, are a little bit of
garbage at the end of

28
00:02:38,439 --> 00:02:43,560
the day. But anyway, after
spending quite some time there, I found

29
00:02:43,639 --> 00:02:46,319
myself... you know, it was
very painful. It was a

30
00:02:46,360 --> 00:02:53,280
fintech company. We were doing transactions, payments on the POS, et cetera,

31
00:02:53,400 --> 00:03:00,319
you know, cards, and we
had many retail chains that we were

32
00:03:00,360 --> 00:03:05,599
serving, and the queue, the people that
were waiting in line to actually finish their

33
00:03:05,639 --> 00:03:08,800
transactions, pay for the goods and
go home. It was going around the

34
00:03:08,879 --> 00:03:19,360
block. Those were stressful times.
So by that time I started thinking,

35
00:03:19,400 --> 00:03:23,080
come on, what is wrong?
Why can't we find what's happening? Why

36
00:03:23,759 --> 00:03:29,800
are monitoring systems so...? I had the
impression that everything that we had built,

37
00:03:29,840 --> 00:03:32,479
you know, all the dashboards,
all the tools, everything there was just

38
00:03:32,680 --> 00:03:43,039
something to make me feel happy.
It didn't provide any value. But so

39
00:03:44,840 --> 00:03:47,879
initially, this is how I started.
I was so ticked off that I started

40
00:03:47,960 --> 00:03:53,680
building a tool to consolidate all the
consoles. So what I wanted is not

41
00:03:53,759 --> 00:03:59,120
to build a monitoring solution. I
said, okay, we have metrics and

42
00:03:59,240 --> 00:04:04,000
data all over the place; let's build a
tool to aggregate everything. That was the

43
00:04:04,039 --> 00:04:08,800
goal. The goal. So instead
of having people being on the console of

44
00:04:08,840 --> 00:04:12,000
the database and the console of the
systems, and the console of this or

45
00:04:12,080 --> 00:04:19,240
that, aggregate everything into one environment
that replaces the consoles. So the

46
00:04:19,319 --> 00:04:25,079
monitoring tool replaces the consoles. Then,
in order to do that, I set

47
00:04:25,120 --> 00:04:27,319
a few goals. The first is, come on, I need the same

48
00:04:27,360 --> 00:04:30,879
fidelity: if the consoles are per second, I want this thing to be per

49
00:04:30,959 --> 00:04:34,360
second. If the consoles have,
I don't know, ten thousand metrics, I

50
00:04:34,560 --> 00:04:39,000
want ten thousand metrics. So whatever
the consoles do, no, no,

51
00:04:39,600 --> 00:04:46,439
no discounts at all. Once I
started building this, you know,

52
00:04:46,519 --> 00:04:49,519
the first generation of Netdata was born, and actually I did the same thing

53
00:04:49,639 --> 00:04:56,079
that the console tools do in many
cases. So if you have a freeze

54
00:04:56,399 --> 00:05:00,000
or a point is missing, a
sample is missing, or something cannot be

55
00:05:00,000 --> 00:05:04,040
collected, then I have a gap. It's not something that smooths out. Today's

56
00:05:04,160 --> 00:05:11,160
monitoring systems, most of them smooth
it out. Yeah, but in

57
00:05:11,319 --> 00:05:15,560
Netdata, from the first day, this
was a gap: I failed to collect

58
00:05:15,639 --> 00:05:23,240
that thing at that time. So
the idea is that I worked alone weekends

59
00:05:23,319 --> 00:05:29,319
and nights, and you know,
I was a COO in the day and

60
00:05:30,199 --> 00:05:38,319
an open source maintainer at night.
So after building this for a couple of

61
00:05:38,439 --> 00:05:44,040
years... and the people
loved it, of course, mainly because

62
00:05:44,120 --> 00:05:48,759
it gave them all the fidelity that
they were missing from monitoring tools, all

63
00:05:48,800 --> 00:05:54,480
the information. It's also fully automated, so dashboards come up by themselves.

64
00:05:54,680 --> 00:06:00,120
Everything happens by itself, with
the discovery of metrics, everything:

65
00:06:00,360 --> 00:06:02,920
the database, the plugins, everything is in there; no moving parts.

66
00:06:03,920 --> 00:06:09,360
So once I released it, it
skyrocketed. It skyrocketed

67
00:06:09,399 --> 00:06:13,680
immediately, probably one of the fastest-
growing products on GitHub. It got

68
00:06:13,800 --> 00:06:16,240
ten thousand GitHub stars in two
weeks or something like that. Oh wow.

69
00:06:16,519 --> 00:06:21,519
So once I saw this, I
said, okay, you know,

70
00:06:21,800 --> 00:06:27,639
initially you feel proud, but at the same time you feel

71
00:06:27,680 --> 00:06:30,959
the responsibility. You say, well, now I have built something that is

72
00:06:30,040 --> 00:06:35,360
installed on thousands of systems around the globe. I hope I didn't mess it up

73
00:06:35,759 --> 00:06:43,079
somehow. So you feel the responsibility. So I started getting you know,

74
00:06:43,480 --> 00:06:47,279
ideas, and new people came in
and they were contributing and all this kind

75
00:06:47,319 --> 00:06:56,000
of you know work that happens in
communities. That was amazing. So a

76
00:06:56,040 --> 00:06:59,120
couple of years later I decided to
start a company. I said, okay,

77
00:06:59,399 --> 00:07:01,600
well, we have something here. This
is not a toy

78
00:07:01,680 --> 00:07:08,600
anymore. This is something important that
people use every day to actually monitor their

79
00:07:08,639 --> 00:07:13,600
systems. So this is
how Netdata was born. Of course, I

80
00:07:13,720 --> 00:07:16,439
needed to find a plan, you
know, because this is a funded

81
00:07:16,519 --> 00:07:21,680
company, so you need somehow to
make money and be in the market,

82
00:07:21,800 --> 00:07:26,040
have a go-to-market strategy,
et cetera. So what I did then

83
00:07:26,199 --> 00:07:30,480
is that I decided that the most
important thing to do is to keep this

84
00:07:30,040 --> 00:07:35,959
high-fidelity nature of Netdata: real
time, high fidelity, across the board, everywhere.

85
00:07:36,439 --> 00:07:40,800
How do you scale that thing?
That's the tricky part. How

86
00:07:40,839 --> 00:07:45,519
do you scale it? Most of
the monitoring solutions centralize everything, even the

87
00:07:45,560 --> 00:07:48,839
commercial ones, the Datadogs and
Dynatraces and New Relics, all the

88
00:07:48,879 --> 00:07:54,279
providers today, even the
open source, Prometheus, et cetera. All of

89
00:07:54,360 --> 00:08:00,480
them centralize everything into one database and
then query the database, or these databases:

90
00:08:00,519 --> 00:08:03,240
one for metrics, one for logs,
et cetera. How do you scale it?

91
00:08:05,600 --> 00:08:09,439
The first thing is that I said, okay, we have something that

92
00:08:09,600 --> 00:08:13,160
works at the edge and it's high
fidelity. It collects everything, has a

93
00:08:13,240 --> 00:08:18,480
lot more information, amazing coverage of
technologies, it is real time. Even

94
00:08:18,639 --> 00:08:22,439
the latency is amazing. So
Netdata has one-second data-collection-to-visualization

95
00:08:22,600 --> 00:08:26,560
latency. So you press Enter on the
console to do a change, and the next

96
00:08:26,600 --> 00:08:31,919
second the dashboard is showing the
result. Oh wow. So I said,

97
00:08:31,960 --> 00:08:37,200
okay, how do we change that, how do we scale it?

98
00:08:37,279 --> 00:08:43,279
And then I said, okay,
let's go distributed. So instead of

99
00:08:43,360 --> 00:08:48,960
centralizing everything to one place, which becomes
huge after a while, and

100
00:08:48,120 --> 00:08:56,840
becomes overwhelmed, I said, okay, let's try a completely different approach.

101
00:08:56,080 --> 00:09:03,080
Let's have the data spread out across
the infrastructure in little islands here and

102
00:09:03,200 --> 00:09:09,200
there, and let's figure out a
way if we can do it to actually

103
00:09:09,759 --> 00:09:16,240
merge everything at query time? Can we have
the data all over the infrastructure, and at

104
00:09:16,559 --> 00:09:22,000
the dashboard you have the feeling that
this is one thing you can see everything.

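The distributed design described here, little islands of data merged only at query time, can be sketched in a few lines. This is an illustrative sketch, not Netdata's actual code or API; every name and data shape in it is invented for the example.

```python
# Illustrative sketch only: each node keeps its own per-second samples,
# and a query merges them across nodes only when a dashboard asks.
def merge_at_query_time(nodes, metric, start, end):
    """Combine per-node samples for one metric into a single timeline."""
    merged = {}  # timestamp -> value summed across nodes
    for node in nodes:
        for ts, value in node.get(metric, []):
            if start <= ts <= end:
                merged[ts] = merged.get(ts, 0.0) + value
    return sorted(merged.items())

# Two "islands" of per-second data; nothing is centralized up front.
node_a = {"disk.io": [(1, 10.0), (2, 12.0), (3, 11.0)]}
node_b = {"disk.io": [(2, 5.0), (3, 6.0), (4, 7.0)]}
print(merge_at_query_time([node_a, node_b], "disk.io", 1, 4))
# [(1, 10.0), (2, 17.0), (3, 17.0), (4, 7.0)]
```

The dashboard sees one merged series, while the storage and most of the query work stay on the nodes themselves.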
105
00:09:22,360 --> 00:09:26,679
So that was the idea, and
this is what we implemented. We

106
00:09:26,759 --> 00:09:31,759
spent a few years, we implemented
the thing. So today Netdata

107
00:09:31,879 --> 00:09:35,919
is a modern solution that you install
and it discovers everything. Still, all the same stuff

108
00:09:35,960 --> 00:09:39,600
exists today. So it's a modern
solution where you don't need to configure anything,

109
00:09:39,080 --> 00:09:45,879
mainly because we don't cherry-pick information. So in centralized infrastructure monitoring

110
00:09:45,919 --> 00:09:50,720
systems, you have to cherry-pick.
You have to know beforehand what metrics,

111
00:09:50,720 --> 00:09:54,320
which metrics you need, how frequently
you need them. Since we eliminated this

112
00:09:54,519 --> 00:10:00,480
factor, let's have everything in
high resolution. Again, the next goal

113
00:10:00,639 --> 00:10:05,240
was, okay, why configure
anything? Since we can ingest everything,

114
00:10:05,919 --> 00:10:11,159
let's auto-discover everything. Let's just
ingest everything by default. The next goal

115
00:10:11,399 --> 00:10:16,159
was, okay, since now
we ingest everything, why go through

116
00:10:16,240 --> 00:10:22,000
the process of configuring dashboards, metric by
metric, chart by chart. Let's find a

117
00:10:22,120 --> 00:10:28,519
way to create meaningful dashboards out
of the box, by itself. So we

118
00:10:28,639 --> 00:10:35,000
attached metadata that allows Netdata to
correlate the metrics at runtime and present them

119
00:10:35,039 --> 00:10:39,759
in beautiful, meaningful dashboards. The
same happened with alerts. Since we collect

120
00:10:39,840 --> 00:10:45,000
everything, we have everything. Instead of
having, you know, the default threshold

121
00:10:45,039 --> 00:10:48,960
alerts, where if the aggregated
number goes above this, trigger an alert.

122
00:10:50,480 --> 00:10:52,600
Instead of doing this, what we
did is, okay, can we

123
00:10:52,840 --> 00:11:00,799
monitor, component by component, bottom-up, the
entire infrastructure? So can we have alarms for

124
00:11:00,919 --> 00:11:05,000
a disk, alarms for a network
interface, alarms for a container, alarms

125
00:11:05,039 --> 00:11:09,080
for a Postgres database, for an instance
of Postgres, for an instance of

126
00:11:09,120 --> 00:11:15,480
NGINX. So today we have about
three hundred and fifty alerts that monitor components

127
00:11:15,799 --> 00:11:18,759
of your infrastructure. Right. So the
idea is that you install

128
00:11:18,879 --> 00:11:24,759
Netdata and suddenly, out of the
box, in minutes, in seconds, you

129
00:11:24,879 --> 00:11:30,120
have a fully functional monitoring system that
you didn't do anything to get,

130
00:11:30,279 --> 00:11:33,480
apart from installing Netdata.
That is the beauty. Wow. Okay,

131
00:11:33,840 --> 00:11:37,919
we got to pause there because I'm
just trying to wrap my head around

132
00:11:37,960 --> 00:11:43,120
this. Like hearing you say it, it just makes perfect sense. The

133
00:11:43,240 --> 00:11:48,120
part that I'm struggling with is like, this makes so much sense. Why

134
00:11:50,000 --> 00:11:54,639
am I three decades into my career
and just now having this revelation? Because, you

135
00:11:54,720 --> 00:12:00,480
know, like you said, like
when you install a monitoring system, it's

136
00:12:00,600 --> 00:12:05,759
like it's like getting grilled by an
interrogator. What day do you want?

137
00:12:05,919 --> 00:12:09,000
What thresholds? What frequency?
And you're like, man, I don't

138
00:12:09,120 --> 00:12:16,200
know. I just got here. How would
I have these answers? Yes. And

139
00:12:16,360 --> 00:12:20,600
you know, we went a lot
further. So, for example, let's

140
00:12:20,600 --> 00:12:24,240
assume that you use it. So
you installed it. You have one hundred

141
00:12:24,279 --> 00:12:28,120
servers, two hundred, thousands of servers; you installed Netdata. Everything works

142
00:12:28,159 --> 00:12:31,200
by itself, you have alerts, you
have dashboards. Everything works. Okay,

143
00:12:31,679 --> 00:12:35,519
now you go and see the dashboard. Wait a moment, you see the

144
00:12:35,679 --> 00:12:41,720
metrics for the first time. These
are metrics you are not familiar with, right?

145
00:12:41,559 --> 00:12:48,840
right? Can we make the charts
easy to grasp, to digest at

146
00:12:50,039 --> 00:12:54,559
first sight? And what information do
we need on a chart? So we

147
00:12:54,000 --> 00:12:58,240
invented the NIDL framework. The NIDL
framework is a little toolbar

148
00:12:58,320 --> 00:13:03,879
above every chart that allows you
to understand where the data is coming from:

149
00:13:03,080 --> 00:13:07,240
which nodes, which instances, which
dimensions, and what labels they have, and

150
00:13:07,519 --> 00:13:15,120
give you statistics about the sources that
contribute to the chart. Mm-hmm.

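The NIDL idea, summarizing the Nodes, Instances, Dimensions and Labels behind a chart, can be sketched like this. A hypothetical sketch: the field names and data shapes are assumptions made up for the example, not Netdata's real metadata format.

```python
# Hypothetical sketch: summarize the sources that feed one chart by their
# Nodes, Instances, Dimensions and Labels (the "NIDL" of the toolbar).
def nidl_summary(series):
    summary = {"nodes": set(), "instances": set(), "dimensions": set(), "labels": set()}
    for s in series:
        summary["nodes"].add(s["node"])
        summary["instances"].add(s["instance"])
        summary["dimensions"].add(s["dimension"])
        for key, value in s.get("labels", {}).items():
            summary["labels"].add(f"{key}={value}")
    return {name: sorted(items) for name, items in summary.items()}

# Invented sources for one network-traffic chart.
chart_sources = [
    {"node": "web1", "instance": "eth0", "dimension": "received",
     "labels": {"device_type": "physical"}},
    {"node": "web1", "instance": "eth0", "dimension": "sent",
     "labels": {"device_type": "physical"}},
    {"node": "web2", "instance": "eth1", "dimension": "received",
     "labels": {"device_type": "virtual"}},
]
print(nidl_summary(chart_sources))
```

A reader of the chart can then see at a glance that two nodes and two interfaces contribute, without leaving the dashboard.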
151
00:13:15,919 --> 00:13:20,480
Once you have this, then
we started discussing, okay, now

152
00:13:20,559 --> 00:13:24,519
that we have really a lot of
metrics and everything is automated and everything is

153
00:13:24,639 --> 00:13:28,879
visualized by default, can we come
up with... you know, it's the trouble-

154
00:13:28,919 --> 00:13:35,440
shooting. We say that there is
troubleshooting to do, because we try to

155
00:13:35,639 --> 00:13:41,919
solve the problem of being efficient at
troubleshooting time. So how it works

156
00:13:41,919 --> 00:13:46,360
for most monitoring systems is this:
you have an infrastructure and you face

157
00:13:46,399 --> 00:13:50,600
a problem. There is a dip in
your sales or your users or whatever, you

158
00:13:50,639 --> 00:13:54,440
see a dip on a chart. Okay, what do you do next?

159
00:13:54,759 --> 00:14:01,559
You start speculating. Oh, probably
it's the database. Let's go to the

160
00:14:01,639 --> 00:14:05,559
database; if you don't have charts,
let's figure out how to build charts for

161
00:14:05,679 --> 00:14:11,320
that. Let's validate the assumption.
Oh, it's not the database, it

162
00:14:11,360 --> 00:14:13,159
should be the network. I think
we have a problem in the network.

163
00:14:13,240 --> 00:14:16,480
Right. This is how it works. You're speculating all the time and you

164
00:14:16,679 --> 00:14:24,240
hope that your experience will help you
pinpoint the right cause because the monitoring itself

165
00:14:24,279 --> 00:14:28,919
cannot tell you. So what we
did is that we added to Netdata

166
00:14:28,120 --> 00:14:37,320
unsupervised machine learning. So we train
multiple machine learning models for every metric,

167
00:14:37,960 --> 00:14:43,960
multiple for every metric. And then
what we do is that Netdata is able

168
00:14:43,039 --> 00:14:48,279
to detect anomalies in real
time, based on the past of each metric,

169
00:14:48,919 --> 00:14:52,759
so it is the past of each
metric that trains the models. And then during

170
00:14:54,000 --> 00:14:58,960
data collection, it decides if the
just-collected value is an anomaly or

171
00:14:58,960 --> 00:15:03,000
not, okay? And we store
this in the database. So together with

172
00:15:03,120 --> 00:15:05,519
each sample, we say, oh, this
was an anomaly, or no, no,

173
00:15:05,679 --> 00:15:11,399
this was not anomalous. Now,
the beauty of this is that we created

174
00:15:11,480 --> 00:15:15,600
a tool, we call it the Anomaly
Advisor, where you highlight a spike or a

175
00:15:15,679 --> 00:15:20,519
dip, whatever is interesting, and
we built a scoring engine inside

176
00:15:20,600 --> 00:15:24,879
Netdata, so it goes,
for the timeframe that you highlighted, across all

177
00:15:26,080 --> 00:15:30,679
the metrics. It doesn't matter
how many metrics are there, and it

178
00:15:30,919 --> 00:15:37,440
scores them based on their anomaly rate. So your "aha" moment,

179
00:15:37,759 --> 00:15:41,159
oh, the disk did that, is within
the list, so you don't need to

180
00:15:41,240 --> 00:15:46,519
speculate. You just go there,
press a button, and Netdata comes up

181
00:15:46,600 --> 00:15:48,360
with a list. You see the
list. You say, okay, if

182
00:15:48,399 --> 00:15:50,879
this happened, this happened, I
know, I know, I know what

183
00:15:52,080 --> 00:15:56,440
this is. What happened here?
Why the database did that, or why

184
00:15:56,480 --> 00:16:00,440
did this or why the network did
that. So the idea is to simplify

185
00:16:00,960 --> 00:16:08,679
tremendously the troubleshooting, to allow users to be a
lot more efficient in the resolution of problems.

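The scoring idea just described can be sketched as follows. This is an illustrative toy, not Netdata's engine: it assumes each stored sample carries an anomaly bit, and ranks every metric by its anomaly rate inside the highlighted window.

```python
# Toy sketch: each sample is (timestamp, value, anomaly_bit). For a
# highlighted window, rank metrics by the fraction of anomalous samples.
def score_window(metrics, start, end):
    scores = []
    for name, samples in metrics.items():
        bits = [anom for ts, _value, anom in samples if start <= ts <= end]
        if bits:
            scores.append((sum(bits) / len(bits), name))
    return [name for _rate, name in sorted(scores, reverse=True)]

# Invented metrics; the disk is the most anomalous in the window.
metrics = {
    "disk.await":  [(1, 5.0, 0), (2, 90.0, 1), (3, 85.0, 1)],
    "cpu.user":    [(1, 20.0, 0), (2, 22.0, 0), (3, 21.0, 0)],
    "net.retrans": [(1, 0.1, 0), (2, 3.0, 1), (3, 0.2, 0)],
}
print(score_window(metrics, 1, 3))  # ['disk.await', 'net.retrans', 'cpu.user']
```

The point of the design is that the engineer highlights a timeframe and gets a ranked list back, instead of guessing which subsystem to inspect first.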
186
00:16:10,720 --> 00:16:15,120
So not only do you have access
to the data points from the metrics,

187
00:16:15,200 --> 00:16:18,919
but you're also putting them in context. So when you look at the

188
00:16:19,559 --> 00:16:23,080
when you look at the graph or
the dashboard, you see the numbers,

189
00:16:23,120 --> 00:16:26,720
which you're providing context to say is
this a good number or is this a

190
00:16:26,759 --> 00:16:33,360
bad number? Mm-hm. And
in our visualization we have added an

191
00:16:33,360 --> 00:16:40,960
anomaly ribbon where you can see in
real time what the anomalies are,

192
00:16:41,679 --> 00:16:47,720
what the machine learning does, how the machine
learning detects anomalies in real time. What

193
00:16:47,919 --> 00:16:53,000
we found also is that when we
had all this infrastructure that was training all

194
00:16:53,080 --> 00:16:59,519
this kind of stuff and detecting anomalies
in real time, we realized that anomalies

195
00:16:59,600 --> 00:17:04,960
happen in clusters. So you go there
and you see that there are anomalies

196
00:17:06,559 --> 00:17:11,079
across nodes, happening in clusters within
a node. So a lot of metrics

197
00:17:11,119 --> 00:17:15,519
get anomalous when something... when there
is an anomaly within a node, but

198
00:17:15,640 --> 00:17:19,880
also a lot of nodes together, in a
very short time, one after the other,

199
00:17:21,000 --> 00:17:26,720
they get anomalous, in a great
percentage. So we didn't know that.

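The clustering observation can be made concrete with a small computation: per node, the percentage of its metrics that are anomalous at the same moment. A sketch under assumed data shapes, not Netdata's implementation:

```python
# Sketch: per node, percentage of metrics flagged anomalous right now.
def node_anomaly_rate(anomaly_bits):
    """anomaly_bits maps metric name -> 1 if anomalous, else 0."""
    if not anomaly_bits:
        return 0.0
    return 100.0 * sum(anomaly_bits.values()) / len(anomaly_bits)

db_node = {"cpu": 1, "disk": 1, "net": 1, "ram": 0}     # many metrics anomalous together
quiet_node = {"cpu": 0, "disk": 0, "net": 1, "ram": 0}  # an isolated blip
print(node_anomaly_rate(db_node), node_anomaly_rate(quiet_node))  # 75.0 25.0
```

A high concurrent percentage on one node, or on many nodes in quick succession, is exactly the clustering pattern described above.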
200
00:17:26,000 --> 00:17:33,559
We realized this by reviewing the
data, and then we built a tool

201
00:17:33,960 --> 00:17:41,440
to actually allow people to review
anomalies across the infrastructure. So

202
00:17:41,599 --> 00:17:45,160
now we have a chart, for
example, that gives you a line

203
00:17:45,240 --> 00:17:49,519
for every node that you have,
and it's the percentage of

204
00:17:49,680 --> 00:17:56,759
metrics being anomalous concurrently, so you
can see the strength of the anomaly and

205
00:17:56,880 --> 00:18:00,440
the spread of the anomaly at
the same time. So this is the

206
00:18:02,039 --> 00:18:04,839
story. This is, in short, the
story of Netdata. Let's say we are

207
00:18:04,960 --> 00:18:14,799
trying to make observability a lot easier for people. To tell

208
00:18:14,839 --> 00:18:21,559
you the truth, when DevOps started, there was this little diagram that the

209
00:18:21,880 --> 00:18:26,720
consultants, or the consulting firms, used. They
were saying that, you know, DevOps

210
00:18:27,319 --> 00:18:36,920
is the intersection of data science,
software engineering, and IT infrastructure, and

211
00:18:37,079 --> 00:18:41,400
they were saying, at the
three bubbles, at the point where they

212
00:18:41,599 --> 00:18:47,359
join, all three of them,
you have DevOps. My understanding is, this

213
00:18:47,519 --> 00:18:52,119
is: in theory it is okay,
but in practice, to have an extremely good

214
00:18:52,240 --> 00:18:59,519
data scientist that is also a software engineer
and knows about IT architecture and the

215
00:18:59,680 --> 00:19:03,200
depth of the IT technology that exists. Come on, guys, this guy

216
00:19:03,279 --> 00:19:08,559
does not exist. I don't know
if the world may have a couple of

217
00:19:08,640 --> 00:19:11,440
them, three of them, I
don't know for sure. You cannot have

218
00:19:11,599 --> 00:19:18,599
one next to you. The idea
is that monitoring needs to be simpler,

219
00:19:18,920 --> 00:19:23,039
and it can be simpler. No
need to learn a query language within Netdata.

220
00:19:23,160 --> 00:19:26,319
You don't need to
learn a query language to filter and

221
00:19:26,519 --> 00:19:32,480
slice and dice the data. You know, it's like a cube. You can

222
00:19:33,000 --> 00:19:36,559
change the cube the way you see
fit, by point and click, to create

223
00:19:36,640 --> 00:19:40,720
dashboards by drag and drop.
So the idea is that we are trying,

224
00:19:41,000 --> 00:19:45,319
let's say, let's say that we
try to bring the technology,

225
00:19:45,440 --> 00:19:52,319
the monitoring technology that the best organizations
of this world have, so real-time, per-

226
00:19:52,400 --> 00:20:00,880
second, high-resolution, machine learning everywhere, and
bring it to everyone in a very simple,

227
00:20:03,000 --> 00:20:07,359
affordable package. Because Netdata also,
mainly because of its distributed design,

228
00:20:08,039 --> 00:20:12,839
is the most cost-efficient solution.
Oh right, yeah, because you're not

229
00:20:12,960 --> 00:20:18,279
stuck with a couple of years from
now having to run these monster servers just

230
00:20:18,400 --> 00:20:22,920
to maintain the amount of data you
got. Actually, we use resources,

231
00:20:23,319 --> 00:20:29,160
compute resources that are available and spare. It's your servers. They have two

232
00:20:29,240 --> 00:20:33,039
percent CPU to spare and, I don't know, two hundred megabytes of RAM; that's

233
00:20:33,079 --> 00:20:37,440
easy. And this is what we
use: two percent CPU of a single

234
00:20:37,480 --> 00:20:41,559
core, three percent CPU of a
single core. This is what Netdata needs:

235
00:20:41,119 --> 00:20:48,279
resources, and two hundred megabytes of RAM,
and, I don't know, one gigabyte or

236
00:20:48,440 --> 00:20:53,759
two gigabytes of disk. That's it.
Well, so, not to mention that if

237
00:20:53,799 --> 00:21:00,720
you compare with commercial offerings, all
of them require a tremendous amount of egress bandwidth.

238
00:21:03,799 --> 00:21:08,880
Netdata does not stream anywhere.
It's all inside there, so the

239
00:21:10,359 --> 00:21:14,440
egress bandwidth will be used
only when you view the dashboard. If

240
00:21:14,480 --> 00:21:19,839
you don't view, there's nothing there, there's no egress. So that's the

241
00:21:19,880 --> 00:21:26,160
whole point. Simplify. Take something
that is the best out there. We

242
00:21:26,319 --> 00:21:33,119
integrated into Netdata. Technologies, for example:
we use a variation of the Gorilla compression

243
00:21:33,440 --> 00:21:41,039
that Facebook developed. So Facebook has developed
a real-time, high-resolution monitoring

244
00:21:41,640 --> 00:21:45,119
database, a time-series database,
and it's called Gorilla. We took the

245
00:21:45,200 --> 00:21:48,960
concept, we adapted it to Netdata,
and now Gorilla compression is in Netdata.

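The core trick of Facebook's Gorilla paper is XOR-ing each 64-bit float with its predecessor: for slowly changing metrics the XOR is mostly zero bits, so consecutive samples compress to very few bits. The sketch below shows only that XOR step, not Netdata's actual storage format.

```python
import struct

# XOR each 64-bit float with its predecessor (the heart of Gorilla-style
# compression); identical consecutive samples XOR to exactly 0.
def xor_deltas(values):
    bits = [struct.unpack(">Q", struct.pack(">d", v))[0] for v in values]
    return [bits[i] ^ bits[i - 1] for i in range(1, len(bits))]

samples = [12.0, 12.0, 12.0, 12.5]
deltas = xor_deltas(samples)
print(deltas[0], deltas[1])  # 0 0 -- the repeats cost almost nothing to store
```

A real encoder would then write each delta with a variable number of bits, spending almost nothing on the zeros; that is what makes per-second storage affordable.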
246
00:21:49,200 --> 00:21:53,160
So that's across the board. That's
what we do. Across the board.

247
00:21:53,319 --> 00:21:56,240
We're trying to bring the best and
give it to all. This is why we

248
00:21:56,359 --> 00:22:07,759
say that we democratized monitoring. So when
you get to this level, where you're bringing

249
00:22:07,799 --> 00:22:15,759
in this granular data from all across
your infrastructure, how do

250
00:22:15,799 --> 00:22:18,880
you determine what to surface for the
user? Because that seems like an avenue

251
00:22:18,920 --> 00:22:25,480
where you can get to information overload
really, really quickly. So the idea is

252
00:22:25,519 --> 00:22:30,240
that... yeah, so the idea
is that we group everything into meaningful stuff,

253
00:22:30,640 --> 00:22:34,720
so you go and it
says, here are the network interfaces,

254
00:22:34,880 --> 00:22:40,480
the top information about the network interfaces:
the packets and bandwidth, for example,

255
00:22:40,480 --> 00:22:42,799
and errors. But then there is
all that you may need in

256
00:22:44,000 --> 00:22:48,640
order to explore what's happening. The
same happens everywhere, so your database,

257
00:22:48,680 --> 00:22:52,319
server, even your tables.
We go down to the index level.

258
00:22:53,000 --> 00:23:00,200
Now, all this information, mainly because we
have this scoring engine... When

259
00:23:00,279 --> 00:23:03,559
you don't know what to do,
you are on a dashboard that has

260
00:23:03,599 --> 00:23:07,000
five hundred charts on it. Okay, what do I do now here?

261
00:23:07,839 --> 00:23:11,480
The first thing is that there
is a button that says, okay,

262
00:23:14,599 --> 00:23:22,160
identify for me the ones that are
currently in the visible time frame, the

263
00:23:22,279 --> 00:23:26,200
most anomalous, for example, to
give you something to look at. You

264
00:23:26,279 --> 00:23:30,119
know, when you are looking for
something, you don't know where to start,

265
00:23:30,880 --> 00:23:36,400
but it will identify the dashboard sections. Okay, here I have twenty

266
00:23:36,440 --> 00:23:41,799
percent anomalous. This is bad,
go look here. So the idea is,

267
00:23:42,279 --> 00:23:49,880
one, we've developed tools to help people
with the information overload, let's say.

268
00:23:51,440 --> 00:23:56,880
But the most important thing is that
there are people that are afraid of this information

269
00:23:56,000 --> 00:24:03,119
overload. There are other people that
enjoy the depth and the detail. What

270
00:24:03,400 --> 00:24:10,480
we hear from users is that when
you use Netdata for some time to

271
00:24:10,599 --> 00:24:14,359
explore your infrastructure... Let's assume
that you're not troubleshooting anything. You just

272
00:24:14,519 --> 00:24:21,599
want to understand. It's like feeling
the pulse and the breath of the infrastructure.

273
00:24:21,759 --> 00:24:25,079
You feel it because you see it
every second. It's extremely high

274
00:24:25,119 --> 00:24:32,480
resolution, so you can understand what
is really happening there. And I think

275
00:24:32,559 --> 00:24:34,960
that this is the most important.
Of course, it's a tool for people

276
00:24:36,000 --> 00:24:40,000
that want to learn. If someone
wants just traffic lights, oh, it's

277
00:24:40,119 --> 00:24:45,359
healthy, it's not healthy, if
someone wants just this, it's overwhelming

278
00:24:45,519 --> 00:24:49,640
for them. But if someone wants
to learn, to dive, to understand, to

279
00:24:49,839 --> 00:24:56,839
fix the problem, this is where
Netdata steps in. Wow,

280
00:25:00,440 --> 00:25:04,640
that's wild, that's wild. I
mean, you know, because I feel

281
00:25:04,720 --> 00:25:10,640
like from my own personal perspective,
like you know, we've been approaching monitoring

282
00:25:10,720 --> 00:25:18,839
and observability the same way for so
long and then this just completely flips all

283
00:25:18,920 --> 00:25:23,160
of that upside down. But that's
the whole point, and I think this

284
00:25:23,319 --> 00:25:27,880
is the monitoring system that is missing
today. So Prometheus, for example,

285
00:25:27,920 --> 00:25:34,279
is amazing for customizability. You can
build whatever is imaginable: you imagine it, you

286
00:25:34,359 --> 00:25:41,480
build it. Great. The big guys,
Datadog, Dynatrace, and the likes,

287
00:25:41,559 --> 00:25:45,519
they try to give you a helicopter
view of the most important things.

288
00:25:45,759 --> 00:25:55,039
Although they value high resolution, they
charge for high resolution; it is not on by

289
00:25:55,119 --> 00:26:03,160
default. So I think that there
was no tool to cover this,

290
00:26:03,559 --> 00:26:08,279
this area of real-time, deep-dive
monitoring, where you can go and look at

291
00:26:08,480 --> 00:26:15,400
everything in high resolution and in
detail. To tell you the truth,

292
00:26:15,599 --> 00:26:22,160
the simplicity that we added
to the tool is one thing that our

293
00:26:22,279 --> 00:26:26,440
users love. So when I go
and speak about Netdata, I see it in

294
00:26:27,079 --> 00:26:32,960
many... I was at FOSDEM, for
example, a month ago, and it was

295
00:26:33,039 --> 00:26:41,119
amazing because you see people that are
skeptical about observability. There are some that

296
00:26:41,279 --> 00:26:48,039
are at the point of observability denial.
The idea is that they have an

297
00:26:48,119 --> 00:26:52,480
argument. What they say is that
come on, this is too complex,

298
00:26:52,759 --> 00:26:59,319
too expensive for what I get, for
sure. So with what we tried,

299
00:26:59,880 --> 00:27:04,599
the fact that we solved the
cardinality and granularity problem, and we can scale infinitely

300
00:27:04,720 --> 00:27:11,240
without actually becoming a problem, this
allowed us to become a lot simpler.

301
00:27:12,000 --> 00:27:17,240
So it does a lot more.
But the thing is simple. You don't

302
00:27:17,359 --> 00:27:22,319
have to do anything, you just
have to use it. So this is

303
00:27:22,559 --> 00:27:26,039
I think that this is the winning
factor: it is high resolution

304
00:27:26,400 --> 00:27:30,480
and at the same time extremely easy. You don't have to do anything.

305
00:27:30,559 --> 00:27:34,519
It doesn't require anything from you, not
even resources. Just give it the resources

306
00:27:34,559 --> 00:27:40,680
that you already have, and that's it. So I think this is the... this

307
00:27:40,880 --> 00:27:45,960
is the combination of things that makes
Netdata so appealing. Yeah,

308
00:27:47,039 --> 00:27:49,119
for sure, especially the ease of
use thing, because a lot of

309
00:27:49,559 --> 00:27:57,359
the other observability tools and just my
own personal experience with them, is that they

310
00:27:57,480 --> 00:28:03,720
rely on me too much,
and they rely on me to know what

311
00:28:03,960 --> 00:28:07,240
questions to ask. And I'm like, man, if I knew a question

312
00:28:07,440 --> 00:28:12,279
to ask, I probably wouldn't be
asking. You wouldn't be Yeah, yeah,

313
00:28:12,920 --> 00:28:18,759
you know. The interesting part is
that some people, when

314
00:28:18,519 --> 00:28:22,920
they hear, for example, a
distributed monitoring solution that everything is at the

315
00:28:22,039 --> 00:28:26,279
edge, the first thing that they
say is: wait a moment, man,

316
00:28:26,759 --> 00:28:33,240
this is going to be heavier
than the other agents, right? Yeah.

317
00:28:33,640 --> 00:28:36,359
We did a comparison on our side. There is a blog post where

318
00:28:36,359 --> 00:28:41,440
we compared the agents of all
monitoring solutions that we could find.

319
00:28:41,640 --> 00:28:45,839
Netdata is one of the lightest.
The core of Netdata is written

320
00:28:45,880 --> 00:28:52,440
in C. It is heavily optimized for
performance. Of course, we have plugins

321
00:28:52,480 --> 00:28:56,359
and the likes that are higher level
and written in Go or whatever, but

322
00:28:56,599 --> 00:29:04,480
the core is extremely optimized to
be very efficient. For example, we

323
00:29:04,599 --> 00:29:10,640
did a stress test
against Prometheus, mainly because Prometheus is the

324
00:29:10,799 --> 00:29:15,319
industry standard. So, the same load: we gave

325
00:29:15,400 --> 00:29:21,759
them both almost three million
metrics per second, everything

326
00:29:21,799 --> 00:29:25,119
per second. And we said, let's see the resources that both

327
00:29:25,200 --> 00:29:30,319
systems need, Netdata and Prometheus.
Netdata used one third less CPU, half

328
00:29:30,400 --> 00:29:37,839
the memory, it used ten percent
less bandwidth, ninety eight percent less disk I/O;

329
00:29:37,160 --> 00:29:44,400
the disk was almost idle all the
time with Netdata. And

330
00:29:44,920 --> 00:29:51,519
it managed to fit seventy five percent
more data into the same storage. So

331
00:29:51,759 --> 00:29:57,880
for example, Prometheus has either
uncompressed data on disk at two point something,

332
00:29:57,960 --> 00:30:04,200
two point one if I remember
correctly, bytes per sample on disk, or compressed

333
00:30:04,279 --> 00:30:10,920
with Gorilla, which goes to one point
three bytes per sample.

334
00:30:11,799 --> 00:30:18,160
The data has zero point five zero
point half a byte per sample on disc.

335
00:30:21,079 --> 00:30:22,960
It's extremely efficient. This
is why we did it in C,

336
00:30:23,720 --> 00:30:27,559
to make it extremely efficient. Yeah, for sure, for sure. And

337
00:30:29,440 --> 00:30:33,039
you know, Netdata traditionally was only
metrics. But last year I said,

338
00:30:33,519 --> 00:30:41,119
We had built our own logs management
thing, a logs database that was

339
00:30:41,119 --> 00:30:45,039
integrated into Netdata. But
then last year I said, wait

340
00:30:45,119 --> 00:30:49,279
a moment, I think we're doing it
wrong. I realized that all of us

341
00:30:49,680 --> 00:30:53,200
have a new system, the systemd journal. The journal is part of systemd,

342
00:30:53,279 --> 00:31:00,519
there for the logs, et cetera. This thing is amazing. Probably we

343
00:31:00,640 --> 00:31:06,759
don't know it, but it is
amazing. All the logs management systems they

344
00:31:07,039 --> 00:31:12,480
suffer from cardinality: how many
streams of data there are, right? The systemd

345
00:31:12,559 --> 00:31:18,200
journal does not care. So
every log line can have its own fields

346
00:31:18,799 --> 00:31:25,079
and its own values on these fields, and all of them will be indexed,

347
00:31:25,279 --> 00:31:32,880
all of them. So cardinality is not
a problem. Full indexing on everything.

348
00:31:33,000 --> 00:31:34,960
So even if you centralize, you put a

349
00:31:36,039 --> 00:31:41,680
lot of logs, web server logs
for example, to it: the ingestion process,

350
00:31:41,759 --> 00:31:48,319
et cetera, are extremely optimized in
the systemd journal. The only problem

351
00:31:48,400 --> 00:31:52,480
that it has is that it needs
disk space, so it requires more disk

352
00:31:52,559 --> 00:31:56,359
space than the rest, but it
does more with the disk space, and

353
00:31:56,400 --> 00:31:59,759
at the same time it is secure. It's the only logs management solution that

354
00:31:59,880 --> 00:32:08,160
has fault tolerance and high availability,
and even sealing, Forward Secure Sealing,

355
00:32:08,599 --> 00:32:15,559
so it ensures that the logs cannot
be tampered with. So the idea,

356
00:32:15,839 --> 00:32:22,000
the way we managed to use the systemd
journal is exactly the same way as with

357
00:32:22,279 --> 00:32:27,519
Netdata, so we use them
in a distributed way. You don't need

358
00:32:27,559 --> 00:32:32,519
to centralize your logs. The
whole world tries to build a logs processing

359
00:32:32,599 --> 00:32:37,880
pipeline and they try to install real
time tools to do analysis and the likes

360
00:32:37,920 --> 00:32:40,839
and all this kind of stuff,
you know, in order to get insights

361
00:32:40,839 --> 00:32:46,160
from the logs. And what we
said is why do we need the pipeline

362
00:32:46,200 --> 00:32:52,480
in the first place. We have
an agent next to the logs that can

363
00:32:52,759 --> 00:32:58,640
query the logs in real time,
extract all the information needed and report it.

364
00:33:00,119 --> 00:33:02,720
So why move the logs
from there? Let them be there.

365
00:33:05,079 --> 00:33:07,359
Yes, that's the same approach
as you were talking about with the

366
00:33:07,720 --> 00:33:12,200
metrics collection. You can
just take a little bit of the CPU

367
00:33:12,319 --> 00:33:15,279
and the memory that's not being used
on a system and just do the work

368
00:33:15,400 --> 00:33:22,839
there rather than the transport method of
shipping it off someplace and dealing with this

369
00:33:22,920 --> 00:33:27,039
stuff in volume. Of course,
we have centralization points, so if you

370
00:33:27,079 --> 00:33:30,799
have ephemeral nodes, you have a
Kubernetes cluster, you can have a Netdata

371
00:33:30,920 --> 00:33:32,960
parent; this is how we call
it. It's the same software, but

372
00:33:34,079 --> 00:33:37,559
you have a parent now, so
all the ephemeral nodes push metrics to it

373
00:33:37,640 --> 00:33:40,279
in real time. But these are
small and only to the degree needed,

374
00:33:40,759 --> 00:33:45,519
so you don't need to centralize your
entire infrastructure to this. And you

375
00:33:45,640 --> 00:33:49,920
can have multiple centralization points, one
here, one there. If you

376
00:33:49,960 --> 00:33:52,759
If you
use a hybrid cloud, you can have

377
00:33:52,920 --> 00:33:55,519
one on AWS, one on
GCP, one on Azure, one on

378
00:33:55,720 --> 00:34:00,440
prem, whatever. That's the flexibility that
you have. You can have the same

379
00:34:00,559 --> 00:34:06,559
with the systemd journal: you can have
centralization points. The systemd journal methodology is not

380
00:34:06,559 --> 00:34:10,000
like Netdata's; it uses systemd-journal-remote
and systemd-journal-upload, but

381
00:34:10,280 --> 00:34:16,079
it's exactly the same philosophy: centralize to
the degree required for your operational needs.
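The parent/child setup being described here is configured through Netdata's stream.conf on both sides. A minimal sketch, with a placeholder API key and hostname (exact defaults may differ by version, so check the streaming documentation):

```ini
# Child (the ephemeral node): /etc/netdata/stream.conf
# Pushes its metrics in real time to the parent below.
[stream]
    enabled = yes
    destination = parent.example.com:19999
    api key = 11111111-2222-3333-4444-555555555555

# Parent (the centralization point): /etc/netdata/stream.conf
# Accepts streams from children presenting this API key.
[11111111-2222-3333-4444-555555555555]
    enabled = yes
```

The parent keeps the children's data, so ephemeral nodes can disappear while their metrics survive, and multiple parents can coexist, one per cloud or site, as described above.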

382
00:34:16,760 --> 00:34:22,320
Yeah, and not because the system, the monitoring system requires it. So

383
00:34:22,400 --> 00:34:28,920
it's your operational needs that require some
centralization of some kind, gotcha. Yeah,

384
00:34:29,000 --> 00:34:31,960
So that puts the
decision back on you to meet the

385
00:34:32,039 --> 00:34:37,920
needs of your business rather than putting
the burden on you because of the tool

386
00:34:37,960 --> 00:34:42,639
that you chose. And also
you can centralize for high availability.

387
00:34:42,760 --> 00:34:45,800
Do you need high availability? Of
course, push the metrics somewhere, the

388
00:34:45,840 --> 00:34:49,320
logs, push them somewhere, to have two
copies of them, of course. But

389
00:34:49,559 --> 00:34:54,119
this is the idea. You centralize
for operational needs, not because the monitoring

390
00:34:54,159 --> 00:35:01,119
system mandates it, right, Yeah, yeah, I got you. That's

391
00:35:01,239 --> 00:35:06,480
wild. This is
just... I feel bad saying this, but

392
00:35:06,559 --> 00:35:10,199
it really is mind blowing to me
because it just makes so much sense.

393
00:35:12,159 --> 00:35:19,280
And I'm just shocked that
it's so obvious. But here we are.

394
00:35:20,880 --> 00:35:23,199
That's the beauty of it. You
know that most of the monitoring systems,

395
00:35:23,239 --> 00:35:29,639
you know, the monitoring systems evolved. There is a Initially we had

396
00:35:29,800 --> 00:35:34,840
Nagios and the likes, which were check
based. So they run a check,

397
00:35:35,519 --> 00:35:37,840
they take a result. Together with
the result, probably they have one or

398
00:35:37,880 --> 00:35:44,599
two metrics or some log line or something, a string to say what's wrong,

399
00:35:45,119 --> 00:35:50,800
and the check has a status, one or
more statuses: it may be healthy

400
00:35:50,840 --> 00:35:54,760
or unhealthy or warning or whatever.
All the systems, Nagios, Zabbix,

401
00:35:55,000 --> 00:36:00,960
Sensu, Icinga, SolarWinds and the likes, all of them are

402
00:36:00,079 --> 00:36:06,920
in this first generation of check
based systems. Right. Then the world went

403
00:36:07,000 --> 00:36:15,480
to a technology that is metrics and logs, so metrics databases, logs databases.

404
00:36:15,559 --> 00:36:20,039
It's not checks anymore. We gather
the information, we put it in the

405
00:36:20,119 --> 00:36:23,719
database, and the go analyze the
database. The problem with this is that

406
00:36:24,079 --> 00:36:29,400
it struggles at scales. It is
expensive generally to run, so you need

407
00:36:29,480 --> 00:36:37,000
to filter, you need to slower
the resolution. Cherry pic metrics lose some

408
00:36:37,199 --> 00:36:43,000
metrics some information in order to have
a performance system and a performance centralized system.

409
00:36:43,679 --> 00:36:47,360
Of course, this philosophy, the guys at Datadog and Dynatrace and

410
00:36:47,360 --> 00:36:52,519
the likes, they took it and
they built integrated environments, very nice,

411
00:36:52,920 --> 00:36:58,239
very nice integrated environments.
So what we do is, I think,

412
00:36:58,400 --> 00:37:02,519
it is the next generation. It's
the next evolution. So we take this

413
00:37:02,679 --> 00:37:07,400
philosophy of metrics, logs, et
cetera, et cetera, where you don't

414
00:37:07,440 --> 00:37:13,079
do checks right, but the checks
are on the logs and the metrics and

415
00:37:13,199 --> 00:37:19,280
the data, and we did it
distributed. So we did it in

416
00:37:19,360 --> 00:37:24,480
a way that it's still integrated,
it's still one infrastructure, but we eliminated

417
00:37:24,880 --> 00:37:31,840
all the problems there that forced people
so far to provide low resolution insights or

418
00:37:32,199 --> 00:37:37,840
eliminate several insights from it, et
cetera. And I think that we did

419
00:37:37,880 --> 00:37:42,519
it in a very efficient way.
So the way I think of it is

420
00:37:42,559 --> 00:37:46,719
that Netdata is the next evolution
of monitoring systems, and it's nice to

421
00:37:46,800 --> 00:37:52,320
see. You know, one aspect
that we didn't discuss so far is the

422
00:37:52,400 --> 00:37:57,639
following. You know, monitoring systems are metrics, logs,

423
00:37:57,679 --> 00:38:00,440
traces. We don't do traces yet; I think we will start soon,

424
00:38:00,639 --> 00:38:05,119
and it's going to be also distributed. But metrics, logs, let's say,

425
00:38:05,119 --> 00:38:08,599
for everybody; and traces when you
have microservices, developing microservices.

426
00:38:10,000 --> 00:38:15,519
what we found out is that there
is a lot of information that is neither:

427
00:38:16,000 --> 00:38:20,920
It's not metrics, it's not logs, it's not traces. It's all

428
00:38:20,960 --> 00:38:23,519
the sockets, for example,
all the network connections that the system has,

429
00:38:24,000 --> 00:38:30,440
it's all the processes that run, it's all the files

430
00:38:30,480 --> 00:38:37,239
that are open. So the idea
is that we created a mechanism where

431
00:38:37,239 --> 00:38:43,159
our collectors, the plugins that collect
data, connect to a Postgres,

432
00:38:43,320 --> 00:38:46,719
for example, to collect
some metrics. But they also expose

433
00:38:46,800 --> 00:38:52,719
a function that allows the dashboard to
say, okay, show me the slow

434
00:38:52,800 --> 00:38:58,440
queries of Postgres. So query Postgres for
the slow queries and give me a list

435
00:38:59,360 --> 00:39:02,239
of the slow queries that are
currently running. Similarly, give me all

436
00:39:02,280 --> 00:39:07,400
the network connections, or the outbound connections,
or the inbound connections, or the listening

437
00:39:07,519 --> 00:39:12,000
sockets. Show me all the processes
that are running. So the idea is

438
00:39:12,079 --> 00:39:20,000
that at the end we created a
monitoring tool; the original idea, where

439
00:39:20,039 --> 00:39:24,360
you're trying to kill all the consoles. We still do, right? The consoles?

440
00:39:27,079 --> 00:39:30,920
Yeah, because that's again
additional context. You know, you

441
00:39:30,039 --> 00:39:37,400
get your list of slow queries from
Postgres and eventually, if you don't quickly

442
00:39:37,480 --> 00:39:40,280
see, oh well, this is
a query problem because we're missing an index

443
00:39:40,480 --> 00:39:45,519
or whatever, you do end up
you know, SSHing into that system and

444
00:39:45,679 --> 00:39:51,199
running top and checking the open file
handles and all of that stuff. Because

445
00:39:52,000 --> 00:39:59,239
no one can afford to run full
time monitoring of that level of detail using

446
00:39:59,480 --> 00:40:02,639
some of the other monitoring tools;
you would bankrupt your company trying

447
00:40:02,639 --> 00:40:07,719
to collect and store all of that
data exactly, and this is exactly what

448
00:40:07,000 --> 00:40:13,199
Netdata changes. So it's
more cost efficient to run Netdata

449
00:40:13,519 --> 00:40:20,039
than running anything else commercially. It's
more cost efficient because you don't need resources,

450
00:40:20,079 --> 00:40:25,079
you don't need skills. It just
works. Yeah. And the

451
00:40:25,159 --> 00:40:30,599
information is just there when you need
it. But it's not hurting when you

452
00:40:30,679 --> 00:40:34,159
don't need it. It's not costing
you when you don't need it. Exactly.

453
00:40:34,719 --> 00:40:42,880
Wow. So what does the
onboarding process look like to start using

454
00:40:43,039 --> 00:40:47,800
Netdata? So today,
there is the open source agent.

455
00:40:47,960 --> 00:40:53,960
We have two paths for how people
can join the community. Two paths.

456
00:40:54,119 --> 00:41:00,519
One is, you are a DevOps
fan, an open source fan or enthusiast, whatever.

457
00:41:01,039 --> 00:41:06,320
You go and install the Netdata
agent. The agent is monitoring in

458
00:41:06,360 --> 00:41:08,519
a box, so by itself, on
a single node, it

459
00:41:08,639 --> 00:41:12,800
will do everything: dashboards,
alerts, everything, it will do

460
00:41:12,920 --> 00:41:15,360
them for you. That's one path; then you do the

461
00:41:15,400 --> 00:41:21,119
second installation, the third, and then you
realize that you can build parents, or you

462
00:41:21,199 --> 00:41:25,320
can use Netdata Cloud to unify an
infrastructure. The other path is

463
00:41:25,440 --> 00:41:30,519
companies that are looking
to replace their monitoring system. They go

464
00:41:30,639 --> 00:41:35,679
find us through Google or
whatever; they find our site. So

465
00:41:35,840 --> 00:41:39,599
they go to our site. There
is a trial where they sign up, to

466
00:41:39,880 --> 00:41:46,320
Netdata Cloud this time, and Netdata Cloud
instructs them to install agents with certain keys,

467
00:41:46,320 --> 00:41:52,159
et cetera. In short, in
order to link them with their own space,

468
00:41:52,199 --> 00:41:58,440
their own account. So in both
cases you start for free. Netdata

469
00:41:58,480 --> 00:42:02,960
Cloud does not centralize data. So the data is still distributed,

470
00:42:04,039 --> 00:42:07,199
but Netdata Cloud is a layer that
builds a map of your topology, so

471
00:42:07,320 --> 00:42:12,119
it knows where the nodes are and
what retention they have and what metrics they

472
00:42:12,199 --> 00:42:16,760
have, without centralizing the metrics themselves, the
values. And then when you go to

473
00:42:16,920 --> 00:42:21,679
query data, Netdata Cloud says, okay, I'm gonna query this

474
00:42:21,880 --> 00:42:24,519
and that node to get the data, merges the data, presents it to you.
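Behind the scenes, each agent answers such queries over its local REST API (the agent's /api/v1/data endpoint). A small sketch of parsing the kind of response it returns; the values here are made up and the exact envelope varies with the query's format parameter, so treat this as illustrative:

```python
import json

# Illustrative response shape for a data query against one agent
# (values are invented; real responses depend on the format parameter).
sample = json.loads("""
{"labels": ["time", "user", "system"],
 "data": [[1700000002, 3.1, 1.2],
          [1700000001, 2.9, 1.4]]}
""")

# Newest sample first: total CPU busy for the latest second.
latest = sample["data"][0]
busy = sum(latest[1:])
print(f"latest busy: {busy:.1f}%")
```

Netdata Cloud's role is then only to fan such queries out to the right nodes and merge the results, rather than to store the samples itself.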

475
00:42:25,880 --> 00:42:30,639
Okay, all of this is in real
time. All of this, yeah: there's

476
00:42:30,679 --> 00:42:37,039
no latency there. All of this is
very, very quick. So the

477
00:42:37,159 --> 00:42:40,480
idea is that people can start either
from the open source world, go to

478
00:42:40,519 --> 00:42:46,119
GitHub, download the software, install
it, or they can from a commercial

479
00:42:46,199 --> 00:42:52,800
world, go to Netdata Cloud, sign
up for a trial. Again, you're gonna

480
00:42:52,800 --> 00:42:55,880
download the agents, et cetera.
So Netdata Cloud, our commercial offering,

481
00:42:55,960 --> 00:43:00,239
let's say, uses the agents as
a distributed database, so we don't have

482
00:43:00,880 --> 00:43:06,159
different enterprise agents. It's the same
thing as the open source software. But

483
00:43:06,360 --> 00:43:10,679
what it provides is the following.
First, it allows you to

484
00:43:10,719 --> 00:43:15,519
scale your infrastructure horizontally. So if
you don't use Netdata Cloud, the

485
00:43:15,639 --> 00:43:20,679
only thing you can do is build a bigger parent, bigger and
486
00:43:20,760 --> 00:43:24,239
bigger and bigger to aggregate all the
infrastructure there. Like the old traditional centralized

487
00:43:24,239 --> 00:43:29,920
systems. It scales better than them, but still, it's one system. This

488
00:43:30,800 --> 00:43:35,480
is one way. With Netdata Cloud, you can have as many independent centralization

489
00:43:35,639 --> 00:43:39,159
points or individual standalone servers, and
all of them will become one at query

490
00:43:39,239 --> 00:43:44,639
time. The second is access from
anywhere. With the agents, you need

491
00:43:44,679 --> 00:43:47,519
to hit the IP of the server
to access the dashboard. With Netdata

492
00:43:47,559 --> 00:43:51,239
Cloud, it doesn't matter: you
log in to Netdata Cloud and

493
00:43:51,519 --> 00:43:55,599
then you access everything. Netdata
Cloud gives you a mobile app for push

494
00:43:55,639 --> 00:44:00,480
notifications, so all the alerts across
the infrastructure will be pushed to your mobile

495
00:44:00,519 --> 00:44:08,280
app, Android and iOS. And also
it dispatches alerts centrally. So instead of

496
00:44:08,320 --> 00:44:14,079
all the agents dispatching alerts
to Slack or PagerDuty or whatever,

497
00:44:14,199 --> 00:44:20,400
email, whatever you use: Netdata
Cloud receives all the transitions, deduplicates them,

498
00:44:20,840 --> 00:44:25,119
and then dispatches the alerts centrally.
There is a free tier, too, for

499
00:44:25,679 --> 00:44:30,679
home users, et cetera. There
is a small free tier on Netdata Cloud.

500
00:44:31,119 --> 00:44:37,480
But overall, you know, mainly
because we decoupled the cost of observability

501
00:44:38,760 --> 00:44:45,920
from the monitoring itself, it is
significantly more cost efficient. So

502
00:44:46,119 --> 00:44:51,559
if you go for the commercial offering... if you go to Datadog and the

503
00:44:51,639 --> 00:44:58,000
likes, you start at twenty to
thirty dollars a month per node, right?

504
00:44:59,159 --> 00:45:02,960
Oh wow. And even with Data-
dog, like, when you start at that

505
00:45:04,159 --> 00:45:13,400
level, you quickly
learn that that's only the monitoring costs.

506
00:45:13,480 --> 00:45:15,639
Like once you start bringing in the
data, there's data charges as well,

507
00:45:15,760 --> 00:45:25,719
and those others... I have it actually here,
because we did this comparison about the resources

508
00:45:27,360 --> 00:45:32,599
that Datadog, for example, needs versus
Netdata. CPU usage of the Datadog

509
00:45:32,679 --> 00:45:39,639
agent: fourteen percent; Netdata: three
point six. Memory usage: Datadog almost a

510
00:45:39,679 --> 00:45:45,239
gigabyte, nine hundred and seventy two megabytes;
Netdata, one hundred and eighty

511
00:45:45,280 --> 00:45:54,679
one. Egress per node: eleven gigabytes per month
per node. Netdata: nothing; it doesn't need

512
00:45:54,800 --> 00:46:05,159
egress bandwidth. So
it's more expensive and you put more resources

513
00:46:05,199 --> 00:46:13,960
to it, compared to Netdata. Yeah, wow. So,
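Taking the figures just quoted at face value (they come from Netdata's own comparison, so treat them as one side's benchmark), the per-node overhead gap works out to roughly:

```python
# Agent resource figures as quoted in the episode
# (Netdata's own comparison, not an independent benchmark).
datadog = {"cpu_pct": 14.0, "mem_mib": 972.0, "egress_gib_month": 11.0}
netdata = {"cpu_pct": 3.6, "mem_mib": 181.0, "egress_gib_month": 0.0}

cpu_ratio = datadog["cpu_pct"] / netdata["cpu_pct"]   # roughly 3.9x the CPU
mem_ratio = datadog["mem_mib"] / netdata["mem_mib"]   # roughly 5.4x the memory

print(f"CPU ratio: {cpu_ratio:.1f}x")
print(f"Memory ratio: {mem_ratio:.1f}x")
```

The egress difference doesn't reduce to a ratio, since the quoted Netdata figure is zero; it simply has no per-node egress charge in this comparison.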

514
00:46:14,079 --> 00:46:16,280
You know, we mentioned early on
in the podcast that your background started with

515
00:46:17,400 --> 00:46:27,159
retail transactions point of sale systems,
which are very high throughput and if you've

516
00:46:27,159 --> 00:46:35,639
ever worked in retail with a customer, they're not the most... they're not the

517
00:46:35,800 --> 00:46:37,719
easiest things to deal with. So
you come from a high stress environment.

518
00:46:38,079 --> 00:46:45,239
Is that
indicative of the type of customers who really

519
00:46:45,599 --> 00:46:50,119
embrace net data or is there a
specific industry that you work really really well

520
00:46:50,159 --> 00:46:52,840
with. I think we have people, we have businesses from all over the

521
00:46:53,599 --> 00:46:59,280
all industries. So we have people
from health care, we have people from

522
00:46:59,400 --> 00:47:04,880
manufacturing, we have people from technology, a lot of technology of course,

523
00:47:04,960 --> 00:47:10,719
right, So I think that the
key point here is, Look, if

524
00:47:10,760 --> 00:47:15,639
you go to a DevOps guy that
wakes up at three AM...

525
00:47:15,800 --> 00:47:22,840
We have a motto like this that says: I love three AM. If

526
00:47:22,880 --> 00:47:27,079
you have a guy that wakes up
at three AM, you have to understand

527
00:47:27,159 --> 00:47:34,280
that at three AM he's pissed
off. So you have to

528
00:47:34,440 --> 00:47:39,920
be real time. We have many
big companies, really among the Fortune five

529
00:47:40,039 --> 00:47:49,760
hundred, say, companies that don't accept
the latency that others provide. Yeah,

530
00:47:50,159 --> 00:47:55,280
A minute of latency is what? No. At three AM, I will not

531
00:47:55,440 --> 00:48:00,880
wait one or two or three minutes
to see if I fixed it or not.

532
00:48:00,400 --> 00:48:05,599
I want to see it now. Yeah, okay. So that's the

533
00:48:05,719 --> 00:48:14,119
idea. People have to
value the fidelity, the insights, so they

534
00:48:14,239 --> 00:48:17,960
need to. It's like people that
want to learn more, to understand more,

535
00:48:19,079 --> 00:48:24,239
to feel the infrastructure more. These
are our best clients, best customers,

536
00:48:24,400 --> 00:48:30,480
best users. On the other hand, people that are afraid of

537
00:48:30,559 --> 00:48:32,519
this thing. If it works,
don't touch it; let's reboot it.

538
00:48:32,760 --> 00:48:37,400
That's the funny thing because many people
do that, Oh it doesn't work,

539
00:48:37,480 --> 00:48:44,079
let's reboot it. It's okay;
then Netdata is not that useful for

540
00:48:44,199 --> 00:48:47,599
you. You actually need the
first kind of monitoring, the check based

541
00:48:47,719 --> 00:48:57,440
systems, where for sure you
just have the light, on or not. So

542
00:48:57,559 --> 00:49:04,960
another thing that strikes me as
being really cool about this is that you've

543
00:49:05,000 --> 00:49:09,079
got all of the observability data
in one spot, because that's been one

544
00:49:09,079 --> 00:49:14,039
of the other challenges I've had over
the years is you get the alert,

545
00:49:15,000 --> 00:49:16,880
but now it's just an alert.
You don't really have the context, so

546
00:49:16,920 --> 00:49:22,079
you have to go somewhere else to
get the context of what, why

547
00:49:22,159 --> 00:49:27,119
did this thing alert? What does
this mean? And there's another spot for

548
00:49:27,320 --> 00:49:31,719
every alert that we see. We
have a community site that people can go

549
00:49:31,960 --> 00:49:37,400
and see what others did about this. So we have a CTA on

550
00:49:37,679 --> 00:49:45,480
the alert that goes to the forum
about this alert where people have this alert.

551
00:49:46,800 --> 00:49:51,000
We have an introduction there that
we wrote ourselves: this is what

552
00:49:51,079 --> 00:49:53,519
the alert means; this is
what you should do if you have

553
00:49:53,639 --> 00:49:57,320
this system, do like this; if
you have the other system, do like that.

554
00:49:57,440 --> 00:50:05,960
But then people discuss it in the forum. That's wild,
because, like, even, you know,

555
00:50:06,039 --> 00:50:08,079
there's just you get alerts for all
kinds of things, and then you add

556
00:50:08,159 --> 00:50:13,639
to that that you're getting paged at
three AM. You know, you might

557
00:50:13,840 --> 00:50:19,880
need a little help there
to understand the context. So the thing with

558
00:50:19,920 --> 00:50:22,920
Netdata is that the community is
vast. So far we count more

559
00:50:22,960 --> 00:50:30,559
than ten million users. The community
grows with about five to ten thousand new

560
00:50:30,719 --> 00:50:36,039
users a day. Even Docker Hub
downloads, for example: we have about

561
00:50:36,119 --> 00:50:43,639
one hundred fifty to two hundred thousand
Docker Hub downloads every day. Even on our

562
00:50:43,800 --> 00:50:47,079
SaaS offering, we have about one
hundred fifty to two hundred sign-ups every

563
00:50:47,159 --> 00:50:54,480
day. That's a
lot, for any project. And

564
00:50:55,000 --> 00:51:00,159
not only that, for example,
the love that we see from users is

565
00:51:00,199 --> 00:51:05,320
extreme. Netdata, in terms of
user love, stars for example: we

566
00:51:05,599 --> 00:51:10,559
lead the CNCF observability category. We surpassed Elastic in October.

567
00:51:12,079 --> 00:51:16,519
Now we are leading the observability category,
although we are not incubated there. CNCF

568
00:51:16,559 --> 00:51:23,039
does not endorse Netdata.
But it's the most loved project. Let's

569
00:51:23,039 --> 00:51:28,760
say that. Yeah, that's just
crazy. I mean, just after

570
00:51:28,800 --> 00:51:32,679
talking with you, I can
see why you're getting those kinds of numbers.

571
00:51:34,000 --> 00:51:38,119
It's just I'm still just trying to
wrap my head around it. It

572
00:51:38,239 --> 00:51:42,199
almost seems too good to be true. I'll just be honest with you.

573
00:51:42,320 --> 00:51:45,599
It's like what's the catch? Yes, what's the catch here? I will

574
00:51:45,639 --> 00:51:50,280
tell you: Netdata is not
mature yet, so we're building. The

575
00:51:51,400 --> 00:51:54,880
dashboards are pretty new;
less than a year old. Even our

576
00:51:55,039 --> 00:52:00,920
database: it's the third version of
the database, we released it last

577
00:52:00,000 --> 00:52:06,239
year. So we
have not built that many tools on top

578
00:52:06,440 --> 00:52:10,320
of the monitoring infrastructure
yet. But the baseline is there.

579
00:52:10,840 --> 00:52:15,239
You have high fidelity, you know, unlimited metrics. You have all the

580
00:52:15,840 --> 00:52:20,840
building blocks to actually do the work. We are lacking in a lot of

581
00:52:21,000 --> 00:52:25,559
high level tools that we're building now. That's okay, yeah, yeah,

582
00:52:25,639 --> 00:52:29,840
for sure. The good thing is
that the foundation is very good.

583
00:52:30,400 --> 00:52:35,679
Yeah, and it's open source. What does the open

584
00:52:35,719 --> 00:52:38,519
source community look like? Do you
get a

585
00:52:38,559 --> 00:52:45,360
lot of pull requests and input?
I will tell you, we have a lot.

586
00:52:45,480 --> 00:52:51,480
We have about four hundred fifty to five
hundred contributors. Some of them, very

587
00:52:51,559 --> 00:52:54,320
few, are very skillful, because
we write in C. If you remember,

588
00:52:54,480 --> 00:52:59,599
the core is in C; this is pretty... it's not easy.

589
00:53:00,679 --> 00:53:06,559
In the early days, the
community helped a lot, because, come on,

590
00:53:07,039 --> 00:53:14,679
I used to be an engineer in
the nineties. Then suddenly in twenty fourteen

591
00:53:14,800 --> 00:53:19,480
fifteen, I became an engineer again, so I was rusty. You can

592
00:53:19,639 --> 00:53:22,880
understand that. I was out of technology
all these years; I was a COO

593
00:53:23,360 --> 00:53:28,440
all my time; it
was CEO stuff, things like this. But

594
00:53:28,639 --> 00:53:34,000
suddenly I had to write code myself,
and people helped me. So a lot

595
00:53:34,119 --> 00:53:38,000
of people stepped in and showed me
how this is done, and you know,

596
00:53:38,760 --> 00:53:45,400
pushed me a little bit beyond my
limits. All this happened. But

597
00:53:45,519 --> 00:53:50,519
I think that today Netdata
is a mature open source project. It's

598
00:53:50,639 --> 00:53:54,880
very robust; we also watch for
crashes and the likes of that. It's

599
00:53:55,519 --> 00:54:04,000
very nice, it's very reliable software. So I think as time passes it

600
00:54:04,119 --> 00:54:09,920
becomes incrementally more difficult for people to
contribute code. Yeah, and actually what

601
00:54:10,440 --> 00:54:15,519
happens now, we're going system the
Journal D. We submit it to system

602
00:54:15,679 --> 00:54:24,360
repositories patches to make the systemd
journal fourteen times faster. So we are

603
00:54:24,360 --> 00:54:30,159
also a community contributor to that thing. So the systemd journal that you

604
00:54:30,280 --> 00:54:32,920
have, in the next version
that you're going to have, it's going

605
00:54:34,000 --> 00:54:39,159
to have patches from Netdata inside,
to be fourteen times faster than it was.

606
00:54:39,840 --> 00:54:44,920
See. And that's cool. That part, that part of open source really

607
00:54:45,159 --> 00:54:49,559
gets me excited. You know,
where you're consuming an open source thing

608
00:54:49,840 --> 00:54:53,800
and you're like, oh, this can
be better, and rather than forking it

609
00:54:54,039 --> 00:54:58,199
or creating your own, like,
contributing back to that, I think is

610
00:54:58,440 --> 00:55:04,960
a much better idea. You want the
software that you depend on

611
00:55:05,039 --> 00:55:09,679
to be maintained and high quality.
And actually, the work of applying

612
00:55:09,840 --> 00:55:15,639
patches to your own version, come on, that's not good. Yeah, it's

613
00:55:15,760 --> 00:55:19,440
better to have it there and have them
maintain it from now on, for sure.

614
00:55:19,559 --> 00:55:23,119
Yeah, because yeah, I've seen
that trend over the last few years of

615
00:55:23,199 --> 00:55:29,880
people patching or forking or creating a
competing product, and I just I think

616
00:55:29,960 --> 00:55:31,920
we would all be so much better off
if you just contribute back upstream,

617
00:55:31,960 --> 00:55:39,039
because you get all of the benefits
of your work, plus the benefits that

618
00:55:39,079 --> 00:55:43,679
everyone else in the community is contributing
as well. If you think of it

619
00:55:43,800 --> 00:55:45,239
Netdata, at the end of the day,
because it's monitoring out of the box,

620
00:55:45,280 --> 00:55:51,239
it's opinionated monitoring. So what
happens is that when you install Netdata

621
00:55:51,400 --> 00:55:58,280
in your infrastructure, your monitoring team
is Netdata. We are your monitoring

622
00:55:58,360 --> 00:56:02,320
team. You're just a consumer,
right, using it the same way we

623
00:56:02,599 --> 00:56:10,800
want systemd to be there for
us. Exactly. It's utilizing the whole community

624
00:56:12,440 --> 00:56:19,159
in order to provide high quality to
end users. Yeah, and that's

625
00:56:19,320 --> 00:56:23,960
that's really key, because very few of
us, well, none of us really

626
00:56:24,239 --> 00:56:30,360
except for the people who work on systemd
are getting paid to create,

627
00:56:31,239 --> 00:56:37,159
you know, the journal. Like, our
customers aren't interested in our journaling, they're

628
00:56:37,199 --> 00:56:40,000
interested in the product. I
think for projects like systemd, this

629
00:56:40,159 --> 00:56:45,280
is easy because most of the engineers
that work there are paid by other companies

630
00:56:45,320 --> 00:56:49,920
by their company to work on that
thing. So companies, Red Hat or

631
00:56:50,239 --> 00:56:55,800
whoever, this or that, they contribute
resources that work dedicated on this stuff.

632
00:56:57,360 --> 00:57:00,360
Yep, okay. So their
job is to work on systemd,

633
00:57:01,360 --> 00:57:07,000
but they work for some other company
that uses systemd, right? For a startup

634
00:57:07,079 --> 00:57:10,719
like us, because we are a startup
still, we just started

635
00:57:10,760 --> 00:57:16,480
monetizing five months ago. Oh wow, okay, this is pretty new.

636
00:57:16,960 --> 00:57:23,159
Yeah, very. So for us,
we cannot dedicate that many resources. But

637
00:57:23,480 --> 00:57:28,719
for sure, the whole point is
the community. The whole point is to

638
00:57:29,039 --> 00:57:34,360
aggregate a lot of... to unify the
community, gather together a lot of value

639
00:57:34,800 --> 00:57:43,679
for the users. Yeah, so,
early-stage startup. What's the future?

640
00:57:43,719 --> 00:57:49,559
What's on your roadmap
for Netdata? Oh, you know,

641
00:57:51,079 --> 00:57:57,000
monitoring is endless, endless, and
there is no way for this to finish.

642
00:57:58,000 --> 00:58:02,280
What we are trying to do currently
is to cover as much as possible, because we want to

643
00:58:02,480 --> 00:58:08,000
grow our sales. We are not
sustainable yet, so we still need

644
00:58:08,079 --> 00:58:15,480
to prove that Netdata is sustainable
and it deserves more. So within this

645
00:58:15,639 --> 00:58:22,119
process, the idea is that currently
we are looking to help the users that

646
00:58:22,239 --> 00:58:28,440
have a budget. This is our
main thing because this will allow us to

647
00:58:28,480 --> 00:58:34,159
survive. Once we pass through this, I think the next steps will be

648
00:58:36,400 --> 00:58:40,199
to address tracing, provide the high
level tools that people need, become more

649
00:58:40,320 --> 00:58:45,079
contextual in the user interface, and
the likes. I think that Netdata

650
00:58:45,400 --> 00:58:52,920
has the winning mix, the winning
combination in product design: ease of use, higher

651
00:58:52,920 --> 00:58:58,760
resolution, higher insights, higher fidelity.
The overwhelming part, as you said,

652
00:58:58,840 --> 00:59:01,079
we already know how to fix it, so people won't feel overwhelmed

653
00:59:01,239 --> 00:59:07,000
by the amount of information if we
switch to a more contextual approach at the

654
00:59:07,079 --> 00:59:14,920
presentation level. So we're going to
switch from overwhelming to very deep. Right,

655
00:59:15,679 --> 00:59:21,480
So these are changes that
we need to do as we progress.

656
00:59:21,719 --> 00:59:23,960
But I think the first thing is
for people to try Netdata,

657
00:59:24,239 --> 00:59:30,199
to see Netdata, and if they use
Netdata in production environments for commercial

658
00:59:30,239 --> 00:59:34,639
purposes, to buy a license, because
this will allow us to continue. We

659
00:59:34,639 --> 00:59:38,199
are here
to provide value. And for sure,

660
00:59:38,800 --> 00:59:43,719
if people don't buy the value,
then what are we doing here? Yeah?

661
00:59:44,159 --> 00:59:46,320
No. And it's
a proven business model, you know,

662
00:59:46,480 --> 00:59:52,079
Datadog and Dynatrace, those
other companies have proven that

663
00:59:52,159 --> 00:59:57,559
there is a market for this,
and so if you're able to tap into

664
00:59:57,639 --> 01:00:04,679
that market and provide better service at
a cheaper price. Yes, it's not only

665
01:00:05,079 --> 01:00:09,559
cheap, it's the overall cost that
is better, all of it across the

666
01:00:09,599 --> 01:00:15,599
board, so even cost optimization.
We see a trend for the unification of

667
01:00:15,639 --> 01:00:22,400
the monitoring tools, mostly because
monitoring tools are not that comprehensive.

668
01:00:22,559 --> 01:00:28,000
Let's say they have certain aspects that
are very good, but they're very

669
01:00:28,039 --> 01:00:31,679
bad at others. Mostly for this,
people use a lot of tools, right,

670
01:00:32,679 --> 01:00:37,119
so people are trying to consolidate tools. They are trying to save money,

671
01:00:38,119 --> 01:00:42,800
become more efficient in troubleshooting. This
is what we do. This is

672
01:00:42,920 --> 01:00:45,960
exactly what we're trying to do with
Netdata: simplify monitoring, make it

673
01:00:46,000 --> 01:00:52,000
accessible to everyone, let them focus
on their job, not on a monitoring tool,

674
01:00:52,360 --> 01:00:54,440
nothing to learn, just some familiarity
with the tool and you're done.

675
01:00:55,440 --> 01:01:00,559
Stuff like this, for sure. Yeah, like, simplification of the tool is huge

676
01:01:00,639 --> 01:01:08,280
because we're all getting increased responsibilities and
increased scope of work. And

677
01:01:08,360 --> 01:01:15,800
technology becomes more complex. Really.
Absolutely. As time passes, the complexity skyrockets.

678
01:01:17,079 --> 01:01:21,960
You cannot fit it in
your head anymore. So you need

679
01:01:22,079 --> 01:01:27,800
the tools to be smarter. Instead,
let's get smarter tools, tools that

680
01:01:27,960 --> 01:01:31,280
do stuff by themselves. Yeah,
and that to me, that's a huge,

681
01:01:32,280 --> 01:01:37,119
huge win here. Just providing the
information in context, what do I

682
01:01:37,199 --> 01:01:40,079
need to know at this given time, rather than me having to go find

683
01:01:40,159 --> 01:01:47,239
it, something that presents it, is huge. Well, this has been an eye-

684
01:01:47,320 --> 01:01:53,920
opening conversation. I'm excited.
I mean, monitoring, let's be

685
01:01:54,000 --> 01:01:57,800
honest, it's one of those hard
things to get excited about, but this

686
01:01:58,119 --> 01:02:04,519
is kind of exciting. It changes
the dynamics, it changes everything. Yeah,

687
01:02:04,679 --> 01:02:07,320
it does. I mean I think
back to like a few decades ago

688
01:02:07,519 --> 01:02:16,159
configuring Nagios and Zabbix and thinking about
how that just led to this here.

689
01:02:16,679 --> 01:02:22,440
It's pretty cool. It's definitely
something worth checking out. So thank you

690
01:02:22,519 --> 01:02:27,400
for joining me today, thank you
for being here. Thank you for inviting me.

691
01:02:28,440 --> 01:02:34,800
It was great. I hope you
enjoyed it. I did. I thoroughly

692
01:02:34,960 --> 01:02:39,800
enjoyed it because this is a perspective
on monitoring and observability that I never had.

693
01:02:40,440 --> 01:02:45,119
I feel like I'm walking away from
this conversation with a completely different set

694
01:02:45,159 --> 01:02:51,199
of goals for how I'm going to
approach this problem in the future. Like

695
01:02:51,360 --> 01:02:53,559
you know, especially
in the startup space, you hear the

696
01:02:53,719 --> 01:02:59,719
term disruptive thrown around a lot.
But I mean, this one, it kind

697
01:02:59,760 --> 01:03:05,239
of like it fits that category and
it's just such an odd place, like

698
01:03:05,639 --> 01:03:08,440
who thought, you know, you
could disrupt observability? But here we are.

699
01:03:09,239 --> 01:03:14,480
Yeah, let's hope it works.
It remains to be seen, because, you know, the

700
01:03:14,599 --> 01:03:19,960
devil is in the details, for sure. We work hard to smooth everything out

701
01:03:20,159 --> 01:03:24,239
to make it as perfect as possible. Yeah, but it's big. Just

702
01:03:24,239 --> 01:03:28,199
the integrations that we have, you
know, it's eight hundred integrations. Come

703
01:03:28,239 --> 01:03:32,119
on, and we're a
small team. We're thirty people. Wow,

704
01:03:32,440 --> 01:03:37,559
that's impressive. That's impressive. So
for all of our listeners, if

705
01:03:37,599 --> 01:03:43,760
they want to find out more about
Netdata, where can they go? Netdata dot

706
01:03:43,800 --> 01:03:49,840
cloud, or Netdata monitoring on Google, or search on GitHub. Netdata

707
01:03:49,880 --> 01:03:54,760
is at github dot com slash
netdata slash netdata. I think it's

708
01:03:54,800 --> 01:03:58,239
quite popular, so it's easy to
find. There's a big community, they

709
01:03:58,280 --> 01:04:01,840
speak about it, and we have Reddit
also; there is an r/netdata.

710
01:04:02,440 --> 01:04:08,960
So yeah. Right on, that sounds
good. Costa, thank you so much

711
01:04:09,000 --> 01:04:12,320
for joining me today. This has
been eye-opening and I really appreciate it.

712
01:04:12,559 --> 01:04:15,840
And to all the listeners, I
hope y'all found this useful. I hope

713
01:04:15,840 --> 01:04:17,840
you go check out Netdata.
I know I'm going to, and I will

714
01:04:17,920 --> 01:04:20,519
see y'all next week. Bye.
