00:00:19.600
We're going to be together for a little bit. So, um I really appreciate you all coming to this workshop today. Um I
00:00:28.000
think so. Are we a little early or is it the right time to start? We're good. Okay, let's cruise. All right. Um, so to
00:00:36.399
get started, Rosa gave me a lovely introduction, but just to reiterate, my name is Kayla. I work at New
00:00:42.800
Relic. Um, if you've ever installed the New Relic RPM gem, that's the project
00:00:47.920
that uh my team and I maintain. And then I'm also a maintainer on the two uh open
00:00:54.960
telemetry Ruby repositories. and we'll be using code from both of those repositories today.
00:01:01.760
So, um, open telemetry, rails instrumentation, they're all very vast topics. So, just to kind of narrow it
00:01:07.760
down today, what we're going to talk about is how to add open telemetry to a Rails app. Um, we're going to talk about
00:01:14.640
core signals, traces, metrics, and logs. We'll talk about how instrumentation
00:01:20.320
works as well. We're not going to get into some of the more advanced topics related to open
00:01:26.240
telemetry, including the collector, baggage, span links, propagators. We're
00:01:31.600
not going to do too much troubleshooting like how to use this data. Um, and we're also not going to talk about how to
00:01:37.119
choose a backend for your open telemetry provider. So now that we've kind of set those
00:01:43.920
expectations, um, so our agenda, first I'm going to give a lecture on the basics, make sure we're all on the same
00:01:50.000
page and in terms of the vocabulary. Um, then we'll start adding open telemetry
00:01:55.840
to a Rails application. Um, if you guys are on the Slack for this event, there
00:02:01.920
is a Slack channel specifically for this workshop. I believe it's called 2025
00:02:07.759
event Rails open telemetry workshop or something like that. And on there there's a pinned post with a link to a
00:02:15.680
repository that we'll go through and add things to. Um so if you haven't cloned
00:02:20.879
that I recommend cloning that during the lecture, though I'll you know share links as well on the screen when we get
00:02:27.040
to that point. Um and then we'll have a 10-minute break halfway through. There is a break slide. We'll see if it all
00:02:33.760
syncs up, but um then we'll continue with the uh you know app adding things
00:02:40.160
and then we'll kind of close with a little bit about ways to continue engaging with open telemetry and
00:02:46.000
learning more. I also wanted to note take care of yourself. 2 hours is a long time. It's in the middle of a long uh
00:02:52.640
conference overall. So, if you need to get up and leave or get water, um there is also a gist that's linked in that uh
00:03:01.200
Slack channel that I mentioned. And so, that should help you kind of jump back in and catch up on any steps that you
00:03:07.680
might have missed. All right, so let's get into kind of the
00:03:13.599
lecture termsy part. Um so, the first thing is what is observability?
00:03:21.760
I like to use this example of a car. Um, so you can be driving a car and maybe
00:03:27.120
your check engine light comes on and that's usually a cause for some sort of concern. You want to know more. Um, and
00:03:33.840
you'll generally take your car, if you're not skilled in mechanics like me, to a mechanic and they might pull out
00:03:40.080
one of these, a code reader. And this is something that you can plug into your car and the mechanic will usually get
00:03:46.799
some sort of number um that is very specific to the problem that is
00:03:52.159
happening with your car and will allow them to help diagnose what's going on. Observability does this for
00:03:58.720
applications. It's a practice to help understand the internal state of your system based on its outputs and it's
00:04:04.879
achieved by collecting data, also referred to as telemetry.
00:04:10.879
Now there's a lot of debate in the observability monitoring world right now about terms and which terms are best. So
00:04:19.040
um you know monitoring is kind of seen as the old way of doing things. You know what's happening, you know what could
00:04:26.320
be problems, and you track for that. You have very predefined metrics and dashboards, and maybe you'll deploy
00:04:32.880
changes to your application with additional logging and things if you have problems. Observability tries to
00:04:39.840
take things a step further so that you know why something is happening and not just what is happening. It tries to have
00:04:46.479
a broader um depth of information um so that you can answer any question
00:04:52.479
including unknown unknowns. And the idea here is to become proactive so that the
00:04:57.520
data you already need has been deployed and there's fewer of those incidents where you know you need to deploy
00:05:02.800
another copy of your app to help troubleshoot things. Um, and this book that I've linked at
00:05:08.160
the bottom, um, observability engineering goes in way deeper on on
00:05:14.160
these ideas and I highly recommend it if you're interested in this topic.
00:05:19.600
These are some traditional observability vendors. So, if you know you've installed maybe software from one of
00:05:25.840
these companies into your application or just seen them around, um, their
00:05:31.520
business is kind of in this observability and monitoring space.
00:05:38.000
So um traditionally the way that observability would work is that you would download a vendor specific
00:05:43.840
library. We call them agents but agents are now becoming many other things. So we'll see if that sticks. Um and then
00:05:51.840
you know that vendor specific library gives you a strong baseline of what's called instrumentation. You know
00:05:58.320
different telemetry that is connected to libraries that you use.
00:06:04.000
But inevitably those vendor specific libraries don't cover everything that's unique about
00:06:09.280
your domain and so you'll need to add API calls to get additional telemetry
00:06:14.319
and they're also usually vendor specific and then that code goes to a single
00:06:20.080
backend where that vendor can visualize your data and store it and also charge
00:06:25.120
you whatever they want in order to have those visualization and storage opportunities. So you can very easily
00:06:32.080
get locked in. Um and open telemetry is here to kind of solve some of those
00:06:37.600
problems. So what is open telemetry? Um I I like to use this example of the
00:06:46.240
many, many cords that at least I have in my drawer for all the different electronics that I have. You know, when
00:06:51.759
you have one cord per electronic device, things can get really messy, especially if you lose that cord. Um, the
00:07:00.080
way that traditional observability vendors work kind of feels aligned to this, where you need, you know, each one
00:07:05.280
of their tools and um all of their APIs inside of your application and it's kind of difficult to decouple the two. Open
00:07:12.960
telemetry is trying to be a little more like USB-C. You know, we're moving to a new standard so that hopefully we only
00:07:18.080
have to have one cord for all of our devices and it's ubiquitous.
00:07:24.000
So, OpenTelemetry is this vendor-neutral solution. It's a highly open source project where
00:07:31.440
people from companies all across the world contribute, both observability vendors and
00:07:36.800
what we call end users, people who install the vendors' tooling or observability tooling into their
00:07:42.960
applications. Um, and there are a lot of checks in place to make sure that no single company ever gains
00:07:50.639
a monopoly or control over the project. It's highly modular and customizable. Um,
00:07:57.120
you know some of those things that I mentioned we're not going to cover at the beginning allow you to really tweak the data that you um store and send and
00:08:05.520
um we'll get into a little more about how the open telemetry Ruby project is is structured to allow you to install
00:08:11.919
only what you absolutely want. There's also a public specification and
00:08:17.120
semantic conventions, and those are also being defined in this open source sense where it's a very
00:08:23.520
democratic conversation that everyone can join. Um, but the goal of this project is
00:08:28.720
to create a new standard for observability information. It wants observability to be built in and
00:08:34.000
ubiquitous and not something that you add to your application once it grows to a certain size. it wants you to have it
00:08:39.440
in your application on day one so that you can start that practice of keeping an eye on your application using
00:08:45.440
telemetry. Um it also deals with the collection and
00:08:50.560
standardization of the data and not the visualization and storage. So those traditional observability vendors that I
00:08:56.640
showed you before now they're needing to compete on their storage costs and their visualization strategies, the kind of
00:09:02.959
features that they can build on top of the data rather than the data that they collect.
00:09:09.839
So to kind of reiterate this a little bit um there are pros and cons on each side.
00:09:16.160
Some of the elements with the vendors are open source. Open telemetry is fully open source. The vendor code is usually
00:09:22.240
a lot more mature and so you know it will likely have bugs worked out or a
00:09:27.279
deeper feature set or more instrumentation. Open telemetry is still growing and developing,
00:09:33.519
but you can get locked into a product, and it can be difficult to walk away if your bill is no longer compatible with what
00:09:39.279
your company can afford or wants to pay for observability. Um, open telemetry is
00:09:44.480
is vendor neutral. Um, you know, vendors and I guess open telemetry, they both
00:09:50.800
have their own configuration options. Um, and and that's well documented. But
00:09:56.640
with vendors, there's a a single company driving the effort, whereas open telemetry is many companies coming together, which can create different
00:10:03.200
ideas. Um, I think one of the biggest differences between the two is that uh
00:10:08.399
vendors have in-house support teams that are often, you know, paid really well and available constantly. With open
00:10:14.880
telemetry, you have maintainer support. And so you have people who are working in the open source. It's not usually
00:10:20.480
their first job. It's kind of their second job or another tag on to what they do every day.
00:10:26.160
So the turnaround time for uh pull requests and support tickets can be a lot longer and you may be the best
00:10:34.000
solver of your problem and you know perhaps your PR comes in and that's that's what actually solves the issue.
00:10:41.279
So um so yeah, so we're going to talk a little bit more about open telemetry and see how you can use it in your Rails
00:10:47.360
application. The Ruby project in particular is what I've been working on, and the rest of this
00:10:55.680
will come a little later, but I hope you have a better sense now of what this project is like.
00:11:02.160
So let's get into the data that helps make observability possible. We have what we call core signals,
00:11:08.640
which are just kind of the data types to help with this
00:11:13.760
observability practice. Logs, metrics, and traces.
00:11:19.440
I'm sure most of you have heard of logs. Um, they're the oldest form of telemetry. Essentially, they're a timestamped
00:11:24.959
message. Uh they may be structured, they may be not. They're very useful for debugging specific issues and kind of
00:11:30.720
finding a narrative around whatever problem you're facing, but they are difficult to sort and correlate.
00:11:38.000
Metrics are measurements of your system over time, usually some sort of string attached to a number that gets
00:11:44.640
recorded at a certain interval. Open telemetry metrics are kind of unique in that they have what's called dimensions,
00:11:50.880
which basically means they have attributes. So you can go further into organizing your metrics and kind of
00:11:56.880
sorting through them. They're great for looking at trends, anomalies, response time, and system health.
00:12:05.760
Traces are the newest uh telemetry on the block, though they've been around for a while. You can think of them as an
00:12:12.800
end-to-end journey of your request through your system, where every step in that journey is a span. So when you
00:12:20.240
fire up localhost:3000 on Rails and load that first page,
00:12:26.160
it's not just Action Controller, it's Action View and your models and everything like that, and traces will
00:12:32.399
capture each stage of that process. They can also be distributed across services,
00:12:37.600
which is really helpful for the microservice architecture that many companies have moved towards, so that you can see how
00:12:43.600
your services interact. I would say a downside to tracing is that it's difficult to collect all of the traces
00:12:50.639
that are possible. So your traces may be subject to sampling which means you might not have every single request
00:12:56.160
that's recorded, just more of a sample of requests. And trying to
00:13:01.600
figure out you know what data is the most meaningful out of all those possible steps can also be tricky.
00:13:08.480
Um, so those core signals end up getting emitted from your application
00:13:14.880
through what we call instrumentation. So instrumentation is that code that records the telemetry. In open telemetry
00:13:22.320
semantic conventions help define what should be captured so that there's a standard that exists across similar
00:13:28.079
types and categories of libraries and also across languages.
00:13:33.360
And then in open telemetry, there's APIs that you call that collect the data that are the exact same APIs that you would
00:13:39.360
call in your application if you wanted to add any custom instrumentation.
00:13:46.320
Okay, so that's the lecture part. Um it is time to take out your laptops if you
00:13:52.000
want to follow along. Um, we are going to look at an observability backend to
00:13:59.360
kind of see how our data comes through. Um, the application that we're going to use here is, uh, linked in that Slack
00:14:06.639
channel that I was telling you about. Um, you can clone it with this command if you'd like,
00:14:13.519
but um, does everyone have access to the Slack channel? Is there anyone who doesn't know? Okay, cool.
00:14:20.399
Um so what we're going to cover in this portion is we're going to talk about configuring the SDK. Then we will get um
00:14:28.320
into instrumentation custom spans and attributes and then we'll talk about metrics logs and at the end hopefully
00:14:35.279
we'll have time to get a little deeper and talk about active support notifications and add our own
00:14:41.040
subscription uh for one. The way that this is going to work is
00:14:46.560
that we'll make a change. We'll talk about the change. I'll have some slides for it. You will start or restart your
00:14:52.560
server and there is a traffic script inside of the application that you can
00:14:57.680
click um that you can use or you can you know open up a UI in your browser and click around there to generate some
00:15:02.959
traffic. Then we'll hop over to the UI and um you know this is all these are
00:15:09.360
all steps that you could do in your own application. There are a few, you know, scenarios where we're updating specific
00:15:15.760
methods inside of this app. But if you wanted to think about maybe what might be a good fit for your application and
00:15:21.839
do the same things there, um, that's also completely fine.
00:15:27.360
The app itself, um, so in this repository, it's Rails 8. It's running Ruby 3.4.2 because I forgot to update to
00:15:34.160
3.4.4. Um, it's a very basic CRUD scaffold. There's very little extra on
00:15:39.360
top of it. Um it's a it's a hike tracker. So you have users and trails
00:15:44.560
and an activity belongs to a user and a trail. I think trails also have comments. Um the majority of this is
00:15:51.279
going to be focused on the activity resource. So the activity models and controllers.
00:15:57.440
There are also two directories in the home of that repository. There's the
00:16:02.560
hike tracker original and hike tracker instrumented. Original is what we're going to be
00:16:07.680
updating together. instrumented is like the answer key. So, if you don't want to
00:16:12.880
type everything out, you could copy and paste things across. Um, there's also a gist that I linked in that uh Slack
00:16:20.480
channel that has each of the steps we're going to follow in the code. So, you can also copy and paste those if that's a
00:16:26.639
little easier. And then that traffic generator script that I mentioned, it's simple. It's just making curl requests.
00:16:32.959
Um, and we have some kind of built-in errors that exist in this application. So we can take a look at those as well.
00:16:39.680
Um but it's just meant to be uh something very simple that will just hit the index and show pages.
00:16:46.399
So yeah, so I mentioned this before, but just remember the instrumented uh version of the app has the answers if
00:16:52.560
you if you need that. So the next part is about our telemetry destinations. We have our app. The app
00:16:59.120
will generate telemetry. We need something to view that telemetry. You can choose any vendor. If you're already
00:17:04.799
working with a vendor, I'm sure they have some great documentation about how to connect Open Telemetry to your app.
00:17:10.959
I'm using New Relic because that's what I'm most familiar with. Um, and they do have an opportunity for you to sign up
00:17:16.959
for a free account. If you do that, you will need a license key in order to follow along in the UI.
00:17:23.280
And here's a link to the other vendor options if you want to go that route.
00:17:28.480
And that link is also available in the gist. So, if you need to sign up for a New Relic account or another vendor, you
00:17:35.039
can do that with this link um for New Relic. And then if you do sign up for New
00:17:40.320
Relic, after you log in with your password and everything, there is a place for you to generate and copy your
00:17:46.559
license key um up on this top right hand corner.
00:17:52.080
So, I'm going to assume we're all or I guess is everyone good on the
00:17:57.120
observability vendor situation? Do people feel okay? Cool. Nice. Um, the
00:18:04.720
Ruby version is a suggestion, not a requirement. If you don't have 3.4.2 installed, just change your Ruby version
00:18:10.880
file. As long as you have something 3.1 or above, that should work. Um,
00:18:17.200
all right, let's get into it. So, um, CD into the workshop and, uh, then CD into
00:18:24.080
Hike Tracker original since that's where we're going to be updating things. Um, you can run bin/setup to bundle if you
00:18:31.760
haven't done that already, and it will also prepare your database. Um, I recommend that you run db:seed so that we
00:18:38.559
have some data to play around with; otherwise your curl requests in the traffic script will be very sad.
00:18:46.080
Um, cool. Okay. So, let's get into tracers and exporters.
00:18:51.200
So, open telemetry is set up in such a way that every single part of it is pretty much in its own gem. And right
00:18:57.679
now the main OpenTelemetry SDK and the main OpenTelemetry API gems
00:19:03.840
only include stable code and that's our plan moving forward. Right now the only
00:19:09.039
signal that is stable is traces. Metrics and logs are still experimental so they live in their own gems.
00:19:16.000
Um, so the first two pieces we're going to add are the SDK, which is what you need to generate traces and also run
00:19:22.240
some basic configuration on your app, and the opentelemetry-exporter-otlp gem.
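As a concrete sketch, these are the two gems just described; pin versions however you normally would:

```ruby
# Gemfile
gem 'opentelemetry-sdk'            # the SDK: implements the API and provides SDK.configure
gem 'opentelemetry-exporter-otlp'  # exports spans over OTLP/HTTP
```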
00:19:28.400
We'll talk a little bit more about that in a second. So the SDK, the way this works in OTEL, it installs the API and
00:19:36.320
it's an implementation of the API. So the API code in OTel is pretty much blank. You kind of just see the name of
00:19:42.640
the method and the arguments that are required; nothing else usually goes into it. You need an SDK to actually generate
00:19:49.280
the information. And the reason why open telemetry did that is they have this vision that there could be many possible
00:19:54.799
SDKs, as long as they all adhere to the specification, that are doing different kinds of things. Um, oh,
00:20:02.960
the other thing that I wanted to point out is that instrumentation only calls the API. It doesn't ever require the
00:20:08.640
SDK. Um so that that way you can have that kind of modular SDK experience.
00:20:16.000
The exporter is what converts the data into OTLP, which is the
00:20:21.280
OpenTelemetry Protocol. So OpenTelemetry has kind of established their own standard of the way that data should
00:20:27.360
look. Um, it's pretty much done with protobuf files. Uh, the gem that
00:20:33.039
I suggested everyone to install sends the data to a configured endpoint using HTTP.
00:20:38.240
There are other exporters available if you would rather take that approach. Zipkin and Jaeger are two other open
00:20:44.880
source options that we're compatible with. There's also a console exporter. So, if you don't want to use a backend
00:20:51.120
and just want to see data come through the console, I'll show you in a minute where you can make that adjustment. We
00:20:56.880
also have a gRPC um implementation, but it's experimental. So, we haven't released it
00:21:02.480
as a gem yet. It needs some testing. So if anyone's interested in writing gRPC testing that would be very welcome.
00:21:11.679
Cool. So we have our gems installed. The way that we configure open telemetry and add it to our application is by creating
00:21:18.320
an initializer. That's the most common process.
00:21:24.480
Inside of your initializer, you're going to call OpenTelemetry::SDK.configure. This has a ton of options,
00:21:31.440
but we're going to build on them individually.
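A minimal sketch of that initializer, before any options are layered in (the file name is just a convention):

```ruby
# config/initializers/opentelemetry.rb
require 'opentelemetry/sdk'

OpenTelemetry::SDK.configure do |c|
  # options get added here as the workshop goes on
end
```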
00:21:40.159
The configure method does a few things. So this is kind of more about the structure of OTel and how it works. Um
00:21:46.799
it will create and set what's called a global tracer provider. And this like provider model is kind of consistent
00:21:53.600
across the other signals as well. The tracer providers have an entry point for the API. They provide access to tracers
00:22:00.720
and they keep track of span processors. So tracers are the things that create
00:22:05.760
spans. Span processors are things that kind of batch up the spans or alter
00:22:11.280
spans to prepare them for export. There are other things that happen in
00:22:16.320
configure method too, but they're not really relevant to what we're doing today. Oh, and I have a slide about
00:22:21.840
this. So yeah, so tracers create traces. There's one tracer per scope. Um, so every instrumented gem has its own
00:22:28.640
tracer. Sometimes inside people's Rails apps, they will choose to create one tracer per area of their application.
00:22:36.159
Sometimes, you know, by like general resource or theme, sometimes all of their models have a tracer, all their
00:22:41.760
controllers have a tracer. It's kind of just whatever dimension, whatever way of slicing the data makes the most sense to
00:22:47.840
you, you can put everything in a single tracer. And that's what we'll do today. Um, yeah, I think everything about span
00:22:55.120
processors is pretty similar. So by default a batch span processor is attached and what that does is it sends
00:23:02.080
your spans to an exporter on an interval, and it will start dropping spans after the max size for that batch
00:23:08.960
is reached. This is what that code looks like in the
00:23:15.600
SDK. You do not need to add this, but um a processor has an exporter and they're
00:23:23.120
connected and then you add the span processor to the tracer provider and that's how the exporter ends up getting
00:23:28.960
linked with the tracer provider. This is important for you to know if you want to
00:23:34.480
eventually configure open telemetry to do other things. You can add your own span processor. You can have as many
00:23:39.919
span processors as you want. Um they are processed sequentially based on the order of arrays. So the order that
00:23:46.320
they're added. Um but you can have multiple exporters which can be really handy if you also want to send your data
00:23:52.559
to multiple backends. Maybe there's multiple vendors that you want to um you know visualize your data with.
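To make that wiring concrete, here's a rough sketch of attaching an extra span processor, and therefore an extra exporter, yourself. The batch processor plus OTLP exporter is what configure already sets up by default; the simple console pair below is just an example:

```ruby
OpenTelemetry::SDK.configure do |c|
  # A second processor/exporter pair, processed after the default one.
  c.add_span_processor(
    OpenTelemetry::SDK::Trace::Export::SimpleSpanProcessor.new(
      OpenTelemetry::SDK::Trace::Export::ConsoleSpanExporter.new
    )
  )
end
```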
00:24:00.880
The next thing we're going to do is we're going to configure with some environment variables. So right now um there is a lot of configuration that's
00:24:06.720
kind of standard across different languages in OTEL through environment variables. The Ruby implementation also
00:24:12.480
has that configure method we looked at earlier. There is a specification for file-based configuration coming soon. So
00:24:18.640
in a YAML file, you'd have the ability to um do some more specific configuration. And with these
00:24:25.600
environment variables for the workshop, I would recommend just adding them to your initializer file at the top above
00:24:31.520
the configure call, but you can also pass them on the command line if that's easier.
00:24:37.520
So, what we're going to do here, this is connecting us to the New Relic endpoint. If you're doing something with a
00:24:43.039
different vendor, they probably have a different endpoint headers situation.
00:24:48.240
Um, so the endpoint is specific to New Relic's OTLP data. And then the um the
00:24:56.000
headers, you just need to make sure that you pass your New Relic license key after api-key=, no spaces. And those two
00:25:04.159
things should be all you need to do to start sending data to New Relic.
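A sketch of those two environment variables, set at the top of the initializer as suggested. The endpoint shown is New Relic's US OTLP endpoint per their docs (check your vendor's docs for the right value), and the license key is read from a hypothetical NEW_RELIC_LICENSE_KEY variable rather than hardcoded:

```ruby
# Top of config/initializers/opentelemetry.rb, above the configure call.
ENV['OTEL_EXPORTER_OTLP_ENDPOINT'] ||= 'https://otlp.nr-data.net'
ENV['OTEL_EXPORTER_OTLP_HEADERS']  ||= "api-key=#{ENV['NEW_RELIC_LICENSE_KEY']}"
```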
00:25:09.600
I mentioned before you can have more than one exporter. You can update this
00:25:15.120
OTEL_TRACES_EXPORTER environment variable to send to both the console and
00:25:20.240
the OTLP using this example here. Um but if you don't want to send to the console
00:25:26.559
by default it will go to OTLP. So you can eliminate um that environment variable entirely if you want. The
00:25:34.000
console output can be very noisy, but it can also be fun to see the raw data.
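If you do want both destinations, this is the sketch of that variable in the same spot:

```ruby
# Send spans to the console as well as OTLP (noisy, but fun to watch).
ENV['OTEL_TRACES_EXPORTER'] ||= 'console,otlp'
```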
00:25:42.799
Oh, and this is what it would look like um if you were going to pass it to your Rails server line. But um we will add
00:25:50.880
more and more environment variables, and I'm not going to continue having examples for this.
00:25:58.799
So back to our configure block in our initializer, the next thing we're going to do is we're going to add some details for our
00:26:05.039
application uh including the service name and the service version. So um we
00:26:10.320
can call our app hike tracker. That'll make it easy to find in the UI and give it a version number if you want. Um this
00:26:16.880
could be maybe like a build number and an actual application. I think you could also use Rails application name here if
00:26:23.679
you didn't want to have a string.
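In the configure block, that looks roughly like this. The name and version strings are just examples; service name and version end up as resource attributes on your telemetry:

```ruby
OpenTelemetry::SDK.configure do |c|
  c.service_name = 'hike-tracker'  # how the app shows up in your backend's UI
  c.service_version = '1.0.0'      # e.g. a build or release identifier
end
```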
00:26:28.880
So resources are um the representation of the entity producing the telemetry.
00:26:34.000
They don't really have their own um like data type that comes through. I like to
00:26:39.919
think of them as just like a bucket of attributes that get attached to the other data types that we're already sending, and so they create
00:26:46.640
pretty consistent attributes across all the signals which help with that correlation when you're trying to solve
00:26:52.000
problems. And in the future, this could be changing. Like I mentioned, OTel is
00:26:57.600
still very much growing and changing. And there's talk of adding an entity that would be kind of in addition to a
00:27:03.919
resource. All right, we have the foundation down.
00:27:10.320
If you were to start up your application now and start uh generating some traffic, you would see nothing in your
00:27:18.240
um back end because up until this point, even though we've installed things, we don't actually have any calls to the API
00:27:24.960
to record telemetry. So to make it easy on ourselves, we're going to use this gem, open telemetry
00:27:31.279
instrumentation all that you can add to your gem file. And what it will do is it will install all of the available
00:27:37.919
instrumentation gems that the open telemetry ruby contrib repository has.
00:27:43.120
Um you know we have a lot of things you know for rails and different popular HTTP client libraries back background
00:27:50.720
jobs. I think most popular gems we have instrumentation for and we're always
00:27:56.080
looking to add more if folks are willing to help maintain the gem after they contribute it.
00:28:05.360
Um, so yeah, we talked a little bit about instrumentation earlier, code to add telemetry to your application. Um,
00:28:11.840
like I mentioned, it's available as individual gems or as an all gem. I'll show you an example for that in a
00:28:18.399
second. And there are three main strategies that instrumentation authors use in order to get this
00:28:24.320
instrumentation. They're called monkey patching; publish/subscribe, or pub/sub;
00:28:29.840
and native instrumentation. Monkey patching is um usually done
00:28:36.159
through module prepending these days, but if you work on an ancient uh Ruby agent like I do, you also have alias
00:28:42.480
method chaining. Module prepending is basically like cutting to the front of the line. When you prepend to the
00:28:47.840
module, your code is going to go first and also potentially last um in the full
00:28:53.279
ancestors chain. Alias method chaining is used for instrumentation by
00:28:59.440
renaming the original method to something like uninstrumented, redefining the original method name to
00:29:06.080
include your instrumentation code and then making sure that both of those aliases are available so that that way
00:29:11.679
um you're kind of tricking the computer into thinking that your um your instrumented method was in fact the
00:29:17.679
original one. Monkey patching is the riskiest form of instrumentation because the APIs that you patch can change and
00:29:24.960
we're starting to see it as kind of a last resort as better solutions are built in. If an API that you patch does
00:29:31.039
change, your data is not going to get sent anymore or sometimes it'll throw an error in your agent and that could
00:29:36.559
potentially crash your Rails application which is the thing I have nightmares about.
00:29:41.840
Um so here is an example of uh monkey patching um using module prepending.
00:29:49.520
This is the mysql2 gem. Uh, this is a gem that OpenTelemetry
00:29:54.559
instruments directly. And this query method is kind of the central
00:30:00.640
point for most queries that are made by mysql2. We try to find as few methods
00:30:06.799
as possible to add instrumentation to in open telemetry. That looks like
00:30:12.880
calling this patch client method that runs when um the instrumentation is
00:30:18.159
getting loaded, and we prepend a new module onto the Mysql2 client.
00:30:26.399
So that will change the ancestors chain to add our OpenTelemetry instrumentation in front of the Mysql2
00:30:33.200
client. Now what that actually looks like as instrumentation is that we have defined
00:30:40.960
the query method in our um in our patch
00:30:46.399
and we're calling a tracer that was created specifically for this instrumentation and then calling in_span,
00:30:52.640
which accepts a block, passing it attributes that have been defined by semantic conventions as well as this
00:30:58.559
span kind value, which kind of helps you organize your spans into client and server and different
00:31:06.640
messaging concepts. And then we call super so that everything goes back to mysql2. In this situation, we
00:31:14.080
don't need to do anything after we pass the baton back to the original code and
00:31:19.600
um your span will end when super is finished and then our end is called.
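Here's a rough, simplified sketch of that pattern. It is not the actual code from opentelemetry-instrumentation-mysql2, just the shape of it; the module name, tracer scope, and span name are made up:

```ruby
require 'opentelemetry/sdk'

module Mysql2QueryPatch
  TRACER = OpenTelemetry.tracer_provider.tracer('my-mysql2-patch')

  def query(sql, options = {})
    TRACER.in_span('mysql2 query',
                   attributes: { 'db.statement' => sql },
                   kind: :client) do
      super # hand the baton back to the original #query; the span ends when this block returns
    end
  end
end

# Prepending puts the patch ahead of Mysql2::Client in the ancestors chain,
# so the patched method runs first and reaches the original via `super`.
Mysql2::Client.prepend(Mysql2QueryPatch) if defined?(Mysql2::Client)
```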
00:31:27.039
Native instrumentation is where open telemetry really wants everybody to go. Um the idea is that you add APIs
00:31:33.360
directly to the library so that the library authors are paying attention to what needs to be instrumented and thinking about that and including that
00:31:39.760
as part of their maintenance. This would move towards OpenTelemetry's goal of ubiquity, and the way that looks,
00:31:47.519
I think, right now the example that I always think of is Elasticsearch. The elasticsearch Ruby gem boils
00:31:54.559
down to this elastic-transport Ruby gem, which includes this perform_request method that almost everything you're
00:32:01.760
going to do with Elasticsearch goes through. So they are calling these OTel
00:32:06.960
APIs specifically. They have some sort of configuration that makes sure that this code doesn't run if you're not
00:32:12.559
trying to use OpenTelemetry. Um, but you can see here Elasticsearch is now
00:32:18.159
working on updating the conventions and making sure that the attributes are accurate um instead of the open
00:32:23.919
telemetry maintainers. The pub/sub model is what Rails uses
00:32:30.080
with ActiveSupport::Notifications. It's great because there are public APIs, which gives greater confidence in the
00:32:36.000
consistency of methods and it may not capture everything that instrumentation authors want which can be the downside.
00:32:42.480
I think there's a few things in Rails specifically related to models that aren't quite captured by this. Um, and
00:32:49.279
so we will usually use monkey patching to take care of those edge cases.
00:32:54.799
So inside the Rails code, the way that this looks is calling ActiveSupport::Notifications.instrument, and then you
00:33:01.200
give it some sort of event name. So this is an example from the Rails API: you just call it with render and then you pass some
00:33:07.840
additional information to it and inside of the block you actually call the method that ends up getting called. So
00:33:13.760
very similar to the monkey patching, it's just defined here. Um, for process_action.action_controller,
00:33:20.320
which we're going to use a little later. Uh this is what the code actually looks like. And when you call super on line
00:33:27.039
76, that is when we pass the baton to the original
00:33:32.720
process_action.action_controller method that is doing the things that we know and love.
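That publish side looks roughly like this; it mirrors the example in the Rails API docs, and the payload is arbitrary:

```ruby
ActiveSupport::Notifications.instrument('render', extra: :information) do
  # the work you want measured happens inside the block
  render plain: 'Foo'
end
```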
00:33:39.679
The subscribe method is the next part of it. So the instrumentation authors will subscribe to the notifications so that
00:33:46.240
whenever those events run on the event loop, we can get this data and add it to the telemetry. Um, so with
00:33:53.279
process_action.action_controller, we want to know how long that part of the request took. So we'll usually grab the duration. Um, this is kind of a trivial
00:34:00.320
example where all that happens is we log an info message. But this is
00:34:06.720
essentially how it works.
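And the subscribe side, sketched with an illustrative log line; a single-argument block yields an ActiveSupport::Notifications::Event with timing and payload:

```ruby
ActiveSupport::Notifications.subscribe('process_action.action_controller') do |event|
  Rails.logger.info("#{event.name} took #{event.duration.round(1)}ms")
end
```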
00:34:13.679
Um, okay. So now that we know a little bit more about how instrumentation works, we're going to add the instrumentation to our application. You
00:34:20.079
need to add a call to use_all to your configure block in the OpenTelemetry
00:34:25.599
initializer in order to do this. And this will install all of the compatible gems. If you don't want to use_all and
00:34:32.800
go into that blindly, then instead you can call use, but you
00:34:39.280
have to specify every single gem that you want to install in this case. Um, so
00:34:45.599
it's kind of up to you as to whether you want more detail, less detail, more control, less control. Going back to the
00:34:52.879
previous slide, there are options to configure every single one of those gems. And um, for Rack, you know, if we
00:35:00.160
want to play around here and add a little bit of configuration, you can update this to um, allow some
00:35:08.400
response headers that will then get included on your Rack traces, and we'll take a look at those.
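Putting that together in the configure block, a sketch of both forms; the option key comes from the Rack instrumentation's own option definitions, and the header name is just an example:

```ruby
OpenTelemetry::SDK.configure do |c|
  # Install every compatible instrumentation, passing per-gem config as a hash:
  c.use_all('OpenTelemetry::Instrumentation::Rack' => {
    allowed_response_headers: ['Content-Type']
  })

  # Or, opt in gem by gem instead:
  # c.use 'OpenTelemetry::Instrumentation::Rack',
  #       allowed_response_headers: ['Content-Type']
end
```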
00:35:19.280
All right. Um, so if you were going to use the use method instead, you would um
00:35:25.520
pass the response headers a little bit differently by going to the library and adding a comma instead of making a hash
00:35:33.440
of all of the um configurations. Now finding the configurations is still
00:35:40.560
a little tricky with open telemetry. You may need to actually look inside the code of the gem to find all the
00:35:46.160
available options, which is the case with Rack. There are some gems where we have the configuration options defined
00:35:52.880
straight in the README. Um, but this is kind of part of OpenTelemetry: there's a little bit you have to do yourself and a little digging you may
00:35:59.119
need to do at this time. Um so these are all the possible options
00:36:04.800
that you could use. Um I'm hoping that we'll get better documentation someday, but this um this model is pretty
00:36:11.599
consistent for every bit of instrumentation. So if you find the
00:36:16.640
instrumentation library, so in this case, rack, it usually has a bit of a repeated structure to get to a second
00:36:23.599
instrumentation.rb file that will include these options.
00:36:29.599
Whoops. Um all right. Okay. Now we are finally
00:36:36.160
ready to start our rails server because we have added our instrumentation. So we should automatically get some data that
00:36:43.119
runs. Um so you can start your server and if you want to click around in the
00:36:48.320
UI um that will be helpful to kind of see um see what's going on in the
00:36:55.040
application firsthand. But just as well, you can run the traffic script
00:37:01.200
(it's a .sh file, not .rb) and get some traffic as well.
00:37:10.320
So while we're doing that, I'm going to
00:37:17.359
make a few updates to this. Go through some of our steps again.
00:37:31.280
So, we'll add our gems. Uh, the nice thing with bundle is you don't have to run bundle again
00:37:37.680
afterwards, but I do here.
00:37:48.160
So, these are the the three gems we're working from right now.
00:37:55.200
go ahead and bundle once I cd into the hike tracker.
00:38:07.280
Next step, we're going to add the OpenTelemetry initializer.
00:38:26.400
Call the configure method. Add our service name.
00:38:35.680
add our service version
00:38:41.760
and also add our instrumentation.
00:38:51.839
What is going on? Bundler is very sad. I was afraid about that. Okay. Well,
00:38:58.079
we'll just keep using um we'll keep using the slides. Has everyone else been able to bundle? Are
00:39:04.240
you guys having problems with the internet? Okay. Some some problems.
00:39:11.599
It's sending. Okay. It's thinking. Um hopefully one person at each table has it. Maybe you can share some screens.
00:39:19.440
Um let's see. So, I'll just hop over to instrumented then since we're having
00:39:24.640
trouble bundling over here.
00:39:29.839
Fun fact: underscore is before
00:39:36.640
dash in the alphabet. So that's why these look different. What am
00:39:43.760
I doing wrong here? Thank you.
00:40:00.800
Okay. So, hopefully when you started your server, you saw all of these
00:40:06.079
verbose logs with information about each piece of instrumentation that was installed. Um, and even though it's not
00:40:13.440
on this open telemetry initializer, I guess I might as well open the other one.
00:40:23.680
All the answers. Um, we, you know, added this allowed
00:40:29.119
response headers configuration option and you can see right here that it was
00:40:34.240
in fact applied. And so that's a really helpful way to verify that your instrumentation uh configurations are
00:40:39.280
working because a little typo can can cause problems.
00:40:46.240
And oh, we're getting some errors with SSL. So I
00:40:52.640
will just go back to my handydandy screenshots um and use those instead. So, if I was
00:41:00.000
running traffic and if we were able to go to a UI and the traffic had been running for about 30 minutes, um we
00:41:06.480
would see something really nice and fancy like this. I think it'll be a lot more limited on your application. This is just the homepage uh for the hike
00:41:13.680
tracker service in New Relic. Um actually, you know what? Will
00:41:20.319
will this work? So, this was some traffic that we that I ran a while ago.
00:41:28.160
We're thinking we could not retrieve this info. Okay, we'll just keep using this.
00:41:35.200
Um, so what this helps show you is just kind of an overall sense of the health
00:41:41.920
of your application. You can see if you've had any major changes in your response time or your throughput. Um the
00:41:48.319
error rate on this app is bonkers because uh there is a random error that gets generated every time um you visit
00:41:55.920
the activity show page. And you know hopefully your error rate is a lot more
00:42:01.760
flat than this. The next place that I want you to go in the UI after you get to that summary page is uh distributed
00:42:09.280
tracing. So this is where all of your traces live. Um, and you can see each
00:42:15.359
one of these uh kind of chunks down here represents a different route. So, this
00:42:22.160
is great because open telemetry just gives you all of this. And you know, when you click on a particular trace,
00:42:28.800
you can see a little more closely into um instead of just like the overview, the summary, one specific trace in time
00:42:36.480
that you generated. And when you open up the view on that trace, all of these little steps down here are different
00:42:43.200
spans that were created by your instrumentation.
00:42:48.720
Since this one has an error, we'll take a look at the error. So, we know here that it's a contrived kind of fake
00:42:54.960
error, but if you had a real error, it would exist on usually the span that
00:43:00.400
created it. In this case, it's that root span. Um, and then you could get more information about the message and try to
00:43:06.560
help solve the problem by being able to see that specific request that happened in the wild um that's related to this
00:43:13.200
error rather than, you know, kind of digging through Rails console logs and maybe um trying to parse the information
00:43:19.760
that way. Uh oh, here's a healthy one.
00:43:26.079
So we can see um there's also still a lot of uninstrumented time here and that
00:43:32.079
could be a place for you to add custom instrumentation, which we will do in a minute. Um, so custom instrumentation is
00:43:39.599
what we call things that the standard semantic conventions don't automatically cover. It's usually
00:43:45.760
methods unique to your domain and we recommend that you make calls directly to the open telemetry API in order to do
00:43:51.920
that. Um, so what we're going to do is record a span for something specific to
00:43:57.119
our domain in an uninstrumented method and then add custom attributes to a current span elsewhere in the
00:44:03.280
application. The first thing we need to do in order to do this is create our own tracer. So
00:44:10.319
inside this initializer file, below the configuration block, I'm going to recommend that you add an app
00:44:17.440
tracer. You can save it in a constant. You also don't have to because open telemetry will register all the tracers
00:44:23.440
by name. So anytime you call open telemetry tracer provider tracer with the same string, you're going to get the
00:44:29.520
same tracer. This just makes it easier. I think there's probably a better Railsy way to do this. Um, but this is the way
00:44:37.040
that I generally do it as I'm trying to debug problems. The next thing we're going to do is
00:44:43.200
we're going to use that app tracer inside of the activities model. If you look in the activities model
00:44:49.359
around line 13, there should be a duration method. And in that method, um,
00:44:55.119
we're going to add a call to the in_span method. So that's the same thing that we saw in the instrumentation earlier. So
00:45:01.440
you call that tracer's in_span method and then you give your span a name. The name is the only required argument
00:45:08.480
for your span. Um, since it is a custom span and custom instrumentation,
00:45:15.520
you kind of have a choice over what that name looks like. Since the queries
00:45:21.520
that we looked at earlier are usually named like model name plus query, I thought activity duration could be a good name here.
00:45:29.440
And since we want to record this whole method, we start that at the top, open up a block, leave
00:45:37.359
the original code here and add an end at the bottom.
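Sketched out, the two pieces look roughly like this. The constant name, tracer scope name, and span name are choices of mine rather than requirements, and the original body of duration is left as a placeholder:

```ruby
# config/initializers/opentelemetry.rb, below the configure block:
APP_TRACER = OpenTelemetry.tracer_provider.tracer('hike_tracker')

# app/models/activity.rb:
class Activity < ApplicationRecord
  def duration
    APP_TRACER.in_span('activity duration') do
      # ... the original body of the method stays here, unchanged ...
    end
  end
end
```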
00:45:46.240
So this in_span method will add a span to the existing trace, that existing overall journey, if there is
00:45:54.079
one, and if there isn't, it will start a new trace. You can also add attributes to the span to get some more
00:46:00.240
rich information. And there are a lot of other arguments available that we won't go over today, including span links and
00:46:06.319
span events, span kind, like we mentioned earlier, too. If you wanted to add some attributes, um, you could, you
00:46:13.119
know, potentially for this method, uh, add the end time. Maybe you're curious if there is an end time or want to do
00:46:20.079
some kind of rounding on the end times. Um, and in that case you would add a new
00:46:26.800
keyword argument, attributes, that accepts a hash, and the
00:46:33.680
keys in this hash must be strings. Unfortunately open
00:46:39.920
telemetry is not quite like Rails with HashWithIndifferentAccess. You need to use strings for keys in all
00:46:46.079
situations. This is for performance reasons. You know we want things to be really fast with instrumentation. We
00:46:52.400
don't want you to notice the instrumentation or have that add overhead to your application. And this is the way that we found to do that the
00:46:58.480
fastest. There we go. Another reminder.
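With the attributes keyword, that same call becomes something like this; activity.end_time is a made-up attribute name, and end_time stands in for whatever value you have on hand:

```ruby
APP_TRACER.in_span('activity duration',
                   attributes: { 'activity.end_time' => end_time.to_s }) do
  # ... original method body ...
end
```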
00:47:05.760
Um, so, okay. So, you've added your custom span. Restart your server because we added something to our initializer.
00:47:11.839
So, I don't think it will hot reload. And then generate your traffic using the traffic shell script in another terminal.
00:47:19.760
um or just you know continue to click on data in the UI.
00:47:25.680
Um I'll give it another try. Let's see if we have any any updates.
00:47:33.440
And if you're doing this on your own application, I would say think about, you know, if you are using a current observability vendor and maybe see a lot
00:47:40.079
of uninstrumented time, add a custom span to what's going on there. If you
00:47:45.680
have maybe some queries that are kind of complex but are
00:47:51.920
also referring to other parts of your code, that could be a good opportunity to add an additional span. Things that are
00:47:58.079
done by Active Record and MySQL, Postgres, Trilogy, etc. Those will be
00:48:04.480
captured by spans already by instrumentation, but it could
00:48:10.319
be interesting to just throw a new span into some of your domain logic to see how that changes your traces.
00:48:17.920
All right, we're still out. That's okay. Um but we should be able to look at
00:48:24.960
that. Oh, no. I forgot. I'm sorry. All right, cool. So, let's look at our
00:48:30.160
pre-recorded span. Um, and back in slideshow mode.
00:48:36.880
So, I don't know if you can read this and see this down here, but we now have
00:48:42.480
in that same activities ID trace that we looked at earlier a new span for
00:48:48.319
activity duration. And it occurs after the user query. We can see
00:48:53.680
where in this overall journey that method is actually getting called. Um,
00:48:58.800
and that just adds a little more color to what's going on here. It looks like I
00:49:04.240
did not capture the custom attribute here. So, we we won't see that over there. But if you did choose to add the
00:49:10.400
custom attribute, it would be on this side. Um, so yeah. So, let's add a custom
00:49:16.640
attribute to the current span. Now, this is another way: if
00:49:21.760
there's something going on where you need some domain information that's more specific and outside of
00:49:28.559
the semantic conventions, that can be a great opportunity to add attributes. Um, in this case for our application, we're
00:49:35.599
going to do it um through a before action. If you go to the activities controller, I'd like you to add a before
00:49:42.160
action. Um, it's called add_location_attribute_to_span. Very clear
00:49:48.160
what it's doing. And you know, for the purposes of this demo, let's add it only to the show route. And then below the
00:49:55.680
main bulk of the controller actions, in the private section, create a new
00:50:01.200
method that's called add_location_attribute_to_span. In here, we're going to call the
00:50:07.359
OpenTelemetry::Trace.current_span method. So, this one is a little different than the method we looked at before. We're not
00:50:13.280
using our tracer. We're using the OpenTelemetry::Trace API. Um, it does a lot
00:50:18.880
of context management for you, and so it will kind of track behind the scenes what the current span is, attach to
00:50:26.400
whatever is going on at the moment this method is called, and add attributes to that span. Since the current span
00:50:35.680
is about the overall picture of your application rather than one tracer's scope, it lives on the OpenTelemetry::Trace API, and that's why we
00:50:43.119
leave current_span out of the specific tracer.
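A sketch of that controller change; the attribute name matches what shows up later in the UI, and @activity.trail.location assumes the scaffold's set_activity callback has already loaded the record:

```ruby
# app/controllers/activities_controller.rb
class ActivitiesController < ApplicationController
  before_action :add_location_attribute_to_span, only: :show

  # ... the existing actions stay as they are ...

  private

  def add_location_attribute_to_span
    OpenTelemetry::Trace.current_span.add_attributes(
      'activity.trail.location' => @activity&.trail&.location.to_s
    )
  end
end
```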
00:50:48.960
So yeah, so this is an attribute you could add, and you could add a different attribute if you'd like as well. We're not being super concerned about
00:50:56.000
performance here. Um, and so yeah, restart your server if you've added
00:51:01.200
this. Um, and I believe that in the course of your traffic script, you know,
00:51:06.640
it'll hit that show page for the activities. So you should start to have some traffic generated.
00:51:14.000
I think I've also been blazing through, so I'm going to give a minute for everyone's UIs to catch up. If you are able to use the UI, I think um it can
00:51:21.839
often be a little slow to actually load the data and and get it in there after we generate it.
00:51:31.200
Mhm. What?
00:51:39.839
Oh, where are you supposed to see the traffic at? Um so if you are using New
00:51:45.760
Relic, you should see the traffic in
00:51:52.000
your services view. I think there's probably one more page before that that
00:51:57.359
has all of the different entities that are available to you. And you should see
00:52:02.640
hike tracker somewhere on that list. I think that's usually your homepage or landing page when you get into New
00:52:08.400
Relic. And then um yeah, and then from there
00:52:13.599
you should see your traffic on that hike tracker entity summary page that we were looking at before. Um which lives in a
00:52:22.880
slide somewhere. Go back a little bit.
00:52:35.359
Yeah. So this this would be the homepage. And so you can see that the traffic is connecting and that the application is connecting to the back
00:52:42.319
end by um you know kind of seeing a chart probably not quite this robust
00:52:48.559
maybe just a couple of little dots here and there um to see data coming through.
00:52:55.280
Does that help? Okay. Yeah. Any other questions while we're
00:53:00.880
pausing for a minute? Yeah.
00:53:12.079
So there's not a clear answer; I guess yes and no. Semantic conventions have opinions for a lot of attributes in certain
00:53:18.319
categories, and you can look at the semantic conventions documentation or repository to get those. But
00:53:27.839
if you're not going to use those, I will just generally look at semantic conventions to kind of get an idea. I
00:53:33.280
think things are often scoped like, you know, if you're going to think about a hash that has nested keys,
00:53:40.319
everything is separated by a period. Um, that's I think the most common way.
00:53:56.240
All right, let's get back to it.
00:54:14.720
All right. So if we have done this all and everything is
00:54:20.319
cooperating, we should be able to go to one of our newly created activities traces um which
00:54:28.559
would be near the top of your traces when you click into them and take a peek at um the attributes for the root span
00:54:37.200
which is the span at the top and see your activity trail location attribute
00:54:42.800
over here. So, what we could do with this if I was playing around in the UI is that you
00:54:48.640
could start to query for your different activity spans and maybe um facet them
00:54:54.319
by location. Add some counts to see which locations are the most popular. Maybe that could help you prioritize
00:55:00.000
which locations you want to improve documentation for. Yeah, the activity trail location text is very small though.
00:55:07.760
Um yeah, so that that was kind of seen as the first half. I think at this point
00:55:13.440
in time we can take a break and get back into it talking about metrics, logs, and
00:55:20.000
active support notifications afterwards. Um so yeah, so I think 10 minutes and
00:55:26.640
I'll be up here if anyone has any specific questions. you can either raise your hand and I'll come over to see how
00:55:32.559
things are going or come up and find me and and we can talk about that too.
00:55:38.640
Um I don't have a fancy timer though so um yeah that'd be great. Uh 10 minutes.
00:55:47.599
Thank you. Um cool. Okay. I hope that break was helpful. Is everyone feeling ready to
00:55:54.240
move on to the next section? We're good. Cool. All right. Um, I
00:55:59.280
learned some things during the break about how things are going. Sounds like most people are in a good spot. New
00:56:04.480
Relic entities are um maybe having trouble getting connected in some
00:56:09.599
places. Um, some things that helped.
00:56:22.480
So, I have my New Relic license key saved in an environment variable that's in my Z shell config file. If
00:56:31.760
you don't have that, just add the API key directly.
00:56:37.760
You don't need any quotes or anything like that around it. Um, just formatting it like that will be fine. Um,
00:56:45.200
you know, just don't push it up. Um the other thing was that uh there were some
00:56:52.720
errors that are coming up, and I completely forgot to show why those are happening. Um, so in the app's
00:57:02.160
activities controller, I have hijacked the show action to
00:57:07.920
generate an error 30% of the time. Um, so there is a random number between 1
00:57:13.599
and 10 that'll get generated. If it's less than seven, we'll show the show page. Otherwise, we'll raise this bad
00:57:19.680
luck error. Um, this is a strategy that I use a lot as an instrumentation author to kind of make sure I can see what the
00:57:25.839
state is when errors are raised and make sure things are getting attached to the right spot. Um, errors and tracking
00:57:32.000
errors and trying to resolve errors is a really common use for telemetry. Um, you
00:57:37.599
know, being able to track error rates and things like that. So I I hope that this helps you see the way that open
00:57:43.119
telemetry is kind of automatically capturing your errors.
00:57:48.720
Um okay, so we're about to get into metrics. Are there any other questions about the traces and exporter and
00:57:54.799
instrumentation part of things before we go forward? We'll also have time for Q&A
00:58:00.079
at the end. I think we'll we're doing pretty good on time. All right, let's continue.
00:58:08.400
So all right, metrics.
00:58:13.440
Metrics um are in new gems. Like I told you before, metrics are not yet stable.
00:58:20.079
They're still technically in this experimental state, but that doesn't mean they're not usable. It means
00:58:25.520
they're usable with a small level of risk. Um to add them to your gem file,
00:58:31.040
you can do this here: bundle add opentelemetry-metrics-sdk and bundle add
00:58:38.960
opentelemetry-exporter-otlp-metrics. Uh, one thing in case you bundled before and are
00:58:45.680
using the instrumented version from before: there were earlier versions that installed the metrics
00:58:51.920
SDK from a bug fix branch. That bug fix has now been released. And so hopefully
00:58:57.119
you're getting version 0.7.3 installed. If that's not the one that's
00:59:03.440
installed, you might see an OTLP exporter error. There's not actually a problem with that error coming up
00:59:10.240
besides it being annoying. Your data should still be sent. There was just an issue where, if data wasn't collected for an
00:59:16.880
interval, that error message would pop up. So install these two gems.
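In Gemfile terms, that's roughly (a sketch; versions omitted):

    # Gemfile — metrics support (still experimental, but usable)
    gem 'opentelemetry-metrics-sdk'
    gem 'opentelemetry-exporter-otlp-metrics'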
00:59:24.000
Now, what's going on here is very similar to the tracer provider. There are just a few other terms with metrics.
00:59:30.720
So, we have a metrics exporter um that gets initialized from that automatic
00:59:36.319
open telemetry SDK configure method that's already in your initializer. Um
00:59:41.839
instead of uh processors, metrics have readers. That's what they're called. Um
00:59:47.920
they do pretty much the same thing: they prepare the metrics for export. In
00:59:53.359
this case, with the periodic metric reader, they have a specific interval that they wait for before they
01:00:00.400
collect all the metrics. And you connect them through your meter provider,
01:00:06.240
which is responsible for creating meters and tracking metric readers, very similar to
01:00:11.359
the tracer provider. So this is not code that you need to add; this is what's
01:00:16.960
happening when OpenTelemetry::SDK.configure is called by default. And,
01:00:22.000
similar to traces, if you wanted to add a console exporter, the
01:00:28.640
environment variable is the same pattern. I don't think I have a slide for that. It is just OTEL_METRICS_EXPORTER
01:00:38.559
as the environment variable, set equal to console,otlp. Just sub the word
01:00:43.680
traces for metrics. Um, so yeah, for the metric structure: that configure method creates and sets a
01:00:50.720
global meter provider. The meter provider is an entry point for the API. It provides access to your meters
01:00:57.520
and keeps track of metric readers. Now, metrics are structured a little differently from traces. The way
01:01:03.760
that meters work is that meters create instruments. If you're used to using an observability
01:01:10.480
vendor of a different sort, I think of an instrument as just a
01:01:15.520
metric. Um instruments provide measurements. So every time an
01:01:20.640
instrument is called you get new data points for your metric and then the metric readers connect to exporters. Oh
01:01:27.599
and they don't prepare spans; they prepare metrics for export.
01:01:32.960
This is an environment variable I'm going to suggest you change for the purpose of this workshop. So by default
01:01:39.280
the metrics exporter runs on an interval of 60 seconds, but that will be
01:01:45.920
a long time in conference time. So this is set in milliseconds; if you change it to 3,000 milliseconds (3 seconds),
01:01:54.480
you'll get a lot faster feedback in New Relic or your other observability back end. Um, I would recommend adding this
01:02:02.480
wherever the rest of your environment variables are: either in the initializer, or add it to your list in front of your server command.
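For example, as a sketch (the standard variable name for the periodic reader's interval is OTEL_METRIC_EXPORT_INTERVAL; 3000 ms is just a fast value for the workshop):

    # config/initializers/opentelemetry.rb — set before OpenTelemetry::SDK.configure runs,
    # or export it in your shell ahead of your server command
    ENV['OTEL_METRIC_EXPORT_INTERVAL'] ||= '3000' # milliseconds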
01:02:08.880
All right, so we have configured our
01:02:17.200
meter provider. Now, we're going to create a new meter. This might look familiar. This is very similar to the
01:02:23.359
tracer provider setup, just kind of sub tracer for meter. Um, you might notice that this one is not a constant. We're
01:02:30.480
just using a local variable here. I'm still very open to
01:02:36.559
ideas about where the most Railsy place is to organize all of your different
01:02:41.839
metrics. Um, metrics are still pretty new. They are not fully spec compliant
01:02:47.440
unlike traces. So in the SDK, everything that you see in the specification for traces should be available in that SDK
01:02:54.799
gem. For metrics, we're not quite there yet. We're getting a lot of features, but some of the things in the
01:03:00.079
specification aren't there yet. Um, so all of that is to say that I've just
01:03:07.760
been defining all the metrics in the initializer for now and then calling them in different places of the
01:03:13.200
application. So the actual instruments that we create, those are what we'll save as constants instead of the
01:03:18.960
meter. And like with the tracer, make sure you do this outside the configure block.
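A minimal sketch of that meter setup in the initializer (the meter name is an assumption):

    # config/initializers/opentelemetry.rb — after the OpenTelemetry::SDK.configure block
    meter = OpenTelemetry.meter_provider.meter('trails_app')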
01:03:32.079
Each instrument should be unique; one instrument is one metric. OpenTelemetry has seven different types
01:03:37.119
of instruments, and unlike New Relic's traditional Ruby agent, you
01:03:43.119
can have attributes on your metrics um that are known as dimensions.
01:03:48.960
I'll do a little walkthrough of the different instruments that are available and kind of places where they are used
01:03:54.799
in the open telemetry semantic conventions. So with a counter you can make non-
01:04:00.960
negative increments, so it's a monotonically increasing counter. This is used in
01:04:06.720
the semantic conventions for process CPU time. So tracking CPU time you know
01:04:12.960
every 60 seconds by default. Histograms are best for arbitrary
01:04:19.119
values that are likely to be statistically meaningful. Um the metric that I think about in this category is
01:04:25.280
HTTP server request duration. This is a metric that we're going to generate later on. Um so that's you know the
01:04:33.359
duration of each of your requests. Keep those as a metric. You can kind of look at averages and buckets over time. Gauge
01:04:41.119
is the next one. That's for non-additive values. Um it's kind of like subscribing
01:04:46.880
to a change event in order to capture some number. Um, an up-down
01:04:53.440
counter is another one. It can have increments and decrements unlike the regular counter. This is used in the uh
01:05:01.599
DB client connections count metric. So seeing how many connections are active.
01:05:06.640
Oh and going back to gauge um that is used in container CPU usage. So metrics
01:05:12.720
specifically for your containers. There are some more instruments that are very close to getting released. Um
01:05:19.359
asynchronous instruments. So you can have an asynchronous gauge and counter
01:05:25.039
and up down counter. So um keep an eye out for those. Though the way that those
01:05:30.079
work is that on every interval they will make an observation instead of recording
01:05:35.680
the metric yourself; they basically just run whatever callback you provide and
01:05:41.359
take the measurement from that point in time, along with the attributes from the result of that callback.
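To make the taxonomy concrete, here's a rough sketch of creating and recording with a few of these instrument types (the names and units are illustrative, and exact availability depends on your metrics SDK version):

    counter   = meter.create_counter('process.cpu.time', unit: 's', description: 'CPU time')
    histogram = meter.create_histogram('http.server.request.duration', unit: 's', description: 'Request duration')
    up_down   = meter.create_up_down_counter('db.client.connections.count', unit: '{connection}', description: 'Active connections')

    counter.add(1)          # counters only go up
    up_down.add(-1)         # up/down counters can also decrement
    histogram.record(0.25)  # histograms record arbitrary values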
01:05:49.280
So what we're going to create first is a counter. Um, we're going to call it the height counter, and it's going to keep
01:05:56.720
track of activities that have been completed. Whenever you create an instrument, it needs a name, a unit, and
01:06:02.799
a description. Now, the unit and description can kind of be whatever you
01:06:07.920
want when you're creating a custom metric. Usually, it's something like seconds, milliseconds, so that you can
01:06:13.839
keep an eye on things. Um, this information will be included in the data points for your metric. So it's another
01:06:20.720
way for you to help slice and dice the information to answer questions, and by saving the instrument as a constant, we'll be
01:06:27.520
able to access it elsewhere.
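A sketch of the instrument definition in the initializer (the constant, metric name, unit, and description are assumptions in the spirit of the description above):

    # config/initializers/opentelemetry.rb — defined outside the configure block
    ACTIVITIES_COMPLETED_COUNTER = meter.create_counter(
      'activities.completed',
      unit: '{activity}',
      description: 'Number of activities marked as completed'
    )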
01:06:33.359
So, the place where I want us to call it... actually, is everyone good with the counter and creating this counter? Wait one second.
01:06:43.760
Okay. We're going to call this counter in the activities model as an after_save
01:06:50.640
callback. Um, so we'll create a new method called count_completions and add that to
01:06:57.599
the existing after_save callbacks. Um, we're not going to do anything
01:07:04.319
with this method if the activity is still in progress. On the activities model there's a boolean, in_progress. It
01:07:10.799
should be true as long as there's not an end time. If there is an end time, then it's no longer in progress.
01:07:18.400
And um, in this method, we're going to call that counter. And
01:07:23.680
with a counter, it has an add method. Different instruments have different methods. So for example, um with
01:07:30.559
histograms, you don't add with them. You record. And you know, since this is saving a
01:07:36.400
single record, we're going to add one to our counter and then provide it some attributes um with, you know,
01:07:42.559
potentially some PII um and uh grab them
01:07:47.680
to take a look in the UI and kind of look at how completions are going based
01:07:52.880
on these different dimensions.
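Putting that together, a hedged sketch of the model change (the attribute keys and helper names are assumptions; the completed workshop app is the source of truth):

    # app/models/activity.rb
    class Activity < ApplicationRecord
      after_save :count_completions

      private

      def count_completions
        return if in_progress? # nothing to count until the activity has an end time

        ACTIVITIES_COMPLETED_COUNTER.add(
          1,
          attributes: {
            'trail.location' => trail&.location, # dimensional attributes (mind the PII)
            'user.id'        => user_id.to_s
          }
        )
      end
    end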
01:08:02.319
So, if you haven't already, once you've added that information, restart your server,
01:08:08.640
rerun your traffic. And the thing that is different about this is that the traffic script is not
01:08:16.480
excuse me, particularly helpful for this step. Uh the reason being that
01:08:23.600
the completions button doesn't really get pressed in
01:08:30.080
that traffic script. So um let's see if my server will start
01:08:36.000
even without tell.
01:08:47.839
Oh because I broke my thing.
01:08:58.960
What else is... Has
01:09:11.920
everyone had an okay time installing the metrics SDK? Is it working?
01:09:29.520
Okay, there we go. Um, so we'll hop into localhost
01:09:36.880
and oh, I did not freshly bundle before this. So you can see all my example
01:09:43.600
activities. I'll make a new one and select a user
01:09:51.440
and a trail and it's marked as in progress. So we'll create that activity and then when
01:09:58.560
you're on the show page for an activity that is still in progress, this "Mark as Completed" button is there, and clicking
01:10:06.159
that button will call the place where we added our metric.
01:10:14.719
So provided that's all worked out and everyone's UI is cooperating and
01:10:20.159
ingesting data nice and fast um you can see this metric uh by querying. So in
01:10:27.920
New Relic, at the very bottom... actually, let's look at an old screenshot.
01:10:36.400
At the very bottom of the page you can see this "Query your data" option. Go ahead and
01:10:43.440
click on that and it will pop up a window where you can query the data specifically.
01:10:55.360
Okay. And so the way that you can see this activities completed metric is um
01:11:02.719
by... let's see, no, that's not going to zoom. So the query starts with FROM Metric. So
01:11:07.920
from that data type that we have just sent over, SELECT * if you want to
01:11:13.120
see all the dimensions that are available. That's usually how I like to get familiar with a metric. And then
01:11:19.199
WHERE the name of the metric, in this case activities completed (I believe it can be wrapped in quotes or backticks),
01:11:26.640
IS NOT NULL SINCE 48 hours ago. You don't need to make that window so long; you can make it shorter. I think by
01:11:33.600
default it's 30 minutes. So everything that you've recorded should show up. Um
01:11:38.960
But yeah, for every click you should see a metric recorded here. And
01:11:45.520
taking a look, we can see here in this activities completed section the counter value that was provided and
01:11:53.679
also um the type of instrument here. This type count shows that it was a counter instrument. And over here I
01:12:02.320
forgot to scroll in my screenshot. Um, you can see the location. You should be able to see the other dimensional
01:12:08.880
attributes that we added too. Um, do I have Okay, I don't have any other ways
01:12:14.560
to dice the data here. But, um, this querying tool is called NRQL.
01:12:21.840
Um if you want to kind of alter the data and keep playing around with it, you can
01:12:27.760
add a FACET clause after the WHERE and maybe facet by location. That will
01:12:36.400
give you an option: if you click on this area chart type over here, you can see a
01:12:41.760
different chart and kind of see that data displayed in different ways. This will be different on every observability
01:12:48.480
provider.
01:12:56.719
All right, let's get into logs. We will come back to metrics in a minute. Um,
01:13:03.760
so logs are structured very similarly to traces, more similar than
01:13:11.199
metrics are to traces. There's a global logger provider that gets created and set when that open telemetry SDK
01:13:18.400
configure method is called. The logger provider is an entry point for your API calls. It provides access to loggers
01:13:26.000
which create log events; there's essentially one event per log
01:13:31.920
entry, and it keeps track of log record processors, which prepare the log events
01:13:37.600
for export. Um the terminology around logs is a
01:13:44.640
little different than tracing. We always call the code that collects traces instrumentation, and
01:13:51.440
that's still where my brain goes with logs as well. But OpenTelemetry refers to
01:13:57.920
log instrumentation as bridges. So you can take existing logging
01:14:02.960
frameworks and add new bridges either inside of their code or from maybe an
01:14:08.880
additional OpenTelemetry gem, for existing frameworks, to kind of translate that data into logs that are OTLP
01:14:17.920
compliant. What's nice about OTEL logs is that they are dimensional by default. Um there aren't any required attributes.
01:14:25.360
Message is just one attribute of many. Um and two of my favorite attributes that exist on open telemetry logs are
01:14:32.560
trace ID and span ID. So that that way you can correlate your logs with your trace and span events um to kind of get
01:14:41.040
a better picture of how these things connect: what logs were
01:14:46.880
actually being emitted in your console when that specific trace was running.
01:14:53.760
So in order to um install logs, we're going to do some fun stuff. We're going
01:14:58.880
to comment out the opentelemetry-instrumentation-all gem and install a
01:15:04.719
new version of it from my fork. Um so uh
01:15:10.800
what we're going to do here um this is all available in that instrumented gem
01:15:15.840
file if you want to copy and paste it. Um the reason why this is from my fork
01:15:21.360
right now is that this code is in a PR. It has been reviewed by some
01:15:26.719
of the maintainers, not by enough of them yet for us to merge it in. So, uh hopefully soon you won't need to install
01:15:33.760
it from source. But what this instrumentation is doing is that it is adding a bridge for the Ruby logger gem.
01:15:42.080
And the Rails ActiveSupport::BroadcastLogger. That's the logger in
01:15:47.600
Rails 8. It's named something slightly different for older versions of Rails. It leverages Ruby's Logger library to
01:15:55.679
generate those logs. So in this case, by instrumenting Ruby's logger, we're able to get the benefits um in Rails and kind
01:16:03.840
of have all of our logs automatically translated into OTEL and sent over.
01:16:10.719
I believe I'm missing a slide here too. If you look at that instrumented gem file, there are two other gems and I
01:16:17.280
think this is on the gist, that you need to install: opentelemetry-logs-sdk
01:16:22.800
and opentelemetry-exporter-otlp-logs. So, very similar naming scheme to
01:16:29.600
metrics, but since this code is not yet stable, it is in its own gem. And unlike metrics,
01:16:38.000
the logs SDK is feature complete. We just need more people to use it and know
01:16:44.080
that it is working well in their environments before we can graduate its stability status.
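In Gemfile terms, a sketch:

    # Gemfile — logs support (the SDK is feature complete but not yet marked stable)
    gem 'opentelemetry-logs-sdk'
    gem 'opentelemetry-exporter-otlp-logs'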
01:16:54.000
Um, so yeah, please bundle to install those gems. The next step is to
01:17:01.040
update the default logs exporter. Currently, by default, the logs
01:17:06.640
exporter writes to the console instead of over OTLP, but we want it to be over OTLP
01:17:12.159
so we can see the logs in the observability back end. Um, so I recommend adding this and you can also,
01:17:18.239
you know, like with traces, use console,otlp if you want to see them both.
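A sketch of that change, using the standard environment variable:

    # config/initializers/opentelemetry.rb — before OpenTelemetry::SDK.configure runs
    ENV['OTEL_LOGS_EXPORTER'] ||= 'otlp' # or 'console,otlp' to see logs in both places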
01:17:25.920
And yeah, that's it for logs. By calling that configure method, it should all be set up,
01:17:31.679
as long as that instrumentation is installed, hopefully via use_all. If you are just calling the use
01:17:39.360
method instead of use_all and installing each individual gem, you will need to add a use call for the logger instrumentation.
01:17:47.920
Um, so yeah, so restart your server, run your traffic, and hopefully, um, if you're able to get
01:17:55.280
into your New Relic entity, you can, uh, hop onto the logs tab inside of that APM
01:18:03.520
and services section and should see all of the logs that are coming through for
01:18:09.920
your application. And hopefully, it will look pretty similar to what you're seeing in your Rails console as well. Um
01:18:16.480
some of the differences though when you actually click into the log you'll see all of these other attributes that are
01:18:22.080
included as well. So um you know in this case New Relic I believe is adding this
01:18:27.600
entity guid and guids attributes. Um, you have the message, like I mentioned earlier, which is what you'd see inside of your
01:18:34.800
Rails console. We can see the provider of the instrumentation was from open telemetry.
01:18:40.400
Um let's see what else is fun in here. Some of these process and service
01:18:46.239
details are related to that resource that we looked at earlier. So we can see that service name and version that we
01:18:52.320
set. Um, it has the severity of the log and captures that. OTel also
01:18:58.560
translates logs into specific severity numbers that are different than the way that Ruby tracks its severity numbers.
01:19:06.880
So, I don't think five is a valid severity number, and you
01:19:12.000
probably haven't had to deal with Ruby severity numbers unless you're diving deep into the Ruby logger internals. Um,
01:19:19.360
but my favorite attributes, like I mentioned before, are span ID and trace ID. So, from that, we should be able to
01:19:28.719
go to um the specific trace. Hopefully in your example um up in this
01:19:34.000
distributed trace section you have one that you can hop to and then on that
01:19:39.199
trace you would see uh this logs tab with a number next to it
01:19:45.840
and then uh be able to click on that and see the logs that occurred during that point in time. So that's giving you more
01:19:52.640
dimensions, deeper knowledge, a larger ability to answer the questions that come up when you're trying to solve
01:19:58.400
problems in your code.
01:20:04.320
All right, we've graduated to adding our own instrumentation uh for something
01:20:09.679
that open telemetry does not have available yet. So um soon
01:20:15.760
instrumentation will emit metrics. It does not today, but there are a lot of great metrics that are already defined
01:20:21.840
in the semantic conventions that you may want to take advantage of. Um, this
01:20:26.960
example is going to be a little bit contrived because I would probably add this specific metric in a different
01:20:33.840
instrumentation library. But since this is RailsConf, let's use Rails Active Support notifications and add a
01:20:40.960
histogram. If you want to follow along with the
01:20:46.000
specs, I think this link is in the gist. Um, we're going to look at the HTTP
01:20:52.080
metric semantic conventions now. So let's see... wonderful, it's open. So if
01:21:00.880
you want to dive into the semantic conventions you know if you're on the open telemetry
01:21:07.440
docs then um you can find this section under specs
01:21:14.159
semantic conventions are nested in here. This next level is kind of all of the different categories that open telemetry
01:21:20.960
has conventions for. Now, when you see this status "mixed", that means some of them are
01:21:26.880
seen as stable and have been released. Some of them might be in development or experimental, and when
01:21:34.320
they have that in-development or experimental label attached to them, they could change. Um, but there's this
01:21:42.800
great in-depth description here to try to facilitate that change in the case where there are things that um are
01:21:50.640
outdated or, you know, not the new stable conventions, so you can move to a stable convention.
01:21:56.960
And we're starting to roll this out for traces in some of our
01:22:02.719
instrumentation libraries: this OTEL_SEMCONV_STABILITY_OPT_IN environment variable.
01:22:08.560
Um it's a little out of the scope of what we're talking about today, but essentially when you see this, you have
01:22:14.080
the option to um emit the old attributes, the new attributes, or both.
01:22:20.480
And if you're just starting with open telemetry and adding this to your application, I recommend you start
01:22:26.000
setting this environment variable, at least for HTTP, since we're starting to add it to more and more of the
01:22:33.040
libraries. Setting it to http here
01:22:38.639
means that, for the libraries that are compatible, your attributes are going to take the shape of what is going
01:22:44.800
to be the stable semantic convention. So that if you're setting up queries and dashboards and things like that, you're
01:22:51.040
not using the names that are going to change. You're using the names that will become stable.
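As a sketch (I believe 'http/dup', which emits both the old and new attributes, is the other opt-in value if you need a transition period):

    # config/initializers/opentelemetry.rb — before OpenTelemetry::SDK.configure runs
    ENV['OTEL_SEMCONV_STABILITY_OPT_IN'] ||= 'http' # emit only the stable HTTP semantic conventions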
01:22:58.239
So that's a little bit of a detour to get to what I really wanted to show you, which is the semantic convention entry
01:23:05.280
for this particular metric. So this is http.server.request.duration. It's a
01:23:12.719
very important metric to um you know be able to track the duration of your server requests and kind of look at
01:23:20.320
averages, how things change over time, pay attention to anomalies. This is a very special metric because it
01:23:27.920
has unique bucket boundaries. So histograms have buckets. You can think of them like a bar graph where
01:23:36.159
each bar is its own bucket. And uh the default buckets are much
01:23:42.400
larger and not really compatible with what you would um traditionally see as
01:23:48.400
durations that would come up in standard HTTP requests. So we have our name
01:23:56.480
here. There's an instrument type, a unit like we looked at earlier, a description,
01:24:02.159
and then stability. And down here are all of the different attributes that should exist, provided
01:24:08.800
the data is available on that metric. So um we are going to take this and add it
01:24:17.199
to our application.
01:24:27.600
Okay. So we're going to do a couple of steps in order to add it. The first thing is we want to take those
01:24:32.880
boundaries that we saw from the um the semantic conventions page and save them
01:24:38.960
in a variable. Um, you might see the words explicit
01:24:44.719
buckets, explicit boundaries. There's another form of buckets that I believe
01:24:50.560
is out now or will be out soon for Ruby metrics, called exponential bucket histograms. And those are pretty
01:24:58.239
cool. They look at the data that's actually coming through and adjust the buckets based on that, so that you can
01:25:03.840
kind of see more meaningful buckets. Um, but that's not the default for this particular metric. So, we're going to
01:25:10.080
use this default. The next thing we're doing is we're updating the meter provider to add a
01:25:15.679
view. So, metric views in open telemetry are ways to configure metrics to do
01:25:21.679
things that you want them to do that are outside of the default. You can add new aggregations. So if you wanted to try
01:25:28.159
out exponential bucket histograms, you would add a view in order to do that. The views work by um looking right here
01:25:35.840
for a name. It also accepts wildcards. So if you want it
01:25:40.960
to apply to all of your metrics or only metrics that include certain words, you
01:25:46.080
can use that as well. Um you also add the type of metric you'd like this view
01:25:51.600
to apply to. And then here we're updating the aggregation for the
01:25:56.960
histogram. There are other attributes available too for different types of uh
01:26:02.400
instruments. And then we're applying those explicit boundaries that we had inside of this uh explicit bucket
01:26:09.920
histogram. So it's all pretty nested. I think the documentation needs to get a lot better for Ruby. Um, but know that
01:26:17.920
if the metrics, like these semantic conventions, aren't working for you and you intentionally want to change
01:26:23.520
them or the defaults just in general for the instruments aren't working, this is a tool that you have in order to tweak
01:26:30.400
that information. Now that we've created our view with our
01:26:35.760
custom boundaries, we're going to create the instrument. So
01:26:41.040
at the bottom of this initializer, we're going to add our duration histogram. Instead of calling
01:26:47.840
create_counter, we call create_histogram. We'll give it the name that was in the semantic conventions, http.server.
01:26:55.040
request.duration. The unit will be seconds, matching the conventions. And then we'll use the same description,
01:27:01.520
duration of HTTP server requests.
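Pulled together, a rough sketch of those initializer additions (the add_view keyword arguments are assumptions based on the description above, so check the metrics SDK if they differ; the boundaries are the ones recommended by the semantic convention):

    # config/initializers/opentelemetry.rb — after OpenTelemetry::SDK.configure
    # Bucket boundaries recommended for http.server.request.duration (in seconds)
    boundaries = [0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1, 2.5, 5, 7.5, 10]

    # A view that applies those boundaries to this one histogram
    OpenTelemetry.meter_provider.add_view(
      'http.server.request.duration',
      type: :histogram,
      aggregation: OpenTelemetry::SDK::Metrics::Aggregation::ExplicitBucketHistogram.new(boundaries: boundaries)
    )

    # The instrument itself, saved as a constant so we can record on it elsewhere
    # (reuses the `meter` defined earlier in this initializer)
    HTTP_SERVER_REQUEST_DURATION = meter.create_histogram(
      'http.server.request.duration',
      unit: 's',
      description: 'Duration of HTTP server requests.'
    )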
01:27:13.199
And as always, you can copy paste from the gist or the completed instrumentation.
01:27:20.719
This next one's a little longer, a little meatier. So what's going on here is that we're calling ActiveSupport::
01:27:26.960
Notifications.subscribe, which we looked at earlier, because we want to subscribe to the process_action event. So whenever
01:27:33.679
process_action is happening, whenever Action Controller is running through a particular action, we would like this
01:27:40.800
histogram to record the duration. Um, like I mentioned earlier, we'd probably go lower level and actually put it on
01:27:47.280
the server gem, like Rack. Uh, but for today, this will give us a pretty good approximation of how long
01:27:53.360
things take. What's great about this is that the event um payload has a lot of
01:27:59.199
the attributes that we would want um to include. And this list is a lot shorter
01:28:04.800
to make sure everything fit on the slide, but there's a more comprehensive list in the completed um instrumented
01:28:11.280
version of the application of other attributes that you can look at. Um, one
01:28:16.320
thing to show there too before we wrap up on that step
01:28:28.320
is that sometimes this information is uh a little bit nested inside of other
01:28:35.360
values that are part of the payload. So the headers actually come out as a Rack headers object, and you need to dig
01:28:42.480
inside of them to get the specific values. Um sometimes as instrumentation
01:28:47.920
authors, we have to do funny things like manipulate those values further to get exactly what we want in terms of
01:28:53.520
protocols, uh, I'm sorry, in terms of attributes.
01:28:59.280
So going back here, the last step after we've collected our attributes is to actually call record on that histogram
01:29:05.840
we just created to um give the event duration as the value that we'll capture
01:29:12.080
and the attributes are what we've collected above. So now, every time process_action runs, we'll record a new value on this duration histogram.
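A hedged sketch of that subscriber (the attribute keys are trimmed and approximated; the completed app in the workshop repo has the fuller list):

    # config/initializers/opentelemetry.rb
    ActiveSupport::Notifications.subscribe('process_action.action_controller') do |event|
      attributes = {
        'http.request.method'       => event.payload[:method],
        'http.response.status_code' => event.payload[:status],
        'http.route'                => "#{event.payload[:controller]}##{event.payload[:action]}"
      }

      # ActiveSupport reports duration in milliseconds; the semantic convention unit is seconds
      HTTP_SERVER_REQUEST_DURATION.record(event.duration / 1000.0, attributes: attributes)
    end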
01:29:19.040
So you can restart your server, and this
01:29:25.120
should, you know, record a new measurement for
01:29:30.560
this instrument every time uh any controller action is hit
01:29:45.199
I meant to mention this earlier: there's this toggle at the top to view charts based on metric data or span data. Um, so if
01:29:52.239
you've been viewing it based on span data, um, or up until this point just metric data, it probably looked very
01:29:58.719
lonely. And now these graphs will start to have a little more life.
01:30:06.800
And then if you'd like to query kind of like we did with the previous metric to
01:30:12.320
take a closer look at what's going on in this particular instrument.
01:30:18.639
Um, similar to the last metrics query: FROM Metric SELECT * WHERE http.server.
01:30:24.880
request.duration IS NOT NULL SINCE 30 minutes ago (you can leave the time window out as well). And when you run that, you
01:30:33.199
know, depending on how much traffic is generated, you'll get something that looks kind of like this and has
01:30:39.440
information about that specific metric. We can see our 404s are different than our 200s when the random errors are
01:30:46.400
raised or probably in this case when the uh trail didn't exist. Um and that
01:30:51.920
allows you to get a little deeper into trends in these metrics by
01:30:57.760
aggregating this data, maybe faceting based on your status code or request method.
01:31:05.280
Oh, I thought I had more screenshots, but I guess that's it. We have successfully gone through the workshop
01:31:11.920
information. Um there's a little more that I want to still chat about, but
01:31:17.040
that is the gist of what I wanted to cover today in terms of getting the
01:31:22.239
basic signals added to your application. Um there are more signals in open telemetry that are getting added.
01:31:28.560
Profiles is something that's coming along soon, but I think it will still be a while for Ruby before we actually get
01:31:34.560
around to implementing them. Speaking of which, hopes and dreams
01:31:40.000
for the project. I am one of the maintainers of this project, and as I mentioned, Ruby's
01:31:47.040
packages are not stable. They're not complete in some cases. So the goals um
01:31:53.520
that we've kind of talked about as a special interest group are to move
01:31:58.639
towards metric stability. It seems like there are some people who have a strong interest in metrics. Alongside that,
01:32:05.280
we'd also like to move towards making logs stable. Having those core signals stable would kind of start to
01:32:11.760
put Ruby more on par with some other languages that have more mature OpenTelemetry offerings. Um, I would also
01:32:18.480
like to update the instrumentation to use stable semantic conventions and the bridge to do that is through that
01:32:24.000
environment variable we looked at, OTEL_SEMCONV_STABILITY_OPT_IN. Um, also
01:32:31.440
adding metrics to instrumentation so that the server request duration metric that we added is
01:32:38.320
generated automatically. Um, file-based configuration is another
01:32:43.440
thing that's been talked about where you could have all your configuration in YAML instead of needing to actually add
01:32:49.679
an initializer and call Ruby code to update those
01:32:56.400
configuration values. And theoretically, if you have applications in other
01:33:03.520
languages, global configs could apply to those as well. And then
01:33:08.639
also whatever is most useful to the community, that's also something that I would like to focus on. And so, issues and
01:33:16.560
pull requests, those are always very welcome. Um, in the GitHub repos under
01:33:22.880
the OpenTelemetry project, the Ruby ones specifically are opentelemetry-
01:33:28.159
ruby, which is what includes the SDKs, the APIs, the exporters, kind of those
01:33:34.880
more spec'd-out elements of the project. The other one is opentelemetry-
01:33:42.880
ruby-contrib, excuse me. Um, and that one includes instrumentation. It also includes
01:33:49.520
resources, so you can have automatic resource detection for AWS and Azure.
01:33:54.960
Um, there are propagators in there, which we didn't cover today, but they can help support um, distributed tracing
01:34:01.199
across your services. So, pretty much anything that isn't in the specification, something that's maybe
01:34:07.679
more defined by a semantic convention, you can find that code in the contrib repo.
01:34:14.560
The other things that are important to know about are documentation. There's a specification repo, a semantic
01:34:20.560
conventions repo if you prefer to look at the information that way instead of on the docs website. And there's also a
01:34:26.800
community repository that includes a lot of information about how open telemetry organizes itself and what meetings are
01:34:33.360
available in order for you to meet with um maintainers and approvers in order to
01:34:40.159
uh, participate in the project or get support. Ah, support, here we are. So if you decide
01:34:46.719
to use OpenTelemetry, or are using it already and have questions, the places that I recommend you reach out to first:
01:34:53.360
I would recommend joining the Ruby SIG, the Ruby special interest group. OpenTelemetry calls all of the
01:35:01.360
groups that meet about different languages, or different facets like semantic conventions, special
01:35:07.280
interest groups or SIGs. So we meet every Tuesday at 10 a.m. Pacific time. You can find the Zoom link in that open
01:35:14.000
telemetry community repository on the readme. Um, there's also an end-user SIG. That is
01:35:19.199
another great place for you to join if you're just getting into open telemetry. Uh, oh, it's every other Thursday at 10
01:35:25.120
a.m., not every Thursday at 10 a.m. And they want to know about how the
01:35:32.320
open telemetry information that's available online is working for users and where it needs to be improved and
01:35:38.320
also kind of help troubleshoot or learn about use cases. Uh, the CNCF, the
01:35:45.040
Cloud Native Computing Foundation, is an organization that has a lot of different open source projects, including
01:35:51.520
Kubernetes. Um but open telemetry is one of those projects and inside of their
01:35:57.040
Slack workspace there is an OTel Ruby channel where you can also ask for support or ping on issues or PR reviews
01:36:05.440
that haven't been um looked at. Uh issues or discussions on the repos are
01:36:10.800
another great way to get in touch. Pull requests are welcome. Issues are welcome. My GitHub username is my name,
01:36:17.760
Kayla Reopelle. Um, so feel free to tag me on things if you have any questions or
01:36:23.199
need any support. Um yeah, so if you um what? Oh, okay. So
01:36:31.280
um with next steps here uh there is a new getting started app coming soon that will be available on the open telemetry
01:36:37.840
documentation website that will include some of those advanced concepts we didn't get to today like span links and
01:36:43.760
custom processors and the collector, stuff like that. So keep an eye out
01:36:49.760
for that documentation. And yeah that's um that's all I had.
01:36:55.360
This has been a long time um a long workshop. I appreciate you all staying here and hanging with me through it.
01:37:01.920
Does anyone have any questions um that you want to discuss? Yeah, kind of as a
01:37:07.440
group. Yes. So, the question was if you already have um you know something like
01:37:14.560
structured logs set up in your Rails app, how do you migrate to open telemetry?
01:37:19.679
What I've heard most people do is kind of keep both tools installed for a while and make sure that open telemetry is
01:37:25.920
solving your needs. Now, there can be some issues with monkey patching and
01:37:31.760
things like that, if that's how the logs are getting recorded and the same method is being patched by multiple
01:37:38.080
sources. But um hopefully that's not the case. So, just something to be aware of. But I think people will tend to deploy
01:37:44.719
both for a while, make sure that they can answer the questions that they want with the open telemetry data and then
01:37:50.239
slowly start to remove that from different services.
01:37:55.679
Any other questions? Okay. Well, I'll um
01:38:01.040
also hang out up here if you do have questions. I hope that um the internet worked well and that things were able to
01:38:08.000
be followed for the most part. Um, like I mentioned, you know, the repository
01:38:13.040
has a lot of open spaces for discussion too. We have an issue specifically for
01:38:18.320
metrics feedback and for logs feedback. So, if you are using those things and have um questions, that's a great place
01:38:24.320
to ask them. I would also love some feedback on this presentation if you're willing to um to
01:38:30.159
share. This QR code will take you to a Google form where you can answer a couple questions and let me know how I
01:38:36.800
did. So, yeah, I am really stoked that everyone came. It's so
01:38:42.480
awesome. So, uh enjoy the rest of the conference and yeah, thanks for your time.