Running a simple example
Creating a producer
For this simple example we will create a producer that adds a couple of jobs, but in bulk instead of one by one. This will help us demonstrate how spans are linked between the consumers and the producers:
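A sketch of such a producer could look like the following. The queue name "myQueue", the Redis connection settings, and the use of the "bullmq-otel" integration package are illustrative assumptions; adjust them to your setup:

```typescript
// producer.ts — a sketch, not a definitive implementation.
import { Queue } from "bullmq";
import { BullMQOtel } from "bullmq-otel";

const queue = new Queue("myQueue", {
  connection: { host: "127.0.0.1", port: 6379 },
  // Hooks BullMQ into OpenTelemetry so queue operations create spans.
  telemetry: new BullMQOtel("producer"),
});

async function main() {
  // Add 5 jobs in a single bulk operation, so that every consumer span
  // ends up linked to one "addBulk myQueue" producer span.
  await queue.addBulk(
    Array.from({ length: 5 }, (_, i) => ({
      name: "myJob",
      data: { index: i },
      // Allow one retry with a small backoff, so a failed job is delayed
      // and retried, producing the delay and retry spans.
      opts: { attempts: 2, backoff: { type: "fixed", delay: 1000 } },
    }))
  );
  await queue.close();
}

main().catch(console.error);
```

Note that the retry settings (attempts and backoff) are job options, so they are configured here on the producer side rather than on the worker.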
Creating a consumer
The consumer will be just a simple Worker instance. We will use a concurrency of 10, so that jobs can be processed concurrently and therefore create overlapping spans. We will also simulate job failures so that we get retries, to show how spans are generated as a job fails, gets retried, and finally completes:
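A minimal sketch of the consumer, under the same assumptions as the producer above (queue name, Redis connection, "bullmq-otel" integration). To force each job to fail exactly once without depending on version-specific attempt counters, this sketch tracks failed job ids in a local Set:

```typescript
// consumer.ts — a sketch, not a definitive implementation.
import { Worker } from "bullmq";
import { BullMQOtel } from "bullmq-otel";

// Remembers which jobs have already failed once, so every job fails
// on its first attempt and succeeds on the retry.
const failedOnce = new Set<string>();

const worker = new Worker(
  "myQueue",
  async (job) => {
    console.log(`Processing job ${job.id} with data`, job.data);
    if (job.id && !failedOnce.has(job.id)) {
      failedOnce.add(job.id);
      // Simulated failure: the job will be delayed (backoff) and retried.
      throw new Error("Simulated failure");
    }
    console.log(`Job ${job.id} completed`);
  },
  {
    connection: { host: "127.0.0.1", port: 6379 },
    concurrency: 10, // process jobs concurrently -> overlapping spans
    telemetry: new BullMQOtel("consumer"),
  }
);
```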
Creating the instrumentation files
To test the telemetry functionality we can run a simple example. For that we also need to instantiate the OpenTelemetry SDK using a so-called OpenTelemetry Protocol (OTLP) exporter.
We must install the following modules that are part of the OpenTelemetry SDK:
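The exact package set depends on the exporter you choose; for an OTLP-over-HTTP setup like the one sketched here, the installation could look like this:

```shell
npm install @opentelemetry/sdk-node @opentelemetry/exporter-trace-otlp-http
```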
And now we must create so-called "instrumentation" files. We will create one for our "producer" service, the service that actually takes care of producing jobs. Note that we use localhost (127.0.0.1), where our Jaeger service is running:
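A sketch of such an instrumentation file, assuming a local Jaeger instance listening on its default OTLP/HTTP port (4318):

```typescript
// producer.inst.otlp.ts — a sketch of an instrumentation file.
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const sdk = new NodeSDK({
  serviceName: "producer",
  traceExporter: new OTLPTraceExporter({
    // Jaeger's OTLP/HTTP endpoint on localhost.
    url: "http://127.0.0.1:4318/v1/traces",
  }),
});

sdk.start();
```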
Likewise we will create another instrumentation file for our "consumer" service, this is where the workers will run and consume the jobs produced by the "Queue" instance:
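Under the same assumptions as the producer instrumentation sketch, the consumer variant only changes the service name:

```typescript
// consumer.inst.otlp.ts — a sketch of an instrumentation file.
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const sdk = new NodeSDK({
  serviceName: "consumer", // the only difference from the producer file
  traceExporter: new OTLPTraceExporter({
    url: "http://127.0.0.1:4318/v1/traces",
  }),
});

sdk.start();
```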
Both services look basically the same; only the service name differs in this case.
Launching the services
In order to guarantee that the OpenTelemetry instrumentation runs first, before everything else, and performs any required internal patching (even though BullMQ does not rely on patching, other modules may do), we need to launch it like this (note that we use tsx in this example, but the Node runtime works as well):
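Assuming the file names used in the sketches above, the launch commands could look like this (the exact flags may vary with your tsx version):

```shell
npx tsx --import ./producer.inst.otlp.ts producer.ts
npx tsx --import ./consumer.inst.otlp.ts consumer.ts
```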
You can also use the Node runtime directly if you are using JavaScript (or building from TypeScript to JavaScript): node --import producer.inst.otlp.js producer.js
As the services are launched, we will see that the consumer starts processing the jobs and produces some logs on the console:
These are just the logs that we wrote ourselves in the "process" function of our worker, so nothing special here. However, if we go to Jaeger we will find the following:
We now have two services to choose from: consumer and producer. If we search for traces in the producer, we will see all the traces where the producer is involved:
Here we can see that even though we are searching for producer traces, we also get the consumer spans. This is because jobs are linked between producers and consumers, so we can trace all the way from the creation of a job to its final processing.
If we look into the consumer spans for example, there are some interesting things to see:
First of all, note how the producer span "addBulk myQueue" is the root of this trace. Since this was an addBulk, several jobs were added to the queue in one go, 5 in this case, so the spans created by the consumer are all linked to this one producer span. The consumer spans "process myQueue" are generated for every job that is processed, and since our concurrency factor was larger than 5, all 5 jobs are processed concurrently, which we can see in the spans all starting at the same time.
But we also forced each job to fail once, so that it would be retried after a small backoff (delay), which is why we can see a "delay myQueue" span followed by a final "process myQueue" span.
If we open the spans we can find other useful information:
We have some useful tags related to this particular job, and also logs that show events that happened during the span's lifetime; for instance, here we can see that the job failed with the given error message.
If we go to the last span of the trace, we can see that the job finally completed after being delayed a bit before its last retry: