

The Collector requires you to build a pipeline for each signal (traces, metrics, logs, etc.). Like the agent log collector, a Collector pipeline is a sequence of tasks: it starts with a receiver, continues with a chain of processors, and ends with one or more exporters that forward the measurements.

The OpenTelemetry Collector also provides extensions. They're generally used for implementing components that can be added to the Collector but don't require direct access to telemetry data.

Each pipeline step comprises an operator that's part of the Collector core or comes from the contrib repository. Every plugin supports one or more signals, so make sure that the one you'd like to use supports traces, metrics, or logs. In the end, you'll use the operators provided by the release of the Collector you deploy.

Designing a pipeline

Designing your pipeline is very simple. First, you need to declare your various receivers, processors, and exporters. The core Collector includes a few processors that modify the data before exporting it, for example by adding attributes, batching, or deleting data. No processors are enabled by default in your pipeline. Each processor supports all or only some data sources (traces, logs, etc.), and the order in which processors run is important.
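To make the declaration step concrete, here is a minimal sketch of a Collector configuration. It uses standard core/contrib components (the `otlp` receiver, the `memory_limiter`, `attributes`, and `batch` processors, and the `prometheus` and `otlp` exporters); the endpoints and the `environment` attribute are illustrative assumptions, not values from the original article:

```yaml
# Sketch of a Collector config: declare components first,
# then wire them into per-signal pipelines under `service`.
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  # Order matters: memory_limiter should run early, batch last.
  memory_limiter:
    check_interval: 1s
    limit_mib: 512
  attributes:
    actions:
      - key: environment   # illustrative attribute
        value: production
        action: insert
  batch:
    timeout: 5s

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"          # illustrative endpoint
  otlp:
    endpoint: "backend.example.com:4317"  # illustrative endpoint

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [prometheus]
    traces:
      receivers: [otlp]
      processors: [memory_limiter, attributes, batch]
      exporters: [otlp]
```

Note that declaring a component does not enable it: a receiver, processor, or exporter only takes effect once it is listed in a pipeline under `service.pipelines`, and the processors run in the order they are listed there.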

You'll only import standard OpenTelemetry libraries in your code and do all the vendor-specific transformation and export in the Collector, which keeps your code agnostic. It's recommended to deploy the Collector in agent mode to collect local measurements, and to use several Collectors to forward your measurements to your observability solutions.
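The agent-mode layout described above might look like this on each host: a local Collector receives telemetry over OTLP and forwards it to a central gateway Collector. This is a sketch under stated assumptions; the gateway endpoint is hypothetical:

```yaml
# Agent-mode Collector running on each host (sketch).
receivers:
  otlp:
    protocols:
      grpc:

processors:
  batch:

exporters:
  otlp:
    # Hypothetical gateway address; replace with your own.
    endpoint: "collector-gateway.internal:4317"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```

The application only ever talks to `localhost`, so swapping observability backends is a change to the gateway's configuration, not to your code.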
