Tracing
Tracing can be set up to monitor and track requests, optimize performance and latency, improve observability, and perform root cause and service dependency analysis. Grey Matter supports Envoy's tracing capabilities for the visualization of call flows.
| Environment Variable | Description | Type | Default |
| --- | --- | --- | --- |
| `tracing_enabled` | Indicates whether or not to enable tracing | bool | `false` |
| `tracing_address` | The host for the trace collector server | string | `"localhost"` |
| `tracing_port` | The port for the trace collector server | int | `9411` |
| `tracing_collector_endpoint` | The endpoint on the tracing server to send spans | string | `/api/v1/spans` |
| `tracing_use_tls` | Use TLS to connect to the trace collector server | bool | `false` |
| `tracing_ca_cert_path` | The path to the CA certificate | string | `./certs/egress_intermediate.crt` |
| `tracing_cert_path` | The path to the certificate file | string | `./certs/egress_localhost.crt` |
| `tracing_key_path` | The path to the key file | string | `./certs/egress_localhost.key` |
| Attribute | Description | Type | Default |
| --- | --- | --- | --- |
| `ingress` | Whether the listener traces incoming (`true`) or outgoing (`false`) traffic | boolean | `true` |
| `request_headers_for_tags` | Headers to convert into trace tags | []string | `null` |
The boolean value set for `ingress` determines the `operation_name` value in the Envoy HTTP connection manager tracing configuration. By default in both Grey Matter and Envoy, `ingress` is `true` and `"operation_name": "INGRESS"`. If `ingress` is set to `false`, `operation_name` will be `"operation_name": "EGRESS"`. This determines the traffic direction of the trace.
The `request_headers_for_tags` field takes a list of header names to create tags for the active span. By default it is `null`, and no tags are configured. If values are configured, a tag is created for each configured header that is present in the request's headers, with the header name used as the tag name and the header value used as the tag value in a span.
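As an illustration, a `tracing_config` that tags ingress spans with two request headers might look like the following sketch (the header names `x-request-id` and `user-agent` are example choices, not required values):

```json
{
  "tracing_config": {
    "ingress": true,
    "request_headers_for_tags": ["x-request-id", "user-agent"]
  }
}
```

With this configuration, a request that carries an `x-request-id` header would produce a span tagged with that header's name and value; a request without it simply gets no such tag.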
The example below uses a Kubernetes deployment only as a reference. The key points from the example are the `TRACING_*` environment variables that are set.
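A sketch of what the relevant part of such a deployment might look like (the deployment name, image, and collector address are placeholders; the `TRACING_*` variables mirror the environment variable table above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service        # placeholder name
spec:
  template:
    spec:
      containers:
        - name: sidecar
          image: example/sidecar:latest   # placeholder image
          env:
            - name: TRACING_ENABLED
              value: "true"
            - name: TRACING_ADDRESS
              value: "jaeger-collector.tracing.svc"   # placeholder collector host
            - name: TRACING_PORT
              value: "9411"
            - name: TRACING_COLLECTOR_ENDPOINT
              value: "/api/v1/spans"
```

If `TRACING_USE_TLS` were set to `"true"`, the three `TRACING_*_PATH` certificate variables from the table would also be set here.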
The listener object with tracing_config
set will look something like the following:
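A sketch of such a listener object (only tracing-related fields shown; other listener fields are elided, and the exact schema may vary by Grey Matter version):

```json
{
  "listener_key": "example-service-listener",
  "tracing_config": {
    "ingress": true,
    "request_headers_for_tags": ["x-request-id"]
  }
}
```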
The traces will be sent to the server, and a trace object in JSON will look something like:
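As an illustration, a Zipkin-v1-style span, which is the format implied by the default `/api/v1/spans` endpoint, might look roughly like the following (IDs, timestamps, and values here are made-up placeholders):

```json
[
  {
    "traceId": "0000000000000001",
    "id": "0000000000000002",
    "name": "ingress",
    "timestamp": 1580000000000000,
    "duration": 1500,
    "binaryAnnotations": [
      { "key": "x-request-id", "value": "abc123" }
    ]
  }
]
```

Note how a header configured in `request_headers_for_tags` surfaces as a tag (a binary annotation in Zipkin v1 terms) on the span.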
And below is what the Jaeger trace timeline looks like for a trace:
To properly set up tracing in Grey Matter, there must be a tracing server running with a known address and port at the time the Sidecar is deployed. The Sidecar takes a series of run-time environment variables to set up a static cluster pointing at the tracing server and to configure its tracing settings. Then, the listener object takes a tracing configuration to configure specifics about the information in the spans.
Tracing can then be configured using the `tracing_config` field of any listener object. Properly configured, the Sidecar will send spans to the trace collector server with information on the request and its path through the mesh.
Use TLS to connect to the trace collector server. If `true`, `tracing_ca_cert_path`, `tracing_cert_path`, and `tracing_key_path` should be set.
Set in the listener's `tracing_config`.
Once the Grey Matter Sidecar is configured to talk to a trace server, the `tracing_config` on the Grey Matter listener for the desired service configures the mesh to start sending traces to this server.
The values configured in this field will be used to set the tracing configuration in Envoy and to configure specifics about the spans sent to the server.
For a walkthrough example using Docker and Jaeger, see the tracing walkthrough. The Jaeger dashboard, when set up for tracing using this walkthrough, looks like the following screenshot: