Access Sidecar Admin Interface in Grey Matter on Kubernetes
Because the Sidecar admin interface can contain sensitive information and allows users to manipulate the sidecar, access is not routed through the mesh. In a Grey Matter deployment, it is locked down to admin users who have direct access to the pod.
This guide is a step-by-step walkthrough of how to access a Sidecar's admin interface. The interface is used to both inspect and operate on the running server, for tasks like checking stats or performing a hard shutdown. In this walkthrough, we'll work through several of the most common tasks.
Prerequisites
An existing Grey Matter deployment running on Kubernetes (tutorial)
kubectl or oc set up with access to the cluster
NOTE: the operations on the admin server do not depend on the platform in use. Since this guide is focused on Kubernetes, we'll use kubectl to access the server. On other platforms, tools like ssh can be used instead.
Steps
For all examples shown here, the Sidecar's admin server has been started on port 8001. This is the default for Grey Matter deployments.
1. Establish a Session
The first step is to establish a session inside the pod we need to examine. For this walkthrough, let's look at the edge service. To do this, we'll need the pod ID of any edge node in the deployment:
$ kubectl get pods | grep edge
edge-7d7bf848b9-xjs5l 1/1 Running 0 114m
We can see we have one edge pod with an ID of edge-7d7bf848b9-xjs5l. We'll now use kubectl to start a shell inside that pod. After running the command below, you'll be dropped into a shell inside the container.
$ kubectl exec -it edge-7d7bf848b9-xjs5l -- sh
/app $
Now that we're in the pod, we can run commands against the admin interface. Here we'll use curl to send requests, since it's widely available and installed by default on Grey Matter Docker images.
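If you'd prefer not to open an interactive shell at all, another option is to forward the admin port to your workstation and query it locally. This is a minimal sketch, assuming your environment permits port-forwarding to the pod; the pod name and port match the example above, and the output shown is illustrative.
$ kubectl port-forward edge-7d7bf848b9-xjs5l 8001:8001
Forwarding from 127.0.0.1:8001 -> 8001
Then, from a second local terminal:
$ curl localhost:8001/ready
LIVE
The rest of this guide runs curl from inside the pod, but every command shown works the same way over a port-forward.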
2. General Admin Commands
To see the full list of endpoints available, send a GET request to the /help endpoint. Full descriptions of each can be found in the Envoy docs. In the next two sections, we'll show sample usage of the /stats and /config_dump endpoints.
/app $ curl localhost:8001/help
admin commands are:
/: Admin home page
/certs: print certs on machine
/clusters: upstream cluster status
/config_dump: dump current Envoy configs (experimental)
/contention: dump current Envoy mutex contention stats (if enabled)
/cpuprofiler: enable/disable the CPU profiler
/drain_listeners: drain listeners
/healthcheck/fail: cause the server to fail health checks
/healthcheck/ok: cause the server to pass health checks
/heapprofiler: enable/disable the heap profiler
/help: print out list of admin commands
/hot_restart_version: print the hot restart compatibility version
/listeners: print listener info
/logging: query/change logging levels
/memory: print current allocation/heap usage
/quitquitquit: exit the server
/ready: print server state, return 200 if LIVE, otherwise return 503
/reset_counters: reset all counters to zero
/runtime: print runtime values
/runtime_modify: modify runtime values
/server_info: print server version/status information
/stats: print server stats
/stats/prometheus: print server stats in prometheus format
/stats/recentlookups: Show recent stat-name lookups
/stats/recentlookups/clear: clear list of stat-name lookups and counter
/stats/recentlookups/disable: disable recording of reset stat-name lookup names
/stats/recentlookups/enable: enable recording of reset stat-name lookup names
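As a quick example of one of these general commands, /server_info reports the version and current state of the server. The exact fields vary by Envoy version, so the output below is abbreviated and illustrative.
/app $ curl localhost:8001/server_info
{
 "version": "...",
 "state": "LIVE",
 ...
}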
3. Stats
The /stats endpoint outputs all of the collected statistics. The first ~30 lines are shown below, but the full output is much larger and depends on the exact configuration of the sidecar being examined. Each filter, listener, or cluster that is configured adds new stats, and the stats also differ by connection type, e.g. HTTP or MongoDB.
/app $ curl localhost:8001/stats
cluster.catalog.assignment_stale: 0
cluster.catalog.assignment_timeout_received: 0
cluster.catalog.bind_errors: 0
cluster.catalog.circuit_breakers.default.cx_open: 0
cluster.catalog.circuit_breakers.default.cx_pool_open: 0
cluster.catalog.circuit_breakers.default.rq_open: 0
cluster.catalog.circuit_breakers.default.rq_pending_open: 0
cluster.catalog.circuit_breakers.default.rq_retry_open: 0
cluster.catalog.circuit_breakers.high.cx_open: 0
cluster.catalog.circuit_breakers.high.cx_pool_open: 0
cluster.catalog.circuit_breakers.high.rq_open: 0
cluster.catalog.circuit_breakers.high.rq_pending_open: 0
cluster.catalog.circuit_breakers.high.rq_retry_open: 0
cluster.catalog.client_ssl_socket_factory.downstream_context_secrets_not_ready: 0
cluster.catalog.client_ssl_socket_factory.ssl_context_update_by_sds: 7
cluster.catalog.client_ssl_socket_factory.upstream_context_secrets_not_ready: 0
cluster.catalog.control_plane.connected_state: 1
cluster.catalog.control_plane.pending_requests: 0
cluster.catalog.control_plane.rate_limit_enforced: 0
cluster.catalog.default.total_match_count: 0
cluster.catalog.external.upstream_rq_503: 6
cluster.catalog.external.upstream_rq_5xx: 6
cluster.catalog.external.upstream_rq_completed: 6
cluster.catalog.init_fetch_timeout: 0
cluster.catalog.lb_healthy_panic: 6
...
To query just a subset of the stats, use the filter query parameter:
/app $ curl localhost:8001/stats?filter=cluster.prometheus
cluster.prometheus.assignment_stale: 0
cluster.prometheus.assignment_timeout_received: 0
cluster.prometheus.bind_errors: 0
cluster.prometheus.circuit_breakers.default.cx_open: 0
cluster.prometheus.circuit_breakers.default.cx_pool_open: 0
cluster.prometheus.circuit_breakers.default.rq_open: 0
cluster.prometheus.circuit_breakers.default.rq_pending_open: 0
cluster.prometheus.circuit_breakers.default.rq_retry_open: 0
cluster.prometheus.circuit_breakers.high.cx_open: 0
cluster.prometheus.circuit_breakers.high.cx_pool_open: 0
cluster.prometheus.circuit_breakers.high.rq_open: 0
cluster.prometheus.circuit_breakers.high.rq_pending_open: 0
cluster.prometheus.circuit_breakers.high.rq_retry_open: 0
cluster.prometheus.client_ssl_socket_factory.downstream_context_secrets_not_ready: 0
cluster.prometheus.client_ssl_socket_factory.ssl_context_update_by_sds: 9
cluster.prometheus.client_ssl_socket_factory.upstream_context_secrets_not_ready: 0
cluster.prometheus.control_plane.connected_state: 1
cluster.prometheus.control_plane.pending_requests: 0
cluster.prometheus.control_plane.rate_limit_enforced: 0
cluster.prometheus.default.total_match_count: 2
cluster.prometheus.external.upstream_rq_200: 26
cluster.prometheus.external.upstream_rq_2xx: 26
cluster.prometheus.external.upstream_rq_301: 1
cluster.prometheus.external.upstream_rq_302: 1
cluster.prometheus.external.upstream_rq_3xx: 2
...
or filter on a stat name to compare it across clusters and listeners:
/app $ curl localhost:8001/stats?filter=ssl.handshake
cluster.catalog.ssl.handshake: 0
cluster.control-api.ssl.handshake: 0
cluster.dashboard.ssl.handshake: 5
cluster.data-internal.ssl.handshake: 0
cluster.internal-jwt-security.ssl.handshake: 0
cluster.jwt-security.ssl.handshake: 1
cluster.prometheus.ssl.handshake: 6
cluster.slo.ssl.handshake: 1
listener.0.0.0.0_10808.ssl.handshake: 6
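The admin interface can also change server state, not just report it. For example, to zero all counters after a test run, hit the /reset_counters endpoint. Recent Envoy versions require state-changing endpoints like this one to be called with POST, so the form below is a reasonable default, but verify it against the Envoy version in your deployment.
/app $ curl -X POST localhost:8001/reset_counters
OK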
4. Configuration Dump
Next we'll dump out the entire configuration of the Sidecar. The output of this command routinely runs to many thousands of lines, but it is very useful for inspecting the exact state of a sidecar at any given moment and for verifying the sidecar's actual behavior against what was intended.
/app $ curl localhost:8001/config_dump
{
"configs": [
{
"@type": "type.googleapis.com/envoy.admin.v3.BootstrapConfigDump",
"bootstrap": {
"node": {
"id": "default",
"cluster": "edge",
"locality": {
"region": "default-region",
"zone": "zone-default-zone"
},
"hidden_envoy_deprecated_build_version": "a8507f67225cdd912712971bf72d41f219eb74ed/1.13.3/Modified/DEBUG/BoringSSL",
"user_agent_name": "envoy",
"user_agent_build_version": {
"version": {
"major_number": 1,
"minor_number": 13,
"patch": 3
},
"metadata": {
"revision.status": "Modified",
"revision.sha": "a8507f67225cdd912712971bf72d41f219eb74ed",
"build.type": "DEBUG",
"ssl.version": "BoringSSL"
}
},
"extensions": [
{
"name": "envoy.grpc_credentials.aws_iam",
"category": "envoy.grpc_credentials"
},
{
"name": "envoy.grpc_credentials.default",
"category": "envoy.grpc_credentials"
},
{
"name": "envoy.grpc_credentials.file_based_metadata",
"category": "envoy.grpc_credentials"
},
{
"name": "envoy.health_checkers.redis",
"category": "envoy.health_checkers"
},
{
"name": "envoy.dog_statsd",
"category": "envoy.stats_sinks"
},
{
"name": "envoy.metrics_service",
"category": "envoy.stats_sinks"
},
{
"name": "envoy.stat_sinks.hystrix",
"category": "envoy.stats_sinks"
},
{
...
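Because the dump is so large, it is often easier to capture it to a file on your workstation and inspect it there. A minimal sketch, run from your local shell rather than from inside the pod (the output filename is arbitrary, and this assumes curl is present in the image, as noted above):
$ kubectl exec edge-7d7bf848b9-xjs5l -- curl -s localhost:8001/config_dump > edge-config-dump.json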
5. Exit
Type exit in the terminal to drop the connection to the pod and return to your local shell.
/app $ exit