Direct Config

Config File

The Grey Matter Proxy is first configured from a YAML configuration file on disk. This file accepts any configuration available in the Envoy Bootstrap Config File, as well as the additional Grey Matter filters made available through the SDK.

This file can be mounted into a container or specified via flags on the gm-proxy binary.

| Destination | Short Form | Long Form | Default |
| --- | --- | --- | --- |
| Envoy | -c | --config-file | config.yaml (current directory) |
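
For example, running the binary directly against a file in a non-default location might look like this (a sketch; the path is illustrative, and the binary is assumed to be on your PATH):

```sh
# Point the proxy at a bootstrap config outside the current directory
gm-proxy -c /etc/greymatter/config.yaml
```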

Base64 Encoding

Because mounting files directly is difficult on some container platforms, the Grey Matter Proxy also supports passing the Envoy configuration as a base64-encoded environment variable, which is then decoded and written to disk.

When using this mechanism, you must supply the full config, because it will entirely overwrite the default on disk.

| Variable | Default | Description |
| --- | --- | --- |
| ENVOY_CONFIG | "" | Base64-encoded string of the Envoy configuration file |
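
For example, you might generate the encoded value from a local file before starting the container (a sketch; note that base64 flags differ between GNU coreutils and macOS):

```sh
# GNU coreutils: -w0 disables line wrapping so the value stays on a single line
export ENVOY_CONFIG="$(base64 -w0 config.yaml)"
# On macOS, use: base64 -i config.yaml
```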

When to Use Direct Config

Using the template config discussed in the previous section is simpler and less error-prone. However, not everything is exposed via the template or the API, because new functionality is continuously added to both Envoy and the Grey Matter Proxy. You may therefore find that a setting you need is not yet available in the template or API; this is one situation where direct configuration is required. Additionally, because environment variables are flat, highly nested configurations are easier to specify and read in JSON or YAML. Finally, if you already run Envoy in your infrastructure, reusing the configuration files you already have lowers the conversion hurdle. Whatever the reason, Grey Matter provides a direct configuration mechanism.

Envoy fundamentally employs an eventual-consistency model. There are cases, however, where the order in which resources (e.g. endpoints, routes, clusters) are created or modified is crucial. For these cases, Envoy provides the Aggregated Discovery Service (ADS), which coalesces all resource updates into a single stream to a single management server. If you want this behavior from startup, supplying it via direct configuration is the most straightforward and efficient approach. Perhaps the easiest way to think of direct configuration is as a way to provide the bootstrap configuration in its native format, whether you use it for static or dynamic configuration.
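
As an illustration, an ADS-enabled bootstrap might contain a stanza like the following (a minimal sketch, not taken from the Grey Matter docs; the xds_cluster name and the existence of a matching static cluster for the management server are assumptions):

```yaml
dynamic_resources:
  ads_config:
    api_type: GRPC
    grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster  # assumed static cluster pointing at the management server
  # Listeners and clusters are then fetched over the single ADS stream
  lds_config:
    ads: {}
  cds_config:
    ads: {}
```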

Here is an example configuration:

```yaml
static_resources:
  listeners:
    - name: ingress
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 8443
      filter_chains:
        - filters:
          - name: envoy.http_connection_manager
            config:
              idle_timeout: 1s
              forward_client_cert_details: sanitize_set
              set_current_client_cert_details:
                  uri: true
              codec_type: AUTO
              access_log:
                - name: envoy.file_access_log
                  config:
                    path: "/dev/stdout"
              stat_prefix: ingress
              route_config:
                name: local
                virtual_hosts:
                  - name: local
                    domains: ["*"]
                    routes:
                      - match:
                          prefix: "/"
                        route:
                          cluster: local
              http_filters:
                - name: gm.metrics
                  typed_config:
                    "@type": type.googleapis.com/foo.gm_proxy.filters.MetricsConfig
                    metrics_port: 8080
                    metrics_host: 0.0.0.0
                    metrics_dashboard_uri_path: /metrics
                    metrics_prometheus_uri_path: /prometheus
                    prometheus_system_metrics_interval_seconds: 15
                    metrics_ring_buffer_size: 4096
                    metrics_key_function: depth
                    metrics_key_depth: "2"
                - name: envoy.router
          tls_context:
            common_tls_context:
              tls_certificate_sds_secret_configs:
                - name: "spiffe://foo.com/ns/fabric/sa/api"
                  sds_config:
                    api_config_source:
                      api_type: GRPC
                      grpc_services:
                        envoy_grpc:
                          cluster_name: spire
              tls_params:
                ecdh_curves:
                  - X25519:P-256:P-521:P-384
  clusters:
    - name: local
      connect_timeout: 0.25s
      type: STATIC
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: local
        endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: 127.0.0.1
                    port_value: 10080
    - name: spire
      connect_timeout: 0.25s
      http2_protocol_options: {}
      type: STATIC
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: spire
        endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  pipe:
                    path: /run/spire/sockets/agent.sock
admin:
  access_log_path: /dev/stdout
  address:
    socket_address:
      address: 127.0.0.1
      port_value: 8001
```

You can provide this either as a file or as a base64-encoded environment variable to your sidecar. If you are using Kubernetes, you can create a ConfigMap with the content above, and when you create the Deployment or StatefulSet for your service, pass that information to the Grey Matter Proxy like this:

```yaml
apiVersion: apps/v1
kind: StatefulSet
spec:
  serviceName: api
  template:
    metadata:
      ...
    spec:
      serviceAccount: api
      containers:
        - name: sidecar
          image: "docker.greymatter.io/release:1.4.5-alpine"
          imagePullPolicy: IfNotPresent
          args:
            - -c
            - /etc/greymatter/config.yaml
          command:
            - /app/gm-proxy
          ports:
            ...
          volumeMounts:
            - name: sidecar-config
              mountPath: /etc/greymatter
              readOnly: true
      ...
      volumes:
        - name: sidecar-config
          configMap:
            name: api-sidecar
```

As you can see, the config.yaml file is mounted under /etc/greymatter/, and that location is passed to the proxy via the -c /etc/greymatter/config.yaml argument, as discussed above.
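
For completeness, here is one way the api-sidecar ConfigMap referenced above might be created (a sketch, assuming the bootstrap configuration shown earlier is saved locally as config.yaml):

```sh
# Creates a ConfigMap whose single key, config.yaml, holds the bootstrap file;
# the volume mount then exposes it at /etc/greymatter/config.yaml
kubectl create configmap api-sidecar --from-file=config.yaml
```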
