Grey Matter Control performs service discovery and distribution of configuration in the Grey Matter service mesh. Control provides policy and configuration for all running sidecars within the Grey Matter platform.
The Control server works in conjunction with the Grey Matter Control API to manage, maintain, operate, and govern the Grey Matter hybrid mesh platform.
The Control server performs service discovery in the mesh and acts as an xDS server to which all proxies connect.
xDS is the generic name for the following:
Endpoints (EDS)
Clusters (CDS)
Routes (RDS)
Listeners (LDS)
Service Discovery is the way Grey Matter dynamically adds and removes instances of each microservice. Discovery adds the initial instances that come online, and modifies the mesh to react to any scaling actions that happen. To keep flexibility in the Grey Matter platform, the Control server supports a number of different service discovery options and platforms.
One key benefit to a service mesh is the dynamic handling of ephemeral service nodes. These nodes have neither consistent IP addresses nor consistent numbers of instances as services are spun up and down. The gm-control-api server, in conjunction with Grey Matter Control, can handle these ephemeral services automatically.
The ability to automatically populate instances of a particular microservice comes from the cluster object. In particular, the name field in the cluster object determines which nodes will be pulled out of the mesh and populated in the instances array. In the example below, the name is catalog. This means that all services that announce as catalog in service discovery will be found and populated into the instances array after creation.
Create the following object:
Will be populated in the mesh as:
Even though the object was created with no instances, they were discovered from the mesh and populated. Now any service that needs to talk to catalog can link to this cluster and address all live instances.
Each proxy in the mesh is connected to the control plane through a gRPC stream to the Grey Matter Control server. Though gm-control-api houses all the configuration for the mesh, it's ultimately gm-control that turns these configs into full Envoy configuration objects and sends them to the proxies.
The configuration in the Control API is mapped to physical proxies by the name field in the proxy API object. It's very important that this field exactly match the service-cluster identifier that the intended target proxy used when registering with gm-control.
In the example below, the proxy object, and all other objects linked by their appropriate keys, will be turned into a full Envoy configuration and sent to any proxies that announce with the cluster name catalog.
Services that announce as:
Will receive the config from the object below, because XDS_CLUSTER == name and both are in the same zone.
Have a question about the Grey Matter Control Plane? Reach out to us at info@greymatter.io to learn more.
Explore Grey Matter's design and learn how it works.
Grey Matter consists of a control plane, data plane, mesh, telemetry-powered business intelligence, and AI. It can be deployed on multiple cloud-native or legacy infrastructures without placing predetermined downstream requirements on existing investments.
Learn about the inner workings of Grey Matter's three core components: Fabric, Data, and Sense.
Grey Matter enables unified hybrid microservice deployments and hybrid/multi-cloud operations without special requirements for underlying infrastructure like containers or container platforms regardless of cloud vendor, PaaS or infrastructure.
Access Logging (ALS)
Aggregate (ADS)
Envoy v1 CDS/SDS (beta)
Envoy v2 CDS/EDS (beta)
The cluster object as created:

```json
{
  "zone_key": "default-zone",
  "cluster_key": "catalog-proxy",
  "name": "catalog",
  "instances": [],
  "circuit_breakers": {
    "max_connections": 500,
    "max_requests": 500
  },
  "outlier_detection": null,
  "health_checks": []
}
```

As populated after discovery:

```json
{
  "cluster_key": "catalog-proxy",
  "zone_key": "default-zone",
  "name": "catalog",
  "secret": {
    "secret_key": "",
    "secret_name": "",
    "secret_validation_name": "",
    "subject_names": null,
    "ecdh_curves": null,
    "set_current_client_cert_details": {
      "uri": false
    },
    "checksum": ""
  },
  "instances": [
    {
      "host": "10.128.2.183",
      "port": 9080,
      "metadata": [
        { "key": "pod-template-hash", "value": "2000163809" },
        { "key": "gm_k8s_host_ip", "value": "10.0.2.132" },
        { "key": "gm_k8s_node_name", "value": "ip-10-0-2-132.ec2.internal" }
      ]
    },
    {
      "host": "10.128.2.140",
      "port": 9080,
      "metadata": [
        { "key": "pod-template-hash", "value": "475497808" },
        { "key": "gm_k8s_host_ip", "value": "10.0.2.82" },
        { "key": "gm_k8s_node_name", "value": "ip-10-0-2-82.ec2.internal" }
      ]
    }
  ],
  "circuit_breakers": {
    "max_connections": 500,
    "max_pending_requests": null,
    "max_retries": null,
    "max_requests": 500
  },
  "outlier_detection": null,
  "health_checks": [],
  "checksum": "2b6d2a8a6886eb30574f16480b0f99b90e11484d9ddb10fb7970c3ce37d945ab"
}
```

Sidecar announcement values:

```
XDS_CLUSTER=catalog
XDS_REGION=default-zone
```

The proxy object:

```json
{
  "proxy_key": "catalog-proxy",
  "zone_key": "default-zone",
  "name": "catalog",
  "domain_keys": ["catalog"],
  "listener_keys": ["catalog-listener"],
  "listeners": null,
  "active_proxy_filters": ["gm.metrics"],
  "proxy_filters": {
    "gm_impersonation": {},
    "gm_observables": {},
    "gm_oauth": {},
    "gm_inheaders": {},
    "gm_listauth": {},
    "gm_metrics": {
      "metrics_port": 8081,
      "metrics_host": "0.0.0.0",
      "metrics_dashboard_uri_path": "/metrics",
      "metrics_prometheus_uri_path": "/prometheus",
      "prometheus_system_metrics_interval_seconds": 15,
      "metrics_ring_buffer_size": 4096,
      "metrics_key_function": "depth"
    }
  }
}
```

Contact us at info@greymatter.io to discuss your specific use case.
Learn about the major components in the Grey Matter ecosystem.
Use the concepts in this document alongside our Guides when deploying Grey Matter in production.
Grey Matter is composed of Fabric, Data, and Sense. Internal to each component is a series of microservices that offers several core features. Each feature simplifies technical challenges associated with service management, such as:
Announcement
Discovery
Instrumentation
Logging
The following diagram shows the workload distribution between Grey Matter's core components.
Fabric powers the zero-trust hybrid service mesh, which consists of the Control, Control API, Edge, and Sidecar components. You can use Fabric to connect services regardless of language, framework, or runtime environment.
Secure network fabrics provide bridge points, observability, routing, policy assertion, and more between on-premise, multi-cloud, and multi-PaaS capabilities. Fabric offers workload distribution and management within a hybrid environment.
Grey Matter supports multiple runtime environments with multi-mesh bridges as shown below. These environments include:
Multiple cloud providers (e.g., AWS and Azure)
Container management solutions (e.g., Kubernetes, OpenShift, and ECS)
On-premise infrastructure
Grey Matter gives you the flexibility to deploy the mesh to suit your environment. Learn more about our supported deployment models later in this document.
Fabric operates at layers 3 (network), 4 (transport), and 7 (application) simultaneously, providing a powerful, performant, and unified platform to run, manage, connect, and distribute workloads across a hybrid architecture.
Layer 3 operates at the IP level. It is responsible for transferring data packets from one host to another using IP addresses, determining the most suitable route from source to destination. At this level, network segmentation can be performed using ABAC, RBAC, and NGAC policies set within each sidecar. More details can be found in the segmentation discussion later in this document.
Layer 4 coordinates data transfer between clients and hosts, adding load balancing, rate limiting, discovery, health checks, observability, and more on top of TCP/IP. Layers 3 and 4 alone live within the TCP/IP space and cannot make routing decisions based on different URLs to backend systems or services. This is where layer 7 comes into the architecture.
Layer 7 sits at the top of the OSI model, interacting directly with services and applications responsible for presenting data to users. HTTP requests and responses accessing services, webpages, images, data, etc. are layer 7 actions.
Grey Matter Fabric offers a fast, simple, and elegant model to build modern architecture while bridging legacy applications.
The following graphic shows Fabric's basic capabilities (access, routing decisions, rate limits, health checks, discoverability, observability, proxying, and network and micro-segmentation) and how they leverage the features found within each of the OSI layers described above.
Grey Matter Edge handles traffic flowing through the mesh. Multiple edge nodes can be configured depending on throughput needs or regulatory requirements that call for segmented routing or security policy rules.
Traffic flow management in and out of the hybrid mesh.
Hybrid cloud jump points.
Load balancing and protocol control.
Edge OAuth security.
Automatic discovery throughout your hybrid mesh.
Templated static or dynamic sidecar configuration.
Telemetry and observable collection and aggregation.
Neural net brain.
Grey Matter Fabric offers the following security features:
Verifies that tokens presented by the invoking service are trusted for such operations.
Performs operations on behalf of a trusted third party within the Hybrid Mesh.
Add Grey Matter to services by deploying a sidecar proxy throughout your environment. This sidecar intercepts all network communication between microservices.
The Grey Matter Sidecar offers the following capabilities:
Multiple protocol support.
Observable events for all traffic and content streams.
Filter SDK.
Certified, Tested, Production-Ready Sidecars.
Once you've deployed the Grey Matter Sidecar, you can configure and manage Grey Matter with its control plane functionality.
Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic
Fine-grained control of traffic behavior with rich routing rules, retries, failover, and fault injection
A policy layer and configuration API supporting access controls, rate limits and quotas
Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress
Example
The following diagram shows how the Grey Matter Sidecar would operate in a North/South traffic pattern.
Grey Matter Data is an API that enables secure and flexible access control for your microservices. Data consists of Grey Matter Data and JWT server, and includes an API Explorer to help you manage the API.
Grey Matter Sense consists of four primary components: Intelligence 360, Service Level Objectives, Business Impact, and Catalog.
Intelligence 360 is our user dashboard that paints a high-level picture of the service mesh. Intelligence 360 includes the following features:
Mesh Overview
Running state of all services
Search, sort and filter options
Historical metrics per service
Grey Matter Service Level Objectives (SLOs) allows users to manage objectives towards service-level agreements. These objectives can be internal to business operations or made between a company and its customers. They are generic and are valuable in more than one use case.
Key Definition
SLOs are simply service performance objectives associated with metrics collected by the mesh, such as memory usage and request traffic (request rate, error rate, and latency).
SLOs combine with Intelligence 360 time-series charts to visualize warning and violation thresholds for targeted performance analysis. These objectives are used even further to train Sense AI for service scaling recommendations.
Business Impact allows users to set metadata on services with the goal of associating how critical a service is towards the operations of a company, mission, or customer. Business Impact provides a list of values (Critical, High, Medium, Low) that correlates each service's business impact. Sense lets users of Intelligence 360 configure these values themselves, which can be used to filter and search via the mesh overview.
Catalog acts as an interface between the data plane (the network of sidecars) of the service mesh and Intelligence 360. Catalog provides a user-focused representation of the mesh.
Want to learn more about Grey Matter Sense? Contact us at info@greymatter.io to discuss your use case.
Create an account at to reach our team.
In the simple deployment model, there is a single point of ingress between the Client and the Grey Matter Edge. The Edge routes traffic to appropriate Grey Matter Sidecars.
The multi-mesh deployment model extends the simple deployment model by allowing each mesh egress to communicate with another mesh ingress using mTLS, as shown below.
Have a question about Grey Matter's deployment models? Reach out to us at info@greymatter.io to learn more.
Tracing
Troubleshooting
Encryption
Access control
Network/micro/data-segmentation
The proxy layer orchestrates communications between microservices operating in the mesh to provide reliability, visibility, and security.
API for advanced control.
Native support for gRPC, HTTP/1, HTTP/2, and TCP.
TCP runs on top of the IP protocol (layer 3, the network layer)
Secure service-to-service communication in a cluster with strong identity-based authentication and authorization
Real-time metrics per service instance
Service instance drill down
Metrics explorer
Service configuration
Business impact
SLO
Sidecar settings
The Grey Matter Sidecar is an L7 reverse proxy based on the popular open-source Envoy proxy. Grey Matter's proxy enhances Envoy's base capabilities with custom filters, logic, and the ability for developers to write full-featured Envoy filters in Go.
The primary use of the Grey Matter Sidecar is to act as the distributed network of proxies in the Grey Matter Fabric service mesh. In this use case, each proxy starts out with a very simple configuration, which the control plane then modifies to suit the changing needs of the network. The documentation here focuses on the individual proxy itself: low-level configuration, filter specifications, and so on.
At the level of the individual service, event auditing works as follows:
The proxy collects all events that occur on the individual service.
At the Edge, the PKI certificate is extracted.
The user that has accessed the service from outside Fabric is then decomposed based on one of the observable fields emitted by the Sidecar proxy.
This information, coupled with IP address information from the originating request, is added to the stack of the xForwardedForIp field.
At the service-to-service level, the sidecar tracks service-to-service calls within Fabric. This enables architecture inference and service dependency observation.
Grey Matter also has an observable indexer which can capture geolocation info and move it into Elasticsearch. Customizable event mappings are also available. These can be tailored per individual route so that a POST request may result in an EventAccess event in one route, while resulting in EventCreate on another.
Have a question about the Grey Matter Sidecar? Reach out to us at info@greymatter.io to learn more.
Explore Grey Matter's design principles.
Grey Matter is a zero-trust hybrid mesh platform built using open architecture and mesh app and service architecture (MASA) principles.
Each microservice within Grey Matter runs and scales independently to improve secure interoperations, resiliency, continuity of operations, and insight for your business. Our omnichannel support provides rich, fluid, and dynamic connections between people, content, devices, processes, and services.
Combine Grey Matter with today’s languages and powerful frameworks to write business services faster than ever.
Our core architecture principles support the following business needs.
Designed to operate using a zero-trust threat model to ensure each service running within a Grey Matter enabled hybrid mesh is appropriately secured, observed, and managed.
Enable on-premise, multi-cloud, and multi-platform as a service (PaaS) runtime environments.
Built with elasticity, high availability, and cloud computing models in mind - provides a unified mesh platform to build applications as microservices, utilize container management solutions, and dynamically orchestrate workloads across hybrid enterprise.
Provides a solid foundation to scale with the growth of your business. Enabling modern architectural patterns supporting rapid increase or decrease in traffic volume, maintaining business insight for effectiveness and efficiency, and aiding in the reduction of bottlenecks when the time matters.
Modular service delivery - enabling loosely coupled systems and services developed independent of each other, taking advantage of continuous delivery to achieve reliability and faster time to market.
Creates a secure unified zero-trust network fabric, allowing systems to interchangeably serve or receive services from other systems providing enterprises the ability to perform multi-environment segmentation and observe traffic flows between environments. Managed through a runtime environment agnostic Grey Matter Control API.
Able to react to digital business changes, providing a pathway enabling business insight, security, and connectivity across multiple environments reducing complexities while increasing and facilitating a business's digital transformation journey.
Use of artificial intelligence (AI) techniques simplifying and assisting a user experience while providing business insight and fleet wide management across Grey Matter connected resources.
Integration into any ecosystems and end-to-end automation through the lifecycle of the mesh app and service architecture.
Have a question about Grey Matter's design principles? Reach out to us at info@greymatter.io to learn more.
Grey Matter Data is a microservice for the versioned and encrypted storage of media blobs and assets. It is a high-performance, time-series object store: it contains an immutable sequence of events which collectively describes a file system at a given moment in time. If the first event is the creation of an object and the second is its deletion, then the object will not appear in listings as of the current time, but it will appear as of any time after the creation and before the deletion.
Have a question about Grey Matter's platform services? Reach out to us at info@greymatter.io to learn more.
git installed
helm v3
envsubst (a dependency of our helm charts)
eksctl or an already running Kubernetes cluster.
NOTE: if you already have a Kubernetes cluster up and running, move to step 2. Just verify you can connect to the cluster with a command like
kubectl get nodes
For this deployment, we'll use eksctl to automatically provision a Kubernetes cluster for us. eksctl will use our preconfigured AWS credentials to create master nodes and worker nodes to our specifications, and will leave us with kubectl configured to manipulate the cluster.
The region, node type and size, node count, and so on can all be tuned to your use case; the values given are simply examples.
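As a sketch, a cluster definition passed to eksctl might look like the following. The cluster name, region, instance type, and node count are illustrative, not prescribed by this guide.

```yaml
# Illustrative eksctl ClusterConfig -- tune every value to your environment.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: greymatter-demo   # hypothetical cluster name
  region: us-east-1
nodeGroups:
  - name: workers
    instanceType: t3.xlarge
    desiredCapacity: 3
```

Running eksctl create cluster -f with a file like this provisions the control plane and the node group in one step.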
Cluster provisioning usually takes between 10 and 15 minutes. When it is complete, you will see the following output:
When your cluster is ready, run the following to test that your kubectl configuration is correct:
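The command itself was elided from this copy of the page; based on the note in step 1, it is presumably the same connectivity check:

```shell
# List cluster nodes; a successful response confirms kubectl is
# pointed at the new cluster. Requires a running cluster.
kubectl get nodes
```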
Though Helm is not the only way to install Grey Matter into Kubernetes, it does make some things very easy and reduces a large number of individual configurations to a few charts. For this step, we'll clone the public git repository that holds Grey Matter and cd into the resulting directory.
NOTE: this tutorial is using a release candidate, so only a specific branch is being pulled. The entire repository can be cloned if desired.
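A sketch of the clone step. The repository URL and branch name below are assumptions; substitute the actual location of the Grey Matter helm-charts repository and the release branch named in your install instructions.

```shell
# URL and branch are illustrative placeholders.
git clone --single-branch --branch <release-branch> \
  https://github.com/greymatter-io/helm-charts.git
cd helm-charts
```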
Before running this step, determine whether or not you wish to install Grey Matter Data. If so, determine whether or not you will use S3 for backing. If you do want to configure Grey Matter Data with S3, follow the guide; you will need the AWS credentials from it.
To set up credentials, we need to create a credentials.yaml file that holds some secret information like usernames and passwords. The helm-charts repository contains some convenience scripts to make this easier.
Run:
and follow the prompts. The email and password you are prompted for should match your credentials for the Decipher Nexus. If you have decided to install Grey Matter Data persisting to S3, indicate that when prompted, and provide the access credentials, region, and bucket name.
Note that if your credentials are not valid, you will see the following response:
To see the default configurations, check the global.yaml file from the root directory of your cloned repo. In general for this tutorial, you should use the default options, but there are a couple of things to note.
If you would like to install a Grey Matter Data that is external and reachable from the dashboard, set global.data.external.enabled to true.
If you are installing Data and have set up your S3 bucket, set global.data.external.uses3 to true.
If you plan to update ingress certificates or modify RBAC configurations in the mesh, set global.rbac.edge
You can set global.environment to eks instead of kubernetes for reference, but we will also override this value with a flag during the installation steps.
Grey Matter is made up of a handful of components, each handling different pieces of the overall platform. Please follow each installation step in order.
Add the charts to your local Helm repository, install the credentials file, and install the Spire server.
Watch the Spire server pod.
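One way to watch the pod is shown below; the spire namespace is an assumption here, so adjust it to wherever your charts install the Spire server.

```shell
# Stream pod status until READY shows 2/2, then Ctrl-C and continue.
kubectl get pods -n spire --watch
```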
Watch it until the READY status is 2/2, then proceed to the next step.
Install the Spire agent, and remaining Grey Matter charts.
NOTE: for easy setup, access to this deployment was provisioned with quickstart SSL certificates. They can be found in the helm chart repository at ./certs. For access to the dashboard via the public access point, import the ./certs/quickstart.p12 file into your browser of choice; the password is password.
An ELB will be created automatically because we specified the flag --set=global.environment=eks during installation. The ELB is accessible through the randomly created URL attached to the edge service:
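The ELB hostname can be read from the service's external address. The service name edge and the default namespace are assumptions; adjust to your install.

```shell
# The EXTERNAL-IP column holds the randomly generated ELB URL.
kubectl get svc edge
```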
You will need to use this value for EXTERNAL-IP in a later step.
Visit the url (e.g. https://a2832d300724811eaac960a7ca83e992-749721369.us-east-1.elb.amazonaws.com:10808/) in the browser to access the Intelligence 360 Application
If you intend to move on to configuring your installation, or to otherwise modify or explore the Grey Matter configurations, you will need to set up the Grey Matter CLI.
For this installation, the configurations will be as follows. Fill in the value of the edge service's external IP from the previous step for <EDGE-EXTERNAL-IP>, and the path to your helm-charts directory for <path/to/helm-charts>:
Run these in your terminal, and you should be able to use the CLI, greymatter list cluster.
You have now successfully installed Grey Matter!
If you're ready to shut down your cluster:
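Tearing down an eksctl-provisioned cluster is a single command; the name and region below are the illustrative values used earlier and must match what you actually created.

```shell
# Deletes the cluster and its associated AWS resources.
eksctl delete cluster --name greymatter-demo --region us-east-1
```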
NOTE: this deletion actually takes longer than the output would indicate; resources continue terminating in the background. Attempting to create a new cluster with the same name will fail for some time until all resources are purged from AWS.
Service meshes, microservices, server-less, and containers are key elements of Mesh application and service architecture (MASA) implementations. MASA, APIs, and internal traffic patterns represent one of the most effective pathways to enterprise modernization, but this doesn’t come without challenges.
Industry has signaled increased interest in zero-trust infrastructure for service-to-service mTLS connections, scheduled or on-demand key rotations, service cryptographic identifiers, observability (continuous monitoring, granular audit compliance, etc.), service-level management, and policy management throughout the enterprise service fleet.
Understanding how the roles of Authentication, Authorization, Claims, and Principals play within your MASA is important (figure 1). Authentication and authorization are both significant in any security model, but follow different concepts and implementation patterns. Authentication establishes and confirms an identity. Authorization takes action based on the confirmed, authenticated identity. Principals are asserted claims that provide entitlements, granting access to systems, services, or data based on Role-Based Access Control (RBAC), Attribute-Based Access Control (ABAC), and Next Generation Access Control (NGAC).
Grey Matter's authentication scheme establishes identities for every transaction within the platform. There are two types of identities: users and services.
User Authentication methods:
OpenID Connect (OIDC)
mTLS x.509 certificates (Distinguished names represent who the user is)
Service-to-Service Authentication methods:
mTLS x.509 certificates (SPIFFE identities are incorporated into the x.509 certificate)
While distinct, these identities are not mutually exclusive. One of the most common access patterns within Grey Matter is a service making a request to another service on behalf of a user. In this case, there are three identities (two services and a user), each of which must be verified in order for the transaction to succeed. As users or services authenticate with Grey Matter, principals are asserted and flow to upstream services. This ensures that upstream services are aware of the entity (user or service) making a request. The user and service-to-service authentication methods listed above are described in the sections that follow.
Grey Matter integrates with existing public OIDC providers (Google, Github, etc.) or private OIDC providers (e.g., Ory Hydra) to support user authentication. OIDC is an authentication protocol built on top of OAuth 2.0 that allows delegation of authentication responsibility to a trusted external identity provider. Many implementations of OIDC providers are available and support on premise, cloud or as a service via a host of underlying technologies (e.g., LDAP). This sequence diagram (figure 3) shows the OIDC flow within Grey Matter.
The client initiates a request to Grey Matter Edge.
Grey Matter Edge responds with a 302 HTTP status code used to perform URL redirection, along with a callback URL.
Based on the redirect URL, the client initiates a request to the specified OIDC provider.
Once the client is authenticated, the OIDC provider responds with a 302 HTTP code (based on the callback URL) and provides an OIDC code.
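You can observe the first two steps of this flow from the command line. The edge hostname and port below are placeholders, not real endpoints.

```shell
# -v prints response headers; expect a 302 with a Location header
# pointing at the OIDC provider's authorization endpoint.
curl -vk https://edge.example.com:10808/ 2>&1 | grep -iE "HTTP/|location:"
```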
Grey Matter supports x.509 for both users and for service to service transactions.
A client (user or service) initiates a request to a server.
The server responds with its server certificate.
2.1. The client verifies that the server’s certificate is valid based on its certificate information.
The server requests the client's certificate.
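From a client's perspective, the handshake above corresponds to presenting a certificate during TLS negotiation. The file paths and hostname here are placeholders.

```shell
# Present the client certificate and key when the server requests them,
# validating the server's certificate against the mesh CA bundle.
curl --cert client.pem --key client-key.pem --cacert ca.pem \
  https://edge.example.com/
```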
Enterprise IT organizations that have existing public key infrastructure (PKI) in place for user authentication can pass their certificates with requests made to the Grey Matter Edge.
Service authentication (service-to-service communication) is based solely upon x.509 certificates and mTLS. Grey Matter Fabric is installed with a certificate authority that issues and reissues short-lived x.509 certificates to each sidecar proxy for intermesh communication. Each certificate contains a SPIFFE identity that uniquely identifies the sidecar to which it is issued. No sidecar will accept a connection from any service that does not present a certificate issued by the certificate authority. Like user authentication, these service identities enable authorization.
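A SPIFFE identity is carried as a URI Subject Alternative Name in the certificate. The sketch below fabricates a throwaway self-signed certificate with a made-up identity (spiffe://greymatter.mesh/catalog) purely to show where the identity lives; in a real mesh the certificate authority issues and rotates these.

```shell
# Generate a self-signed cert carrying a SPIFFE URI SAN (OpenSSL 1.1.1+).
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=catalog" \
  -addext "subjectAltName=URI:spiffe://greymatter.mesh/catalog"

# Print the SAN extension -- this is the identity peers verify.
openssl x509 -in cert.pem -noout -ext subjectAltName
```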
Note: In cases where requests already contain a signed cookie the edge simply verifies the signature and expiry. If valid, the edge forwards the request. If not valid, the request is treated as unauthenticated.
Authorization is the process by which identities (users or services) are granted permission to access resources within the mesh. For example, we may wish to restrict access to a specific resource to a limited set of users, services or data. As an added complication, it is often more desirable to grant or deny access for a resource to entire classes of identities (i.e., administrative users or trusted services). Grey Matter uses the authenticated identities and their attributes to support fine-grained access controls using the following methods:
Authorization Filters
Data Authorization via the Grey Matter Data Platform Service
It's important to note that sidecar-to-sidecar (service-to-service) authorization follows patterns similar to user authorization, with the exception that sidecar identities typically do not include additional attributes; however, nothing precludes adding attributes to a sidecar identity.
Upon choosing an authorization pattern, access control becomes a deployment concern, not a development concern, allowing microservice developers to focus on business value since their services will never receive an unauthorized request. Authenticated identity and attributes are available to the service should they be required.
The Grey Matter Sidecar uses authorization filters to manage who is allowed to access which resources and how. Since all requests to the mesh are authenticated, filters can be dynamically configured at runtime with no additional requirements. Attribute based authorization is also implemented via Grey Matter Sidecar filters but requires that requests contain a signed JSON Web Token (JWT) containing the identity claims. The creation and population of these tokens is left to the enterprise.
The Grey Matter Sidecar supports list-based authorization decisions within the ListAuth filter. This filter allows whitelisting and blacklisting of individual identities based upon the identity's distinguished name (e.g., “cn=user, dc=example, dc=com” or “cn=web server, dc=example, dc=com”) or relative distinguished name (e.g., “dc=example, dc=com”). This filter applies to all requests for the proxied service or services.
The Grey Matter Sidecar supports fine-grained authorization decisions to authorize actions by identified clients using Role-Based Access Control (RBAC). This filter allows complex whitelisting and blacklisting of individual identities based upon the identity's distinguished name. Regular expression matching is supported for additional flexibility. Further, whereas the ListAuth filter applies to all requests, the Role-Based Access Control filter can be defined for any combination of service, route, or verb. This is useful to explicitly manage callers to a service running within the Grey Matter mesh platform and protect the mesh from unexpected or forbidden agents.
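The distinguished names these filters match against can be read directly off a client certificate. This sketch creates a throwaway certificate with an illustrative DN and prints the subject a rule would evaluate.

```shell
# Fabricate a client cert whose subject mirrors the DN examples above.
openssl req -x509 -newkey rsa:2048 -nodes -keyout user-key.pem -out user.pem \
  -days 1 -subj "/DC=com/DC=example/CN=user"

# The printed subject is the DN a ListAuth or RBAC rule would match.
openssl x509 -in user.pem -noout -subject
```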
Supported HTTP verbs include:
GET The GET method requests a representation of the specified resource. Requests using GET should only retrieve data.*
POST The POST method is used to submit an entity to the specified resource, often causing a change in state or side effects on the server.*
PUT The PUT method replaces all current representations of the target resource with the request payload.*
Definitions as described by the Mozilla Developer Network (MDN).
In situations where the identity is not sufficient to make all authorization decisions, the Grey Matter Sidecar can enforce finer-grained control based upon identity attributes, provided the request contains a signed JWT.
Using the RBAC filter, rules can be created to authorize specific claims found within a JWT to perform specific actions. This requires an external service to generate a signed JWT for each request. Since the JWT is included as a header, it propagates to all sidecars in the request chain. With that said, if the request is completed—meaning the destination service has received it, processed it, and invokes another service in the Mesh—this is a new request and the calling service would be required to pass the JWT for further authorization purposes.
The Grey Matter sidecar offers a custom filter interface, so customers have the ability to create business-specific logic around their security and regulation concerns if required. This makes the mesh fully adaptable to an enterprise’s needs, and provides a way to take advantage of existing IT investments.
One of the unique facets of Grey Matter is that data security and sharing is addressed by Grey Matter Data. As enterprises shift from monoliths to microservices, data tends to be duplicated across the architecture. Grey Matter Data provides a service to address the secure sharing of this data without marshalling it into and out of processes. This feature is described in greater detail in the Data Segmentation portion of this document, but the pattern employed by Grey Matter Data can be used by any service to enforce complex security policies for any resource via the Grey Matter JWT Security Service or customer JWT service adhering to the Grey Matter Data interface.
One key feature of the Grey Matter hybrid mesh is its ability to secure, manage, and govern the traffic patterns of running services.
East/West traffic within the Fabric Mesh should be done via mTLS. Grey Matter uses two methods to enable this: direct integration with existing CAs, and automatic setup via SPIFFE/SPIRE. For integration with existing CAs, each sidecar in the mesh is configured to use provided x.509 certificates. In the automatic setup, each sidecar uses a unique SPIFFE ID to authenticate with SPIRE servers. Unique short-lived x.509 certificates are then automatically created and rotated for each connection between sidecars.
North/South traffic patterns use the Grey Matter Edge to establish principals and pass them to called services. The Edge supports both OIDC and mTLS x.509 certificate authentication modes, however, the Fabric Mesh is not limited to a single Edge. Multiple nodes can be configured to expose both authentication modes wherever an access point is needed. Note that the Edge node does not have to be exposed to a publicly addressable URL. In many cases, an API mediation layer may be put in front of the Edge node. In all cases, the Edge node is responsible for verifying and ensuring that the proper principals are available for downstream services within the Fabric Mesh to consume.
Principals such as user identities are moved by the Edge node into a user_dn header which flows through the entire service-to-service request chain. Each following link in the request chain is performed via mTLS, with each unique service using automatically rotating x.509 certificates established via SPIFFE Identities and the SPIRE framework.
In some cases traffic needs to flow outside of the mesh. Common scenarios include mesh-to-mesh communications, proxying to serverless functions, and supporting legacy systems that can’t be moved directly into the mesh. In all these cases, proxies are set up within the mesh with the sole purpose of communicating outside. Principals are established at the Edge. Inter-mesh communication is still handled by mTLS, and requests are authenticated via the outside system by whatever method it accepts: RPC, HTTP, mTLS, or OIDC.
Traffic splitting is another important pattern in stable environments. It allows a configurable percentage of requests to a service to be siphoned off to another source. This allows services, apps, or entire meshes to experience small amounts of live traffic while keeping most users on the original source. The percentage of users on the original service is then decreased until the service is fully migrated.
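For illustration (the field names below are hypothetical, not the exact Grey Matter route schema), a 90/10 split between an original service and its replacement could be expressed as weighted destinations:

```json
{
  "route_match": { "path": "/catalog/", "match_type": "prefix" },
  "destinations": [
    { "cluster": "catalog-v1", "weight": 90 },
    { "cluster": "catalog-v2", "weight": 10 }
  ]
}
```

Migration then proceeds by shifting weight from catalog-v1 to catalog-v2 until the new service carries all traffic.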
Circuit Breaking is a way for each sidecar to protect the service it proxies, but it is not a way for that proxy to harden itself. Grey Matter provides circuit breakers at every point in the mesh.
The most common place for this to occur is at the edge, where a DDOS could overwhelm the edge nodes themselves. To solve this, we employ Rate Limiting, which protects the edge node from accepting too many requests, opening too many file handles, and crashing. With proper configuration, each sidecar ceases queueing new requests before it is overwhelmed, allowing the service time to heal. This ensures capabilities can withstand malicious attacks and accidental recursive network calls without going down.
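As a sketch, a per-cluster circuit breaker might be configured with limits like the following (field names follow the common Envoy-style convention; check the Grey Matter cluster schema for the exact keys):

```json
{
  "circuit_breakers": {
    "max_connections": 512,
    "max_pending_requests": 128,
    "max_retries": 3
  }
}
```

When any threshold is exceeded, new requests fail fast instead of queueing, giving the proxied service time to recover.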
Enterprises prefer hybrid environments capable of leveraging unified on-premise and cloud resources. Traditional networking patterns use features such as VLANs to create perimeter-based firewalls, but this concept breaks down with modern mesh application service architecture (MASA) patterns. In MASA, services are designed to be ephemeral, dynamically generating different IP:PORT pairs each time a new instance spins up.
Securing this type of architecture requires network segmentation. Grey Matter isolates services and network fabric communications to specific runtime environments or infrastructure resources. Grey Matter Fabric supports segmentation to a very fine level of granularity. Each service launched onto Fabric comes online with no knowledge of or connections to any other point on the mesh. The desired mesh is then built up through configuration with the required network topology. Segmentation is enforced through routing rules, service discovery, and mTLS. Dynamic configuration can facilitate any permutation of intra-mesh communication required. In addition to segmentation of individual meshes, Grey Matter can also support multi-mesh operations. This allows the bridging of environments already physically or logically isolated from each other.
Micro-segmentation is a method of creating secure isolation zones either on-premise or in the cloud in order to separate different workloads. Authentication plays a key role in micro-segmentation. Authentication is responsible for establishing network communications and flow through the mesh. Strong authentication models enable Grey Matter to perform micro-segmentation for users, services, and data throughout the mesh.
User-to-Service segmentation is controlled through user authorization signatures. These can be coupled with claim-based assertions. User identities and claims flow through mTLS-encrypted communication channels established by service-to-service micro-segmentation patterns. Complex security policies within each sidecar allow ABAC/RBAC down to the service, route, and HTTP verb level. This enables a very high degree of isolation. ABAC/RBAC policies cannot be achieved without strong authentication methodologies establishing identities for both users and services.
Service-to-Service segmentation is controlled through mTLS certificates and SPIFFE identities. These can be coupled with claims-based assertions and ABAC/RBAC policies. Images in the following section illustrate how this is achieved.
Grey Matter’s data segmentation capability is a key differentiator. Data segmentation is the process of dividing data and grouping it with similar data based on set parameters. Grey Matter Data adds complex policy assertions to stored objects. These object policies govern which users or services may access the objects. Objects stored within Grey Matter Data are encrypted at rest and in transit. A JSON Web Token (JWT) is provided to gain access to an object stored in Data.
The token’s claims are dynamically mapped to the policies stored with the object. JWTs for both users and services can be created enabling end-to-end security using authentication principals.
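As a purely illustrative example (this is not the literal Grey Matter Data policy syntax), an object policy mapping JWT claims to permitted actions might take a shape like:

```json
{
  "label": "read-only-for-sidecar-b",
  "allow": [
    {
      "claim": "sub",
      "equals": "spiffe://quickstart.greymatter.io/sidecar-b",
      "actions": ["read"]
    }
  ]
}
```

At access time, the caller's JWT claims are evaluated against the stored policy before the object is decrypted.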
The example above shows how data segmentation is achieved through simple policy. However, the Grey Matter Data policy engine is designed to handle complex rules suited to any scenario. The following scenario presents a more complex use case.
1. Sidecar A saves an object into Data and provides access privileges to Sidecar B’s SPIFFE identity. Sidecar A dynamically discovers Data via the Data Sidecar routing information.
2. Data Sidecar receives Sidecar A’s request and streams the object (with policy) into the Grey Matter Data node.
3. Sidecar B (through a means of event-based architecture patterns) is notified that Sidecar A just saved an object of interest into Data. Sidecar B calls into Data (through the Data Sidecar) to retrieve the object. Sidecar B’s SPIFFE identity is passed along with the request.
3.1. Data Sidecar receives the request from Sidecar B and passes it to Data. Data uses the Sidecar B principal (i.e. SPIFFE identity) to retrieve Sidecar B’s JWT claims and authorize access to decrypt and retrieve the object.
Since Grey Matter uses a unified principal model, data segmentation can be achieved for users as well. Grey Matter Data policies can be set to identify different access privileges for services and users on a single stored object, and can be customized around business needs. This paradigm provides a new model that combines network, information assurance, and protection concepts around zero-trust.
Grey Matter Data supports the ability to host multiple Data nodes available through different routing rules. When coupled with other segmentation features, enterprises are able to further isolate how information is stored, accessed, and controlled based on customer regulations and requirements.
For example, logs and observable traffic can be isolated based on zones. Data nodes with specific routing rules and policies are set to enforce the topology. Customer application data can be stored and accessed via different Data nodes (on-premise or in the cloud) and tightly controlled at the micro-segmentation layer or via data policies. These types of flows are depicted in the diagram below.
Grey Matter’s zero-trust threat model ensures security across every service in the hybrid mesh. Each transaction is authenticated and authorized through a combination of mTLS, SPIFFE authentication, and SPIRE-issued identities, providing multiple layers of zero-trust security. Grey Matter also supports fine-grained access control by combining authenticated identities with policy-enforced object authorization and enables East-West and North-South traffic pattern splitting and shadowing for in-depth monitoring and configuration. Finally, Grey Matter uses network and data segmentation to decompose operations to their most basic elements, to mitigate cyber intrusion impacts, and to optimize operations.
Have a question about Grey Matter's security models? Reach out to us to learn more.
If you would like to install Grey Matter without SPIFFE/SPIRE, set global.spire.enabled to false.
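Assuming the conventional Helm values nesting for global.spire.enabled, the equivalent global.yaml fragment is:

```yaml
global:
  spire:
    enabled: false
```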
Error: could not find tiller

Verify that you are using Helm version 3.2.4 and try again. If you need to manage multiple versions of Helm, we highly recommend using helmenv to easily switch between versions.

NOTE: Notice in the edge installation we are setting --set=edge.ingress.type=LoadBalancer. This value sets the service type for edge. The default is ClusterIP. In this example we want an AWS ELB to be created automatically for edge ingress (see below), thus we are setting it to LoadBalancer. See the Kubernetes publishing services docs for guidance on what this value should be in your specific installation.
While these are being installed, you can use the kubectl command to check if everything is running. When all pods are Running or Completed, the install is finished and Grey Matter is ready to go.
The client is redirected back to Grey Matter Edge, sending the OIDC-provided code.
The Edge sends the OIDC code to the OIDC provider, validating and verifying the code.
Once validated, the OIDC provider sends back the id_token. The id_token contains claims associated with the user, issuer, and audience.
The Edge inspects the id_token, extracts the subject claim and expiration, prepends a user_dn header containing the subject claim to the request, and forwards it to the upstream sidecar and service.
The upstream sidecar and service respond to the request.
The edge prepends a signed cookie containing the user_dn and expiration to the response received from the upstream sidecar and service and forwards the response to the client.
The client makes subsequent requests using the signed cookie, which allows the edge to extract the user_dn directly until the cookie expires, at which point the client must re-authenticate.
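The id_token is a standard JWT, so its claims can be inspected from a shell. The token below is fabricated and unsigned, purely to illustrate the decoding step; real tokens are signed by the OIDC provider:

```shell
# Build a toy id_token of the form header.payload.signature (signature left empty)
payload='{"sub":"cn=user, dc=example, dc=com","exp":1999999999}'
token="$(printf '%s' '{"alg":"none"}' | base64 -w0).$(printf '%s' "$payload" | base64 -w0)."

# Decode the payload segment to see the subject and expiration claims —
# the information the Edge uses to populate the user_dn header
echo "$token" | cut -d. -f2 | base64 -d
```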
4.1. The server verifies that the client's certificate is valid based on its certificate information.
4.2. The server is able to decrypt the information sent to it based on the established trust.
The client acknowledges that the handshake is complete.
The server acknowledges that the handshake is complete.
At this stage, the client and server certificates are validated and authenticated. All traffic is now passed through an encrypted communication channel.
DELETE The DELETE method deletes the specified resource.*
Sidecar C is an outlier listening for arbitrary events. Based on the event broadcasted, Sidecar C attempts to retrieve the encrypted object stored in Data. Sidecar C is entitled to talk to Data via the Data Sidecar but does not have access to all data stored.
Data Sidecar receives the request from Sidecar C and passes it to Data. Using Sidecar C’s principal (i.e. SPIFFE identity) Data retrieves its corresponding JWT claims and denies access to the object stored.
eksctl create cluster \
--name production \
--version 1.17 \
--nodegroup-name workers \
--node-type m4.2xlarge \
--nodes=2 \
--node-ami auto \
--region us-east-1 \
--zones us-east-1a,us-east-1b \
--profile default
[ℹ] using region us-east-1
[ℹ] subnets for us-east-1a - public:192.168.0.0/19 private:192.168.64.0/19
[ℹ] subnets for us-east-1b - public:192.168.32.0/19 private:192.168.96.0/19
[ℹ] nodegroup "workers" will use "ami-0d373fa5015bc43be" [AmazonLinux2/1.15]
[ℹ] using Kubernetes version 1.15
[ℹ] creating EKS cluster "production" in "us-east-1" region
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --name=production'
[ℹ] CloudWatch logging will not be enabled for cluster "production" in "us-east-1"
[ℹ] you can enable it with 'eksctl utils update-cluster-logging --region=us-east-1 --name=production'
[ℹ] 2 sequential tasks: { create cluster control plane "production", create nodegroup "workers" }
[ℹ] building cluster stack "eksctl-production-cluster"
[ℹ] deploying stack "eksctl-production-cluster"
[ℹ] building nodegroup stack "eksctl-production-nodegroup-workers"
[ℹ] --nodes-min=2 was set automatically for nodegroup workers
[ℹ] --nodes-max=2 was set automatically for nodegroup workers
[ℹ] deploying stack "eksctl-production-nodegroup-workers"
[✔] all EKS cluster resource for "production" had been created
[✔] saved kubeconfig as "/home/user/.kube/config"
[ℹ] adding role "arn:aws:iam::828920212949:role/eksctl-production-nodegroup-worke-NodeInstanceRole-EJWJY28O2JJ" to auth ConfigMap
[ℹ] nodegroup "workers" has 0 node(s)
[ℹ] waiting for at least 2 node(s) to become ready in "workers"
[ℹ] nodegroup "workers" has 2 node(s)
[ℹ] node "ip-192-168-29-248.ec2.internal" is ready
[ℹ] node "ip-192-168-36-13.ec2.internal" is ready
[ℹ] kubectl command should work with "/home/user/.kube/config", try 'kubectl get nodes'
[✔] EKS cluster "production" in "us-east-1" region is ready
eksctl get cluster --region us-east-1 --profile default
eksctl get nodegroup --region us-east-1 --profile default --cluster production
git clone --single-branch --branch release-2.2 https://github.com/greymatter-io/helm-charts.git && cd ./helm-charts
Cloning into 'helm-charts'...
remote: Enumerating objects: 337, done.
remote: Counting objects: 100% (337/337), done.
remote: Compressing objects: 100% (210/210), done.
remote: Total 4959 (delta 225), reused 143 (delta 126), pack-reused 4622
Receiving objects: 100% (4959/4959), 1.09 MiB | 2.50 MiB/s, done.
Resolving deltas: 100% (3637/3637), done.
make credentials
./ci/scripts/build-credentials.sh
decipher email:
first.lastname@company.io
password:
Do you wish to configure S3 credentials for gm-data backing [yn] n
Setting S3 to false
"decipher" has been added to your repositories
Error: looks like "https://nexus.greymatter.io/repository/helm" is not a valid chart repository or cannot be reached: failed to fetch https://nexus.greymatter.io/repository/helm/index.yaml : 401 Unauthorized
helm dep up spire
helm dep up edge
helm dep up data
helm dep up fabric
helm dep up sense
make secrets
helm install server spire/server -f global.yaml
kubectl get pod -n spire -w
NAME READY STATUS RESTARTS AGE
server-0 2/2 Running 1 30s
helm install agent spire/agent -f global.yaml
helm install fabric fabric --set=global.environment=eks -f global.yaml
helm install edge edge --set=global.environment=eks --set=edge.ingress.type=LoadBalancer -f global.yaml
helm install data data --set=global.environment=eks --set=global.waiter.service_account.create=false -f global.yaml
helm install sense sense --set=global.environment=eks --set=global.waiter.service_account.create=false -f global.yaml
$ kubectl get svc edge
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
edge LoadBalancer 10.100.197.77 a2832d300724811eaac960a7ca83e992-749721369.us-east-1.elb.amazonaws.com 10808:32623/TCP,8081:31433/TCP 2m4s
export GREYMATTER_API_HOST=<EDGE-EXTERNAL-IP>:10808
export GREYMATTER_API_PREFIX=/services/control-api/latest
export GREYMATTER_API_SSL=true
export GREYMATTER_API_INSECURE=true
export GREYMATTER_API_SSLCERT=</path/to/helm-charts>/certs/quickstart.crt
export GREYMATTER_API_SSLKEY=</path/to/helm-charts>/certs/quickstart.key
export EDITOR=vim # or your preferred editor
make uninstall
eksctl delete cluster --name production
[ℹ] using region us-east-1
[ℹ] deleting EKS cluster "production"
[✔] kubeconfig has been updated
[ℹ] cleaning up LoadBalancer services
[ℹ] 2 sequential tasks: { delete nodegroup "workers", delete cluster control plane "prod" [async] }
[ℹ] will delete stack "eksctl-production-nodegroup-workers"
[ℹ] waiting for stack "eksctl-production-nodegroup-workers" to get deleted
[ℹ] will delete stack "eksctl-production-cluster"
[✔] all cluster resources were deleted
kubectl get pods
NAME READY STATUS RESTARTS AGE
catalog-5b54979554-hs98q 2/2 Running 2 91s
catalog-init-k29j2 0/1 Completed 0 91s
control-887b76d54-gbtq4 1/1 Running 0 18m
control-api-0 2/2 Running 0 18m
control-api-init-6nk2f 0/1 Completed 0 18m
dashboard-7847d5b9fd-t5lr7 2/2 Running 0 91s
data-0 2/2 Running 0 17m
data-internal-0 2/2 Running 0 17m
data-mongo-0 1/1 Running 0 17m
edge-6f8cdcd8bb-plqsj 1/1 Running 0 18m
internal-data-mongo-0 1/1 Running 0 17m
internal-jwt-security-dd788459d-jt7rk 2/2 Running 2 17m
internal-redis-5f7c4c7697-6mmtv 1/1 Running 0 17m
jwt-security-859d474bc6-hwhbr 2/2 Running 2 17m
postgres-slo-0 1/1 Running 0 91s
prometheus-0 2/2 Running 0 59s
redis-5f5c68c467-j5mwt 1/1 Running 0 17m
slo-7c475d8597-7gtfq 2/2 Running 0 91s
Follow along with this guide to configure SPIRE in Grey Matter.
This guide will help you set up a secure, zero-trust environment in Grey Matter to achieve the following:
Establish trust in user identities
Enforce adaptive and risk-based policies
Enable secure access to all apps
Enforce transaction and data security
Grey Matter uses SPIRE to enable zero-trust security.
Learn more about Grey Matter's approach to zero-trust security here.
Unix shell and Decipher account
helm and kubectl installed
A Kubernetes or OpenShift deployment of Grey Matter with SPIRE enabled
Learn more about the SPIRE configuration in the SPIRE documentation.
To install Grey Matter using SPIRE, verify that global.spire.enabled is true (the default) for your helm charts setup.
For a full walkthrough of an example service deployment in a SPIRE enabled environment, see .
To adapt an existing service deployment to enable SPIRE, add this environment variable to the sidecar container:
Then add the following to the deployment volumes:
and mount it into the sidecar container as:
This creates the Unix socket over which the sidecar will communicate with the SPIRE agent.
There are several updates to make to the mesh configurations for a new service to enable SPIRE. The following describes the updates necessary to configure ingress to the service using SPIRE; if your service also has egress actions, check out the egress documentation.
If you have existing mesh configurations for this service in a non-SPIRE installation, remove any ssl_config from the ingress domain object, but keep force_https set to true.
The secret field of the listener object is used to configure ingress mTLS using SPIRE.
If you installed Grey Matter using the helm charts, each deployment should have a label with key greymatter.io/control and value the name of the service. This value will be used to indicate the SPIFFE ID for a sidecar.
Let {service-name} be the value of the label greymatter.io/control in your service deployment. Add the following secret to your listener object:
Once this is configured, the sidecar will use its SPIFFE certificate for ingress traffic on this listener.
The cluster created for edge to connect to the service will need a similar update for egress traffic to the new service. Remove any ssl_config on the edge-to-{service-name}-cluster and set the secret instead:
All other objects should be configured as usual.
When you set up services to participate in the mesh, SPIFFE identities are set up for them. This means each service receives a certificate minted specifically for it. As an example of probing into Data, you can use openssl to verify that it is set up to use SPIFFE.
In a Kubernetes setup, you can find the IP of your deployment with kubectl describe pod {pod-id} | grep IP. Copy this IP and use openssl to check the certificate. You can use openssl from within the data container:
and then to check your service:
or
You should see from the certificate chain and SAN that the certificate your service presents is from SPIRE.
You can also verify that SDS is working for your service by execing into its sidecar pod kubectl exec -it {pod-id} -c sidecar -- /bin/sh and running curl localhost:8001/certs. If the sidecar is configured properly, its SPIFFE certificate will be listed there.
Need help setting up zero-trust security?
Create an account to reach our team.
To do this, install using the Grey Matter Helm Charts with global.spire.enabled set to true.
- name: SPIRE_PATH
value: "/run/spire/socket/agent.sock"
volumes:
- name: spire-socket
hostPath:
path: /run/spire/socket
type: DirectoryOrCreate
volumeMounts:
- name: spire-socket
mountPath: /run/spire/socket
readOnly: false
"secret": {
"secret_key": "{service-name}-secret",
"secret_name": "spiffe://quickstart.greymatter.io/{service-name}",
"secret_validation_name": "spiffe://quickstart.greymatter.io",
"subject_names": [
"spiffe://quickstart.greymatter.io/edge"
],
"ecdh_curves": [
"X25519:P-256:P-521:P-384"
]
}
"secret": {
"secret_key": "secret-edge-secret",
"secret_name": "spiffe://quickstart.greymatter.io/edge",
"secret_validation_name": "spiffe://quickstart.greymatter.io",
"subject_names": [
"spiffe://quickstart.greymatter.io/{service-name}"
],
"ecdh_curves": [
"X25519:P-256:P-521:P-384"
]
}
kubectl exec -it data-internal-0 -c data-internal -- /bin/sh
openssl s_client --connect {IP}:10808
openssl s_client --connect {IP}:10808 | openssl x509 -text --noout
$ ./gm-control consul --help
NAME
consul - Consul collector
USAGE
gm-control [GLOBAL OPTIONS] consul [OPTIONS]
VERSION
1.0.3-dev
DESCRIPTION
Connects to a Consul agent via HTTP API and updates Clusters stored in the Greymatter API at startup and periodically thereafter.
A service is marked for import using tags, by default "gm-cluster" is used but it may be customized through the command line (see --cluster-tag). Each identified service will be imported as a Greymatter Cluster and the nodes that are marked with the configured
tag are added as instances for that Cluster. For each instance within a Cluster, metadata is populated from a combination of service tags, node metadata, service metadata and health checks.
Service Tags
Service tags, excluding the cluster tag itself, are added with a "tag:" prefix. By default, they are treated as single value entries and are imported with empty values. The --tag-delimiter flag can be used to treat tags as key value pairs, and they will be
parsed as such. Tags that have the delimiter as a suffix or that do not contain it at all are added with empty values, while tags that use it as a prefix are ignored and logged.
Node Metadata
Node metadata is added as instance metadata with a "node:" prefix for each key.
Service Metadata
Service metadata is passed through and is added as instance metadata without any namespacing.
Health Checks
Node health checks will be added as instance metadata named following the pattern "check:<check-id>" with the check status as value. Additionally "node-health" is added for an instance within each cluster to aggregate all the other health checks on that node
that either are 1) not bound to a service or 2) bound to the service this cluster represents. The value for this aggregate metadata will be:
passing if all Consul health checks have a "passing" value
mixed if any Consul health check has a "passing" value
failed if no Consul health check has the value of "passing"
GLOBAL OPTIONS
--api.header=header
Specifies a custom header to send with every gm-control request. Headers are given as name:value pairs. Leading and trailing whitespace will be stripped from the name and value. For multiple headers, this flag may be repeated or multiple headers can be
delimited with commas.
--api.host=host:port
(default: localhost:80)
The address (host:port) for gm-control requests. If no port is given, it defaults to port 443 if --api.ssl is true and port 80 otherwise.
--api.insecure
(default: false)
If true, don't validate server cert when using SSL for gm-control requests
--api.key=string
(default: "none")
[SENSITIVE] The auth key for gm-control requests
--api.prefix=value
The url prefix for gm-control requests. Forms the path part of <host>:<port><path>
--api.ssl
(default: true)
If true, use SSL for gm-control requests
--api.sslCert=value
Specifies the SSL cert to use for every gm-control request.
--api.sslKey=value
Specifies the SSL key to use for every gm-control request.
--api.zone-name=string
The name of the API Zone for gm-control requests.
--console.level=level
(default: "info")
(valid values: "debug", "info", "error", or "none")
Selects the log level for console logs messages.
--delay=duration
(default: 30s)
Sets the minimum time between API updates. If the discovery data changes more frequently than this duration, updates are delayed to maintain the minimum time.
--diff.dry-run
(default: false)
Log changes at the info level rather than submitting them to the API
--diff.ignore-create
(default: false)
If true, do not create new Clusters in the API
--diff.include-delete
(default: false)
If true, delete missing Clusters from the API
--help (default: false)
Show a list of commands or help for one command
--stats.api.header=header
Specifies a custom header to send with every stats API request. Headers are given as name:value pairs. Leading and trailing whitespace will be stripped from the name and value. For multiple headers, this flag may be repeated or multiple headers can be
delimited with commas.
--stats.api.host=host:port
(default: localhost:80)
The address (host:port) for stats API requests. If no port is given, it defaults to port 443 if --stats.api.ssl is true and port 80 otherwise.
--stats.api.insecure
(default: false)
If true, don't validate server cert when using SSL for stats API requests
--stats.api.prefix=value
The url prefix for stats API requests. Forms the path part of <host>:<port><path>
--stats.api.ssl
(default: true)
If true, use SSL for stats API requests
--stats.api.sslCert=value
Specifies the SSL cert to use for every stats API request.
--stats.api.sslKey=value
Specifies the SSL key to use for every stats API request.
--stats.backends=value
(valid values: "dogstatsd", "prometheus", "statsd", or "wavefront")
Selects which stats backend(s) to use.
--stats.batch
(default: true)
If true, stats requests are batched together for performance.
--stats.dogstatsd.debug
(default: false)
If enabled, logs the stats data on stdout.
--stats.dogstatsd.flush-interval=duration
(default: 5s)
Specifies the duration between stats flushes.
--stats.dogstatsd.host=string
(default: "127.0.0.1")
Specifies the destination host for stats.
--stats.dogstatsd.latch
(default: false)
Specifies whether stats are accumulated over a window before being sent to the backend.
--stats.dogstatsd.latch.base-value=float
(default: 0.001)
Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
--stats.dogstatsd.latch.buckets=int
(default: 20)
Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
--stats.dogstatsd.latch.window=duration
(default: 1m0s)
Specifies the period of time over which stats are latched. Must be greater than 0.
--stats.dogstatsd.max-packet-len=bytes
(default: 8192)
Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
total size of the headers and payload exceeds the network's MTU.
--stats.dogstatsd.port=int
(default: 8125)
Specifies the destination port for stats.
--stats.dogstatsd.scope=string
If specified, prepends the given scope to metric names.
--stats.dogstatsd.transform-tags=string
Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
as follows:
tag=/regex/,n1,n2...
where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
Examples:
foo=/^(.+):.*x=([0-9]+)/,foo,bar
foo=@.*y=([A-Za-z_]+)@,yval
--stats.event-backends=value
(valid values: "console" or "honeycomb")
Selects which stats backend(s) to use for structured events.
--stats.exec.attempt-timeout=duration
(default: 1s)
Specifies the default timeout for individual action attempts. A timeout of 0 means no timeout.
--stats.exec.delay=duration
(default: 100ms)
Specifies the initial delay for the exponential delay type. Specifies the delay for constant delay type.
--stats.exec.delay-type=value
(default: "exponential")
(valid values: "constant" or "exponential")
Specifies the retry delay type.
--stats.exec.max-attempts=int
(default: 8)
Specifies the maximum number of attempts made, inclusive of the original attempt.
--stats.exec.max-delay=duration
(default: 30s)
Specifies the maximum delay for the exponential delay type. Ignored for the constant delay type.
--stats.exec.parallelism=int
(default: 8)
Specifies the maximum number of concurrent attempts running.
--stats.exec.timeout=duration
(default: 10s)
Specifies the default timeout for actions. A timeout of 0 means no timeout.
--stats.honeycomb.api-host=string
(default: "https://api.honeycomb.io")
The Honeycomb API host to send messages to.
--stats.honeycomb.batchSize=uint
(default: 50)
The Honeycomb batch size to use.
--stats.honeycomb.dataset=string
The Honeycomb dataset to send messages to.
--stats.honeycomb.sample-rate=uint
(default: 1)
The Honeycomb sample rate to use, specified as 1 event sent per sample rate.
--stats.honeycomb.write-key=string
The Honeycomb write key used to send messages.
--stats.max-batch-delay=duration
(default: 1s)
If batching is enabled, the maximum amount of time requests are held before transmission
--stats.max-batch-size=int
(default: 100)
If batching is enabled, the maximum number of requests that will be combined.
--stats.node=string
If set, specifies the node to use when submitting stats to backends. Equivalent to adding "--stats.tags=node=value" to the command line.
--stats.prometheus.addr=value
(default: 0.0.0.0:9102)
Specifies the listener address for Prometheus scraping.
--stats.prometheus.scope=string
If specified, prepends the given scope to metric names.
--stats.source=string
If set, specifies the source to use when submitting stats to backends. Equivalent to adding "--stats.tags=source=value" to the command line. In either case, a UUID is appended to the value to ensure that it is unique across proxies. Cannot be combined
with --stats.unique-source.
--stats.statsd.debug
(default: false)
If enabled, logs the stats data on stdout.
--stats.statsd.flush-interval=duration
(default: 5s)
Specifies the duration between stats flushes.
--stats.statsd.host=string
(default: "127.0.0.1")
Specifies the destination host for stats.
--stats.statsd.latch
(default: false)
Specifies whether stats are accumulated over a window before being sent to the backend.
--stats.statsd.latch.base-value=float
(default: 0.001)
Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
--stats.statsd.latch.buckets=int
(default: 20)
Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
--stats.statsd.latch.window=duration
(default: 1m0s)
Specifies the period of time over which stats are latched. Must be greater than 0.
--stats.statsd.max-packet-len=bytes
(default: 8192)
Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
total size of the headers and payload exceeds the network's MTU.
--stats.statsd.port=int
(default: 8125)
Specifies the destination port for stats.
--stats.statsd.scope=string
If specified, prepends the given scope to metric names.
--stats.statsd.transform-tags=string
Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
as follows:
tag=/regex/,n1,n2...
where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
Examples:
foo=/^(.+):.*x=([0-9]+)/,foo,bar
foo=@.*y=([A-Za-z_]+)@,yval
--stats.tags=value
Tags to be included with every stat. May be comma-delimited or specified more than once. Should be of the form "<key>=<value>" or "tag"
--stats.unique-source=string
If set, specifies the source to use when submitting stats to backends. Equivalent to adding "--stats.tags=source=value" to the command line. Unlike --stats.source, failing to specify a unique value may prevent stats from being recorded correctly. Cannot
be combined with --stats.source.
--stats.wavefront.debug
(default: false)
If enabled, logs the stats data on stdout.
--stats.wavefront.flush-interval=duration
(default: 5s)
Specifies the duration between stats flushes.
--stats.wavefront.host=string
(default: "127.0.0.1")
Specifies the destination host for stats.
--stats.wavefront.latch
(default: false)
Specifies whether stats are accumulated over a window before being sent to the backend.
--stats.wavefront.latch.base-value=float
(default: 0.001)
Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
--stats.wavefront.latch.buckets=int
(default: 20)
Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
--stats.wavefront.latch.window=duration
(default: 1m0s)
Specifies the period of time over which stats are latched. Must be greater than 0.
--stats.wavefront.max-packet-len=bytes
(default: 8192)
Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
total size of the headers and payload exceeds the network's MTU.
--stats.wavefront.port=int
(default: 8125)
Specifies the destination port for stats.
--stats.wavefront.scope=string
If specified, prepends the given scope to metric names.
--stats.wavefront.transform-tags=string
Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
as follows:
tag=/regex/,n1,n2...
where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
Examples:
foo=/^(.+):.*x=([0-9]+)/,foo,bar
foo=@.*y=([A-Za-z_]+)@,yval
--version
(default: false)
Print the version and exit
--xds.addr=value
(default: :50000)
The address on which to serve the envoy API server.
--xds.ads-enabled
(default: true)
If false, turns off ADS discovery mode.
--xds.ca-file=string
Path to a file (on the Envoy host's file system) containing CA certificates for TLS.
--xds.default-timeout=duration
(default: 1m0s)
The default request timeout, if none is specified in the RetryPolicy for a Route
--xds.disabled
(default: false)
Disables the xDS listener.
--xds.enable-tls
(default: false)
Enable grpc xDS TLS
--xds.grpc-log-top=int
(default: 0)
When gRPC logging is enabled and this value is greater than 1, logs of non-success Envoy responses are tracked and periodically reported. This flag controls how many unique response code & request path combinations are tracked. When the number of
tracked combinations in the reporting period is exceeded, uncommon paths are evicted.
--xds.grpc-log-top-interval=duration
(default: 5m0s)
See the grpc-log-top flag. Controls the interval at which top logs are generated.
--xds.interval=duration
(default: 1s)
The interval for polling the Greymatter API. Minimum value is 500ms.
--xds.resolve-dns
(default: true)
If true, resolve EDS hostnames to IP addresses.
--xds.server-auth-type=string
TLS client authentication type
--xds.server-cert=string
URL containing the server certificate for the grpc ADS server
--xds.server-key=string
URL containing the server certificate key for the grpc ADS server
--xds.server-trusts=string
Comma-delimited URLs containing truststores for the grpc ADS server
--xds.standalone-cluster=string
(default: "default-cluster")
The name of the cluster for the Envoys consuming the standalone xDS server. Should match the --service-cluster flag for the envoy binary, or the ENVOY_NODE_CLUSTER value for the envoy-simple Docker image.
--xds.standalone-port=int
(default: 80)
The port on which Envoys consuming the standalone xDS server should listen. Ignored if --api.key is specified.
--xds.standalone-zone=string
(default: "default-zone")
The name of the zone for the Envoys consuming the standalone xDS server. Should match the --service-zone flag for the envoy binary, or the ENVOY_NODE_ZONE value for the envoy-simple Docker image.
--xds.static-resources.conflict-behavior=value
(default: "merge")
(valid values: "overwrite" or "merge")
How to handle conflicts between configuration types. If "overwrite", statically configured types overwrite defaults. For example, if one were to include "listeners" in the static resources configuration file, all existing listeners would be overwritten. If the
value is "merge", listeners are merged together, with collisions favoring the statically configured listener. Clusters are differentiated by name, while listeners are differentiated by IP/port. Listeners on 0.0.0.0 (or ::) on a given port will
collide with any other IP with the same port. Specifying colliding static resources will produce a startup error.
--xds.static-resources.filename=string
Path to a file containing static resources. The contents of the file should be either a JSON or YAML fragment (as configured by the corresponding --format flag) containing any combination of "clusters" (an array of
https://www.envoyproxy.io/docs/envoy/v1.13.1/api-v2/api/v2/cluster.proto), "cluster_template" (a single cluster, which will be used as the prototype for all clusters not specified statically), and/or "listeners" (an array of
https://www.envoyproxy.io/docs/envoy/v1.13.1/api-v2/api/v2/listener.proto). The file is read once at startup. Only the v2 API is parsed. Enum strings such as "ROUND_ROBIN" must be capitalized.
--xds.static-resources.format=value
(default: "yaml")
(valid values: "json" or "yaml")
The format of the static resources file
Global options can also be configured via upper-case, underscore-delimited environment variables prefixed with "GM_CONTROL_". For example, "--some-flag" becomes "GM_CONTROL_SOME_FLAG". Command-line flags take precedence over environment variables.
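The flag-to-variable mapping can be sketched in shell. Note the handling of dots in flag names is an assumption here — the rule above only states the hyphenated case explicitly:

```shell
# Derive the environment variable name for a gm-control flag, assuming that
# both '.' and '-' map to '_' (only the hyphen case is documented).
flag="--stats.dogstatsd.flush-interval"
env_name="GM_CONTROL_$(printf '%s' "${flag#--}" | tr 'a-z.-' 'A-Z__')"
echo "$env_name"   # GM_CONTROL_STATS_DOGSTATSD_FLUSH_INTERVAL
```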
OPTIONS
--cluster-tag=string
(default: "gm-cluster")
The tag used to indicate that a service should be imported as a Cluster. If used in conjunction with 'tag-delimiter', its value can be used to override the cluster name from the default value of the name of the service in Consul.
--console.level=level
(default: "info")
(valid values: "debug", "info", "error", or "none")
Selects the log level for console logs messages.
--dc=string
[REQUIRED] Collect Consul services only from this DC.
--help (default: false)
Show a list of commands or help for one command
--hostport=[host]:port
(default: "localhost:8500")
The [host]:port for the Consul API.
--tag-delimiter=string
The delimiter used to split key/value pairs stored in Consul service tags.
--use-ssl
(default: false)
If set, communications with the Consul API will be done via SSL.
--version
(default: false)
Print the version and exit
Options can also be configured via upper-case, underscore-delimited environment variables prefixed with "GM_CONTROL_CONSUL_". For example, "--some-flag" becomes "GM_CONTROL_CONSUL_SOME_FLAG". Command-line flags take precedence over environment variables.

gm-control can be configured to log a leaderboard of non-2xx requests on a time interval to stdout. This is useful as a quick way to see which endpoints are performing poorly throughout the mesh, without getting into advanced debugging.
Leaderboard logging is configured with the following two parameters:

| Environment Variable | CLI Flag | Meaning | Type | Example |
| --- | --- | --- | --- | --- |
| GM_CONTROL_XDS_GRPC_LOG_TOP_INTERVAL | --xds.grpc-log-top-interval | How often leaderboards are logged and counts reset | Duration | 5m3s |
| GM_CONTROL_XDS_GRPC_LOG_TOP | --xds.grpc-log-top | How many unique requests are logged | integer | 5 |

Leaderboarding can be disabled by setting GM_CONTROL_XDS_GRPC_LOG_TOP to 0.
Leaderboards are logged to standard out in the format:
[info] <timestamp> ALS: <number of requests>: <HTTP response code> <request path>
Example:
[info] 2020/02/25 20:52:16 ALS: 1: 475 http://localhost:8080/error
When working with service meshes on various platforms, there is a benefit in supporting multiple methods for accessing secrets. Some platforms (like Kubernetes and OpenShift) provide means of securely storing secrets and mounting them into running containers.
Others, like AWS ECS and the AWS Secrets manager, don't support such easy operations. To support operations on these other platforms, the Grey Matter Proxy contains functionality to parse a limited selection of Base64 encoded SSL certificates and write them directly to disk.
| Variable | Default | Description |
| --- | --- | --- |
| INGRESS_TLS_CERT | "" | Written out to ./certs/ingress_localhost.crt |
| INGRESS_TLS_KEY | "" | Written out to ./certs/ingress_localhost.key |
| INGRESS_TLS_TRUST | "" | Written out to ./certs/ingress_intermediate.crt |
| EGRESS_TLS_CERT | "" | Written out to ./certs/egress_localhost.crt |
| EGRESS_TLS_KEY | "" | Written out to ./certs/egress_localhost.key |
| EGRESS_TLS_TRUST | "" | Written out to ./certs/egress_intermediate.crt |

Need help with SSL cert parsing? Create an account at Grey Matter Support to reach our team.
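As a sketch of how these variables are populated, a PEM file can be base64 encoded into the environment before the proxy starts. The file path and certificate content below are stand-ins; the proxy decodes the value and writes it to the path shown in the table:

```shell
# Create a stand-in PEM file, then expose it the way the proxy expects:
# a base64 string in INGRESS_TLS_CERT (content here is a placeholder).
printf -- '-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n' > /tmp/ingress_localhost.crt
INGRESS_TLS_CERT="$(base64 < /tmp/ingress_localhost.crt | tr -d '\n')"
export INGRESS_TLS_CERT
# Round trip: decoding the variable reproduces the original file.
printf '%s' "$INGRESS_TLS_CERT" | base64 -d | cmp -s - /tmp/ingress_localhost.crt && echo ok
```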
Envoy has a built-in admin server that provides many useful tools for debugging. The full abilities of this interface can be found in the docs, but some highlights are given here.
Envoy has the ability to set different log levels for different components of the running system. To see how they're all currently set:
NOTE all examples below assume the admin interface is started on port 8001 (the default option). Adjust according to your configuration.
You can set log levels of "trace, debug, info, warning, error, critical, off" on the global state.
Alternatively, you can set the level of just a specific logger with a format similar to the below. This one changes just the logger for the filters.
Configuration details for the Grey Matter JWT Security service.
You can deploy the Grey Matter JSON Web Token (JWT) Service many ways, including the following:
The preferred approach is to deploy via a Docker container running inside of an OpenShift or Kubernetes Pod.
The service is also packaged as a TAR file. The TAR contains an executable binary file you can deploy to a server.
Follow the configuration requirements below to set up the Grey Matter JWT Security service.
| Variable | Default Value | Description | Type |
| --- | --- | --- | --- |
| USE_LDAP | false | true to configure and search an LDAP server for user payloads | bool |
| LDAP_ADDR | "ldap.example.com" | the LDAP server address | string |
| LDAP_PORT | 389 | the LDAP server port | uint |
| LDAP_TLS | false | true to encrypt the LDAP connection | bool |
| LDAP_BASE_DN | dc=example,dc=com | base userDN for LDAP search requests | string |
| LDAP_USER | "cn=read-only-admin,dc=example,dc=com" | user to associate with the LDAP session | string |
| LDAP_USER_PASSWORD | `echo "password" \| base64` -> "cGFzc3dvcmQK" | password to associate with the LDAP session user | base64 |
| LDAP_TEST_DN | "cn=admin,dc=example,dc=com" | test user payload for LDAP | string |
Need help configuring JWT for LDAP? Contact our team at Grey Matter Support.
/app $ curl -X POST localhost:8001/logging
active loggers:
admin: info
assert: info
backtrace: info
client: info
config: info
connection: info
dubbo: info
file: info
filter: debug
grpc: info
hc: info
health_checker: info
http: info
http2: info
hystrix: info
lua: info
main: info
misc: info
mongo: info
quic: info
pool: info
rbac: info
redis: info
router: info
runtime: info
stats: info
secret: info
tap: info
testing: info
thrift: info
tracing: info
upstream: info
curl -X POST localhost:8001/logging?level=debug
curl -X POST localhost:8001/logging?filter=debug
There are three required pieces of information to configure and run the service.
Set JWT_API_KEY as an environment variable
Set PRIVATE_KEY and USERS_JSON as a base64 encoded string, or as a volume mount (recommended)
If both are provided, the volume mount supersedes the set variable
| Variable | Mount Location | Default Value | Description | Type |
| --- | --- | --- | --- | --- |
| JWT_API_KEY | - | "" | base64 encoded string of comma separated api keys | base64 |
| PRIVATE_KEY | /gm-jwt-security/certs/jwtES512.key | "" | base64 encoded private key file | base64 |
JWT_API_KEY is the base64 encoding of a comma separated list of API keys.
The users.json file should have a users field that contains an array of user payloads. This is an example:
For example, for the API key list 123,my-special-key,super-secret-key,pub-key, JWT_API_KEY is set to the value of:
Any service that provides the header api-key with a value matching one of the following will have access:
123
my-special-key
super-secret-key
pub-key
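Decoding the example key shows the round trip from the documented value back to the accepted api-key list:

```shell
# JWT_API_KEY from the example above; decoding recovers the comma-separated
# list of accepted api-key values.
JWT_API_KEY="MTIzLG15LXNwZWNpYWwta2V5LHN1cGVyLXNlY3JldC1rZXkscHViLWtleQo="
keys="$(printf '%s' "$JWT_API_KEY" | base64 -d)"
echo "$keys"   # 123,my-special-key,super-secret-key,pub-key
```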
The gm-jwt-security service creates and writes JWT tokens to a Redis server. To successfully generate and store JWT tokens, the service connects to a Redis server using the following environment variables.
| Variable | Default Value | Description | Type |
| --- | --- | --- | --- |
| REDIS_HOST | "0.0.0.0" | host name of Redis server | string |
| REDIS_PORT | "6379" | port number of Redis server | string |
| REDIS_DB | 0 | Redis database to be selected after connecting to the server | uint |
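A quick sketch of how the connection target resolves when these variables are left unset, using the defaults from the table above:

```shell
# With the variables unset, gm-jwt-security would target the defaults below.
unset REDIS_HOST REDIS_PORT REDIS_DB
redis_url="redis://${REDIS_HOST:-0.0.0.0}:${REDIS_PORT:-6379}/${REDIS_DB:-0}"
echo "$redis_url"   # redis://0.0.0.0:6379/0
```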
The following environment variables can be set to specify the host, ports, and logging capabilities of the gm-jwt-security service. To specify an expiration time for generated tokens, set TOKEN_EXP_TIME.
| Variable | Default Value | Description | Type |
| --- | --- | --- | --- |
| BIND_ADDRESS | "0.0.0.0" | bind address for the gm-jwt-security server | string |
| HTTP_PORT | 8080 | http port for the server | uint |
| HTTPS_PORT | 9443 | https port for the server | uint |
The gm-jwt-security service supports LDAP as a backend server to search for user payloads.
TLS can be configured on the gm-jwt-security service using TLS Configuration.
Need help configuring JWT? Contact us at: Grey Matter Support.
{
"users": [
{
"label": "CN=localuser,OU=Engineering,O=Decipher Technology Studios,=Alexandria,=Virginia,C=US",
"values": {
"email": [
"localuser@deciphernow.com"
],
"org": [
"www.deciphernow.com"
]
}
},
{
"label": "cn=chris.holmes, dc=deciphernow, dc=com",
"values": {
"email": [
"chris.holmes@deciphernow.com"
],
"org": [
"www.deciphernow.com"
],
"privilege": [
"root"
]
}
}
]
}echo "123,my-special-key,super-secret-key,pub-key" | base64 MTIzLG15LXNwZWNpYWwta2V5LHN1cGVyLXNlY3JldC1rZXkscHViLWtleQo=base64
| Variable | Mount Location | Default Value | Description | Type |
| --- | --- | --- | --- | --- |
| USERS_JSON | /gm-jwt-security/etc/users.json | "" | base64 encoded users.json file | base64 |
| Variable | Default Value | Description | Type |
| --- | --- | --- | --- |
| REDIS_PASS | "123" | password for Redis server | string |
| Variable | Default Value | Description | Type |
| --- | --- | --- | --- |
| ZEROLOG_LEVEL | "WARN" | logging level: INFO, DEBUG, WARN, ERR | string |
| TOKEN_EXP_TIME | 28800 | token expiration time in seconds | uint |
| DEFAULT_PATH | "/services/" | default path to apply to cookies generated by the /policies endpoint | string |
Grey Matter Intelligence 360 is configured with the following environment variables set on the host machine. The term host machine can apply to an AWS EC2 server, Docker container, Kubernetes Pod, etc.
Need help setting up Intelligence 360? Create an account at Grey Matter Support to reach our team.
| Name | Type | Default | Description |
| --- | --- | --- | --- |
| BASE_URL | String | "/" | Base URL, relative to the Grey Matter installation's hostname |
| SERVER_SSL_ENABLED | Boolean | false | Informs service to receive client connections over SSL only |
| SERVER_SSL_CA | String | "" | Path to client trust file (SERVER_SSL_ENABLED=true is required) |
| SERVER_SSL_CERT | String | "" | Path to client certificate (SERVER_SSL_ENABLED=true is required) |
| SERVER_SSL_KEY | String | "" | Path to client private key (SERVER_SSL_ENABLED=true is required) |
| CONFIG_SERVER | String | http://localhost:5555/v1.0/ | Control API endpoint (for retrieving mesh configuration of services) |
| FABRIC_SERVER | String | http://localhost:1337/services/catalog/1.0/ | Catalog endpoint (for retrieving metadata of mesh services) |
| OBJECTIVES_SERVER | String | http://localhost:1337/services/slo/1.0/ | Service Level Objectives endpoint (for retrieving and setting performance objectives) |
| PROMETHEUS_SERVER | String | http://localhost:1337/services/prometheus/2.3/api/v1/ | Prometheus endpoint (for retrieving historical service metrics) |
| USE_PROMETHEUS | Boolean | true | Use Prometheus to query service level metrics |
| SENSE_SERVER | String | http://localhost:1337/services/sense/latest/ | Sense endpoint (for displaying recommended scaling of services); experimental endpoint |
| ENABLE_SENSE | Boolean | false | Sense feature toggle |
| HIDE_EXTERNAL_LINKS | Boolean | false | Hide Decipher social links in the app footer |
| EXPOSE_SOURCE_MAPS | Boolean | false | Expose JavaScript source maps to web browsers in production (recommended for debugging only) |
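For example, pointing Intelligence 360 at non-default backends is a matter of exporting these variables before start-up. The host names below are stand-ins for your deployment; the variable names come from the table above:

```shell
# Hypothetical deployment values; only the variable names are taken from the
# configuration reference.
export CONFIG_SERVER="http://control-api:5555/v1.0/"
export FABRIC_SERVER="http://edge:1337/services/catalog/1.0/"
export USE_PROMETHEUS=true
export EXPOSE_SOURCE_MAPS=false
```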
To enable TLS support for the service, perform the following steps:
Set ENABLE_TLS to true
Specify cert, trust, and key either through a volume mount (recommended) or the following environment variables
In the event that both a volume mount and environment variables are provided, the volume mounted files will take precedence over the environment variables.
| Variable | Mount Location | Default Value | Description | Type |
| --- | --- | --- | --- | --- |
| ENABLE_TLS | - | false | true to enable TLS support | bool |
| SERVER_TRUST | /gm-jwt-security/certs/server.trust.pem | "" | base64 encoded server trust store | base64 |
| SERVER_CERT | /gm-jwt-security/certs/server.cert.pem | "" | base64 encoded server certificate | base64 |
| SERVER_KEY | /gm-jwt-security/certs/server.key.pem | "" | base64 encoded server key | base64 |

Need help? Create an account at Grey Matter Support to reach our team.
Each domain controls requests for a specific host:port. This permits different handling of requests to domains like localhost:8080 or catalog:8080 if desired. If uniform handling is required, wildcards are understood to apply to all domains. A domain set to match *:8080 will match both of the above domains.
NOTE: In most cases, the domain port should match the port exposed on the proxy's listener. If they do not match, users will need to supply a Host header on all requests in order to match the virtual domain.
Virtual host:port matching and redirecting
GZIP of requests
CORS
Setting custom headers for downstream requests
NOTE Do not set an ssl_config on any domain object whose service you want to use SPIFFE/SPIRE. If a domain ssl_config is set, it will override the secret set on the corresponding listener and the mesh configuration will be wrong.
The domain object has an optional ssl_config field, which can be used to set up TLS and specify its configuration. The Domain SSL Config Object appears as follows:
The Domain SSL Configuration is used to populate a TLS context for the Envoy Listener.
The sni field for a domain accepts a list of strings and configures the Envoy Listener to detect the requested Server Name Indication.
To specify a minimum and maximum TLS protocol version, set the protocols field to one of the following: "TLSv1_0", "TLSv1_1", "TLSv1_2", "TLSv1_3". If one protocol is specified, it will be set as both the minimum and maximum protocol versions in Envoy. If more than one protocol version is specified in the list, the lowest will set the minimum TLS protocol version and the highest will set the maximum TLS protocol version. If this field is left empty, Envoy will choose the default TLS version.
The cipher_filter field takes a colon (:) delimited string to populate the cipher_suites cipher list in Envoy for TLS.
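For instance, a cipher_filter restricting a listener to two of the ECDSA suites listed later in this section would look like the following (the value is illustrative only):

```shell
# A colon-delimited cipher_filter value; splitting on ':' shows the two suites.
cipher_filter="ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384"
printf '%s\n' "$cipher_filter" | tr ':' '\n'
```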
redirects - This field can be used to configure redirect routes for the domain. See Redirects for details.
Fields:
name
the name of the redirect
from
domain_key - A unique key used to identify this particular domain configuration. This key is used in proxy, listener, and route objects.
zone_key - The zone in which this object will live. It will only be able to be referenced by objects, or sent to sidecars, that live in the same zone.
name - The name of this virtual domain, e.g. localhost, www.greymatter.io, or catalog.svc.local. Only requests coming in to the named host will be matched and handled by attached routes. Used in conjunction with the port field.
This field can be set to a wildcard (*) which will match against all hostnames.
port - The specific port of the virtual host to match. Used in conjunction with the name field.
E.g. port: 8080 and name: * will set up a virtual domain matching any request made to port 8080 regardless of the host.
ssl_config - SSL config for this domain. Setting the SSL config at the domain level sets this same config on all listeners that are directly linked to this domain.
redirects - Array of URL redirects.
gzip_enabled - DEPRECATION: This field has been deprecated and will be removed in the next major version release.
This field has no effect.
cors_config - A CORS configuration to attach to this domain.
aliases - An array of additional hostnames that should be matched in this domain. E.g. name: "www.greymatter.io" with aliases: ["greymatter.io", "localhost"].
force_https - If true, listeners attached to this domain will only accept HTTPS connections, and an SSL configuration should be set. If false, attached listeners will only accept plaintext HTTP connections.
custom_headers - An array of header key, value pairs to set on all requests that pass through this domain.
E.g.
checksum - An API calculated checksum. Can be used to verify that the API contains the expected object before performing a write.
cipher_filter - If specified, only the listed ciphers will be accepted. Only valid with TLSv1-TLSv1.2; has no effect with TLSv1.3.
Examples include the values below; the full set of options can be found in the Envoy documentation.
[ECDHE-ECDSA-AES128-GCM-SHA256|ECDHE-ECDSA-CHACHA20-POLY1305]
[ECDHE-RSA-AES128-GCM-SHA256|ECDHE-RSA-CHACHA20-POLY1305]
ECDHE-ECDSA-AES128-SHA
ECDHE-RSA-AES128-SHA
protocols - Array of SSL protocols to accept: "TLSv1", "TLSv1.1", "TLSv1.2", "TLSv1.3".
cert_key_pairs - Array of (cert, key) pairs to use when receiving requests on this listener. Each cert or key must point to files on disk.
require_client_certs - If true, client cert verification will be performed. If false, this check is disabled and client certificates are not required when connecting to this listener.
trust_file - String representing the path on disk to the SSL trust file to use when receiving requests on this listener. If omitted, no trust verification will be performed.
sni - String representing how this listener will identify itself during SSL SNI.
{
"cipher_filter": "",
"protocols": [
"TLSv1.1",
"TLSv1.2"
],
"cert_key_pairs": [
{
"certificate_path": "/etc/proxy/tls/sidecar/server.crt",
"key_path": "/etc/proxy/tls/sidecar/server.key"
}
],
"require_client_certs": true,
"trust_file": "/etc/proxy/tls/sidecar/ca.crt",
"sni": null
}
AES128-GCM-SHA256
AES128-SHA
ECDHE-ECDSA-AES256-GCM-SHA384
ECDHE-RSA-AES256-GCM-SHA384
ECDHE-ECDSA-AES256-SHA
ECDHE-RSA-AES256-SHA
AES256-GCM-SHA384
AES256-SHA
SSL Config for incoming requests
from - regex value that the incoming request :path will be matched against.
to - the new URL that an incoming request matching from will route to.
If set to "$host", will redirect to the name of the domain.
redirect_type - determines the response code of the redirect.
Must be one of: "permanent" (for a 301 code), "temporary" (for a 307 code).
header_constraints - a list of header constraint objects.
Each header constraint has the following fields:
name - the header key to be compared, without case sensitivity, to the incoming request's headers.
value - must be a valid regex; the value to be compared to the value of the incoming request header with matching name.
case_sensitive - boolean indicating whether the value will be compared to the value of the header with matching name with case sensitivity.
invert - boolean; if true, inverts the match.
{
"domain_key": "catalog",
"zone_key": "default",
"name": "*",
"port": 9080,
"ssl_config": {
"cipher_filter": "",
"protocols": [
"TLSv1.1",
"TLSv1.2"
],
"cert_key_pairs": [
{
"certificate_path": "/etc/proxy/tls/sidecar/server.crt",
"key_path": "/etc/proxy/tls/sidecar/server.key"
}
],
"require_client_certs": true,
"trust_file": "/etc/proxy/tls/sidecar/ca.crt",
"sni": null
},
"redirects": null,
"gzip_enabled": false,
"cors_config": null,
"aliases": null,
"force_https": true,
"custom_headers": null,
"checksum": "b633fd4b535932fc1da31fbb7c6d4c39517871d112e9bce2d5ffe004e6d09735"
}
"ssl_config": {
"cipher_filter": "",
"protocols": [],
"cert_key_pairs": null,
"require_client_certs": false,
"trust_file": "",
"sni": null
}
"custom_headers": [
{
"key": "x-forwarded-proto",
"value": "https"
}
]
name - Header key to match on. Supports regex expressions.
value - Header value to match on. Supports regex expressions.
case_sensitive - If true, then the regex matching will be case sensitive. Defaults to false.
invert - If true, invert the regex match. This allows easier "not" expressions.
For example, to match only X-Forwarded-Proto: "http":
But to match anything NOT "https":
{
"name": "X-Forwarded-Proto",
"value": "http"
}
String key that uniquely identifies this secret configuration in the Secret Discovery Service.
Secret names are identities that live within the cert pool of Envoy. A name should correspond to one certificate that Envoy has registered, and will be used when querying the SDS API.
Validation - Names are used to verify a certificate in the Envoy cert pool against a Certificate Authority.
When performing 2-way SSL, Subject Alternative Names are required for client certificate verification. Without this configuration option, Envoy will not understand which certificate to verify when it attempts to connect to its upstream/downstream host.
If specified, the TLS connection established when using secrets will only support the specified ECDH curves. If not specified, the default curves will be used within Envoy.
This field specifies how to handle the x-forwarded-client-cert (XFCC) HTTP header.
The possible options when forwarding client cert details are:
"SANITIZE"
"SANITIZE_SET"
"FORWARD_ONLY"
"APPEND_FORWARD"
"ALWAYS_FORWARD_ONLY"
Valid only when forward_client_cert_details is APPEND_FORWARD or SANITIZE_SET and the client connection is mTLS. It specifies the fields in the client certificate to be forwarded. Note that in the x-forwarded-client-cert header, Hash is always set, and By is always set when the client certificate presents the URI type Subject Alternative Name value.
{
"name": "X-Forwarded-Proto",
"value": "https",
"invert": true
}
{
"secret_key": "web-secret",
"secret_name": "spiffe://greymatter.io/web_proxy/mTLS",
"secret_validation_name": "spiffe://greymatter.io",
"subject_names": "spiffe://greymatter.io/echo_proxy/mTLS",
"ecdh_curves": [
"X25519:P-256:P-521:P-384"
],
"forward_client_cert_details": "SANITIZE",
"set_current_client_cert_details": {
"uri": false
}
}
Redirects specify how URLs may need to be rewritten. Each Redirect has a name, a from regex that matches the requested URL, a to field indicating how the URL should be rewritten, and a redirect_type flag to indicate how the redirect will be handled by the proxying layer.
name: Common name for this redirect, e.g. "force-https".
from: Regex pattern to match against incoming request URLs. Capture groups set here can be used in the to field.
to: New URL of the redirect. Can be a direct string or reference capture groups from the from field (e.g. "https://$1").
redirect_type: One of "permanent" or "temporary". Selecting "permanent" sets the response code to 301; "temporary" sets it to 302.
header_constraints: Array of header constraints that must match for the redirect to take effect.
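To illustrate how a from/to pair rewrites a URL, here is a hypothetical Python sketch (Envoy performs this matching internally with its own regex engine; apply_redirect is an illustrative name, and exactly what string the from pattern is matched against is decided by the proxy):

```python
import re

def apply_redirect(url, redirect):
    """Illustrative sketch: apply a Redirect's from/to pair to a URL.
    Envoy-style $N capture references are translated to Python's \\N
    before expansion."""
    replacement = re.sub(r"\$(\d+)", r"\\\1", redirect["to"])
    m = re.fullmatch(redirect["from"], url)
    if m:
        return m.expand(replacement), redirect["redirect_type"]
    return url, None

# The "force-https" redirect from the example below:
force_https = {"name": "force-https", "from": "(.*)",
               "to": "https://$1", "redirect_type": "permanent"}
print(apply_redirect("example.com/a", force_https))
# -> ('https://example.com/a', 'permanent'), i.e. a 301 response
```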
Grey Matter supports the configuration of cross-origin resource sharing on a sidecar. CORS can be configured to allow an application to access resources at a different origin (domain, protocol, or port) than its own.
For more information on CORS, see this CORS reference.
Simple requests are classified as requests that don't require a preflight request. This distinction is made between requests that might be dangerous (i.e. modify server resources) and those that are most likely benign. A request is considered simple when all of the following criteria are true:
The method is GET, HEAD, or POST
Only CORS-safelisted request headers are present
The Content-Type header is set to one of application/x-www-form-urlencoded, multipart/form-data, or text/plain
A more comprehensive list and explanation can be found in the CORS reference above.
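The criteria above can be sketched as a small classifier. This is illustrative only; the full safelist rules live in the Fetch specification, and the function name is hypothetical:

```python
SIMPLE_METHODS = {"GET", "HEAD", "POST"}
# CORS-safelisted request headers (simplified; the Fetch spec has more rules)
SAFE_HEADERS = {"accept", "accept-language", "content-language", "content-type"}
SIMPLE_CONTENT_TYPES = {"application/x-www-form-urlencoded",
                        "multipart/form-data", "text/plain"}

def is_simple_request(method, headers):
    """Return True when a request meets the simple-request criteria above,
    i.e. the browser would send it without a CORS preflight."""
    if method not in SIMPLE_METHODS:
        return False
    if any(h.lower() not in SAFE_HEADERS for h in headers):
        return False
    ct = headers.get("Content-Type") or headers.get("content-type")
    if ct is not None and ct.split(";")[0].strip() not in SIMPLE_CONTENT_TYPES:
        return False
    return True

print(is_simple_request("GET", {}))                                    # True
print(is_simple_request("POST", {"Content-Type": "application/json"}))  # False
```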
As an example, say an app running at http://localhost:8080 is trying to call a backend service with a Grey Matter sidecar at http://localhost:10808. Without a CORS configuration, this request would fail, because the app at localhost:8080 is trying to access resources from a server at a different origin, localhost:10808. To solve this, the following CORS config is set on its domain:
With this configuration, if a simple request comes in to the sidecar from the app, it will have an Origin header value of http://localhost:8080, and this request will succeed. The server will attach a header Access-Control-Allow-Origin: http://localhost:8080 to the response, which signals to the browser that this request is allowed.
Preflight requests are initiated by the browser using the OPTIONS HTTP method before sending a request in order to determine if the real request is safe to send. The response to this kind of request contains information about what is allowed from a request, and the browser determines whether or not to send the actual request based on this information.
This request information is in the form of three HTTP headers: access-control-request-method, access-control-request-headers, and the origin header. These correspond to the allowed_methods, allowed_headers, and allowed_origins fields of the cors_config - thus these configurations can be specified to determine how the Grey Matter sidecar will respond to preflight requests. If a preflight request comes in to the Grey Matter Sidecar that does not meet the specification for one of these configured fields, the sidecar will send back a response indicating that the request should not be initiated.
Based on the same example, say the app running at http://localhost:8080 wants to send requests to the backend service at http://localhost:10808 with a content-type of application/json;charset=UTF-8. This particular content-type is outside of those allowed by CORS for simple requests, and thus would result in sending a preflight request to determine if the request can be sent. In order for the CORS configuration to indicate that the request can be sent, it would need to allow the content-type header by configuring the allowed_headers field:
In the above configuration, CORS will allow requests with Origin header value http://localhost:8080 only, and will indicate that the content-type header may be set on the request.
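The behavior described above can be sketched as a hypothetical preflight responder that maps cors_config fields to response headers (preflight_response is an illustrative name; the real handling happens inside the sidecar, and only 'exact' origin matchers are handled here):

```python
def preflight_response(cors_config, origin):
    """Illustrative sketch: build preflight (OPTIONS) response headers
    from a cors_config like the ones above."""
    allowed = {o["value"] for o in cors_config["allowed_origins"]
               if o.get("match_type") == "exact"}
    if origin not in allowed:
        return None  # no CORS headers; the browser will block the real request
    headers = {"access-control-allow-origin": origin,
               "access-control-max-age": str(cors_config.get("max_age", 0))}
    if cors_config.get("allowed_methods"):
        headers["access-control-allow-methods"] = ", ".join(cors_config["allowed_methods"])
    if cors_config.get("allowed_headers"):
        headers["access-control-allow-headers"] = ", ".join(cors_config["allowed_headers"])
    return headers

# The second example configuration above, with content-type allowed:
cfg = {"allowed_origins": [{"match_type": "exact", "value": "http://localhost:8080"}],
       "allowed_headers": ["content-type"], "allowed_methods": [],
       "exposed_headers": [], "max_age": 60}
print(preflight_response(cfg, "http://localhost:8080"))
```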
To set up CORS, set the cors_config field on a domain object with the desired configuration; see the example below.
For an existing domain, run greymatter edit domain <domain-name> and add the desired cors_config object.
allowed_origins: This field specifies an array of string patterns that match allowed origins. The proxy will use these matchers to set the Access-Control-Allow-Origin header. This header will be set on any cross-origin response that matches one of the allowed_origins.
Available matchers include:
exact
prefix
suffix
regex
Example:
A wildcard value * is allowed except when using the regex matcher.
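A hypothetical sketch of the matcher types listed above (the function is illustrative; actual matching is performed by the sidecar, and the regex case assumes full-string matching):

```python
import re

def origin_matches(matcher, origin):
    """Illustrative sketch of the four allowed_origins matcher types."""
    kind, value = matcher["match_type"], matcher["value"]
    if value == "*" and kind != "regex":  # wildcard allowed except for regex
        return True
    if kind == "exact":
        return origin == value
    if kind == "prefix":
        return origin.startswith(value)
    if kind == "suffix":
        return origin.endswith(value)
    if kind == "regex":
        return re.fullmatch(value, origin) is not None
    return False

print(origin_matches({"match_type": "exact", "value": "http://localhost:8080"},
                     "http://localhost:8080"))      # True
print(origin_matches({"match_type": "suffix", "value": ".greymatter.io"},
                     "https://app.greymatter.io"))  # True
```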
allow_credentials: Specifies the content for the Access-Control-Allow-Credentials header that the proxy will set on any cross-origin request that matches one of the allowed_origins. This header specifies whether or not the upstream service allows credentials.
exposed_headers: Specifies the content for the Access-Control-Expose-Headers header that the proxy will set on any cross-origin request that matches one of the allowed_origins. This header specifies an array of response headers that are exposed to the browser.
max_age: Specifies the content for the Access-Control-Max-Age header that the proxy will set on the preflight response. This header is an integer value specifying how long, in seconds, a preflight response can be cached by the browser.
allowed_methods: Specifies the content for the Access-Control-Allow-Methods header that the proxy will set on the preflight response. This header specifies an array of methods allowed by the upstream service.
allowed_headers: Specifies the content for the Access-Control-Allow-Headers header that the proxy will set on the preflight response. This header specifies an array of request headers allowed by the upstream service.
By default, the proxy applies the upstream service's CORS policy both at the gateway and at the upstream service; a CORS policy set on the gateway is ignored.
Because CORS is a browser construct, curl can always make a request to the server, with or without CORS. However, it can be used to mimic a browser and verify how the proxy will react to CORS requests:
{
"name": "force-https",
"from": "(.*)",
"to": "https://$1",
"redirect_type": "permanent"
}
{
"zone_key": "zone-default-zone",
"domain_key": "domain-backend-service",
"name": "*",
"port": 10808,
"cors_config": {
"allowed_origins": [
{ "match_type": "exact", "value": "http://localhost:8080" }
],
"allowed_headers": [],
"allowed_methods": [],
"exposed_headers": [],
"max_age": 60
}
}{
"zone_key": "zone-default-zone",
"domain_key": "domain-backend-service",
"name": "*",
"port": 10808,
"cors_config": {
"allowed_origins": [
{ "match_type": "exact", "value": "http://localhost:8080" }
],
"allowed_headers": ["content-type"],
"allowed_methods": [],
"exposed_headers": [],
"max_age": 60
}
}
greymatter edit domain <domain-name>
"cors_config": {
"allowed_origins": [],
"allowed_headers": [],
"allowed_methods": [],
"exposed_headers": [],
"max_age": 0,
"allow_credentials": true
}
"allowed_origins": [
{ "match_type": "exact", "value": "http://localhost:8080" }
]
$ curl -v 'http://localhost:9080/services/catalog/latest/' \
-X OPTIONS \
-H 'Access-Control-Request-Method: POST' \
-H 'Access-Control-Request-Headers: content-type' \
-H 'Origin: http://localhost:8080'
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 9080 (#0)
> OPTIONS /services/catalog/latest/ HTTP/1.1
> Host: localhost:9080
> User-Agent: curl/7.64.1
> Accept: */*
> Access-Control-Request-Method: POST
> Access-Control-Request-Headers: content-type
> Origin: http://localhost:8080
>
< HTTP/1.1 200 OK
< access-control-allow-origin: http://localhost:8080
< access-control-max-age: 60
< date: Tue, 12 May 2020 20:11:13 GMT
< server: envoy
< content-length: 0
<
* Connection #0 to host localhost left intact
* Closing connection 0
Rules dictate how traffic is routed. Attributes from incoming requests are matched against preconfigured rule objects to determine where the outgoing upstream request should be routed. Rules are specified programmatically in routes and shared_rules as an array in the Rules attribute.
rule_key: A unique key for each rule. When a request is routed by a rule, the header "X-Gm-Rule" is appended with the rule key.
methods: The supported request methods for this rule. Setting to an empty array will allow all methods.
matches: The request matchers for this rule.
constraints: The constraints field defines arrays that map requests to clusters. Currently, the only implemented field is the light field, which is used to determine the Instance to which the live request will be sent and from which the response will be returned to the caller.
Currently, the constraints field must set a light field that contains an array of cluster constraints, each with a cluster_key and a weight.
NOTE:
constraints also contains a dark and a tap field which currently have no effect. The dark array will be used in future versions to support traffic shadowing to Instances. Similarly, the tap array will determine an Instance to send a copy of the request to, comparing the response to the light response.
cohort_seed: This field has no effect.
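The weight-proportional selection that the light constraints describe can be sketched as follows (illustrative only; the actual load balancing is performed by the mesh, and pick_cluster is a hypothetical name):

```python
import random

def pick_cluster(light_constraints, rng=random):
    """Illustrative sketch: choose a cluster in proportion to the 'weight'
    fields of a light constraints array."""
    total = sum(c["weight"] for c in light_constraints)
    roll = rng.uniform(0, total)
    for c in light_constraints:
        roll -= c["weight"]
        if roll <= 0:
            return c["cluster_key"]
    return light_constraints[-1]["cluster_key"]

# With weights 10 and 1, roughly 10/11 of requests go to example-service-1.0.
light = [{"cluster_key": "example-service-1.0", "weight": 10},
         {"cluster_key": "example-service-1.1", "weight": 1}]
counts = {"example-service-1.0": 0, "example-service-1.1": 0}
for _ in range(1000):
    counts[pick_cluster(light)] += 1
print(counts)
```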
{
"rule_key": "rkey1",
"methods": [
"GET"
],
"matches": [
{
"kind": "header",
"from": {
"key": "routeTo",
"value": "passthrough-cluster"
}
}
],
"constraints": {
"light": [
{
"cluster_key": "passthrough-cluster",
"weight": 1
}
]
}
}
"constraints": {
"light": [
{
"cluster_key": "example-service-1.0",
"weight": 10
},
{
"cluster_key": "example-service-1.1",
"weight": 1
}
]
}
A retry policy is a way for the Grey Matter Sidecar or Edge to automatically retry a failed request on behalf of the client. This is mostly transparent to the client; the client only sees the status and body of the final attempt (failed or succeeded). The only effects a successful retry should produce are a longer average request time and fewer failures.
num_retries: This is the maximum number of retries attempted. Setting this field to N will cause up to N retries to be attempted before returning a result to the user.
Setting it to 0 means only the original request will be sent and no retries are attempted. A value of 1 means the original request plus up to 1 retry will be sent, resulting in potentially 2 total requests to the server. A value of N will result in up to N+1 total requests going to the service.
per_try_timeout_msec: This is the timeout for each retry attempt. Retries can have longer or shorter timeouts than the original request. However, if per_try_timeout_msec is too long, it is possible that not all retries will be attempted, as doing so would violate the timeout_msec field.
timeout_msec: This is the total timeout for the entire chain: the initial request plus all retries. This should typically be set large enough to accommodate the request and all desired retries.
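The arithmetic above can be checked with a small sketch (retry_budget is an illustrative helper, not part of Grey Matter):

```python
def retry_budget(num_retries, per_try_timeout_msec, timeout_msec):
    """Illustrative sketch of the retry arithmetic above: worst-case attempt
    count, and whether the overall timeout can accommodate every attempt
    running to its full per-try timeout."""
    total_attempts = num_retries + 1  # the original request + N retries
    worst_case_msec = total_attempts * per_try_timeout_msec
    return total_attempts, worst_case_msec, worst_case_msec <= timeout_msec

# The example policy below: 2 retries, 60s per try, 60s overall.
print(retry_budget(2, 60000, 60000))
# -> (3, 180000, False): later retries may be cut short by timeout_msec
```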
{
"num_retries": 2,
"per_try_timeout_msec": 60000,
"timeout_msec": 60000
}