
Insight

Gain real-time insight into the performance of your microservices.

The Challenge

Microservice Monitoring Is Complex

A service mesh by itself lacks monitoring and visibility. Without visibility into mesh operations, SREs and DevOps engineers struggle to find the root causes of problems that can occur anywhere: on the network, inside containers, or between services.

Audit Trail and Compliance Reporting

Decentralized systems already have reporting challenges, and those challenges grow with every new service and cloud environment. To make matters worse, failure to comply with privacy protection policies can bring legal trouble, massive fines, and loss of accreditation.

Digital Twin Modeling

Digital twins are a cost-effective way to support decision-makers across many business use cases, from technical operations to marketing and sales. They require systems that can securely replicate, interact with, and, when necessary, accurately modify massive volumes of historical data at speed. In addition, human user and customer data is subject to several PII compliance policies that may require key user data to be anonymized or removed outright immediately upon user request.

Security Attribution Replay (Chain of Evidence)

Service meshes, microservices, serverless, and containers are key elements of Mesh Application and Service Architecture (MASA) implementations. However, at scale these capabilities present significant challenges in service-to-service communication, discovery mechanisms, security layers, and audit observability. The volume of data generated by mesh operations, coupled with the complexity of hybrid and multi-mesh operations, can overwhelm traditional compliance tracking efforts.

The Solution

Instant Visibility and Discovery

Grey Matter's Intelligence 360 helps visualize service health, data-flows, and microservice interactions so you can solve performance issues fast. Intelligence 360 provides full-scope health observability and contextual awareness for the mesh.

The combination of Fabric orchestration and Intelligence 360 dynamic control lets your team check service health and make changes on the fly to fix an anomaly impacting service performance. Grey Matter also enables granular audit and policy compliance as well as rapid dependency identification, with real-time tracking of aggregate-, route-, and service-level SLOs for Memory Utilization, CPU Utilization, Percentile Latencies, Error Rate, and Request Rate in an easy-to-understand manner.

Grey Matter’s omnidirectional mesh telemetry capture and analysis capabilities enable deep mesh audit and observability without the need for special instrumentation. Grey Matter employs a normalized and repeatable in-depth audit capture of every event on the mesh which can be used to establish a baseline for compliance reporting. Employing Kibana as a third-party dashboard, Grey Matter enables data lineage and provenance oversight via fully observable audit controls for rapid forensic detection and diagnosis of anomalies and intrusions.

Scale with Confidence

Grey Matter supports various containers, microservices, and serverless architectures so you can adopt cloud-native technologies without worry.

Isolate the Root Cause

Grey Matter records and displays every activity within the mesh for in-depth audit control, policy compliance reporting, and rapid forensic detection and diagnosis of anomalies and intrusions. You can cut through the noise to find performance outliers throughout your mesh and pinpoint the root cause of your problem within seconds.

Distributed Tracing

Grey Matter offers distributed tracing to monitor service availability and performance. This data helps teams troubleshoot the mesh and improve mesh performance.

Omnidirectional Mesh Telemetry Data

Grey Matter offers in-depth observability and policy compliance management for your complex multi-environment networks. Grey Matter sidecars run alongside each microservice within the mesh, creating a controlled and trusted service edge fleet guided by dynamically established policy guidelines. These services are responsible for managing network requirements such as scaling, access control, policy compliance, audit, and service-to-service intercommunication.

Grey Matter’s omnidirectional mesh telemetry capture and analysis capabilities enable deep mesh audit and observability without the need for special instrumentation. Network micro-segmentation and policy enforcement ensure zero-trust managed security compliance.

Mesh telemetry data powers architecture-wide service level objective (SLO) policy compliance and fine-grained hybrid mesh operational control. With Grey Matter, users can dynamically set SLOs governing each policy action atop the mesh. If an SLO approaches a warning or violation threshold, Grey Matter raises an alert via an intuitive single-pane-of-glass interface for rapid mitigation.
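As an illustration of the warning/violation flow described above, here is a minimal sketch of classifying an observed metric against SLO thresholds. The `SLO` class, its field names, and the threshold values are assumptions for teaching purposes, not Grey Matter's actual API:

```python
# Minimal sketch of SLO threshold classification; the SLO class and its
# fields are illustrative assumptions, not a Grey Matter interface.
from dataclasses import dataclass

@dataclass
class SLO:
    name: str
    warning: float    # level at which an alert should be raised
    violation: float  # level at which the objective is breached

def evaluate(slo: SLO, observed: float) -> str:
    """Classify an observed value against the SLO thresholds."""
    if observed >= slo.violation:
        return "violation"
    if observed >= slo.warning:
        return "warning"
    return "ok"

p99_latency = SLO(name="p99-latency-ms", warning=250.0, violation=400.0)
print(evaluate(p99_latency, 300.0))  # warning
```

A dashboard would surface anything other than "ok" for mitigation.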

Zero-Trust Security

Grey Matter is designed to operate using a zero-trust threat model to ensure each service and transaction running within a Grey Matter-enabled hybrid mesh is appropriately protected. This is supported by the normalized and repeatable in-depth audit capture of every event on the mesh, which can be used to establish a baseline for compliance reporting. The concept of zero-trust is centered on the belief that enterprises do not automatically trust systems or services inside or outside their perimeters. In the case of Grey Matter, everything attempting to connect is verified before access is granted. Every action is recorded and can be played back for forensic analysis.

Grey Matter records and displays the lineage and provenance of every activity conducted by every user on every object atop the mesh throughout its life-cycle. The platform makes this information fully observable and accessible for in-depth audit control, policy compliance reporting, and rapid forensic detection and diagnosis of anomalies and intrusions.

Grey Matter is purposefully designed to capture and analyze the massive volumes of telemetry, user, and operational data generated by mesh operations. With Grey Matter, every piece of network traffic, user activity, or policy-driven system action can be brought together to create digital twins of systems, networks, and even individual users or groups for analysis and testing. Because Grey Matter is built with zero-trust control and audit tracing in mind, PII can be anonymized or selectively removed in keeping with legislative requirements such as the EU General Data Protection Regulation (GDPR).
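To make the GDPR handling concrete, the following hedged sketch shows one way PII fields might be anonymized or erased from captured records. The field names and the SHA-256 hashing scheme are illustrative assumptions, not Grey Matter's implementation:

```python
# Illustrative only: the field names ("email", "name") and the SHA-256
# scheme are assumptions, not Grey Matter's actual PII handling.
import hashlib

PII_FIELDS = {"email", "name"}

def anonymize(record: dict, erase: bool = False) -> dict:
    """Copy a record, removing PII fields (erase=True) or replacing them
    with a one-way hash so events can still be correlated."""
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            if erase:
                continue  # outright removal, e.g. on user request
            out[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            out[key] = value
    return out

rec = {"user_id": 42, "email": "a@example.com", "latency_ms": 17}
print(anonymize(rec, erase=True))  # {'user_id': 42, 'latency_ms': 17}
```

Hashing (rather than deleting) keeps records correlatable for audit while removing the raw identifier; erasure satisfies a removal request outright.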

Questions?

Have a question? Reach out to us at info@greymatter.io to discuss your specific use case.

Release Notes

Artifacts

Grey Matter v1.2 artifacts are now available. Artifacts can be found in the staging repositories:

  • Images

1.2.2

Server versions (if changed from 1.2.1):

Fabric

  • gm-proxy:1.4.5

Fixed

  • Fixed intermittent segfault when using websockets

1.2.1

Server versions (if changed from 1.2.0):

Fabric

  • gm-proxy:1.4.4

  • gm-jwt-security-gov:1.1.2

  • gm-cli:1.4.2

Sense

  • gm-dashboard:3.4.2

  • gm-slo:1.1.5

Changed

  • JWT-Security filter cache now respects token expiration and internally limits the max cache size

  • Sidecar now supports FIPS-compliant builds

  • Sidecar base build updated to Envoy 1.13.3

1.2

Server versions:

Fabric

  • gm-proxy:1.4.2

  • gm-control:1.4.2

  • gm-control-api:1.4.4

Sense

  • gm-dashboard:3.4.1

  • gm-slo:1.1.4

  • gm-catalog:1.0.7

Platform Services

  • gm-data:1.1.1

Release Notes

Added

  • Allow setting Envoy HTTP filters

  • Allow setting Envoy Network filters

  • Allow setting Network and HTTP filters on Grey Matter Listener object

Changed

  • Sidecar base Envoy build updated to 1.13.1

  • gm.inheaders filter will now return 403 if certificates are not present

  • Update trace defaults to v2 APIs

Removed

  • None

Fixed

  • Control EDS resolution of instances now properly causes an update to be sent to the Sidecar

  • Change internal protocol selection to USE_DOWNSTREAM_PROTOCOL to fully support HTTP/2

  • Control server no longer overwrites defined static resources for the data plane

Known Issues

  • The Catalog API requires that cluster names be unique. If two services have the same name and version values, there will be a mismatch in the Sense Dashboard and one of the services will not appear. If the versions are unique, you can use the same name value.

  • The proxy requires a route to be configured on the domain/listener in order for observables to be enabled.

Use Cases

Microservices are a popular architectural pattern for cloud-based development and deployment, but managing them is complex.

Common Questions

If you're familiar with microservices, you've probably faced the following questions.

  • How do I know what services are running in my infrastructure?

  • How do I enable secure hybrid methodologies across multiple cloud and platform-as-a-service (PaaS) environments while leveraging existing on-prem investments?

  • How do I deploy and enforce common security models?

  • How do I avoid core function duplication (e.g. discoverability, scalability, observability, security, service-level management, multi-network fabric, and data protection policies)?

  • How do I understand how my new and existing IT investments are being used?

Explore Grey Matter's solutions to these common use cases below.

Microservice instances have dynamically assigned network locations, and these locations change due to autoscaling, failures, and upgrades.

Grey Matter's service discovery capabilities manage complex network traffic and data. By removing network communication complexities and infrastructure concerns, your team can focus on writing next-gen services in a polyglot, cloud-agnostic environment.

A service mesh is an infrastructure layer built into your application. It controls how parts of your app share data, and tracks these interactions so you can optimize communication and avoid downtime as your app grows. Unique service-level insight and control let you closely measure and manage business objectives.

Use a service mesh like Grey Matter to maximize network efficiency, free developers to innovate, and harness data to enable AIOps.

Modernize your applications and gain business insights with Grey Matter.

Grey Matter supports the scaling of containers, microservices, and serverless architectures, and the dashboard will help you optimize infrastructure, application performance, and business outcomes. Grey Matter takes advantage of your network's data through the innovative application of neural net AI designed to enhance traditional network and application performance. Enhance your diagnostic capabilities and enable predictive network operational monitoring and automated response.

While traditional security models rely on location-based trust, Grey Matter's zero-trust model provides trust for all access requests regardless of location. It enforces adaptive controls and verifies trust on an ongoing basis.

Trust levels adapt to your evolving business. Our zero-trust approach helps prevent unauthorized access, contain breaches, and reduce the risk of an attacker's lateral movement.

Questions?

Want to discuss Grey Matter in more depth? Contact us at info@greymatter.io to discuss your specific use case.

Service Mesh

Use policies to securely manage services across platforms and between private and public clouds.

The Challenge

It's hard to improve performance and operations in a dynamic microservice environment.

The Solution

Dynamic Routing and Security

Grey Matter is a platform agnostic service mesh that simplifies network management. The mesh is made of Fabric, Data, and Sense: components that work in unison to optimize decentralized microservice performance on the network. Grey Matter's service policies and configurations enable dynamic routing and security based on service identity. These policies scale without IP-based rules or networking middleware.

Grey Matter lets you dynamically change policies without touching any code.

Benefits of a Service Mesh

Grey Matter takes complexity from a single microservice and puts it into a "sidecar" proxy. This sidecar proxy works with its dedicated service to provide the following benefits:

  • It gives its service the behaviors the service needs to perform well in a microservice architecture, and

  • It lets its dedicated service focus on its business-specific tasks.

The following table summarizes the benefits a service mesh provides.

Features

Real-Time Performance Metrics

If you're building microservices, you're anticipating the ability to scale, since a microservices architecture will look very different a year out. Each new service introduces failure points, and without a mesh it is hard to find the root of failures. A service mesh captures all communications as performance metrics. These metrics translate to more reliable service requests and a more secure way to scale.
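The idea that a mesh turns every call into performance metrics can be sketched as follows. The `Metrics` class and its fields are a teaching aid, not a Grey Matter or Envoy API:

```python
# Sketch of a proxy turning every completed service call into metrics.
# This class is illustrative, not a Grey Matter or Envoy interface.
class Metrics:
    def __init__(self) -> None:
        self.requests = 0
        self.errors = 0
        self.latencies_ms: list[float] = []

    def record(self, status: int, latency_ms: float) -> None:
        """Count a completed request; 5xx statuses count as errors."""
        self.requests += 1
        if status >= 500:
            self.errors += 1
        self.latencies_ms.append(latency_ms)

    @property
    def error_rate(self) -> float:
        return self.errors / self.requests if self.requests else 0.0

m = Metrics()
m.record(200, 12.5)
m.record(503, 40.0)
print(m.error_rate)  # 0.5
```

Because the proxy sits on every request path, the service itself needs no instrumentation to produce these numbers.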

Secure and Reliable Decision-Making

Grey Matter separates decision-making from data-gathering with its data and control planes to improve the performance of each of these important activities.

The data plane is a collection of sidecar proxies: one proxy for each service. The data plane manages traffic from one application to another and includes routing, forwarding, load balancing, and even authentication and authorization.

Note: a service knows nothing about the network other than the way the network handles the proxy.

The control plane connects data planes and serves as the policy and management layer of the service mesh. It collects telemetry data and makes decisions about configurations.

Hybrid Cloud, Traffic Management, and Observability

Grey Matter lets you set policies you can enforce across cloud instantiations. Its single abstraction layer hides details of the underlying cloud.

Service-to-service communication can be managed centrally, enabling advanced traffic management patterns such as service failover, path-based routing, and traffic shifting that can be applied across public and private clouds, platforms, and networks.
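The traffic-shifting pattern mentioned above can be sketched as a weighted split between two versions of a service. The cluster names and the 90/10 split are illustrative assumptions, not a Grey Matter configuration:

```python
# Hedged sketch of traffic shifting: route requests between two versions
# of a service in proportion to configured weights. Names and weights are
# made up for illustration.
import random

WEIGHTS = {"reviews-v1": 90, "reviews-v2": 10}  # 90/10 canary split

def pick_cluster(weights: dict, rng: random.Random) -> str:
    """Choose an upstream cluster with probability proportional to weight."""
    point = rng.uniform(0, sum(weights.values()))
    for cluster, weight in weights.items():
        point -= weight
        if point <= 0:
            return cluster
    return cluster  # guard against floating-point edge cases

rng = random.Random(0)
sample = [pick_cluster(WEIGHTS, rng) for _ in range(1000)]
print(sample.count("reviews-v2") / len(sample))  # fraction routed to v2
```

Shifting the weights over time (90/10, then 50/50, then 0/100) implements a gradual canary rollout without touching application code.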

Centrally-managed service observability includes detailed metrics on all service-to-service communication, such as connections, bytes transferred, retries, timeouts, open circuits, request rates, and response codes.

Secure Services Across Any Runtime Platform

Grey Matter offers secure communication between legacy and modern workloads. Sidecar proxies allow applications to be integrated without code changes and Layer 4 support provides nearly universal protocol compatibility.

Benefits of Using Grey Matter

  • Add business value instead of focusing on individual services.

Certificate-Based Service Identity and Encrypted Communications

Grey Matter uses TLS certificates to identify services and secure communications. Using TLS provides a strong guarantee of the identity of services communicating, and ensures all data in transit is encrypted. These certificates use the SPIFFE format for interoperability with other platforms. Grey Matter can be a certificate authority to simplify deployment, or integrate with external signing authorities like Vault. All traffic between services is encrypted and authenticated with mutual TLS.
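Since the SPIFFE format carries service identity as a URI of the form `spiffe://<trust-domain>/<workload-path>` (typically in a certificate's URI SAN), a consumer can parse it as below. This parser is a sketch for illustration; real mTLS verification is performed by the mesh proxies, not application code:

```python
# Parsing a SPIFFE-style identity URI. Illustrative only: certificate
# validation and mTLS are handled by the proxy layer, not shown here.
from urllib.parse import urlparse

def parse_spiffe_id(uri: str) -> tuple[str, str]:
    """Split a SPIFFE ID into (trust_domain, workload_path) or raise."""
    parts = urlparse(uri)
    if parts.scheme != "spiffe" or not parts.netloc:
        raise ValueError(f"not a SPIFFE ID: {uri}")
    return parts.netloc, parts.path

print(parse_spiffe_id("spiffe://example.org/ns/prod/sa/billing"))
# ('example.org', '/ns/prod/sa/billing')
```

The trust domain identifies the issuing authority; the path identifies the workload, which is what authorization policies match against.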

Questions?

Contact us at info@greymatter.io to learn more about Grey Matter's service mesh capabilities.

About Grey Matter

Learn about Grey Matter: the intelligent hybrid mesh.

Grey Matter is a software platform that provides zero-trust policy compliance, operational insight, and infrastructure management. More concretely, it's a cloud-native infrastructure layer that improves microservice performance and simplifies service mesh operations.

Key Definitions

A microservice is a self-contained process or collection of processes that provide a unique business capability. Microservices act in a modular fashion to improve system reliability, visibility, and security.

A service mesh is a configurable, low‑latency infrastructure layer that provides secure service-to-service communication. It's often implemented as lightweight network proxies deployed alongside application code. These proxies orchestrate the activities of every service or system running within the mesh.

Core Components

Grey Matter is composed of Fabric, Data, and Sense. Each simplifies the technical challenges associated with microservice management, such as service announcement, service discovery, instrumentation, logging, tracing, troubleshooting, encryption, and access control.

Technical Overviews on Grey Matter

Grey Matter's core features include:

  • Zero-trust security

  • Cloud, language, and vendor-agnostic implementations

  • Automated service-level observability and management

You can use Grey Matter to improve service discovery, build or manage your service mesh, gain business insights, and add zero-trust security to your environment. You can also extend Grey Matter to support your own unique use cases.

Read more about our use cases to learn what Grey Matter does.

Grey Matter is composed of Fabric, Data, and Sense. Learn how these components simplify microservice management in Architecture.

Why Grey Matter?

Grey Matter uses mesh-generated telemetry data to improve network insight and systems performance, from local development to production-scale container and hybrid cloud implementations.

Flexibility and Compatibility

We take our design cues from open architecture. Our implementations don't require predetermined downstream investments for existing infrastructure. Grey Matter is cloud, language, and platform agnostic.

With Grey Matter you can:

  • Deploy across any cloud or vendor environment.

Precision, Depth, and Security

Grey Matter captures over 100 service and instance route-level statistics for each service on the mesh.

Fabric, the platform’s core Envoy proxy-based mesh technology, generates and captures telemetry and audit data. This data powers dynamic enterprise policy compliance, and SLO creation and monitoring.

Grey Matter feeds each piece of telemetry and audit data generated throughout the lifetime of a service instance to Grey Matter’s Sense SLO monitoring overlay service. Because Grey Matter maintains the entire history of every event tied to the lifespan of every service on the mesh, the value of SLO monitoring for enterprise IT only grows over time.

Backed by the full lineage of service telemetry data, your team can:

  • Establish historical patterns of use

  • Refine SLOs to best reflect real-world performance
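One way to refine an SLO from retained history, sketched below, is to set its threshold at a historical percentile of latency samples. The nearest-rank method and the sample data are fabricated for illustration; they are not Grey Matter's computation:

```python
# Sketch: derive an SLO threshold from latency history via a nearest-rank
# percentile. Sample data is fabricated for illustration.
def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile of a non-empty sample list (0 < p <= 100)."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

history_ms = [12, 15, 11, 90, 14, 13, 300, 16, 12, 18]
print(percentile(history_ms, 50))  # 14
print(percentile(history_ms, 99))  # 300
```

Re-deriving the threshold periodically from a growing history keeps the SLO aligned with real-world performance rather than an initial guess.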

Focus on Zero-Trust

Grey Matter is designed with zero-trust security in mind. Our platform manages the activities, policies, security, auditing, and data of every microservice in a multi-tenant mesh across any enterprise environment. These zero-trust enforcement features include:

  • Enabling service-to-service mTLS connections

  • Scheduled or on-demand key rotation

  • Service cryptographic identifiers

All actions within Grey Matter (all services, users, and all data objects) are audited, logged, tracked, and recorded. These logs can be stored within Grey Matter, pushed to external or separate storage devices, or automatically triaged and sent to security assessors.

Audit logs compliant with ICS 500-27 standards are automatically provided to your security team. Grey Matter maintains a historical pedigree of every action occurring on the system throughout its lifespan. This data can be provided to auditors and/or security analysts to assist with risk management, data loss, or breach impact mitigation.

Learn more about zero-trust, or learn how to set up zero-trust for your mesh.

Elegance and Accessibility

Intelligence 360 is Grey Matter’s single pane of glass for mesh service health monitoring, access, and control. Intelligence 360 enables service level objective (SLO) observability and dynamic control for every service within Grey Matter Fabric. Intelligence 360 visualizes aggregates, routes, and service-level SLOs for the following infrastructure data:

  • Memory Utilization

  • CPU Utilization

  • Percentile Latencies

Enterprise Business Value

Grey Matter's telemetry data gives you insight into microservice performance and control over your system's resource use. It offers the following key telemetry-backed value-add benefits.

Improve Data and Communications Management

Grey Matter Fabric is the control and data plane responsible for managing the mesh environment. Envoy proxy-backed communications and traffic management, policy enforcement, audit, and data governance take place atop Fabric.

Grey Matter Data

Grey Matter Data is a data store agnostic API designed to manage access control and zero-trust data security policy compliance. Data enables secure global data sharing backed by complex content tagging, object tracking, and security policy. Data is store agnostic and capable of supporting multiple storage backends, including Amazon S3, CEPH, Gluster, local disk, and more.

Improve Operations and Network Management

Grey Matter includes several open-source tools that improve quality and security, and offer low-cost and flexible ways to meet goals. With Grey Matter, network orchestration and policy management do not require expensive third-party tools and licensing fees.

Questions?

Want to learn more? Contact us at info@greymatter.io to discuss your specific use case.

Grey Matter Documentation

Welcome to Grey Matter documentation. This documentation covers all available features and options for Grey Matter, the intelligent hybrid mesh platform.

Grey Matter is enterprise software that helps organizations manage application strategies built around microservices.

Zero-Trust

Make sure only the right users and secure devices can access applications.

The Challenge

All Network Traffic Is Untrusted

  • JWT Security Service (Gov) uses RFC3339 for logging

  • JWT Security Service (Gov) allows omitting JWT_AAC_SERVER_CN

  • greymatter CLI import-zone now properly works with the output of export-zone

  • greymatter deep delete and list summary format works for all objects

  • Dashboard paginates routes on instance views and can disable routes tabs entirely

  • Dashboard misc bug fixes and browser support

  • SLO service Alpine binary no longer throws a segmentation fault

  • gm-jwt-security:1.1.1

  • gm-jwt-security-gov:1.1.1

  • gm-cli:1.4.1

  • Allow setting all Envoy cluster load balancing policies (excepting deprecated options)

  • Control server now has a simple healthcheck endpoint

  • Sidecar environment variables now support tracing with Zipkin/Jaeger

  • Sidecar can now disable or restrict Sidecar admin endpoint

  • Sidecar filters now support per-route metadata and configurations

  • Sidecar environment variables can now set up Envoy Redis and TCP network filter static resources

  • New gm.oidc-authentication filter

  • New gm.oidc-validation filter

  • New gm.ensure-variables filter

  • New gm.jwt-security filter

  • Data server will now error early if it can't write to the intended storage medium

  • Data server: improved documentation, CLI messages, and server logs

  • Set better default health checks to prevent rejection by Envoy

  • Retry Policies can now be turned off by setting num_retries to 0

  • Listeners now properly always use the set IP rather than defaulting to 0.0.0.0

  • Fixed nil pointer reference in some configurations of listener

  • Allow not setting validation certs in Cluster SSLConfig and Domain SSLConfig

  • Sidecar Observables TLS and mTLS support now working properly

  • Sidecar Observables kafka connection logic now properly terminates

  • Sidecar Observables are now proper JSON when outputting to files

  • Sidecar filters no longer drop T and ST in USER_DN fields (PKI)

  • Removed a frame that created invalid JSON when outputting observables to a file

  • #65: The Grey Matter Control variable GM_CONTROL_STATS_BACKENDS is ineffectual and does not output stats to Prometheus.

  • #305: Grey Matter Control domain object redirects can't perform port rewrites.

  • Binaries

Without Service Discovery | With Service Discovery
--- | ---
Bottlenecks due to frequent manual updates to load balancers as services scale up/down. | Improve operations by reducing the lead time of connecting services from weeks to seconds without operator intervention.
High costs due to a proliferation of east-west load balancers. | Save money by eliminating the need for east-west load balancers to connect services.
Increased risk due to high probability of human errors and single points of failure introduced by load balancers. | Boost performance by lowering the probability of downtime introduced when managing and updating load balancers.

  • Recover faster during downtime.

  • Find ways to optimize the mesh during runtime.

Benefits | Details
--- | ---
Inventory, Visibility, and Performance Management | Grey Matter's telemetry data shows how well a service is performing so you can adjust in real-time.
Security Policy Management | Grey Matter manages policies based on service identities to provide secure service-to-service communications.
Traffic Management | Grey Matter manages traffic between services using route rules.

  • Data governance and policy management

  • SLO compliance

  • Root-cause analysis

  • Business analytics

  • AIOps

  • Work in concert with existing service mesh environments.

  • Flex across containers such as DC/OS Mesosphere, OpenShift, or Kubernetes.

  • Error Rate

  • Request Rate

First coined by Forrester Research, zero-trust architecture “abolishes the idea of a trusted network inside a defined corporate perimeter.” Put simply, zero-trust means “never trust, always verify.” Zero-trust assumes your systems are already compromised by cyber intrusion. Under zero-trust, the enterprise is mandated to create micro-segmentation around sensitive data, backed by deep visibility into how the enterprise uses data across its ecosystem in pursuit of customer satisfaction. This combination of micro-segmentation and awareness greatly enhances security across the enterprise.

As described by O’Reilly Media, the increasingly popular zero-trust approach is based on five key premises:

1. The network is always hostile.

2. External and internal threats always exist on the network.

3. Network locality isn’t enough to decide trust in a network.

4. Every user, device, or network flow must be authenticated and authorized.

5. Policies need to be dynamic and derived from multiple data sources.

Why Is Network Perimeter Trust Insufficient?

Bad actors can breach networks in countless ways. Granting trust to a user who has somehow accessed only one layer of your network security both creates a false sense of security and introduces multiple security gaps.

Perimeter trust ignores policy and context change. Relying on IP address data to establish local trust is insufficient protection. For instance, it ignores risk based on user type, business role, and request context (geo-location, time). In addition to the security concerns, ignoring these controls also represents a potential policy compliance failure.

Perimeter trust also ignores the possibility of compromised credentials. Per Verizon’s 2019 Data Breach Investigations Report, 32% of breaches stemmed from phishing attacks, while 29% involved stolen credentials. By trusting the credentials of a single user, your network becomes susceptible to similar outside attacks.

Finally, perimeter trust ignores the existence of compromised devices already on the network. Compromised devices can be introduced to corporate networks, purposefully or accidentally, by countless means. Symantec's 2019 Internet Security Threat Report research indicates that “one in 36 devices used in organizations present high risk.” Such devices may include malware and other malicious software.

The Solution

Zero-Trust Architecture

Zero-trust security provides defense in depth based on a handful of guiding practices:

  • Establish trusted user identities backed by strong identification, visibility, and authentication,

  • Enable secure access to all apps on the network,

  • Enforce adaptive and risk-based policies at endpoints, and

  • Ensure trustworthy device and data transactions.

Grey Matter ensures security and access control based on zero-trust design and implementation.

How Does Zero Trust Improve Security?

Industry has signaled increased interest in zero-trust infrastructure for service-to-service mTLS connections, scheduled or on-demand key rotations, service cryptographic identifiers, observability (i.e. continuous monitoring, granular audit compliance), service level management, and policy management throughout the enterprise service fleet.

Grey Matter meets each of these requirements. Leveraging zero-trust within Grey Matter, development teams can quickly and flexibly deploy new capabilities and functions without special instrumentation for security or compliance. The platform enables zero-trust segmentation that enforces secure business operations while capturing every event for audit in a normalized and repeatable way, establishing a baseline for compliance reporting.

    The concept of Zero Trust centers on the belief that enterprises should not automatically trust systems or services inside or outside their perimeters; instead, they verify everything attempting to connect before granting access. Grey Matter is designed to operate using a zero-trust threat model to ensure each service and transaction running within a Grey Matter-enabled hybrid mesh is appropriately protected.

    Grey Matter combines a multi-faceted security model with unprecedented compliance insight into the service mesh and data layers, drastically reducing the complexity burden on developers. The platform enables enterprise IT teams to continuously deploy to a common hybrid mesh while maintaining security enforcement and compliance reporting.

    hashtag
    Steps to a Zero-Trust Enterprise

    Zero Trust security requires the following areas of control. Together, they provide a defense-in-depth approach to securing corporate resources no matter where they’re deployed and who needs access to them.

    hashtag
    Establish Trust in User Identities

    hashtag
    Strong Identification & Authentication

    Verify the identity of all users with secure access solutions such as two-factor authentication (2FA) before granting access to corporate applications and resources.

    During authentication and authorization, verify the following before proceeding:

    • Is this user legitimate?

    • Was this user identified in a manner that is acceptable to the task being performed?

    • Is their device healthy enough for the task they are performing?

    • Is this user who they say they are?

    • Should this user have access under any circumstance?

    • Should this user have access given their current circumstances?
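    The checklist above can be sketched as a single policy decision function. This is an illustrative toy model: every field and function name here is hypothetical, not Grey Matter's actual API.

```python
# Illustrative zero-trust access decision combining the checklist above.
# All names and fields are hypothetical, not Grey Matter's API.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_verified: bool      # identity proven (e.g. password + 2FA)
    auth_strength: int       # 1 = password only, 2 = 2FA, 3 = hardware key
    required_strength: int   # minimum strength the task demands
    device_healthy: bool     # device meets posture requirements
    entitled: bool           # user may access this resource at all
    context_ok: bool         # current circumstances (location, time) acceptable

def allow(req: AccessRequest) -> bool:
    """Deny unless every check in the checklist passes."""
    return (req.user_verified
            and req.auth_strength >= req.required_strength
            and req.device_healthy
            and req.entitled
            and req.context_ok)
```

    The key property is that access is denied by default: a single failing check (for example, 2FA required but only a password presented) is enough to refuse the request.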

    Verifying and authenticating user identityarrow-up-right from the moment of registration to each request for access is critical to improving security. These capabilities ensure that all users (privileged and not) and all resources are protected no matter where they’re deployed.

    hashtag
    Gain Visibility into Devices & Activity

    Gain visibility into every device used to access corporate applications, whether or not the device is corporate managed, without device management agents.arrow-up-right

    hashtag
    Enforce Adaptive & Risk-Based Policies

    hashtag
    Endpoint Security

    Legitimate users often incidentally expose their organizations to high levels of risk by accessing resources with compromised devices. These capabilities ensure that when a device is compromised, access won’t be providedarrow-up-right.

    Protect every application by defining policies that limit access only to users and devices that meet your organization's risk tolerance levels. Define, with fine granularity, which users and which devices can access what applications under which circumstances.arrow-up-right

    hashtag
    Enable Secure Access to All Apps

    Grant users secure access to all protected applications through a frictionless secure single sign-on interface accessible from anywhere without a VPN. Protect all applications - legacy, on-premises and cloud-based.

    hashtag
    Network Security

    Preventing lateral movement between segments is often the most effective way to minimize the impact of a breach. These capabilities ensure that breaches are contained with access terminated as soon as malicious behavior is detected or a risk threshold is exceeded.arrow-up-right

    hashtag
    Workload Security

    Attacks come from those with valid credentials as well as from the outside. These capabilities ensure that context is included in all authorization decisionsarrow-up-right and that vulnerabilities in applications and APIs are covered.

    hashtag
    Transaction Security

    Ensure Device and Data Transaction Trustworthiness

    Certain transactions pose more risk than others. Grey Matter's capabilities ensure that high-risk transactions are verified by the user while recognizing anomalous behavior.

    Grey Matter inspects all devices used to access corporate applications and resources at the time of access to determine their security posture and trustworthiness. Devices that do not meet the minimum security and trust requirements set by your organization are denied access to protected applications.

    • Is this session still driven by the real user?

    • Does the amount of trust in the user identity match the level of risk associated with this transaction?

    • Has the request been verified?

    • Did the user provide consent for access, and to whom?

    • What transactions (READ, MODIFY, DELETE) did they consent to?

    • Should this data be encrypted?

    hashtag
    Data Security

    Whether it's sensitive IP or user data covered by one of the many privacy regimes emerging around the globe, data security has become paramount for many organizations. These capabilities ensure that data is encrypted where it needs to be, and that users are always in control of their dataarrow-up-right.

    hashtag
    Where to start?

    Start your zero-trust journey with a strategic deployment of global, adaptive authentication. Use this capability as the policy administration and decision point where all risk signals and policy decisions meet.

    hashtag
    Questions?

    circle-check

    Want to see how Grey Matter's approach to security could work with your specific use case? Contact usenvelope to learn more.


    Core Features

    Get a high-level view of Grey Matter's features.

    hashtag
    Grey Matter Fabric

    Fabric is the master control plane for Grey Matter. It handles cross-cutting network concerns so each microservice can focus on its specific task.

    Explore how Fabric works within Grey Matter's architecture, or configure Fabricarrow-up-right now.

    hashtag
    Oversight and Visibility

    Fabric's user-operated dashboard helps visualize and manage the service mesh. The dashboard provides the following features:

    • Sorting by status, owner, capability, and business value

    • Fine-grained search targeting every microservice on the mesh

    • Aggregate service reporting for all service instances spawned atop the mesh

    Learn more about Grey Matter's service discovery features.

    hashtag
    Network Operations Management and Business Intelligence

    Grey Matter offers the following network operations and business intelligence activities:

    • User-defined service and route level management and objective capture for:

      • Aggregated Service Level Objectives (SLOs)

        • memory, CPU, latency, error and request rates

    Read more about Grey Matter's business insight capabilities here.

    hashtag
    Security and Access Control

    Each microservice requires its own data access point and generates its own data, creating a lot of complexity in the network. Grey Matter governs data access management with fine-grained access policy creation and management.

    Fabric's sidecar proxies run alongside each microservice to manage network requirements such as scaling, access control, and intercommunication. Its security and access control features include:

    • Service-to-service and end-user authentication

      • Mutual TLS and Fine-Grained Access Policies

      • FIPS 140-2 (BoringSSL can be built in a FIPS-compliant modearrow-up-right)

    Learn more about Grey Matter's security policies here.

    hashtag
    Compatibility and Interoperability

    Grey Matter is compatible with the following technologies:

    • Istio Compatible Sidecar

      • Beta features (Istio Control Plane, Citadel, Pilot, Mixer, etc.)

    • Consul Compatible Sidecar

    hashtag
    Grey Matter Data

    hashtag
    Data and Content Transfer and Distribution

    Grey Matter Data is a Data Distribution Network (DDN) that captures, stores, syncs, caches, moves, and shares data and content of any kind, to and from consumers and services anywhere.

    hashtag
    Data's Features

    Data's features include:

    • Immutable, timestamped fact database

    • JWT-based authentication and authorization

    • High-performance, user-defined security policy

    circle-check

    No other service mesh offering on the market today matches the breadth of metrics capture provided by Grey Matter.

    hashtag
    Grey Matter Sense

    Finally, the direct and atmospheric telemetry generated by this constant flow of data is fed to Sense, our experimental machine learning-enabled AI layer. We designed Sense to automate and optimize enterprise network operations, cutting resource expenditures at machine speed.

    hashtag
    Sense's Features

    Sense's features include:

    • Catalog

    • Service Level Objectives

    • Intelligence 360

    hashtag
    All Together

    Check out our architecture section for visual representations of Grey Matter's features.

    hashtag
    Questions?

    circle-check

    Have a question about our features? Contact us at info@greymatter.ioenvelope to discuss your use case.

    Hybrid and Multi-Mesh Deployments

    hashtag
    Simple Deployment Model

    In the simple deployment model, there is a single point of ingress between the Client and the Grey Matter Edge. The Edge routes traffic to appropriate Grey Matter Sidecars.

    Edge in a simple deployment model.

    hashtag
    Multi-Mesh Deployment Model

    The multi-mesh deployment model extends the simple deployment model by allowing each mesh egress to communicate with another mesh ingress using mTLS, as shown below.

    hashtag
    Questions?

    circle-check

    Have a question about Grey Matter's deployment models? Reach out to us at info@greymatter.ioenvelope to learn more.

    Design Principles

    Explore Grey Matter's design principles.

    Grey Matter is a zero-trust hybrid mesh platform built using open architecture and mesh app and service architecture (MASA) principles.

    Each microservice within Grey Matter runs and scales independently to improve secure interoperations, resiliency, continuity of operations, and insight for your business. Our omnichannel support provides rich, fluid, and dynamic connections between people, content, devices, processes, and services.

    circle-check

    Combine Grey Matter with today’s languages and powerful frameworks to write business services faster than ever.

    hashtag
    Core Architectural Principles

    Our core architecture principles support the following business needs.

    hashtag
    Security

    Designed to operate using a zero-trust threat model to ensure each service running within a Grey Matter enabled hybrid mesh is appropriately secured, observed, and managed.

    hashtag
    Portability

    Enable on-premise, multi-cloud, and multi-platform as a service (PaaS) runtime environments.

    hashtag
    Cloud-native

    Built with elasticity, high availability, and cloud computing models in mind - provides a unified mesh platform to build applications as microservices, utilize container management solutions, and dynamically orchestrate workloads across hybrid enterprise.

    hashtag
    Web-scale

    Provides a solid foundation to scale with the growth of your business. Enables modern architectural patterns that support rapid increases or decreases in traffic volume, maintains business insight for effectiveness and efficiency, and helps reduce bottlenecks when it matters most.

    hashtag
    Modular

    Modular service delivery - enabling loosely coupled systems and services developed independent of each other, taking advantage of continuous delivery to achieve reliability and faster time to market.

    hashtag
    Interoperability

    Creates a secure, unified zero-trust network fabric that allows systems to interchangeably serve or receive services from other systems, giving enterprises the ability to perform multi-environment segmentation and observe traffic flows between environments. Managed through a runtime-environment-agnostic Grey Matter Control API.

    hashtag
    Adaptive

    Able to react to digital business changes, providing a pathway for business insight, security, and connectivity across multiple environments, reducing complexity while facilitating a business's digital transformation journey.

    hashtag
    Intelligent

    Uses artificial intelligence (AI) techniques to simplify and assist the user experience while providing business insight and fleet-wide management across Grey Matter-connected resources.

    hashtag
    Automation

    Integrates into any ecosystem and provides end-to-end automation throughout the lifecycle of the mesh app and service architecture.

    hashtag
    Questions?

    circle-check

    Have a question about Grey Matter's design principles? Reach out to us at info@greymatter.ioenvelope to learn more.

    Grey Matter Control Plane

    Grey Matter Control performs service discovery and distribution of configuration in the Grey Matter service mesh. Control provides policy and configuration for all running sidecars within the Grey Matter platform.

    The Control server works in conjunction with the Grey Matter Control API to manage, maintain, operate, and govern the Grey Matter hybrid mesh platform.

    hashtag
    xDS

    The Control server performs service discovery in the mesh and acts as an xDS server to which all proxies connect.

    xDS is the generic name for the following:

    • Endpoints (EDS)

    • Clusters (CDS)

    • Routes (RDS)

    • Listeners (LDS)

    • Access Logging (ALS)

    • Aggregate (ADS)

    circle-info

    Each discovery service allows proxies (and other services that speak the xDS protocol) to request streams for particular resources (Listeners, Routes, Clusters, etc) from the control plane.

    hashtag
    Service Discovery

    Service Discovery is the way Grey Matter dynamically adds and removes instances of each microservice. Discovery adds the initial instances that come online, and modifies the mesh to react to any scaling actions that happen. To keep flexibility in the Grey Matter platform, the Control server supports a number of different service discovery options and platforms.

    circle-check

    One key benefit to a service mesh is the dynamic handling of ephemeral service nodes. These nodes have neither consistent IP addresses nor consistent numbers of instances as services are spun up and down. The gm-control-api server, in conjunction with Grey Matter Control, can handle these ephemeral services automatically.‌

    The ability to automatically populate instances of a particular microservice comes from the cluster object. In particular, the name field in the cluster object determines which nodes will be pulled out of the mesh and populated in the instances array. In the example below, the name is catalog. This means that all services that announce as catalog in service discovery will be found and populated into the instances array after creation.

    Create the following object:

    {
        "zone_key": "default-zone",
        "cluster_key": "catalog-proxy",
        "name": "catalog",
        "instances": [],
        "circuit_breakers": {
            "max_connections": 500,
            "max_requests": 500
        },
        "outlier_detection": null,
        "health_checks": []
    }

    Will be populated in the mesh as:

    {
      "cluster_key": "catalog-proxy",
      "zone_key": "default-zone",
      "name": "catalog",
      "secret": {
        "secret_key": "",
        "secret_name": "",
        "secret_validation_name": "",
        "subject_names": null,
        "ecdh_curves": null,
        "set_current_client_cert_details": {
          "uri": false
        },
        "checksum": ""
      },
      "instances": [
        {
          "host": "10.128.2.183",
          "port": 9080,
          "metadata": [
            { "key": "pod-template-hash", "value": "2000163809" },
            { "key": "gm_k8s_host_ip", "value": "10.0.2.132" },
            { "key": "gm_k8s_node_name", "value": "ip-10-0-2-132.ec2.internal" }
          ]
        },
        {
          "host": "10.128.2.140",
          "port": 9080,
          "metadata": [
            { "key": "pod-template-hash", "value": "475497808" },
            { "key": "gm_k8s_host_ip", "value": "10.0.2.82" },
            { "key": "gm_k8s_node_name", "value": "ip-10-0-2-82.ec2.internal" }
          ]
        }
      ],
      "circuit_breakers": {
        "max_connections": 500,
        "max_pending_requests": null,
        "max_retries": null,
        "max_requests": 500
      },
      "outlier_detection": null,
      "health_checks": [],
      "checksum": "2b6d2a8a6886eb30574f16480b0f99b90e11484d9ddb10fb7970c3ce37d945ab"
    }

    Even though the object was created with no instances, they were discovered from the mesh and populated. Now any service that needs to talk to catalog can link to this cluster and address all live instances.
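    The population step described above can be sketched as a simple name match between announced service instances and cluster objects. This is a toy model of the behavior, not gm-control's actual implementation.

```python
# Toy model of discovery-driven cluster population: every instance that
# announces under the cluster's "name" lands in its "instances" array.
def populate(cluster: dict, announcements: list) -> dict:
    cluster = dict(cluster)  # don't mutate the caller's object
    cluster["instances"] = [
        {"host": a["host"], "port": a["port"]}
        for a in announcements
        if a["service"] == cluster["name"]
    ]
    return cluster

# Cluster created with an empty instances array, as in the example above.
cluster = {"cluster_key": "catalog-proxy", "name": "catalog", "instances": []}
announced = [
    {"service": "catalog", "host": "10.128.2.183", "port": 9080},
    {"service": "catalog", "host": "10.128.2.140", "port": 9080},
    {"service": "reviews", "host": "10.128.2.77",  "port": 9080},
]
populated = populate(cluster, announced)
# populated["instances"] now holds only the two catalog hosts
```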

    hashtag
    Configuration Discovery

    Each proxy in the mesh is connected to the control plane through a gRPC stream to the Grey Matter Control server. Though gm-control-api houses all the configuration for the mesh, it's ultimately gm-control that turns these configs into full Envoy configuration objects and sends them to the proxies.‌

    The configuration in the Control API is mapped to physical proxies by the name field in the proxy API object. It's very important that this field exactly match the service-cluster identifier that the intended target proxy used when registering with gm-control.

    In the example below, the proxy object, and all other objects linked by their appropriate keys, will be turned into a full Envoy configuration and sent to any proxies that announce as a cluster catalog.‌

    Services that announce as:

    XDS_CLUSTER=catalog
    XDS_REGION=default-zone

    Will receive the config from the object below, because XDS_CLUSTER == name, and they're both in the same zone.

    {
        "proxy_key": "catalog-proxy",
        "zone_key": "default-zone",
        "name": "catalog",
        "domain_keys": [
            "catalog"
        ],
        "listener_keys": [
            "catalog-listener"
        ],
        "listeners": null,
        "active_proxy_filters": [
            "gm.metrics"
        ],
        "proxy_filters": {
            "gm_impersonation": {},
            "gm_observables": {},
            "gm_oauth": {},
            "gm_inheaders": {},
            "gm_listauth": {},
            "gm_metrics": {
                "metrics_port": 8081,
                "metrics_host": "0.0.0.0",
                "metrics_dashboard_uri_path": "/metrics",
                "metrics_prometheus_uri_path": "/prometheus",
                "prometheus_system_metrics_interval_seconds": 15,
                "metrics_ring_buffer_size": 4096,
                "metrics_key_function": "depth"
            }
        }
    }
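    The matching rule, that XDS_CLUSTER must equal the proxy object's name within the same zone, can be sketched as follows. This is an illustrative sketch of the rule only, not gm-control's code.

```python
# Illustrative: does a proxy that announced with these env vars receive
# the config derived from this proxy API object?
def receives_config(xds_cluster: str, xds_region: str, proxy_obj: dict) -> bool:
    # Both the announced cluster name and the zone must match exactly.
    return (proxy_obj["name"] == xds_cluster
            and proxy_obj["zone_key"] == xds_region)

proxy_obj = {"proxy_key": "catalog-proxy", "zone_key": "default-zone", "name": "catalog"}
assert receives_config("catalog", "default-zone", proxy_obj)
assert not receives_config("catalog", "other-zone", proxy_obj)
```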

    hashtag
    Questions?

    circle-check

    Have a question about the Grey Matter Control Plane? Reach out to us at info@greymatter.ioenvelope to learn more.

    Architecture

    Explore Grey Matter's design and learn how it works.

    Design Principleschevron-right

    Grey Matter consists of a control plane, data plane, mesh, telemetry-powered business intelligence, and AI. It can be deployed on multiple cloud-native or legacy infrastructures without placing predetermined downstream requirements on existing investments.

    Core Componentschevron-right

    Learn about the inner workings of Grey Matter's three core components: Fabric, Data, and Sense.

    Hybrid and Multi-Mesh Deploymentschevron-right

    Grey Matter enables unified hybrid microservice deployments and hybrid/multi-cloud operations without special requirements for underlying infrastructure like containers or container platforms regardless of cloud vendor, PaaS or infrastructure.

    hashtag
    Questions?

    circle-check

    Contact us at info@greymatter.ioenvelope to discuss your specific use case.

    Service Discovery

    Automatically detect new services to improve performance.

    hashtag
    The Challenge

    Service discovery is important but hard. Microservices constantly communicate with instances of other microservices, intensifying the volume and frequency of communications.

    hashtag
    The Solution

    Grey Matter's service-level visibility and telemetry improves service inventory and dependency analysis.

    hashtag
    How It Works

    The Grey Matter Control server supports several service discovery options and platforms.

    • Control performs service discovery in the mesh and acts as the Envoy xDS server to which all proxies connect.

    • Service discovery lets services communicate transparently with a member of a cluster without knowing its identity, or even how many members there are.

    • When a service joins or leaves the cluster, it joins or leaves the load-balancing list.

    hashtag
    Implementations

    Grey Matter supports several service discovery implementations (Kubernetes, Consul, Envoy v1 CDS/SDS and v2 CDS/EDS in beta, AWS EC2/ECS, and flat files), with more to come. Most rely on a central server or servers that maintain a global view of addresses (Kubernetes provides this central server) and clients that connect to the central server to update and retrieve addresses. In flat-file cases, we farm that functionality out to another service discovery system.

    circle-check

    Service discovery is arguably the first piece of infrastructure you should adopt when moving to microservices.

    When choosing your service discovery architecture, consider the following:

    • What is the best strategy for deploying service discovery clients?

    • What types of resources and services will you ultimately want to address in your service discovery system?

    • What languages and platforms need to be supported?

    Regardless of your choice, the implementation of an automated, real-time service discovery solution will simplify your microservices architecture.

    Here is a short explanation of the workings of each of our service discovery implementations.

    hashtag
    Kubernetes

    Grey Matter's Kubernetes service discovery scheme keeps track of labels placed on pods when they're deployed. Our software uses the Kubernetes API to ask "what pods are in this namespace?" Then we use the labels to match those pods to "clusters", so services can talk to other services just by saying "send this request to some instance of this cluster". When pods die, they get removed from Kubernetes' service discovery list, and therefore from our software's list of running instances for a cluster.
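    That label-to-cluster matching can be sketched as follows. This is a pure-Python toy model; the real integration queries the Kubernetes API for pods in a namespace and reacts as they come and go.

```python
# Toy model of label-based discovery: pods whose labels include the
# cluster's selector are treated as live instances of that cluster.
def match_pods(pods: list, selector: dict) -> list:
    return [
        pod["ip"]
        for pod in pods
        if all(pod["labels"].get(k) == v for k, v in selector.items())
    ]

pods = [
    {"ip": "10.0.0.5", "labels": {"app": "catalog", "version": "v1"}},
    {"ip": "10.0.0.6", "labels": {"app": "catalog", "version": "v2"}},
    {"ip": "10.0.0.7", "labels": {"app": "reviews"}},
]
# All pods labeled app=catalog become instances of the "catalog" cluster.
instances = match_pods(pods, {"app": "catalog"})
```

    When a pod dies, it disappears from the pod list, and the next match produces a shorter instance list; that is the whole mechanism by which dead pods drop out of a cluster.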

    hashtag
    Consul

    For existing infrastructures already using Consul, you can easily configure Grey Matter to discover services from the existing catalog without migration to Kubernetes or any other platform. Infrastructures with services running in heterogeneous environments, or looking to leverage features such as health checking, load-balancing, or distributed service configuration management can use Consul as a central service registry from which Grey Matter will discover.

    hashtag
    Envoy

    Many cloud infrastructures use Envoy's Cluster Discovery Service (CDS)arrow-up-right or Endpoint Discovery Service (EDS)arrow-up-right to keep track of instances and upstream clusters. Since Grey Matter Control uses xDS as the configuration discovery mechanism, Grey Matter inherits all Envoy configs. Moreover, any Grey Matter proxy can act as a vanilla Envoy proxy instance, and any Envoy proxy can integrate with Grey Matter Control.

    hashtag
    File

    We also support service discovery for flat-file cases.

    circle-check

    We are always working on new implementations for service discovery! Contact usenvelope to discuss your implementation requirements.

    hashtag
    Features

    Grey Matter monitors cloud-native environments in real-time. Grey Matter can detect and address meaningful conditions in seconds.

    circle-check

    Have a question about how Grey Matter performs service discovery? Contact usenvelope to discuss your specific use case.

    Core Components

    Learn about the major components in the Grey Matter ecosystem.

    circle-check

    Use the concepts in this document alongside our when deploying Grey Matter in production.


    Grey Matter Sidecar

    The Grey Matter Sidecar is an L7 reverse proxy based on the popular open-source Envoy Proxyarrow-up-right. Grey Matter's proxy enhances the base capabilities with custom filters, logic, and the ability for developers to write full-featured Envoy filters in Go.

    hashtag
    Fabric Mesh

    The primary use of the Grey Matter Sidecar is to act as the distributed network of proxies in the Grey Matter Fabric service mesh. In this use case, each proxy starts out with a very simple configuration, which is then modified by the control plane to suit the changing needs of the network. The documentation here focuses on the individual proxy itself: low-level configuration, filter specifications, and so on.

    Security Models

    Service meshes, microservices, serverless, and containers are key elements of mesh app and service architecture (MASA) implementations. MASA, APIs, and internal traffic patterns represent one of the most effective pathways to enterprise modernization, but this doesn’t come without challenges.

    Industry has signaled increased interest in zero-trust infrastructure for service-to-service mTLS connections, scheduled or on-demand key rotations, service cryptographic identifiers, observability (continuous monitoring, granular audit compliance, etc.), service-level management, and policy management throughout the enterprise service fleet.

    hashtag
    Security Models

    Grey Matter Platform Services

    hashtag
    Data

    Grey Matter Data is a microservice for the versioned and encrypted storage of media blobs and assets. It is a high-performance, time-series object store containing an immutable sequence of events that collectively describes a file system at a given moment in time. If the first event is the creation of an object and the second is its deletion, then the object will not appear in listings as of the current time, but it will appear as of any time after the creation and before the deletion.
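    The create/delete example above can be sketched as an event log replayed up to a cutoff time. This is a toy illustration of event sourcing, not Data's actual storage format.

```python
# Toy event-sourced listing: replay create/delete events up to a cutoff
# time to decide which objects are visible "as of" that moment.
def listing_as_of(events: list, t: int) -> set:
    visible = set()
    for time, action, obj in sorted(events):
        if time > t:
            break  # events after the cutoff haven't "happened" yet
        if action == "create":
            visible.add(obj)
        elif action == "delete":
            visible.discard(obj)
    return visible

# Created at t=1, deleted at t=5, exactly as in the paragraph above.
events = [(1, "create", "report.pdf"), (5, "delete", "report.pdf")]
assert listing_as_of(events, 3) == {"report.pdf"}  # between create and delete
assert listing_as_of(events, 6) == set()           # after the deletion
```

    Because the log itself is immutable, any historical listing can be reproduced exactly by replaying to the desired timestamp.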

    Guides

    Walkthroughs of common operations.

    hashtag
    Get Started with Grey Matter

    hashtag
    Prerequisites

    hashtag
    Questions?
    circle-check

    Have a question about Grey Matter's platform services? Reach out to us at info@greymatter.ioenvelope to learn more.

    hashtag
    How does event auditing work?

    hashtag
    Individual Service

    At the level of the individual service, event auditing works as follows:

    1. One proxy collects all metrics that happen on the individual service.

    2. At the Edge, the user's PKI certificate is extracted.

    3. The user that has accessed the service from outside Fabric is then decomposed based on one of the observable fields emitted by the Sidecar proxy.

    4. This information, coupled with IP address information from the originating request, is added to the stack of the xForwardedForIp information.

    hashtag
    Service-to-Service

    At the service-to-service level, the sidecar tracks service-to-service calls within Fabric. This enables architecture inference and service dependency observation.

    hashtag
    Observable Indexer

    Grey Matter also has an observable indexer which can capture geolocation info and move it into Elasticsearch. Customizable event mappings are also available. These can be tailored per individual route so that a POST request may result in an EventAccess event in one route, while resulting in EventCreate on another.
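    A per-route mapping of the kind described, where the same POST produces different audit events on different routes, can be sketched like this. The mapping shape and event names beyond those in the text are illustrative, not the indexer's actual configuration format.

```python
# Hypothetical per-route event mapping: the same HTTP method maps to
# different audit event types depending on the route it hits.
EVENT_MAPPINGS = {
    ("/search",    "POST"): "EventAccess",
    ("/documents", "POST"): "EventCreate",
}

def audit_event(route: str, method: str) -> str:
    # Fall back to a generic event when no specific mapping exists.
    return EVENT_MAPPINGS.get((route, method), "EventUnknown")

assert audit_event("/search", "POST") == "EventAccess"
assert audit_event("/documents", "POST") == "EventCreate"
```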

    circle-info

    Note: payload can be delivered via Kafka or a log file. Beyond Kafka, Grey Matter can also support SS2 and direct logging. Kafka emits back out to Elasticsearch through the Audit Proxy Observable Consumer.

    hashtag
    Questions?

    circle-check

    Have a question about the Grey Matter Sidecar? Reach out to us at info@greymatter.ioenvelope to learn more.

    API Registry and Edge Node
  • Real-Time Service Instance Reporting, to include:

    • Summary View

    • Metrics Explorer

      • Service and route-level performance measurements

      • Request and response status codes, counts, and in and out throughput

      • Percentile statistics

      • Latency

  • Sidecar filters to surface security, observable events, and service metrics, including:

    • OAuth

    • TLS Mutual Authentication and Impersonation

    • Access Control Whitelists and Blacklists

  • Aggregated route-level objectives (latency, error and request rate)

  • Violation monitoring

  • Service and route level aggregation analysis views providing pertinent historical data for ephemeral nodes and performance measurements

  • User-determined business impact assessment and value reporting per service and route overlaid against actual service mesh performance

  • Microservice fleet-wide Policy Management and Enforcement

  • Intelligent Routing and Resiliency, to include:

    • Advanced Load Balancing for ephemeral services

    • Intelligent Routing (A/B tests, canary deployments, etc.)

    • Resiliency (timeouts, retries, circuit breakers, bulkheads, etc.)

  • Active and Passive Health Checks

  • Certificate Verification

  • Access Logging (standard output [stdout]), Kafka via observables

  • AWS Cloudwatch Integration

  • Go SDK for Envoy

  • OAuth integration

  • Support for telemetry storage in Prometheus

  • Service Discovery supporting:

    • ZooKeeper

    • Istio

    • Consul

  • Configuration Inject (Docker, K8s)

  • HTTP/2 and gRPC support

  • Multi-cluster (alpha)

  • Support for multiple deployment environments, including:

    • On-Premise

    • AWS

    • Kubernetes

    • OpenShift

    • DC/OS Mesosphere

    • Cloud Foundry (alpha)

    • Microsoft Azure (alpha)

    • IBM Cloud (alpha)

  • Obscured fine-grained access control labels
  • File expiration and deletion

  • Lineage tracking

  • Event-sourced transactions

  • Content Delivery Network (CDN) hosting features

  • Relative paths

  • Range requests

  • Bulk upload

  • Content encryption

  • Offline, rejoin, and merge support

  • Eventual consistency

  • Business Impact


    Service Registry

    Grey Matter's service registry tracks all running services and their health status to organize a dynamic microservice environment.

    Service Registration

    The service registry registers and deregisters all service instances using self-registration, or third-party registration (where another system component manages the registration of service instances).
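    Self-registration and third-party registration are two paths into the same registry, which can be sketched as a toy model (the class and method names here are illustrative, not Grey Matter's API):

```python
# Toy registry: instances can self-register, or a deployer component can
# register them on their behalf (third-party registration).
class Registry:
    def __init__(self):
        self.instances = {}

    def register(self, service: str, instance: str) -> None:
        self.instances.setdefault(service, set()).add(instance)

    def deregister(self, service: str, instance: str) -> None:
        self.instances.get(service, set()).discard(instance)

registry = Registry()
registry.register("catalog", "10.0.0.5:9080")    # instance registers itself
registry.register("catalog", "10.0.0.6:9080")    # a deployer registers it (third party)
registry.deregister("catalog", "10.0.0.5:9080")  # instance shuts down cleanly
```

    Either way, the registry ends up with the same view; the difference is only in which component is responsible for keeping it current.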

    Health Checks

    Health checks find running nodes and services that can't handle requests. A service that has a health check API endpoint returns the health of the service. The API endpoint handler performs various tests, such as:

    • Status of connections to the infrastructure services used by the service instance

    • Status of the host, such as disk space

    • Application-specific logic

    A service registry or load balancer routinely invokes the endpoint to check the health of the service instance. A service instance failure generates an alert and subsequent requests route to healthy instances.
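    A health endpoint handler of the kind described usually aggregates several probes into one overall status. The following is an illustrative sketch under assumed probe names; it is not Grey Matter's health-check implementation.

```python
# Illustrative health-check handler: run each probe, report overall
# health plus per-check detail, as a registry or load balancer expects.
def health_handler(checks: dict) -> dict:
    results = {}
    for name, probe in checks.items():
        try:
            results[name] = bool(probe())
        except Exception:
            results[name] = False  # a crashing probe counts as unhealthy
    return {"healthy": all(results.values()), "checks": results}

status = health_handler({
    "database": lambda: True,       # connection to infrastructure services
    "disk_space": lambda: 42 > 10,  # host status, e.g. free GB above a floor
})
# status["healthy"] summarizes the instance for the registry's routine poll
```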

    Multi Datacenter

    Grey Matter supports multiple datacenters out of the box with no complicated configuration. Look up services in other datacenters or keep the request local. Advanced features like Prepared Queries enable automatic failover to other datacenters.

    DNS Query Interface

    Grey Matter does not itself rely on DNS for service discovery, though nothing in its design precludes doing so. Exposing service discovery through a built-in DNS server helps existing applications integrate, as almost all applications support using DNS to resolve IP addresses. Resolving a DNS name instead of hard-coding a static IP address allows services to scale up/down and route around failures easily.
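The reason a DNS interface eases integration is that applications already resolve names with ordinary system calls, so swapping a static IP for a discovery-backed name requires no code changes. A self-contained illustration (`localhost` stands in for a discovered service name):

```python
# Applications resolve names through the standard resolver; a
# DNS-backed discovery system plugs in behind this call unchanged.
import socket

addrs = {info[4][0] for info in socket.getaddrinfo("localhost", 80)}
print("127.0.0.1" in addrs or "::1" in addrs)  # True
```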

    HTTP API with Edge Triggers

    Grey Matter provides an HTTP API to query the service registry for nodes, services, and health check information. The API also supports blocking queries, or long-polling for any changes. This allows automation tools to react to services being registered or health status changes to change configurations or traffic routing in real time.
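The blocking-query pattern above can be sketched as a long-polling loop. The fetch signature, index parameter, and response shape here are hypothetical (consult the Grey Matter Control API for the real interface); a fake registry stands in for a live mesh so the loop can run standalone.

```python
# Hypothetical long-polling loop against a service-registry HTTP API.

def watch(fetch, on_change, rounds=3):
    """Repeatedly issue blocking queries; call on_change whenever the
    registry's index advances (i.e., something changed)."""
    index = 0
    for _ in range(rounds):
        new_index, services = fetch(index)  # blocks until change or timeout
        if new_index != index:
            on_change(services)
            index = new_index

# A fake registry used to exercise the loop without a live mesh:
snapshots = iter([(1, ["svc-a"]), (1, ["svc-a"]), (2, ["svc-a", "svc-b"])])

seen = []
watch(lambda i: next(snapshots), seen.append)
print(seen)  # each entry is a change the watcher reacted to
```

This is how automation tools can react in real time: the query blocks server-side until the registry changes, instead of the client polling on a fixed interval.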

    Overview of Grey Matter's Functionality

    Grey Matter is composed of Fabric, Data, and Sense. Internal to each component is a series of microservices offering several core features. Each feature simplifies a technical challenge associated with service management, such as:

    • Announcement

    • Discovery

    • Instrumentation

    • Logging

    • Tracing

    • Troubleshooting

    • Encryption

    • Access control

    • Network/micro/data-segmentation

    hashtag
    Workload Distribution

    The following diagram shows the workload distribution between Grey Matter's core components.

    hashtag
    Grey Matter Fabric

    Fabric powers the zero-trust hybrid service mesh, which consists of the Edge, Control, Security, and Sidecar. You can use Fabric to connect services regardless of language, framework, or runtime environment.

    circle-info

    How does Fabric work?

    1. Fabric's sidecar proxies run alongside each microservice.

    2. Each proxy manages scaling, access control, and intercommunication.

    3. The proxy layer orchestrates communications between microservices operating in the mesh to provide reliability, visibility, and security.

    Secure network fabrics provide bridge points, observability, routing, policy assertion, and more between on-premise, multi-cloud, and multi-PaaS capabilities. Fabric offers workload distribution and management within a hybrid environment.

    hashtag
    Support for Multiple Runtime Environments

    Grey Matter supports multiple runtime environments with multi-mesh bridges as shown below. These environments include:

    • Multiple cloud providers (e.g., AWS and Azure)

    • Container management solutions (e.g., Kubernetes, OpenShift, and ECS)

    • On-premise infrastructure

    The Grey Matter Hybrid Platform
    circle-check

    Grey Matter gives you the flexibility to deploy the mesh to suit your environment. Learn more about our deployment options here.

    hashtag
    OSI Model Layers

    Fabric operates at OSI modelarrow-up-right layers 3 (network), 4 (transport), and 7 (application) simultaneously, providing a powerful, performant, and unified platform to run, manage, connect, and distribute workloads across a hybrid architecture.

    Layer 3 operates at the IP level. It is responsible for transferring data packets from one host to another using IP addresses, determining the most suitable route from source to destination. At this level, network segmentation can be enforced using ABAC, RBAC, and NGAC policies set within each sidecar. More details can be found in the Security Modelarrow-up-right section.

    Layer 4 coordinates data transfer between clients and hosts, adding load balancing, rate limiting, discovery, health checks, observability, and more on top of TCP/IP. Layers 3 and 4 alone live within the TCP/IP space and cannot make routing decisions based on different URLs to backend systems or services. This is where layer 7 comes into the architecture.

    Layer 7 sits at the top of the OSI model, interacting directly with the services and applications responsible for presenting data to users. HTTP requests and responses that access services, webpages, images, data, etc. are layer 7 actions.

    circle-check

    Grey Matter Fabric offers a fast, simple, and elegant model to build modern architecture while bridging legacy applications.

    The following graphic shows Fabric's basic capabilities (access, routing decisions, rate limits, health checks, discoverability, observability, proxying, network and micro-segmentation) and how they leverage the features found within each of the OSI layers described above.

    Grey Matter Fabric simultaneously functioning within layers 3, 4, and 7

    hashtag
    Edge

    Grey Matter Edge handles north/south traffic flowing through the mesh. Multiple edge nodes can be configured depending on throughput or regulatory requirements requiring segmented routing or security policy rules.

    • Traffic flow management in and out of the hybrid mesh.

    • Hybrid cloud jump points.

    • Load balancing and protocol control.

    • Edge OAuth security.

    circle-info

    Note: the Grey Matter Edge and Grey Matter Sidecar are the same binary configured differently based on north/south and east/west access patterns.

    hashtag
    Control

    • Automatic discovery throughout your hybrid mesh.

    • Templated static or dynamic sidecar configuration.

    • Telemetry and observable collection and aggregation.

    • Neural net brain.

    • API for advanced control.

    • Simple deployment architecture.

    hashtag
    Security

    Grey Matter Fabric offers the following security features:

    • Verifies that tokens presented by the invoking service are trusted for such operations.

    • Performs operations on behalf of a trusted third party within the Hybrid Mesh.

    hashtag
    Sidecar

    Add Grey Matter to services by deploying a sidecar proxy throughout your environment. This sidecar intercepts all network communication between microservices.

    The Grey Matter Sidecar offers the following capabilities:

    • Multiple protocol support.

    • Observable events for all traffic and content streams.

    • Filter SDK.

    • Certified, Tested, Production-Ready Sidecars.

    • Native support for gRPC, HTTP/1, HTTP/2, and TCP.

    circle-info

    gRPC Protocol Basics

    • gRPC is an RPC protocol implemented on top of HTTP/2

    • HTTP/2 is a Layer 7 (Application layer) protocol that runs on top of a TCP (Layer 4 - Transport layer) protocol

    • TCP runs on top of IP (Layer 3 - Network layer) protocol

    Once you've deployed the Grey Matter Sidecar, you can configure and manage Grey Matter with its control plane functionality.

    hashtag
    Grey Matter Control Plane Functionality

    • Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic

    • Fine-grained control of traffic behavior with rich routing rules, retries, failover, and fault injection

    • A policy layer and configuration API supporting access controls, rate limits and quotas

    • Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress

    • Secure service-to-service communication in a cluster with strong identity-based authentication and authorization

    Example

    The following diagram shows how the Grey Matter Sidecar would operate in a North/South traffic pattern.

    North-South Traffic Pattern.

    hashtag
    Grey Matter Data

    Grey Matter Data is an API that enables secure and flexible access control for your microservices. Data consists of the Grey Matter Data service and a JWT server, and includes an API Explorer to help you manage the API.

    Grey Matter Data's API Explorer simplifies the user experience.

    hashtag
    Grey Matter Sense

    Grey Matter Sense consists of four primary components: Intelligence 360, SLO, Business Impact and Catalog.

    hashtag
    Intelligence 360

    Intelligence 360 is our user dashboard that paints a high-level picture of the service mesh. Intelligence 360 includes the following features:

    • Mesh Overview

      • Running state of all services

      • Search, sort and filter options

    • Historical metrics per service

      • SLA warnings/violations

      • Resource usage

      • Request traffic

    • Real-time metrics per service instance

      • Service instance drill down

      • Metrics explorer

    • Service configuration

      • Business impact

      • SLO

    hashtag
    SLO

    Grey Matter Service Level Objectives (SLOs) allow users to manage objectives toward service-level agreements. These objectives can be internal to business operations or made between a company and its customers. They are generic enough to be valuable in more than one use case.

    circle-check

    Key Definition

    SLOs are simply service performance objectives associated with metrics collected by the Grey Matter Sidecar, such as memory usage and request traffic (request rate, error rate, and latency).

    SLOs combine with Intelligence 360 time-series charts to visualize warning and violation thresholds for targeted performance analysis. These objectives are used even further to train Sense AI for service scaling recommendations.
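The warning/violation thresholds described above amount to classifying each metric sample against two levels. A minimal sketch (illustrative, not the Grey Matter SLO API), using a hypothetical latency objective where higher values are worse:

```python
# Classify a sidecar metric sample against an objective's warning and
# violation thresholds (sketch; thresholds and metric are examples).

def classify(value: float, warning: float, violation: float) -> str:
    """Return 'ok', 'warning', or 'violation' for a metric where
    higher is worse (e.g., latency or error rate)."""
    if value >= violation:
        return "violation"
    if value >= warning:
        return "warning"
    return "ok"

# e.g., a p95 latency objective: warn at 250 ms, violate at 500 ms
print(classify(120, 250, 500))   # ok
print(classify(300, 250, 500))   # warning
print(classify(650, 250, 500))   # violation
```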

    hashtag
    Business Impact

    Business Impact allows users to set metadata on services to capture how critical a service is to the operations of a company, mission, or customer. Business Impact provides a list of values (Critical, High, Medium, Low) that categorizes each service's business impact. Sense lets Intelligence 360 users configure these values themselves, and the values can be used to filter and search the mesh overview.
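Filtering by those impact values reduces to ordering the four levels. A small sketch (service names and data shape are illustrative):

```python
# Filter services by the Business Impact values named above
# (Critical, High, Medium, Low); the data shape is illustrative.

IMPACT_ORDER = {"Low": 0, "Medium": 1, "High": 2, "Critical": 3}

services = [
    {"name": "billing", "impact": "Critical"},
    {"name": "catalog", "impact": "Medium"},
    {"name": "blog", "impact": "Low"},
]

def at_least(services, level):
    """Names of services whose impact is at or above the given level."""
    floor = IMPACT_ORDER[level]
    return [s["name"] for s in services if IMPACT_ORDER[s["impact"]] >= floor]

print(at_least(services, "Medium"))  # ['billing', 'catalog']
```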

    hashtag
    Catalog

    Grey Matter Catalogarrow-up-right acts as an interface between the data plane (network of sidecars) of the service mesh and Intelligence 360. Catalog provides a user-focused representation of the mesh.

    Learn how to use the Catalog here.

    hashtag
    Questions?

    circle-check

    Want to learn more about Grey Matter Sense? Contact us at info@greymatter.ioenvelope to discuss your use case.

    hashtag
    Need Help?

    circle-check

    Create an account at Grey Matter Supportarrow-up-right to reach our team.

    Guides

  • helm v3 installedarrow-up-right

  • envsubst installedarrow-up-right (a dependency of our helm charts)

  • eksctl installedarrow-up-right, or an already running Kubernetes cluster.

    hashtag
    Steps

    hashtag
    1. Install a Kubernetes Cluster

    NOTE: if you already have a Kubernetes cluster up and running, move to step 2. Just verify that you can connect to the cluster with a command like kubectl get nodes.

    For this deployment, we'll use EKSarrow-up-right to automatically provision a Kubernetes cluster. The eksctl tool will use our preconfigured AWS credentials to create master and worker nodes to our specifications, and will leave us with kubectl configured to manipulate the cluster.

    The region, node type/size, etc. can all be tuned to your use case; the values given are simply examples.

    Cluster provisioning usually takes between 10 and 15 minutes. When it is complete, you will see the following output:

    When your cluster is ready, run the following to test that your kubectl configuration is correct:

    hashtag
    2. Clone the Grey Matter Helm Charts Repo

    Though Helm is not the only way to install Grey Matter into Kubernetes, it does make some things very easy and reduces a large number of individual configurations to a few charts. For this step, we'll clone the public git repository that holds Grey Matter and cd into the resulting directory.

    NOTE: this tutorial is using a release candidate, so only a specific branch is being pulled. The entire repository can be cloned if desired.

    hashtag
    3. Setup Credentials

    Before running this step, determine whether or not you wish to install Grey Matter Data. If so, determine whether or not you will use S3 for backing. If you do want to configure Grey Matter Data with S3, follow the set up S3 for Grey Matter Data guide. You will need the AWS credentials from step 4 here.

    To set up credentials, we need to create a credentials.yaml file that holds some secret information like usernames and passwords. The helm-charts repository contains some convenience scripts to make this easier.

    Run:

    and follow the prompts. The email and password you are prompted for should match your credentials to access the Decipher Nexus at https://nexus.greymatter.io/arrow-up-right. If you have decided to install Grey Matter Data persisting to S3, indicate that when prompted, and provide the access credentials, region, and bucket name.

    Note that if your credentials are not valid, you will see the following response:

    hashtag
    4. Configurations

    To see the default configurations, check the global.yaml file from the root directory of your cloned repo. In general for this tutorial, you should use the default options, but there are a couple of things to note.

    • If you would like to install a Grey Matter Data that is external and reachable from the dashboard, set global.data.external.enabled to true.

      • If you are installing data and set up your credentials to persist to s3, set global.data.external.uses3 to true.

    • If you plan to update ingress certificates or modify RBAC configurations in the mesh, set global.rbac.edge to false. This turns off the default RBAC configuration and allows for more granular RBAC rules at the service level.

    • If you would like to install Grey Matter without SPIFFE/SPIRE, set global.spire.enabled to false.

    You can set global.environment to eks instead of kubernetes for reference, but we will also override this value with a flag during the installation steps in step 5.
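The options above map to nested keys in a Helm values file. As an illustrative sketch only (the exact nesting is an assumption; verify against the global.yaml in your cloned repo):

```yaml
# Illustrative only -- check the global.yaml shipped in the helm-charts repo.
global:
  environment: eks        # or "kubernetes" (overridden by a flag in step 5)
  data:
    external:
      enabled: true       # expose Grey Matter Data from the dashboard
      uses3: true         # only if credentials were set up to persist to S3
  rbac:
    edge: false           # disable default edge RBAC for per-service rules
  spire:
    enabled: true         # set false to install without SPIFFE/SPIRE
```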

    hashtag
    5. Install Grey Matter component Charts

    Grey Matter is made up of a handful of components, each handling different pieces of the overall platform. Please follow each installation step in order.

    1. Add the charts to your local Helm repository, install the credentials file, and install the Spire server.

    2. Watch the Spire server pod.

      Watch it until the READY status is 2/2, then proceed to the next step.

    3. Install the Spire agent, and remaining Grey Matter charts.

      If you see a template error or Error: could not find tiller, verify that you are using Helm version 3.2.4 and try again. If you need to manage multiple versions of Helm, we highly recommend using a Helm version manager to easily switch between versions.

      NOTE: Notice that in the edge installation we set --set=edge.ingress.type=LoadBalancer; this value sets the Kubernetes service type for edge. The default is ClusterIP. In this example we want an AWS ELB to be created automatically for edge ingress, so we set it to LoadBalancer. See the Kubernetes documentation for guidance on what this value should be in your specific installation.

      While these are being installed, you can use the kubectl command to check if everything is running. When all pods are Running or Completed, the install is finished and Grey Matter is ready to go.

    hashtag
    6. Accessing the dashboard

    NOTE: for easy setup, access to this deployment was provisioned with quickstart SSL certificates. They can be found in the helm chart repository at ./certs. For access to the dashboard via the public access point, import the ./certs/quickstart.p12 file into your browser of choice - the password is password.

    An Amazon ELBarrow-up-right was created automatically because we specified the flag --set=global.environment=eks during installation. The ELB is accessible through the randomly generated URL attached to the edge service:

    You will need to use this value for EXTERNAL-IP in the next step.

    Visit the URL (e.g. https://a2832d300724811eaac960a7ca83e992-749721369.us-east-1.elb.amazonaws.com:10808/) in your browser to access the Intelligence 360 application.

    Intelligence 360 Application

    hashtag
    7. Configure the CLI

    If you intend to move on to deploying a service into your installation, or to otherwise modify or explore the Grey Matter configurations, you will need to configure the CLI.

    For this installation, the configurations will be as follows. Fill in the value of the edge service's external IP from the previous step for <EDGE-EXTERNAL-IP>, and the path to your helm-charts directory in <path/to/helm-charts>:

    Run these in your terminal, and you should be able to use the CLI (e.g., greymatter list cluster).

    You have now successfully installed Grey Matter!

    hashtag
    Cleanup

    If you're ready to shut down your cluster:

    hashtag
    Delete the Grey Matter Installation

    hashtag
    Delete The EKS Cluster

    NOTE: this deletion actually takes longer than the output would indicate to terminate all resources. Attempting to create a new cluster with the same name will fail for some time until all resources are purged from AWS.

    Understanding the roles that Authentication, Authorization, Claims, and Principals play within your MASA is important (figure 1). Authentication and authorization are both significant in any security model, but they follow different concepts and implementation patterns. Authentication establishes and confirms an identity. Authorization takes action based on the authenticated identity. Principals are asserted claims that provide entitlements, granting access to systems, services, or data based on Role-based Access Control (RBAC), Attribute-based Access Control (ABAC), and Next Generation Access Control (NGAC) controls.

    hashtag
    Authentication

    Grey Matter's authentication scheme establishes identities for every transaction within the platform. There are two types of identities: users and services.

    User Authentication methods:

    • OpenID Connect (OIDC)

    • mTLS x.509 certificates (Distinguished names represent who the user is)

    Service-to-Service Authentication methods:

    • mTLS x.509 certificates (SPIFFE identities are incorporated into the x.509 certificate)

    While distinct, these identities are not mutually exclusive. One of the most common access patterns within Grey Matter is a service making a request to another service on behalf of a user. In this case, there are three identities (two services and a user), each of which must be verified in order for the transaction to succeed. As users or services authenticate with Grey Matter, principals are asserted and flow to upstream services. This ensures that upstream services are aware of the entity (user or service) making a request. Grey Matter supports user authentication and service-to-service authentication methodologies identified below.

    hashtag
    User Authentication with an OIDC Provider

    Grey Matter integrates with existing public OIDC providers (Google, Github, etc.) or private OIDC providers (e.g., Ory Hydra) to support user authentication. OIDC is an authentication protocol built on top of OAuth 2.0 that allows delegation of authentication responsibility to a trusted external identity provider. Many implementations of OIDC providers are available and support on premise, cloud or as a service via a host of underlying technologies (e.g., LDAP). This sequence diagram (figure 3) shows the OIDC flow within Grey Matter.

    1. The client initiates a request to Grey Matter Edge.

    2. Grey Matter Edge responds with a 302 HTTP status code used to perform URL redirection, along with a callback URL.

    3. Based on the redirect URL, the client initiates a request to the specified OIDC provider.

    4. Once the client is authenticated, the OIDC provider responds with a 302 HTTP code (based on the callback URL) and provides an OIDC code.

    5. The client is redirected back to Grey Matter Edge sending the OIDC provided code.

    6. The Edge sends the OIDC code to the OIDC provider, which validates and verifies the code.

    7. Once the code is validated, the OIDC provider sends back the id_token. The id_token contains claims associated with the user, issuer, and audience.

    8. The Edge inspects the id_token, extracts the subject claim and expiration, prepends a user_dn header containing the subject claim to the request, and forwards the request to the upstream sidecar and service.

    9. The upstream sidecar and service respond to the request.

    10. The edge prepends a signed cookie containing the user_dn and expiration to the response received from the upstream sidecar and service and forwards the response to the client.

    11. The client makes additional requests using the signed cookie, which allows the edge to extract the user_dn directly until the cookie expires, at which point the client must re-authenticate.
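The tail of the flow (steps 8 through 11) can be condensed into a sketch. Everything here is illustrative, not Grey Matter's implementation: the signing key, cookie format, and helper names are hypothetical, and in the real flow the id_token's signature is verified against the OIDC provider before any claims are trusted.

```python
# Condensed sketch of steps 8-11: forward the subject claim as a
# user_dn header, and hand the client a signed, expiring cookie so
# later requests can skip the identity provider.
import hashlib
import hmac
import time

SECRET = b"edge-signing-key"  # hypothetical edge signing secret

def forward_headers(id_token_claims: dict) -> dict:
    # step 8: prepend the subject claim as user_dn for upstream services
    return {"user_dn": id_token_claims["sub"]}

def make_cookie(user_dn: str, ttl: int = 3600) -> str:
    # step 10: sign user_dn plus an expiry timestamp
    exp = int(time.time()) + ttl
    payload = f"{user_dn}|{exp}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_cookie(cookie: str):
    # step 11: verify signature and expiry; None means re-authenticate
    user_dn, exp, sig = cookie.rsplit("|", 2)
    payload = f"{user_dn}|{exp}"
    good = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, good) and int(exp) > time.time():
        return user_dn
    return None

claims = {"sub": "cn=user,dc=example,dc=com", "iss": "https://idp", "aud": "edge"}
cookie = make_cookie(claims["sub"])
assert verify_cookie(cookie) == "cn=user,dc=example,dc=com"
```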

    hashtag
    x.509 Certificates

    Grey Matter supports x.509 for both users and for service to service transactions.

    1. A client (user or service) initiates a request to a server.

    2. The server responds with its server certificate.

      2.1. The client verifies that the server’s certificate is valid based on its certificate information.

    3. The server requests the client's certificate.

    4. The client sends its certificate and key information to the server.

      4.1 The server verifies that the client's certificate is valid based on its certificate information.

      4.2. The server is able to decrypt the information sent to it based on the established trust.

    5. The client acknowledges that the handshake is complete.

    6. The server acknowledges that the handshake is complete.

    7. At this stage, the client and server certificates are validated and authenticated. All traffic is now passed through an encrypted communication channel.
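The mutual verification requirements in the handshake above map onto a minimal sketch using Python's standard ssl module. The certificate paths in the comments are hypothetical; the point is that both sides require and verify the peer's certificate.

```python
# Minimal mutual-TLS configuration sketch (certificate paths are
# hypothetical and left commented out so the sketch runs standalone).
import ssl

# Server side: demand a client certificate signed by a trusted CA
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED          # steps 3-4: require/verify client cert
# server_ctx.load_cert_chain("server.crt", "server.key")  # step 2: server certificate
# server_ctx.load_verify_locations("ca.crt")              # trust anchor for clients

# Client side: verify the server and present our own certificate
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
client_ctx.check_hostname = True                    # step 2.1: validate server identity
client_ctx.verify_mode = ssl.CERT_REQUIRED
# client_ctx.load_cert_chain("client.crt", "client.key")  # step 4: client certificate
# client_ctx.load_verify_locations("ca.crt")

print(server_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```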

    hashtag
    User Authentication with x.509 Certs

    Enterprise IT organizations that have existing public key infrastructure (PKI) in place for user authentication can pass user certificates with requests made to the Grey Matter Edge.

    hashtag
    Service Authentication with x.509 Certs

    Service authentication (service-to-service communication) is based solely upon x.509 certificates and mTLS. Grey Matter Fabric is installed with a certificate authority that issues and reissues short-lived x.509 certificates to each sidecar proxy for intermesh communication. Each certificate contains a SPIFFE identity that uniquely identifies the sidecar to which it is issued. No sidecar will accept a connection from any service that does not present a certificate issued by the certificate authority. Like user authentication, these service identities enable authorization.

    Note: In cases where requests already contain a signed cookie the edge simply verifies the signature and expiry. If valid, the edge forwards the request. If not valid, the request is treated as unauthenticated.

    hashtag
    Authorization

    Authorization is the process by which identities (users or services) are granted permission to access resources within the mesh. For example, we may wish to restrict access to a specific resource to a limited set of users, services or data. As an added complication, it is often more desirable to grant or deny access for a resource to entire classes of identities (i.e., administrative users or trusted services). Grey Matter uses the authenticated identities and their attributes to support fine-grained access controls using the following methods:

    • Authorization Filters

    • Data Authorization via the Grey Matter Data Platform Service

    It's important to note that sidecar-to-sidecar (service-to-service) authorization follows patterns similar to user authorization, except that sidecar identities typically do not include additional attributes; however, nothing precludes adding attributes to a sidecar identity.

    hashtag
    Authorization filters

    Once an authorization pattern is chosen, access control becomes a deployment concern rather than a development concern, allowing microservice developers to focus on business value, since their services will never receive an unauthorized request. The authenticated identity and its attributes remain available to the service should they be required.

    The Grey Matter Sidecar uses authorization filters to manage who is allowed to access which resources and how. Since all requests to the mesh are authenticated, filters can be dynamically configured at runtime with no additional requirements. Attribute based authorization is also implemented via Grey Matter Sidecar filters but requires that requests contain a signed JSON Web Token (JWT) containing the identity claims. The creation and population of these tokens is left to the enterprise.

    hashtag
    List Authorization Filter

    The Grey Matter Sidecar supports list-based authorization decisions within the ListAuth filter. This filter allows whitelisting and blacklisting of individual identities based upon the identity's distinguished name (e.g., “cn=user, dc=example, dc=com” or “cn=web server, dc=example, dc=com”) or relative distinguished name (e.g., “dc=example, dc=com”). This filter applies to all requests for the proxied service or services.
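List-based authorization of this kind reduces to matching a caller's distinguished name, or a relative-DN suffix, against allow and deny lists. A sketch (illustrative; not the actual ListAuth configuration format):

```python
# List-based authorization sketch: deny-list wins, then the caller
# must match the allow-list by exact DN or relative-DN suffix.

def dn_matches(dn: str, entry: str) -> bool:
    # exact DN match, or relative-DN match on the trailing components
    return dn == entry or dn.endswith(", " + entry)

def authorize(dn: str, whitelist: list, blacklist: list) -> bool:
    if any(dn_matches(dn, e) for e in blacklist):
        return False
    return any(dn_matches(dn, e) for e in whitelist)

wl = ["dc=example, dc=com"]                  # allow the whole org (relative DN)
bl = ["cn=web server, dc=example, dc=com"]   # but deny one identity

print(authorize("cn=user, dc=example, dc=com", wl, bl))        # True
print(authorize("cn=web server, dc=example, dc=com", wl, bl))  # False
```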

    hashtag
    Role-Based Access Control Filter

    The Grey Matter Sidecar supports fine-grained authorization decisions to authorize actions by identified clients using Role-Based Access Control (RBAC). This filter allows complex whitelisting and blacklisting of individual identities based upon the identity's distinguished name. Regular-expression matching is supported for additional flexibility. Further, whereas the ListAuth filter applies to all requests, the Role-Based Access Control filter can be defined for any combination of service, route, or verb. This is useful to explicitly manage callers to a service running within the Grey Matter mesh platform and to protect the mesh from unexpected or forbidden agents.

    Supported HTTP verbs include:

    • GET The GET method requests a representation of the specified resource. Requests using GET should only retrieve data.*

    • POST The POST method is used to submit an entity to the specified resource, often causing a change in state or side effects on the server.*

    • PUT The PUT method replaces all current representations of the target resource with the request payload.*

    • PATCH The PATCH method is used to apply partial modifications to a resource.*

    • DELETE The DELETE method deletes the specified resource.*

    Definitions as described by the Mozilla Developer Network (MDN). https://developer.mozilla.org/en-US/docs/Web/HTTP/Methodsarrow-up-right
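The service/route/verb combinations described above can be sketched as rules pairing a DN regular expression with a route prefix and a verb set. This is illustrative only, not Grey Matter's filter configuration syntax:

```python
# RBAC-style rule matching sketch: a request is allowed if any rule's
# DN pattern, route prefix, and verb set all match.
import re

RULES = [
    # (dn regex,            route prefix, allowed verbs)
    (r"^cn=admin,",         "/",          {"GET", "POST", "PUT", "PATCH", "DELETE"}),
    (r"dc=example,dc=com$", "/catalog",   {"GET"}),
]

def allowed(dn: str, route: str, verb: str) -> bool:
    return any(
        re.search(pattern, dn) and route.startswith(prefix) and verb in verbs
        for pattern, prefix, verbs in RULES
    )

print(allowed("cn=user,dc=example,dc=com", "/catalog/items", "GET"))   # True
print(allowed("cn=user,dc=example,dc=com", "/catalog/items", "POST"))  # False
```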

    In situations where the identity alone is not sufficient to make all authorization decisions, the Grey Matter Sidecar can enforce finer-grained control based upon identity attributes, provided the request contains a signed JWT.

    Using the RBAC filter, rules can be created to authorize specific claims found within a JWT to perform specific actions. This requires an external service to generate a signed JWT for each request. Since the JWT is included as a header, if a JWT is passed, it will propagate to all sidecars in the request chain. With that said, if the request is completed—meaning the destination service has received it, processed it, and invokes another service in the Mesh—this is a new request and the calling service would be required to pass the JWT for further authorization purposes.
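Claim-based decisions of this kind read claims out of the JWT payload after the signature has been verified. A sketch (the `roles` claim and helper names are illustrative; signature verification is elided here, but in the mesh it happens before any claim is trusted):

```python
# Extract claims from an (already signature-verified) JWT and
# authorize on a specific claim. A JWT is header.payload.signature,
# with the claims base64url-encoded in the payload segment.
import base64
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def claims_of(jwt: str) -> dict:
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def can_write(jwt: str) -> bool:
    # authorize a hypothetical "roles" claim containing "writer"
    return "writer" in claims_of(jwt).get("roles", [])

# build a toy token (unsigned here; real tokens carry a verified signature)
header = b64url(b'{"alg":"none"}')
payload = b64url(json.dumps({"sub": "cn=user", "roles": ["writer"]}).encode())
token = f"{header}.{payload}.sig"
print(can_write(token))  # True
```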

    hashtag
    Custom Access Control Filter

    The Grey Matter sidecar offers a custom filter interface, so customers have the ability to create business-specific logic around their security and regulation concerns if required. This makes the mesh fully adaptable to an enterprise’s needs, and provides a way to take advantage of existing IT investments.

    hashtag
    Data Authorization

    One of the unique facets of Grey Matter is that data security and sharing are addressed by Grey Matter Data. As enterprises shift from monoliths to microservices, data tends to be duplicated across the architecture. Grey Matter Data provides a service to address the secure sharing of this data without marshalling it into and out of processes. This feature is described in greater detail in the Data Segmentation portion of this document, but the pattern employed by Grey Matter Data can be used by any service to enforce complex security policies for any resource via the Grey Matter JWT Security Service or a customer JWT service adhering to the Grey Matter Data interface.

    hashtag
    Traffic Patterns

    One key feature of the Grey Matter hybrid mesh is its ability to secure, manage, and govern the traffic patterns of running services.

    hashtag
    East/West Traffic

    East/West traffic within the Fabric Mesh should be done via mTLS. Grey Matter enables this in two ways: direct integration with existing CAs, and automatic setup via SPIFFE/SPIRE. For integration with existing CAs, each sidecar in the mesh is configured to use the provided x.509 certificates. In the automatic setup, each sidecar uses a unique SPIFFE ID to authenticate with SPIRE servers. Unique short-lived x.509 certificates are then automatically created and rotated for each connection between sidecars.

    Sidecar A and sidecar B have been granted mTLS certificates, to establish an encrypted communication channel between each other. However, the mesh was not configured to allow Sidecar A to communicate with Sidecar B. In this scenario, even though Sidecar A and Sidecar B were granted mTLS certificates through a common certificate authority (CA), access is still denied.
    The mesh is configured to allow Sidecar A to communicate with Sidecar B. However, Sidecar B is not allowed to communicate with Sidecar A. If Sidecar B tries to send a request to Sidecar A, access is denied.

    hashtag
    North/South Traffic

    North/South traffic patterns use the Grey Matter Edge to establish principals and pass them to called services. The Edge supports both OIDC and mTLS x.509 certificate authentication modes, however, the Fabric Mesh is not limited to a single Edge. Multiple nodes can be configured to expose both authentication modes wherever an access point is needed. Note that the Edge node does not have to be exposed to a publicly addressable URL. In many cases, an API mediation layer may be put in front of the Edge node. In all cases, the Edge node is responsible for verifying and ensuring that the proper principals are available for downstream services within the Fabric Mesh to consume.

    Principals such as user identities are moved by the Edge node into a user_dn header which flows through the entire service-to-service request chain. Each following link in the request chain is performed via mTLS, with each unique service using automatically rotating x.509 certificates established via SPIFFE Identities and the SPIRE framework.

    In some cases traffic needs to flow outside of the mesh. Common scenarios include mesh-to-mesh communications, proxying to serverless functions, and supporting legacy systems that can’t be moved directly into the mesh. In all these cases, proxies are set up within the mesh with the sole purpose of communicating outside. Principals are established at the Edge. Inter-mesh communication is still handled by mTLS, and requests are authenticated by the outside system via whatever method it accepts: RPC, HTTP, mTLS, or OIDC.

    hashtag
    Traffic Splitting

    Traffic splitting is another important pattern for keeping environments stable. Traffic splitting allows a configurable percentage of requests to a service to be siphoned off to another destination. This lets services, apps, or entire meshes experience small amounts of live traffic while keeping most users on the original source. The percentage of users on the original service is then decreased until the service is fully migrated.
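The mechanics of a percentage split can be sketched as weighted selection. In Grey Matter the split is mesh configuration, not application code; the upstream names and 90/10 weights here are illustrative:

```python
# Weight-based traffic splitting sketch: each request is routed to an
# upstream in proportion to its configured share of traffic.
import random

def pick(weights: dict, rng=random.random) -> str:
    """Choose an upstream according to its share of traffic."""
    r, total = rng(), 0.0
    for upstream, share in weights.items():
        total += share
        if r < total:
            return upstream
    return upstream  # guard against floating-point rounding

random.seed(7)
weights = {"service-v1": 0.9, "service-v2": 0.1}  # 90/10 canary split
counts = {"service-v1": 0, "service-v2": 0}
for _ in range(10_000):
    counts[pick(weights)] += 1
print(counts)  # roughly 9000 / 1000
```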

    hashtag
    Circuit Breaking

    Circuit Breaking is a way for each sidecar to protect the thing it is proxying to, but it is not a way to have that proxy harden itself. Grey Matter provides circuit breakers at every point in the mesh.

The most common place for this to occur is at the edge, where a DDoS could overwhelm the edge nodes themselves. To solve this, we employ rate limiting, which protects an edge node from accepting too many requests, opening too many file handles, and crashing. With proper configuration, each sidecar ceases queueing new requests before it is overwhelmed, allowing the service time to heal. This ensures capabilities can withstand malicious attacks and accidental recursive network calls without going down.
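Circuit-breaking thresholds are typically set on a cluster object. As a sketch (the field names below mirror Envoy's circuit breaker settings and are illustrative assumptions, since the exact schema is not shown in this guide):

```json
{
  "cluster_key": "example-cluster",
  "zone_key": "zone-default-zone",
  "circuit_breakers": {
    "max_connections": 1024,
    "max_pending_requests": 100,
    "max_requests": 1024,
    "max_retries": 3
  }
}
```

Once a threshold such as max_pending_requests is reached, new requests fail fast instead of queueing, giving the protected service time to recover.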

    hashtag
    Network Segmentation

    Enterprises prefer hybrid environments capable of leveraging unified on-premise and cloud resources. Traditional networking patterns use features such as VLANs to create perimeter-based firewalls, but this concept breaks down with modern mesh application service architecture (MASA) patterns. In MASA, services are designed to be ephemeral, dynamically generating different IP:PORT pairs each time a new instance spins up.

    Securing this type of architecture requires network segmentation. Grey Matter isolates services and network fabric communications to specific runtime environments or infrastructure resources. Grey Matter Fabric supports segmentation to a very fine level of granularity. Each service launched onto Fabric comes online with no knowledge of or connections to any other point on the mesh. The desired mesh is then built up through configuration with the required network topology. Segmentation is enforced through routing rules, service discovery, and mTLS. Dynamic configuration can facilitate any permutation of intra-mesh communication required. In addition to segmentation of individual meshes, Grey Matter can also support multi-mesh operations. This allows the bridging of environments already physically or logically isolated from each other.

    hashtag
    Micro-segmentation

    Micro-segmentation is a method of creating secure isolation zones either on-premise or in the cloud in order to separate different workloads. Authentication plays a key role in micro-segmentation. Authentication is responsible for establishing network communications and flow through the mesh. Strong authentication models enable Grey Matter to perform micro-segmentation for users, services, and data throughout the mesh.

User-to-Service segmentation is controlled through user authorization signatures. These can be coupled with claims-based assertions. User identities and claims flow through mTLS-encrypted communication channels established by service-to-service micro-segmentation patterns. Complex security policies within each sidecar allow ABAC/RBAC down to the service, route, and HTTP verb level. This enables a very high degree of isolation. ABAC/RBAC policies cannot be achieved without strong authentication methodologies establishing identities for both users and services.

    Service-to-Service segmentation is controlled through mTLS certificates and SPIFFE identities. These can be coupled with claims-based assertions and ABAC/RBAC policies. Images in the following section illustrate how this is achieved.

    hashtag
    Data Segmentation

    Grey Matter’s data segmentation capability is a key differentiator. Data segmentation is the process of dividing data and grouping it with similar data based on set parameters. Grey Matter Data adds complex policy assertions to stored objects. These object policies govern which users or services may access the objects. Objects stored within Grey Matter Data are encrypted at rest and in transit. A JSON Web Token (JWT) is provided to gain access to an object stored in Data.

    The token’s claims are dynamically mapped to the policies stored with the object. JWTs for both users and services can be created enabling end-to-end security using authentication principals.

The example above shows how data segmentation is achieved through simple policy. However, the Grey Matter Data policy engine is designed to handle complex rules suited to any scenario. The following scenario presents a more complex use case.

    1. Sidecar A saves an object into Data and provides access privileges to Sidecar B’s SPIFFE identity. Sidecar A dynamically discovers Data via the Data Sidecar routing information.

    2. Data Sidecar receives Sidecar A’s request and streams the object (with policy) into the Grey Matter Data node.

    3. Sidecar B (through a means of event-based architecture patterns) is notified that Sidecar A just saved an object of interest into Data. Sidecar B calls into Data (through the Data Sidecar) to retrieve the object. Sidecar B’s SPIFFE identity is passed along with the request.

      3.1. Data Sidecar receives the request from Sidecar B and passes it to Data. Data uses the Sidecar B principal (i.e. SPIFFE identity) to receive Sidecar B’s JWT claims and authorize access to decrypt and retrieve the object.

    4. Sidecar C is an outlier listening for arbitrary events. Based on the event broadcasted, Sidecar C attempts to retrieve the encrypted object stored in Data. Sidecar C is entitled to talk to Data via the Data Sidecar but does not have access to all data stored.

    5. Data Sidecar receives the request from Sidecar C and passes it to Data. Using Sidecar C’s principal (i.e. SPIFFE identity) Data retrieves its corresponding JWT claims and denies access to the object stored.

    Since Grey Matter uses a unified principal model, data segmentation can be achieved for users as well. Grey Matter Data policies can be set to identify different access privileges for services and users on a single stored object, and can be customized around business needs. This paradigm provides a new model that combines network, information assurance, and protection concepts around zero-trust.

    Grey Matter Data supports the ability to host multiple Data nodes available through different routing rules. When coupled with other segmentation features, enterprises are able to further isolate how information is stored, accessed, and controlled based on customer regulations and requirements.

    For example, logs and observable traffic can be isolated based on zones. Data nodes with specific routing rules and policies are set to enforce the topology. Customer application data can be stored and accessed via different Data nodes (on-premise or in the cloud) and tightly controlled at the micro-segmentation layer or via data policies. These types of flows are depicted in the diagram below.

    hashtag
    Conclusion

    Grey Matter’s zero-trust threat model ensures security across every service in the hybrid mesh. Each transaction is authenticated and authorized through a combination of mTLS and SPIFFE authentication and SPIRE authorization providing multiple layers of zero-trust security. Grey Matter also supports fine-grained access control by combining authenticated identities with policy-enforced object authorization and enables East-West and North-South traffic pattern splitting and shadowing for in-depth monitoring and configuration. Finally, Grey Matter uses network and data segmentation to decompose operations to their most basic elements, to mitigate cyber intrusion impacts, and to optimize operations.

    hashtag
    Questions?

    circle-check

Have a question about Grey Matter's security models? Reach out to us at info@greymatter.io to learn more.

    To complete the following guides, you will need:
1. An AWS account with permissions to provision an EKS cluster and the AWS CLI set up.

2. A Grey Matter account with access to the Grey Matter artifacts. If you don't have an account, contact us at Grey Matter Support.

    hashtag
    Steps to Grey Matter

    To get started with a core Grey Matter installation, step through our guides in the following order:

    1. Install the Grey Matter CLI

    2. Install Grey Matter Locally

    3. Deploy a Service in Grey Matter

    hashtag
    Overview

Check out the about Grey Matter section for reference information on Grey Matter.

    hashtag
    Questions?

    circle-check

    Need help?

Create an account at Grey Matter Support to reach our team.

Edge supporting a multi-mesh deployment model.

    Access Sidecar Admin Interface in Grey Matter on Kubernetes

Because the Sidecar admin interface can contain sensitive information and allows users to manipulate the sidecar, access is not routed through the mesh. In a Grey Matter deployment, the interface is locked down and accessible only to admin users who have direct access to the pod.

This guide is a step-by-step walkthrough on how to access a Sidecar's admin interface. This interface is used to both inspect and operate on the running server, performing tasks like inspecting stats or performing a hard shut-down. In this walkthrough, we'll perform the following common tasks:

    • Examine the live stats

    hashtag
    Prerequisites

1. An existing Grey Matter deployment running on Kubernetes

    2. kubectl or oc setup with access to the cluster

Note: the operations on the admin server do not depend on the platform in use. Since this guide is focused on Kubernetes, we'll use kubectl for access to the server. On other platforms, tools like ssh sessions can be used instead.

    hashtag
    Steps

    For all examples shown here, the Sidecar's admin server has been started on port 8001. This is the default for Grey Matter deployments.

    hashtag
    1. Establish a Session

The first step is to establish a session inside the pod we need to examine. For this walkthrough, let's look at the edge service. To do this, we'll need the POD ID for any edge nodes in the deployment:

We can see we have one edge pod with an ID of edge-7d7bf848b9-xjs5l. We'll now use the kubectl tool to start a shell inside that pod. After running the command below, you'll be left with a shell inside the container.
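The two steps above might look like the following (the pod ID comes from your own deployment, and /bin/sh as the shell is an assumption about the image):

```shell
# List pods and find the edge pod's ID
kubectl get pods | grep edge

# Start an interactive shell inside that pod
kubectl exec -it edge-7d7bf848b9-xjs5l -- /bin/sh
```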

Now that we're in the pod, we can run commands against the admin interface. Here we'll use curl to send commands, since it's widely available and installed by default on Grey Matter Docker images.

    hashtag
    2. General Admin Commands

    To see the full list of endpoints available, send a GET request to the /help endpoint. Full descriptions of each can be found in the .
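Assuming the admin server is on its default port of 8001 (as stated above), the request can be sent with curl from inside the pod:

```shell
curl http://localhost:8001/help
```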

In the next two sections, we'll show sample usage of the /stats and /config_dump endpoints.

    hashtag
3. Stats

The /stats endpoint is used to output all of the collected statistics. The first ~30 lines are shown below, but the full output is much larger and depends on the exact configuration of the sidecar being examined. For each new filter, listener, or cluster that is configured, new stats will be output. The stats will also differ depending on the connection type.

    To query just a subset of the stats, use the filter query parameter:

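For example, the filter parameter can narrow the output to matching stat names (the filter values here are illustrative):

```shell
# Only stats whose names match "http"
curl http://localhost:8001/stats?filter=http

# Only stats whose names match "cluster"
curl http://localhost:8001/stats?filter=cluster
```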

    hashtag
4. Configuration Dump

    Next we'll dump out the entire configuration of the Sidecar. The output of this command is routinely many thousands of lines long, but is very useful to inspect the exact state of a sidecar at any given moment. This config is mostly useful for verifying the exact behavior of the sidecar against what was intended.
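Again assuming the default admin port, the dump can be fetched with:

```shell
curl http://localhost:8001/config_dump
```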

    hashtag
    5. Exit

Type exit in the terminal to drop the connection to the pod and return to your local shell.

    Deploy Service to Grey Matter on Kubernetes

This guide is a step-by-step walkthrough on deploying a new service into an existing Grey Matter deployment. This guide is for a SPIFFE/SPIRE enabled deployment. For a walkthrough on deploying a service into a non-SPIFFE/SPIRE Grey Matter setup, follow the Quickstart Launch Service to Kubernetes for a non-SPIFFE/SPIRE Deployment.

If you are looking for a quick service deployment and/or configuration guide, check out the Grey Matter templates.

    hashtag
    Prerequisites

1. An existing Grey Matter deployment running on Kubernetes

    2. kubectl or oc setup with access to the cluster

    3. greymatter cli with access to the deployment

    hashtag
    Overview

    1. Launch pod with the service and sidecar

    2. Create Fabric configuration for the sidecar to talk to the service

    3. Create Fabric configuration for the Edge to talk to the sidecar

    hashtag
    Steps

    hashtag
    1. Launch pod

The service we'll launch is a simple Fibonacci service. It has one route, /fibonacci, that calculates the Fibonacci sequence for any integer supplied.

    Note the spire-specific configurations to the deployment - the volume and volume mount spire-socket and the environment variable SPIRE_PATH. These are the additions that will need to be made to any deployment for a service you wish to add to the mesh with SPIFFE/SPIRE.

    Save the above deployment file as deployment.yaml and launch the pod:
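Assuming kubectl is configured against the cluster, the launch command would be:

```shell
kubectl apply -f deployment.yaml
```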

    hashtag
    2. Local Routing

    The next steps are to create objects in the Fabric API. These objects will create all the configuration for the Sidecar to handle requests on behalf of the deployed service.

This step creates and configures the Grey Matter objects necessary to allow the sidecar container in the deployment to route to the fibonacci container (the service itself). We will refer to this as "local" routing. The next step will configure the Edge proxy to route to the fibonacci sidecar, thus fully wiring the new service into the mesh for routing.

    NOTE: This guide goes over deploying a new service and configuring it for ingress routing. To configure a service for both ingress and egress routing within the mesh, see the .

    For each Grey Matter object created, create the local file and send them to the API with: greymatter create <object> < <file_name.json>.

    hashtag
    Domain

The first object to create is a Grey Matter Domain, the ingress domain for the fibonacci sidecar. This object does virtual host identification, but for this service we'll accept any host ("name": "*") that comes in on port 10080 (the port named proxy, or the value of the Grey Matter Control environment variable GM_CONTROL_KUBERNETES_PORT_NAME, in the sidecar container). Note that force_https is set to true, requiring any incoming connection to be https.

    See the for more information.
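A minimal sketch of what domain.json might contain, based on the values described above (the domain_key and zone_key values are assumptions; the name, port, and force_https values come from this guide):

```json
{
  "domain_key": "fibonacci-domain",
  "zone_key": "zone-default-zone",
  "name": "*",
  "port": 10080,
  "force_https": true
}
```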

    Save this file as domain.json and apply it with
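Following the greymatter create pattern given above:

```shell
greymatter create domain < domain.json
```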

    hashtag
    Listener

The next object is the ingress listener. This is the physical binding of the Sidecar to a host interface and port, and it is linked in the domain_keys field to a specific domain. Together, the listener and domain configurations determine where the sidecar should listen for incoming connections and what kind of connections it should accept.

    The listener object is also the place to configure . See the for more information.

Note the secret field. This field is required for service-to-service communication in a SPIFFE/SPIRE setup. This secret tells the sidecar to fetch its SVID (with ID spiffe://quickstart.greymatter.io/fibonacci) from Envoy and present it to incoming connections. It also sets a match on subject alternative names, specifying that only incoming requests with SAN spiffe://quickstart.greymatter.io/edge will be accepted. See the for specifics. The listener secret configuration will be important for the .

    Save this file as listener.json and apply it with
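Using the same CLI pattern:

```shell
greymatter create listener < listener.json
```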

    hashtag
    Proxy

The proxy object links a sidecar deployment to its Grey Matter objects. The name field must match the label on the deployment (in this case greymatter.io/control) that Grey Matter Control is looking for in its environment variable GM_CONTROL_KUBERNETES_CLUSTER_LABEL. It takes a list of domain_keys and listener_keys to link to the deployment with cluster label matching name.

    See the for more information.

    Save this file as proxy.json and apply it with
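Again following the greymatter create pattern:

```shell
greymatter create proxy < proxy.json
```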

    hashtag
    Local Cluster

    The next object to create is a local cluster. The cluster is in charge of the egress connection from a sidecar to whatever service is located at its configured instances, and can set things like circuit breakers, health checks, and load balancing policies.

    This "local" cluster will tell the sidecar where to find the fibonacci container to send requests. From the deployment above, we configured the fibonacci container at port 8080. Since the sidecar and fibonacci containers are running in the same pod, they can communicate over localhost.

    See the for more information.

    Save this file as fibonacci-local-cluster.json and apply it with
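The apply step, per the CLI pattern above:

```shell
greymatter create cluster < fibonacci-local-cluster.json
```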

    hashtag
    Local Shared Rules

The shared rules object is used to match routes to clusters. Shared rules support some of the same features as routes, like setting retry_policies and appending response data, but they can also perform traffic splitting between clusters for operations like blue/green deployments.

This local shared rules object will be used to link the route created in the next step to the local cluster that we just created.

    See the for more information.

    Save this file as fibonacci-local-rules.json and apply it with
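Apply it with the same pattern:

```shell
greymatter create shared_rules < fibonacci-local-rules.json
```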

    hashtag
    Local Route

Routes match against requests by things like URI path, headers, cookies, or metadata and map to shared_rules. Since this service only needs to forward everything it receives to the local microservice, the setup is fairly simple.

This "local" route will link the domain (fibonacci-domain) to the shared rules (fibonacci-local-rules) that we just created. We know that the fibonacci-local-rules object is used to link routes to the fibonacci-cluster, so with this route object applied, the fibonacci sidecar will be configured to accept requests and route to the fibonacci service.

    See the for more information.

The path indicates that any request coming into the sidecar with path / should be routed to the fibonacci service. We will see in the next step, when configuring the Edge routing, that all requests from the Edge proxy to the fibonacci service will come in at this path.

    Save this file as fibonacci-local-route.json and apply it with
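Apply it following the same pattern:

```shell
greymatter create route < fibonacci-local-route.json
```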

    The sidecar will now be configured to properly accept requests and route to the fibonacci service. The next step will configure the Edge proxy to route to the sidecar.

    hashtag
    3. Edge Routing

    Now that the Sidecar-to-Service routing has been configured, we will set up the Edge-to-Sidecar routing because we want this service to be available to external users.

    The process will take similar steps to what was done before, but we only need to create a cluster, a shared_rules object pointing at that cluster, and two routes.

    hashtag
    Edge to Fibonacci Cluster

This will handle traffic from the Edge to the Fibonacci Sidecar. The Edge has an existing domain (with domain key edge), listener, and proxy, much like the ones we just created for the fibonacci service. The first step in configuring Edge-to-fibonacci routing is to create a cluster that tells the Edge where to find the fibonacci sidecar.

NOTE that there are several differences between this cluster and the local cluster created above:

1. The instances field is left empty, whereas the fibonacci-local-cluster instances were configured. This is because Grey Matter Control will discover the fibonacci deployment and the instances array will be automatically populated from this service discovery: the instances will go up and down whenever the service scales or changes. To do this (in the same way as described when creating the proxy object above), the name field must match the cluster label on the deployment.

2. This cluster has a secret set on it.

    Save this file as edge-to-fibonacci-cluster.json and apply it with
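Apply with the greymatter create pattern:

```shell
greymatter create cluster < edge-to-fibonacci-cluster.json
```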

    hashtag
    Edge to Fibonacci Shared Rules

The edge to fibonacci shared rules object will be used to link the edge routes to the edge-to-fibonacci-cluster.

    Save this file as edge-to-fibonacci-rules.json and apply it with
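Apply it with:

```shell
greymatter create shared_rules < edge-to-fibonacci-rules.json
```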

    hashtag
    Edge to Fibonacci Routes

There are two routes that need to be created to send requests through the Edge proxy to the fibonacci service. In the same way that the local route was connected to the fibonacci-domain, these routes will be connected to the edge domain, and will configure how the edge sidecar routes requests.

    They are nearly identical, but one provides for the case that the path on the request to send to the fibonacci service ends in a slash, and one for the case that it doesn't.

    This route looks for the path on a request into the edge proxy /services/fibonacci with no trailing slash, replaces that path with /, and sends the request to the cluster (edge-to-fibonacci-cluster) being pointed to by the edge-to-fibonacci-rules.

    This route looks for the path on a request into the edge proxy /services/fibonacci/ this time with a trailing slash, replaces that path with /, and sends the request to the same cluster.

    Save these files as edge-to-fibonacci-route.json and edge-to-fibonacci-route-slash.json and apply them with
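Both route files are applied with the same pattern:

```shell
greymatter create route < edge-to-fibonacci-route.json
greymatter create route < edge-to-fibonacci-route-slash.json
```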

Once these routes are applied, the service is fully configured in the mesh! You should be able to access the service at https://{your-gm-ingress-url}:{your-gm-ingress-port}/services/fibonacci/ with response Alive. To send a request for a specific fibonacci number, use https://{your-gm-ingress-url}:{your-gm-ingress-port}/services/fibonacci/fibonacci/<number>.

    If you don't know your gm ingress url and you followed the guide, run
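Assuming the ingress is exposed as a LoadBalancer service named edge (the service name is an assumption about the quickstart setup):

```shell
kubectl get svc edge
```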

    and copy the EXTERNAL-IP and port (by default the port will be 10808).

    hashtag
    4. Catalog

    The last step in deploying a service is to add the expected service entry to the Grey Matter Catalog service. This will interface with the control plane, and provide information to the Intelligence 360 Application for display.

    Save this file as fibonacci-catalog.json and using the quickstart certificates from the helm chart repository at ./certs, make the following post request to the catalog service:

    You'll see the following response if the addition was successful.

    And the service will display in the Intelligence 360 Application.

    Deploy Service for Ingress/Egress Actions to Grey Matter on Kubernetes

    This is an example of deploying a new service into the mesh and configuring it for ingress and egress actions using SPIFFE/SPIRE identities.

    hashtag
    Prerequisites

    1. An existing Grey Matter deployment following the Grey Matter Quickstart Install with Helm/Kubernetes guide.

      1. This guide assumes you've followed the step to configure a load balancer for ingress access to the mesh. If so, you should have a URL to access the Grey Matter Intelligence 360 application (e.g. a2832d300724811eaac960a7ca83e992-749721369.us-east-1.elb.amazonaws.com:10808).

      2. Make note of your load balancer ingress URL and use it wherever this guide references {your-gm-ingress-url}.

2. greymatter cli setup with a running Fabric mesh.

    hashtag
    Overview

    1. Generate k8s deployment with the service and sidecar

    2. Configure the sidecar for ingress actions

    3. Configure the sidecar for egress actions

    hashtag
    Steps

    hashtag
    1. Create Deployment

    The service we will be using for this walkthrough is a simple egress/ingress service with two endpoints. The /ingress endpoint will return Ingress request to simple-service successful!. The /egress endpoint takes an environment variable EGRESS_ROUTE, generates a request to that route, and returns its response.

    There are a few things to note before generating the deployment file:

    1. The value of the pod label that Grey Matter Control is discovering, environment variable GM_CONTROL_KUBERNETES_CLUSTER_LABEL

    2. The value of the kubernetes port name that Grey Matter Control is discovering, environment variable GM_CONTROL_KUBERNETES_PORT_NAME

    3. The trust_domain of the SPIRE server

    Using the Grey Matter Helm Charts, the values of the first and second points are greymatter.io/control and proxy by default. The deployment will then need that pod label greymatter.io/control with a value identifying the service to Grey Matter Control, and its sidecar container will need a port labeled proxy.

    The SPIRE trust domain, by default, is quickstart.greymatter.io. This will be important for . The SPIRE registrar service, by default, is configured to register entries with the SPIRE server for every pod that is created with the same label that Grey Matter Control is looking for, greymatter.io/control. It will generate SPIFFE identities for these pods in the form spiffe://<trust_domain>/<greymatter.io/control-value>. For example, for a pod in the default setup with label greymatter.io/control: control-api, the registrar will generate an entry with SPIFFE ID spiffe://quickstart.greymatter.io/control-api.

    Note: If the registrar service in your setup is configured without the pod_label specification, it will generate entries with the SPIRE server in a different form, which will determine both the way that the mesh is configured and the deployment. The entries in this setup will be of the form spiffe://<trust_domain>/ns/<namespace>/sa/<service_account>, and thus the secrets in the mesh configurations will need to reflect this format instead.

    Based on the defaults described, the deployment for the simple-service will look like the following:

Note: we set the environment variable EGRESS_ROUTE for the service container to http://localhost:10909/catalog/summary. This will be important when configuring the deployment for egress actions.

    Save the deployment as deployment.yaml and apply the deployment:
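Assuming kubectl:

```shell
kubectl apply -f deployment.yaml
```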

Run kubectl get pods -l=greymatter.io/control=simple-service and make sure that the containers are running 2/2 before moving on to the mesh configuration.

    hashtag
    2. Configure the deployment for ingress actions

    Now, we will generate the mesh configurations in order to route from edge to the sidecar of the deployment and from the sidecar to the simple-service itself. At the end of this step, the /ingress endpoint of the simple service should be accessible through the edge and properly return Ingress request to simple-service successful!.

    hashtag
    Domain

    The first object to generate is the ingress domain of the service.

    Save this file as ingress-domain.json and apply the domain object using the greymatter cli:
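Assuming the same greymatter create <object> pattern used in the deploy guide above:

```shell
greymatter create domain < ingress-domain.json
```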

    hashtag
    Listener

    Next, create the corresponding ingress listener of the service.

The combination of the ingress domain and listener opens 0.0.0.0:10808 to accept connections. The secret set on the listener specifies the SPIFFE identity for the simple-service pod, spiffe://quickstart.greymatter.io/simple-service. The sidecar will fetch the certificate for this identity via Envoy SDS. The subject_names field of the secret specifies that only incoming connections with certificate SAN equal to spiffe://quickstart.greymatter.io/edge will be accepted by the listener.

    Save this file as ingress-listener.json and apply it:
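Apply with the CLI:

```shell
greymatter create listener < ingress-listener.json
```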

    hashtag
    Proxy

    The proxy object will indicate that the above domain and listener are meant to configure this specific sidecar. In order to link these objects to the deployment, the name field of this proxy object must equal the value of the deployment label greymatter.io/control, which in this case is simple-service.

Note: the domain_keys and listener_keys fields take a list of strings. When we add a second domain and listener for egress, we will add their keys to this proxy.

    Save this file as proxy.json and apply it:
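Following the same pattern:

```shell
greymatter create proxy < proxy.json
```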

    hashtag
    Clusters

There are two clusters necessary for ingress connections. The first is the edge-to-simple-service-cluster. This cluster will be used to locate and route to the simple-service sidecar from edge. In contrast to the listener secret above, the secret on the edge-to-simple-service-cluster specifies that, when connecting to the sidecar, the SPIFFE certificate with identity spiffe://quickstart.greymatter.io/edge is presented, and connections are made only to certificates with SAN equal to spiffe://quickstart.greymatter.io/simple-service.

    The instances field is left empty because Grey Matter Control will discover the sidecar with label greymatter.io/control: simple-service, and add the instances to this cluster because the name field matches. Thus, like the proxy object, it is important to note that the name field must equal the greymatter.io/control label.

The second cluster connects the sidecar to the simple-service, which is running on port 8080 as specified in the deployment. Because the two containers share a pod, this connection is plaintext over localhost and needs no secret. Because we know where the service is running relative to the sidecar, localhost:8080, we can hardcode the instance.

    Save these files as edge-to-simple-service-cluster.json and simple-service-cluster.json, respectively, and apply them:
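Both clusters are applied with the same pattern:

```shell
greymatter create cluster < edge-to-simple-service-cluster.json
greymatter create cluster < simple-service-cluster.json
```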

    hashtag
    Shared Rules

    The two clusters created above, edge-to-simple-service-cluster and simple-service-cluster need two shared_rules objects to allow routes to connect to them.

    When specifying routes from edge to the simple-service sidecar, we'll use edge-to-simple-service-rules, and when specifying routes from the simple-service sidecar to the service, we'll use simple-service-rules.

    Save these files as edge-to-simple-service-rules.json and simple-service-rules.json, respectively, and apply them:
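Apply both shared rules objects:

```shell
greymatter create shared_rules < edge-to-simple-service-rules.json
greymatter create shared_rules < simple-service-rules.json
```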

    hashtag
    Routes

    The last step in the ingress configuration is routes. There will be three routes for ingress to the service.

    The first two specify when and how edge should connect to the simple-service sidecar. These two routes will be configured on the edge sidecar, and when a request comes in with path prefix equal to /services/simple-service/ or /services/simple-service, it will strip off that value from the path and forward the request to the cluster specified in the shared_rules object that it links to.

    The last route configures the simple-service sidecar to route any incoming requests with path prefix "/" to the simple-service.

    Save these files as edge-route.json, edge-route-slash.json, and service-route.json and apply them:
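Apply all three routes:

```shell
greymatter create route < edge-route.json
greymatter create route < edge-route-slash.json
greymatter create route < service-route.json
```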

    Once all of the configurations are applied, you should be able to access the simple-service at https://{your-gm-ingress-url}/services/simple-service/ and the ingress route at https://{your-gm-ingress-url}/services/simple-service/ingress. Note that if you try to hit https://{your-gm-ingress-url}/services/simple-service/egress, the request will fail with upstream connect error or disconnect/reset before headers. reset reason: connection termination. This is because we have not yet configured the service for egress actions.

    hashtag
    3. Configure the deployment for egress actions

    For any service being deployed in place of simple-service that generates requests to other services in the mesh, there is another set of configurations necessary.

    For this walkthrough, we have configured the simple-service with environment variable EGRESS_ROUTE=http://localhost:10909/catalog/summary. The example that this will show is the simple-service /egress endpoint generating a request to the Grey Matter Catalog service, which is also running inside the mesh, and returning the json response from a GET request of its /summary endpoint.

    hashtag
    SPIFFE/SPIRE and egress explanation

    In order to allow the simple-service to make requests inside the mesh, we need to open up a second domain/listener combo on the sidecar, this time on a different port than the ingress at 10808. We will use 10909. In a non-SPIFFE/SPIRE setup, this second domain/listener is not necessary.

    This is necessary in a SPIFFE/SPIRE setup because all of the sidecars use SPIFFE certificates, fetched by the sidecar via Envoy SDS rather than through a physical mount into the service or sidecar containers. Thus, the containers themselves cannot have, and do not need, certificates mounted into them. Because of this, any request generated from within a service (service-A) can only be http. However, all incoming requests to a service (say service-B) within the mesh must go through its sidecar, and the sidecar only accepts https requests with specific SPIFFE certificates.

    The services service-A and service-B can talk plaintext to their own sidecars. All sidecars have access to their SPIFFE identities via Envoy SDS, so two sidecars can communicate with each other over https. Thus, the flow of a request generated by service-A to service-B looks like the following:

    This explains the need for the second, "egress" domain/listener combo on port 10909, and how the EGRESS_ROUTE is formed. With the sidecar for service-A (in this example the simple-service sidecar) listening for plaintext connections on localhost at 10909, the simple-service itself can direct requests to its own sidecar at http://localhost:10909. The sidecar can then use path matching on a configured route to forward specific requests to an egress cluster. As described below, this egress cluster will be configured with the correct SPIFFE identity for service-A, and the request will be sent to the ingress domain/listener for service-B.

    For this example, where simple-service wants to make a request to the Grey Matter Catalog service for its /summary endpoint, the flow will look like:
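    As a rough sketch of the hops described above, using the ports configured in this walkthrough:

    ```
    simple-service --(plaintext HTTP: GET http://localhost:10909/catalog/summary)--> simple-service sidecar (egress listener :10909)
    simple-service sidecar --(mTLS, SPIFFE identities via Envoy SDS)--> catalog sidecar (ingress listener :10808)
    catalog sidecar --(plaintext HTTP: GET /summary)--> catalog service
    ```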

    hashtag
    Egress Domain

    The egress domain object will have force_https set to false, as we want the sidecar to accept plaintext connections on 10909. It will add the custom header x-forwarded-proto: https to requests coming from the domain.

    Save this file as egress-domain.json and apply it:
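    A hedged sketch of what egress-domain.json might contain, based only on the properties described above (the zone_key value and exact field names are assumptions; verify against your gm-control-api version):

    ```json
    {
      "domain_key": "simple-service-domain-egress",
      "zone_key": "default-zone",
      "name": "*",
      "port": 10909,
      "force_https": false,
      "custom_headers": [
        { "key": "x-forwarded-proto", "value": "https" }
      ]
    }
    ```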

    hashtag
    Egress Listener

    Save this file as egress-listener.json and apply it:

    hashtag
    Egress Cluster

    The egress cluster must have name equal to the greymatter.io/control pod label of the service it is trying to reach in order for Grey Matter Control to give it the instances for that service. In our case, this is the Grey Matter Catalog service, which has greymatter.io/control label "catalog".

    The secret on this cluster will use the SPIFFE certificate with id spiffe://quickstart.greymatter.io/simple-service and connect only over connections with SAN spiffe://quickstart.greymatter.io/catalog.

    Save this file as simple-service-to-catalog-cluster.json and apply it:
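    A sketch of the shape such a cluster object might take, assuming gm-control-api conventions (the secret field names and require_tls are illustrative, not authoritative):

    ```json
    {
      "cluster_key": "simple-service-to-catalog-cluster",
      "zone_key": "default-zone",
      "name": "catalog",
      "require_tls": true,
      "secret": {
        "secret_key": "secret-simple-service-to-catalog",
        "secret_name": "spiffe://quickstart.greymatter.io/simple-service",
        "subject_names": ["spiffe://quickstart.greymatter.io/catalog"]
      }
    }
    ```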

    hashtag
    Egress Shared Rules

    To configure the egress route to send requests to the simple-service-to-catalog-cluster, we'll generate a shared_rules object.

    Save this file as simple-service-to-catalog-rules.json and apply it:

    hashtag
    Egress Route

    For this example, the egress route we want is to the Grey Matter Catalog service. The way we have chosen to specify this is using path prefix /catalog on the request. This route will take requests into the simple-service-domain-egress with path prefix match /catalog, strip the prefix, and forward the request through the cluster specified in the shared_rules object simple-service-to-catalog-rules.

    Save this file as simple-service-to-catalog-route.json and apply it:

    Now there are several updates to existing Grey Matter objects that need to be made. First, we know that the simple-service-to-catalog-cluster will send requests to the Catalog service using SPIFFE certificate with id (or SAN) spiffe://quickstart.greymatter.io/simple-service. The Catalog service ingress listener secret will need to have this id added to its subject_names in order to accept this request.

    Locate the secret field. The subject_names should currently contain only the edge identity, spiffe://quickstart.greymatter.io/edge. Change the subject_names field to also allow our simple-service identity:
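    After the change, the listener's secret should allow both identities, along the lines of:

    ```json
    "subject_names": [
      "spiffe://quickstart.greymatter.io/edge",
      "spiffe://quickstart.greymatter.io/simple-service"
    ]
    ```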

    Lastly, we need to tell the proxy object for the simple-service to add the egress domain and listener to the service configuration.

    and add simple-service-domain-egress to the domain_keys and simple-service-listener-egress to the listener_keys. It should look like:
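    Assuming the ingress objects are keyed simple-service-domain and simple-service-listener (hypothetical names used for illustration), the relevant proxy fields would end up as:

    ```json
    "domain_keys": ["simple-service-domain", "simple-service-domain-egress"],
    "listener_keys": ["simple-service-listener", "simple-service-listener-egress"]
    ```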

    Now, the simple-service should be configured to successfully make requests to the Grey Matter Catalog service. Go to https://{your-gm-ingress-url}/services/simple-service/egress to test this out. The egress request should return a string of json, and if you hit the catalog /summary endpoint directly through edge, https://{your-gm-ingress-url}/services/catalog/latest/summary, you should see the same result.

    Any internal service that your service needs to connect to in the above way will need its own service-a-to-service-b cluster, route, and shared_rules like the egress ones above. They will connect to the egress domain.

    Remove Service from Grey Matter on Kubernetes

    hashtag
    Prerequisites

    1. An existing Grey Matter deployment running on Kubernetes (tutorial)

    2. kubectl or oc setup with access to the cluster

    3. greymatter cli setup with access to the deployment

    hashtag
    Overview

    1. Delete service from Intelligence 360 Application

    2. Delete Edge routes

    3. Delete pod with the service and sidecar

    NOTE: The exact configurations and commands used here assume you've gone through the tutorial to set up this service. For any other service, you'll just need to substitute your object keys and service deployments where appropriate.

    hashtag
    Steps

    hashtag
    1. Delete Catalog Entry

    The first step is to remove the service entry from the Catalog server so that users will not see it and expect to be able to use it.

    This will remove the service card from display.

    hashtag
    2. Edge Routing

    Next we'll remove any Edge routing configuration. This will prevent users from using the service.

    First delete the cluster object.

    Then delete the route and shared_rules. We can delete these both with a single call because they're explicitly linked by object keys.

    Shortly after these steps (generally a few seconds) the Edge will have removed the configuration to route to this service and users can no longer call the service.

    hashtag
    3. Delete Deployment

    Now that the service is no longer routed to from the edge, we can fully spin down the pod so that it no longer can receive any traffic and stops announcing into the mesh. This can be done in any way supported by k8s:

    • Scale the deployment/replicaset to 0

    • Delete the resources (deployment, replicaset, pod)

    Assuming you've followed our tutorial, the deployment can be removed with the following kubectl command:

    hashtag
    4. Delete Sidecar config

    NOTE: This step is optional when removing a service, and should only be done if the service is not expected to spin back up. These configuration objects are no longer sent to any Sidecar now that the pod is spun down, and nothing references them for routing.

    The last step is to delete the configuration objects in the API for this service.

    First we'll delete the cluster that the Sidecar uses to talk to its local microservice.

    Then we'll delete the domain, and use the --deep option to make sure no other Sidecars are using it.

    Third, delete the listener:

    Then we can delete the route and shared_rules together

    Finally, we'll delete the overarching proxy object.

    The service and all of its configs are now removed from Grey Matter.

    Setup Rate Limiting in Grey Matter on Kubernetes

    Envoy allows you to configure how many requests / second a service can field. This is useful for preventing DDoS attacks and otherwise making sure your server's resources aren't easily overrun.

    Unlike circuit breakers, which are configured on each cluster, rate limiting is configured across multiple listeners or clusters. Overall, circuit breakers are good for avoiding cascading failures caused by a bad downstream host. However, when:

    [A] large number of hosts are forwarding to a small number of hosts and the average request latency is low (e.g., connections/requests to a database server). If the target hosts become backed up, the downstream hosts will overwhelm the upstream cluster. In this scenario it is extremely difficult to configure a tight enough circuit breaking limit on each downstream host such that the system will operate normally during typical request patterns but still prevent cascading failure when the system starts to fail. Global rate limiting is a good solution for this case.

    This is because rate limiting sets a global number of requests / second across a service, independent of the number of instances configured for a cluster. This guide shows how to configure rate limiting on the edge node of a Grey Matter deployment in an edge namespace.

    hashtag
    Prerequisites

    1. greymatter CLI setup with a running Fabric mesh.

    2. An existing Grey Matter deployment running on Kubernetes (tutorial)

    3. kubectl or oc setup with access to the cluster

    hashtag
    Steps

    hashtag
    1. Deploy Ratelimit Service

    Rate limiting relies on an external service to regulate and calculate the current number of requests / second. For this example we use Envoy's open source rate limit service, which is based on Lyft's original rate limiting service. This blog post is a good example of the architecture.

    Begin by creating a deployment for the rate limit service by applying these configs to kubernetes. Note for all these examples I use edge as the namespace – this may differ for your deployment. Additionally, be sure to update the Redis password with your own:

    When applied with kubectl, this service will serve as a central hub to limit the number of requests coming from the domain "edge" to 100 requests / second.

    hashtag
    2. Configure Grey Matter Sidecar To Use Ratelimit Filter

    In order to configure rate limiting on a sidecar, we need to configure the sidecar with an additional cluster that points to the rate limit service. As a convenience, Grey Matter Sidecar allows for defining a cluster using environment variables. The following sample environment variables define a cluster ratelimit that points to our deployed ratelimit service:

    Make sure that you can see this cluster on whichever sidecar you have configured. It's available on localhost:8001/clusters under the ratelimit cluster.

    Now let's update the listener config for the sidecar we've configured. Edit the listener in the Grey Matter CLI and add the following attributes:

    You should see that requests to the ratelimit cluster succeed on the localhost:8001/clusters endpoint on the sidecar. You should also see logs in the ratelimit pod on every request, since logging is set to debug. If the configured requests / second threshold is ever exceeded, subsequent requests are blocked until the rate falls back below the threshold.

    hashtag
    3. Trust but Verify

    The rate limit we set can be tested by changing the number of requests / second to 1 and spamming the sidecar. Make a series of curl requests to the edge. The response code should be 429, although if TLS is enabled this often appears as just an SSL error. You should also see something like the following logs in the ratelimit pod:

    This shows that the ratelimit service is registering calls from edge and checking the requests / second against the limit of a maximum of 1 request / second. If you wish to raise the limit for a production environment, 100 requests / second is a good starting point.

    hashtag
    References

    Install the Grey Matter CLI

    This guide will help you install and set up the Grey Matter CLI.

    The Grey Matter Command Line Interface (CLI) is a configuration tool for Grey Matter. Leveraging the Grey Matter Control API, the CLI mainly performs dynamic configuration of the Fabric mesh.

    circle-check

    Q: How do the CLI and the Grey Matter Control API interact?

    A: The CLI manipulates configuration objects registered with the control plane to control the behavior of the sidecars making up the mesh. Learn more about these if you haven't already.

    hashtag
    Prerequisites

    To complete this tutorial, you’ll need an understanding of, and local access to the following environments and tools:

    • Unix Shell (or equivalent)

    • A

    hashtag
    Step 1: Download and Install the greymatter CLI

    The greymatter CLI is a binary written in Go. There are two ways to install it:

    1. use gmenv, a version manager for installing and switching between different greymatter versions

    2. download the binary manually from our release pages

    We recommend using gmenv because it allows for easier versioning against different Grey Matter environments.

    hashtag
    Option A (recommended): Use gmenv

    • using homebrew or manually following the README

    • (the email and password used to access ). For ease of use, it is recommended to .

    • Run gmenv install 1.4.2 to install version 1.4.2 of the CLI. You should see:

    hashtag
    Option B: Install the greymatter binary manually

    Retrieve the latest release, or visit Grey Matter's Nexus repository to browse all released versions of the CLI. When prompted, enter the username and password associated with your Grey Matter account.

    Alternatively, any artifact in nexus can also be downloaded with a terminal. The below command demonstrates how to do this with curl. Before executing, replace -u user.name@organization.com with your username, and make sure the desired artifact is specified.

    greymatter is distributed as a precompiled binary. Once you have downloaded the desired release, install it with the following steps:

    1. Unpack the tarball

      You will see the binaries:

    2. Move the greymatter binary for your operating system onto your system's PATH with the name greymatter:

    circle-check

    On a Mac?

    On a Mac, System Integrity Protection (SIP) may prevent you from moving the binary into parts of your $PATH. Follow these instructions to disable rootless mode:

    triangle-exclamation

    Make sure to rename your binary for your operating system to just greymatter when moving the artifact into your $PATH

    hashtag
    Step 2: Test Installation

    Run a quick test of the binary with the following command to verify a successful installation.

    If you don't see this response, verify that the greymatter binary is on your PATH.

    hashtag
    Step 3: Configure Your Environment for Grey Matter

    Once the CLI has been downloaded and installed, it will need to be configured for your Grey Matter installation to communicate with the Grey Matter Control API. If you don't yet have an existing Grey Matter installation, get started with the , and you will be able to configure the Grey Matter CLI and complete once it is up.

    If you have an existing deployment running, configure the CLI by setting the following environment variables in your terminal. Note that as each deployment is different, the specific endpoint and security context will be different, so make sure to verify your settings against the deployed environment.

    It is recommended to use an absolute path for your GREYMATTER_API_SSLCERT and GREYMATTER_API_SSLKEY variables.

    NOTE: If you've used our - see the section for the specific configurations.

    The full configuration and usage of the CLI can be found in the pages or by running greymatter --help, but some quick examples are shown here.

    Configuration options can also be set by CLI flags with each use. These flags will take precedence over the set environment variables.

    hashtag
    Step 4: Test connection to the Grey Matter Control API

    circle-exclamation

    This section requires a running instance of the Grey Matter Control API server. Your GREYMATTER_API_HOST and many other configuration options will change based upon where the Control API service is deployed.

    hashtag
    List Zone

    Use greymatter list zone with the following keys and certs to connect to the Grey Matter Control API.

    The returned Zone indicates that the connection was successful, and you've been able to inspect the Fabric Mesh. You can now interact with your Grey Matter installation using the CLI!

    If you see a connection refused error, your configuration is likely wrong. Check the values you have configured by running greymatter with no command; the options configured from the environment will be listed at the bottom of the output.

    hashtag
    Questions?

    circle-check

    Need help with your installation?

    Create an account at to reach our team.

    Configure Audits

    Configure audits in Grey Matter Fabric.

    The auditing capability of the Grey Matter Sidecar enables observability for all events within Grey Matter Fabric. This tutorial will guide you through a few easy steps to add an audit trail for your Fabric service.

    hashtag
    Prerequisites

    To complete this tutorial, you’ll need an understanding of, and local access to the following environments and tools:

    Set Sidecar Filters

    This guide will help you enable and disable Grey Matter Sidecar filters.

    Learn how to configure the Grey Matter Sidecar in Fabric to achieve the following:

    • Dynamic configuration for adaptive proxying

    • Service discovery through the Grey Matter Control API

    Setup Distributed Tracing in Grey Matter on Kubernetes

    hashtag
    Prerequisites

    1. An existing Grey Matter deployment.

    Set Mesh-wide Configuration

    These examples show how you can apply mesh-wide policy changes using the greymatter CLI and jq.

    hashtag
    Prerequisites

    1. jq command-line JSON processor

    Set Per-route Sidecar Filters

    The Grey Matter Route object offers the ability to configure on specific routes of a domain. Read the reference documentation on per route filter configuration .

    This guide will walk through the steps of setting two different configurations of the Grey Matter on different routes. To demonstrate this, provides the files to deploy a fibonacci service and configure it into the mesh for testing. For a step by step guide on service deployment, see the or the . To test the per filter route configuration for observables on a service already existing in the mesh, skip to .

    hashtag

    Setup OIDC Filter Chain

    There are four main filters that can be used to construct an Open ID Connect (OIDC) filter chain:

      • This filter checks the existence of a request attribute (header, cookie, etc.) and will copy/move it to the next filter in the chain.

     helm dep up spire
     helm dep up edge
     helm dep up data
     helm dep up fabric
     helm dep up sense
    
     make secrets
     helm install server spire/server -f global.yaml
     kubectl get pod -n spire -w
     NAME       READY   STATUS    RESTARTS   AGE
     server-0   2/2     Running   1          30s
    eksctl create cluster \
        --name production \
        --version 1.17 \
        --nodegroup-name workers \
        --node-type m4.2xlarge \
        --nodes=2 \
        --node-ami auto \
        --region us-east-1 \
        --zones us-east-1a,us-east-1b \
        --profile default
    [ℹ]  using region us-east-1
    [ℹ]  subnets for us-east-1a - public:192.168.0.0/19 private:192.168.64.0/19
    [ℹ]  subnets for us-east-1b - public:192.168.32.0/19 private:192.168.96.0/19
    [ℹ]  nodegroup "workers" will use "ami-0d373fa5015bc43be" [AmazonLinux2/1.15]
    [ℹ]  using Kubernetes version 1.15
    [ℹ]  creating EKS cluster "production" in "us-east-1" region
    [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
    [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --name=production'
    [ℹ]  CloudWatch logging will not be enabled for cluster "production" in "us-east-1"
    [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --region=us-east-1 --name=production'
    [ℹ]  2 sequential tasks: { create cluster control plane "production", create nodegroup "workers" }
    [ℹ]  building cluster stack "eksctl-production-cluster"
    [ℹ]  deploying stack "eksctl-production-cluster"
    [ℹ]  building nodegroup stack "eksctl-production-nodegroup-workers"
    [ℹ]  --nodes-min=2 was set automatically for nodegroup workers
    [ℹ]  --nodes-max=2 was set automatically for nodegroup workers
    [ℹ]  deploying stack "eksctl-production-nodegroup-workers"
    [✔]  all EKS cluster resource for "production" had been created
    [✔]  saved kubeconfig as "/home/user/.kube/config"
    [ℹ]  adding role "arn:aws:iam::828920212949:role/eksctl-production-nodegroup-worke-NodeInstanceRole-EJWJY28O2JJ" to auth ConfigMap
    [ℹ]  nodegroup "workers" has 0 node(s)
    [ℹ]  waiting for at least 2 node(s) to become ready in "workers"
    [ℹ]  nodegroup "workers" has 2 node(s)
    [ℹ]  node "ip-192-168-29-248.ec2.internal" is ready
    [ℹ]  node "ip-192-168-36-13.ec2.internal" is ready
    [ℹ]  kubectl command should work with "/home/user/.kube/config", try 'kubectl get nodes'
    [✔]  EKS cluster "production" in "us-east-1" region is ready
    eksctl get cluster --region us-east-1 --profile default
    eksctl get nodegroup --region us-east-1 --profile default --cluster production
    git clone --single-branch --branch release-2.2 https://github.com/greymatter-io/helm-charts.git && cd ./helm-charts
    Cloning into 'helm-charts'...
    remote: Enumerating objects: 337, done.
    remote: Counting objects: 100% (337/337), done.
    remote: Compressing objects: 100% (210/210), done.
    remote: Total 4959 (delta 225), reused 143 (delta 126), pack-reused 4622
    Receiving objects: 100% (4959/4959), 1.09 MiB | 2.50 MiB/s, done.
    Resolving deltas: 100% (3637/3637), done.
    make credentials
    ./ci/scripts/build-credentials.sh
    decipher email:
    first.lastname@company.io
    password:
    Do you wish to configure S3 credentials for gm-data backing [yn] n
    Setting S3 to false
    "decipher" has been added to your repositories
    Error: looks like "https://nexus.greymatter.io/repository/helm" is not a valid chart repository or cannot be reached: failed to fetch https://nexus.greymatter.io/repository/helm/index.yaml : 401 Unauthorized
    $ kubectl get svc edge
    NAME      TYPE           CLUSTER-IP      EXTERNAL-IP                                                              PORT(S)                          AGE
    edge      LoadBalancer   10.100.197.77   a2832d300724811eaac960a7ca83e992-749721369.us-east-1.elb.amazonaws.com   10808:32623/TCP,8081:31433/TCP   2m4s
    export GREYMATTER_API_HOST=<EDGE-EXTERNAL-IP>:10808
    export GREYMATTER_API_PREFIX=/services/control-api/latest
    export GREYMATTER_API_SSL=true
    export GREYMATTER_API_INSECURE=true
    export GREYMATTER_API_SSLCERT=</path/to/helm-charts>/certs/quickstart.crt
    export GREYMATTER_API_SSLKEY=</path/to/helm-charts>/certs/quickstart.key
    export EDITOR=vim # or your preferred editor
    make uninstall
    eksctl delete cluster --name production
    [ℹ]  using region us-east-1
    [ℹ]  deleting EKS cluster "production"
    [✔]  kubeconfig has been updated
    [ℹ]  cleaning up LoadBalancer services
    [ℹ]  2 sequential tasks: { delete nodegroup "workers", delete cluster control plane "prod" [async] }
    [ℹ]  will delete stack "eksctl-production-nodegroup-workers"
    [ℹ]  waiting for stack "eksctl-production-nodegroup-workers" to get deleted
    [ℹ]  will delete stack "eksctl-production-cluster"
    [✔]  all cluster resources were deleted

    Unix/Linux setup

  • Microservices and mesh architecture

  • Grey Matter Sidecar - v0.7.2 +

  • Grey Matter xDS

  • Docker (https://docs.docker.com/install/) - v17.03 and newer

  • Docker Compose (https://docs.docker.com/compose/install/)

  • Kafka - v2.12-0.10.2.1

  • hashtag
    Step 1: Add Kafka to the Sidecar

    Since the Sidecar will emit events into Kafka to be collected as the user wants, you will need to set up Kafka in Fabric. To emit a full GEM payload into Kafka, add the following environment variables to the hello-service-proxy section of the docker-compose.yml file.

    Once you have made these changes, proceed to step 2.

    hashtag
    Step 2: Add Kafka to Fabric

    You'll need to add Kafka to Fabric so it can start tracking audits. To add Kafka to Fabric, add the following code to the docker-compose.yml file:

    Once you have added Kafka to Fabric, proceed to Step 3.

    hashtag
    Step 3: Test Audit Event Results

    Now verify that the audit trails work. Anytime someone visits a route that goes to the hello-service, the hello-service-proxy will emit an audit event in Kafka that says that something happened.

    The event looks something like this:

    hashtag
    View Observables in Kafka

    To view exactly what is put into Kafka, use the Kafka console consumer described in the Kafka quickstart: https://kafka.apache.org/quickstart#quickstart_consume
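    For example, using the console consumer shipped with Kafka (the topic name here is an assumption; use whatever eventTopic was configured for the observables filter):

    ```shell
    # Run from the Kafka installation directory
    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
        --topic observables --from-beginning
    ```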

    hashtag
    Sample Output from Kafka

    The output should look like this:

    hashtag
    What's Next?

    Have your audits configured? Take the next step and learn how to visualize your audit data.

    hashtag
    Questions?

    circle-check

    Need help configuring audits?

    Create an account at Grey Matter Support to reach our team.

    Visualize Auditschevron-right

    Hot reloading of configuration through the Grey Matter Control API

    circle-check

    The Grey Matter Sidecar's filter functionality allows for deep request and response lifecycle observability in your network. We've extended Envoy's filters with in-house Go libraries.

    hashtag
    Prerequisites

    Successful installation of the following:

    • Grey Matter

    • Grey Matter Proxy

    • Grey Matter Control API

    hashtag
    Step 1: Edit the Sidecar Proxy

    To enable filters, you'll need to edit the proxy you've created. The following command will bring up the proxy in your favorite console editor in your shell.

    circle-info

    Note the two fields: active_proxy_filters and proxy_filters.

    hashtag
    Add Another List Item

    In the active_proxy_filters array, add another list item.
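    For example, to add the observables filter alongside an already-active metrics filter, the array might read as follows (the filter names are assumed from the examples later in this guide):

    ```json
    "active_proxy_filters": [
      "gm.metrics",
      "gm.observables"
    ]
    ```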

    triangle-exclamation

    Do not save and exit yet.

    hashtag
    Step 2: Configure the Observables Filter

    So far you've only told Grey Matter Sidecar which filters you want to run, but you have not provided configuration for the observables filter.

    hashtag
    gm_observables

    To configure the observables filter, look under the proxy_filters object.

    circle-info

    Note the gm_observables object. That is where you will configure your new filter.

    If needed, enable Kafka as a message buffer by specifying useKafka, eventTopic, and kafkaServerConnection.
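    A minimal sketch of that Kafka configuration, with illustrative values (the topic name and broker address are assumptions for your environment):

    ```json
    "gm_observables": {
      "useKafka": true,
      "eventTopic": "observables",
      "kafkaServerConnection": "kafka:9092"
    }
    ```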

    Learn more about the Grey Matter Observables Filter.

    hashtag
    Save Your Configuration

    Once you've edited the configuration to your liking, save the newly modified JSON. The Grey Matter CLI will update your instance of Grey Matter Control API.

    Now that you've saved your configuration, proxies with the key proxy-example will receive their new configuration and hot reload with the new filter enabled.

    hashtag
    Step 3: Test

    To see your new filter in action, tail the logs of the example proxy container with the following command:

    hashtag
    Successful Output

    If you've tailed the logs correctly, you will see a large JSON output to stdout of the container. That is your observable object that provides a looking glass into your request/response lifecycle of the Sidecar.

    hashtag
    Congratulations

    You have successfully created a fully running instance of Grey Matter Fabric, added an example service, and modified the service using the Grey Matter CLI.

    hashtag
    Questions?

    circle-check

    Need configuration help?

    Create an account at Grey Matter Support to reach our team.

    greymatter CLI setup with a running Fabric mesh.

    hashtag
    Mesh-Wide Configuration Changes

    To apply a mesh-wide configuration change, we need to loop through each Grey Matter Object you wish to change and reapply a new configuration. This can be done by using the greymatter list $object and greymatter edit $object_key in tandem.

    hashtag
    Example: Setting Circuit Breakers

    This example will set circuit breaker max_connections to 500 for all clusters in the service mesh. Create a file named update.json
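    A minimal update.json for this change could look like the following (the circuit_breakers field shape is assumed from gm-control-api conventions; verify against your release):

    ```json
    {
      "circuit_breakers": {
        "max_connections": 500
      }
    }
    ```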

    This update will be merged with the current values of each cluster object when run in the following snippet. It's best to run these commands individually a few times first, to make sure update.json creates the expected change. In the example below, the cluster is cluster-slo-service.

    If you're satisfied, apply the change to the entire mesh:
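    A hypothetical shape for that loop, assuming the greymatter CLI and jq are on your PATH and that your CLI version can read the edited object from stdin (verify against greymatter --help before running mesh-wide):

    ```shell
    # Merge update.json into every cluster object in the mesh
    for key in $(greymatter list cluster | jq -r '.[].cluster_key'); do
      greymatter get cluster "$key" \
        | jq -s '.[0] * .[1]' - update.json \
        | greymatter edit cluster "$key"
    done
    ```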

    hashtag
    Example: Enable Grey Matter Observables on All Proxies

    Similar to the above example, create a file update.json with the proposed update to all proxy objects:

    Note that we must also supply any other currently active proxy filters (in this case just "gm.metrics"), since the entirety of "active_proxy_filters" will be overridden. Also note the attribute __REPLACE_WITH_TOPIC_NAME__. The Kafka topic is specific to each proxy, so we must replace this with the proxy name for each proxy we change.

    Apply these changes with a similar script as above.

    hashtag
    Example: Configuring Objects With Specific Attributes

    It's often useful to update only the objects in the mesh that have specific attributes. We can do this by modifying our shell snippet above to use jq conditionals. Note: all the shell snippets in this doc can be changed to update only on specific attributes by adding the if block below.

    In the above example, we update only the proxies that have proxy_filters.gm_observables.eventTopic equal to "fabric".
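    For instance, a jq select of that shape (a sketch, assuming jq is installed) narrows the loop to matching proxies only:

    ```shell
    # Emit only proxies whose observables event topic is "fabric"
    greymatter list proxy \
      | jq -c '.[] | select(.proxy_filters.gm_observables.eventTopic == "fabric")'
    ```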

    hashtag
    Health Checks And Outlier Detection

    Health checks and outlier detection help determine whether an endpoint is healthy and our services are configured correctly. A common problem this can catch is incorrectly configured or unresponsive hosts. See the Envoy docs for available configurations and the source code for the API definition.

    Health checks can be added the same way we updated the clusters above, as all we need to update is the configuration options for health_checks on each cluster. Change update.json to enable a basic health check.
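    One possible shape for a basic HTTP health check in update.json (field names are assumed from gm-control-api conventions; adjust the path and timings for your service):

    ```json
    {
      "health_checks": [
        {
          "timeout_msec": 1000,
          "interval_msec": 10000,
          "unhealthy_threshold": 2,
          "healthy_threshold": 1,
          "health_checker": {
            "http_health_check": {
              "path": "/"
            }
          }
        }
      ]
    }
    ```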

    Change update.json for basic outlier detection.
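    Similarly, a sketch of basic outlier detection settings (field names assumed; see the Envoy outlier detection docs for the semantics of each knob):

    ```json
    {
      "outlier_detection": {
        "consecutive_5xx": 5,
        "interval_msec": 10000,
        "base_ejection_time_msec": 30000,
        "max_ejection_percent": 10
      }
    }
    ```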

    Apply this update to each cluster by running the snippet

    You should now see that health checks are running. Follow the logs from any given sidecar to see that health checks have been correctly enabled.

    hashtag
    Detect Unneeded Objects

    Often there are objects "floating around" in the mesh from previous deployments or experiments. These objects can be partially detected by looping through each one and ensuring that each one was created with a name, zone_key, or other basic properties.
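    A sketch of such a check (assumes the greymatter CLI and jq on your PATH; field names per the cluster objects used throughout this doc):

    ```shell
    # Print the key of any cluster missing basic properties
    greymatter list cluster \
      | jq -r '.[] | select(.zone_key == null or .cluster_key == null or .instances == null) | .cluster_key // "<missing cluster_key>"'
    ```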

    The above snippet will flag any cluster whose name, zone_key, or instances attribute is unset.
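
The same predicate can be tried against a sample cluster object first (the file name is ours):

```shell
# A cluster that was created without a zone_key or instances
cat > sample-cluster.json <<'EOF'
{"name": "leftover-cluster", "zone_key": "", "instances": []}
EOF

# True when any of the basic properties is unset
possibleOutlier=$(jq '.name == "" or .zone_key == "" or .instances == []' sample-cluster.json)
echo "$possibleOutlier"
```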

    $ kubectl get pods | grep edge
    edge-7d7bf848b9-xjs5l                   1/1     Running            0          114m
    $ kubectl exec -it edge-7d7bf848b9-xjs5l -- sh
    /app $
    /app $ curl localhost:8001/help
    admin commands are:
      /: Admin home page
      /certs: print certs on machine
      /clusters: upstream cluster status
      /config_dump: dump current Envoy configs (experimental)
      /contention: dump current Envoy mutex contention stats (if enabled)
      /cpuprofiler: enable/disable the CPU profiler
      /drain_listeners: drain listeners
      /healthcheck/fail: cause the server to fail health checks
      /healthcheck/ok: cause the server to pass health checks
      /heapprofiler: enable/disable the heap profiler
      /help: print out list of admin commands
      /hot_restart_version: print the hot restart compatibility version
      /listeners: print listener info
      /logging: query/change logging levels
      /memory: print current allocation/heap usage
      /quitquitquit: exit the server
      /ready: print server state, return 200 if LIVE, otherwise return 503
      /reset_counters: reset all counters to zero
      /runtime: print runtime values
      /runtime_modify: modify runtime values
      /server_info: print server version/status information
      /stats: print server stats
      /stats/prometheus: print server stats in prometheus format
      /stats/recentlookups: Show recent stat-name lookups
      /stats/recentlookups/clear: clear list of stat-name lookups and counter
      /stats/recentlookups/disable: disable recording of reset stat-name lookup names
      /stats/recentlookups/enable: enable recording of reset stat-name lookup names
    /app $ curl localhost:8001/stats
    cluster.catalog.assignment_stale: 0
    cluster.catalog.assignment_timeout_received: 0
    cluster.catalog.bind_errors: 0
    cluster.catalog.circuit_breakers.default.cx_open: 0
    cluster.catalog.circuit_breakers.default.cx_pool_open: 0
    cluster.catalog.circuit_breakers.default.rq_open: 0
    cluster.catalog.circuit_breakers.default.rq_pending_open: 0
    cluster.catalog.circuit_breakers.default.rq_retry_open: 0
    cluster.catalog.circuit_breakers.high.cx_open: 0
    cluster.catalog.circuit_breakers.high.cx_pool_open: 0
    cluster.catalog.circuit_breakers.high.rq_open: 0
    cluster.catalog.circuit_breakers.high.rq_pending_open: 0
    cluster.catalog.circuit_breakers.high.rq_retry_open: 0
    cluster.catalog.client_ssl_socket_factory.downstream_context_secrets_not_ready: 0
    cluster.catalog.client_ssl_socket_factory.ssl_context_update_by_sds: 7
    cluster.catalog.client_ssl_socket_factory.upstream_context_secrets_not_ready: 0
    cluster.catalog.control_plane.connected_state: 1
    cluster.catalog.control_plane.pending_requests: 0
    cluster.catalog.control_plane.rate_limit_enforced: 0
    cluster.catalog.default.total_match_count: 0
    cluster.catalog.external.upstream_rq_503: 6
    cluster.catalog.external.upstream_rq_5xx: 6
    cluster.catalog.external.upstream_rq_completed: 6
    cluster.catalog.init_fetch_timeout: 0
    cluster.catalog.lb_healthy_panic: 6
    ...
    /app $ curl localhost:8001/stats?filter=cluster.prometheus
    cluster.prometheus.assignment_stale: 0
    cluster.prometheus.assignment_timeout_received: 0
    cluster.prometheus.bind_errors: 0
    cluster.prometheus.circuit_breakers.default.cx_open: 0
    cluster.prometheus.circuit_breakers.default.cx_pool_open: 0
    cluster.prometheus.circuit_breakers.default.rq_open: 0
    cluster.prometheus.circuit_breakers.default.rq_pending_open: 0
    cluster.prometheus.circuit_breakers.default.rq_retry_open: 0
    cluster.prometheus.circuit_breakers.high.cx_open: 0
    cluster.prometheus.circuit_breakers.high.cx_pool_open: 0
    cluster.prometheus.circuit_breakers.high.rq_open: 0
    cluster.prometheus.circuit_breakers.high.rq_pending_open: 0
    cluster.prometheus.circuit_breakers.high.rq_retry_open: 0
    cluster.prometheus.client_ssl_socket_factory.downstream_context_secrets_not_ready: 0
    cluster.prometheus.client_ssl_socket_factory.ssl_context_update_by_sds: 9
    cluster.prometheus.client_ssl_socket_factory.upstream_context_secrets_not_ready: 0
    cluster.prometheus.control_plane.connected_state: 1
    cluster.prometheus.control_plane.pending_requests: 0
    cluster.prometheus.control_plane.rate_limit_enforced: 0
    cluster.prometheus.default.total_match_count: 2
    cluster.prometheus.external.upstream_rq_200: 26
    cluster.prometheus.external.upstream_rq_2xx: 26
    cluster.prometheus.external.upstream_rq_301: 1
    cluster.prometheus.external.upstream_rq_302: 1
    cluster.prometheus.external.upstream_rq_3xx: 2
    ...
    /app $ curl localhost:8001/stats?filter=ssl.handshake
    cluster.catalog.ssl.handshake: 0
    cluster.control-api.ssl.handshake: 0
    cluster.dashboard.ssl.handshake: 5
    cluster.data-internal.ssl.handshake: 0
    cluster.internal-jwt-security.ssl.handshake: 0
    cluster.jwt-security.ssl.handshake: 1
    cluster.prometheus.ssl.handshake: 6
    cluster.slo.ssl.handshake: 1
    listener.0.0.0.0_10808.ssl.handshake: 6
    /app $ curl localhost:8001/config_dump
    {
     "configs": [
      {
       "@type": "type.googleapis.com/envoy.admin.v3.BootstrapConfigDump",
       "bootstrap": {
        "node": {
         "id": "default",
         "cluster": "edge",
         "locality": {
          "region": "default-region",
          "zone": "zone-default-zone"
         },
         "hidden_envoy_deprecated_build_version": "a8507f67225cdd912712971bf72d41f219eb74ed/1.13.3/Modified/DEBUG/BoringSSL",
         "user_agent_name": "envoy",
         "user_agent_build_version": {
          "version": {
           "major_number": 1,
           "minor_number": 13,
           "patch": 3
          },
          "metadata": {
           "revision.status": "Modified",
           "revision.sha": "a8507f67225cdd912712971bf72d41f219eb74ed",
           "build.type": "DEBUG",
           "ssl.version": "BoringSSL"
          }
         },
         "extensions": [
          {
           "name": "envoy.grpc_credentials.aws_iam",
           "category": "envoy.grpc_credentials"
          },
          {
           "name": "envoy.grpc_credentials.default",
           "category": "envoy.grpc_credentials"
          },
          {
           "name": "envoy.grpc_credentials.file_based_metadata",
           "category": "envoy.grpc_credentials"
          },
          {
           "name": "envoy.health_checkers.redis",
           "category": "envoy.health_checkers"
          },
          {
           "name": "envoy.dog_statsd",
           "category": "envoy.stats_sinks"
          },
          {
           "name": "envoy.metrics_service",
           "category": "envoy.stats_sinks"
          },
          {
           "name": "envoy.stat_sinks.hystrix",
           "category": "envoy.stats_sinks"
          },
          {
    ...
    /app $ exit
    curl -X DELETE \
      -k --cacert ./certs/ca.pem \
      --cert ./certs/client.pem \
      --key ./certs/client.key \
      https://{YOUR_AWS_ELB_HOSTNAME}:10808/services/catalog/latest/clusters/fibonacci?zoneName=zone-default-zone
    {"deleted": "fibonacci"}
    greymatter delete cluster edge-to-fibonacci-cluster
    greymatter delete --deep=true route edge-to-fibonacci-route
    greymatter delete route edge-to-fibonacci-route-slash
    kubectl delete deployment fibonacci
    greymatter delete cluster fibonacci-cluster
    greymatter delete --deep domain fibonacci-domain
    greymatter delete listener fibonacci-listener
    greymatter delete --deep route fibonacci-local-route
    greymatter delete proxy fibonacci-proxy
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: ratelimit
      name: ratelimit
      namespace: default
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: ratelimit
      template:
        metadata:
          labels:
            app: ratelimit
        spec:
          serviceAccountName: default
          containers:
            - name: ratelimit
              image: "envoyproxy/ratelimit:v1.4.0"
              imagePullPolicy: IfNotPresent
              env:
              - name: USE_STATSD
                value: "false"
              - name: LOG_LEVEL
                value: "debug"
              - name: REDIS_SOCKET_TYPE
                value: "tcp"
              - name: REDIS_URL
                value: "redis.default.svc:6379"
              - name: RUNTIME_ROOT
                value: "/"
              - name: RUNTIME_SUBDIRECTORY
                value: "ratelimit"
              - name: REDIS_AUTH
                valueFrom:
                  secretKeyRef:
                    name: redis-password
                    key: password
              command: ["/bin/sh","-c"]
              args: ["mkdir -p /ratelimit/config && cp /data/ratelimit/config/config.yaml /ratelimit/config/config.yaml && cat /ratelimit/config/config.yaml &&  /bin/ratelimit"]
              ports:
                - name: server
                  containerPort: 8081
                - name: debug
                  containerPort: 6070
              volumeMounts:
                - name: ratelimit-config
                  mountPath: /data/ratelimit/config
                  readOnly: true
          volumes:
            - name: ratelimit-config
              configMap:
                name: ratelimit
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: ratelimit
      labels:
        app: ratelimit
    spec:
      ports:
        - name: server
          port: 8081
          protocol: TCP
          targetPort: 8081
        - name: debug
          port: 6070
          protocol: TCP
          targetPort: 6070
      selector:
        app: ratelimit
      sessionAffinity: None
      type: ClusterIP
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ratelimit
      namespace: default
    data:
      config.yaml: |-
        ---
        domain: edge
        descriptors:
          - key: path
            value: "/"
            rate_limit:
              unit: second
              requests_per_unit: 1
    ...
          tcp_cluster:
            type: 'value'
            value: 'ratelimit'
          tcp_host:
            type: 'value'
            value: 'ratelimit.default.svc.cluster.local'
          tcp_port:
            type: 'value'
            value: '8081'
    ...
    ...
  "active_network_filters": [
    "envoy.rate_limit"
  ],
      "network_filters": {
        "envoy_rate_limit": {
          "stat_prefix": "edge",
          "domain": "edge",
          "failure_mode_deny": true,
          "descriptors": [
            {
              "entries": [
                {
                  "key": "path",
                  "value": "/"
                }
              ]
            }
          ],
          "rate_limit_service": {
            "grpc_service": {
              "envoy_grpc": {
                "cluster_name": "ratelimit"
              }
            }
          }
        }
      },
    ...
    time="2020-09-03T21:54:40Z" level=debug msg="returning normal response"
    time="2020-09-03T21:54:40Z" level=debug msg="cache key: edge_path_/_1599170080 current: 1"
    time="2020-09-03T21:54:40Z" level=debug msg="returning normal response"
    time="2020-09-03T21:54:40Z" level=debug msg="starting get limit lookup"
    time="2020-09-03T21:54:40Z" level=debug msg="looking up key: path_/"
    time="2020-09-03T21:54:40Z" level=debug msg="found rate limit: path_/"
    time="2020-09-03T21:54:40Z" level=debug msg="starting cache lookup"
    time="2020-09-03T21:54:40Z" level=debug msg="looking up cache key: edge_path_/_1599170080"
    time="2020-09-03T21:54:40Z" level=debug msg="cache key: edge_path_/_1599170080 current: 3"
          - EMIT_EVENTS=true
          - EMIT_FULL_RESPONSE=true
          - USE_KAFKA=true
          - ENFORCE_AUDIT=true
          - KAFKA_TOPIC="hello-service-tests"
          - KAFKA_ENABLED=true
          - OBS_ENFORCE=true
          - OBS_ENABLED=true
          - OBS_FULL_RESPONSE=true
          - KAFKA_ZK_DISCOVER=true
          - INHEADERS_ENABLED=true
    kafka:
        hostname: kafka
        image: wurstmeister/kafka:0.10.2.1
        networks:
          - mesh
        environment:
          - KAFKA_HEAP_OPTS="-Xmx1G -Xms500M"
          - KAFKA_ADVERTISED_HOST=kafka
          - KAFKA_ADVERTISED_PORT=9092
          - KAFKA_ZOOKEEPER_CONNECT=zk
          - KAFKA_CREATE_TOPICS=hello-service-tests
        ports:
          - "22181:2181"
          - "29092:9092"
          - "9092:9092"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        depends_on:
          - zk
    {
        "action": "GET",
        "eventChain": [
            "9308cf66-8218-11e9-a159-0242ac1c0005"
        ],
        "eventId": "9308cf66-8218-11e9-a159-0242ac1c0005",
        "eventType": "",
        "originatorToken": null,
        "payload": {
            "isSuccessful": true,
            "request": {
                "endpoint": "/services/hello-service/0.1/",
                "headers": {
                    ":authority": "localhost:8080",
                    ":method": "GET",
                    ":path": "/services/hello-service/0.1/",
                    "accept": "*/*",
                    "user-agent": "curl/7.54.0",
                    "x-envoy-internal": "true",
                    "x-forwarded-for": "172.28.0.1",
                    "x-forwarded-proto": "https",
                    "x-request-id": "d9e69795-3fc8-41c3-a0f2-7775822340c5"
                }
            },
            "response": {
                "body": "Hello World!",
                "code": 200,
                "headers": {
                    ":status": "200",
                    "content-length": "12",
                    "content-type": "text/html; charset=utf-8",
                    "date": "Wed, 29 May 2019 13:49:26 GMT",
                    "server": "envoy",
                    "x-envoy-upstream-service-time": "7"
                }
            }
        },
        "schemaVersion": "1.0",
        "systemIp": "172.28.0.5",
        "timestamp": 1559137766,
        "xForwardedForIp": "172.28.0.1"
    }
    kafka-console-consumer --bootstrap-server localhost:9092 --topic hello-service-tests --from-beginning
    {
       "eventId":"a83bd73a-afc2-11e9-bf98-0242ac130006",
       "eventChain":[
          "a83bd73a-afc2-11e9-bf98-0242ac130006"
       ],
       "schemaVersion":"1.0",
       "originatorToken":[
          "CN=localuser,OU=Engineering,O=Decipher Technology Studios,=Alexandria,=Virginia,C=US",
          "",
          "CN=localuser,OU=Engineering,O=Decipher Technology Studios,=Alexandria,=Virginia,C=US"
       ],
       "eventType":"",
       "timestamp":1564158619,
       "xForwardedForIp":"172.19.0.1,172.19.0.1,172.19.0.6",
       "systemIp":"172.19.0.6",
       "action":"GET",
       "payload":{
          "isSuccessful":true,
          "request":{
             "endpoint":"/",
             "headers":{
                ":authority":"localhost:8080",
                ":method":"GET",
                ":path":"/",
                "accept":"*/*",
                "content-length":"0",
                "external_sys_dn":"",
                "ssl_client_s_dn":"CN=localuser,OU=Engineering,O=Decipher Technology Studios,=Alexandria,=Virginia,C=US",
                "user-agent":"curl/7.54.0",
                "user_dn":"CN=localuser,OU=Engineering,O=Decipher Technology Studios,=Alexandria,=Virginia,C=US",
                "x-envoy-external-address":"172.19.0.6",
                "x-envoy-original-path":"/services/hello-service/0.1/",
                "x-forwarded-for":"172.19.0.1,172.19.0.1,172.19.0.6",
                "x-forwarded-proto":"https",
                "x-real-ip":"172.19.0.1",
                "x-request-id":"9bbada13-3916-43c2-a59d-f1076a373a19"
             }
          },
          "response":{
             "code":200,
             "headers":{
                ":status":"200",
                "content-length":"12",
                "content-type":"text/html; charset=utf-8",
                "date":"Fri, 26 Jul 2019 16:30:19 GMT",
                "server":"Werkzeug/0.15.5 Python/3.6.6",
                "x-envoy-upstream-service-time":"6"
             },
             "body":"Hello World!"
          }
       }
    }
    greymatter edit proxy proxy-example
      "active_proxy_filters": [
        "gm.metrics",
        "gm.observables"
      ],
      "proxy_filters": {
        "gm_observables": {
          "emitFullResponse": false,
          "useKafka": false,
          "topic": "proxy-example",
          "eventTopic": "proxy-example-topic",
          "kafkaZKDiscover": false,
          "kafkaServerConnection": ""
        }
      }
    docker logs -f gmfabric_example-proxy_1
    {
      "circuit_breakers": {
        "max_connections": 500
      }
    }
    greymatter get cluster cluster-slo-service > cluster-slo-service.json
    jq -s '.[0] * .[1]' cluster-slo-service.json update.json > merged.json
    # this is what will be applied
    cat merged.json
    ...
    # try applying the change
    greymatter edit cluster cluster-slo-service < merged.json
    #!/bin/sh
    
    for key in $(greymatter list cluster | jq -r '.[] | .cluster_key'); do
      greymatter get cluster $key > $key.json
      jq -s '.[0] * .[1]' $key.json update.json > merged.json
      greymatter edit cluster $key < merged.json
      rm $key.json merged.json
    done
    {
      "active_proxy_filters": [
        "gm.metrics",
        "gm.observables"
      ],
      "proxy_filters": {
        "gm_observables": {
          "emitFullResponse": true,
          "useKafka": true,
          "eventTopic": "observables",
          "enforceAudit": false,
          "kafkaZKDiscover": false,
          "topic": "__REPLACE_WITH_TOPIC_NAME__",
          "kafkaServerConnection": "kafka-default.fabric.svc:9092"
        }
      }
    }
    #!/bin/sh
    
    for key in $(greymatter list proxy | jq -r '.[] | .proxy_key'); do
      greymatter get proxy $key > $key.json
      # fill in topic name with the name of the proxy
      name=$(cat $key.json | jq -r '.name')
      sed 's/__REPLACE_WITH_TOPIC_NAME__/'"$name"'/g' update.json > update-$key.json
      jq -s '.[0] * .[1]' $key.json update-$key.json > merged.json
      greymatter edit proxy $key < merged.json
      rm $key.json merged.json update-$key.json
    done
    #!/bin/sh
    
    for key in $(greymatter list proxy | jq -r '.[] | .proxy_key'); do
      greymatter get proxy $key > $key.json
      matches=$(cat $key.json | jq '.proxy_filters.gm_observables.eventTopic == "fabric"')
      if [ "$matches" = "true" ]; then
        jq -s '.[0] * .[1]' $key.json update.json > merged.json
        greymatter edit proxy $key < merged.json
      fi
      rm -f $key.json merged.json
    done
    {
      "health_checks": [
        {
          "timeout_msec": 1000,
          "interval_msec": 60000,
          "interval_jitter_msec": 1000,
          "unhealthy_threshold": 3,
          "healthy_threshold": 3
        }
      ]
    }
    {
      "outlier_detection": {
        "consecutive_5xx": 3,
        "base_ejection_time_msec": 30000
      }
    }
    #!/bin/sh
    
    for key in $(greymatter list cluster | jq -r '.[] | .cluster_key'); do
      greymatter get cluster $key > $key.json
      jq -s '.[0] * .[1]' $key.json update.json > $key-merged.json
      greymatter edit cluster $key < $key-merged.json
      rm $key.json $key-merged.json
    done
    #!/bin/sh
    
    for key in $(greymatter list cluster | jq -r '.[] | .cluster_key'); do
      greymatter get cluster $key > $key.json
      export GREYMATTER_CONSOLE_LEVEL="none"  # silence greymatter CLI log output
      possibleOutlier=$(cat $key.json | jq '.name == "" or .zone_key == "" or .instances == []')
      if [ "$possibleOutlier" = "true" ]; then
        echo "------ POSSIBLY UNNEEDED ------ $key"
      else
        echo "-------------- OK ------------- $key"
      fi
      rm -f $key.json
    done

    Route usage

  • Route-level metrics

  • Sidecar settings
    SLO
    Add entry in Catalog service to display in the Intelligence 360 Applications
    require_tls is true. This is because the edge proxy and the fibonacci sidecar are running in different pods, so they can't connect over localhost and must use their SPIFFE SVIDs for communication.

    The secret here mirrors the one set on the fibonacci listener. As stated above, the cluster is in charge of the egress connection from a sidecar to whatever service is located at its instances.

    In this case, the secret is telling the Edge proxy to fetch its SVID (with ID spiffe://quickstart.greymatter.io/edge) from Envoy SDS and present it on its outgoing connections. It will also only accept connections that present a certificate with SAN spiffe://quickstart.greymatter.io/fibonacci. See the SPIRE documentation for specifics.

    As described in the secret configuration on the fibonacci listener, these are opposites. The request from this cluster will be accepted by the fibonacci sidecar and vice versa.
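
As a sketch, the egress side of that pairing could look like the fragment below on the edge-to-fibonacci cluster. This is illustrative only: the field names follow the cluster object's secret configuration, and the SPIFFE IDs are the ones mentioned above — check your own mesh objects for exact values.

```json
"secret": {
  "secret_key": "secret-fibonacci-secret",
  "secret_name": "spiffe://quickstart.greymatter.io/edge",
  "secret_validation_name": "spiffe://quickstart.greymatter.io",
  "subject_names": [
    "spiffe://quickstart.greymatter.io/fibonacci"
  ]
}
```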

  • Run gmenv use 1.4.2 to set this as your version.

  • Reboot into recovery mode (reboot and hold down Cmd-R)
  • Open a terminal

  • Use this command: csrutil disable

  • Reboot and run the command that worked prior to El Capitan

  • When you're done, it is highly recommended that you re-enable SIP by following the same steps, but using csrutil enable in step 3.

    See Quickstart Install with Helm/Kubernetes if you don't already have a deployment running
  • kubectl access to the k8s cluster

  • The greymatter CLI with access to the Fabric mesh.

  • hashtag
    Overview

    1. Install the trace server

    2. Configure a Sidecar to emit traces

    3. Verify traces are collected

    hashtag
    Steps

    hashtag
    1. Install Jaeger

    For this walkthrough we'll use Jaeger as the trace backend. For convenience, we'll use its provided all-in-one Docker image, which bundles a simple server and web UI useful for small deployments. Larger production deployments should use a more resilient deployment strategy.

    We'll launch the server with the configuration below. The Jaeger server is exposed through a Kubernetes Service so it can be addressed directly by all Sidecars, and its trace port is 9411, which we'll need to set on the Sidecar.
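
If you need a starting point, a minimal all-in-one sketch follows. The image tag, resource names, and the COLLECTOR_ZIPKIN_HTTP_PORT variable are our assumptions — adjust them to your environment and Jaeger version.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jaeger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jaeger
  template:
    metadata:
      labels:
        app: jaeger
    spec:
      containers:
        - name: jaeger
          image: jaegertracing/all-in-one:1.18
          env:
            # Accept Zipkin-format spans on 9411, the port the Sidecar emits to
            - name: COLLECTOR_ZIPKIN_HTTP_PORT
              value: "9411"
          ports:
            - containerPort: 9411   # trace collection
            - containerPort: 16686  # web UI
---
apiVersion: v1
kind: Service
metadata:
  name: jaeger
spec:
  selector:
    app: jaeger
  ports:
    - name: zipkin
      port: 9411
      targetPort: 9411
```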

    Write the above Kubernetes deployment to a file (jaeger.yaml) and create the resources:

    hashtag
    2. Turn on tracing for the Edge

    Now that the trace server is up and running, we can configure a service to send out a trace for each request. To do this, we need to restart the Sidecar with new runtime options: the trace server's address must be defined at runtime and cannot be set through the service mesh later.

    We can set up the sidecar with the trace server by editing the current deployment:

    And adding in the following lines to the Sidecar's environment variables:

    After editing the deployment, we're ready to turn on tracing on the edge Sidecar using the greymatter CLI. Edit the ingress listener for the edge Sidecar:

    And add in the following config. This will turn on tracing for all requests that pass through this listener.

    hashtag
    3. Verify Trace Collection

    At this point everything is set up. Sending more requests through the edge (such as refreshing the browser or calling a service) will trigger traces to be collected.

    We can watch this in the logs of the trace server.

    We can also view this in the provided Jaeger UI. To quickly view the UI, port-forward your local shell to the UI port of the Jaeger pod.

    NOTE: The trace UI can also be set up as a Service in the mesh and routed through the Edge like all other services. See the guide for deploying a service for a walkthrough of those steps.

    Using your browser, navigate to localhost:16686

    Jaeger Trace UI
    Prerequisites
    1. An existing Grey Matter deployment following the Grey Matter Quickstart Install with Helm/Kubernetes guide.

      1. This guide assumes you've followed the step to configure a load balancer for ingress access to the mesh. If so, you should have a URL to access the Grey Matter Intelligence 360 application (e.g. a2832d300724811eaac960a7ca83e992-749721369.us-east-1.elb.amazonaws.com:10808).

      2. Make note of your load balancer ingress URL and use it wherever this guide references {your-gm-ingress-url}.

    2. The greymatter CLI set up with a running Fabric mesh.

    hashtag
    Overview

    1. Deploy and configure a Fibonacci service

    2. Enable the Grey Matter observables filter

    3. Enable per filter route configurations on a specific route

    hashtag
    Steps

    hashtag
    1. Deploy and configure a fibonacci service

    Deploy and configure a fibonacci service following the Quickstart Launch Service to Kubernetes Guide.

    If you are using a non-SPIFFE/SPIRE deployment, follow this guide instead.

    Before you move on to the next step of this guide, you should be able to see Fibonacci on the dashboard and successfully connect to the service. The fibonacci service route will be at https://{your-gm-ingress-url}:30000/services/fibonacci/. You should see Alive when you navigate to this route in a browser. Try https://{your-gm-ingress-url}:30000/services/fibonacci/fibonacci/37 to get the 37th Fibonacci number, 24157817, in response.

    hashtag
    2. Enable the Grey Matter observables filter

    Now we will turn on the observables filter for the Fibonacci service. The configuration that we add to the fibonacci-listener object will enable observables on all of the routes from its domain. Make sure that the greymatter CLI is set up.

    Add the following configuration

    Check the logs with kubectl logs deployment/fibonacci -c sidecar -f and hit the fibonacci endpoint https://{your-gm-ingress-url}/services/fibonacci/ in a browser; you should see something like the following:

    followed by the observable string of JSON.

    hashtag
    3. Configure and test two different route filter configurations

    Now we will configure and test per route filter configurations. To do this, we'll add a route from the fibonacci sidecar to the fibonacci service.

    The route looks for the /fibonacci/37 path, so any request into the mesh at https://{your-gm-ingress-url}/services/fibonacci/fibonacci/37, looking for exactly the 37th fibonacci number, will use this route.

    Per route filter configurations are configured by the filter_metadata field of the route object. The filter configuration we will make for this route specifically is to turn emitFullResponse in the observables filter to true. This means that for any request to https://{your-gm-ingress-url}/services/fibonacci/fibonacci/37, the full response body will be printed in the observable.

    Save this route in the /mesh directory as fibonacci-route-37.json

    Follow the logs again with kubectl logs deployment/fibonacci -c sidecar -f.

    First, hit any endpoint that isn't the 37th Fibonacci number; try https://{your-gm-ingress-url}/services/fibonacci/ or https://{your-gm-ingress-url}/services/fibonacci/fibonacci/18. Since neither of those requests has a path exactly equal to /fibonacci/37, neither will match the fibonacci-route-37 route. The observable printed for both of those requests in the fibonacci logs should be preceded with:

    Now, watch the logs and make a request to the fibonacci-route-37 route - https://{your-gm-ingress-url}/services/fibonacci/fibonacci/37. The observable JSON string should this time be preceded with:

    Notice the route based config log, as well as emitFullResponse = true for this route only. If you inspect the JSON of the observable you will also see that it contains a field "body" with value "24157817\n", the full body of the response.

  • OIDC Validation

    • This filter does online validation of an access token by calling the /userinfo endpoint. Note that the response of the /userinfo API is JSON containing user info, not a JSON Web Token (JWT), so if an ID token is needed, it must be acquired via the OIDC Authentication filter.

  • OIDC Authentication

      • The main filter that handles the authorization code flow and will store access or ID tokens in specified locations.

  • Envoy JWT Authentication

    • This filter does offline verification of a JWT (e.g. an ID token) using a specified JSON Web Key Set (JWKS). It verifies the signature, audience, and issuer, as well as time restrictions such as expiration and nbf (not before) time. If verification fails, the request is rejected.

  • The following filters may appear required for our OAuth/OIDC flow but are used for very specific cases:

    • OAuth

      • This is a legacy OAuth filter that requires openid scope and id_token.

      • This filter is tightly coupled with Grey Matter JWT Security service.
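
For reference, the offline verification performed by the Envoy JWT Authentication filter is configured roughly like the sketch below (the provider name, issuer, audience, and JWKS URI are placeholder values):

```yaml
# Envoy HTTP filter configuration (v3 API) for envoy.filters.http.jwt_authn
http_filters:
  - name: envoy.filters.http.jwt_authn
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.jwt_authn.v3.JwtAuthentication
      providers:
        idp:
          issuer: "https://idp.example.com/"    # must match the token's iss claim
          audiences: ["my-app"]                 # must match the token's aud claim
          from_headers:
            - name: Authorization
              value_prefix: "Bearer "
          remote_jwks:
            http_uri:
              uri: "https://idp.example.com/.well-known/jwks.json"
              cluster: idp                      # an Envoy cluster pointing at the IdP
              timeout: 5s
            cache_duration: 600s
      rules:
        - match: { prefix: "/" }
          requires: { provider_name: idp }
```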

    hashtag
    Basic Implementation

    In this section, we'll cover the layout for a mesh that contains one Edge proxy, one web application with a UI accessed via browsers, and one API service that returns JSON responses to be consumed by frontend applications.

    Image of OIDC Filter Flow

    hashtag
    Edge Proxy

    • In a Grey Matter deployment this is simply our sidecar used for edge traffic. It allows all traffic to go through the specified Listeners into the OIDC filter chain.

    hashtag
    Application Sidecar Listener

    • OIDC Validation filter checks to see if a request contains an access token in a specified location.

      • If so, validate the token with Identity Provider (IdP) by calling the /userinfo endpoint and optionally store the response JSON into a specified location. If validation fails, remove the access token from the request so that the following filter can process as if the request did not contain an access token.

      • Otherwise let the request go through.

    • OIDC Authentication Filter checks to see if a request contains an access token in a specified location

      • If so, there is nothing more to do.

      • Otherwise, check whether an authorization code exists in this request.

    hashtag
    API Service Sidecar Listener

    • Ensure Variables filter ensures that access tokens that are spread out across multiple request attributes, like a bearer token in a header, a cookie, etc., are moved to a consistent location. This alleviates the need for subsequent filters to check multiple request attributes. If a token is not found, the request is rejected.

    • OIDC Validation filter validates the token with IdP by calling the /userinfo endpoint and optionally stores the response JSON into a specified location. If validation fails, the request is rejected.

    hashtag
    Enhanced Implementation

    The above implementation works for a mesh with a small number of services. However, it requires each and every application/API listener to be aware of the OIDC configuration, which is cumbersome. The second implementation uses a "gateway" proxy which contains two separate listeners and domains - one for the application route, the other for the API service route.

    Image of enhanced OIDC filter chain at the edge

    hashtag
    Edge Proxy

    • It allows any traffic to go through, but using a string pattern match, it splits the traffic into two routes: one for /app and the other for /services. This edge proxy can be omitted by using different subdomains or ports for application traffic and API traffic.

    hashtag
    Gateway Proxy

    The gateway proxy is an instance of gm-proxy with two listeners and two domains. Why two domains? Because a domain can only be responsible for a single port. When we "split" the traffic in two, /app traffic is directed to one port (e.g. 8080) of the gateway proxy while /services traffic goes to another (e.g. 9080).

    hashtag
    App Listener & App Domain

    This listener and domain pair is responsible for localhost:9443/app traffic. The filters used by the listener are the same as the basic implementation's application sidecar listener.

    hashtag
    API Listener & API Domain

    This listener and domain pair is responsible for localhost:9443/services traffic. The filters used by the listener are the same as the basic implementation's API service sidecar listener.

    hashtag
    Considerations

    • Online validation using the OIDC Validation filter adds an API call to the IdP for every single request you serve. If you only need to quickly verify the signature of an ID token (or any other JWT), you can use Envoy JWT Authentication instead.

    • Carefully consider ways to reduce traffic back to the IdP. Do static assets for the UI (CSS, images, etc.) really need to be protected? Is there a way to cache validation results so that we only need to ask the IdP every 5 minutes, for example?


    Setup Zero-Trust

    Follow along with this guide to configure SPIRE in Grey Matter.

    This guide will help you set up a secure, zero-trust environment in Grey Matter to achieve the following:

    • Establish trust in user identities

    • Enforce adaptive and risk-based policies

    • Enable secure access to all apps

    • Enforce transaction and data security

    Grey Matter uses SPIRE to enable zero-trust security. For more information about how Grey Matter uses SPIRE, see the .

    Learn more about Grey Matter's approach to zero-trust security here.

    hashtag
    Prerequisites

    • Unix shell and Decipher account

    • helm and kubectl installed

    • A Kubernetes or OpenShift deployment of Grey Matter with SPIRE enabled

    circle-check

    Learn more about the SPIRE configuration on the and documentation.

    hashtag
    Step 1: Install

    To install Grey Matter using SPIRE, verify that global.spire.enabled is true (the default) for your helm charts setup and .
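    In a helm values file, that flag looks like the following sketch (the key path comes from the sentence above):

    ```yaml
    global:
      spire:
        enabled: true
    ```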

    hashtag
    Step 2: Deploy a new service

    For a full walkthrough of an example service deployment in a SPIRE enabled environment, see .

    To adapt an existing service deployment to enable SPIRE, add this environment variable to the sidecar container:

    Then add the following to the deployment volumes:

    and mount it into the sidecar container as:

    This creates the Unix socket over which the sidecar will communicate with the SPIRE agent.
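    Pieced together, the three additions look like this; the values are taken from the SPIRE-enabled deployment examples later on this page:

    ```yaml
    # 1. Environment variable on the sidecar container
    env:
      - name: SPIRE_PATH
        value: "/run/spire/socket/agent.sock"

    # 2. Volume on the deployment
    volumes:
      - name: spire-socket
        hostPath:
          path: /run/spire/socket
          type: DirectoryOrCreate

    # 3. Mount in the sidecar container
    volumeMounts:
      - name: spire-socket
        mountPath: /run/spire/socket
        readOnly: false
    ```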

    hashtag
    Step 3: Mesh configurations

    There are several updates to make to the mesh configurations for a new service to enable SPIRE. The following describes the updates necessary to configure ingress to the service using SPIRE; if your service also has egress actions, check out the .

    hashtag
    Domain

    If you have existing mesh configurations for this service in a non-SPIRE installation, remove any ssl_config from the ingress domain object, but keep force_https set to true. The domain should look like .

    hashtag
    Listener

    The of the listener object is used to configure ingress mTLS using SPIRE.

    If you installed Grey Matter using the helm charts, each deployment should have a label with key greymatter.io/control whose value is the name of the service (see ). This value will be used to determine the SPIFFE ID for a sidecar.

    Let {service-name} be the value of the label greymatter.io/control in your service deployment. Add the following secret to your listener object:
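    A sketch of the secret block, assuming the quickstart.greymatter.io trust domain used in the deployment examples later on this page; substitute your own trust domain and {service-name}:

    ```json
    "secret": {
      "secret_key": "{service-name}.identity",
      "secret_name": "spiffe://quickstart.greymatter.io/{service-name}",
      "secret_validation_name": "spiffe://quickstart.greymatter.io",
      "subject_names": [
        "spiffe://quickstart.greymatter.io/edge"
      ],
      "ecdh_curves": [
        "X25519:P-256:P-521:P-384"
      ]
    }
    ```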

    Once this is configured, the sidecar will use its SPIFFE certificate for ingress traffic on this listener.

    hashtag
    Edge to new service routing

    The cluster created for edge to connect to the service will need a similar update for egress traffic to the new service. Remove any ssl_config on the edge-to-{service-name}-cluster and set the secret instead:
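    A sketch of the updated cluster, again assuming the quickstart.greymatter.io trust domain from the examples later on this page:

    ```json
    {
      "zone_key": "zone-default-zone",
      "cluster_key": "edge-to-{service-name}-cluster",
      "name": "{service-name}",
      "instances": [],
      "require_tls": true,
      "secret": {
        "secret_key": "edge.identity",
        "secret_name": "spiffe://quickstart.greymatter.io/edge",
        "secret_validation_name": "spiffe://quickstart.greymatter.io",
        "subject_names": [
          "spiffe://quickstart.greymatter.io/{service-name}"
        ],
        "ecdh_curves": [
          "X25519:P-256:P-521:P-384"
        ]
      }
    }
    ```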

    Routes and shared_rules should be configured as usual.

    hashtag
    Step 4: Test

    When you set up services to participate in the mesh, SPIFFE identities are created for them. This means each service gets a certificate minted specifically for it. As an example of probing into this data, you can use openssl to verify that a service is set up to use SPIFFE.

    In a Kubernetes setup, you can find the IP of your deployment with kubectl describe pod {pod-id} | grep IP. Copy this IP and use openssl to check the certificate. You can use openssl from within the data container -

    and then to check your service:

    or

    You should see from the certificate chain and SAN that the certificate your service is presenting is from SPIRE.
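    The probe above can be sketched as follows; the pod name, IP, and port are placeholders for your own deployment:

    ```shell
    # Find the pod IP (pod name is a placeholder)
    kubectl describe pod {pod-id} | grep IP

    # Inspect the certificate the sidecar presents on its proxy port
    openssl s_client -connect {pod-ip}:10808 -showcerts </dev/null 2>/dev/null \
      | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"

    # Expect a SPIFFE URI SAN, e.g. URI:spiffe://quickstart.greymatter.io/{service-name}
    ```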

    You can also verify that SDS is working for your service by execing into its sidecar pod with kubectl exec -it {pod-id} -c sidecar -- /bin/sh and running curl localhost:8001/certs. If the sidecar is configured properly, its SPIFFE certificate will be listed there.

    hashtag
    Questions?

    circle-check

    Need help setting up zero-trust security?

    Create an account at to reach our team.

    Open Policy Agent Integration

    This guide will teach you how to integrate Open Policy Agent into your service mesh and utilize its policy creation and enforcement system. In this practical example, we will demonstrate how to establish a policy for the Grey Matter SLO service based on method and payload. This example can be modified and used for any service throughout your mesh.

    Learn how to integrate Open Policy Agent and in doing so:

    • Create custom policies ensuring proper authorization surrounding your services

    • Safely determine what values and information can reach your service

    • Base policies off of information such as request:

      • Method

      • Parsed-Body Values

    hashtag
    Prerequisites

    Successful installation of the following:

    • Grey Matter deployed into the Kubernetes environment

    • Greymatter CLI configured with access to the environment

    circle-info

    Note In order to use Open Policy Agent, we need to launch an OPA-Container in the pod of the service we are looking to policy-enforce. The following steps will explain how to do so, along with a practical example.

    hashtag
    Step 1: Creating the Rego Policy

    Rego is the backbone of OPA: it is the structured language in which OPA policies are written and interpreted.

    Rego code is structured so that information is passed to it through requests; depending on the logic of the Rego code, it returns values such as 'deny' or 'allow' as true or false, according to how the conditions within the code are met. The code snippet below is an example of a Rego policy that prevents users on the Grey Matter dashboard from setting memory-utilization violation triggers under 20 MB. We will save it as slo-policy.rego
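    A minimal sketch of such a policy; the input shape follows the OPA-Envoy plugin's input document, and the memoryUtilization field name is a hypothetical stand-in for whatever field your dashboard actually sends:

    ```rego
    package envoy.authz

    import input.attributes.request.http as http_request

    default allow = false

    # Let non-mutating requests through
    allow {
      http_request.method == "GET"
    }

    # Allow writes only when the memory-utilization trigger is at least 20 MB
    allow {
      http_request.method == "POST"
      input.parsed_body.memoryUtilization >= 20
    }

    # Also allow writes that leave the trigger unset
    allow {
      http_request.method == "POST"
      not input.parsed_body.memoryUtilization
    }
    ```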

    circle-info

    Note Want to read more about how the Rego language works and how to write your own Rego code? See and .

    hashtag
    Applying the policy as a secret

    In order for this Rego policy file to be found in the mesh, we need to apply it as a secret:
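    A sketch of that command, assuming the slo-policy.rego file from the previous step and the default namespace used elsewhere in this guide:

    ```shell
    kubectl create secret generic opa-policy \
      --from-file=slo-policy.rego -n default
    ```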

    hashtag
    Step 2: Modifying the Service Deployment File

    hashtag
    Getting the Deployment File

    The first step in launching an OPA container in our service's pod is to grab the deployment file of the pod we're looking to secure. We can do this by using the following command.
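    For example, for the slo deployment targeted in this guide (the output filename is our own choice):

    ```shell
    kubectl get deployment slo -n default -o yaml > slo-deployment.yaml
    ```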

    hashtag
    Editing the Deployment File

    Next, we will edit the service deployment file we just grabbed (saved to our current directory). We need to add the container image for Open Policy Agent, along with several arguments, to ensure the container spawns into our pod seamlessly. The YAML code snippet below shows how the container setup should look. This code can be placed right under containers: whose immediate parent is spec:.
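    A sketch of the container entry, using the OPA-Envoy plugin image; check the exact image tag and plugin settings against the Open Policy Agent documentation for your OPA version:

    ```yaml
    - name: opa
      image: openpolicyagent/opa:latest-envoy
      args:
        - "run"
        - "--server"
        - "--addr=localhost:8181"
        # Expose the gRPC ext_authz server the sidecar will call
        - "--set=plugins.envoy_ext_authz_grpc.addr=:9191"
        - "--set=plugins.envoy_ext_authz_grpc.path=envoy/authz/allow"
        - "/policy/slo-policy.rego"
      volumeMounts:
        - name: opa-policy
          mountPath: /policy
          readOnly: true
    ```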

    In addition to the code snippet above, you must also add the following code to your deployment file. At the end of the file there should be a parent called volumes; add the following entry there:
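    A sketch of the volumes entry, projecting the opa-policy secret created earlier into the pod:

    ```yaml
    volumes:
      - name: opa-policy
        secret:
          secretName: opa-policy
    ```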

    This step helps us to grab the Rego policy from our secret.

    hashtag
    Applying the Deployment File

    Now that we have our opa-policy secret in the mesh and our deployment file is saved and ready to go, we can apply it with:
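    Assuming the deployment was saved as slo-deployment.yaml:

    ```shell
    kubectl apply -f slo-deployment.yaml -n default
    ```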

    You can check on the progress of your deployment by using kubectl get pods -w -n default. Once your new pod is in the Running stage and the old one has finished Terminating (~1 min), we can move on to the next step.

    hashtag
    Step 3: Editing the Listener

    The final step is to edit the listener corresponding to the deployment you have been working on in this guide. We will add the envoy.ext_authz filter to the active_http_filters section of the listener. This filter ensures that all requests sent to our service are first checked with Open Policy Agent. The filter also listens for Open Policy Agent's deny or allow response, produced by our parsed Rego policy, and accepts or rejects the request depending on the value of the response.

    You can use the command greymatter list listener to get a list of listeners that are available for editing. In our example's case, we are looking for listener-slo as we are looking to edit the slo deployment.

    Using the following command, we will edit the listener that we want:
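    For our example this is (assuming the CLI's edit subcommand follows the same shape as list and create):

    ```shell
    greymatter edit listener listener-slo
    ```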

    Now, in your terminal's default editor, you will be shown the listener's data in a json format. The only fields that need to be edited are shown below:
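    A sketch of those fields; the envoy_ext_authz body follows Envoy's ext_authz filter configuration (a gRPC service plus request-body buffering), and the localhost:9191 target assumes the OPA container runs as a sidecar in the same pod:

    ```json
    {
      "active_http_filters": [
        "envoy.ext_authz"
      ],
      "http_filters": {
        "envoy_ext_authz": {
          "grpc_service": {
            "google_grpc": {
              "target_uri": "localhost:9191",
              "stat_prefix": "ext_authz"
            }
          },
          "failure_mode_allow": false,
          "with_request_body": {
            "max_request_bytes": 8192,
            "allow_partial_message": true
          }
        }
      }
    }
    ```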

    Notice how we have added envoy.ext_authz to the active_http_filters parent and a second snippet, envoy_ext_authz, to the http_filters parent. The snippet added to http_filters makes the filter send requests using the gRPC protocol so that all of the request's information can be sent through ext_authz and observed by our Rego policy. Please note: the JSON we have added is not specific to our example only, and will not require editing. Any subsequent http_filters can be placed after the last line of the code snippet above.

    Save this edit and allow some time (~30s) for the changes to propagate through the mesh.

    hashtag
    Successful Output

    If you have followed our example, you'll notice that for your Service Level Objective service in the Grey Matter dashboard, you will not be able to change the memory-utilization violation field to any number less than 20; it will return a 403 Forbidden status. However, you can set the number to anything equal to or greater than 20, or leave it blank so that no violation trigger exists.

    hashtag
    In the dashboard

    hashtag
    In the network console

    hashtag
    Congratulations

    You have successfully integrated Open Policy Agent into your service mesh and created a custom Rego Policy to go along with it, in order to protect your service!

    hashtag
    Want to look further into Open Policy Agent / Rego Policies?

    Read further in-depth on Open Policy Agent and its more advanced capabilities such as, but not limited to, JWT verification/signing, dynamic access control lists, and editing policies on the fly. See

    hashtag
    Questions?

    circle-check

    Need Help?

    Create an account at to reach our team.

    Base Options

    hashtag
    Control

    hashtag
    Description

    gm-control is the base command for performing discovery operations in the Grey Matter Service Mesh.

    hashtag
    Child Commands

    hashtag
    Help

    To list available commands, either run gm-control with no parameters or with the global help flag, gm-control --help:

    Visualize Audits

    Visualize audit data from Grey Matter using Kibana.

    This guide will help you leverage the Kibana dashboard to visualize your Grey Matter audit data.

    hashtag
    Prerequisites

    To complete this tutorial, you’ll need an understanding of, and local access to the following environments and tools:

    • Unix/Linux setup

    • Microservices and mesh architecture

    • Grey Matter Sidecar - v0.7.2+

    • Grey Matter Discovery Service - v2.0.2

    • How to configure the Grey Matter Sidecar

    • Docker () - v17.03 and newer

    • Docker Compose ()

    • Kafka - v2.12-0.10.2.1

    • Elasticsearch

    • Kibana

    hashtag
    Step 1: Configure Audits

    If you haven't already done so, complete our guide to help you with the Grey Matter Sidecar.

    When you've completed that guide, proceed to Step 2: Audit Proxy Observable Consumer (APOC).

    hashtag
    Step 2: Audit Proxy Observable Consumer (APOC)

    The Audit Proxy Observable Consumer (APOC) reads from Kafka and submits the information to Elasticsearch asynchronously for maximum throughput.

    APOC's goals include:

    • Listening to given Kafka topic(s)

    • Taking every event that is emitted into Kafka

    • Transforming those events

    hashtag
    Message Transformation

    When a message is transformed by the APOC, event mappings are also added. The message will default to the following HTTP types found below:

    The message can also be changed for individual routes by adding the following settings:

    During the transformation action_locations are added to the audit event. These consist of an IP address identifier based on the x-forwarded-for in the header of the request.

    action_targets is added consisting of {x-forwarded-proto}://{:authority}{x-envoy-original-path}.

    hashtag
    Determine Value of Creator

    A creator is then added to the auditing payload, and follows these steps to determine its value.

    • Allow for configuring an override, defaulting to the value set in the environment variable or in settings.yaml or JSON.

    • If there is a request header named application, use it.

    • Otherwise, see if there is a request header named service.

    In this configuration, the transformation will try to find user information in the request header and set action_initiator to be the userdn from the request. time_audited is also added to the audit information. This is the system time at which the audit transformation is completed by the APOC. Finally, if a GEO_DATABASE is supplied, the transformation will try to find the geographical location of the request based on the last IP address in the x_forwarded_for header field. The full information is added to the audit information as geo_ip, but basic location (lat and long) is added as location.

    hashtag
    Questions?

    circle-check

    Need help getting a dashboard up so you can visualize observables?

    Create an account at to reach our team.

    Audits and Observables

    Overview of how Grey Matter handles audits and observables.

    Grey Matter Fabric helps you visualize and analyze audit data. As long as you deploy the Grey Matter Sidecar with a service, the Sidecar will send metrics and audit data to Fabric.

    circle-check

    Key Definition

    An audit is a security-relevant event within Grey Matter. An audit event, or simply event, can be any of the following:

    • Change to the security state of the system

    • Attempted or actual violation of the system access control or accountability security policies

    • Both

    An audit event report includes the following information:

    • Name of the event

    • Success or failure of the event

    • Additional event-specific information that is related to security auditing

    hashtag
    How Do Audits Work in Grey Matter?

    1. The Grey Matter Sidecar emits audit data to a Kafka topic for easy observability.

    2. If Fabric is set up with an Edge, it pulls audit data from the PKI certificate, the IP address of the originating request, etc.

    3. This audit data (also called events or observables) allows for detailed event auditing of ingress and egress traffic, and process resource use.

    hashtag
    Learn About the Grey Matter Sidecar

    Learn more about the process and capabilities of the Grey Matter Sidecar here.

    hashtag
    Configure an Observables Filter

    Learn how to set up an observables filter here.

    hashtag
    Visualize Observables

    Learn how to use Grey Matter to visualize observables here.

    hashtag
    How Does Grey Matter Index Audit Events?

    Grey Matter does not index audit events directly into Elasticsearch. Instead, Grey Matter contains a Kafka consumer that reads Kafka observables. This consumer transforms and indexes them to use with Elasticsearch.

    hashtag
    Use Kibana to Visualize Observables

    Kibana is an open source Elasticsearch plugin that takes observables from Grey Matter and visualizes them in a graphical dashboard.

    Kibana simplifies the creation of visualizations to explore, search, view, and interact with audit data stored in Elasticsearch indices. Kibana helps you analyze and visualize individual events and trends such as:

    • Total requests

    • Number of requests by individual users

    • Geographic locations of requests made in Fabric

    hashtag
    Enable Audits to Be Ingested into Elasticsearch with Kibana

    To enable audits to be ingested into Elasticsearch with Kibana, follow these steps:

    1. : this guide helps you gather observables.

    2. : this guide helps visualize observables.

    circle-info

    What about Splunk?

    While the Grey Matter Sidecar does not support direct emission of events into Splunk, you can create or modify a consumer to provide that capability.

    hashtag
    Sample Observable Information

    The following observable information was captured from a user accessing an event through a Sidecar operating within Grey Matter Fabric:

    hashtag
    Questions?

    circle-check

    Create an account at to reach our team.

    Troubleshoot

    Learn how to debug common problems.

    hashtag
    Fabric Mesh Communication

    hashtag
    404 Errors

    Fabric

    Grey Matter Fabric powers the zero-trust hybrid service mesh, and consists of the , , , and . Use Fabric to connect services built with any language, framework, or runtime environment.

    Get a refresher on how Fabric fits into Grey Matter's architecture.

    hashtag
    Edge

    Setup

    hashtag
    Questions?

    circle-info

    Configuration Issues?

    AWS Discovery

    hashtag
    AWS

    hashtag
    Description

    gm-control aws

    Marathon Discovery

    hashtag
    DC/OS Marathon

    hashtag
    Description

    gm-control marathon

    Grey Matter Control

    The Control server performs service discovery in the mesh and acts as the Envoy xDS server to which all proxies connect. Service discovery can be performed through a .

    hashtag
    Usage

    The Control server runs on a number of different platforms and supports a variety of discovery mechanisms. Each one has its own configuration options, the details of which are outlined in the .

    For all commands, both CLI flags and environment variables are supported. If both are used, then CLI flags take precedence. Each flag listed in the docs (or

    Leaderboard Logging

    gm-control can be configured to log a leaderboard of non-2xx requests on a time interval to stdout. This is useful as a quick way to see which endpoints are performing poorly throughout the mesh, without getting into advanced debugging.

    hashtag
    Configuration

    Leaderboard logging is configured with the following two parameters:

    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: fibonacci
        greymatter.io/control: fibonacci
      name: fibonacci
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: fibonacci
          greymatter.io/control: fibonacci
      template:
        metadata:
          labels:
            app: fibonacci
            greymatter.io/control: fibonacci
        spec:
          containers:
            - name: fibonacci
              image: docker.greymatter.io/internal/fibonacci:latest
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 8080
            - name: sidecar
              image: "docker.greymatter.io/release/gm-proxy:1.4.5"
              imagePullPolicy: IfNotPresent
              ports:
                - name: metrics
                  containerPort: 8081
                - name: proxy
                  containerPort: 10808
              env:
                - name: ENVOY_ADMIN_LOG_PATH
                  value: "/dev/stdout"
                - name: PROXY_DYNAMIC
                  value: "true"
                - name: SPIRE_PATH
                  value: "/run/spire/socket/agent.sock"
                - name: XDS_CLUSTER
                  value: "fibonacci"
                - name: XDS_HOST
                  value: "control.default.svc"
                - name: XDS_NODE_ID
                  value: "default"
                - name: XDS_PORT
                  value: "50000"
                - name: XDS_ZONE
                  value: "zone-default-zone"
              volumeMounts:
                - name: spire-socket
                  mountPath: /run/spire/socket
                  readOnly: false
          imagePullSecrets:
            - name: docker.secret
          volumes:
            - name: spire-socket
              hostPath:
                path: /run/spire/socket
                type: DirectoryOrCreate
    kubectl apply -f deployment.yaml
    {
      "zone_key": "zone-default-zone",
      "domain_key": "fibonacci-domain",
      "name": "*",
      "port": 10808,
      "force_https": true
    }
    greymatter create domain < domain.json
    {
      "zone_key": "zone-default-zone",
      "listener_key": "fibonacci-listener",
      "domain_keys": [
        "fibonacci-domain"
      ],
      "name": "ingress",
      "ip": "0.0.0.0",
      "port": 10808,
      "protocol": "http_auto",
      "secret": {
        "secret_key": "fibonacci.identity",
        "secret_name": "spiffe://quickstart.greymatter.io/fibonacci",
        "secret_validation_name": "spiffe://quickstart.greymatter.io",
        "subject_names": [
          "spiffe://quickstart.greymatter.io/edge"
        ],
        "ecdh_curves": [
          "X25519:P-256:P-521:P-384"
        ]
      }
    }
    greymatter create listener < listener.json
    {
      "zone_key": "zone-default-zone",
      "proxy_key": "fibonacci-proxy",
      "domain_keys": [
        "fibonacci-domain"
      ],
      "listener_keys": [
        "fibonacci-listener"
      ],
      "name": "fibonacci",
      "listeners": null
    }
    greymatter create proxy < proxy.json
    {
      "zone_key": "zone-default-zone",
      "cluster_key": "fibonacci-cluster",
      "name": "local",
      "instances": [
        {
          "host": "localhost",
          "port": 8080
        }
      ],
      "require_tls": false
    }
    greymatter create cluster < fibonacci-local-cluster.json
    {
      "zone_key": "zone-default-zone",
      "shared_rules_key": "fibonacci-local-rules",
      "name": "local",
      "default": {
        "light": [
          {
            "constraint_key": "",
            "cluster_key": "fibonacci-cluster",
            "metadata": null,
            "properties": null,
            "response_data": {},
            "weight": 1
          }
        ],
        "dark": null,
        "tap": null
      }
    }
    greymatter create shared_rules < fibonacci-local-rules.json
    {
      "zone_key": "zone-default-zone",
      "domain_key": "fibonacci-domain",
      "route_key": "fibonacci-local-route",
      "path": "/",
      "prefix_rewrite": "",
      "shared_rules_key": "fibonacci-local-rules"
    }
    greymatter create route < fibonacci-local-route.json
    {
      "zone_key": "zone-default-zone",
      "cluster_key": "edge-to-fibonacci-cluster",
      "name": "fibonacci",
      "instances": [],
      "require_tls": true,
      "secret": {
        "secret_key": "edge.identity",
        "secret_name": "spiffe://quickstart.greymatter.io/edge",
        "secret_validation_name": "spiffe://quickstart.greymatter.io",
        "subject_names": [
          "spiffe://quickstart.greymatter.io/fibonacci"
        ],
        "ecdh_curves": [
          "X25519:P-256:P-521:P-384"
        ]
      }
    }
    greymatter create cluster < edge-to-fibonacci-cluster.json
    {
      "zone_key": "zone-default-zone",
      "shared_rules_key": "edge-to-fibonacci-rules",
      "name": "edge-to-fibonacci",
      "default": {
        "light": [
          {
            "constraint_key": "",
            "cluster_key": "edge-to-fibonacci-cluster",
            "metadata": null,
            "properties": null,
            "response_data": {},
            "weight": 1
          }
        ],
        "dark": null,
        "tap": null
      }
    }
    greymatter create shared_rules < edge-to-fibonacci-rules.json
    {
      "zone_key": "zone-default-zone",
      "domain_key": "edge",
      "route_key": "edge-to-fibonacci-route",
      "path": "/services/fibonacci",
      "prefix_rewrite": "/",
      "shared_rules_key": "edge-to-fibonacci-rules"
    }
    {
      "zone_key": "zone-default-zone",
      "domain_key": "edge",
      "route_key": "edge-to-fibonacci-route-slash",
      "path": "/services/fibonacci/",
      "prefix_rewrite": "/",
      "shared_rules_key": "edge-to-fibonacci-rules"
    }
    greymatter create route < edge-to-fibonacci-route.json
    greymatter create route < edge-to-fibonacci-route-slash.json
    kubectl get svc edge
    {
      "clusterName": "fibonacci",
      "zoneName": "zone-default-zone",
      "name": "Fibonacci",
      "version": "1.0",
      "owner": "Decipher",
      "capability": "Tutorial",
      "runtime": "GO",
      "documentation": "/services/fibonacci/",
      "prometheusJob": "fibonacci",
      "minInstances": 1,
      "maxInstances": 2,
      "authorized": true,
      "enableInstanceMetrics": true,
      "enableHistoricalMetrics": true,
      "metricsPort": 8081
    }
curl -k -XPOST --cert <path>/<to>/<certs>/quickstart.crt --key <path>/<to>/<certs>/quickstart.key https://{your-gm-ingress-url}:{your-gm-ingress-port}/services/catalog/latest/clusters -d "@fibonacci-catalog.json"
    {"added": "fibonacci"}
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: simple-service
        greymatter.io/control: simple-service
      name: simple-service
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: simple-service
          greymatter.io/control: simple-service
      template:
        metadata:
          labels:
            app: simple-service
            greymatter.io/control: simple-service
        spec:
          containers:
            - name: service
              image: "zoemccormick/simple-service:latest"
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 8080
              env:
                - name: EGRESS_ROUTE
                  value: http://localhost:10909/catalog/summary
            - name: sidecar
              image: "docker.greymatter.io/release/gm-proxy:1.4.5"
              imagePullPolicy: IfNotPresent
              ports:
                - name: metrics
                  containerPort: 8081
                - name: proxy
                  containerPort: 10808
              env:
                - name: ENVOY_ADMIN_LOG_PATH
                  value: "/dev/stdout"
                - name: PROXY_DYNAMIC
                  value: "true"
                - name: SPIRE_PATH
                  value: "/run/spire/socket/agent.sock"
                - name: XDS_CLUSTER
                  value: "simple-service"
                - name: XDS_HOST
                  value: "control.default.svc"
                - name: XDS_NODE_ID
                  value: "default"
                - name: XDS_PORT
                  value: "50000"
                - name: XDS_ZONE
                  value: "zone-default-zone"
              volumeMounts:
                - name: spire-socket
                  mountPath: /run/spire/socket
                  readOnly: false
          imagePullSecrets:
            - name: docker.greymatter.io
          volumes:
            - name: spire-socket
              hostPath:
                path: /run/spire/socket
                type: DirectoryOrCreate
    kubectl apply -f deployment.yaml
    {
      "zone_key": "zone-default-zone",
      "domain_key": "simple-service-domain",
      "name": "*",
      "port": 10808,
      "force_https": true
    }
    greymatter create domain < ingress-domain.json
    {
      "zone_key": "zone-default-zone",
      "listener_key": "simple-service-listener",
      "domain_keys": [
        "simple-service-domain"
      ],
      "name": "ingress",
      "ip": "0.0.0.0",
      "port": 10808,
      "protocol": "http_auto",
      "secret": {
        "secret_key": "simple-service.identity",
        "secret_name": "spiffe://quickstart.greymatter.io/simple-service",
        "secret_validation_name": "spiffe://quickstart.greymatter.io",
        "subject_names": [
          "spiffe://quickstart.greymatter.io/edge"
        ],
        "ecdh_curves": [
          "X25519:P-256:P-521:P-384"
        ]
      }
    }
    greymatter create listener < ingress-listener.json
    {
      "zone_key": "zone-default-zone",
      "proxy_key": "simple-service-proxy",
      "domain_keys": [
        "simple-service-domain"
      ],
      "listener_keys": [
        "simple-service-listener"
      ],
      "name": "simple-service",
      "listeners": null
    }
    greymatter create proxy < proxy.json
    {
      "zone_key": "zone-default-zone",
      "cluster_key": "edge-to-simple-service-cluster",
      "name": "simple-service",
      "instances": [],
      "require_tls": true,
      "secret": {
        "secret_key": "edge.identity",
        "secret_name": "spiffe://quickstart.greymatter.io/edge",
        "secret_validation_name": "spiffe://quickstart.greymatter.io",
        "subject_names": [
          "spiffe://quickstart.greymatter.io/simple-service"
        ],
        "ecdh_curves": [
          "X25519:P-256:P-521:P-384"
        ]
      }
    }
    {
      "zone_key": "zone-default-zone",
      "cluster_key": "simple-service-cluster",
      "name": "service",
      "instances": [
        {
          "host": "localhost",
          "port": 8080
        }
      ],
      "require_tls": false
    }
    greymatter create cluster < edge-to-simple-service-cluster.json
    greymatter create cluster < simple-service-cluster.json
    {
      "zone_key": "zone-default-zone",
      "shared_rules_key": "edge-to-simple-service-rules",
      "name": "edge-to-simple-service",
      "default": {
        "light": [
          {
            "constraint_key": "",
            "cluster_key": "edge-to-simple-service-cluster",
            "metadata": null,
            "properties": null,
            "response_data": {},
            "weight": 1
          }
        ],
        "dark": null,
        "tap": null
      }
    }
    {
      "zone_key": "zone-default-zone",
      "shared_rules_key": "simple-service-rules",
      "name": "service",
      "default": {
        "light": [
          {
            "constraint_key": "",
            "cluster_key": "simple-service-cluster",
            "metadata": null,
            "properties": null,
            "response_data": {},
            "weight": 1
          }
        ],
        "dark": null,
        "tap": null
      }
    }
    greymatter create shared_rules < edge-to-simple-service-rules.json
    greymatter create shared_rules < simple-service-rules.json
    {
      "zone_key": "zone-default-zone",
      "domain_key": "edge",
      "route_key": "edge-to-simple-service-route",
      "path": "/services/simple-service/",
      "prefix_rewrite": "/",
      "shared_rules_key": "edge-to-simple-service-rules"
    }
    {
      "zone_key": "zone-default-zone",
      "domain_key": "edge",
      "route_key": "edge-to-simple-service-route-slash",
      "path": "/services/simple-service",
      "prefix_rewrite": "/services/simple-service/",
      "shared_rules_key": "edge-to-simple-service-rules"
    }
    {
      "zone_key": "zone-default-zone",
      "domain_key": "simple-service-domain",
      "route_key": "simple-service-route",
      "path": "/",
      "prefix_rewrite": "",
      "shared_rules_key": "simple-service-rules"
    }
    greymatter create route < edge-route.json
    greymatter create route < edge-route-slash.json
    greymatter create route < service-route.json
    {
      "zone_key": "zone-default-zone",
      "domain_key": "simple-service-domain-egress",
      "name": "*",
      "port": 10909,
      "force_https": false,
      "custom_headers": [
        {
          "key": "x-forwarded-proto",
          "value": "https"
        }
      ]
    }
    greymatter create domain < egress-domain.json
    {
      "zone_key": "zone-default-zone",
      "listener_key": "simple-service-listener-egress",
      "domain_keys": [
        "simple-service-domain-egress"
      ],
      "name": "egress",
      "ip": "0.0.0.0",
      "port": 10909,
      "protocol": "http_auto"
    }
    greymatter create listener < egress-listener.json
    {
      "zone_key": "zone-default-zone",
      "cluster_key": "simple-service-to-catalog-cluster",
      "name": "catalog",
      "instances": [],
      "require_tls": true,
      "secret": {
        "secret_key": "simple-service.identity",
        "secret_name": "spiffe://quickstart.greymatter.io/simple-service",
        "secret_validation_name": "spiffe://quickstart.greymatter.io",
        "subject_names": [
          "spiffe://quickstart.greymatter.io/catalog"
        ],
        "ecdh_curves": [
          "X25519:P-256:P-521:P-384"
        ]
      }
    }
    greymatter create cluster < simple-service-to-catalog-cluster.json
    {
      "zone_key": "zone-default-zone",
      "shared_rules_key": "simple-service-to-catalog-rules",
      "name": "simple-service-to-catalog",
      "default": {
        "light": [
          {
            "constraint_key": "",
            "cluster_key": "simple-service-to-catalog-cluster",
            "metadata": null,
            "properties": null,
            "response_data": {},
            "weight": 1
          }
        ],
        "dark": null,
        "tap": null
      }
    }
    greymatter create shared_rules < simple-service-to-catalog-rules.json
    {
      "zone_key": "zone-default-zone",
      "domain_key": "simple-service-domain-egress",
      "route_key": "simple-service-to-catalog-egress-route",
      "path": "/catalog/",
      "prefix_rewrite": "/",
      "shared_rules_key": "simple-service-to-catalog-rules"
    }
    greymatter create route < simple-service-to-catalog-route.json
    greymatter edit listener listener-catalog
      "subject_names": [
        "spiffe://quickstart.greymatter.io/edge",
        "spiffe://quickstart.greymatter.io/simple-service"
      ],
    greymatter edit proxy simple-service-proxy
    {
      "zone_key": "zone-default-zone",
      "proxy_key": "simple-service-proxy",
      "domain_keys": [
        "simple-service-domain",
        "simple-service-domain-egress"
      ],
      "listener_keys": [
        "simple-service-listener",
        "simple-service-listener-egress"
      ],
      "name": "simple-service",
      "listeners": null
    }
    Switching default version to v1.4.2
    Switching completed
    curl https://nexus.greymatter.io/repository/raw/release/gm-cli/greymatter-v1.4.2.tar.gz -u user.name@organization.com > greymatter-v1.4.2.tar.gz
    tar -xvzf greymatter-v1.4.2.tar.gz
    x ./greymatter.linux
    x ./greymatter.exe
    x ./greymatter.osx
    ...
    sudo mv ./greymatter.linux /usr/bin/greymatter
    $ greymatter --version
    
    Grey Matter CLI
     Command Name:                  greymatter
     Version:                       v1.4.2
     Branch:                        release-1.4
     Commit:                        455e5fc
     Built:                         Wed, 08 Jul 2020 18:47:07 UTC by justincely
    Grey Matter Control API
     Version:                       v1.4.2-dev
    export GREYMATTER_API_HOST=services.greymatter.io:443
    export GREYMATTER_API_PREFIX=/services/control-api/latest
    export GREYMATTER_API_SSL=true
    export GREYMATTER_API_INSECURE=true
    export GREYMATTER_API_SSLCERT=/path/to/my.crt
    export GREYMATTER_API_SSLKEY=/path/to/my.key
    export EDITOR=vim # or your preferred editor
      greymatter create --api.host=services.greymatter.io:443 --api.prefix=/services/control-api/latest --api.ssl=true route < route.json
    $ greymatter list zone
    
    [
      {
        "zone_key": "zone-default-zone",
        "name": "default-zone",
        "checksum": "6883b95eb2dbd05e15c54fcd0e5414bcb5a6aee1d3b91ab2d1c6493e4945ff74"
      }
    ]
    Installation of greymatter v1.4.2 successful. To make this your default version, run 'gmenv use 1.4.2'
    apiVersion: v1
    kind: Service
    metadata:
      name: jaeger
      labels:
        app: jaeger
    spec:
      ports:
      - port: 9411
        targetPort: 9411
        name: trace
      - port: 16686
        targetPort: 16686
        name: ui
      selector:
        app: jaeger
      type: NodePort
    
    ---
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: jaeger
    spec:
      selector:
        matchLabels:
          app: jaeger
      replicas: 1
      template:
        metadata:
          labels:
            app: jaeger
        spec:
          containers:
          - name: jaeger
            image: jaegertracing/all-in-one
            imagePullPolicy: Always
            ports:
            - name: trace
              containerPort: 9411
            - name: ui
              containerPort: 16686
            env:
            - name: COLLECTOR_ZIPKIN_HTTP_PORT
              value: "9411"
            - name: QUERY_BASE_PATH
              value: "/apps/trace"
            - name: LOG_LEVEL
              value: "debug"
    kubectl apply -f ./jaeger.yaml
    kubectl edit deployment edge
    - name: TRACING_ENABLED
      value: "true"
    - name: TRACING_ADDRESS
      value: "jaeger"
    - name: TRACING_PORT
      value: "9411"
    greymatter edit listener edge-listener
    "tracing_config": {
      "ingress": true
    }
    kubectl logs -f jaeger-5dc85d4bbd-7whzl
    {"level":"debug","ts":1588867245.6727643,"caller":"handler/thrift_span_handler.go:130","msg":"Zipkin span batch processed by the collector.","span-count":1}
    {"level":"debug","ts":1588867245.6729214,"caller":"app/span_processor.go:148","msg":"Span written to the storage by the collector","trace-id":"e6681bca2d0b0bac","span-id":"e6681bca2d0b0bac"}
    {"level":"debug","ts":1588867250.6756654,"caller":"handler/thrift_span_handler.go:130","msg":"Zipkin span batch processed by the collector.","span-count":1}
    {"level":"debug","ts":1588867250.6758242,"caller":"app/span_processor.go:148","msg":"Span written to the storage by the collector","trace-id":"25cb6d784522de88","span-id":"25cb6d784522de88"}
    {"level":"debug","ts":1588867255.6768801,"caller":"handler/thrift_span_handler.go:130","msg":"Zipkin span batch processed by the collector.","span-count":1}
    {"level":"debug","ts":1588867255.67703,"caller":"app/span_processor.go:148","msg":"Span written to the storage by the collector","trace-id":"615a316d024c403c","span-id":"615a316d024c403c"}
    {"level":"debug","ts":1588867260.6802413,"caller":"handler/thrift_span_handler.go:130","msg":"Zipkin span batch processed by the collector.","span-count":1}
    kubectl port-forward $(kubectl get pod | grep jaeger | cut -d" " -f1) 16686
    greymatter edit listener fibonacci-listener
      "active_http_filters": [
        "gm.observables"
      ],
      "http_filters": {
        "gm_observables": {
          "topic": "fibonacci-topic",
          "eventTopic": "fibonacci-event-topic",
          "logLevel": "debug"
        }
      }
    6:05PM DBG Message publishing to STDOUT; emitFullResponse = false
     Encryption= EncryptionKeyID=0 Filter=Observables Topic=fibonacci-topic
    {
      "zone_key": "zone-default-zone",
      "domain_key": "fibonacci-domain",
      "route_key": "fibonacci-route-37",
      "route_match": {
        "path": "/fibonacci/37",
        "match_type": "exact"
      },
      "filter_metadata": {
        "gm.observables": [
          {
            "key": "emitFullResponse",
            "value": "true"
          }
        ]
      },
      "prefix_rewrite": "",
      "shared_rules_key": "fibonacci-rules"
    }
    greymatter create route < mesh/fibonacci-route-37.json
    6:46PM DBG Message publishing to STDOUT; emitFullResponse = false
     Encryption= EncryptionKeyID=0 Filter=Observables Topic=fibonacci-topic
    6:47PM DBG DecodeHeaders: route based config: changing emitFullResponse from false to true Encryption= EncryptionKeyID=0 Filter=Observables Topic=fibonacci-topic
    6:47PM DBG Message publishing to STDOUT; emitFullResponse = true
     Encryption= EncryptionKeyID=0 Filter=Observables Topic=fibonacci-topic
     helm install agent spire/agent -f global.yaml
    
     helm install fabric fabric --set=global.environment=eks -f global.yaml
     helm install edge edge --set=global.environment=eks --set=edge.ingress.type=LoadBalancer -f global.yaml
     helm install data data --set=global.environment=eks --set=global.waiter.service_account.create=false -f global.yaml
     helm install sense sense --set=global.environment=eks --set=global.waiter.service_account.create=false -f global.yaml
     kubectl get pods
     NAME                                    READY     STATUS      RESTARTS   AGE
     catalog-5b54979554-hs98q                2/2       Running     2          91s
     catalog-init-k29j2                      0/1       Completed   0          91s
     control-887b76d54-gbtq4                 1/1       Running     0          18m
     control-api-0                           2/2       Running     0          18m
     control-api-init-6nk2f                  0/1       Completed   0          18m
     dashboard-7847d5b9fd-t5lr7              2/2       Running     0          91s
     data-0                                  2/2       Running     0          17m
     data-internal-0                         2/2       Running     0          17m
     data-mongo-0                            1/1       Running     0          17m
     edge-6f8cdcd8bb-plqsj                   1/1       Running     0          18m
     internal-data-mongo-0                   1/1       Running     0          17m
     internal-jwt-security-dd788459d-jt7rk   2/2       Running     2          17m
     internal-redis-5f7c4c7697-6mmtv         1/1       Running     0          17m
     jwt-security-859d474bc6-hwhbr           2/2       Running     2          17m
     postgres-slo-0                          1/1       Running     0          91s
     prometheus-0                            2/2       Running     0          59s
     redis-5f5c68c467-j5mwt                  1/1       Running     0          17m
     slo-7c475d8597-7gtfq                    2/2       Running     0          91s
    Contact us at Grey Matter Supportarrow-up-right to reach our team.
    greymatter
    gm-control <command> --help
    ), will also be accepted as an environment variable of the form GM_CONTROL_<COMMAND>_<FLAG>. E.g. the --namespace flag on the kubernetes command would be set in the environment as GM_CONTROL_KUBERNETES_NAMESPACE
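    The flag-to-variable mapping can be sketched in shell. This is a hypothetical helper (flag_to_env is not part of the CLI), assuming the GM_CONTROL_<COMMAND>_<FLAG> convention above, where everything is uppercased and dots and hyphens become underscores:

    ```shell
    # Derive the environment-variable name for a gm-control command flag,
    # following the GM_CONTROL_<COMMAND>_<FLAG> convention: uppercase
    # everything and turn dots and hyphens into underscores.
    flag_to_env() {
      cmd=$(printf '%s' "$1" | tr 'a-z-' 'A-Z_')
      flg=$(printf '%s' "${2#--}" | tr 'a-z.-' 'A-Z__')
      printf 'GM_CONTROL_%s_%s\n' "$cmd" "$flg"
    }

    flag_to_env kubernetes --namespace   # GM_CONTROL_KUBERNETES_NAMESPACE
    ```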

    hashtag
    Additional Features

    • Leaderboard Loggingarrow-up-right

    • Static Resource Definitionarrow-up-right

    hashtag
    Deployment Patterns

    Grey Matter Control is a stateless microservice and supports scaling to whatever cluster size is required. If a Control server goes offline, the data plane (the connected mesh of Sidecars) will continue to function, but without any updates from new configuration or changed instances. When a new Control instance is brought up, it reads the current state of Sidecar instances through service discovery and the current mesh configuration from gm-control-api to get back to the original state (or the new state, if either has progressed since).

    circle-exclamation

    Memory Utilization Requirements

    Each Control server requires ~1.25MB of memory per connected sidecar. This should be reflected in any enforced memory limits to prevent server restart due to OOM errors.
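    As a back-of-the-envelope check, the ~1.25MB-per-sidecar figure can be turned into a memory limit. A sketch, assuming a hypothetical mesh of 2000 connected sidecars:

    ```shell
    # Estimate the minimum memory limit for one Control server,
    # using the ~1.25MB-per-connected-sidecar figure above.
    sidecars=2000        # hypothetical mesh size
    per_sidecar_mb=1.25
    awk -v n="$sidecars" -v m="$per_sidecar_mb" \
      'BEGIN { printf "minimum memory limit: %.0fMB\n", n * m }'
    # minimum memory limit: 2500MB
    ```

    Leave headroom above this figure for the server's own baseline usage before setting an enforced limit.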

    hashtag
    Single-Node Deployment

    In a single-node deployment, the one gm-control instance will do service discovery, read mesh configuration from gm-control-api, and serve data plane configuration to all Sidecars in the mesh.

    hashtag
    N-Node Deployment

    When multiple Control servers are deployed, they should be provisioned behind a Load Balancer. This will distribute the load of connected Sidecars across all instances of the service, making each one responsible for serving configs out to only a fraction of the data plane.

    Even with multiple Control servers, each one still independently performs service discovery and reads mesh configuration from gm-control-api. This allows each gm-control node to see the mesh in its entirety, and to take over control of additional sidecars should a node go down. If this does happen, existing sidecars will shift their connections to a live instance and continue to receive data plane updates.

    hashtag
    Multiple server deployment for mixed service discovery

    Multiple Control servers can also be deployed to use multiple different service discovery mechanisms for a single mesh. This allows services from different environments to be discovered and interact with each other.

    In this use case, no load balancer will be used, because each Control server will operate independently of any other(s), connecting to the same Grey Matter Control API.

    hashtag
    Dependencies and Requirements

    | Version | Dependency Name | Dependency Version |
    | --- | --- | --- |
    | 1.x | gm-control-api | 1.x |

    gm-control can stream valid instructions to envoy-compatible proxies of v1.x

    variety of mechanismsarrow-up-right
    usage documentationarrow-up-right
  • To do this, install using the Grey Matter Helm Charts with global.spire.enabled true

  • SPIREarrow-up-right
    security documentation
    Zero-Trustchevron-right
    SPIRE Serverarrow-up-right
    SPIRE Agentarrow-up-right
    install Grey Matter
    the service deployment guide
    Deploy Service for Ingress/Egress Actions Guide
    ssl_config
    the example domain here
    secret
    cluster label
    Shared rules
    routes
    Grey Matter Supportarrow-up-right
    Submitting them to a given Elasticsearch index
    , use it.
  • Extrapolate the name from the x-envoy-original-path header if the request URL contains /apps/{value}/ or /services/{value}/.

  • Hard code a default. For instance, you could use service if none of the previous parameters are set.
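  Taken together, the fallback chain above can be sketched in shell. This is a hypothetical helper (service_name is illustrative, not part of Grey Matter), assuming the name is the path segment immediately after /apps/ or /services/:

  ```shell
  # Resolve a service name from a request path, mirroring the fallback
  # chain above: take the segment after /apps/ or /services/ when present,
  # otherwise fall back to the hard-coded default "service".
  service_name() {
    case "$1" in
      /apps/*/*|/services/*/*) printf '%s\n' "$1" | cut -d/ -f3 ;;
      *)                       echo "service" ;;
    esac
  }

  service_name /services/fibonacci/1.0/   # fibonacci
  service_name /healthz                   # service
  ```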

  • https://docs.docker.com/install/arrow-up-right
    https://docs.docker.com/compose/install/arrow-up-right
    Configure Audits
    configure audits
    Configure Auditschevron-right
    Grey Matter Supportarrow-up-right
    What individual users are doing
  • Timing of user requests

  • What users are looking at

  • userDNs (Authenticated user names)

  • Geographic location of IP addresses

  • Requests per hour by user

  • Response codes

  • Paths

  • Service vs. userDN

  • Services

  • Response bodies

  • User agents

  • Configure Auditschevron-right
    Visualize Auditschevron-right
    Configure audits
    Set up the Audit Proxy Observable Consumer (APOC) code
    Learn more.arrow-up-right
    Grey Matter Supportarrow-up-right
    If the 404 is only for the correct URL for your service (e.g. /services/fibonacci/1.0/) then a missing route configuration is the most likely cause. Check that the response from control when you did your greymatter create route < 2_sidecar/route.json looks correct.

    hashtag
    “No healthy upstream”

    A request returning no healthy upstream indicates that the Sidecar responsible for handling this request knows which service it should be sent to, but there is no healthy instance up and running.

    To verify this, send a request to the /clusters endpoint of the Sidecar's admin server. A healthy service will have both a named host and valid IP addresses. If you see no IP addresses, like the example below, then this is indeed the issue.
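    One way to automate that check — a sketch assuming Envoy's /clusters text format, where instance entries appear on lines like my-service::10.0.0.12:8080::health_flags::healthy, and operating on a saved copy of the output rather than a live admin endpoint:

    ```shell
    # Detect the "no healthy upstream" signature in saved /clusters output:
    # config stats are present, but no instance address lines at all.
    clusters_output='my-service::default_priority::max_connections::1024
    my-service::added_via_api::true'

    # An instance line embeds an ip:port between the :: delimiters.
    if printf '%s\n' "$clusters_output" | grep -Eq '::[0-9]+(\.[0-9]+){3}:[0-9]+::'; then
      echo "instances registered"
    else
      echo "no instance addresses found"
    fi
    # no instance addresses found
    ```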

    If you just deployed your service, wait a few minutes. To avoid network chattiness, the control plane enforces a minimum delay between configuration updates (the --delay flag, 30s by default). If the error is still there after a few minutes, the network chain to this service needs to be inspected.

    Typically:

    1. Verify that the service is announcing with the correct labels.
    2. Verify that the Sidecar deployed with the service has XDS_CLUSTER set to match the proxy API object's name field.
    3. Verify that the API cluster objects that should reference this service have their name field set to match the above fields.
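    The three checks reduce to one invariant: the same name in three places. A minimal sketch, with hypothetical values standing in for what you would read from the deployment and the API objects:

    ```shell
    # The three fields that must agree for traffic to flow:
    announce_label="simple-service"   # label the service announces with
    xds_cluster="simple-service"      # XDS_CLUSTER env var on the sidecar
    cluster_name="simple-service"     # "name" field of the API cluster object

    if [ "$announce_label" = "$xds_cluster" ] && [ "$xds_cluster" = "$cluster_name" ]; then
      echo "names agree: $cluster_name"
    else
      echo "mismatch: $announce_label / $xds_cluster / $cluster_name" >&2
    fi
    # names agree: simple-service
    ```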

    hashtag
    “Upstream connect error or disconnect/reset before headers”

    A request returning Upstream connect error or disconnect/reset before headers indicates that the Sidecar responsible for handling this request both knows what service should receive it and has a healthy instance to send it to. However, in this case the service did not send a response.

    The most likely cause is that the microservice at the end of the request path is not yet up and running. Though the network path is set up, if the end server is still booting this error will be returned until the server is ready to accept requests.

    A less common cause is that the IP address sent to one of the Sidecars along the network path is stale. This could be from incorrect service announcement or incorrect instance resolution behind a load balancer. Check the IP addresses returned from the Sidecar's admin endpoints against the service registry (k8s, consul, dc/os, aws, etc.) to confirm.

    hashtag
    Questions

    circle-check

    Need help troubleshooting?

    Create an account at Grey Matter Supportarrow-up-right to reach our team.

    The aws collector is used to discover service instances from AWS EC2 instances using cluster tags and instance filters. This collector requires AWS access credentials to utilize the AWS APIs.

    hashtag
    Usage

    gm-control [GLOBAL OPTIONS] aws [OPTIONS]

    hashtag
    Help

    To list the available commands, run with the global help flag, gm-control aws --help:

    The marathon collector is used to discover service instances from the Marathon API using cluster labels.

    hashtag
    Usage

    gm-control [GLOBAL OPTIONS] marathon [OPTIONS]

    hashtag
    Help

    To list the available commands, run with the global help flag, gm-control marathon --help:

    | Environment Variable | CLI Flag | Meaning | Type | Example |
    | --- | --- | --- | --- | --- |
    | GM_CONTROL_XDS_GRPC_LOG_TOP_INTERVAL | --xds.grpc-log-top-interval | How often leaderboards are logged and counts reset | Duration | 5m3s |
    | GM_CONTROL_XDS_GRPC_LOG_TOP | --xds.grpc-log-top | How many unique requests are logged | integer | 5 |

    Leaderboarding can be disabled by setting GM_CONTROL_XDS_GRPC_LOG_TOP to 0.
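    For example, to keep leaderboard logging on but tune it (the values here are illustrative), or to switch it off entirely:

    ```shell
    # Log the top 5 unique requests, resetting counts every 5 minutes.
    export GM_CONTROL_XDS_GRPC_LOG_TOP=5
    export GM_CONTROL_XDS_GRPC_LOG_TOP_INTERVAL=5m

    # Or disable leaderboard logging altogether:
    # export GM_CONTROL_XDS_GRPC_LOG_TOP=0
    ```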

    hashtag
    Schema

    Leaderboards are logged to standard out in the format:

    Example:


    - name: SPIRE_PATH
      value: "/run/spire/socket/agent.sock"
    volumes:
      - name: spire-socket
        hostPath:
          path: /run/spire/socket
          type: DirectoryOrCreate
    volumeMounts:
      - name: spire-socket
        mountPath: /run/spire/socket
        readOnly: false
    "secret": {
      "secret_key": "{service-name}-secret",
      "secret_name": "spiffe://quickstart.greymatter.io/{service-name}",
      "secret_validation_name": "spiffe://quickstart.greymatter.io",
      "subject_names": [
        "spiffe://quickstart.greymatter.io/edge"
      ],
      "ecdh_curves": [
        "X25519:P-256:P-521:P-384"
      ]
    }
    "secret": {
      "secret_key": "secret-edge-secret",
      "secret_name": "spiffe://quickstart.greymatter.io/edge",
      "secret_validation_name": "spiffe://quickstart.greymatter.io",
      "subject_names": [
        "spiffe://quickstart.greymatter.io/{service-name}"
      ],
      "ecdh_curves": [
        "X25519:P-256:P-521:P-384"
      ]
    }
    kubectl exec -it data-internal-0 -c data-internal -- /bin/sh
    openssl s_client --connect {IP}:10808
    openssl s_client --connect {IP}:10808 | openssl x509 -text --noout
    {
      "GET": "ACCESS",
      "POST": "CREATE",
      "DELETE": "REMOVE",
      "PUT": "MODIFY"
    }
    EVENT_TYPE_MAPPINGS:
     GET:
       - uri: "/activities"
         eventType: "EventSearchQry"
     POST:
       - uri: "/analyses/pos"
         eventType: "EventAccess"
    {
     "_index": "audit",
     "_type": "_doc",
     "_id": "FvUJ2GsBQetsYfWuW1Ab",
     "_score": 1,
     "_source": {
       "eventId": "00f4b3e4-a279-11e9-b433-0a580a82025d",
       "eventChain": [
         "00f4b3e4-a279-11e9-b433-0a580a82025d"
       ],
       "schemaVersion": "1.0",
       "originatorToken": [
         "cn=minos.kepheus, dc=hellas, dc=com",
         "CN=*.greymatter.svc.cluster.local,OU=Engineering,O=Decipher Technology Studios,=Alexandria,=Virginia,C=US",
         "CN=*.greymatter.svc.cluster.local,OU=Engineering,O=Decipher Technology Studios,=Alexandria,=Virginia,C=US"
       ],
       "eventType": "fibonacci",
       "timestamp": 1562697620,
       "xForwardedForIp": "15.188.27.135,10.129.2.140",
       "systemIp": "10.130.2.93",
       "action": "GET",
       "payload": {
         "isSuccessful": true,
         "request": {
           "endpoint": "/fibonacci/18",
           "headers": {
             ":authority": "demo-oauth.production.deciphernow.com",
             ":method": "GET",
             ":path": "/fibonacci/18",
             "accept-encoding": "gzip",
             "content-length": "0",
             "cookie": "OauthExpires=1562757619; OauthSignature=0OgHLzHBxSUdNk557aKWeYW9jrg; OauthUserDN=cn%3Dminos.kepheus%2C+dc%3Dhellas%2C+dc%3Dcom",
             "external_sys_dn": "CN=*.greymatter.svc.cluster.local,OU=Engineering,O=Decipher Technology Studios,=Alexandria,=Virginia,C=US",
             "forwarded": "for=15.188.27.135;host=demo-oauth.production.deciphernow.com;proto=https;proto-version=",
             "ssl_client_s_dn": "CN=*.greymatter.svc.cluster.local,OU=Engineering,O=Decipher Technology Studios,=Alexandria,=Virginia,C=US",
             "user-agent": "Go-http-client/1.1",
             "user_dn": "cn=minos.kepheus, dc=hellas, dc=com",
             "x-envoy-original-path": "/services/fibonacci/1.0.0/fibonacci/18",
             "x-forwarded-for": "15.188.27.135,10.129.2.140",
             "x-forwarded-host": "demo-oauth.production.deciphernow.com",
             "x-forwarded-port": "443",
             "x-forwarded-proto": "https",
             "x-real-ip": "10.129.2.140",
             "x-request-id": "234aff6f-e376-41d5-89b8-1aa6dd0bbf4f"
           }
         },
         "response": {
           "code": 200,
           "headers": {
             ":status": "200",
             "content-length": "5",
             "content-type": "text/plain; charset=utf-8",
             "date": "Tue, 09 Jul 2019 18:40:20 GMT",
             "x-envoy-upstream-service-time": "0"
           },
           "body": "2584\n"
         }
       },
       "event_mapping": {
         "type": "EventAccess",
         "action": "ACCESS"
       },
       "time_audited": "20190709T184020.249380",
       "geo_ip": {
         "accuracy_radius": 1000,
         "latitude": 48.8607,
         "longitude": 2.3281,
         "time_zone": "Europe/Paris"
       },
       "location": {
         "lat": 48.8607,
         "lon": 2.3281
       }
     },
     "fields": {
       "payload.response.headers.date": [
         "2019-07-09T18:40:20.000Z"
       ],
       "time_audited": [
         "2019-07-09T18:40:20.249Z"
       ]
     }
    }
    my-service::default_priority::max_connections::1024
    my-service::default_priority::max_pending_requests::1024
    my-service::default_priority::max_requests::1024
    my-service::default_priority::max_retries::3
    my-service::high_priority::max_connections::1024
    my-service::high_priority::max_pending_requests::1024
    my-service::high_priority::max_requests::1024
    my-service::high_priority::max_retries::3
    my-service::added_via_api::true
    $ ./gm-control aws --help
    NAME
        aws - aws collector
    
    USAGE
        gm-control [GLOBAL OPTIONS] aws [OPTIONS]
    
    VERSION
        1.0.3-dev
    
    DESCRIPTION
        Connects to the AWS API in a given region and updates Clusters stored in the Greymatter API at startup and periodically thereafter.
    
        EC2 instance tags are used to determine to which clusters an instance belongs. An EC2 instance may belong to multiple clusters, serving traffic on multiple ports. Cluster membership on a port is declared with a tag, of the form:
    
            "<namespace>:<cluster-name>:<port>"=""
    
        The port must be numeric, and the cluster name cannot contain the delimiter. The delimiter is ":" and the default namespace is "gm:cluster".
    
        Tags of the following form will be added to the Instance in the appropriate Cluster, as "<key>"="<value>":
    
            "<namespace>:<cluster-name>:<port>:<key>"="<value>"
    
        If key/value tags are included, the cluster membership tag is optional.
    
        Tags without the namespaced cluster/port prefix will be added to all Instances in all Clusters to which the EC2 Instance belongs.
    
        By default, all EC2 Instances in the VPC are examined, but additional filters can be specified (see -filters).
    
        Additionally, by default if AWS credentials are not passed via cli then the AWS's Go SDK will fall back to its default credential chain. This first pulls from the environment then falls back to the task role and finally the instance profile role.
    
    GLOBAL OPTIONS
        --api.header=header
                Specifies a custom header to send with every gm-control request. Headers are given as name:value pairs. Leading and trailing whitespace will be stripped from the name and value. For multiple headers, this flag may be repeated or multiple headers can be
                delimited with commas.
    
        --api.host=host:port
                (default: localhost:80)
                The address (host:port) for gm-control requests. If no port is given, it defaults to port 443 if --api.ssl is true and port 80 otherwise.
    
        --api.insecure
                (default: false)
                If true, don't validate server cert when using SSL for gm-control requests
    
        --api.key=string
                (default: "none")
                [SENSITIVE] The auth key for gm-control requests
    
        --api.prefix=value
                The url prefix for gm-control requests. Forms the path part of <host>:<port><path>
    
        --api.ssl
                (default: true)
                If true, use SSL for gm-control requests
    
        --api.sslCert=value
                Specifies the SSL cert to use for every gm-control request.
    
        --api.sslKey=value
                Specifies the SSL key to use for every gm-control request.
    
        --api.zone-name=string
                The name of the API Zone for gm-control requests.
    
        --console.level=level
                (default: "info")
                (valid values: "debug", "info", "error", or "none")
                Selects the log level for console logs messages.
    
        --delay=duration
                (default: 30s)
                Sets the minimum time between API updates. If the discovery data changes more frequently than this duration, updates are delayed to maintain the minimum time.
    
        --diff.dry-run
                (default: false)
                Log changes at the info level rather than submitting them to the API
    
        --diff.ignore-create
                (default: false)
                If true, do not create new Clusters in the API
    
        --diff.include-delete
                (default: false)
                If true, delete missing Clusters from the API
    
        --help  (default: false)
                Show a list of commands or help for one command
    
        --stats.api.header=header
                Specifies a custom header to send with every stats API request. Headers are given as name:value pairs. Leading and trailing whitespace will be stripped from the name and value. For multiple headers, this flag may be repeated or multiple headers can be
                delimited with commas.
    
        --stats.api.host=host:port
                (default: localhost:80)
                The address (host:port) for stats API requests. If no port is given, it defaults to port 443 if --stats.api.ssl is true and port 80 otherwise.
    
        --stats.api.insecure
                (default: false)
                If true, don't validate server cert when using SSL for stats API requests
    
        --stats.api.prefix=value
                The url prefix for stats API requests. Forms the path part of <host>:<port><path>
    
        --stats.api.ssl
                (default: true)
                If true, use SSL for stats API requests
    
        --stats.api.sslCert=value
                Specifies the SSL cert to use for every stats API request.
    
        --stats.api.sslKey=value
                Specifies the SSL key to use for every stats API request.
    
        --stats.backends=value
                (valid values: "dogstatsd", "prometheus", "statsd", or "wavefront")
                Selects which stats backend(s) to use.
    
        --stats.batch
                (default: true)
                If true, stats requests are batched together for performance.
    
        --stats.dogstatsd.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.dogstatsd.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.dogstatsd.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.dogstatsd.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.dogstatsd.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.dogstatsd.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.dogstatsd.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
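The three latch flags above define the histogram bucketing arithmetic: the first bucket's upper bound is `base-value`, and each subsequent bound doubles. A sketch of that arithmetic with the documented defaults (the actual implementation is not shown in this help text):

```python
def latch_bucket_bounds(base_value=0.001, buckets=20):
    """Upper bounds of the latched histogram buckets: the first is
    base_value, and each subsequent bound is double the previous one."""
    bounds = [base_value]
    for _ in range(buckets - 1):
        bounds.append(bounds[-1] * 2)
    return bounds

bounds = latch_bucket_bounds()
# With the defaults, the last bucket's upper bound is
# 0.001 * 2**19 = 524.288 (interpreted as seconds for timings).
```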
    
        --stats.dogstatsd.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.dogstatsd.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.dogstatsd.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.dogstatsd.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
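The transformation semantics described above (match a tag's value against a regex, emit one new tag per subexpression, replace the original only if one of the output names is the original tag name) can be sketched in a few lines. This is an illustration of the documented behavior using the first example rule, not the tool's actual code; the sample tag values are made up:

```python
import re

def transform_tag(tags, name, regex, out_names):
    """Apply one documented transform-tags rule to a dict of tags.
    If the named tag's value matches the regex, each subexpression
    becomes a tag named by the corresponding entry in out_names."""
    value = tags.get(name)
    if value is None:
        return dict(tags)
    m = re.search(regex, value)
    if m is None:
        return dict(tags)  # no match: tags pass through unchanged
    result = dict(tags)
    for out_name, group in zip(out_names, m.groups()):
        result[out_name] = group
    return result

# Equivalent of the first example rule: foo=/^(.+):.*x=([0-9]+)/,foo,bar
print(transform_tag({"foo": "svc-a:extra x=42", "env": "prod"},
                    "foo", r"^(.+):.*x=([0-9]+)", ["foo", "bar"]))
# {'foo': 'svc-a', 'env': 'prod', 'bar': '42'}
```

Because `foo` appears among the output names, the original `foo` tag is replaced with the first matching group; `env` is untouched.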
    
        --stats.event-backends=value
                (valid values: "console" or "honeycomb")
                Selects which stats backend(s) to use for structured events.
    
        --stats.exec.attempt-timeout=duration
                (default: 1s)
                Specifies the default timeout for individual action attempts. A timeout of 0 means no timeout.
    
        --stats.exec.delay=duration
                (default: 100ms)
                Specifies the initial delay for the exponential delay type. Specifies the delay for constant delay type.
    
        --stats.exec.delay-type=value
                (default: "exponential")
                (valid values: "constant" or "exponential")
                Specifies the retry delay type.
    
        --stats.exec.max-attempts=int
                (default: 8)
                Specifies the maximum number of attempts made, inclusive of the original attempt.
    
        --stats.exec.max-delay=duration
                (default: 30s)
                Specifies the maximum delay for the exponential delay type. Ignored for the constant delay type.
    
        --stats.exec.parallelism=int
                (default: 8)
                Specifies the maximum number of concurrent attempts running.
    
        --stats.exec.timeout=duration
                (default: 10s)
                Specifies the default timeout for actions. A timeout of 0 means no timeout.
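Taken together, the `--stats.exec.*` delay flags describe a standard capped exponential (or constant) retry backoff. A sketch of the resulting delay schedule under the documented defaults — the tool's real scheduler is not shown in this help text, so treat this as an illustration of the flag semantics only:

```python
def retry_delays(delay=0.1, max_delay=30.0, max_attempts=8,
                 delay_type="exponential"):
    """Delay (seconds) before each retry. max_attempts is inclusive of
    the original attempt, so there are at most max_attempts - 1 retries.
    Exponential delays double each retry, capped at max_delay; the
    constant type always waits `delay`."""
    delays = []
    for retry in range(max_attempts - 1):
        if delay_type == "constant":
            d = delay
        else:
            d = min(delay * (2 ** retry), max_delay)
        delays.append(d)
    return delays

print(retry_delays())
# [0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4]
```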
    
        --stats.honeycomb.api-host=string
                (default: "https://api.honeycomb.io")
                The Honeycomb API host to send messages to
    
        --stats.honeycomb.batchSize=uint
                (default: 50)
                The Honeycomb batch size to use
    
        --stats.honeycomb.dataset=string
                The Honeycomb dataset to send messages to.
    
        --stats.honeycomb.sample-rate=uint
                (default: 1)
                The Honeycomb sample rate to use. Specified as 1 event sent per Sample Rate
    
        --stats.honeycomb.write-key=string
                The Honeycomb write key used to send messages.
    
        --stats.max-batch-delay=duration
                (default: 1s)
                If batching is enabled, the maximum amount of time requests are held before transmission
    
        --stats.max-batch-size=int
                (default: 100)
                If batching is enabled, the maximum number of requests that will be combined.
    
        --stats.node=string
                If set, specifies the node to use when submitting stats to backends. Equivalent to adding "--stats.tags=node=value" to the command line.
    
        --stats.prometheus.addr=value
                (default: 0.0.0.0:9102)
                Specifies the listener address for Prometheus scraping.
    
        --stats.prometheus.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.source=string
                If set, specifies the source to use when submitting stats to backends. Equivalent to adding "--stats.tags=source=value" to the command line. In either case, a UUID is appended to the value to ensure that it is unique across proxies. Cannot be combined
                with --stats.unique-source.
    
        --stats.statsd.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.statsd.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.statsd.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.statsd.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.statsd.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.statsd.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.statsd.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
    
        --stats.statsd.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.statsd.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.statsd.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.statsd.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
    
        --stats.tags=value
                Tags to be included with every stat. May be comma-delimited or specified more than once. Should be of the form "<key>=<value>" or "tag"
    
        --stats.unique-source=string
                If set, specifies the source to use when submitting stats to backends. Equivalent to adding "--stats.tags=source=value" to the command line. Unlike --stats.source, failing to specify a unique value may prevent stats from being recorded correctly. Cannot
                be combined with --stats.source.
    
        --stats.wavefront.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.wavefront.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.wavefront.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.wavefront.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.wavefront.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.wavefront.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.wavefront.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
    
        --stats.wavefront.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.wavefront.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.wavefront.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.wavefront.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
    
        --version
                (default: false)
                Print the version and exit
    
        --xds.addr=value
                (default: :50000)
                The address on which to serve the envoy API server.
    
        --xds.ads-enabled
                (default: true)
                If false, turn off ads discovery mode
    
        --xds.ca-file=string
                Path to a file (on the Envoy host's file system) containing CA certificates for TLS.
    
        --xds.default-timeout=duration
                (default: 1m0s)
                The default request timeout, if none is specified in the RetryPolicy for a Route
    
        --xds.disabled
                (default: false)
                Disables the xDS listener.
    
        --xds.enable-tls
                (default: false)
                Enable grpc xDS TLS
    
        --xds.grpc-log-top=int
                (default: 0)
                When gRPC logging is enabled and this value is greater than 1, logs of non-success Envoy responses are tracked and periodically reported. This flag controls how many unique response code & request path combinations are tracked. When the number of
                tracked combinations in the reporting period is exceeded, uncommon paths are evicted.
    
        --xds.grpc-log-top-interval=duration
                (default: 5m0s)
                See the grpc-log-top flag. Controls the interval at which top logs are generated.
    
        --xds.interval=duration
                (default: 1s)
                The interval for polling the Greymatter API. Minimum value is 500ms
    
        --xds.resolve-dns
                (default: true)
                If true, resolve EDS hostnames to IP addresses.
    
        --xds.server-auth-type=string
                TLS client authentication type
    
        --xds.server-cert=string
                URL containing the server certificate for the grpc ADS server
    
        --xds.server-key=string
                URL containing the server certificate key for the grpc ADS server
    
        --xds.server-trusts=string
                Comma-delimited URLs containing truststores for the grpc ADS server
    
        --xds.standalone-cluster=string
                (default: "default-cluster")
                The name of the cluster for the Envoys consuming the standalone xDS server. Should match the --service-cluster flag for the envoy binary, or the ENVOY_NODE_CLUSTER value for the envoy-simple Docker image.
    
        --xds.standalone-port=int
                (default: 80)
                The port on which Envoys consuming the standalone xDS server should listen. Ignored if --api.key is specified.
    
        --xds.standalone-zone=string
                (default: "default-zone")
                The name of the zone for the Envoys consuming the standalone xDS server. Should match the --service-zone flag for the envoy binary, or the ENVOY_NODE_ZONE value for the envoy-simple Docker image.
    
        --xds.static-resources.conflict-behavior=value
                (default: "merge")
                (valid values: "overwrite" or "merge")
                How to handle conflicts between configuration types. If "overwrite", configuration types overwrite defaults. For example, if one were to include "listeners" in the static resources configuration file, all existing listeners would be overwritten. If the
                value is "merge", listeners would be merged together, with collisions favoring the statically configured listener. Clusters are differentiated by name, while listeners are differentiated by IP/port. Listeners on 0.0.0.0 (or ::) on a given port will
                collide with any other IP with the same port. Specifying colliding static resources will produce a startup error.
    
        --xds.static-resources.filename=string
                Path to a file containing static resources. The contents of the file should be either a JSON or YAML fragment (as configured by the corresponding --format flag) containing any combination of "clusters" (an array of
                https://www.envoyproxy.io/docs/envoy/v1.13.1/api-v2/api/v2/cluster.proto), "cluster_template" (a single cluster, which will be used as the prototype for all clusters not specified statically), and/or "listeners" (an array of
                https://www.envoyproxy.io/docs/envoy/v1.13.1/api-v2/api/v2/listener.proto). The file is read once at startup. Only the v2 API is parsed. Enum strings such as "ROUND_ROBIN" must be capitalized.
    
        --xds.static-resources.format=value
                (default: "yaml")
                (valid values: "json" or "yaml")
                The format of the static resources file
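The static-resources flags above describe the file's shape but don't show one. Here is a hypothetical YAML fragment for `--xds.static-resources.filename`: field names follow the Envoy v2 Cluster proto linked above, while the cluster name, hostname, and port are made-up examples. Note the capitalized enum strings (`STRICT_DNS`, `ROUND_ROBIN`), as the help text requires:

```yaml
# Hypothetical static resources file (YAML format, the default).
clusters:
  - name: metrics-backend            # example name, not a real cluster
    type: STRICT_DNS                 # enums must be capitalized
    connect_timeout: 1s
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: metrics-backend
      endpoints:
        - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: metrics.internal   # example hostname
                    port_value: 9090
```

With the default "merge" conflict behavior, a statically defined cluster like this one is merged with discovered configuration, and collisions favor the static definition.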
    
        Global options can also be configured via upper-case, underscore-delimited environment variables prefixed with "GM_CONTROL_". For example, "--some-flag" becomes "GM_CONTROL_SOME_FLAG". Command-line flags take precedence over environment variables.
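The flag-to-environment-variable mapping described above can be sketched as a small conversion function. The dash example (`--some-flag` → `GM_CONTROL_SOME_FLAG`) comes straight from the help text; treating dots in flag names the same way (e.g. `--xds.addr`) is an assumption, since the text only shows a dashed flag:

```python
def flag_to_env(flag, prefix="GM_CONTROL_"):
    """Map a command-line flag to its documented environment variable:
    strip leading dashes, upper-case, and replace separators with '_'.
    The '.'-to-'_' rule for dotted flags is an assumption."""
    name = flag.lstrip("-").upper()
    for ch in "-.":
        name = name.replace(ch, "_")
    return prefix + name

print(flag_to_env("--some-flag"))   # GM_CONTROL_SOME_FLAG
print(flag_to_env("--xds.addr"))    # GM_CONTROL_XDS_ADDR (assumed)
```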
    
    OPTIONS
        --aws.access-key-id=string
                [SENSITIVE] The AWS API access key ID
    
        --aws.region=string
                [REQUIRED] The AWS region in which the binary is running
    
        --aws.secret-access-key=string
                [SENSITIVE] The AWS API secret access key
    
        --cluster-tag-namespace=string
                (default: "gm:cluster")
                The namespace for cluster tags
    
        --filters="<key>=<value>,..."
                A comma-delimited list of key/value pairs, used to specify additional EC2 Instances filters. Of the form "<key>=<value>,...". See http://goo.gl/kSCOHS for a discussion of available filters.
    
        --help  (default: false)
                Show a list of commands or help for one command
    
        --version
                (default: false)
                Print the version and exit
    
        --vpc-id=string
                [REQUIRED] The ID of the VPC in which gm-control is running
    
        Options can also be configured via upper-case, underscore-delimited environment variables prefixed with "GM_CONTROL_AWS_". For example, "--some-flag" becomes "GM_CONTROL_AWS_SOME_FLAG". Command-line flags take precedence over environment variables.
    $ gm-control marathon --help
    NAME
        marathon - marathon collector
    
    USAGE
        gm-control [GLOBAL OPTIONS] marathon [OPTIONS]
    
    VERSION
        1.0.3-dev
    
    DESCRIPTION
        Connects to a Marathon API server and updates Clusters stored in the Greymatter API at startup and periodically thereafter.
    
        Application labels are used to determine which API cluster a particular task belongs to. The default label name is "gm_cluster", but can be overridden by a flag (see -cluster-label). By default all applications are watched, but you may also provide a label
        selector.
    
        Each task is examined for service ports. The first exposed TCP port is used. If no ports are exposed, the task is ignored.
    
        All application labels besides the cluster label are captured as instance metadata for routing.
    
    GLOBAL OPTIONS
        --api.header=header
                Specifies a custom header to send with every gm-control request. Headers are given as name:value pairs. Leading and trailing whitespace will be stripped from the name and value. For multiple headers, this flag may be repeated or multiple headers can be
                delimited with commas.
    
        --api.host=host:port
                (default: localhost:80)
                The address (host:port) for gm-control requests. If no port is given, it defaults to port 443 if --api.ssl is true and port 80 otherwise.
    
        --api.insecure
                (default: false)
                If true, don't validate server cert when using SSL for gm-control requests
    
        --api.key=string
                (default: "none")
                [SENSITIVE] The auth key for gm-control requests
    
        --api.prefix=value
                The url prefix for gm-control requests. Forms the path part of <host>:<port><path>
    
        --api.ssl
                (default: true)
                If true, use SSL for gm-control requests
    
        --api.sslCert=value
                Specifies the SSL cert to use for every gm-control request.
    
        --api.sslKey=value
                Specifies the SSL key to use for every gm-control request.
    
        --api.zone-name=string
                The name of the API Zone for gm-control requests.
    
        --console.level=level
                (default: "info")
                (valid values: "debug", "info", "error", or "none")
                Selects the log level for console logs messages.
    
        --delay=duration
                (default: 30s)
                Sets the minimum time between API updates. If the discovery data changes more frequently than this duration, updates are delayed to maintain the minimum time.
    
        --diff.dry-run
                (default: false)
                Log changes at the info level rather than submitting them to the API
    
        --diff.ignore-create
                (default: false)
                If true, do not create new Clusters in the API
    
        --diff.include-delete
                (default: false)
                If true, delete missing Clusters from the API
    
        --help  (default: false)
                Show a list of commands or help for one command
    
        --stats.api.header=header
                Specifies a custom header to send with every stats API request. Headers are given as name:value pairs. Leading and trailing whitespace will be stripped from the name and value. For multiple headers, this flag may be repeated or multiple headers can be
                delimited with commas.
    
        --stats.api.host=host:port
                (default: localhost:80)
                The address (host:port) for stats API requests. If no port is given, it defaults to port 443 if --stats.api.ssl is true and port 80 otherwise.
    
        --stats.api.insecure
                (default: false)
                If true, don't validate server cert when using SSL for stats API requests
    
        --stats.api.prefix=value
                The url prefix for stats API requests. Forms the path part of <host>:<port><path>
    
        --stats.api.ssl
                (default: true)
                If true, use SSL for stats API requests
    
        --stats.api.sslCert=value
                Specifies the SSL cert to use for every stats API request.
    
        --stats.api.sslKey=value
                Specifies the SSL key to use for every stats API request.
    
        --stats.backends=value
                (valid values: "dogstatsd", "prometheus", "statsd", or "wavefront")
                Selects which stats backend(s) to use.
    
        --stats.batch
                (default: true)
                If true, stats requests are batched together for performance.
    
        --stats.dogstatsd.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.dogstatsd.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.dogstatsd.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.dogstatsd.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.dogstatsd.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.dogstatsd.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.dogstatsd.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
    
        --stats.dogstatsd.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.dogstatsd.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.dogstatsd.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.dogstatsd.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
    
        --stats.event-backends=value
                (valid values: "console" or "honeycomb")
                Selects which stats backend(s) to use for structured events.
    
        --stats.exec.attempt-timeout=duration
                (default: 1s)
                Specifies the default timeout for individual action attempts. A timeout of 0 means no timeout.
    
        --stats.exec.delay=duration
                (default: 100ms)
                Specifies the initial delay for the exponential delay type. Specifies the delay for constant delay type.
    
        --stats.exec.delay-type=value
                (default: "exponential")
                (valid values: "constant" or "exponential")
                Specifies the retry delay type.
    
        --stats.exec.max-attempts=int
                (default: 8)
                Specifies the maximum number of attempts made, inclusive of the original attempt.
    
        --stats.exec.max-delay=duration
                (default: 30s)
                Specifies the maximum delay for the exponential delay type. Ignored for the constant delay type.
    
        --stats.exec.parallelism=int
                (default: 8)
                Specifies the maximum number of concurrent attempts running.
    
        --stats.exec.timeout=duration
                (default: 10s)
                Specifies the default timeout for actions. A timeout of 0 means no timeout.
    
        --stats.honeycomb.api-host=string
                (default: "https://api.honeycomb.io")
                The Honeycomb API host to send messages to
    
        --stats.honeycomb.batchSize=uint
                (default: 50)
                The Honeycomb batch size to use
    
        --stats.honeycomb.dataset=string
                The Honeycomb dataset to send messages to.
    
        --stats.honeycomb.sample-rate=uint
                (default: 1)
                The Honeycomb sample rate to use. Specified as 1 event sent per Sample Rate
    
        --stats.honeycomb.write-key=string
                The Honeycomb write key used to send messages.
    
        --stats.max-batch-delay=duration
                (default: 1s)
                If batching is enabled, the maximum amount of time requests are held before transmission
    
        --stats.max-batch-size=int
                (default: 100)
                If batching is enabled, the maximum number of requests that will be combined.
    
        --stats.node=string
                If set, specifies the node to use when submitting stats to backends. Equivalent to adding "--stats.tags=node=value" to the command line.
    
        --stats.prometheus.addr=value
                (default: 0.0.0.0:9102)
                Specifies the listener address for Prometheus scraping.
    
        --stats.prometheus.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.source=string
                If set, specifies the source to use when submitting stats to backends. Equivalent to adding "--stats.tags=source=value" to the command line. In either case, a UUID is appended to the value to ensure that it is unique across proxies. Cannot be combined
                with --stats.unique-source.
    
        --stats.statsd.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.statsd.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.statsd.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.statsd.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.statsd.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.statsd.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.statsd.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
    
        --stats.statsd.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.statsd.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.statsd.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.statsd.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
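The first example above can be checked outside the proxy with plain sed; the sample tag value 'web:path?x=42' is hypothetical, chosen only to show how the two subexpressions become the foo and bar tags:

```shell
# Approximates the transformation foo=/^(.+):.*x=([0-9]+)/,foo,bar:
# subexpression 1 becomes the new "foo" value, subexpression 2 becomes "bar".
printf '%s\n' 'web:path?x=42' | sed -E 's/^(.+):.*x=([0-9]+).*/foo=\1 bar=\2/'
# → foo=web bar=42
```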
    
        --stats.tags=value
                Tags to be included with every stat. May be comma-delimited or specified more than once. Should be of the form "<key>=<value>" or "tag"
    
        --stats.unique-source=string
                If set, specifies the source to use when submitting stats to backends. Equivalent to adding "--stats.tags=source=value" to the command line. Unlike --stats.source, failing to specify a unique value may prevent stats from being recorded correctly. Cannot
                be combined with --stats.source.
    
        --stats.wavefront.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.wavefront.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.wavefront.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.wavefront.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.wavefront.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.wavefront.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.wavefront.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
    
        --stats.wavefront.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.wavefront.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.wavefront.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.wavefront.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
    
        --version
                (default: false)
                Print the version and exit
    
        --xds.addr=value
                (default: :50000)
                The address on which to serve the envoy API server.
    
        --xds.ads-enabled
                (default: true)
                If false, turn off ads discovery mode
    
        --xds.ca-file=string
                Path to a file (on the Envoy host's file system) containing CA certificates for TLS.
    
        --xds.default-timeout=duration
                (default: 1m0s)
                The default request timeout, if none is specified in the RetryPolicy for a Route
    
        --xds.disabled
                (default: false)
                Disables the xDS listener.
    
        --xds.enable-tls
                (default: false)
                Enable grpc xDS TLS
    
        --xds.grpc-log-top=int
                (default: 0)
                When gRPC logging is enabled and this value is greater than 1, logs of non-success Envoy responses are tracked and periodically reported. This flag controls how many unique response code & request path combinations are tracked. When the number of
                tracked combinations in the reporting period is exceeded, uncommon paths are evicted.
    
        --xds.grpc-log-top-interval=duration
                (default: 5m0s)
                See the grpc-log-top flag. Controls the interval at which top logs are generated.
    
        --xds.interval=duration
                (default: 1s)
                The interval for polling the Greymatter API. Minimum value is 500ms
    
        --xds.resolve-dns
                (default: true)
                If true, resolve EDS hostnames to IP addresses.
    
        --xds.server-auth-type=string
                TLS client authentication type
    
        --xds.server-cert=string
                URL containing the server certificate for the grpc ADS server
    
        --xds.server-key=string
                URL containing the server certificate key for the grpc ADS server
    
        --xds.server-trusts=string
                Comma-delimited URLs containing truststores for the grpc ADS server
    
        --xds.standalone-cluster=string
                (default: "default-cluster")
                The name of the cluster for the Envoys consuming the standalone xDS server. Should match the --service-cluster flag for the envoy binary, or the ENVOY_NODE_CLUSTER value for the envoy-simple Docker image.
    
        --xds.standalone-port=int
                (default: 80)
                The port on which Envoys consuming the standalone xDS server should listen. Ignored if --api.key is specified.
    
        --xds.standalone-zone=string
                (default: "default-zone")
                The name of the zone for the Envoys consuming the standalone xDS server. Should match the --service-zone flag for the envoy binary, or the ENVOY_NODE_ZONE value for the envoy-simple Docker image.
    
        --xds.static-resources.conflict-behavior=value
                (default: "merge")
                (valid values: "overwrite" or "merge")
                How to handle conflicts between configuration types. If "overwrite" configuration types overwrite defaults. For example, if one were to include "listeners" in the static resources configuration file, all existing listeners would be overwritten. If the
                value is "merge", listeners would be merged together, with collisions favoring the statically configured listener. Clusters are differentiated by name, while listeners are differentiated by IP/port. Listeners on 0.0.0.0 (or ::) on a given port will
                collide with any other IP with the same port. Specifying colliding static resources will produce a startup error.
    
        --xds.static-resources.filename=string
                Path to a file containing static resources. The contents of the file should be either a JSON or YAML fragment (as configured by the corresponding --format flag) containing any combination of "clusters" (an array of
                https://www.envoyproxy.io/docs/envoy/v1.13.1/api-v2/api/v2/cluster.proto), "cluster_template" (a single cluster, which will be used as the prototype for all clusters not specified statically), and/or "listeners" (an array of
                https://www.envoyproxy.io/docs/envoy/v1.13.1/api-v2/api/v2/listener.proto). The file is read once at startup. Only the v2 API is parsed. Enum strings such as "ROUND_ROBIN" must be capitalized.
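As a rough sketch, a minimal YAML static resources file defining one cluster might look like the following; the names, address, and port are hypothetical, and the field names come from the Envoy v2 cluster API linked above:

```yaml
clusters:
  - name: static-example
    connect_timeout: 1s
    type: STRICT_DNS          # enum strings must be capitalized
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: static-example
      endpoints:
        - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: example.internal
                    port_value: 8080
```

With the default "merge" conflict behavior, a cluster like this is merged alongside discovered clusters rather than replacing them.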
    
        --xds.static-resources.format=value
                (default: "yaml")
                (valid values: "json" or "yaml")
                The format of the static resources file
    
        Global options can also be configured via upper-case, underscore-delimited environment variables prefixed with "GM_CONTROL_". For example, "--some-flag" becomes "GM_CONTROL_SOME_FLAG". Command-line flags take precedence over environment variables.
    
    OPTIONS
        --cluster-label=label
                (default: "gm_cluster")
                The name of the Marathon label specifying to which cluster a Mesos task belongs.
    
        --dcos.acs-token=string
                The ACS Token for authenticating DC/OS requests. Obtained by logging into the DC/OS CLI, and then invoking "dcos config show core.dcos_acs_token". Required unless --dcos.toml-file is set. Cannot be combined with --dcos.toml-file.
    
        --dcos.insecure
                (default: false)
                If true, do not verify DC/OS SSL certificates
    
        --dcos.request-timeout=duration
                (default: 5s)
                The timeout for DC/OS requests.
    
        --dcos.toml-file=string
                The path to a DC/OS CLI dcos.toml configuration file. Required unless --dcos.url and --dcos.acs-token are set. Cannot be combined with --dcos.url or --dcos.acs-token.
    
        --dcos.url=string
                The public master IP of your DC/OS installation. Required unless --dcos.toml-file is set. Cannot be combined with --dcos.toml-file.
    
        --group-prefix=group
                Marathon group prefix naming applications to expose. By default, all groups.
    
        --help  (default: false)
                Show a list of commands or help for one command
    
        --selector=string
                A label selector for filtering applications.
    
        --version
                (default: false)
                Print the version and exit
    
        Options can also be configured via upper-case, underscore-delimited environment variables prefixed with "GM_CONTROL_MARATHON_". For example, "--some-flag" becomes "GM_CONTROL_MARATHON_SOME_FLAG". Command-line flags take precedence over environment variables.
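The flag-to-variable naming convention can be sketched as a small shell helper. The dot-to-underscore handling for flags like --dcos.url is an assumption here, consistent with variable names such as GM_CONTROL_API_PERSISTER_PATH used elsewhere in these docs:

```shell
# Map a command-line flag to its environment-variable equivalent,
# per the stated convention ("--some-flag" becomes "GM_CONTROL_SOME_FLAG").
flag_to_env() {  # usage: flag_to_env PREFIX --flag-name
  printf '%s%s\n' "$1" "$(printf '%s' "${2#--}" | tr '.-' '__' | tr '[:lower:]' '[:upper:]')"
}

flag_to_env GM_CONTROL_ --some-flag              # GM_CONTROL_SOME_FLAG
flag_to_env GM_CONTROL_MARATHON_ --group-prefix  # GM_CONTROL_MARATHON_GROUP_PREFIX
```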
    [info] <timestamp> ALS: <number of requests>: <HTTP response code> <request path>
    [info] 2020/02/25 20:52:16 ALS: 1: 475 http://localhost:8080/error

    If so, exchange the access code for a token with IdP.

  • Otherwise initiate the authentication process with IdP by redirecting the user to a login page.

  • JWT Security
    Headers
  • Path

  • Content Length

  • Destination / Origin

  • Parsed Query

  • User-Agent

  • Many More

  • Rego Policy Language
    Rego Policy Reference
    Here
    Grey Matter Support

    Discover mesh instances from local disk file

    Discover mesh instances from Kubernetes pod labels

    Discover mesh instances from DCOS/Marathon

    Perform no discovery, just start an xDS server

    Command                Description

    aws                    Discover mesh instances through AWS ec2 tags

    consul                 Discover mesh instances through HashiCorp Consul service registry

    exp-envoy-cds-v1       (EXPERIMENTAL) Discover directly from Envoy CDS V1

    exp-envoy-cds-v1-file  (EXPERIMENTAL) Discover directly from Envoy CDS V1 File

    exp-envoy-cds-v2       (EXPERIMENTAL) Discover directly from Envoy CDS V2

    Note: the Grey Matter Edge and Grey Matter Sidecar are the same binary configured differently based on north/south and east/west access patterns.

    The Grey Matter Edge handles north/south traffic flowing through the mesh. You can configure multiple edge nodes based on throughput or regulatory requirements that call for segmented routing or security policy rules. Edge capabilities include:

    • Traffic flow management in and out of the hybrid mesh

    • Hybrid cloud jump points

    • Load balancing and protocol control

    • Edge OAuth security

    Control

    Grey Matter Control is a microservice that performs the following functions within Fabric:

    • Automatic discovery throughout your hybrid mesh

    • Templated static or dynamic sidecar configuration

    • Telemetry and observable collection and aggregation

    • Neural net brain

    • API for advanced control

    Simple deployment architecture.

    Learn more about Grey Matter Control here.

    Security

    Fabric offers the following security features:

    • Verifies that tokens presented by the invoking service are trusted for such operations

    • Performs operations on behalf of a trusted third party within the Hybrid Mesh

    Sidecar

    The Grey Matter Sidecar is a deployment strategy that uses the Grey Matter Proxy. Add Grey Matter to your microservices by deploying a sidecar proxy throughout your environment. This sidecar intercepts all network communication between microservices.

    Reminder: the Grey Matter Edge and Grey Matter Sidecar are the same binary configured differently based on north/south and east/west access patterns.

    The Grey Matter Sidecar offers the following capabilities:

    • Multiple protocol support

    • Observable events for all traffic and content streams

    • Filter SDK

    • Certified, Tested, Production-Ready Sidecars

    • Native support for gRPC, HTTP/1, HTTP/2, and TCP

    Sidecar's Control Plane Functionality

    Once you've deployed the Grey Matter Sidecar, you can configure and manage Grey Matter with its control plane functionality:

    • Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic

    • Fine-grained control of traffic behavior with rich routing rules, retries, failover, and fault injection

    • A policy layer and configuration API supporting access controls, rate limits and quotas

    • Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress

    • Secure service-to-service communication in a cluster with strong identity-based authentication and authorization

    Need a refresher on gRPC Protocol?

    • gRPC is an RPC protocol implemented on top of HTTP/2

    • HTTP/2 is a Layer 7 (Application layer) protocol that runs on top of a TCP (Layer 4 - Transport layer) protocol

    • TCP runs on top of IP (Layer 3 - Network layer) protocol

    Questions

    Want to learn more about Grey Matter Fabric? Contact us at info@deciphernow.com to discuss your use case.


    File Discovery

    File

    Description

    gm-control file discovers service instances from a JSON or YAML file on disk.

    Usage

    gm-control [GLOBAL OPTIONS] file [OPTIONS] <file>

    Help

    To list available options, run with the help flag, gm-control file --help:

    Kubernetes Discovery

    Kubernetes

    Description

    gm-control kubernetes is used to discover service instances in Kubernetes from pod tags and selectors.

    Usage

    gm-control [GLOBAL OPTIONS] kubernetes [OPTIONS]

    Help

    To list available options, run with the help flag, gm-control kubernetes --help:

    Grey Matter Control API

    The purpose of the Grey Matter Control API is to update the configuration of every Grey Matter Proxy in the mesh. It works as follows:

    1. Control API creates, deletes, and modifies mesh configuration objects

    2. Grey Matter Control takes these configuration objects from Control API

    3. Control feeds these configuration objects to each proxy

    Learn more about the Control API here.

    Sample Setup

    Mesh Persistence

    The Control API server has a number of options for how to enable backup and restore capabilities for the mesh configuration.

    Null

    The NULL persister means that no persistent storage of the mesh will be done. If the server were to shut down (due to server failure, platform outage, or server migration), all mesh configuration would be lost.

    This option is not recommended for any production deployment. If it is used, some other means of exporting all created objects to a local store for backup purposes is recommended.

    File

    The File persister writes the mesh configuration out to local disk. The location is determined by setting GM_CONTROL_API_PERSISTER_PATH to a file on disk. This file is rewritten each time an operation is performed in the API.

    Note that this file needs to be mounted on some form of persistent storage depending on the PaaS being used. In Kubernetes or OpenShift this would be a Persistent Volume Claim (PVC).
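A minimal sketch of enabling the File persister; the path is hypothetical (a temp directory here so the snippet is self-contained), and in a real deployment it should sit on persistent storage such as a PVC mount:

```shell
# Point the File persister at a backup file. The server rewrites this file
# after every API call, so its directory must exist and be writable.
export GM_CONTROL_API_PERSISTER_PATH="${TMPDIR:-/tmp}/gm-mesh-backup.json"
mkdir -p "$(dirname "$GM_CONTROL_API_PERSISTER_PATH")"
: > "$GM_CONTROL_API_PERSISTER_PATH"   # stand-in for the initial (empty) backup
```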

    GM-Data

    The GM Data persister backs up the mesh configuration in a provisioned instance of the Grey Matter Data server. This provides a secure, persistent storage option. Like the file persister, it writes a new backup file each time a modification is made to the mesh. Because of the way GM Data stores this file, the entire mesh configuration can be tracked back across time.

    When using the gmdata persister, the following environment variables must also be set to control the exact desired behavior.

    • GM_CONTROL_API_GMDATA_OBJECT_POLICY_TEMPLATE

    • GM_CONTROL_API_GMDATA_POLICY_LABEL

    • GM_CONTROL_API_GMDATA_TOP_LEVEL_FOLDER_SECURITY

    Full Config

    Questions

    Need help configuring the Control API? Contact our team.

    No Discovery

    Envoy xDS Only

    Description

    gm-control xds-only doesn't perform service discovery, but rather acts as a stand-alone Envoy xDS server and request sink.

    Usage

    gm-control xds-only

    Help

    To list available options, run with the help flag, gm-control xds-only --help:

    Consul Discovery

    HashiCorp Consul

    Description

    gm-control consul is used to discover service instances from the Consul service registry.

    Usage

    gm-control [GLOBAL OPTIONS] consul [OPTIONS]

    Help

    To list available options, run with the help flag, gm-control consul --help:

    SSL Cert Parsing

    When working with service meshes on various platforms, there is a benefit in supporting multiple methods for accessing secrets. Some platforms (like Kubernetes and OpenShift) provide means of securely storing secrets and mounting them into running containers.

    Others, like AWS ECS and the AWS Secrets Manager, don't support such easy operations. To support operations on these platforms, the Grey Matter Proxy contains functionality to parse a limited selection of Base64-encoded SSL certificates and write them directly to disk.

    Variable            Default    Description

    INGRESS_TLS_CERT    ""

    Questions

    Need help with SSL cert parsing?

    Create an account at Grey Matter Support to reach our team.

    Grey Matter Proxy

    This document covers the Grey Matter Proxy's two operating modes and how to configure them.

    The Grey Matter Proxy has two main operating modes: static and dynamic.

    Dynamic

    In dynamic mode, the proxy is minimally configured at startup with just enough information to find the control plane. All additional configuration during subsequent operation is fed through this connection to the control plane. This means that configuration can be removed, added, or modified without rebooting the proxy.

    Static

    In static mode, the proxy configuration is completely set at startup. All desired behavior (filters, routes, clusters, certs, etc.) must be known in advance and cannot change without rebooting the proxy. This configuration is supported for operations outside of a service mesh and for simple development setups.

    Supply Configuration

    For either of the two modes discussed above (static or dynamic), the proxy supports supplying the configuration through a template or through direct Envoy configuration.

    Template Configuration

    In template configuration, a selection of commonly used options is parsed from the environment and rendered through templates into viable configuration on disk. This option provides access to many of the most-needed features without having to understand and parse the full configuration syntax. However, if a feature outside of the exposed templates is needed, the direct configuration option can be used.

    Direct Configuration

    With direct configuration, no templates are rendered, and the user must directly supply the full configuration file as needed. Any errors in syntax or values may result in the proxy failing to start, but this option gives full access to any Grey Matter or Envoy feature.

    Envoy Versioning

    Each gm-proxy version is built with a specific version of Envoy.

    • Grey Matter v1.2 uses gm-proxy version 1.4.2, which is built on

    • Grey Matter v1.2.1 uses gm-proxy version 1.4.4 which is built on

    Questions

    Need help?

    Create an account at Grey Matter Support to reach our team.

    Announce to Fabric

    When the Grey Matter Proxy connects to Grey Matter Control, it sends an announcement that identifies itself to the control plane. This announcement information isolates nodes into zones, determines which configuration options go to which proxy instance, and so on.

    Cluster

    The service cluster defines what type of service this proxy is serving. Examples include:

    • example-service

    • user-service

    • data

    • catalog

    • etc.

    This field is used by the control plane to group together all proxies that share the same cluster so that they'll be properly routed and load-balanced as instances spin up or down.

    Zone

    The zone is the logical group that the proxy is running in. This can correlate to actual geographic regions, different slices of the network, or simply logical groups.

    Note: the zone that the proxy announces must match the --api.zone-name of the gm-control instance the proxy connects to. If these two are not in agreement, the proxy is considered to be in a different zone and will receive no configuration.

    Node ID

    The node ID is generally a unique identifier for this particular proxy instance, and can be used to take instance-specific actions.

    Note: this field is not currently used for any operations in the control plane. By default, each node will get a random ID, so it does not need to be set by the user.

    Set Announcement Info

    Using the Grey Matter Proxy, you can set the announcement info most easily through the environment variables:

    You can also set these environment variables directly at the command line when running the binary:
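As one hedged example: for the envoy-simple Docker image, the cluster and zone announcement can be set through the ENVOY_NODE_CLUSTER and ENVOY_NODE_ZONE variables mentioned under the standalone xDS flags earlier in this reference; the values below are hypothetical:

```shell
# Announce this proxy as an instance of "example-service" in "default-zone".
# The zone must agree with the zone configured on the gm-control instance.
export ENVOY_NODE_CLUSTER=example-service
export ENVOY_NODE_ZONE=default-zone
echo "announcing as $ENVOY_NODE_CLUSTER in zone $ENVOY_NODE_ZONE"
```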

    You can also set each flag directly in the bootstrap config template in the section, as shown below:

    Questions

    Need help announcing to Grey Matter Fabric?

    Create an account at Grey Matter Support to reach our team.

    Intelligence 360

    Grey Matter Intelligence 360 is configured with the following environment variables set on the host machine. The term host machine can apply to an AWS EC2 server, Docker container, Kubernetes Pod, etc.

    Note: to apply new configs, you must restart the application process on the host machine.

    Debug

    Envoy Admin Interface

    Envoy has a built-in admin server with a large number of useful tools for debugging. The full abilities of this interface can be found in the Envoy documentation, but some highlights are given here.

    Setting Up Custom Trusted Certificates

    Intro

    When working with some features of the Grey Matter Proxy, notably the , it may become necessary to update the OS-level trusted certificates with customer-provided certs. This document briefly shows how to accomplish this.

    package envoy.authz  # Use envoy package
    
    import input.attributes.request.http as http_request  # Shorten HTTP request info as 'http_request'
    
    default allow = true    # Initialize allow variable, allow requests by default
    
    allow = false {    # Deny the request when all of the conditions in 'action_denied' are true
        action_denied
    }
    
    action_denied {      # All conditions in this block must be true to block the request; if any is false, the request is allowed
      any({http_request.method == "PUT", http_request.method == "POST"})  # If the request method is PUT or POST
      input.parsed_body.threshold < 20      # If the threshold is less than 20
      input.parsed_body.threshold != null   # And the threshold is actually set (not null)
      input.parsed_body.metricKey == "mem"  # And the metric is a memory value
      input.parsed_body.operator == "gte"   # And the operator is greater-than-or-equal
    }
    kubectl create secret generic opa-policy --from-file slo-policy.rego
    kubectl get deployment -o yaml slo > slo.yaml
        spec:
          containers:
          - name: opa
            image: openpolicyagent/opa:latest-istio
            volumeMounts:
            - readOnly: true
              mountPath: /policy
              name: opa-policy
            args:
            - "run"
            - "--server"
            - "--addr=localhost:8181"
            - "--log-level=debug"
            - "--diagnostic-addr=0.0.0.0:8282"
            - "--set=plugins.envoy_ext_authz_grpc.addr=:9191"
            - "--set=plugins.envoy_ext_authz_grpc.query=data.envoy.authz.allow"
            - "--set=decision_logs.console=true"
            - "--ignore=.*"
            - "/policy/slo-policy.rego"
          volumes:
          - name: opa-policy
            secret:
              secretName: opa-policy
    kubectl apply -f ./slo.yaml
    greymatter edit listener listener-slo
      "active_http_filters": [
        "gm.metrics",
        "envoy.ext_authz"
      ],
      "http_filters": {
        "envoy_ext_authz": {
          "with_request_body": {
            "max_request_bytes": 8192,
            "allow_partial_message": true
          },
          "failure_mode_allow": false,
          "grpc_service": {
            "google_grpc": {
              "target_uri": "127.0.0.1:9191",
              "stat_prefix": "ext_authz"
            },
            "timeout": "10s"
          }
        },
    $ gm-control
    NAME
        gm-control - Collects cluster instance data and updates the Greymatter API. A variety of service discovery backends are supported via sub-commands, each with their own configuration options. The file collector can be used as a bridge for unsupported backends.
    
    USAGE
        gm-control [GLOBAL OPTIONS] <command> [COMMAND OPTIONS] [arguments...]
    
    VERSION
        1.0.3-dev
    
    COMMANDS
        aws     aws collector
    
        consul  Consul collector
    
        exp-envoy-cds-v1
                envoy CDS v1 collector [EXPERIMENTAL]
    
        exp-envoy-cds-v1-file
                envoy CDS v1 file collector [EXPERIMENTAL]
    
        exp-envoy-cds-v2
                envoy CDS v2 collector [EXPERIMENTAL]
    
        file    file-based collector
    
        kubernetes
                kubernetes collector
    
        marathon
                marathon collector
    
        xds-only
                Run the collector as only an xDS server and request logging sink.
    
    GLOBAL OPTIONS
        --api.header=header
                Specifies a custom header to send with every gm-control request. Headers are given as name:value pairs. Leading and trailing whitespace will be stripped from the name and value. For multiple headers, this flag may be repeated or multiple headers can be
                delimited with commas.
    
        --api.host=host:port
                (default: localhost:80)
                The address (host:port) for gm-control requests. If no port is given, it defaults to port 443 if --api.ssl is true and port 80 otherwise.
    
        --api.insecure
                (default: false)
                If true, don't validate server cert when using SSL for gm-control requests
    
        --api.key=string
                (default: "none")
                [SENSITIVE] The auth key for gm-control requests
    
        --api.prefix=value
                The url prefix for gm-control requests. Forms the path part of <host>:<port><path>
    
        --api.ssl
                (default: true)
                If true, use SSL for gm-control requests
    
        --api.sslCert=value
                Specifies the SSL cert to use for every gm-control request.
    
        --api.sslKey=value
                Specifies the SSL key to use for every gm-control request.
    
        --api.zone-name=string
                The name of the API Zone for gm-control requests.
    
        --console.level=level
                (default: "info")
                (valid values: "debug", "info", "error", or "none")
                Selects the log level for console log messages.
    
        --delay=duration
                (default: 30s)
                Sets the minimum time between API updates. If the discovery data changes more frequently than this duration, updates are delayed to maintain the minimum time.
    
        --diff.dry-run
                (default: false)
                Log changes at the info level rather than submitting them to the API
    
        --diff.ignore-create
                (default: false)
                If true, do not create new Clusters in the API
    
        --diff.include-delete
                (default: false)
                If true, delete missing Clusters from the API
    
        --help  (default: false)
                Show a list of commands or help for one command
    
        --stats.api.header=header
                Specifies a custom header to send with every stats API request. Headers are given as name:value pairs. Leading and trailing whitespace will be stripped from the name and value. For multiple headers, this flag may be repeated or multiple headers can be
                delimited with commas.
    
        --stats.api.host=host:port
                (default: localhost:80)
                The address (host:port) for stats API requests. If no port is given, it defaults to port 443 if --stats.api.ssl is true and port 80 otherwise.
    
        --stats.api.insecure
                (default: false)
                If true, don't validate server cert when using SSL for stats API requests
    
        --stats.api.prefix=value
                The url prefix for stats API requests. Forms the path part of <host>:<port><path>
    
        --stats.api.ssl
                (default: true)
                If true, use SSL for stats API requests
    
        --stats.api.sslCert=value
                Specifies the SSL cert to use for every stats API request.
    
        --stats.api.sslKey=value
                Specifies the SSL key to use for every stats API request.
    
        --stats.backends=value
                (valid values: "dogstatsd", "prometheus", "statsd", or "wavefront")
                Selects which stats backend(s) to use.
    
        --stats.batch
                (default: true)
                If true, stats requests are batched together for performance.
    
        --stats.dogstatsd.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.dogstatsd.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.dogstatsd.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.dogstatsd.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.dogstatsd.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.dogstatsd.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.dogstatsd.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
    
        --stats.dogstatsd.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.dogstatsd.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.dogstatsd.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.dogstatsd.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
    
        --stats.event-backends=value
                (valid values: "console" or "honeycomb")
                Selects which stats backend(s) to use for structured events.
    
        --stats.exec.attempt-timeout=duration
                (default: 1s)
                Specifies the default timeout for individual action attempts. A timeout of 0 means no timeout.
    
        --stats.exec.delay=duration
                (default: 100ms)
                Specifies the initial delay for the exponential delay type and the fixed delay for the constant delay type.
    
        --stats.exec.delay-type=value
                (default: "exponential")
                (valid values: "constant" or "exponential")
                Specifies the retry delay type.
    
        --stats.exec.max-attempts=int
                (default: 8)
                Specifies the maximum number of attempts made, inclusive of the original attempt.
    
        --stats.exec.max-delay=duration
                (default: 30s)
                Specifies the maximum delay for the exponential delay type. Ignored for the constant delay type.
    
        --stats.exec.parallelism=int
                (default: 8)
                Specifies the maximum number of concurrent attempts running.
    
        --stats.exec.timeout=duration
                (default: 10s)
                Specifies the default timeout for actions. A timeout of 0 means no timeout.
    
        --stats.honeycomb.api-host=string
                (default: "https://api.honeycomb.io")
                The Honeycomb API host to send messages to
    
        --stats.honeycomb.batchSize=uint
                (default: 50)
                The Honeycomb batch size to use
    
        --stats.honeycomb.dataset=string
                The Honeycomb dataset to send messages to.
    
        --stats.honeycomb.sample-rate=uint
                (default: 1)
                The Honeycomb sample rate to use. Specified as 1 event sent per Sample Rate
    
        --stats.honeycomb.write-key=string
                The Honeycomb write key used to send messages.
    
        --stats.max-batch-delay=duration
                (default: 1s)
                If batching is enabled, the maximum amount of time requests are held before transmission
    
        --stats.max-batch-size=int
                (default: 100)
                If batching is enabled, the maximum number of requests that will be combined.
    
        --stats.node=string
                If set, specifies the node to use when submitting stats to backends. Equivalent to adding "--stats.tags=node=value" to the command line.
    
        --stats.prometheus.addr=value
                (default: 0.0.0.0:9102)
                Specifies the listener address for Prometheus scraping.
    
        --stats.prometheus.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.source=string
                If set, specifies the source to use when submitting stats to backends. Equivalent to adding "--stats.tags=source=value" to the command line. In either case, a UUID is appended to the value to ensure that it is unique across proxies. Cannot be combined
                with --stats.unique-source.
    
        --stats.statsd.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.statsd.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.statsd.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.statsd.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.statsd.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.statsd.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.statsd.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
    
        --stats.statsd.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.statsd.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.statsd.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.statsd.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
    
        --stats.tags=value
                Tags to be included with every stat. May be comma-delimited or specified more than once. Should be of the form "<key>=<value>" or "tag"
    
        --stats.unique-source=string
                If set, specifies the source to use when submitting stats to backends. Equivalent to adding "--stats.tags=source=value" to the command line. Unlike --stats.source, failing to specify a unique value may prevent stats from being recorded correctly. Cannot
                be combined with --stats.source.
    
        --stats.wavefront.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.wavefront.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.wavefront.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.wavefront.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.wavefront.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.wavefront.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.wavefront.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
    
        --stats.wavefront.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.wavefront.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.wavefront.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.wavefront.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
    
        --version
                (default: false)
                Print the version and exit
    
        --xds.addr=value
                (default: :50000)
                The address on which to serve the envoy API server.
    
        --xds.ads-enabled
                (default: true)
                If false, turn off ads discovery mode
    
        --xds.ca-file=string
                Path to a file (on the Envoy host's file system) containing CA certificates for TLS.
    
        --xds.default-timeout=duration
                (default: 1m0s)
                The default request timeout, if none is specified in the RetryPolicy for a Route
    
        --xds.disabled
                (default: false)
                Disables the xDS listener.
    
        --xds.enable-tls
                (default: false)
                Enable grpc xDS TLS
    
        --xds.grpc-log-top=int
                (default: 0)
                When gRPC logging is enabled and this value is greater than 1, logs of non-success Envoy responses are tracked and periodically reported. This flag controls how many unique response code & request path combinations are tracked. When the number of
                tracked combinations in the reporting period is exceeded, uncommon paths are evicted.
    
        --xds.grpc-log-top-interval=duration
                (default: 5m0s)
                See the grpc-log-top flag. Controls the interval at which top logs are generated.
    
        --xds.interval=duration
                (default: 1s)
                The interval for polling the Greymatter API. Minimum value is 500ms
    
        --xds.resolve-dns
                (default: true)
                If true, resolve EDS hostnames to IP addresses.
    
        --xds.server-auth-type=string
                TLS client authentication type
    
        --xds.server-cert=string
                URL containing the server certificate for the grpc ADS server
    
        --xds.server-key=string
                URL containing the server certificate key for the grpc ADS server
    
        --xds.server-trusts=string
                Comma-delimited URLs containing truststores for the grpc ADS server
    
        --xds.standalone-cluster=string
                (default: "default-cluster")
                The name of the cluster for the Envoys consuming the standalone xDS server. Should match the --service-cluster flag for the envoy binary, or the ENVOY_NODE_CLUSTER value for the envoy-simple Docker image.
    
        --xds.standalone-port=int
                (default: 80)
                The port on which Envoys consuming the standalone xDS server should listen. Ignored if --api.key is specified.
    
        --xds.standalone-zone=string
                (default: "default-zone")
                The name of the zone for the Envoys consuming the standalone xDS server. Should match the --service-zone flag for the envoy binary, or the ENVOY_NODE_ZONE value for the envoy-simple Docker image.
    
        --xds.static-resources.conflict-behavior=value
                (default: "merge")
                (valid values: "overwrite" or "merge")
                How to handle conflicts between configuration types. If "overwrite", configuration types overwrite defaults. For example, if one were to include "listeners" in the static resources configuration file, all existing listeners would be overwritten. If the
                value is "merge", listeners would be merged together, with collisions favoring the statically configured listener. Clusters are differentiated by name, while listeners are differentiated by IP/port. Listeners on 0.0.0.0 (or ::) on a given port will
                collide with any other IP with the same port. Specifying colliding static resources will produce a startup error.
    
        --xds.static-resources.filename=string
                Path to a file containing static resources. The contents of the file should be either a JSON or YAML fragment (as configured by the corresponding --format flag) containing any combination of "clusters" (an array of
                https://www.envoyproxy.io/docs/envoy/v1.13.1/api-v2/api/v2/cluster.proto), "cluster_template" (a single cluster, which will be used as the prototype for all clusters not specified statically), and/or "listeners" (an array of
                https://www.envoyproxy.io/docs/envoy/v1.13.1/api-v2/api/v2/listener.proto). The file is read once at startup. Only the v2 API is parsed. Enum strings such as "ROUND_ROBIN" must be capitalized.
    
        --xds.static-resources.format=value
                (default: "yaml")
                (valid values: "json" or "yaml")
                The format of the static resources file
    
        Global options can also be configured via upper-case, underscore-delimited environment variables prefixed with "GM_CONTROL_". For example, "--some-flag" becomes "GM_CONTROL_SOME_FLAG". Command-line flags take precedence over environment variables.
    
    Run "gm-control help <command>" for more details on a specific command.
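The environment-variable mapping described above can be sketched as follows. The gm-control invocation in the comments is illustrative only, and the assumption that "." in flag names also maps to "_" is inferred by analogy with the documented "--some-flag" → "GM_CONTROL_SOME_FLAG" example rather than stated explicitly:

```shell
# These two invocations would be equivalent (illustrative command line):
#   gm-control --console.level=debug kubernetes
#   GM_CONTROL_CONSOLE_LEVEL=debug gm-control kubernetes
#
# The name mapping is mechanical: drop the leading dashes, upper-case,
# and turn '-' into '_' ('.' is assumed to map the same way):
flag="--console.level"
env_name="GM_CONTROL_$(printf '%s' "${flag#--}" | tr 'a-z.-' 'A-Z__')"
echo "$env_name"   # GM_CONTROL_CONSOLE_LEVEL
```

Remember that command-line flags take precedence over environment variables when both are present.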

    Written out to ./certs/ingress_localhost.crt

    INGRESS_TLS_KEY

    ""

    Written out to ./certs/ingress_localhost.key

    INGRESS_TLS_TRUST

    ""

    Written out to ./certs/ingress_intermediate.crt

    EGRESS_TLS_CERT

    ""

    Written out to ./certs/egress_localhost.crt

    EGRESS_TLS_KEY

    ""

    Written out to ./certs/egress_localhost.key

    EGRESS_TLS_TRUST

    ""

    Written out to ./certs/egress_intermediate.crt


    Fabric

    Get a refresher on how Fabric fits into Grey Matter's architecture.

    Core Components

    The following documents describe the usage of the Fabric Mesh.

    CLI
    Service Discovery
    Resilience
    Security
    Telemetry
    Traffic Control
    Filters
    Log Levels

    Envoy can set different log levels for different components of the running system. To see how they are all currently set:

    NOTE: all examples below assume the admin interface is listening on port 8001 (the default). Adjust according to your configuration.

    You can set any of the log levels "trace", "debug", "info", "warning", "error", "critical", or "off" on the global state.

    Alternatively, you can set the level of just one specific logger; for example, changing only the filter logger leaves every other component's level untouched.

    /app $ curl -X POST localhost:8001/logging
    active loggers:
    admin: info
    assert: info
    backtrace: info
    client: info
    config: info
    connection: info
    dubbo: info
    file: info
    filter: debug
    grpc: info
    hc: info
    health_checker: info
    http: info
    http2: info
    hystrix: info
    lua: info
    main: info
    misc: info
    mongo: info
    quic: info
    pool: info
    rbac: info
    redis: info
    router: info
    runtime: info
    stats: info
    secret: info
    tap: info
    testing: info
    thrift: info
    tracing: info
    upstream: info
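The two set operations described above can be sketched against Envoy's standard admin /logging endpoint. This assumes stock upstream Envoy admin behavior; the port and the chosen levels are illustrative, and the trailing `|| true` simply lets the sketch run even when no Envoy is listening:

```shell
ADMIN=localhost:8001   # adjust to match your admin interface port

# Set every active logger at once ("level" applies globally):
curl -s -X POST "http://${ADMIN}/logging?level=debug" || true

# Set a single logger by naming it as the query parameter; this
# changes only the "filter" logger, as shown in the listing above:
curl -s -X POST "http://${ADMIN}/logging?filter=debug" || true
```

POSTing to /logging with no parameters, as in the listing above, prints the current level of every active logger without changing anything.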
    $ gm-control file --help
    NAME
        file - file-based collector
    
    USAGE
        gm-control [GLOBAL OPTIONS] file [OPTIONS] <file>
    
    VERSION
        1.0.3-dev
    
    DESCRIPTION
        Watches the given JSON or YAML file and updates Clusters stored in the Greymatter API at startup and whenever the file changes.
    
        The file can be specified as a flag or as the only argument (but not both).
    
        The structure of the JSON and YAML formats is equivalent. Each contains 0 or more clusters identified by name, each containing 0 or more instances. For example, as YAML:
    
            - cluster: c1
              instances:
              - host: h1
                port: 8000
                metadata:
                - key: stage
                  value: prod
    
        Alternatively as JSON:
    
            [
              {
                "cluster": "c1",
                "instances": [
                  {
                    "host": "h1",
                    "port": 8000,
                    "metadata": [
                      { "key": "stage", "value": "prod" }
                    ]
                  }
                ]
              }
            ]
    
        Note that when updating the file, care should be taken to make the modification atomic. In practice, this means writing the updated file to a temporary location and then moving/renaming the file to the watched path. Alternatively, the watched path may be a
        symbolic link that is replaced with a reference to the updated file.
    
    GLOBAL OPTIONS
        --api.header=header
                Specifies a custom header to send with every gm-control request. Headers are given as name:value pairs. Leading and trailing whitespace will be stripped from the name and value. For multiple headers, this flag may be repeated or multiple headers can be
                delimited with commas.
    
        --api.host=host:port
                (default: localhost:80)
                The address (host:port) for gm-control requests. If no port is given, it defaults to port 443 if --api.ssl is true and port 80 otherwise.
    
        --api.insecure
                (default: false)
                If true, don't validate server cert when using SSL for gm-control requests
    
        --api.key=string
                (default: "none")
                [SENSITIVE] The auth key for gm-control requests
    
        --api.prefix=value
                The url prefix for gm-control requests. Forms the path part of <host>:<port><path>
    
        --api.ssl
                (default: true)
                If true, use SSL for gm-control requests
    
        --api.sslCert=value
                Specifies the SSL cert to use for every gm-control request.
    
        --api.sslKey=value
                Specifies the SSL key to use for every gm-control request.
    
        --api.zone-name=string
                The name of the API Zone for gm-control requests.
    
        --console.level=level
                (default: "info")
                (valid values: "debug", "info", "error", or "none")
                Selects the log level for console log messages.
    
        --delay=duration
                (default: 30s)
                Sets the minimum time between API updates. If the discovery data changes more frequently than this duration, updates are delayed to maintain the minimum time.
    
        --diff.dry-run
                (default: false)
                Log changes at the info level rather than submitting them to the API
    
        --diff.ignore-create
                (default: false)
                If true, do not create new Clusters in the API
    
        --diff.include-delete
                (default: false)
                If true, delete missing Clusters from the API
    
        --help  (default: false)
                Show a list of commands or help for one command
    
        --stats.api.header=header
                Specifies a custom header to send with every stats API request. Headers are given as name:value pairs. Leading and trailing whitespace will be stripped from the name and value. For multiple headers, this flag may be repeated or multiple headers can be
                delimited with commas.
    
        --stats.api.host=host:port
                (default: localhost:80)
                The address (host:port) for stats API requests. If no port is given, it defaults to port 443 if --stats.api.ssl is true and port 80 otherwise.
    
        --stats.api.insecure
                (default: false)
                If true, don't validate server cert when using SSL for stats API requests
    
        --stats.api.prefix=value
                The url prefix for stats API requests. Forms the path part of <host>:<port><path>
    
        --stats.api.ssl
                (default: true)
                If true, use SSL for stats API requests
    
        --stats.api.sslCert=value
                Specifies the SSL cert to use for every stats API request.
    
        --stats.api.sslKey=value
                Specifies the SSL key to use for every stats API request.
    
        --stats.backends=value
                (valid values: "dogstatsd", "prometheus", "statsd", or "wavefront")
                Selects which stats backend(s) to use.
    
        --stats.batch
                (default: true)
                If true, stats requests are batched together for performance.
    
        --stats.dogstatsd.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.dogstatsd.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.dogstatsd.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.dogstatsd.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.dogstatsd.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.dogstatsd.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.dogstatsd.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
    
        --stats.dogstatsd.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.dogstatsd.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.dogstatsd.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.dogstatsd.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
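To make the transformation semantics concrete, the sketch below applies one such rule in Python. The function name, tag values, and sample rules are illustrative only; they are not part of gm-control.

```python
import re

def transform_tag(name, value, spec_tag, pattern, new_names):
    """Apply one transform-tags rule to a single (name, value) tag pair.

    Mirrors the documented behavior: each regex subexpression becomes a
    new tag; if one of the new names equals the original tag name, the
    original tag is replaced, otherwise it passes through unchanged.
    """
    if name != spec_tag:
        return [(name, value)]          # rule targets a different tag
    m = re.match(pattern, value)
    if m is None:
        return [(name, value)]          # value does not match the regex
    out = []
    replaced = False
    for new_name, group in zip(new_names, m.groups()):
        out.append((new_name, group))
        if new_name == name:
            replaced = True
    if not replaced:
        out.insert(0, (name, value))    # original tag passes through
    return out

# First documented example: foo=/^(.+):.*x=([0-9]+)/,foo,bar
print(transform_tag("foo", "svc-a:port x=42", "foo",
                    r"^(.+):.*x=([0-9]+)", ["foo", "bar"]))
# Second documented example: foo=@.*y=([A-Za-z_]+)@,yval
print(transform_tag("foo", "a y=val_x b", "foo",
                    r".*y=([A-Za-z_]+)", ["yval"]))
```

In the first rule the original `foo` tag is replaced (its name appears among the new names); in the second it is kept alongside the new `yval` tag.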
    
        --stats.event-backends=value
                (valid values: "console" or "honeycomb")
                Selects which stats backend(s) to use for structured events.
    
        --stats.exec.attempt-timeout=duration
                (default: 1s)
                Specifies the default timeout for individual action attempts. A timeout of 0 means no timeout.
    
        --stats.exec.delay=duration
                (default: 100ms)
                Specifies the initial delay for the exponential delay type, or the fixed delay for the constant delay type.
    
        --stats.exec.delay-type=value
                (default: "exponential")
                (valid values: "constant" or "exponential")
                Specifies the retry delay type.
    
        --stats.exec.max-attempts=int
                (default: 8)
                Specifies the maximum number of attempts made, inclusive of the original attempt.
    
        --stats.exec.max-delay=duration
                (default: 30s)
                Specifies the maximum delay for the exponential delay type. Ignored for the constant delay type.
    
        --stats.exec.parallelism=int
                (default: 8)
                Specifies the maximum number of concurrent attempts running.
    
        --stats.exec.timeout=duration
                (default: 10s)
                Specifies the default timeout for actions. A timeout of 0 means no timeout.
    
        --stats.honeycomb.api-host=string
                (default: "https://api.honeycomb.io")
                The Honeycomb API host to send messages to
    
        --stats.honeycomb.batchSize=uint
                (default: 50)
                The Honeycomb batch size to use
    
        --stats.honeycomb.dataset=string
                The Honeycomb dataset to send messages to.
    
        --stats.honeycomb.sample-rate=uint
                (default: 1)
                The Honeycomb sample rate to use. Specified as 1 event sent per Sample Rate
    
        --stats.honeycomb.write-key=string
                The Honeycomb write key used to send messages.
    
        --stats.max-batch-delay=duration
                (default: 1s)
                If batching is enabled, the maximum amount of time requests are held before transmission
    
        --stats.max-batch-size=int
                (default: 100)
                If batching is enabled, the maximum number of requests that will be combined.
    
        --stats.node=string
                If set, specifies the node to use when submitting stats to backends. Equivalent to adding "--stats.tags=node=value" to the command line.
    
        --stats.prometheus.addr=value
                (default: 0.0.0.0:9102)
                Specifies the listener address for Prometheus scraping.
    
        --stats.prometheus.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.source=string
                If set, specifies the source to use when submitting stats to backends. Equivalent to adding "--stats.tags=source=value" to the command line. In either case, a UUID is appended to the value to ensure that it is unique across proxies. Cannot be combined
                with --stats.unique-source.
    
        --stats.statsd.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.statsd.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.statsd.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.statsd.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.statsd.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.statsd.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.statsd.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
    
        --stats.statsd.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.statsd.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.statsd.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.statsd.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
    
        --stats.tags=value
                Tags to be included with every stat. May be comma-delimited or specified more than once. Should be of the form "<key>=<value>" or "tag"
    
        --stats.unique-source=string
                If set, specifies the source to use when submitting stats to backends. Equivalent to adding "--stats.tags=source=value" to the command line. Unlike --stats.source, failing to specify a unique value may prevent stats from being recorded correctly. Cannot
                be combined with --stats.source.
    
        --stats.wavefront.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.wavefront.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.wavefront.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.wavefront.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.wavefront.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.wavefront.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.wavefront.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
    
        --stats.wavefront.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.wavefront.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.wavefront.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.wavefront.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
    
        --version
                (default: false)
                Print the version and exit
    
        --xds.addr=value
                (default: :50000)
                The address on which to serve the envoy API server.
    
        --xds.ads-enabled
                (default: true)
                If false, turn off ads discovery mode
    
        --xds.ca-file=string
                Path to a file (on the Envoy host's file system) containing CA certificates for TLS.
    
        --xds.default-timeout=duration
                (default: 1m0s)
                The default request timeout, if none is specified in the RetryPolicy for a Route
    
        --xds.disabled
                (default: false)
                Disables the xDS listener.
    
        --xds.enable-tls
                (default: false)
                Enable grpc xDS TLS
    
        --xds.grpc-log-top=int
                (default: 0)
                When gRPC logging is enabled and this value is greater than 1, logs of non-success Envoy responses are tracked and periodically reported. This flag controls how many unique response code & request path combinations are tracked. When the number of
                tracked combinations in the reporting period is exceeded, uncommon paths are evicted.
    
        --xds.grpc-log-top-interval=duration
                (default: 5m0s)
                See the grpc-log-top flag. Controls the interval at which top logs are generated.
    
        --xds.interval=duration
                (default: 1s)
                The interval for polling the Greymatter API. Minimum value is 500ms
    
        --xds.resolve-dns
                (default: true)
                If true, resolve EDS hostnames to IP addresses.
    
        --xds.server-auth-type=string
                TLS client authentication type
    
        --xds.server-cert=string
                URL containing the server certificate for the grpc ADS server
    
        --xds.server-key=string
                URL containing the server certificate key for the grpc ADS server
    
        --xds.server-trusts=string
                Comma-delimited URLs containing truststores for the grpc ADS server
    
        --xds.standalone-cluster=string
                (default: "default-cluster")
                The name of the cluster for the Envoys consuming the standalone xDS server. Should match the --service-cluster flag for the envoy binary, or the ENVOY_NODE_CLUSTER value for the envoy-simple Docker image.
    
        --xds.standalone-port=int
                (default: 80)
                The port on which Envoys consuming the standalone xDS server should listen. Ignored if --api.key is specified.
    
        --xds.standalone-zone=string
                (default: "default-zone")
                The name of the zone for the Envoys consuming the standalone xDS server. Should match the --service-zone flag for the envoy binary, or the ENVOY_NODE_ZONE value for the envoy-simple Docker image.
    
        --xds.static-resources.conflict-behavior=value
                (default: "merge")
                (valid values: "overwrite" or "merge")
                How to handle conflicts between configuration types. If "overwrite", configuration types overwrite defaults. For example, if one were to include "listeners" in the static resources configuration file, all existing listeners would be overwritten. If the
                value is "merge", listeners would be merged together, with collisions favoring the statically configured listener. Clusters are differentiated by name, while listeners are differentiated by IP/port. Listeners on 0.0.0.0 (or ::) on a given port will
                collide with any other IP with the same port. Specifying colliding static resources will produce a startup error.
    
        --xds.static-resources.filename=string
                Path to a file containing static resources. The contents of the file should be either a JSON or YAML fragment (as configured by the corresponding --format flag) containing any combination of "clusters" (an array of
                https://www.envoyproxy.io/docs/envoy/v1.13.1/api-v2/api/v2/cluster.proto), "cluster_template" (a single cluster, which will be used as the prototype for all clusters not specified statically), and/or "listeners" (an array of
                https://www.envoyproxy.io/docs/envoy/v1.13.1/api-v2/api/v2/listener.proto). The file is read once at startup. Only the v2 API is parsed. Enum strings such as "ROUND_ROBIN" must be capitalized.
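As an illustration, a minimal static resources file in the default YAML format might look like the following. The cluster name and field values are hypothetical; fields follow the Envoy v2 Cluster proto referenced above.

```yaml
# Hypothetical static resources fragment (YAML, Envoy v2 API).
# Enum strings such as STRICT_DNS and ROUND_ROBIN must be capitalized.
clusters:
  - name: example-static-cluster
    connect_timeout: 1s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    hosts:
      - socket_address:
          address: example.internal
          port_value: 8080
```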
    
        --xds.static-resources.format=value
                (default: "yaml")
                (valid values: "json" or "yaml")
                The format of the static resources file
    
        Global options can also be configured via upper-case, underscore-delimited environment variables prefixed with "GM_CONTROL_". For example, "--some-flag" becomes "GM_CONTROL_SOME_FLAG". Command-line flags take precedence over environment variables.
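The flag-to-variable mapping described above can be demonstrated with a small portable shell snippet; the flag name is the example from the text, and the derivation logic is illustrative.

```shell
# Derive the environment-variable name for a gm-control flag:
# strip the leading dashes, upper-case, turn hyphens into
# underscores, and prefix with "GM_CONTROL_".
flag="--some-flag"
env_name="GM_CONTROL_$(printf '%s' "${flag#--}" | tr 'a-z-' 'A-Z_')"
echo "$env_name"   # GM_CONTROL_SOME_FLAG
```

Remember that command-line flags take precedence, so an exported variable only applies when the corresponding flag is absent.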
    
    OPTIONS
        --filename=string
                The file from which to collect
    
        --format=string
                (default: "json")
                The I/O format (json or yaml)
    
        --help  (default: false)
                Show a list of commands or help for one command
    
        --version
                (default: false)
                Print the version and exit
    
        Options can also be configured via upper-case, underscore-delimited environment variables prefixed with "GM_CONTROL_FILE_". For example, "--some-flag" becomes "GM_CONTROL_FILE_SOME_FLAG". Command-line flags take precedence over environment variables.
    $ gm-control kubernetes --help
    NAME
        kubernetes - kubernetes collector
    
    USAGE
        gm-control [GLOBAL OPTIONS] kubernetes [OPTIONS]
    
    VERSION
        1.0.3-dev
    
    DESCRIPTION
        Connects to a Kubernetes cluster API server and updates Clusters stored in the Greymatter API at startup and periodically thereafter. By default, the tool assumes that it is being run within the Kubernetes cluster and will automatically find the API server.
    
        Pod labels are used to determine to which API cluster a particular pod belongs. The default label name is "gm_cluster", but it may be overridden by command line flags (see -cluster-label). By default all pods in the configured namespace(s) are watched, but you
        may also provide a label selector (using the same format as the kubectl command) to specify a subset of pods to watch.
    
        In each pod, all containers must be running before the pod is considered live and ready for inclusion in the API cluster's instance list. Each container is examined for ports. The first TCP port found is used as the API instance's port unless a port name is
        specified (see -port-name), in which case the first port with that name becomes the API instance's port. Pods with no container port are ignored. All pod labels (except for the cluster label) are attached as instance metadata.
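The pod-side conventions above can be sketched as a pod spec fragment; the cluster name, extra label, and port values are hypothetical.

```yaml
# Hypothetical pod metadata: the default gm_cluster label assigns this
# pod to the "catalog" API cluster, and the remaining labels are
# attached as instance metadata. With -port-name=service, the named
# port is used instead of the first TCP port found.
metadata:
  labels:
    gm_cluster: catalog
    version: v2          # attached as instance metadata
spec:
  containers:
    - name: catalog
      ports:
        - name: service
          containerPort: 8080
```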
    
    GLOBAL OPTIONS
        --api.header=header
                Specifies a custom header to send with every gm-control request. Headers are given as name:value pairs. Leading and trailing whitespace will be stripped from the name and value. For multiple headers, this flag may be repeated or multiple headers can be
                delimited with commas.
    
        --api.host=host:port
                (default: localhost:80)
                The address (host:port) for gm-control requests. If no port is given, it defaults to port 443 if --api.ssl is true and port 80 otherwise.
    
        --api.insecure
                (default: false)
                If true, don't validate server cert when using SSL for gm-control requests
    
        --api.key=string
                (default: "none")
                [SENSITIVE] The auth key for gm-control requests
    
        --api.prefix=value
                The url prefix for gm-control requests. Forms the path part of <host>:<port><path>
    
        --api.ssl
                (default: true)
                If true, use SSL for gm-control requests
    
        --api.sslCert=value
                Specifies the SSL cert to use for every gm-control request.
    
        --api.sslKey=value
                Specifies the SSL key to use for every gm-control request.
    
        --api.zone-name=string
                The name of the API Zone for gm-control requests.
    
        --console.level=level
                (default: "info")
                (valid values: "debug", "info", "error", or "none")
                Selects the log level for console log messages.
    
        --delay=duration
                (default: 30s)
                Sets the minimum time between API updates. If the discovery data changes more frequently than this duration, updates are delayed to maintain the minimum time.
    
        --diff.dry-run
                (default: false)
                Log changes at the info level rather than submitting them to the API
    
        --diff.ignore-create
                (default: false)
                If true, do not create new Clusters in the API
    
        --diff.include-delete
                (default: false)
                If true, delete missing Clusters from the API
    
        --help  (default: false)
                Show a list of commands or help for one command
    
        --stats.api.header=header
                Specifies a custom header to send with every stats API request. Headers are given as name:value pairs. Leading and trailing whitespace will be stripped from the name and value. For multiple headers, this flag may be repeated or multiple headers can be
                delimited with commas.
    
        --stats.api.host=host:port
                (default: localhost:80)
                The address (host:port) for stats API requests. If no port is given, it defaults to port 443 if --stats.api.ssl is true and port 80 otherwise.
    
        --stats.api.insecure
                (default: false)
                If true, don't validate server cert when using SSL for stats API requests
    
        --stats.api.prefix=value
                The url prefix for stats API requests. Forms the path part of <host>:<port><path>
    
        --stats.api.ssl
                (default: true)
                If true, use SSL for stats API requests
    
        --stats.api.sslCert=value
                Specifies the SSL cert to use for every stats API request.
    
        --stats.api.sslKey=value
                Specifies the SSL key to use for every stats API request.
    
        --stats.backends=value
                (valid values: "dogstatsd", "prometheus", "statsd", or "wavefront")
                Selects which stats backend(s) to use.
    
        --stats.batch
                (default: true)
                If true, stats requests are batched together for performance.
    
        --stats.dogstatsd.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.dogstatsd.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.dogstatsd.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.dogstatsd.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.dogstatsd.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.dogstatsd.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.dogstatsd.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
    
        --stats.dogstatsd.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.dogstatsd.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.dogstatsd.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.dogstatsd.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
    
        --stats.event-backends=value
                (valid values: "console" or "honeycomb")
                Selects which stats backend(s) to use for structured events.
    
        --stats.exec.attempt-timeout=duration
                (default: 1s)
                Specifies the default timeout for individual action attempts. A timeout of 0 means no timeout.
    
        --stats.exec.delay=duration
                (default: 100ms)
                Specifies the initial delay for the exponential delay type, or the fixed delay for the constant delay type.
    
        --stats.exec.delay-type=value
                (default: "exponential")
                (valid values: "constant" or "exponential")
                Specifies the retry delay type.
    
        --stats.exec.max-attempts=int
                (default: 8)
                Specifies the maximum number of attempts made, inclusive of the original attempt.
    
        --stats.exec.max-delay=duration
                (default: 30s)
                Specifies the maximum delay for the exponential delay type. Ignored for the constant delay type.
    
        --stats.exec.parallelism=int
                (default: 8)
                Specifies the maximum number of concurrent attempts running.
    
        --stats.exec.timeout=duration
                (default: 10s)
                Specifies the default timeout for actions. A timeout of 0 means no timeout.
    
        --stats.honeycomb.api-host=string
                (default: "https://api.honeycomb.io")
                The Honeycomb API host to send messages to
    
        --stats.honeycomb.batchSize=uint
                (default: 50)
                The Honeycomb batch size to use
    
        --stats.honeycomb.dataset=string
                The Honeycomb dataset to send messages to.
    
        --stats.honeycomb.sample-rate=uint
                (default: 1)
                The Honeycomb sample rate to use. Specified as 1 event sent per Sample Rate
    
        --stats.honeycomb.write-key=string
                The Honeycomb write key used to send messages.
    
        --stats.max-batch-delay=duration
                (default: 1s)
                If batching is enabled, the maximum amount of time requests are held before transmission
    
        --stats.max-batch-size=int
                (default: 100)
                If batching is enabled, the maximum number of requests that will be combined.
    
        --stats.node=string
                If set, specifies the node to use when submitting stats to backends. Equivalent to adding "--stats.tags=node=value" to the command line.
    
        --stats.prometheus.addr=value
                (default: 0.0.0.0:9102)
                Specifies the listener address for Prometheus scraping.
    
        --stats.prometheus.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.source=string
                If set, specifies the source to use when submitting stats to backends. Equivalent to adding "--stats.tags=source=value" to the command line. In either case, a UUID is appended to the value to ensure that it is unique across proxies. Cannot be combined
                with --stats.unique-source.
    
        --stats.statsd.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.statsd.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.statsd.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.statsd.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.statsd.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.statsd.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.statsd.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
    
        --stats.statsd.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.statsd.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.statsd.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.statsd.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
    
        --stats.tags=value
                Tags to be included with every stat. May be comma-delimited or specified more than once. Should be of the form "<key>=<value>" or "tag"
    
        --stats.unique-source=string
                If set, specifies the source to use when submitting stats to backends. Equivalent to adding "--stats.tags=source=value" to the command line. Unlike --stats.source, failing to specify a unique value may prevent stats from being recorded correctly. Cannot
                be combined with --stats.source.
    
        --stats.wavefront.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.wavefront.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.wavefront.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.wavefront.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.wavefront.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.wavefront.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.wavefront.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
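
The three latch flags above define a doubling histogram. A hypothetical helper showing how the bucket upper bounds fall out of `base-value` and `buckets` (a sketch of the documented rule, not gm-control's implementation):

```python
def latch_bucket_bounds(base_value=0.001, buckets=20):
    """Upper bounds of the latching histogram buckets: the first bound
    is base_value and each subsequent bound doubles the previous one."""
    assert base_value > 0 and buckets > 1
    return [base_value * (2 ** i) for i in range(buckets)]

bounds = latch_bucket_bounds()
# With the defaults, the last bucket's upper bound is 0.001 * 2**19,
# i.e. roughly 524 seconds for timing stats.
```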
    
        --stats.wavefront.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.wavefront.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.wavefront.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.wavefront.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
    
        --version
                (default: false)
                Print the version and exit
    
        --xds.addr=value
                (default: :50000)
                The address on which to serve the envoy API server.
    
        --xds.ads-enabled
                (default: true)
                If false, turns off ADS discovery mode.
    
        --xds.ca-file=string
                Path to a file (on the Envoy host's file system) containing CA certificates for TLS.
    
        --xds.default-timeout=duration
                (default: 1m0s)
                The default request timeout, if none is specified in the RetryPolicy for a Route
    
        --xds.disabled
                (default: false)
                Disables the xDS listener.
    
        --xds.enable-tls
                (default: false)
                Enable grpc xDS TLS
    
        --xds.grpc-log-top=int
                (default: 0)
                When gRPC logging is enabled and this value is greater than 1, logs of non-success Envoy responses are tracked and periodically reported. This flag controls how many unique response code & request path combinations are tracked. When the number of
                tracked combinations in the reporting period is exceeded, uncommon paths are evicted.
    
        --xds.grpc-log-top-interval=duration
                (default: 5m0s)
                See the grpc-log-top flag. Controls the interval at which top logs are generated.
    
        --xds.interval=duration
                (default: 1s)
                The interval for polling the Greymatter API. Minimum value is 500ms
    
        --xds.resolve-dns
                (default: true)
                If true, resolve EDS hostnames to IP addresses.
    
        --xds.server-auth-type=string
                TLS client authentication type
    
        --xds.server-cert=string
                URL containing the server certificate for the grpc ADS server
    
        --xds.server-key=string
                URL containing the server certificate key for the grpc ADS server
    
        --xds.server-trusts=string
                Comma-delimited URLs containing truststores for the grpc ADS server
    
        --xds.standalone-cluster=string
                (default: "default-cluster")
                The name of the cluster for the Envoys consuming the standalone xDS server. Should match the --service-cluster flag for the envoy binary, or the ENVOY_NODE_CLUSTER value for the envoy-simple Docker image.
    
        --xds.standalone-port=int
                (default: 80)
                The port on which Envoys consuming the standalone xDS server should listen. Ignored if --api.key is specified.
    
        --xds.standalone-zone=string
                (default: "default-zone")
                The name of the zone for the Envoys consuming the standalone xDS server. Should match the --service-zone flag for the envoy binary, or the ENVOY_NODE_ZONE value for the envoy-simple Docker image.
    
        --xds.static-resources.conflict-behavior=value
                (default: "merge")
                (valid values: "overwrite" or "merge")
                How to handle conflicts between configuration types. If "overwrite", configuration types overwrite defaults. For example, if one were to include "listeners" in the static resources configuration file, all existing listeners would be overwritten. If the
                value is "merge", listeners would be merged together, with collisions favoring the statically configured listener. Clusters are differentiated by name, while listeners are differentiated by IP/port. Listeners on 0.0.0.0 (or ::) on a given port will
                collide with any other IP with the same port. Specifying colliding static resources will produce a startup error.
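
The listener collision rule above (same port collides when either address is a wildcard) can be sketched as follows; `listeners_collide` is a hypothetical illustration, not gm-control's code:

```python
def listeners_collide(a, b):
    """Two listeners collide when they share a port and either listens on
    a wildcard address (0.0.0.0 or ::) or their IPs are equal."""
    (ip_a, port_a), (ip_b, port_b) = a, b
    if port_a != port_b:
        return False
    wildcard = {"0.0.0.0", "::"}
    return ip_a == ip_b or ip_a in wildcard or ip_b in wildcard

# A 0.0.0.0:80 static listener collides with any other listener on port 80.
listeners_collide(("0.0.0.0", 80), ("10.0.0.1", 80))
```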
    
        --xds.static-resources.filename=string
                Path to a file containing static resources. The contents of the file should be either a JSON or YAML fragment (as configured by the corresponding --format flag) containing any combination of "clusters" (an array of
                https://www.envoyproxy.io/docs/envoy/v1.13.1/api-v2/api/v2/cluster.proto), "cluster_template" (a single cluster, which will be used as the prototype for all clusters not specified statically), and/or "listeners" (an array of
                https://www.envoyproxy.io/docs/envoy/v1.13.1/api-v2/api/v2/listener.proto). The file is read once at startup. Only the v2 API is parsed. Enum strings such as "ROUND_ROBIN" must be capitalized.
    
        --xds.static-resources.format=value
                (default: "yaml")
                (valid values: "json" or "yaml")
                The format of the static resources file
    
        Global options can also be configured via upper-case, underscore-delimited environment variables prefixed with "GM_CONTROL_". For example, "--some-flag" becomes "GM_CONTROL_SOME_FLAG". Command-line flags take precedence over environment variables.
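
The flag-to-environment-variable mapping can be illustrated with a small helper. This is hypothetical: the help text only documents the dash case ("--some-flag" becomes "GM_CONTROL_SOME_FLAG"), so the handling of dotted flag names like --stats.api.host is an assumption here:

```python
def flag_to_env(flag, prefix="GM_CONTROL_"):
    """Map a command-line flag to its environment-variable form:
    strip leading dashes, upper-case, and replace separators with '_'.
    Dot handling is assumed, not documented."""
    name = flag.lstrip("-")
    return prefix + name.upper().replace("-", "_").replace(".", "_")

flag_to_env("--some-flag")  # "GM_CONTROL_SOME_FLAG"
```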
    
    OPTIONS
        --ca-cert=path
                The path to a trusted root certificate file for the Kubernetes API server. Only used if -kubernetes-host is set.
    
        --client-cert=path
                The path to a certificate file which the client will use to authenticate itself to the Kubernetes API server. Only used if -kubernetes-host is set.
    
        --client-key=path
                The path to a certificate key file which the client will use to authenticate itself to the Kubernetes API server. Only used if -kubernetes-host is set.
    
        --cluster-label=name
                (default: "gm_cluster")
                The name of the Kubernetes label that specifies the cluster to which a pod belongs.
    
        --help  (default: false)
                Show a list of commands or help for one command
    
        --kubernetes-host=host
                The host name for the Kubernetes API server. Required if the collector is to run outside of the Kubernetes cluster.
    
        --log-level=string
                (default: "error")
                The log level used for this discovery plugin
    
        --namespaces=namespace
                (default: "default")
                A comma-delimited Kubernetes cluster namespace list to watch for pods.
    
        --port-name=string
                (default: "http")
                The named container port assigned to cluster instances.
    
        --selector=string
                A Kubernetes label selector that selects which pods are polled.
    
        --timeout=duration
                (default: 2m0s)
                The timeout used for Kubernetes API requests (converted to seconds).
    
        --version
                (default: false)
                Print the version and exit
    
        Options can also be configured via upper-case, underscore-delimited environment variables prefixed with "GM_CONTROL_KUBERNETES_". For example, "--some-flag" becomes "GM_CONTROL_KUBERNETES_SOME_FLAG". Command-line flags take precedence over environment
        variables.
    $ gm-control xds-only --help
    NAME
        xds-only - Run the collector as only an xDS server and request logging sink.
    
    USAGE
        gm-control [GLOBAL OPTIONS] xds-only
    
    VERSION
        1.0.3-dev
    
    DESCRIPTION
        Run the collector as only an xDS server and request logging sink. Commonly used when running a pool of gm-control as standalone xDS servers, or when co-locating gm-control as an xDS sidecar.
    
    GLOBAL OPTIONS
        --api.header=header
                Specifies a custom header to send with every gm-control request. Headers are given as name:value pairs. Leading and trailing whitespace will be stripped from the name and value. For multiple headers, this flag may be repeated or multiple headers can be
                delimited with commas.
    
        --api.host=host:port
                (default: localhost:80)
                The address (host:port) for gm-control requests. If no port is given, it defaults to port 443 if --api.ssl is true and port 80 otherwise.
    
        --api.insecure
                (default: false)
                If true, don't validate server cert when using SSL for gm-control requests
    
        --api.key=string
                (default: "none")
                [SENSITIVE] The auth key for gm-control requests
    
        --api.prefix=value
                The URL prefix for gm-control requests. Forms the path part of <host>:<port><path>
    
        --api.ssl
                (default: true)
                If true, use SSL for gm-control requests
    
        --api.sslCert=value
                Specifies the SSL cert to use for every gm-control request.
    
        --api.sslKey=value
                Specifies the SSL key to use for every gm-control request.
    
        --api.zone-name=string
                The name of the API Zone for gm-control requests.
    
        --console.level=level
                (default: "info")
                (valid values: "debug", "info", "error", or "none")
                Selects the log level for console logs messages.
    
        --delay=duration
                (default: 30s)
                Sets the minimum time between API updates. If the discovery data changes more frequently than this duration, updates are delayed to maintain the minimum time.
    
        --diff.dry-run
                (default: false)
                Log changes at the info level rather than submitting them to the API
    
        --diff.ignore-create
                (default: false)
                If true, do not create new Clusters in the API
    
        --diff.include-delete
                (default: false)
                If true, delete missing Clusters from the API
    
        --help  (default: false)
                Show a list of commands or help for one command
    
        --stats.api.header=header
                Specifies a custom header to send with every stats API request. Headers are given as name:value pairs. Leading and trailing whitespace will be stripped from the name and value. For multiple headers, this flag may be repeated or multiple headers can be
                delimited with commas.
    
        --stats.api.host=host:port
                (default: localhost:80)
                The address (host:port) for stats API requests. If no port is given, it defaults to port 443 if --stats.api.ssl is true and port 80 otherwise.
    
        --stats.api.insecure
                (default: false)
                If true, don't validate server cert when using SSL for stats API requests
    
        --stats.api.prefix=value
                The URL prefix for stats API requests. Forms the path part of <host>:<port><path>
    
        --stats.api.ssl
                (default: true)
                If true, use SSL for stats API requests
    
        --stats.api.sslCert=value
                Specifies the SSL cert to use for every stats API request.
    
        --stats.api.sslKey=value
                Specifies the SSL key to use for every stats API request.
    
        --stats.backends=value
                (valid values: "dogstatsd", "prometheus", "statsd", or "wavefront")
                Selects which stats backend(s) to use.
    
        --stats.batch
                (default: true)
                If true, stats requests are batched together for performance.
    
        --stats.dogstatsd.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.dogstatsd.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.dogstatsd.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.dogstatsd.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.dogstatsd.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.dogstatsd.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.dogstatsd.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
    
        --stats.dogstatsd.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.dogstatsd.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.dogstatsd.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.dogstatsd.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
    
        --stats.event-backends=value
                (valid values: "console" or "honeycomb")
                Selects which stats backend(s) to use for structured events.
    
        --stats.exec.attempt-timeout=duration
                (default: 1s)
                Specifies the default timeout for individual action attempts. A timeout of 0 means no timeout.
    
        --stats.exec.delay=duration
                (default: 100ms)
                Specifies the initial delay for the exponential delay type. Specifies the delay for constant delay type.
    
        --stats.exec.delay-type=value
                (default: "exponential")
                (valid values: "constant" or "exponential")
                Specifies the retry delay type.
    
        --stats.exec.max-attempts=int
                (default: 8)
                Specifies the maximum number of attempts made, inclusive of the original attempt.
    
        --stats.exec.max-delay=duration
                (default: 30s)
                Specifies the maximum delay for the exponential delay type. Ignored for the constant delay type.
    
        --stats.exec.parallelism=int
                (default: 8)
                Specifies the maximum number of concurrent attempts running.
    
        --stats.exec.timeout=duration
                (default: 10s)
                Specifies the default timeout for actions. A timeout of 0 means no timeout.
    
        --stats.honeycomb.api-host=string
                (default: "https://api.honeycomb.io")
                The Honeycomb API host to send messages to
    
        --stats.honeycomb.batchSize=uint
                (default: 50)
                The Honeycomb batch size to use
    
        --stats.honeycomb.dataset=string
                The Honeycomb dataset to send messages to.
    
        --stats.honeycomb.sample-rate=uint
                (default: 1)
                The Honeycomb sample rate to use. Specified as 1 event sent per Sample Rate
    
        --stats.honeycomb.write-key=string
                The Honeycomb write key used to send messages.
    
        --stats.max-batch-delay=duration
                (default: 1s)
                If batching is enabled, the maximum amount of time requests are held before transmission
    
        --stats.max-batch-size=int
                (default: 100)
                If batching is enabled, the maximum number of requests that will be combined.
    
        --stats.node=string
                If set, specifies the node to use when submitting stats to backends. Equivalent to adding "--stats.tags=node=value" to the command line.
    
        --stats.prometheus.addr=value
                (default: 0.0.0.0:9102)
                Specifies the listener address for Prometheus scraping.
    
        --stats.prometheus.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.source=string
                If set, specifies the source to use when submitting stats to backends. Equivalent to adding "--stats.tags=source=value" to the command line. In either case, a UUID is appended to the value to ensure that it is unique across proxies. Cannot be combined
                with --stats.unique-source.
    
        --stats.statsd.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.statsd.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.statsd.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.statsd.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.statsd.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.statsd.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.statsd.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
    
        --stats.statsd.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.statsd.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.statsd.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.statsd.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
    
        --stats.tags=value
                Tags to be included with every stat. May be comma-delimited or specified more than once. Should be of the form "<key>=<value>" or "tag"
    
        --stats.unique-source=string
                If set, specifies the source to use when submitting stats to backends. Equivalent to adding "--stats.tags=source=value" to the command line. Unlike --stats.source, failing to specify a unique value may prevent stats from being recorded correctly. Cannot
                be combined with --stats.source.
    
        --stats.wavefront.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.wavefront.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.wavefront.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.wavefront.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.wavefront.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.wavefront.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.wavefront.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
    
        --stats.wavefront.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.wavefront.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.wavefront.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.wavefront.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
    
        --version
                (default: false)
                Print the version and exit
    
        --xds.addr=value
                (default: :50000)
                The address on which to serve the envoy API server.
    
        --xds.ads-enabled
                (default: true)
                If false, turns off ADS discovery mode.
    
        --xds.ca-file=string
                Path to a file (on the Envoy host's file system) containing CA certificates for TLS.
    
        --xds.default-timeout=duration
                (default: 1m0s)
                The default request timeout, if none is specified in the RetryPolicy for a Route
    
        --xds.disabled
                (default: false)
                Disables the xDS listener.
    
        --xds.enable-tls
                (default: false)
                Enable grpc xDS TLS
    
        --xds.grpc-log-top=int
                (default: 0)
                When gRPC logging is enabled and this value is greater than 1, logs of non-success Envoy responses are tracked and periodically reported. This flag controls how many unique response code & request path combinations are tracked. When the number of
                tracked combinations in the reporting period is exceeded, uncommon paths are evicted.
    
        --xds.grpc-log-top-interval=duration
                (default: 5m0s)
                See the grpc-log-top flag. Controls the interval at which top logs are generated.
    
        --xds.interval=duration
                (default: 1s)
                The interval for polling the Greymatter API. Minimum value is 500ms
    
        --xds.resolve-dns
                (default: true)
                If true, resolve EDS hostnames to IP addresses.
    
        --xds.server-auth-type=string
                TLS client authentication type
    
        --xds.server-cert=string
                URL containing the server certificate for the grpc ADS server
    
        --xds.server-key=string
                URL containing the server certificate key for the grpc ADS server
    
        --xds.server-trusts=string
                Comma-delimited URLs containing truststores for the grpc ADS server
    
        --xds.standalone-cluster=string
                (default: "default-cluster")
                The name of the cluster for the Envoys consuming the standalone xDS server. Should match the --service-cluster flag for the envoy binary, or the ENVOY_NODE_CLUSTER value for the envoy-simple Docker image.
    
        --xds.standalone-port=int
                (default: 80)
                The port on which Envoys consuming the standalone xDS server should listen. Ignored if --api.key is specified.
    
        --xds.standalone-zone=string
                (default: "default-zone")
                The name of the zone for the Envoys consuming the standalone xDS server. Should match the --service-zone flag for the envoy binary, or the ENVOY_NODE_ZONE value for the envoy-simple Docker image.
    
        --xds.static-resources.conflict-behavior=value
                (default: "merge")
                (valid values: "overwrite" or "merge")
                How to handle conflicts between configuration types. If "overwrite", configuration types overwrite defaults. For example, if one were to include "listeners" in the static resources configuration file, all existing listeners would be overwritten. If the
                value is "merge", listeners would be merged together, with collisions favoring the statically configured listener. Clusters are differentiated by name, while listeners are differentiated by IP/port. Listeners on 0.0.0.0 (or ::) on a given port will
                collide with any other IP with the same port. Specifying colliding static resources will produce a startup error.
    
        --xds.static-resources.filename=string
                Path to a file containing static resources. The contents of the file should be either a JSON or YAML fragment (as configured by the corresponding --format flag) containing any combination of "clusters" (an array of
                https://www.envoyproxy.io/docs/envoy/v1.13.1/api-v2/api/v2/cluster.proto), "cluster_template" (a single cluster, which will be used as the prototype for all clusters not specified statically), and/or "listeners" (an array of
                https://www.envoyproxy.io/docs/envoy/v1.13.1/api-v2/api/v2/listener.proto). The file is read once at startup. Only the v2 API is parsed. Enum strings such as "ROUND_ROBIN" must be capitalized.
    
        --xds.static-resources.format=value
                (default: "yaml")
                (valid values: "json" or "yaml")
                The format of the static resources file
    
        Global options can also be configured via upper-case, underscore-delimited environment variables prefixed with "GM_CONTROL_". For example, "--some-flag" becomes "GM_CONTROL_SOME_FLAG". Command-line flags take precedence over environment variables.
    
    OPTIONS
        --help  (default: false)
                Show a list of commands or help for one command
    
        --version
                (default: false)
                Print the version and exit
    
        Options can also be configured via upper-case, underscore-delimited environment variables prefixed with "GM_CONTROL_XDS_ONLY_". For example, "--some-flag" becomes "GM_CONTROL_XDS_ONLY_SOME_FLAG". Command-line flags take precedence over environment variables.
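The flag-to-environment-variable naming rule above can be sketched in Python. This is an illustration, not gm-control's actual code; in particular, it assumes that dots in flag names map to underscores the same way dashes do (e.g. --api.key would become GM_CONTROL_API_KEY under this assumption):

```python
def flag_to_env(flag: str, prefix: str = "GM_CONTROL_") -> str:
    """Map a CLI flag to its environment-variable form: strip leading
    dashes, replace '-' and '.' with '_', upper-case, add the prefix."""
    return prefix + flag.lstrip("-").replace("-", "_").replace(".", "_").upper()

print(flag_to_env("--some-flag"))                          # → GM_CONTROL_SOME_FLAG
print(flag_to_env("--some-flag", "GM_CONTROL_XDS_ONLY_"))  # → GM_CONTROL_XDS_ONLY_SOME_FLAG
```

As the help text notes, the resolved flag value still wins: the environment variable is only consulted when the flag is absent from the command line.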
    $ ./gm-control consul --help
    NAME
        consul - Consul collector
    
    USAGE
        gm-control [GLOBAL OPTIONS] consul [OPTIONS]
    
    VERSION
        1.0.3-dev
    
    DESCRIPTION
        Connects to a Consul agent via HTTP API and updates Clusters stored in the Greymatter API at startup and periodically thereafter.
    
        A service is marked for import using tags, by default "gm-cluster" is used but it may be customized through the command line (see --cluster-tag). Each identified service will be imported as a Greymatter Cluster and the nodes that are marked with the configured
        tag are added as instances for that Cluster. For each instance within a Cluster, metadata is populated from a combination of service tags, node metadata, service metadata and health checks.
    
        Service Tags
    
        Service tags, excluding the cluster tag itself, are added with a "tag:" prefix. By default, they are treated as single value entries and are imported with empty values. The --tag-delimiter flag can be used to treat tags as key value pairs, and they will be
        parsed as such. Tags that have the delimiter as a suffix or that do not contain it at all are added with empty values, while tags that use it as a prefix are ignored and logged.
    
        Node Metadata
    
        Node metadata is added as instance metadata with a "node:" prefix for each key.
    
        Service Metadata
    
        Service metadata is passed through and is added as instance metadata without any namespacing.
    
        Health Checks
    
        Node health checks will be added as instance metadata named following the pattern "check:<check-id>" with the check status as value. Additionally "node-health" is added for an instance within each cluster to aggregate all the other health checks on that node
        that either are 1) not bound to a service or 2) bound to the service this cluster represents. The value for this aggregate metadata will be:
    
            passing   if all Consul health checks have a "passing" value
            mixed     if some, but not all, Consul health checks have a "passing" value
            failed    if no Consul health check has the value of "passing"
    
    GLOBAL OPTIONS
        --api.header=header
                Specifies a custom header to send with every gm-control request. Headers are given as name:value pairs. Leading and trailing whitespace will be stripped from the name and value. For multiple headers, this flag may be repeated or multiple headers can be
                delimited with commas.
    
        --api.host=host:port
                (default: localhost:80)
                The address (host:port) for gm-control requests. If no port is given, it defaults to port 443 if --api.ssl is true and port 80 otherwise.
    
        --api.insecure
                (default: false)
                If true, don't validate server cert when using SSL for gm-control requests
    
        --api.key=string
                (default: "none")
                [SENSITIVE] The auth key for gm-control requests
    
        --api.prefix=value
                The URL prefix for gm-control requests. Forms the path part of <host>:<port><path>
    
        --api.ssl
                (default: true)
                If true, use SSL for gm-control requests
    
        --api.sslCert=value
                Specifies the SSL cert to use for every gm-control request.
    
        --api.sslKey=value
                Specifies the SSL key to use for every gm-control request.
    
        --api.zone-name=string
                The name of the API Zone for gm-control requests.
    
        --console.level=level
                (default: "info")
                (valid values: "debug", "info", "error", or "none")
                Selects the log level for console log messages.
    
        --delay=duration
                (default: 30s)
                Sets the minimum time between API updates. If the discovery data changes more frequently than this duration, updates are delayed to maintain the minimum time.
    
        --diff.dry-run
                (default: false)
                Log changes at the info level rather than submitting them to the API
    
        --diff.ignore-create
                (default: false)
                If true, do not create new Clusters in the API
    
        --diff.include-delete
                (default: false)
                If true, delete missing Clusters from the API
    
        --help  (default: false)
                Show a list of commands or help for one command
    
        --stats.api.header=header
                Specifies a custom header to send with every stats API request. Headers are given as name:value pairs. Leading and trailing whitespace will be stripped from the name and value. For multiple headers, this flag may be repeated or multiple headers can be
                delimited with commas.
    
        --stats.api.host=host:port
                (default: localhost:80)
                The address (host:port) for stats API requests. If no port is given, it defaults to port 443 if --stats.api.ssl is true and port 80 otherwise.
    
        --stats.api.insecure
                (default: false)
                If true, don't validate server cert when using SSL for stats API requests
    
        --stats.api.prefix=value
                The URL prefix for stats API requests. Forms the path part of <host>:<port><path>
    
        --stats.api.ssl
                (default: true)
                If true, use SSL for stats API requests
    
        --stats.api.sslCert=value
                Specifies the SSL cert to use for every stats API request.
    
        --stats.api.sslKey=value
                Specifies the SSL key to use for every stats API request.
    
        --stats.backends=value
                (valid values: "dogstatsd", "prometheus", "statsd", or "wavefront")
                Selects which stats backend(s) to use.
    
        --stats.batch
                (default: true)
                If true, stats requests are batched together for performance.
    
        --stats.dogstatsd.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.dogstatsd.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.dogstatsd.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.dogstatsd.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.dogstatsd.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.dogstatsd.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.dogstatsd.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
    
        --stats.dogstatsd.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.dogstatsd.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.dogstatsd.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.dogstatsd.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
    
        --stats.event-backends=value
                (valid values: "console" or "honeycomb")
                Selects which stats backend(s) to use for structured events.
    
        --stats.exec.attempt-timeout=duration
                (default: 1s)
                Specifies the default timeout for individual action attempts. A timeout of 0 means no timeout.
    
        --stats.exec.delay=duration
                (default: 100ms)
                Specifies the initial delay for the exponential delay type. Specifies the delay for constant delay type.
    
        --stats.exec.delay-type=value
                (default: "exponential")
                (valid values: "constant" or "exponential")
                Specifies the retry delay type.
    
        --stats.exec.max-attempts=int
                (default: 8)
                Specifies the maximum number of attempts made, inclusive of the original attempt.
    
        --stats.exec.max-delay=duration
                (default: 30s)
                Specifies the maximum delay for the exponential delay type. Ignored for the constant delay type.
    
        --stats.exec.parallelism=int
                (default: 8)
                Specifies the maximum number of concurrent attempts running.
    
        --stats.exec.timeout=duration
                (default: 10s)
                Specifies the default timeout for actions. A timeout of 0 means no timeout.
    
        --stats.honeycomb.api-host=string
                (default: "https://api.honeycomb.io")
                The Honeycomb API host to send messages to
    
        --stats.honeycomb.batchSize=uint
                (default: 50)
                The Honeycomb batch size to use
    
        --stats.honeycomb.dataset=string
                The Honeycomb dataset to send messages to.
    
        --stats.honeycomb.sample-rate=uint
                (default: 1)
                The Honeycomb sample rate to use. Specified as 1 event sent per Sample Rate
    
        --stats.honeycomb.write-key=string
                The Honeycomb write key used to send messages.
    
        --stats.max-batch-delay=duration
                (default: 1s)
                If batching is enabled, the maximum amount of time requests are held before transmission
    
        --stats.max-batch-size=int
                (default: 100)
                If batching is enabled, the maximum number of requests that will be combined.
    
        --stats.node=string
                If set, specifies the node to use when submitting stats to backends. Equivalent to adding "--stats.tags=node=value" to the command line.
    
        --stats.prometheus.addr=value
                (default: 0.0.0.0:9102)
                Specifies the listener address for Prometheus scraping.
    
        --stats.prometheus.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.source=string
                If set, specifies the source to use when submitting stats to backends. Equivalent to adding "--stats.tags=source=value" to the command line. In either case, a UUID is appended to the value to ensure that it is unique across proxies. Cannot be combined
                with --stats.unique-source.
    
        --stats.statsd.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.statsd.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.statsd.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.statsd.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.statsd.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.statsd.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.statsd.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
    
        --stats.statsd.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.statsd.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.statsd.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.statsd.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
    
        --stats.tags=value
                Tags to be included with every stat. May be comma-delimited or specified more than once. Should be of the form "<key>=<value>" or "tag"
    
        --stats.unique-source=string
                If set, specifies the source to use when submitting stats to backends. Equivalent to adding "--stats.tags=source=value" to the command line. Unlike --stats.source, failing to specify a unique value may prevent stats from being recorded correctly. Cannot
                be combined with --stats.source.
    
        --stats.wavefront.debug
                (default: false)
                If enabled, logs the stats data on stdout.
    
        --stats.wavefront.flush-interval=duration
                (default: 5s)
                Specifies the duration between stats flushes.
    
        --stats.wavefront.host=string
                (default: "127.0.0.1")
                Specifies the destination host for stats.
    
        --stats.wavefront.latch
                (default: false)
                Specifies whether stats are accumulated over a window before being sent to the backend.
    
        --stats.wavefront.latch.base-value=float
                (default: 0.001)
                Specifies the upper bound of the first bucket used for accumulating histograms. Each subsequent bucket's upper bound is double the previous bucket's. For timings this value is taken to be in units of seconds. Must be greater than 0.
    
        --stats.wavefront.latch.buckets=int
                (default: 20)
                Specifies the number of buckets used for accumulating histograms. Must be greater than 1.
    
        --stats.wavefront.latch.window=duration
                (default: 1m0s)
                Specifies the period of time over which stats are latched. Must be greater than 0.
    
        --stats.wavefront.max-packet-len=bytes
                (default: 8192)
                Specifies the maximum number of payload bytes sent per flush. If necessary, flushes will occur before the flush interval to prevent payloads from exceeding this size. The size does not include IP and UDP header bytes. Stats may not be delivered if the
                total size of the headers and payload exceeds the network's MTU.
    
        --stats.wavefront.port=int
                (default: 8125)
                Specifies the destination port for stats.
    
        --stats.wavefront.scope=string
                If specified, prepends the given scope to metric names.
    
        --stats.wavefront.transform-tags=string
                Defines one or more transformations for tags. A tag with a specific name whose value matches a regular expression can be transformed into one or more tags with values extracted from subexpressions of the regular expression. Transformations are specified
                as follows:
    
                    tag=/regex/,n1,n2...
    
                where tag is the name of the tag to be transformed, regex is a regular expression with 1 or more subexpressions, and n1,n2... is a sequence of names for the tags formed from the regular expression's subexpressions (matching groups). Any character may be
                used in place of the slashes (/) to delimit the regular expression. There must be at least one subexpression in the regular expression. There must be exactly as many names as subexpressions. If one of the names is the original tag name, the original tag
                is replaced with the transformed value. Otherwise, the original tag is passed through unchanged. Multiple transformations may be separated by semicolons (;). Any character may be escaped with a backslash (\).
    
                Examples:
    
                    foo=/^(.+):.*x=([0-9]+)/,foo,bar
                    foo=@.*y=([A-Za-z_]+)@,yval
    
        --version
                (default: false)
                Print the version and exit
    
        --xds.addr=value
                (default: :50000)
                The address on which to serve the envoy API server.
    
        --xds.ads-enabled
                (default: true)
                If false, turn off ads discovery mode
    
        --xds.ca-file=string
                Path to a file (on the Envoy host's file system) containing CA certificates for TLS.
    
        --xds.default-timeout=duration
                (default: 1m0s)
                The default request timeout, if none is specified in the RetryPolicy for a Route
    
        --xds.disabled
                (default: false)
                Disables the xDS listener.
    
        --xds.enable-tls
                (default: false)
                Enable grpc xDS TLS
    
        --xds.grpc-log-top=int
                (default: 0)
                When gRPC logging is enabled and this value is greater than 1, logs of non-success Envoy responses are tracked and periodically reported. This flag controls how many unique response code & request path combinations are tracked. When the number of
                tracked combinations in the reporting period is exceeded, uncommon paths are evicted.
    
        --xds.grpc-log-top-interval=duration
                (default: 5m0s)
                See the grpc-log-top flag. Controls the interval at which top logs are generated.
    
        --xds.interval=duration
                (default: 1s)
                The interval for polling the Greymatter API. Minimum value is 500ms
    
        --xds.resolve-dns
                (default: true)
                If true, resolve EDS hostnames to IP addresses.
    
        --xds.server-auth-type=string
                TLS client authentication type
    
        --xds.server-cert=string
                URL containing the server certificate for the grpc ADS server
    
        --xds.server-key=string
                URL containing the server certificate key for the grpc ADS server
    
        --xds.server-trusts=string
                Comma-delimited URLs containing truststores for the grpc ADS server
    
        --xds.standalone-cluster=string
                (default: "default-cluster")
                The name of the cluster for the Envoys consuming the standalone xDS server. Should match the --service-cluster flag for the envoy binary, or the ENVOY_NODE_CLUSTER value for the envoy-simple Docker image.
    
        --xds.standalone-port=int
                (default: 80)
                The port on which Envoys consuming the standalone xDS server should listen. Ignored if --api.key is specified.
    
        --xds.standalone-zone=string
                (default: "default-zone")
                The name of the zone for the Envoys consuming the standalone xDS server. Should match the --service-zone flag for the envoy binary, or the ENVOY_NODE_ZONE value for the envoy-simple Docker image.
    
        --xds.static-resources.conflict-behavior=value
                (default: "merge")
                (valid values: "overwrite" or "merge")
                How to handle conflicts between configuration types. If "overwrite", configuration types overwrite defaults. For example, if one were to include "listeners" in the static resources configuration file, all existing listeners would be overwritten. If the
                value is "merge", listeners would be merged together, with collisions favoring the statically configured listener. Clusters are differentiated by name, while listeners are differentiated by IP/port. Listeners on 0.0.0.0 (or ::) on a given port will
                collide with any other IP with the same port. Specifying colliding static resources will produce a startup error.
    
        --xds.static-resources.filename=string
                Path to a file containing static resources. The contents of the file should be either a JSON or YAML fragment (as configured by the corresponding --format flag) containing any combination of "clusters" (an array of
                https://www.envoyproxy.io/docs/envoy/v1.13.1/api-v2/api/v2/cluster.proto), "cluster_template" (a single cluster, which will be used as the prototype for all clusters not specified statically), and/or "listeners" (an array of
                https://www.envoyproxy.io/docs/envoy/v1.13.1/api-v2/api/v2/listener.proto). The file is read once at startup. Only the v2 API is parsed. Enum strings such as "ROUND_ROBIN" must be capitalized.
    
        --xds.static-resources.format=value
                (default: "yaml")
                (valid values: "json" or "yaml")
                The format of the static resources file
    
        Global options can also be configured via upper-case, underscore-delimited environment variables prefixed with "GM_CONTROL_". For example, "--some-flag" becomes "GM_CONTROL_SOME_FLAG". Command-line flags take precedence over environment variables.
    
    OPTIONS
        --cluster-tag=string
                (default: "gm-cluster")
                The tag used to indicate that a service should be imported as a Cluster. If used in conjunction with 'tag-delimiter', its value can override the cluster name, which defaults to the name of the service in Consul.
    
        --console.level=level
                (default: "info")
                (valid values: "debug", "info", "error", or "none")
                Selects the log level for console log messages.
    
        --dc=string
                [REQUIRED] Collect Consul services only from this DC.
    
        --help  (default: false)
                Show a list of commands or help for one command
    
        --hostport=[host]:port
                (default: "localhost:8500")
                The [host]:port for the Consul API.
    
        --tag-delimiter=string
                The delimiter used to split key/value pairs stored in Consul service tags.
    
        --use-ssl
                (default: false)
                If set, communication with the Consul API will use SSL.
    
        --version
                (default: false)
                Print the version and exit
    
        Options can also be configured via upper-case, underscore-delimited environment variables prefixed with "GM_CONTROL_CONSUL_". For example, "--some-flag" becomes "GM_CONTROL_CONSUL_SOME_FLAG". Command-line flags take precedence over environment variables.
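The tag-transformation grammar documented under the --stats.*.transform-tags flags above can be illustrated with a small Python sketch. This mirrors the documented semantics (replace the original tag only when its own name appears among the output names, otherwise pass it through); it is not gm-control's actual implementation:

```python
import re

def transform_tag(name, value, regex, out_names):
    """Apply one transform-tags rule to a single tag.

    If the tag's value matches the regex, new tags are produced from the
    matching groups, one per name in out_names. The original tag is
    replaced only when its own name appears in out_names; otherwise it is
    passed through unchanged.
    """
    m = re.search(regex, value)
    if m is None:
        return {name: value}
    out = {} if name in out_names else {name: value}
    out.update(zip(out_names, m.groups()))
    return out

# Mirrors the documented example: foo=/^(.+):.*x=([0-9]+)/,foo,bar
print(transform_tag("foo", "svc-a:path?x=42", r"^(.+):.*x=([0-9]+)", ["foo", "bar"]))
# → {'foo': 'svc-a', 'bar': '42'}
```

Because "foo" appears among the output names, the original tag is replaced by the first matching group; "bar" carries the second group.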
    PROXY_DYNAMIC=true   # To run in dynamic configuration mode
    
    
    XDS_CLUSTER=example-service
    XDS_ZONE=us-east-1
    XDS_NODE_ID=an58xch3mf78
    gm-proxy -c ./config.yaml \
        --service-cluster=example-service \
        --service-zone=us-east-1 \
        --service-node=an58xch3mf78
    node:
      cluster: example-service
      id: n48xng&9#dsfd9
      locality:
        zone: us-east-1
    curl -X POST 'localhost:8001/logging?level=debug'
    curl -X POST 'localhost:8001/logging?filter=debug'

  • ""

    Path to SSL CA on disk

    GM_CONTROL_API_SERVER_CERT_PATH

    String

    ""

    Path to SSL Cert on disk

    GM_CONTROL_API_SERVER_KEY_PATH

    String

    ""

    Path to SSL Key on disk

    GM_CONTROL_API_ORG_KEY

    String

    "org-deciphernow"

    Default Organization key to create on startup

    GM_CONTROL_API_ORG_NAME

    String

    "deciphernow"

    Name of default Organization

    GM_CONTROL_API_ORG_EMAIL

    String

    ""

    Email of default Organization

    GM_CONTROL_API_ZONE_KEY

    String

    "zone-default-zone"

    Default Zone key to create on startup

    GM_CONTROL_API_ZONE_NAME

    String

    "default-zone"

    Name of default Zone created on startup

    GM_CONTROL_API_LOG_LEVEL

    String

    "info"

    Log level ("info", "debug", "warn")

    GM_CONTROL_API_VERBOSE_LOGGING

    Boolean

    false

    Trigger extra verbose logging of the debug level

    GM_CONTROL_API_PERSISTER_TYPE

    String

    "file"

    Type of persistent storage for mesh config ("null", "file", "gmdata")

    GM_CONTROL_API_PERSISTER_PATH

    String

    "gm_control_api_backend.json"

    FUll path to file on disk. Only used if GM_CONTROL_API_PERSISTER_TYPE="file"

    GM_CONTROL_API_DOCS_PATH

    String

    "swagger.yaml"

    Path, relative to the server binary, of the swagger docs. Will be served at /.

    GM_CONTROL_API_GMDATA_OBJECT_POLICY_TEMPLATE

    String

    (if (contains email %s) (yield-all) (yield R X))

    Trigger serving only on HTTPS

| Name | Type | Default | Description |
| --- | --- | --- | --- |
| GM_CONTROL_API_GMDATA_POLICY_LABEL | String | "Localuser owned" | User-friendly label to apply to all gm-data uploaded objects |
| GM_CONTROL_API_GMDATA_TOP_LEVEL_FOLDER_SECURITY | String | "{"label":"PUBLIC//EVERYBODY","foreground":"#FFFFFF","background":"#008800"}" | Trigger serving only on HTTPS |
| GM_CONTROL_API_GMDATA_SERVICES_FILE_SECURITY | String | "{"label":"PUBLIC//EVERYBODY","foreground":"#FFFFFF","background":"#008800"}" | Trigger serving only on HTTPS |
| GM_CONTROL_API_GMDATA_STARTUP_DELAY | String | "5s" | Duration to wait after startup before trying to connect to gm-data |
| GM_CONTROL_API_GMDATA_MAX_EVENTS | Integer | 1000 | Maximum number of history events to return with each listing |
| GM_CONTROL_API_GMDATA_POLLING_INTERVAL | String | "2s" | Duration to wait between polling gm-data for configuration changes |
| GM_CONTROL_API_GMDATA_MAX_RETRIES | Integer | 50 | Maximum number of times to attempt to connect to gm-data |
| GM_CONTROL_API_GMDATA_RETRY_DELAY | String | "5s" | Duration between gm-data connection attempts |
| GM_CONTROL_API_GMDATA_ROOT_EVENT_NAME | String | "world" | The name of the root folder in gm-data. Must match the gm-data deployment. |
| CLIENT_ADDRESS | String | "localhost" | Host of the gm-data connection |
| CLIENT_PORT | String | "5555" | Port of the gm-data connection |
| CLIENT_PREFIX | String | "/services/data/1.0" | Any URI path prefix on the gm-data connection |
| CLIENT_IDENTITY | String | "" | Identity to use when connecting to gm-data. Determines available policies and permissions. |
| CLIENT_EMAIL | String | "" | Email field of objects stored in gm-data |
| CLIENT_USE_TLS | Boolean | false | Connect to gm-data using TLS |
| INSECURE_TLS | Boolean | true | Connect to gm-data instance without doing server name verification |
| MONGO_USE_TLS | Boolean | false | |

| Name | Type | Default | Description |
| --- | --- | --- | --- |
| GM_CONTROL_API_ADDRESS | String | "0.0.0.0:5555" | Host interface and port to start the server on |
| GM_CONTROL_API_USE_TLS | Boolean | false | Trigger serving only on HTTPS |
| GM_CONTROL_API_CA_CERT_PATH | String | | |

    Configuration Variables

| Name | Type | Default | Description |
| --- | --- | --- | --- |
| BASE_URL | String | "/" | Base URL, relative to the Grey Matter installation's hostname |
| SERVER_SSL_ENABLED | Boolean | false | Informs service to receive client connections over SSL only |

    hashtag
    Questions

    circle-check

    Need help setting up the Intelligence 360? Create an account at Grey Matter Supportarrow-up-right to reach our team.

    Background

Normally, the OS manages the trusted certificates that tools like the browser and curl use. These trusted certificates are provided by reliable third-party certificate authorities such as Verisign and Cloudflare. Over time, as certificates expire or are updated, re-running OS tools like update-ca-certificates fetches the most recent certificates for known entities like google.com and amazon.com.

    When a customer uses self-signed certificates, they may not be using one of these third-party companies for validation. In this case, the browser and curl won't trust that they are who they say they are, unless we explicitly tell them to using the methods outlined here.

    hashtag
    Setup

To configure new trusted certificates, we'll need to pull the existing docker image and re-build it with any added certs to trust. This can't easily be done just by mounting certs at runtime, as that would also require overriding the default command (possible to do, just not very clean).

    Note: the docs here assume the container is running Alpine Linux, but similar procedures can be found for Ubuntu, CentOS, etc.

Warning: Each certificate loaded must be in a separate file. Any file with multiple certs will cause the procedure to fail.

1. Set up a local directory with any certs that need to be trusted.

      1. For demo purposes here, this directory is called ./certs-override

2. Insert certs into that directory, one cert per file. E.g.

      1. trust1.pem, trust2.pem, trust3.pem, etc

3. Build the new image with the command docker build -t test/gm-proxy -f Dockerfile . This will:

      1. Pull an existing proxy build

      2. Mount in the local certs

      3. Run the OS update command (update-ca-certificates)

      4. Set up the correct default runtime command

    The resulting image can be run exactly the same as any other gm-proxy image, but operations will now trust the mounted certs as well.
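The steps above can be sketched as the following commands, assuming the Dockerfile shown in the next section and a populated ./certs-override directory; the test/gm-proxy tag is illustrative:

```shell
# Build an image layering the extra trusted certs onto the stock proxy image
docker build -t test/gm-proxy -f Dockerfile .

# Run it exactly as you would the stock gm-proxy image
docker run --rm test/gm-proxy
```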

    hashtag
    Dockerfile

    Oauth Filterarrow-up-right

    Data

    hashtag
    Set up S3 for Grey Matter Data

    This guide goes over the steps to set up an S3 bucket to use to persist Grey Matter Data. These steps assume the AWS CLI is configured for your desired profile and region.

    hashtag
    1. Create a new S3 bucket

    Choose the name of your desired S3 bucket, and run the following to create it:
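A minimal sketch of the bucket-creation step, assuming the AWS CLI is installed and configured; my-gmdata-bucket and us-east-1 are placeholder values:

```shell
# Create the bucket that will back Grey Matter Data
aws s3api create-bucket \
  --bucket my-gmdata-bucket \
  --region us-east-1
```

Note that regions other than us-east-1 also require a --create-bucket-configuration LocationConstraint argument.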

    You may get a warning that the bucket name is not available, if so choose a new name and rerun.

    hashtag
    2. Create an IAM policy

In your browser, open the AWS IAM consolearrow-up-right and click on Policies under Access Management in the left pane. Click Create Policy.

In the JSON tab, copy the following and paste it in. Replace <data-bucket-name> with the name of the bucket you created in step 1:
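A sketch of what such a policy might look like; the exact action list your deployment needs may differ, so treat this as an illustrative starting point rather than the definitive policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<data-bucket-name>"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::<data-bucket-name>/*"
    }
  ]
}
```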

Click Review Policy, name the policy gmdata-s3, and click Create Policy.

This policy will give a Grey Matter Data user access to the bucket.

    hashtag
    3. Create an IAM user for the policy

    Now create an IAM user named gm-data:

    To attach the policy created in step 2, run:

    The output will look like this:

    Copy the value in the Arn field from the output, fill it into <policy-arn>, and run the following:
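One way the commands in this step might look, assuming the AWS CLI; the list-policies query shown is just one way to recover the policy ARN:

```shell
# Create the IAM user for Grey Matter Data
aws iam create-user --user-name gm-data

# Look up the ARN of the gmdata-s3 policy created in step 2
aws iam list-policies --query "Policies[?PolicyName=='gmdata-s3']"

# Attach the policy, substituting the Arn value from the output above
aws iam attach-user-policy --user-name gm-data --policy-arn <policy-arn>
```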

    hashtag
    4. Get user credentials

    To get programmatic access credentials for the user created in step 3, run:
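A sketch of the credentials command; its JSON response contains the AccessKeyId and SecretAccessKey values to record:

```shell
# Generate an access key pair for the gm-data user
aws iam create-access-key --user-name gm-data
```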

    Immediately save the credentials for AccessKeyId and SecretAccessKey somewhere secure and accessible during the Grey Matter installation.

    You are now ready to install Grey Matter Data to persist to S3!

    API

The Grey Matter Mesh is built from a number of logical abstractions, or objects, described in the following table. The arrangement, connections, and configuration of these objects define the mesh and its behaviors.

    • cluster

    • domain

    hashtag
    Summary

    hashtag
    Detailed Descriptions

    hashtag
    Request Flow

    1. Listener: A request from inside or outside the mesh is first sent to the host:port of a listener.

    2. Domain: The request host:port is matched against the available domains, and redirected if need be, and sent to an attached set of routes.

    hashtag
    Nested Objects

Each API object discussed above is made up of individual fields and nested structures. Some structures are unique to a given object, while others are re-used in multiple places. These are discussed below.

    Catalog

    Server Setup for Grey Matter Catalog

Grey Matter Catalog provides additional metadata about services to the Grey Matter Dashboard. It connects to Grey Matter Control to retrieve discovered services and combines them with user-driven configuration to create summarized metadata about services running within a service mesh. Although it was designed to power the Grey Matter Intelligence 360 Application, this service can be leveraged as an API to develop further visualizations of a service mesh.

    See our guide on how to use Grey Matter Catalogarrow-up-right for how to interact with this service when it's up and running.

    hashtag
    Server Configuration Options

    hashtag
    Environment Variables

    hashtag
    Control

    The Catalog service discovers service instances from Grey Matter Control and up to ten instances of Control may be configured. Each environment variable for a specific control instance should have its own index number by replacing <N>. For example, CONTROL_SERVER_0_ADDRESS, CONTROL_SERVER_1_ADDRESS, etc.
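For example, pointing Catalog at two Control instances might look like the following; the hostnames and port are placeholders, not values mandated by Catalog:

```shell
# Index 0 and 1 select two independently configured Control instances
export CONTROL_SERVER_0_ADDRESS=control-a.mesh.local:50000
export CONTROL_SERVER_1_ADDRESS=control-b.mesh.local:50000
```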

    hashtag
    TLS

    In Grey Matter service mesh deployments, Transport Layer Security (TLS) is typically handled by a sidecar in front of the Catalog Service. In other words, the Catalog Service is treated like any other service in the service mesh, whereby security is offloaded to the sidecar proxy. However, this service can run standalone, and in these cases the following environment variables can be set to configure TLS.

    hashtag
    Persistence

As indicated above, there are two options for configuring the services known by the Catalog service: gmdata and env. Think about your deployment model before choosing either of these options; both are highlighted below.

    hashtag
    Configure Persistence Grey Matter Data

    When the Grey Matter service mesh is deployed via our , Catalog will already be configured to persist service configuration in an internal instance of Grey Matter Data. This is our recommended approach for production deployments because of the added security and change history provided by Grey Matter Data. Changes to services stored in Grey Matter Data are made possible via the Catalog RESTful API.

Below is a list of environment variables required to use Grey Matter Data as the persistent store. Note that these are automatically set when deploying with our .

    hashtag
    Configure Persistence with Environment Variables

The second configuration option is to statically define the metadata associated with each service via environment variables. This is a quick way to get Catalog running without having to set up Grey Matter Data, but there are some pitfalls with this approach.

    triangle-exclamation

    Pitfalls of Configuring Persistence with Environment Variables

    If you switch to using Grey Matter Data after using environment variables then the metadata will be stored in Grey Matter Data and subsequent changes to the environment variables won't take effect. You can end up creating a disconnect between configurations if you switch back and forth.

    If you use the Catalog RESTful API to make configuration changes and then switch back to using environment variables, you will need to keep environment variables in sync manually with those changes.

    The following environment variables define the additional metadata associated with each service known by Catalog. Each environment variable for a specific service instance should have its own index number by replacing <N>. For example, SERVICE_0_CLUSTER_NAME, SERVICE_1_CLUSTER_NAME, etc.
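For example, metadata for two services might be indexed like this; the values are placeholders, and the other SERVICE_<N>_* variables follow the same indexing pattern:

```shell
# Index 0 and 1 select two separately described services
export SERVICE_0_CLUSTER_NAME=catalog
export SERVICE_1_CLUSTER_NAME=data
```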

    hashtag
    Configuration File

    circle-info

    Note: the following is not a typical deployment model.

If you're running the Catalog executable binary manually, you can also provide a configuration file with key/value pairs.

    If there are conflicting configurations between this file and environment variables, then the environment variables will take precedence.

    hashtag
    Questions

    circle-check

    Need help configuring Grey Matter Catalog? Contact us at .

    Sense

    Grey Matter Sense consists of four primary components: Intelligence 360, SLO, Business Impact and Catalog.

    Image of Sense layers

    Get a refresher on how Sense fits into Grey Matter's architecture.

    Core Componentschevron-right

    hashtag
    Components

    Intelligence 360chevron-rightSLOchevron-rightCatalogchevron-right

    Static Resources

    • Loading GM-Control From Static Resources

      • Static File

    hashtag
    Static File

GM-Control has the ability to read in raw envoy clusters and listeners from a static file on startup. This is different from file discovery, which continually reads and writes the file dynamically during runtime. Static configs can be combined with discovery (see --xds.static-resources.conflict-behavior), but are themselves read-only, and are read in once at configuration time.

    hashtag
    Configuration

    Static file configs are read in from the following configuration values:

    hashtag
    API

    The static resource configuration has the following API:

    hashtag
    Cluster Templates

    Instead of defining a large number of clusters statically, specifying a clusterTemplate attribute provides a base structure which is filled in by each cluster definition in clusters. A full example of this is available in the .

    hashtag
    Local Clusters

GM-Control provides a quick and easy utility to specify clusters for hosts running on the same host as control (i.e. running on localhost). This can be useful for testing locally when running everything in a development environment on the same machine. It should not be used in a production environment.

    hashtag
    Examples

    To enable loading of local clusters, set the environment variable GM_CONTROL_LOCAL_CLUSTERS to a comma delimited string of cluster_name:port. Here is an example:
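A sketch matching the description that follows, where a single local cluster named gm-proxy listens on port 8080:

```shell
# One cluster_name:port entry; add more with commas, e.g. "a:8080,b:9090"
export GM_CONTROL_LOCAL_CLUSTERS="gm-proxy:8080"
```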

    This will create a pre-canned config for the gm-proxy cluster and import it into gm-control with the host 127.0.0.1:8080. The whole cluster config is listed below:

    Direct Config

    hashtag
    Config File

    The Grey Matter Proxy is first configured from a YAML configuration file on disk. This file takes any configuration available in the Envoy Bootstrap Config Filearrow-up-right, as well as the additional Grey Matter filters that are made available through the SDK.

    This file can be mounted into a container, or specified via flags with the gm-proxy binary.

    hashtag
    Base64 Encoding

    Due to the difficulties involved in mounting files directly on some container platforms, the Grey Matter Proxy also supports passing Envoy configuration files as a base64-encoded environment variable, which is then decoded and written to disk.

    When using this mechanism, you must supply the full config, because it will entirely overwrite the default on disk.
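A sketch of producing such a variable from a config file on disk. ENVOY_CONFIG is the variable the proxy reads (see the configuration table below); the -w0 flag, which disables line wrapping, is specific to GNU coreutils base64:

```shell
# For demonstration, a minimal bootstrap file; in practice, supply your full config
printf 'static_resources: {}\n' > config.yaml

# Encode it and expose it to the proxy as the ENVOY_CONFIG environment variable
export ENVOY_CONFIG=$(base64 -w0 config.yaml)
```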

    hashtag
    When to Use Direct Config

Using the template config discussed in the previous section is simpler and less error-prone. However, not everything is exposed via the template or API, as new functionality is continuously added to both Envoy and the Grey Matter Proxy. You may therefore find that certain configuration you want to specify is not yet available in the template or API; this is one situation where you would need direct configuration. Due to the flat nature of environment variables, highly nested configurations are easier to specify and read in JSON/YAML format. And if you already use Envoy in your current infrastructure, the hurdle of conversion is lower if you can reuse configuration files you already have. Whatever the reason, Grey Matter provides a direct configuration mechanism.

Envoy fundamentally employs an eventual-consistency model; however, there are cases where the order in which resources (e.g. endpoints, routes, clusters) are generated or modified is crucial. For this, the Aggregated Discovery Service (ADS) coalesces all resource updates into a single stream to a single management server. If you would like to enable this on startup, providing it via direct configuration will likely be more straightforward and efficient. Perhaps the easiest way to think of direct configuration is as a way to provide the bootstrap configuration in its native format, whether you use it as a static or dynamic configuration.

    Here is the example configuration:

    You can provide this as a file or base64 encoded environment variable to your sidecar. If you are using Kubernetes, you could create a ConfigMap with a content like above, and when you create your deployment or statefulset for your service, you can pass that information to GM Proxy like this:

    As you see, the config.yaml file gets mapped under /etc/greymatter/ and when you create a service, that location gets passed on to the proxy via the argument -c /etc/greymatter/config.yaml as we discussed above.
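A sketch of the Kubernetes wiring described above; the names (proxy-config, gm-proxy) are illustrative, and the image tag follows the Dockerfile example elsewhere in these docs:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-config
data:
  config.yaml: |
    # full Envoy bootstrap config goes here
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gm-proxy
spec:
  replicas: 1
  selector:
    matchLabels: {app: gm-proxy}
  template:
    metadata:
      labels: {app: gm-proxy}
    spec:
      containers:
        - name: gm-proxy
          image: docker.greymatter.io/release/gm-proxy:1.4.5
          # Point the proxy at the mounted bootstrap config
          args: ["-c", "/etc/greymatter/config.yaml"]
          volumeMounts:
            - name: proxy-config
              mountPath: /etc/greymatter
      volumes:
        - name: proxy-config
          configMap: {name: proxy-config}
```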

    hashtag
    Questions

    circle-check

    Need help?

Create an account at Grey Matter Supportarrow-up-right to reach our team.

    Template Config

    hashtag
    Config File

    The Grey Matter Proxy receives its initial configuration in the form of a YAML configuration file on disk. This file takes any configuration available in the Envoy Bootstrap Config Filearrow-up-right, as well as the additional Grey Matter filters that are made available through the SDK.

    circle-info

    Since the full bootstrap config file has a large number of complex options, a select number of common options have been exposed via template files and environment variables.

    hashtag
    Sample Dynamic Configuration

The dynamic configuration template is used if PROXY_DYNAMIC=true. The bootstrap configuration then sets up the proxy to receive all other configuration through the control plane: the proxy starts with almost no configuration (no listeners, routes, clusters, filters, etc.), and receives it through actions end users take.

    hashtag
    Sample Static Configuration

    The static configuration template is used if PROXY_DYNAMIC=false (default). In this case, the environment variables set certain behavior and options in the bootstrap config file that the Grey Matter Proxy will use at startup. A simple example of setting a static config is shown below. Note that these options (and all other defaults) are then locked in while this proxy is running. You will need to restart it to receive any modifications.

    hashtag
    Full Config Options

The following tables list the full configuration options.

    hashtag
    Questions

    circle-check

    Need help?

Create an account at Grey Matter Supportarrow-up-right to reach our team.

    LDAP Configuration

    Configure JWT Security with LDAP

    You can configure the gm-jwt-security service to search an LDAP server for user payloads. To use LDAP as a backend service, refer to the following configuration options.

    hashtag
    Enable LDAP Configuration

    To enable, USE_LDAP must be set to true.

    hashtag
    Questions

    circle-check

    Need help configuring JWT for LDAP? Contact our team at .

    circuit-breakers

    hashtag
    Summary

    hashtag
    Example object

    hashtag
    Fields

    hashtag
    max_connections

    MaxConnections is the maximum number of connections that will be established to all instances in a cluster within a proxy. If set to 0, no new connections will be created. If not specified, defaults to 1024.

    hashtag
    max_pending_requests

    MaxPendingRequests is the maximum number of requests that will be queued while waiting on a connection pool to a cluster within a proxy. If set to 0, no requests will be queued. If not specified, defaults to 1024.

    hashtag
    max_requests

    MaxRequests is the maximum number of requests that can be outstanding to all instances in a cluster within a proxy. Only applicable to HTTP/2 traffic since HTTP/1.1 clusters are governed by the maximum connections circuit breaker. If set to 0, no requests will be made. If not specified, defaults to 1024.

    hashtag
    max_retries

    MaxRetries is the maximum number of retries that can be outstanding to all instances in a cluster within a proxy. If set to 0, requests will not be retried. If not specified, defaults to 3.

    hashtag
    max_connection_pools

    MaxConnectionPools is the maximum number of connection pools per cluster that Envoy will concurrently support at once. If not specified, the default is unlimited. Set this for clusters which create a large number of connection pools.

    hashtag
    track_remaining

    TrackRemaining enables the publishing of stats that expose the number of resources remaining until the circuit breakers open. If not specified, the default is false.

    hashtag
    high

Another circuit_breaker object, applied to high-priority requests.

    Connect to Control

Each Grey Matter Proxy connects to the Grey Matter Control server with a bi-directional gRPC stream. This connection is kept alive as long as both servers are up, and configuration updates flow from gm-control to each connected gm-proxy every 30 seconds (by default; the interval is configurable).

    hashtag
    Set Up the Connection

    To properly set up the connection, run gm-proxy with the following two environment variables.

    hashtag
    Use TLS (Optional)

    When connecting to the Envoy management server, proxies may also connect with TLS:

    hashtag
    Verify Connection

    If you receive repeated proxy log messages in the form below, it means that the connection to gm-control is failing. Usually this is because the address is incorrect or not addressable. If these logs do not appear, the connection is successful.

    circle-info

Note: a single or intermittent occurrence of this error message during startup is common, and is not a concern.

    hashtag
    Questions

    circle-check

    Need help connecting to Grey Matter Control?

Create an account at Grey Matter Supportarrow-up-right to reach our team.

    Grey Matter JWT Security

    Configuration details for the Grey Matter JWT Security service.

    You can deploy the Grey Matter JSON Web Token (JWT) Service many ways, including the following:

    • The preferred approach is to deploy via a Docker container running inside of an OpenShift or Kubernetes Pod.

    • The service is also packaged as a TAR file. The TAR contains an executable binary file you can deploy to a server.

    Follow the configuration requirements below to set up the Grey Matter JWT Security service.

    hashtag
    Prerequisites

    hashtag
    Environment Variables

    There are three required pieces of information to configure and run the service.

    • Set JWT_API_KEY as an environment variable

    • Set PRIVATE_KEY and USERS_JSON as a base64 encoded string, or as a volume mount (recommended)

    JWT_API_KEY is the base64 encoding of a comma separated list of API keys.

    The users.json file should have a users field that contains an array of user payloads. This is an example:
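The exact payload schema depends on your deployment's policies; apart from the required users array, the field names below are placeholders meant only to illustrate the shape:

```json
{
  "users": [
    {
      "label": "CN=example.user,OU=Engineering,O=Example",
      "values": {
        "email": ["example.user@example.com"]
      }
    }
  ]
}
```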

    circle-info

    Note: any service that provides the header api-key with a value matching one of the values in this list will have access to the /policies endpoint of the service, and can receive full jwt tokens.

    hashtag
    Example

For the API key list 123,my-special-key,super-secret-key,pub-key, JWT_API_KEY is set to the base64 encoding of that comma-separated string:
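The encoding can be produced with any base64 tool; a sketch (printf '%s' is used so that no trailing newline is encoded):

```shell
# Base64-encode the comma-separated API key list
export JWT_API_KEY=$(printf '%s' '123,my-special-key,super-secret-key,pub-key' | base64 -w0)
echo "$JWT_API_KEY"
```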

    Any service that provides the header api-key with a value matching one of the following will have access:

• 123

    • my-special-key

    • super-secret-key

    • pub-key

    hashtag
    Redis

The gm-jwt-security service creates and writes JWT tokens to a Redis server. To successfully generate and store JWT tokens, the service's Redis client must be configured to connect to a server using the following environment variables.

    hashtag
    Optional Configuration

    The following environment variables can be set to specify the host, ports, and logging capabilities of the gm-jwt-service. To specify an expiration time for generated tokens, set TOKEN_EXP_TIME.

    hashtag
    Configure LDAP

The gm-jwt-security service supports LDAP as a backend server to search for user payloads.

    circle-info

    Note: if LDAP is configured, it will take precedence over the users.json file. If LDAP is not configured, the configured USERS_JSON file will be searched for user payloads.

    hashtag
    Configure TLS

    TLS can be configured on the gm-jwt-security service using .

    hashtag
    Questions

    circle-check

    Need help configuring JWT? Contact us at: .

    TLS Configuration

    To enable TLS support for the service, perform the following steps:

    1. Set ENABLE_TLS to true

    2. Specify cert, trust, and key either through a volume mount (recommended) or the following environment variables.

    circle-exclamation

    In the event that both a volume mount and environment variables are provided, the volume mounted files will take precedence over the environment variables.

    hashtag
    Enable TLS Configuration

    hashtag
    Questions

    circle-check

    Need help?

Create an account at Grey Matter Supportarrow-up-right to reach our team.

    ssl

    hashtag
    Summary

    hashtag
    Example object

    hashtag
    Fields

    hashtag
    cipher_filter

Envoy cipher suitearrow-up-right string. If specified, only the listed ciphers will be accepted. Only valid with TLSv1-TLSv1.2; it has no effect with TLSv1.3.

    Examples include the values below, but full options should be found in the link above.

    • [ECDHE-ECDSA-AES128-GCM-SHA256|ECDHE-ECDSA-CHACHA20-POLY1305]

    • [ECDHE-RSA-AES128-GCM-SHA256|ECDHE-RSA-CHACHA20-POLY1305]

    • ECDHE-ECDSA-AES128-SHA

    hashtag
    protocols

    Array of SSL protocols to accept: "TLSv1_0", "TLSv1_1", "TLSv1_2", "TLSv1_3"

    hashtag
    cert_key_pairs

Array of (cert, key) pairs to use when sending requests to the instances of the cluster. Each cert or key must point to files on disk.

    hashtag
    trust_file

String representing the path on disk to the SSL trust file to use when sending requests to the instances of the cluster. If omitted, then no trust verification will be performed.

    hashtag
    sni

    String representing the intended target of the request. Used when the server is behind a load balancer that identifies hosts through SNI.

    SLO

    The Grey Matter Service Level Objective (SLO) service is compatible with Postgres versions 10.x and 11.x only. For more information on the SLO service and using its API, see the usage docs.

    hashtag
    SSL Configuration

    The server certificate must have a CN that matches the hostname of the Postgres server. See Postgres Secure TCP/IP Connections with SSLarrow-up-right for details.

    To ensure that clients connect via SSL a pg_hba.conf file must be configured accordingly.
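For example, a pg_hba.conf that accepts only SSL TCP connections might contain a hostssl rule like the following; the address range and auth method are deployment-specific placeholders:

```
# TYPE   DATABASE  USER  ADDRESS     METHOD
hostssl  all       all   0.0.0.0/0   md5
```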

    hashtag
    Example

    Certificates and the pg_hba.conf file must be volume mounted into the container and referenced via a Postgres startup command. The same configuration should be followed for production deployments.

    hashtag
    Configuration Variables

    hashtag
    Questions

    circle-check

    Need help configuring SLOs? Contact us at .

    Configuration and Usage

    hashtag
    Reference Configurations

The source distribution includes a set of example configurations for each of the major Grey Matter deployment types:

    • Service-to-service

    • Sidecar Proxy

    • Edge Proxy

    • Multi-mesh

The goal of this set of example configurations is to demonstrate the full capabilities of Grey Matter in a complex deployment. Not all features will be applicable to all use cases. For full documentation see the .

    hashtag
    Configuration Generator

    We use templating to make our configurations easier to create and manage. The source distribution includes a version of the configuration generator that loosely approximates what we use at Lyft. We have also included three example configuration templates for each of the above three scenarios.

    • Generator script

    • Service to service template

    • Sidecar Proxy Template

    To generate the example configurations run the following from the root of the repo:

The previous command will produce three fully expanded configurations using variables defined inside the generator script. See the comments inside the script for detailed information on how the different expansions work.

    circle-info

    Notes about the example configurations:

    • An instance of endpoint discovery service is assumed to be running at discovery.yourcompany.net

    hashtag
    Grey Matter Configuration Topics

    • Consul

    • Fabric

      • Set Up the Grey Matter CLI

    hashtag
    Questions?

    circle-info

    Configuration Issues?

Contact us at Grey Matter Supportarrow-up-right to reach our team.

    health checks

    hashtag
    Summary

    hashtag
    Example object

    ssl

    hashtag
    Summary

    hashtag
    Example object

    domain

    hashtag
    Summary

    Each domain controls requests for a specific host:port. This permits different handling of requests to domains like localhost:8080 or catalog:8080 if desired. If uniform handling is required, wildcards are understood to apply to all domains. A domain set to match *:8080 will match both of the above domains.

    redirect

    hashtag
    Summary

Redirects specify how URLs may need to be rewritten. Each Redirect has a name, a regex that matches the requested URL, a to value indicating how the URL should be rewritten, and a flag indicating how the redirect will be handled by the proxying layer.

    hashtag

    GM_CONTROL_API_ADDRESS="0.0.0.0:5555"
    GM_CONTROL_API_ORG_KEY="deciphernow"
    GM_CONTROL_API_PERSISTER_TYPE="file"
    GM_CONTROL_API_PERSISTER_PATH="/control-plane/data/backend.json"
    GM_CONTROL_API_ZONE_KEY="default-zone"
    GM_CONTROL_API_ZONE_NAME="default-zone"
    GM_CONTROL_API_PERSISTER_TYPE=null
    GM_CONTROL_API_PERSISTER_TYPE=file
    GM_CONTROL_API_PERSISTER_TYPE=gmdata
    FROM docker.greymatter.io/release/gm-proxy:1.4.5
    
    # Switch to root user, necessary for the following operations
    USER root
    
    ADD ./certs-override/ /usr/local/share/ca-certificates/
    RUN update-ca-certificates
    
    # Switch back to a non-root user for execution
    USER gmproxy
    
    CMD ./gm-proxy -c config.yaml
    {
      "max_connections": 1,
      "max_pending_requests": 1,
      "max_retries": 1,
      "max_requests": 1,
      "max_connection_pools": 1,
      "track_remaining": true,
      "high": null
    }
    {
      "cipher_filter": "",
      "protocols": [
        "TLSv1_0",
        "TLSv1_1",
        "TLSv1_2",
        "TLSv1_3"
      ],
      "cert_key_pairs": [
        {
          "certificate_path": "/etc/proxy/tls/sidecar/server.crt",
          "key_path": "/etc/proxy/tls/sidecar/server.key"
        }
      ],
      "trust_file": "/etc/proxy/tls/sidecar/ca.crt",
      "sni": null
    }


| Name | Type | Default | Description |
| --- | --- | --- | --- |
| SERVER_SSL_CA | String | "" | Path to client trust file (SERVER_SSL_ENABLED=true is required) |
| SERVER_SSL_CERT | String | "" | Path to client certificate (SERVER_SSL_ENABLED=true is required) |
| SERVER_SSL_KEY | String | "" | Path to client private key (SERVER_SSL_ENABLED=true is required) |
| CONFIG_SERVER | String | http://localhost:5555/v1.0/ | Control API endpoint (for retrieving mesh configuration of services) |
| FABRIC_SERVER | String | http://localhost:1337/services/catalog/1.0/ | Catalog endpoint (for retrieving metadata of mesh services) |
| OBJECTIVES_SERVER | String | http://localhost:1337/services/slo/1.0/ | Service Level Objectives endpoint (for retrieving and setting performance objectives) |
| PROMETHEUS_SERVER | String | http://localhost:1337/services/prometheus/2.3/api/v1/ | Prometheus endpoint (for retrieving historical service metrics) |
| USE_PROMETHEUS | Boolean | true | Use Prometheus to query service level metrics |
| SENSE_SERVER | String | http://localhost:1337/services/sense/latest/ | Sense endpoint (for displaying recommended scaling of services). Experimental endpoint |
| ENABLE_SENSE | Boolean | false | Sense feature toggle |
| HIDE_EXTERNAL_LINKS | Boolean | false | Hide Decipher social links in the app footer |
| EXPOSE_SOURCE_MAPS | Boolean | false | Expose JavaScript source maps to web browsers in production (recommended for debugging only) |

| Variable | Default Value | Description | Type |
| --- | --- | --- | --- |
| USE_LDAP | false | true to configure and search an LDAP server for user payloads | bool |
| LDAP_ADDR | "ldap.example.com" | the LDAP server address | string |
| LDAP_PORT | 389 | the LDAP server port | uint |
| LDAP_TLS | false | true to encrypt the LDAP connection | bool |
| LDAP_BASE_DN | dc=example,dc=com | base userDN for LDAP search requests | string |
| LDAP_USER | "cn=read-only-admin,dc=example,dc=com" | user to associate with the LDAP session | string |
| LDAP_USER_PASSWORD | "cGFzc3dvcmQK" (echo "password" \| base64) | password to associate with the LDAP session user | base64 |
| LDAP_TEST_DN | "cn=admin,dc=example,dc=com" | test user payload for LDAP | string |

• ECDHE-RSA-AES128-SHA
  • AES128-GCM-SHA256

  • AES128-SHA

  • ECDHE-ECDSA-AES256-GCM-SHA384

  • ECDHE-RSA-AES256-GCM-SHA384

  • ECDHE-ECDSA-AES256-SHA

  • ECDHE-RSA-AES256-SHA

  • AES256-GCM-SHA384

  • AES256-SHA

  • Envoy cipher suitearrow-up-right

| Variable | Mount Location | Default Value | Description | Type |
| --- | --- | --- | --- | --- |
| ENABLE_TLS | - | false | true to enable TLS support | bool |
| SERVER_CERT | /gm-jwt-security/certs/server.cert.pem | "" | base64 encoded server certificate | base64 |
| SERVER_KEY | /gm-jwt-security/certs/server.key.pem | "" | base64 encoded server key | base64 |
| SERVER_TRUST | /gm-jwt-security/certs/server.trust.pem | "" | base64 encoded server trust store | base64 |


| Destination | Short Form | Long Form | Default |
| --- | --- | --- | --- |
| Envoy | -c | --config-file | config.yaml (current directory) |

| Variable | Default | Description |
| --- | --- | --- |
| ENVOY_CONFIG | "" | Base64 encoded string of envoy configuration file |

| Option | Description | Example | Default | Required |
| --- | --- | --- | --- | --- |
| TIMEOUT | Cluster route timeout | 3000s |  | No |
| IDLE_TIMEOUT | Cluster idle timeout | 3000s |  | No |
| DRAIN_TIMEOUT | Listener drain timeout | 3000s |  | No |
| USE_HTTP2 | Enable HTTP/2 for static cluster connection (doesn't work with HTTP/1.0) | true | false | No |
| ACCEPT_HTTP_10 | Accept HTTP/1.0 connections on the Envoy static listener | true | false | No |
| SERVICE_DNS_TYPE | The type of DNS envoy will use to connect to the static cluster | LOGICAL_DNS | STRICT_DNS | No |
| SERVICE_HOST | Proxied service host | example-service | 0.0.0.0 | Yes |
| SERVICE_PORT | Proxied service port | 3000 |  | Yes |
| ZK_ANNOUNCE_PATH | Zookeeper discovery path | /services/example-service/1.0.0 |  | Yes |
| ZK_ANNOUNCE_HOST | Host of the original service | 172.0.3.18 | 0.0.0.0 | No |
| ZK_ADDRS | List of host:port locations for ZooKeeper nodes | zk1:2181,zk2:2181 | localhost:2181 | Yes |
| METRICS_PORT | Port for metrics listener | 8081 | 8081 | Yes |
| METRICS_FABRIC_PATH | Route for Grey Matter Dashboard metrics collection | /metrics | /metrics | No |
| METRICS_PROMETHEUS_PATH | Route for Prometheus metrics collection | /prometheus | /prometheus | No |
| METRICS_USE_TLS | Expose metrics over 2-way SSL | false | false | No |
| INGRESS_USE_TLS | Enable ingress TLS to the Envoy listener | false | false | No |
| INGRESS_CA_CERT_PATH | Ingress trust certificate path | ./ingress/trust.pem |  | No |
| INGRESS_CERT_PATH | Ingress certificate path | ./ingress/cert.pem |  | No |
| INGRESS_KEY_PATH | Ingress key certificate path | ./ingress/key.pem |  | No |
| EGRESS_USE_TLS | Enable 2-way SSL to the proxied service | false | false | No |
| EGRESS_CA_CERT_PATH | Egress trust certificate path | ./egress/trust.pem |  | No |
| EGRESS_CERT_PATH | Egress certificate path | ./egress/cert.pem |  | No |
| EGRESS_KEY_PATH | Egress key certificate path | ./egress/key.pem |  | No |
| DELAY_MEAN | Obfuscation delay mean | 1 |  | No |
| DELAY_STD | Obfuscation delay std | 4 |  | No |
| OAUTH_ENABLED | Full OAuth 2.0 functionality | true |  | No |
| OAUTH_CLIENT_ID | Client ID issued by the authorization server | client-id |  | No |
| OAUTH_CLIENT_SECRET | Client secret issued by the authorization server | client-secret |  | No |
| OAUTH_SERVER_NAME | Authorization server name | server |  | No |
| OAUTH_SERVER_INSECURE | Enable if the OAuth authorization server is insecure | true | false | No |
| OAUTH_SESSION_SECRET | OAuth session secret | secret |  | No |
| OAUTH_DOMAIN | Provider domain | "" |  | No |
| CW_ENABLED | Enable Amazon CloudWatch metrics collection | false | false | No |
| CW_NAMESPACE | Customize namespace where metrics will be stored | GM/EC2 | GM/EC2 | No |
| CW_METRICS_ROUTES | Regular expression describing routes to be recognized | ^all$ | ^all$ | No |
| CW_METRICS_VALUES | Values reported to Amazon CloudWatch | latency_ms.count,latency_ms.p50,latency_ms.p9999,in_throughput,out_throughput | latency_ms.count,latency_ms.p50,latency_ms.p9999,in_throughput,out_throughput | No |
| CW_DIMENSIONS | The dimension names/values that the specified metrics will be stored under | AutoScalingGroupName: test-proxy-asg, ServiceName: gm-fabric-proxy | AutoScalingGroupName: test-proxy-asg, ServiceName: gm-fabric-proxy | No |
| AWS_REGION | AWS defined region | us-east-1 | us-east-1 | No |
| AWS_ACCESS_KEY_ID | AWS provided access key credential |  |  | No |
| AWS_SECRET_ACCESS_KEY | AWS provided secret access key credential |  |  | No |
| AWS_PROFILE | A locally defined AWS profile name associated with valid AWS credentials | default | default | No |
| AWS_CONFIG_FILE | Location of the local AWS config | /root/.aws/config | ~/.aws/config | No |
| OBS_ENABLED | Enables event emission to various brokers | true | false | No |
| OBS_KAFKA_TOPIC | Kafka topic to send observables on | gm-sidecar-events | false | No |
| OBS_TOPIC | Topic for the observable event. Sets eventType in the payload. | "" | false | No |
| OBS_ENFORCED | Audit all events which pass through the proxy | false | false | No |
| OBS_FULL_RESPONSE | If true, dump the request/response bodies as well as the regular audit event. If KAFKA_ENABLED, also dumps into Kafka. | false | false | No |
| KAFKA_ENABLED | Enable event emission to a Kafka topic | false | false | No |
| KAFKA_ZK_DISCOVER | Discovery of Kafka brokers from ZooKeeper | false | false | No |
| KAFKA_SERVER_CONNECTION | List of Kafka node locations | kafka:9091,kafka2:9091 | localhost:9091 | No |
| USE_KAFKA_TLS | Enable TLS communication with Kafka nodes | false | false | No |
| KAFKA_TLS_TRUSTS | Certificate authorities to be used when connecting to Kafka over TLS (comma delimited) | file:///opt/certs/truststore.pem | "" | No |
| KAFKA_TLS_CERT | Certificate to be used when connecting to Kafka over TLS | file:///opt/certs/certificate.pem | "" | No |
| KAFKA_TLS_KEY | Certificate key to be used when connecting to Kafka over TLS | file:///opt/certs/key.pem | "" | No |
| KAFKA_SERVER_NAME | Server name to be used when connecting to Kafka over TLS | cn=kafka-node | "" | No |
| ACL_ENABLED | Enables 2-way SSL impersonation REST filter | false | false | No |
| ACL_SERVER_LIST | A list of server DNs to be whitelisted (pipe delimited) | C=US,ST=Virginia,L=Alexandria,O=Decipher Technology Studios,OU=Engineering,CN=localhost |  | No |
| LISTAUTH_ENABLED | Enable/disable the whitelist/blacklist feature | false | false | No |
| LISTAUTH_WHITELIST | List of DNs to be whitelisted (pipe delimited) | C=US,ST=Virginia,L=Alexandria,O=Decipher Technology Studios,OU=Engineering,CN=localhost |  | No |
| LISTAUTH_BLACKLIST | List of DNs to be blacklisted (pipe delimited) | C=US,ST=Virginia,L=Alexandria,O=Decipher Technology Studios,OU=Engineering,CN=localhost |  | No |
| ENVOY_CONFIG | Base64 encoded string of envoy configuration file |  |  | No |
| GM_CONFIG | Base64 encoded string of gm-config.yaml configuration file |  |  | No |
| INGRESS_TLS_CERT | Base64 encoded cert written out to ./certs/ingress_localhost.crt |  |  | No |
| INGRESS_TLS_KEY | Base64 encoded key written out to ./certs/ingress_localhost.key |  |  | No |
| INGRESS_TLS_TRUST | Base64 encoded trust written out to ./certs/ingress_intermediate.crt |  |  | No |
| EGRESS_TLS_CERT | Base64 encoded cert written out to ./certs/egress_localhost.crt |  |  | No |
| EGRESS_TLS_KEY | Base64 encoded key written out to ./certs/egress_localhost.key |  |  | No |
| EGRESS_TLS_TRUST | Base64 encoded trust written out to ./certs/egress_intermediate.crt |  |  | No |
| PROXY_DYNAMIC | Enable dynamic configuration from Grey Matter xDS | true | false | No |
| XDS_CLUSTER | Envoy xDS proxy cluster identifier | catalog | us-east-1 | Yes (only for dynamic config) |
| XDS_NODE_ID | Envoy node id per xDS configuration | default-node | default | Yes (only for dynamic config) |
| XDS_HOST | Host of Grey Matter xDS server | gm-xds | localhost | Yes (only for dynamic config) |
| XDS_PORT | Port of Grey Matter xDS server | 18000 | 18000 | Yes (only for dynamic config) |
| XDS_ENABLE_TLS | Enable TLS when communicating with the xDS server | true | false | No (only for dynamic config) |
| XDS_SERVER_CERT_PATH | Path to certificate file to be used for connecting to xDS | certs/xds_server_cert.crt | certs/xds_server_cert.crt | No (only for dynamic config) |
| XDS_SERVER_KEY_PATH | Path to key file to be used for connecting to xDS | certs/xds_server_key.key | certs/xds_server_key.key | No (only for dynamic config) |
| XDS_SERVER_CA_PATH | Path to CA file to be used for connecting to xDS | certs/xds_server_ca.crt | certs/xds_server_ca.crt | No (only for dynamic config) |
| INHEADERS_ENABLED | Setup impersonation headers | false | false | No |
| ENVOY_ADMIN_LOG_PATH | Determine the path of logs the envoy admin server will emit to | /dev/stdout | /dev/null | No |
| ENVOY_ADMIN_HOST | The host the envoy admin server will listen on | 0.0.0.0 | 0.0.0.0 | No |
| ENVOY_ADMIN_PORT | The port the envoy admin server will listen on | 8001 | 8001 | No |
| SPIRE_PATH | The Unix domain socket path Envoy will use to connect to a Spire agent | /tmp/agent.sock | "" | No |
| SPIRE_PORT | The port a Spire agent is listening on if connecting over mTLS | 9090 | "" | No |
| SPIRE_HOST | The host a Spire agent is listening on if connecting over mTLS | 0.0.0.0 | "" | No |
| SPIRE_CERT_PATH | The path of a Spire agent certificate used to create an mTLS connection | /certs/spire.crt | "" | No |
| SPIRE_KEY_PATH | The path of a Spire agent certificate key used to create an mTLS connection | /certs/spire.key | "" | No |
| TRACING_ENABLED | Turn on request tracing using the Zipkin config | false | false | No |
| TRACING_ADDRESS | The host of the trace collector server | localhost |  | No |
| TRACING_PORT | The port of the trace collector server | 9411 |  | No |
| TRACING_USE_TLS | Communicate to the trace server via TLS | false | false | No |
| TRACING_CA_CERT_PATH | Trace server trust certificate path | ./certs/egress_intermediate.crt |  | No |
| TRACING_CERT_PATH | Trace server certificate path | ./certs/egress_localhost.crt |  | No |
| TRACING_KEY_PATH | Trace server key certificate path | ./certs/egress_localhost.key |  | No |
| TRACING_DRIVER | Receives "zipkin", "lightstep", "datadog", "opencensus", "instana" | datadog | zipkin | No |
| TRACING_COLLECTOR_ENDPOINT | Used by Zipkin and OpenCensus (only when exporting to Zipkin). Endpoint on the tracing server to send spans. | /api/v1/spans | /api/v1/spans | No |
| TRACING_COLLECTOR_ENDPOINT_VERSION | API version of the tracing collector endpoint | HTTP_JSON | HTTP_JSON | No |
| TRACING_LIGHTSTEP_ACCESS_TOKEN_PATH | Used by Lightstep. Path to file containing the access token to the LightStep API. | ./cfg/lightstep | ./cfg/lightstep_access_token | No |
| TRACING_DATADOG_SERVICE_NAME | Used by Datadog. A unique identifier to display in the Datadog dashboard. | my-traced-service | gm-proxy | No |
| TRACING_OPENCENSUS_CONTEXT_HEADER | Header for manually tracking traces across services. Accepts "traceparent", "grpc-trace-bin", "x-cloud-trace-context", "x-b3-*". | x-cloud-trace-context | NONE | No |
| TRACING_OPENCENSUS_EXPORTER | Receives "ocagent", "stackdriver", "zipkin" | ocagent | zipkin | No |
| TRACING_OPENCENSUS_STACKDRIVER_PROJECT_ID | The cloud project_id to use when exporting to Stackdriver. | my-project |  | No |
| TRACING_INSTANA_LIBRARY_PATH | The path of the Instana library file to run when sending spans to Instana. | /app/instana_sensor.so | /app/instana_sensor.so | No |
| TCP_CLUSTER | Name to assign the cluster that will be used for proxying requests with a configured TCP proxy filter | tcp_proxy | "" | No |
| TCP_HOST | The host of a server that receives TCP connections | tcp_server | tcp_server | No |
| TCP_PORT | The port of a server that receives TCP connections | 3000 | 3000 | No |
| TCP_SNI | What Server Name Indication (SNI) to assign to the TCP cluster | www.google.com | "" | No |
| REDIS_CLUSTER | Name to assign the cluster that will be used for proxying Redis requests with a configured Redis proxy filter | redis_proxy | "" | No |
| REDIS_HOST | The host of a Redis server | redis_server | redis_server | No |
| REDIS_PORT | The port of a Redis server | 6379 | 6379 | No |
| REDIS_SNI | What Server Name Indication (SNI) to assign to the Redis cluster | www.google.com | "" | No |

| Option | Description | Example | Default | Required |
| --- | --- | --- | --- | --- |
| HOST | Host for Envoy listener | 0.0.0.0 | 0.0.0.0 | Yes |
| PORT | Port for Envoy listener | 8080 | 8080 | Yes |

If both are provided, the volume mount supersedes the set variable.

| Variable | Mount Location | Default Value | Description | Type |
| --- | --- | --- | --- | --- |
| USERS_JSON | /gm-jwt-security/etc/users.json | "" | base64 encoded users.json file | base64 |

| Variable | Mount Location | Default Value | Description | Type |
| --- | --- | --- | --- | --- |
| JWT_API_KEY | - | "" | base64 encoded string of comma separated api keys | base64 |
| PRIVATE_KEY | /gm-jwt-security/certs/jwtES512.key | "" | base64 encoded private key file | base64 |

| Variable | Default Value | Description | Type |
| --- | --- | --- | --- |
| REDIS_HOST | "0.0.0.0" | host name of Redis server | string |
| REDIS_PORT | "6379" | port number of Redis server | string |
| REDIS_DB | 0 | Redis database to be selected after connecting to the server | uint |
| REDIS_PASS | "123" | password for Redis server | string |

| Variable | Default Value | Description | Type |
| --- | --- | --- | --- |
| BIND_ADDRESS | "0.0.0.0" | bind address for the gm-jwt-security server | string |
| HTTP_PORT | 8080 | http port for the server | uint |
| HTTPS_PORT | 9443 | https port for the server | uint |
| ZEROLOG_LEVEL | "WARN" | logging level: INFO, DEBUG, WARN, ERR | string |
| TOKEN_EXP_TIME | 28800 | token expiration time in seconds | uint |
| DEFAULT_PATH | "/services/" | default path to apply to cookies generated by the /policies endpoint | string |

| Name | Type | Default | Description |
| --- | --- | --- | --- |
| GITHUB_ACCESS_KEY | String | "" | OAuth token used to interact with GitHub via automated scripts |
| LOG_LEVEL | String | debug (dev), error (prod) | Level of messages to log (see the Winston Logger documentation for more) |
| DROP_SCHEMA | Boolean | false | Controls whether or not the schema is dropped when the DB connection is established. Use with extreme caution in production. |
| DATABASE_URI | String | none | Database connection URL. In production, replace the password string with a secret. |
| SSL_ENABLED | Boolean | false | Informs service to connect to Postgres via SSL |
| SSL_SERVER_CA | String | none | Path to CA or intermediate certificate (SSL_ENABLED=true is required) |
| SSL_SERVER_CERT | String | none | Path to server certificate (SSL_ENABLED=true is required) |
| SSL_SERVER_KEY | String | none | Path to server certificate private key (SSL_ENABLED=true is required) |
| SERVICE_PORT | Number | 1337 | Port where gm-slo will listen (overridden to use 443 if SERVICE_SSL_ENABLED=true) |
| SERVICE_SSL_ENABLED | Boolean | false | Informs service to receive client connections over SSL only |
| SERVICE_SSL_CA | String | none | Path to client trust file (SERVICE_SSL_ENABLED=true is required) |
| SERVICE_SSL_CERT | String | none | Path to client certificate (SERVICE_SSL_ENABLED=true is required) |
| SERVICE_SSL_KEY | String | none | Path to client private key (SERVICE_SSL_ENABLED=true is required) |

  • Edge Proxy Template
  • Multi-mesh Template

  • DNS for yourcompany.net is assumed to be set up for various things. Search the configuration templates for different instances of this.
  • Tracing is configured for .... To disable this or enable . ... or .... tracing, delete or change the tracing configuration accordingly.

  • The configuration demonstrates the use of a global rate limiting service. To disable this, delete the rate limit configuration.

  • The route discovery service is configured for the service-to-service reference configuration and is assumed to be running at rds.yourcompany.net.

  • The cluster discovery service is configured for the service-to-service reference configuration and is assumed to be running at cds.yourcompany.net.

  • Set and Modify Sidecar Filters

  • Configure Audits

  • Use the Catalog API

  • Add and Delete a Service from the Mesh

  • Enable Audits to Be Ingested into Elasticsearch with Kibana

  • Mesh Configuration

    • Filters

      • Filters

      • RBAC

    • Networking

      • Host Identification

      • Protocol Switching

    • Routing

      • Multi-Mesh

  • More Mesh Configurations

    • Objects

      • Cluster

      • Domain

      • Listener

      • Proxy

      • Route

      • Shared Rules

      • Zone

    • gm-control

    • Discovery

    • Observability

    • L7 Traffic Management

    • Blacklist - Whitelist

    • Develop and Debug

  • Traffic Management

  • Authentication Policy

  • Authorization

  • Installation Options

  • Policies and Telemetry

  • Operator Installation

  • Resource Annotations

  • Installation Options Changes

  • Service Mesh

    Fields

    cipher_filter

    Envoy cipher suite. If specified, only the listed ciphers will be accepted. Only valid with TLSv1-TLSv1.2; it has no effect with TLSv1.3.

    Examples include the values below, but the full set of options can be found in the Envoy cipher suite documentation linked above.

    • [ECDHE-ECDSA-AES128-GCM-SHA256|ECDHE-ECDSA-CHACHA20-POLY1305]

    • [ECDHE-RSA-AES128-GCM-SHA256|ECDHE-RSA-CHACHA20-POLY1305]

    • ECDHE-ECDSA-AES128-SHA

    • ECDHE-RSA-AES128-SHA

    • AES128-GCM-SHA256

    • AES128-SHA

    • ECDHE-ECDSA-AES256-GCM-SHA384

    • ECDHE-RSA-AES256-GCM-SHA384

    • ECDHE-ECDSA-AES256-SHA

    • ECDHE-RSA-AES256-SHA

    • AES256-GCM-SHA384

    • AES256-SHA

    protocols

    Array of SSL protocols to accept: "TLSv1, TLSv1.1, TLSv1.2, TLSv1.3"

    cert_key_pairs

    Array of (cert, key) pairs to use when receiving requests on this listener. Each cert or key must point to files on disk.

    require_client_certs

    If true, client cert verification will be performed. If false, this check is disabled and client certificates are not required when connecting to this listener.

    trust_file

    String representing the path on disk to the SSL trust file to use when receiving requests on this listener. If omitted, no trust verification will be performed.

    sni

    String representing how this listener will identify itself during SSL SNI.

    Fields

    timeout_msec

    TimeoutMsec is the time to wait for a health check response. If the timeout is reached without a response, the health check attempt is considered a failure. This is a required field and must be greater than 0.

    interval_msec

    IntervalMsec is the interval between health checks. Note that the first round of health checks occurs during startup, before any traffic is routed to a cluster. This means that the no_traffic_interval_msec value is used as the first health check interval.

    interval_jitter_msec

    IntervalJitterMsec is an optional jitter amount that is added to each interval value calculated by the proxy. If not specified, defaults to 0.

    unhealthy_threshold

    UnhealthyThreshold is the number of unhealthy health checks required before a host is marked unhealthy. Note that for HTTP health checking, if a host responds with 503 this threshold is ignored and the host is considered unhealthy immediately.

    healthy_threshold

    HealthyThreshold is the number of healthy health checks required before a host is marked healthy. Note that during startup, only a single successful health check is required to mark a host healthy.

    reuse_connection

    ReuseConnection determines whether to reuse a health check connection between health checks. Default is true.

    no_traffic_interval_msec

    NoTrafficIntervalMsec is a special health check interval that is used when a cluster has never had traffic routed to it. This lower interval allows cluster information to be kept up to date without sending a potentially large amount of active health checking traffic for no reason. Once a cluster has been used for traffic routing, the proxy shifts back to the standard health check interval that is defined. Note that this interval takes precedence over any other. Defaults to 60s.

    unhealthy_interval_msec

    UnhealthyIntervalMsec is a health check interval that is used for hosts that are marked as unhealthy. As soon as the host is marked as healthy, the proxy shifts back to the standard health check interval that is defined. This defaults to the same value as IntervalMsec if not specified.

    unhealthy_edge_interval_msec

    UnhealthyEdgeIntervalMsec is a special health check interval that is used for the first health check right after a host is marked as unhealthy. For subsequent health checks the proxy shifts back to using either the "unhealthy interval" if present or the standard health check interval that is defined. Defaults to the same value as UnhealthyIntervalMsec if not specified.

    healthy_edge_interval_msec

    HealthyEdgeIntervalMsec is a special health check interval that is used for the first health check right after a host is marked as healthy. For subsequent health checks the proxy shifts back to the standard health check interval that is defined. Defaults to the same value as IntervalMsec if not specified.

    health_checker

    Defines the type of health checking to use. Options: http_health_check and tcp_health_check. An example of an HTTP health check:

    Example object

    Fields

    name

    Common name for this redirect, e.g. "force-https".

    from

    Regex pattern to match against incoming request URLs. Capture groups set here can be used in the to field.

    to

    New URL of the redirect. Can be a direct string ("https://localhost:443") or reference capture groups from the from field ("https://localhost:$1").

    redirect_type

    One of "permanent" or "temporary". Selecting "permanent" sets the response code to 301; "temporary" sets 302.

    header_constraints

    Array of Header Constraints that must match for the redirect to take effect.

    export DATA_S3_BUCKET=<data-bucket-name>
    aws s3api create-bucket --bucket $DATA_S3_BUCKET
    aws s3api put-public-access-block --bucket $DATA_S3_BUCKET --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DataReadWrite",
                "Action": [
                    "s3:AbortMultipartUpload",
                    "s3:DeleteObject",
                    "s3:GetBucketAcl",
                    "s3:GetBucketPolicy",
                    "s3:GetObject",
                    "s3:GetObjectAcl",
                    "s3:ListBucket",
                    "s3:ListBucketMultipartUploads",
                    "s3:ListMultipartUploadParts",
                    "s3:PutObject",
                    "s3:PutObjectAcl"
                ],
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:s3:::<data-bucket-name>/*",
                    "arn:aws:s3:::<data-bucket-name>"
                ]
            }
        ]
    }
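The later steps attach a managed policy named gmdata-s3, so the policy document above has to be created in IAM first. A hedged sketch of that step (the action list here is abbreviated from the full document above, and `<data-bucket-name>` remains a placeholder; the `aws iam create-policy` call is left commented because it needs valid credentials):

```shell
# Write an abbreviated copy of the policy document to disk.
cat > gmdata-s3-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DataReadWrite",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::<data-bucket-name>/*",
                "arn:aws:s3:::<data-bucket-name>"
            ]
        }
    ]
}
EOF

# Create the managed policy (uncomment once AWS credentials are configured):
# aws iam create-policy --policy-name gmdata-s3 \
#   --policy-document file://gmdata-s3-policy.json
```

The ARN printed by `create-policy` is the `<policy-arn>` used by `attach-user-policy` below.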
    aws iam create-user --user-name gm-data
    aws iam list-policies | grep -A 8 gmdata-s3
        "PolicyName": "gmdata-s3",
        "PolicyId": "<some-policy-id>",
        "Arn": "arn:aws:iam::<user-id>:policy/gmdata-s3",
        "Path": "/",
        "DefaultVersionId": "v1",
        "AttachmentCount": 0,
        "PermissionsBoundaryUsageCount": 0,
        "IsAttachable": true,
        "CreateDate": "2020-11-25T14:48:36+00:00",
        "UpdateDate": "2020-11-25T14:48:36+00:00"
    },
    aws iam attach-user-policy --user-name gm-data --policy-arn <policy-arn>
    aws iam create-access-key --user-name gm-data
    static_resources:
      listeners:
        - name: ingress
          address:
            socket_address:
              address: 0.0.0.0
              port_value: 8443
          filter_chains:
            - filters:
              - name: envoy.http_connection_manager
                config:
                  idle_timeout: 1s
                  forward_client_cert_details: sanitize_set
                  set_current_client_cert_details:
                      uri: true
                  codec_type: AUTO
                  access_log:
                    - name: envoy.file_access_log
                      config:
                        path: "/dev/stdout"
                  stat_prefix: ingress
                  route_config:
                    name: local
                    virtual_hosts:
                      - name: local
                        domains: ["*"]
                        routes:
                          - match:
                              prefix: "/"
                            route:
                              cluster: local
                  http_filters:
                    - name: gm.metrics
                      typed_config:
                        "@type": type.googleapis.com/foo.gm_proxy.filters.MetricsConfig
                        metrics_port: 8080
                        metrics_host: 0.0.0.0
                        metrics_dashboard_uri_path: /metrics
                        metrics_prometheus_uri_path: /prometheus
                        prometheus_system_metrics_interval_seconds: 15
                        metrics_ring_buffer_size: 4096
                        metrics_key_function: depth
                        metrics_key_depth: "2"
                    - name: envoy.router
              tls_context:
                common_tls_context:
                  tls_certificate_sds_secret_configs:
                    - name: "spiffe://foo.com/ns/fabric/sa/api"
                      sds_config:
                        api_config_source:
                          api_type: GRPC
                          grpc_services:
                            envoy_grpc:
                              cluster_name: spire
                  tls_params:
                    ecdh_curves:
                      - X25519:P-256:P-521:P-384
      clusters:
        - name: local
          connect_timeout: 0.25s
          type: STATIC
          lb_policy: ROUND_ROBIN
          load_assignment:
            cluster_name: local
            endpoints:
              - lb_endpoints:
                - endpoint:
                    address:
                      socket_address:
                        address: 127.0.0.1
                        port_value: 10080
        - name: spire
          connect_timeout: 0.25s
          http2_protocol_options: {}
          type: STATIC
          lb_policy: ROUND_ROBIN
          load_assignment:
            cluster_name: spire
            endpoints:
              - lb_endpoints:
                - endpoint:
                    address:
                      pipe:
                        path: /run/spire/sockets/agent.sock
    admin:
      access_log_path: /dev/stdout
      address:
        socket_address:
          address: 127.0.0.1
          port_value: 8001
    apiVersion: apps/v1
    kind: StatefulSet
    spec:
      serviceName: api
      template:
        metadata:
          ...
        spec:
          serviceAccount: api
          containers:
            - name: sidecar
              image: "docker.greymatter.io/release:1.4.5-alpine"
              imagePullPolicy: IfNotPresent
              args:
                - -c
                - /etc/greymatter/config.yaml
              command:
                - /app/gm-proxy
              ports:
                ...
              volumeMounts:
                - name: sidecar-config
                  mountPath: /etc/greymatter
                  readOnly: true
          ...
          volumes:
            - name: sidecar-config
              configMap:
                name: api-sidecar
    PROXY_DYNAMIC="true"
    XDS_CLUSTER="example"
    XDS_HOST="gm-control.fabric.svc"
    XDS_PORT="50000"
    HOST="0.0.0.0"
    PORT=8080
    SERVICE_HOST="localhost"
    SERVICE_PORT=9080
    METRICS_PORT=8081
    OBS_ENABLED=true
    PROXY_DYNAMIC=true   # To run in dynamic configuration mode
    
    XDS_HOST=<gm-control host>
    XDS_PORT=<gm-control port>
    PROXY_DYNAMIC=true   # To run in dynamic configuration mode
    
    XDS_SERVER_CA_PATH=<gm-control trust path>
    XDS_SERVER_CERT_PATH=<gm-control certificate path>
    XDS_SERVER_KEY_PATH=<gm-control certificate key path>
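Before pointing the cert and key paths at real files, it can help to confirm the certificate and key actually pair up. A quick openssl check (the snippet generates a throwaway self-signed pair as a stand-in for the real gm-control credentials):

```shell
# Generate a throwaway self-signed pair (placeholders for the real
# gm-control certificate and key).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=gm-control" \
  -keyout xds_server_key.key -out xds_server_cert.crt -days 1 2>/dev/null

# A certificate and key match only if their RSA moduli are identical.
cert_mod=$(openssl x509 -noout -modulus -in xds_server_cert.crt)
key_mod=$(openssl rsa -noout -modulus -in xds_server_key.key)
[ "$cert_mod" = "$key_mod" ] && echo "certificate and key match"
```

If the moduli differ, the sidecar's TLS handshake to gm-control will fail regardless of the env var values.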
    [2019-10-11 15:21:51.635][8][warning][config] [bazel-out/k8-fastbuild/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:102] gRPC config stream closed: 14, no healthy upstream
    [2019-10-11 15:21:51.635][8][warning][config] [bazel-out/k8-fastbuild/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:56] Unable to establish new stream
    {
      "users": [
        {
          "label": "CN=localuser,OU=Engineering,O=Decipher Technology Studios,=Alexandria,=Virginia,C=US",
          "values": {
            "email": [
              "localuser@deciphernow.com"
            ],
            "org": [
              "www.deciphernow.com"
            ]
          }
        },
        {
          "label": "cn=chris.holmes, dc=deciphernow, dc=com",
          "values": {
            "email": [
              "chris.holmes@deciphernow.com"
            ],
            "org": [
              "www.deciphernow.com"
            ],
            "privilege": [
              "root"
            ]
          }
        }
      ]
    }
    echo "123,my-special-key,super-secret-key,pub-key" | base64 MTIzLG15LXNwZWNpYWwta2V5LHN1cGVyLXNlY3JldC1rZXkscHViLWtleQo=
    # TYPE  DATABASE        USER            ADDRESS                 METHOD
    
    # "local" is for Unix domain socket connections only
    local all all trust
    
    # IPv4 local connections:
    host all all 127.0.0.1/32 trust
    
    # IPv4 remote connections for authenticated users
    hostssl all www-data 0.0.0.0/0 cert clientcert=1
    hostssl all postgres 0.0.0.0/0 cert clientcert=1
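With the `hostssl ... cert clientcert=1` rules above, remote clients authenticate with a client certificate whose CN must equal the database user. A connection-string sketch (host name and certificate paths are placeholders, not values from this deployment):

```shell
# Build a libpq connection string for certificate authentication.
# The CN in client.crt must equal the database user (www-data here).
CONN="host=postgres.example.com dbname=slo-db user=www-data \
sslmode=verify-full sslrootcert=ca.crt sslcert=client.crt sslkey=client.key"
echo "psql \"$CONN\""
```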
        environment:
          DATABASE_URI: postgres://postgres:mysecretpassword@postgres:5432/slo-db
          SSL_ENABLED: "true"
          SSL_SERVER_CA: /etc/gm-slo/certs/postgres/ca.crt
          SSL_SERVER_CERT: /etc/gm-slo/certs/postgres/server.crt
          SSL_SERVER_KEY: /etc/gm-slo/certs/postgres/server.key
          # Uncomment the env vars below to serve over TLS
          # SERVICE_SSL_ENABLED: "true"
          # SERVICE_SSL_CA: /etc/gm-slo/certs/server/ca.crt
          # SERVICE_SSL_CERT: /etc/gm-slo/certs/server/server.crt
          # SERVICE_SSL_KEY: /etc/gm-slo/certs/server/server.key
        volumes:
          - ./docker/postgres/certs/:/etc/gm-slo/certs/postgres/
          - ./docker/server/certs/:/etc/gm-slo/certs/server/
    mkdir -p generated/configs
    bazel build //configs:example_configs
    tar xvf $PWD/bazel-genfiles/configs/example_configs.tar -C generated/configs
    {
      "cipher_filter": "",
      "protocols": [
        "TLSv1.1",
        "TLSv1.2"
      ],
      "cert_key_pairs": [
        {
          "certificate_path": "/etc/proxy/tls/sidecar/server.crt",
          "key_path": "/etc/proxy/tls/sidecar/server.key"
        }
      ],
      "require_client_certs": true,
      "trust_file": "/etc/proxy/tls/sidecar/ca.crt",
      "sni": null
    }
    {
      "timeout_msec": 1000,
      "interval_msec": 60000,
      "interval_jitter_msec": 1000,
      "unhealthy_threshold": 3,
      "healthy_threshold": 3
    }
    {
      ...,
      "health_checker": {
        "http_health_check": {
          "path": "/health"
        }
      }
    }
    {
      "name": "force-https",
      "from": "(.*)",
      "to": "https://$1",
      "redirect_type": "permanent"
    }

  • Domain

  • Route

  • Shared Rules

  • Cluster

  • Route: The request URI path portion is then passed through the route object for matching, rewriting, and redirecting to a shared_rule.

  • Shared Rule: The request is passed through the shared_rules to send, split, or shadow traffic to clusters.

  • Cluster: A cluster sends the final request to one of its available instances, retrying and breaking connections as necessary.

    Object Name: Description (Links To / Links From)

    zone: A logically isolated region of the mesh. Zones are the highest level of organizational infrastructure. Zones contain all other objects. (Linked from: cluster, domain, listener, proxy, route, shared_rules)

    domain: The URL domains for which routes and clusters will be assigned within a proxy. E.g. www.deciphernow.com, localhost, or *. (Links to: zone. Linked from: listener, proxy, route)

    listener: The Envoy listener object that defines the host, port, and protocol for a proxy within the mesh. (Links to: zone, domain. Linked from: proxy)

    proxy: The Proxy object represents the aggregate objects that get mapped to each instance of the Grey Matter Proxy. (Links to: zone, domain, listener)

    route: A Route defines how a URL in a given domain is handled. Route objects directly affect URI path matching, prefix rewriting, and redirects. (Links to: zone, domain, shared_rules)

    shared_rules: Shared_rules define how requests are sent to clusters. They can perform traffic splitting and shadowing. (Links to: zone. Linked from: route)

    cluster: Clusters represent collections of either hard-coded or discovered instances of a microservice. Clusters handle health checks, circuit breaking, and outlier detection. (Links to: zone. Linked from: shared_rules)

    ""

    List of PKI Distinguished Names (DNs) approved to access metrics views in the dashboard. An empty string permits all access.

    INSTANCE_POLLING_INTERVAL

    String

    "5s"

    The interval at which health checks are made to service instances.

    INSTANCE_MAX_SILENCE

    String

    "15s"

    The wait time before a non-responsive service instance is considered down.

    METRICS_MAX_RETRIES

    Number

    3

    The maximum number of retries made before stopping requests to a service instance's metrics endpoint.

    METRICS_RETRY_DELAY

    String

    "10s"

    The wait period between attempts to connect to a service instance's metrics endpoint.

    METRICS_REQUEST_TIMEOUT

    String

    "15s"

    The wait time to indicate a request timeout from a service instance's metrics endpoint.

    "edge"

    The cluster used in the request made to Control to retrieve discovered service instances (i.e. the edge node proxy).

    CONTROL_SERVER_RETRY_DELAY

    String

    "5s"

    For each Control server, a "retry delay" can be specified. If an error occurs during creation of a client connection, establishing a stream, or sending a DiscoveryRequest, Catalog will restart the connection process. This occurs indefinitely and can only be stopped by sending a DELETE request to Control's /zones endpoint with the configured server's specified zoneName.

    ""

    Path to the server certificate.

    SERVER_KEY_PATH

    String

    ""

    Path to the server certificate key.

    ""

    Grey Matter Data RESTful API prefix.

    CLIENT_IDENTITY

    String

    ""

    Distinguished Name (DN) of JWT user associated with Catalog, see under internal-jwt-users for details.

    CLIENT_EMAIL

    String

    ""

    Email address used as the directory name where services JSON will be persisted.

    CLIENT_USE_TLS

    Boolean

    false

    Enable TLS.

    CLIENT_CERT

    String

    ""

    base64 encoded certificate for TLS.

    CLIENT_KEY

    String

    ""

    base64 encoded key for TLS.

    CLIENT_TRUST

    String

    ""

    base64 encoded certificate for TLS.

    INSECURE_TLS

    Boolean

    false

    MONGO_USE_TLS

    Boolean

    false

    GMDATA_STARTUP_DELAY

    String

    "5s"

    Delay at startup to allow all instances to write to Grey Matter Data.

    GMDATA_ROOT_EVENT_NAME

    String

    "world"

    Name of the top level folder in Grey Matter Data where services.json will be persisted.

    GMDATA_MAX_EVENTS

    Number

    1000

    GMDATA_POLLING_INTERVAL

    String

    "2s"

    The interval at which Catalog will poll Grey Matter Data for updated services.json.

    GMDATA_MAX_RETRIES

    Number

    10

    GMDATA_RETRY_DELAY

    String

    "1s"

    The delay for retry attempts for the initial Grey Matter Data connection.

    GMDATA_OBJECT_POLICY_TEMPLATE

    String

    (if (contains email %s)(yield-all)(yield R X)))

    Grey Matter Data object policy template to enforce permissions on servics.json folder. Default policy allows read and execute for the Catalog service.

    GMDATA_POLICY_LABEL

    String

    "localuser owned"

    GMDATA_TOP_LEVEL_FOLDER_SECURITY

    String

    {"label":"PUBLIC//EVERYBODY","foreground":"#FFFFFF","background":"#008800"}

    GMDATA_SERVICES_FILE_SECURITY

    String

    {"label":"PUBLIC//EVERYBODY","foreground":"#FFFFFF","background":"#008800"

    ""

    The name of the service to be displayed in the Intelligence 360 Dashboard (make this as human friendly as possible).

    SERVICE_<N>_VERSION

    String

    ""

    The semver version of the service.

    SERVICE_<N>_OWNER

    String

    ""

    The name of the service owner like "Decipher" or "Cool Customer". Used to sort by Owner in the Intelligence 360 Dashboard.

    SERVICE_<N>_CAPABILITY

    String

    ""

    The name of the service capability like "Storage" or "Security". Used to sort by Capability in the Intelligence 360 Dashboard.

    SERVICE_<N>_DOCUMENTATION

    String

    ""

    A URL path to documentation relative to the root URL of the service or a full path depending on your needs.

    SERVICE_<N>_PROMETHEUSJOB

    String

    ""

    The name of the Prometheus Job used to store and query time series data associated with this service. This must match the cluster name.

    SERVICE_<N>_MIN_INSTANCES

    Number

    1

    The minimum number of instances this service should scale to. If below this number, the service will be in a warning state in the Intelligence 360 Dashboard.

    SERVICE_<N>_MAX_INSTANCES

    Number

    1

    The maximum number of instances this service should scale to. If above this number, the service will be in a warning state in the Intelligence 360 Dashboard.

    SERVICE_<N>_ENABLE_INSTANCE_METRICS

    Boolean

    true

    Enable the instance metrics view in the Intelligence 360 Dashboard.

    SERVICE_<N>_ENABLE_HISTORICAL_METRICS

    Boolean

    false

    Enable the historical metrics view in the Intelligence 360 Dashboard.

    SERVICE_<N>_METRICS_TEMPLATE

    String

    ""

    URL template for constructing the service's instance metrics endpoint e.g. http://{{host}}:{{port}}/metrics.

    SERVICE_<N>_METRICS_PORT

    Number

    8081

    TCP port serving up service instance metrics.

    Name

    Type

    Default

    Description

    HOST

    String

    "0.0.0.0"

    Hostname or IP address.

    PORT

    Number

    8080

    TCP port to listen on.

    AUTHORIZED_USERS

    Name

    Type

    Default

    Description

    CONTROL_SERVER_<N>_ADDRESS

    String

    "localhost:50000"

    Address for a number of Control servers.

    CONTROL_SERVER_<N>_ZONE_NAME

    String

    "region-1"

    Federated zone name used for tracking, retrieving, and updating a mesh in the Catalog. Each mesh represents a collection of service clusters.

    CONTROL_SERVER_<N>_REQUEST_CLUSTER_NAME

    Name

    Type

    Default

    Description

    USE_TLS

    Boolean

    false

    Enables TLS

    CA_CERT_PATH

    String

    ""

    Path to the certificate authority file.

    SERVER_CERT_PATH

    Name

    Type

    Default

    Description

    CONFIG_SOURCE

    String

    "env"

    Configures where to store and read service configuration from. Set value to gmdata to store in Grey Matter Data. Set value to env to use environment variables.

    CONFIG_POLLING_INTERVAL

    String

    "3s"

    Interval for checking the store for configuration changes. Only applicable when CONFIG_SOURCE=gmdata.

    Name

    Type

    Default

    Description

    CLIENT_ADDRESS

    String

    ""

    Hostname or IP address of Grey Matter Data.

    CLIENT_PORT

    String

    ""

    TCP port exposed by Grey Matter Data.

    CLIENT_PREFIX

    Name

    Type

    Default

    Description

    SERVICE_<N>_CLUSTER_NAME

    String

    ""

    The name of the cluster (logical group name of service instances), provided by gm-control.

    SERVICE_<N>_ZONE_NAME

    String

    ""

    The name of the zone associated with the cluster, provided by gm-control. This must match the value of CONTROL_SERVER_<N>_ZONE_NAME defined in Control's environment variables.

    SERVICE_<N>_NAME

    Grey Matter Sidecar
    Helm chartsarrow-up-right
    Helm chartsarrow-up-right
    https://support.greymatter.io/support/homearrow-up-right

    String

    String

    String

    String

    String
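    Putting several of the variables above together, a hypothetical Catalog configuration for a single service might look like the following sketch (all values are illustrative, not a recommended setup):

```shell
# Hypothetical Catalog service entry; index 0 is the first configured service.
export SERVICE_0_CLUSTER_NAME=catalog
export SERVICE_0_ZONE_NAME=region-1
export SERVICE_0_NAME="Catalog"
export SERVICE_0_VERSION=1.2.0
export SERVICE_0_OWNER="Decipher"
export SERVICE_0_CAPABILITY="Mesh Services"
export SERVICE_0_PROMETHEUSJOB=catalog
export SERVICE_0_MIN_INSTANCES=1
export SERVICE_0_MAX_INSTANCES=3
export SERVICE_0_METRICS_PORT=8081
```

    Note that SERVICE_0_PROMETHEUSJOB matches SERVICE_0_CLUSTER_NAME, as the table above requires.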

    Command line flags for static resources:

    --xds.static-resources.filename (env: GM_CONTROL_XDS_STATIC_RESOURCES_FILENAME): The path of the filename to read in as a static config. Values: /path/to/file.type

    --xds.static-resources.format (env: GM_CONTROL_XDS_STATIC_RESOURCES_FORMAT): Type of file. Values: json or yaml (default)

    --xds.static-resources.conflict-behavior (env: GM_CONTROL_XDS_STATIC_RESOURCES_CONFLICT_BEHAVIOR): How to handle conflicts with cluster discovery. Values: overwrite (static config overwrites all other configuration on startup) or merge (use static configuration when a config collision is detected, default)

    Static resources fields:

    clusters (Array of envoy cluster): List of upstream clusters.

    clusterTemplate (envoy cluster)

    listeners (Array of envoy listener): List of listeners to apply to proxies.

    loadAssignments (Array of envoy load assignment): List of endpoints to match to a cluster.
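    Using the environment variable forms of the flags above, pointing Control at a static resources file could be sketched as follows (the file path is an illustrative placeholder):

```shell
# Read static resources from a YAML file; on collisions, prefer the static config.
export GM_CONTROL_XDS_STATIC_RESOURCES_FILENAME=/etc/gm-control/static-resources.yaml
export GM_CONTROL_XDS_STATIC_RESOURCES_FORMAT=yaml
export GM_CONTROL_XDS_STATIC_RESOURCES_CONFLICT_BEHAVIOR=merge
```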

    NOTE: The domain port should, in most cases, match the port of the listener exposed on the proxy. If they do not match, users will need to supply Host: headers on all requests to match the virtual domain.

    hashtag
    Features

    • Virtual host:port matching and redirecting

    • GZIP of requests

    • CORS

    • Setting custom headers for downstream requests

    • SSL Config for incoming requests

    hashtag
    Example Object

    hashtag
    TLS Configuration

    NOTE Do not set an ssl_config on any domain object whose service you want to use SPIFFE/SPIRE. If a domain ssl_config is set, it will override the secret set on the corresponding listener and the mesh configuration will be wrong.

    The domain object has an optional ssl_config field, which can be used to set up TLS and specify its configuration. The Domain SSL Config Object appears as follows:

    The Domain SSL Configuration is used to populate a DownstreamTlsContextarrow-up-right for the Envoy Listener.

    The sni field for a domain accepts a list of strings and configures the Envoy Listener to detect the requested Server Name Indication.

    To specify a minimum and maximum TLS protocol version, set the protocols field to one of the following: "TLSv1_0", "TLSv1_1", "TLSv1_2", "TLSv1_3". If one protocol is specified, it will be set as both the minimum and maximum protocol versions in Envoy. If more than one protocol version is specified in the list, the lowest will set the minimum TLS protocol version and the highest will set the maximum TLS protocol version. If this field is left empty, Envoy will choose the default TLS version.

    The cipher_filter field takes a colon : delimited string to populate the cipher_suites cipher list in envoy for TLS.
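    As a sketch of the two fields together (the cipher suite names and certificate paths here are placeholder values, not a recommended configuration), an ssl_config pinning TLS 1.2 as the minimum and TLS 1.3 as the maximum version with an explicit cipher list might look like:

```json
"ssl_config": {
  "cipher_filter": "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256",
  "protocols": ["TLSv1_2", "TLSv1_3"],
  "cert_key_pairs": [
    {
      "certificate_path": "/etc/proxy/tls/sidecar/server.crt",
      "key_path": "/etc/proxy/tls/sidecar/server.key"
    }
  ],
  "require_client_certs": false,
  "trust_file": "",
  "sni": null
}
```

    Here "TLSv1_2" (the lowest listed) sets the minimum and "TLSv1_3" (the highest) sets the maximum TLS protocol version, and the colon-delimited cipher_filter restricts negotiation to the two listed suites.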

    hashtag
    redirects

    This field can be used to configure redirect routes for the domain. See Redirect for details.

    Fields:

    • name

      • the name of the redirect

    • from

      • regex value that the incoming request :path will be regex matched to

    • to

      • the new URL that an incoming request matching from will route to

      • if set to "$host"

    • redirect_type

      • determines the response code of the redirect

      • must be one of: "permanent" (for a 301

    • header_constraints

      • a list of header constraint objects

      • each header constraint has the following fields:

    hashtag
    Envoy Reference

    • Envoy Virtual Host Referencearrow-up-right

    hashtag
    Fields

    hashtag
    domain_key

    A unique key used to identify this particular domain configuration. This key is used in proxy, listener, and route objects.

    hashtag
    zone_key

    The zone in which this object will live. It will only be able to be referenced by objects or sent to Sidecars that live in the same zone.

    hashtag
    name

    The name of this virtual domain, e.g. localhost, www.greymatter.io, or catalog.svc.local. Only requests coming in to the named host will be matched and handled by attached routes. It is used in conjunction with the port field.

    This field can be set to a wildcard (*) which will match against all hostnames.

    hashtag
    port

    Set the specific port of the virtual host to match. It is used in conjunction with the name field.

    E.g. port: 8080 and name: * will setup a virtual domain matching any request made to port 8080 regardless of the host.
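    Continuing that example, a minimal catch-all domain object could be sketched as follows (the key names here are illustrative, not from a real deployment):

```json
{
  "domain_key": "domain-all",
  "zone_key": "default",
  "name": "*",
  "port": 8080
}
```

    Any request arriving on port 8080 would match this domain regardless of its Host header.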

    hashtag
    ssl_config

    Listener SSL configuration for this cluster. Setting the SSL Config at the domain level sets this same config on all listeners that are directly linked to this domain.

    hashtag
    redirects

    Array of URL redirects

    hashtag
    gzip_enabled

    DEPRECATION: This field has been deprecated and will be removed in the next major version release.

    This field has no effect.

    hashtag
    cors_config

    A CORS policy to attach to this domain.

    hashtag
    aliases

    An array of additional hostnames that should be matched in this domain. E.g. name: "www.greymatter.io" with aliases: ["greymatter.io", "localhost"]

    hashtag
    force_https

    If true, listeners attached to this domain will only accept HTTPS connections. In this case, one of the secret or ssl_config fields should be set. If false, attached listeners will only accept plaintext HTTP connections.

    hashtag
    custom_headers

    An array of header key, value pairs to set on all requests that pass through this domain.

    E.g.

    hashtag
    checksum

    An API calculated checksum. Can be used to verify that the API contains the expected object before performing a write.

    proxy

    hashtag
    Summary

    Each proxy is the sum total of all configurations that will be sent to each Grey Matter Proxy. This includes the listeners, domains, routes, shared_rules, and clusters. Each proxy object may be mapped to 0 or more physical instances, each of which will share the exact same configurations.

    NOTE The name field in the object dictates which cluster in the mesh it gets applied to.

    hashtag
    Features

    • Configure active filters

    • Set virtual domains

    • Directly set listeners

    hashtag
    Multiple Listeners

    Multiple listeners can be configured for each Proxy object, either inline or through the listener_keys field. Each one defines a new network interface to handle different traffic patterns and protocols, like the example diagram below.

    hashtag
    Example Object

    hashtag
    Envoy Reference

    hashtag
    Fields

    hashtag
    proxy_key

    A unique key to identify this proxy configuration in the Fabric API.

    hashtag
    zone_key

    The zone in which this object will live. It will only be able to be referenced by objects or sent to Sidecars that live in the same zone.

    hashtag
    name

    The name of the service that this proxy configuration (and all linked objects) will be sent to. This name must exactly match the information when a sidecar registers in the mesh.

    hashtag
    domain_keys

    Array of domain keys to specify which domain objects should be included in this configuration.

    hashtag
    listener_keys

    Array of listener keys to specify which listener objects should be included in this configuration. Listeners can also be specified in-line with the listeners field.

    hashtag
    listeners

    Array of in-line listener definitions to create for this Sidecar.

    hashtag
    upgrades

    String value to specify connection upgrades to all listeners on this Sidecar. The only currently supported option is "websocket".
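    For example, a proxy fragment enabling websocket upgrades on all of its listeners might look like the following sketch (the proxy_key is illustrative, and other required fields are elided):

```json
{
  "proxy_key": "example-proxy",
  "upgrades": "websocket"
}
```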

    hashtag
    active_filters

    DEPRECATION: This field has been deprecated and will be removed in the next major version release. Use instead.

    Array of filter names that should be active on this listener's filter chain. This list acts as a simple mechanism for turning specific filters on/off without needing to completely remove their configuration from the filters section.

    NOTE: The order of filters in this array dictates the evaluation order of the filters in the chain.

    hashtag
    filters

    DEPRECATION: This field has been deprecated and will be removed in the next major version release. Use instead.

    Array of filter configurations to be used when a filter is active.

    hashtag
    checksum

    An API calculated checksum. Can be used to verify that the API contains the expected object before performing a write.

    shared_rules

    hashtag
    Summary

    A shared_rule defines a re-usable mapping of traffic between clusters and routes. This can be very simple or very complex, depending on the need and the stage of deployment. In its simplest setup, any given request is just sent to a single cluster. However, traffic can also be fractionally diverted and/or simultaneously shadowed to alternate clusters.

    NOTE Some features of the shared_rules object can also be defined inline in the route object. When defined inline, they cannot be shared between proxies.

    hashtag
    Features

    • Retry Policies

    • Traffic Splitting

    • Traffic Tap/Shadow

    hashtag
    Example Object

    hashtag
    Envoy Reference

    hashtag
    Fields

    hashtag
    shared_rules_key

    A unique key to identify this shared_rule configuration in the Fabric API.

    hashtag
    name

    The name of the shared rules object. This will become the value of a header with key "shared_rules_key" on all routes linked to this shared rule.

    hashtag
    zone_key

    The zone in which this object will live. It will only be able to be referenced by objects or sent to Sidecars that live in the same zone.

    hashtag
    default

    The default field defines arrays that map requests to clusters. Currently, the only implemented field is the light field which is used to determine the Instance to which the live request will be sent and from which the response will be sent to the caller.

    Currently, the default field must set a light field that contains an array of cluster constraints.

    NOTE: default also contains a dark and a tap field which currently have no effect. The dark array will be used in future versions to support traffic shadowing to Instances. Similarly, the tap array will determine an Instance to send a copy of the request to, comparing the response to the light response.

    hashtag
    rules

    A list of rules to specify more complex route matching and forwarding specifications.

    hashtag
    response_data

    A collection of annotations that should be applied to responses when handling a request.

    hashtag
    cohort_seed

    This field has no effect.

    hashtag
    properties

    This field has no effect.

    hashtag
    retry_policy

    Specifies the default retry policy and timeout for any route referencing this shared rule object. Any retry policy set on the route will take precedence over one configured here.

    hashtag
    checksum

    An API calculated checksum. Can be used to verify that the API contains the expected object before performing a write.

    response_data

    hashtag
    Summary

    The response data object is a simple json structure used as a common format across different types of response data (e.g. cookies, headers).

    hashtag

    edit

    Use greymatter edit to edit the configuration of an existing object in the Grey Matter mesh. Objects can be zones, proxies, domains, routes, shared_rules, and clusters.


    hashtag

    create

    Use greymatter create to create a specific object in the Grey Matter mesh. Objects can be zones, proxies, domains, routes, shared_rules, and clusters.

    hashtag

    zone

    Zones are used to group API objects and segment the data plane. Sidecars and Control servers will all run and serve in only a single zone. This doesn't enforce any physical or network isolation, but logically segments the service mesh.

    hashtag
    Example Object

    hashtag

    ./gm-catalog --config=settings.toml
    GM_CONTROL_LOCAL_CLUSTERS=gm-proxy:8080
    {
      "clusters": [
        {
          "name": "gm-proxy",
          "type": "EDS",
          "edsClusterConfig": {
            "edsConfig": {
              "apiConfigSource": {
                "apiType": "GRPC",
                "grpcServices": [
                  {
                    "envoyGrpc": {
                      "clusterName": "tbn-xds"
                    }
                  }
                ],
                "refreshDelay": "30s"
              }
            },
            "serviceName": "gm-proxy"
          },
          "connectTimeout": "10s",
          "lbPolicy": "LEAST_REQUEST",
          "lbSubsetConfig": {
            "fallbackPolicy": "ANY_ENDPOINT"
          }
        }
      ],
      "loadAssignments": [
        {
          "clusterName": "gm-proxy",
          "endpoints": [
            {
              "lbEndpoints": [
                {
                  "endpoint": {
                    "address": {
                      "socketAddress": {
                        "address": "127.0.0.1",
                        "portValue": 8080
                      }
                    }
                  },
                  "healthStatus": "HEALTHY"
                }
              ]
            }
          ]
        }
      ]
    }
    {
      "domain_key": "catalog",
      "zone_key": "default",
      "name": "*",
      "port": 9080,
      "ssl_config": {
        "cipher_filter": "",
        "protocols": [
          "TLSv1.1",
          "TLSv1.2"
        ],
        "cert_key_pairs": [
          {
            "certificate_path": "/etc/proxy/tls/sidecar/server.crt",
            "key_path": "/etc/proxy/tls/sidecar/server.key"
          }
        ],
        "require_client_certs": true,
        "trust_file": "/etc/proxy/tls/sidecar/ca.crt",
        "sni": null
      },
      "redirects": null,
      "gzip_enabled": false,
      "cors_config": null,
      "aliases": null,
      "force_https": true,
      "custom_headers": null,
      "checksum": "b633fd4b535932fc1da31fbb7c6d4c39517871d112e9bce2d5ffe004e6d09735"
    }
    "ssl_config": {
      "cipher_filter": "",
      "protocols": [],
      "cert_key_pairs": null,
      "require_client_certs": false,
      "trust_file": "",
      "sni": null
    }
    "custom_headers" : [
      {
        "key": "x-forwarded-proto",
        "value": "https"
      }
    ]

    name

    • the header key to be compared to the incoming requests headers

    • will be compared without case sensitivity

  • value

    • must be a valid regex

    • the value to be compared to the value of the incoming request header with matching name

  • case_sensitive

    • boolean indicating whether the value will be compared to the value of the header with matching name with case sensitivity

  • invert

    • boolean value


    tracing

    hashtag
    Summary

    hashtag
    Example object

    {
      "ingress": true,
      "request_headers_for_tags": null
    }

    hashtag
    Fields

    hashtag
    ingress

    true if this listener handles incoming traffic. false if the listener handles outgoing traffic from the Sidecar to the mesh.

    hashtag
    request_headers_for_tags

    This field takes a list of header names to create tags for the active span. By default it is null, and no tags are configured. If values are configured, a tag is created when the named header is present in the request's headers, with the header name used as the tag name and the header value used as the tag value in the span.
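    For instance, a tracing config that tags each ingress span with two request headers could look like the following sketch (the header names are chosen for illustration):

```json
{
  "ingress": true,
  "request_headers_for_tags": ["x-request-id", "user-agent"]
}
```

    A request carrying the header x-request-id: abc123 would then produce a span tag x-request-id=abc123.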

    hashtag
    Usage

    greymatter [GLOBAL OPTIONS] edit [OPTIONS] <object type> [object key]

    hashtag
    Sample Usage

    By setting the EDITOR environment variable, the greymatter tool will open the editor of choice and let the user edit the object directly.

    After editing the JSON directly in vim, the returned object will look something like the below (depending on the user input).

    The API objects are updated directly in the mesh after exiting the editor.

    hashtag
    Help

    To list available commands run with the global help flag, greymatter edit --help:

    hashtag
    Questions?

    circle-check

    Need help with the CLI?

    Create an account at Grey Matter Supportarrow-up-right to reach our team.

    Usage

    greymatter [GLOBAL OPTIONS] create [OPTIONS] <object type>

    hashtag
    Sample Usage

    hashtag
    Create from JSON File

    Resources can be created directly from files on disk through shell redirects, as shown below. Using the file listener-catalog.json with the following content:

    greymatter create domain will create the domain object from the given spec.

    hashtag
    Interactive Editor

    By setting the EDITOR environment variable, the greymatter tool will open the editor of choice and let the user create the object directly.

    After editing the JSON directly in vim, the returned object will look something like the below (depending on the user input).

    hashtag
    Help

    To list available commands run with the global help flag, greymatter create --help:

    hashtag
    Questions?

    circle-check

    Need help with the CLI?

    Create an account at Grey Matter Supportarrow-up-right to reach our team.

    {
      "shared_rules_key": "edge-catalog-shared-rules",
      "name": "catalog",
      "zone_key": "default",
      "default": {
        "light": [
          {
            "constraint_key": "",
            "cluster_key": "catalog-cluster",
            "metadata": null,
            "properties": null,
            "response_data": {},
            "weight": 1
          }
        ]
      },
      "rules": null,
      "response_data": {},
      "cohort_seed": null,
      "properties": null,
      "retry_policy": null
    }
    "constraints" : {
      "light": [
        {
          "cluster_key": "example-service-1.0",
          "weight": 10
        },
        {
          "cluster_key": "example-service-1.1",
          "weight": 1
        }
      ]
    }
    EDITOR=vim greymatter edit domain domain-localhost
    [info] 2019/07/10 03:38:43 Preferring --api.key for authentication
    {
      "domain_key": "domain-localhost",
      "zone_key": "zone-default",
      "name": "localhost",
      "port": 443,
      "redirects": null,
      "gzip_enabled": false,
      "cors_config": null,
      "aliases": null,
      "force_https": false,
      "checksum": "a35ccf0634599ac83b0b9cb61b07297e925f28bbc669a9a63cb65b9c6a6ea309"
    }
    $ greymatter edit --help
    NAME
        edit - edit an object from the Grey Matter API
    
    USAGE
        greymatter [GLOBAL OPTIONS] edit [OPTIONS] <object type> [object key]
    
    VERSION
        v1.2.1
    
    DESCRIPTION
        object type is one of: zone, proxy, listener, domain, route, shared_rules, cluster
    
        Editor Selection
    
        When changes need to be made an initial version of the object can be presented in an
        editor. The command used to launch the editor is taken from the EDITOR environment
        variable and must block execution until the changes are saved and the editor is
        closed. The current editor command is 'vim'.
    
        Example EDITOR values:
    
            vim
    
            emacs
    
            atom -w
    
        Using STDIN
    
        For scripting purposes it may be useful to use STDIN to provide the edited object
        instead of using an interactive editor. If so, simply make the new version available
        on STDIN through standard use of pipes.
    
        Example: cat "new_cluster.json" | greymatter create cluster
    
    GLOBAL OPTIONS
        --api.header=header
                Specifies a custom header to send with every API request. Headers are given as
                name:value pairs. Leading and trailing whitespace will be stripped from the
                name and value. For multiple headers, this flag may be repeated or multiple
                headers can be delimited with commas.
    
        --api.host=host:port
                (default: localhost:80)
                The address (host:port) for API requests. If no port is given, it defaults to
                port 443 if --api.ssl is true and port 80 otherwise.
    
        --api.insecure
                (default: false)
                If true, don't validate server cert when using SSL for API requests
    
        --api.key=string
                (default: "none")
                [SENSITIVE] The auth key for API requests
    
        --api.prefix=value
                The url prefix for API requests. Forms the path part of <host>:<port><path>
    
        --api.ssl
                (default: true)
                If true, use SSL for API requests
    
        --api.sslCert=value
                Specifies the SSL cert to use for every API request.
    
        --api.sslKey=value
                Specifies the SSL key to use for every API request.
    
        --console.level=level
                (default: "info")
                (valid values: "debug", "info", "error", or "none")
                Selects the log level for console log messages.
    
        --format=string
                (default: "json")
                The I/O format (json or yaml)
    
        --help  (default: false)
                Show a list of commands or help for one command
    
        --version
                (default: false)
                Print the version and exit
    
        Global options can also be configured via upper-case, underscore-delimited environment
        variables prefixed with "GREYMATTER_". For example, "--some-flag" becomes
        "GREYMATTER_SOME_FLAG". Command-line flags take precedence over environment variables.
    
    OPTIONS
        --help  (default: false)
                Show a list of commands or help for one command
    
        --key=string
                [deprecated] key of the object to retrieve, if not provided will read input
                from stdin
    
        --version
                (default: false)
                Print the version and exit
    
        Options can also be configured via upper-case, underscore-delimited environment
        variables prefixed with "GREYMATTER_EDIT_". For example, "--some-flag" becomes
        "GREYMATTER_EDIT_SOME_FLAG". Command-line flags take precedence over environment
        variables.
    {
        "zone_key": "zone-default-zone",
        "name": "catalog",
        "ip": "0.0.0.0",
        "port": 8080,
        "protocol": "http_auto",
        "domain_keys": ["domain-*"],
        "tracing_config": null,
        "checksum": "5e3f86011c958c05fbb51a51f9363bd014bef5aa4505728daf4dd35db440ff01"
    }
    $ ./greymatter create domain < listener-catalog.json
    [info] 2019/07/10 03:43:46 Preferring --api.key for authentication
    {
      "domain_key": "domain-catalog",
      "zone_key": "zone-default-zone",
      "name": "catalog",
      "port": 8080,
      "redirects": null,
      "gzip_enabled": false,
      "cors_config": null,
      "aliases": null,
      "force_https": false,
      "checksum": "82581e0c56c2ad385e84234fe118ccf8cf8deb1852a5aa318eab887e9a2717d2"
    }
    EDITOR=vim greymatter create domain
    [info] 2019/07/10 03:38:43 Preferring --api.key for authentication
    {
      "domain_key": "domain-localhost",
      "zone_key": "zone-default",
      "name": "localhost",
      "port": 443,
      "redirects": null,
      "gzip_enabled": false,
      "cors_config": null,
      "aliases": null,
      "force_https": false,
      "checksum": "a35ccf0634599ac83b0b9cb61b07297e925f28bbc669a9a63cb65b9c6a6ea309"
    }
    $ greymatter create --help
    NAME
        create - create an object within the Grey Matter API
    
    USAGE
        greymatter [GLOBAL OPTIONS] create [OPTIONS] <object type>
    
    VERSION
        v1.2.1
    
    DESCRIPTION
        object type is one of: zone, proxy, listener, domain, route, shared_rules, cluster
    
        Editor Selection
    
        When changes need to be made, an initial version of the object can be presented in an
        editor. The command used to launch the editor is taken from the EDITOR environment
        variable and must block execution until the changes are saved and the editor is
        closed. The current editor command is 'vim'.
    
        Example EDITOR values:
    
            vim
    
            emacs
    
            atom -w
    
        Using STDIN
    
        For scripting purposes it may be useful to use STDIN to provide the created object
        instead of using an interactive editor. If so, simply make the new version available
        on STDIN through standard use of pipes.
    
        Example: cat "new_cluster.json" | greymatter create cluster
    
    GLOBAL OPTIONS
        --api.header=header
                Specifies a custom header to send with every API request. Headers are given as
                name:value pairs. Leading and trailing whitespace will be stripped from the
                name and value. For multiple headers, this flag may be repeated or multiple
                headers can be delimited with commas.
    
        --api.host=host:port
                (default: localhost:80)
                The address (host:port) for API requests. If no port is given, it defaults to
                port 443 if --api.ssl is true and port 80 otherwise.
    
        --api.insecure
                (default: false)
                If true, don't validate server cert when using SSL for API requests
    
        --api.key=string
                (default: "none")
                [SENSITIVE] The auth key for API requests
    
        --api.prefix=value
                The url prefix for API requests. Forms the path part of <host>:<port><path>
    
        --api.ssl
                (default: true)
                If true, use SSL for API requests
    
        --api.sslCert=value
                Specifies the SSL cert to use for every API request.
    
        --api.sslKey=value
                Specifies the SSL key to use for every API request.
    
        --console.level=level
                (default: "info")
                (valid values: "debug", "info", "error", or "none")
                Selects the log level for console log messages.
    
        --format=string
                (default: "json")
                The I/O format (json or yaml)
    
        --help  (default: false)
                Show a list of commands or help for one command
    
        --version
                (default: false)
                Print the version and exit
    
        Global options can also be configured via upper-case, underscore-delimited environment
        variables prefixed with "GREYMATTER_". For example, "--some-flag" becomes
        "GREYMATTER_SOME_FLAG". Command-line flags take precedence over environment variables.
    
    OPTIONS
        --help  (default: false)
                Show a list of commands or help for one command
    
        --version
                (default: false)
                Print the version and exit
    
        Options can also be configured via upper-case, underscore-delimited environment
        variables prefixed with "GREYMATTER_CREATE_". For example, "--some-flag" becomes
        "GREYMATTER_CREATE_SOME_FLAG". Command-line flags take precedence over environment
        variables.
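    The flag-to-environment-variable mapping described in the help output above can be sketched as a small function. Only the dash case ("--some-flag" becomes "GREYMATTER_SOME_FLAG") is documented; treating dots in flags like --console.level the same way is an assumption.

```python
def env_var_name(flag: str, prefix: str = "GREYMATTER_") -> str:
    """Map a CLI flag such as "--some-flag" to its environment variable name."""
    name = flag.lstrip("-")
    # Dashes become underscores; dots are assumed to map the same way.
    for sep in ("-", "."):
        name = name.replace(sep, "_")
    return prefix + name.upper()

print(env_var_name("--some-flag"))  # GREYMATTER_SOME_FLAG
print(env_var_name("--console.level", "GREYMATTER_CREATE_"))  # GREYMATTER_CREATE_CONSOLE_LEVEL
```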
    Example object

    hashtag
    Fields

    hashtag
    headers

    A list of headers, each of which has fields name, value, and value\_is\_literal.

    • name: Name of the header being sent back to the requesting client.

    • value: Either a literal value or a reference to metadatum on the server that handles a request.

    • value\_is\_literal: If true, the value field will be treated as a literal.

    hashtag
    cookies

    A list of cookies, each of which has fields name, value, value\_is\_literal, expires\_in\_sec, domain, path, secure, http\_only, and same\_site.

    • name: Name of the cookie being sent back to the requesting client.

    • value: Either a literal value or a reference to metadatum on the server that handles a request.

    • value\_is\_literal: If true, the value field will be treated as a literal. This field must be set to true if the response_data set is a literal value you want set on the return header/cookie.

    • expires\_in\_sec: Integer that specifies how long the cookie will be valid. Defaults to 0.

    • domain: String of host to which the cookie will be sent. Defaults to "".

    • path: URL path that must be met in a request for the cookie to be sent to the server. Defaults to "".

    • secure: If true, then cookie will only be sent to a server when a request is made via HTTPS. Defaults to false.

    • http\_only: If true, cookies are not available via javascript through Document.cookie. Defaults to false.

    • same\_site: Specifies how a cookie should be treated when a request is being made across site boundaries. Must be one of:

      • "strict" : causes SameSite=Strict to be passed back with a cookie
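    Combining the header and cookie fields above, a hypothetical response_data object might look like the following (values are illustrative, and the exact top-level shape is an assumption):

```json
{
  "headers": [
    {"name": "x-static-header", "value": "some-value", "value_is_literal": true}
  ],
  "cookies": [
    {
      "name": "session",
      "value": "some-cookie-value",
      "value_is_literal": true,
      "expires_in_sec": 3600,
      "domain": "",
      "path": "/",
      "secure": true,
      "http_only": true,
      "same_site": "strict"
    }
  ]
}
```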

    API Configuration

    Zones can be created using the Grey Matter CLI or by hitting the API endpoint.

    The zone must be the first Grey Matter object created, followed by domain, cluster, listener, proxy, shared_rules, and route (in that order), because each Grey Matter object contains a zone_key field that must reference an existing zone object.

    hashtag
    Bootstrap Configuration

    To configure Grey Matter Control API to automatically create a zone on startup, set GM_CONTROL_API_ZONE_KEY, GM_CONTROL_API_ZONE_NAME, and, optionally, GM_CONTROL_API_ORG_KEY and a zone will be created using these values. This allows a bootstrap configuration to reference this zone_key immediately.

    hashtag
    Grey Matter Proxy

    A Grey Matter Sidecar is run in a single zone, configured with the environment variable XDS_ZONE. This value should be the name of the zone whose zone_key matches its corresponding Grey Matter Control API objects.

    Each Envoy proxy set up by the Grey Matter Sidecar has a Node specification that identifies it as a specific Envoy instance. Part of this identification is the Node's locality, which describes where the instance is running. This locality has a zone field, which is used to indicate this location.

    The value of the Node locality's zone is set using this variable, XDS_ZONE. This value must also match GM_CONTROL_API_ZONE_NAME, because Control will only retrieve Envoy instances whose locality contains this zone for configuration (see below).

    hashtag
    Grey Matter Control

    Grey Matter Control is also run with a single configured zone. It has an environment variable GM_CONTROL_API_ZONE_NAME, which is used to indicate both the name of the Grey Matter API zone for requests and the Node locality's zone of its Discovery Requests to Envoy.

    Grey Matter Control will only make requests to Grey Matter Control API in this zone, and only Grey Matter clusters in the API with zone_key corresponding to this zone name will be populated with discovered instances.

    If GM_CONTROL_API_ZONE_NAME does not match the name of the zone corresponding to the zone_key on any of the mesh objects, the objects will not be found by gm-control in its requests; thus, objects in one zone cannot reference objects in a different zone.

    When Grey Matter Control makes a discovery request to Envoy, it sets the node.locality.zone field on its request to the value of this variable. This value must match XDS_ZONE on a service's sidecar, as only Envoy instances with this zone in their Node locality will be discovered and configured via Grey Matter Control.

    hashtag
    Example

    For the zone object example created above, the proper configuration for a bootstrap setup is as follows:

    Grey Matter Control:

    • GM_CONTROL_API_ZONE_NAME=zone

    Grey Matter Control API:

    • GM_CONTROL_API_ZONE_KEY=default-zone

    • GM_CONTROL_API_ZONE_NAME=zone

    Grey Matter Sidecar:

    • XDS_ZONE=zone

    Grey Matter Objects:

    • Must have "zone_key": "default-zone"

    hashtag
    Zone Architecture

    hashtag
    Fields

    hashtag
    zone_key

    A unique key to identify this zone. All API objects are created using an existing zone_key.

    hashtag
    name

    A human-readable name giving further descriptive information about this zone.
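    Putting these two fields together, a minimal zone object consistent with the bootstrap example above would look like this (checksum omitted):

```json
{
  "zone_key": "default-zone",
  "name": "zone"
}
```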


    greymatter

    greymatter is the base command for performing operations on the Grey Matter Service Mesh.

    hashtag
    Child Commands

    • create: create an object within the Grey Matter API.

    • delete: delete a specific object and its configurations from the Grey Matter mesh.

    • edit: edit an existing object within the Grey Matter API.

    hashtag
    Help

    To list available commands, either run greymatter with no parameters or with the global help flag, greymatter --help:

    hashtag
    Questions?


    Need help with the CLI?

    Create an account at to reach our team.

    delete

    Use greymatter delete to delete a specific object and its configurations from the Grey Matter mesh. Objects can be zones, proxies, domains, routes, shared_rules and clusters.

    hashtag
    Usage

    greymatter [GLOBAL OPTIONS] delete [OPTIONS] <object type> <object key>

    hashtag
    Sample Usage
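    Following the usage pattern above, deleting a domain might look like this (the object key is illustrative):

```shell
greymatter delete domain domain-catalog
```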

    hashtag
    Help

    To list available commands run with the global help flag, greymatter delete --help:

    hashtag
    Questions?


    Need help with the CLI?

    Create an account at to reach our team.

    CLI

    Get a technical overview of the commands you can perform with Grey Matter.

    The following commands let you manipulate objects in the Grey Matter mesh.

    • greymatter: the base command for performing operations on the Grey Matter Service Mesh.

    • create: create a specific object in the Grey Matter mesh.

    • delete: delete a specific object and its configurations from the Grey Matter mesh.

    Need a reminder about mesh objects and their definitions?

    hashtag
    Questions?


    Need help with the CLI? Contact our team at .

    outlier detection

    hashtag
    Summary

    Outlier detection is a passive health check that tracks whether the instances in a cluster are up or down, using user-defined rules. If an instance is found to be down, the proxy will eject the unresponsive instance, diverting traffic away from it to prevent timeouts and disruptions throughout the mesh. After a specified amount of time the instance is returned to service; however, the ejection time grows with each subsequent ejection.

    hashtag
    Example Object
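    A sketch of an outlier_detection object using the fields documented below, with each value set to its stated default (values illustrative):

```json
{
  "interval_msec": 10000,
  "base_ejection_time_msec": 30000,
  "max_ejection_percent": 10,
  "consecutive_5xx": 5,
  "enforcing_consecutive_5xx": 100,
  "enforcing_success_rate": 100,
  "success_rate_minimum_hosts": 5,
  "success_rate_request_volume": 100,
  "success_rate_stdev_factor": 1900,
  "consecutive_gateway_failure": 5,
  "enforcing_consecutive_gateway_failure": 0
}
```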

    hashtag
    Fields

    hashtag
    interval_msec

    The time interval between ejection analysis sweeps. This can result in both new ejections due to success rate outlier detection as well as hosts being returned to service. Defaults to 10s and must be greater than 0.

    hashtag
    base_ejection_time_msec

    The base time that a host is ejected for. The real time is equal to the base time multiplied by the number of times the host has been ejected. Defaults to 30s. Setting this to 0 means that no host will be ejected for longer than interval_msec.

    hashtag
    max_ejection_percent

    The maximum % of an upstream cluster that can be ejected due to outlier detection. Defaults to 10% but will always eject at least one host.

    hashtag
    consecutive_5xx

    The number of consecutive 5xx responses before a consecutive 5xx ejection occurs. Defaults to 5. Setting this to 0 effectively turns off the consecutive 5xx detector.

    hashtag
    enforcing_consecutive_5xx

    The % chance that a host will be actually ejected when an outlier status is detected through consecutive 5xx. This setting can be used to disable ejection or to ramp it up slowly. Defaults to 100.

    hashtag
    enforcing_success_rate

    The % chance that a host will be actually ejected when an outlier status is detected through success rate statistics. This setting can be used to disable ejection or to ramp it up slowly. Defaults to 100.

    hashtag
    success_rate_minimum_hosts

    The number of hosts in a cluster that must have enough request volume to detect success rate outliers. If the number of hosts is less than this setting, outlier detection via success rate statistics is not performed for any host in the cluster. Defaults to 5. Setting this to 0 effectively triggers the success rate detector regardless of the number of valid hosts during an interval (as determined by success_rate_request_volume).

    hashtag
    success_rate_request_volume

    The minimum number of total requests that must be collected in one interval (as defined by the interval duration) to include this host in success rate based outlier detection. If the volume is lower than this setting, outlier detection via success rate statistics is not performed for that host. Defaults to 100.

    hashtag
    success_rate_stdev_factor

    This factor is used to determine the ejection threshold for success rate outlier ejection. The ejection threshold is the difference between the mean success rate, and the product of this factor and the standard deviation of the mean success rate: mean - (stdev * success_rate_stdev_factor). This factor is divided by a thousand to get a double. That is, if the desired factor is 1.9, the runtime value should be 1900. Defaults to 1900. Setting this to 0 effectively turns off the success rate detector.
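    The threshold formula above can be checked numerically; a small sketch (the cluster statistics are illustrative):

```python
def success_rate_ejection_threshold(mean: float, stdev: float, factor: int = 1900) -> float:
    """Hosts with a success rate below this threshold are ejection candidates."""
    # The integer factor is divided by 1000 to get a double (1900 -> 1.9).
    return mean - stdev * (factor / 1000.0)

# Mean success rate of 95% with a standard deviation of 2%:
print(round(success_rate_ejection_threshold(0.95, 0.02), 3))  # 0.912
```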

    hashtag
    consecutive_gateway_failure

    The number of consecutive gateway failures (502, 503, 504 status or connection errors that are mapped to one of those status codes) before a consecutive gateway failure ejection occurs. Defaults to 5.

    hashtag
    enforcing_consecutive_gateway_failure

    The % chance that a host will be actually ejected when an outlier status is detected through consecutive gateway failures. This setting can be used to disable ejection or to ramp it up slowly. Defaults to 0.

    cluster

    hashtag
    Summary

    A cluster handles final routing of each request to its intended target. Each cluster can have 0 or more instances defined, which are simply the host:port pairs of the targets. Instances within a cluster can be statically configured when the object is created, or dynamically loaded through the service discovery mechanisms.

    hashtag
    Features

    • Static or dynamic instances

    • Circuit breakers

    • Health Checks

    hashtag
    Example Object
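    A sketch of a cluster object based on the fields documented below (values illustrative; the shape of the instances entries is an assumption):

```json
{
  "cluster_key": "cluster-catalog",
  "zone_key": "zone-default-zone",
  "name": "catalog",
  "require_tls": false,
  "instances": [
    {"host": "10.0.0.1", "port": 8080}
  ],
  "lb_policy": "least_request"
}
```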

    hashtag
    TLS Configuration

    To require TLS on the cluster object, an additional field, require_tls, must be set to true.

    There is also an optional ssl_config field, which can be set to specify its configuration. The Cluster SSL Config Object appears as follows:
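    A sketch of a Cluster SSL Config Object using the fields described below (all values illustrative):

```json
{
  "sni": "catalog.example.com",
  "protocols": ["TLSv1_2", "TLSv1_3"],
  "cipher_filter": "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384",
  "trust_file": "/etc/ssl/certs/ca-bundle.pem"
}
```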

    The Cluster SSL Configuration is used to populate an UpstreamTlsContext for the Envoy Cluster.

    The sni field for a cluster accepts a string that the Envoy cluster uses to specify Server Name Indication when creating TLS backend connections.

    To specify a minimum and maximum TLS protocol version, set the protocols field to one of the following: "TLSv1_0", "TLSv1_1", "TLSv1_2", "TLSv1_3". If one protocol is specified, it will be set as both the minimum and maximum protocol versions in Envoy. If more than one protocol version is specified in the list, the lowest will set the minimum TLS protocol version and the highest will set the maximum TLS protocol version. If this field is left empty, Envoy will choose the default TLS version.

    The cipher_filter field takes a colon-delimited (:) string to set a specified cipher list for TLS. This populates Envoy's cipher_suites field.

    Specifying the path to a trust_file is optional. If a path is specified, it will be added to the UpstreamTlsContext for verifying a presented server certificate. Otherwise, the server certificate will not be verified.

    If the Cluster connects to a service that's secured via TLS, the trust_file will need to contain the root CA and any intermediate certificates to authenticate the sidecar as a client of the service. If you're unfamiliar with generating certificates, is a great primer. The end result is that you have all trust certificates required by the service in a single file, the trust_file.

    hashtag
    Secret Configuration for SPIFFE/SPIRE

    To configure the service to use SPIFFE/SPIRE on its egress, you must set a secret on the cluster. In the same form as above, require_tls must be set to true. Note that if both an ssl_config and a secret are set on a cluster, the secret will override the ssl configuration. An example secret object is as follows:

    This object will configure Envoy to use Secret Discovery Service to fetch SPIFFE certificates from the configured path specified as an environment variable SPIRE_PATH in gm-proxy. For information on how Envoy's SDS works, see the . The secret_key specifies the name of the secret to fetch. secret_name should be the of your certificate. secret_validation_name will set the validation context for the sds secret config.
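    A sketch of a secret object using the keys described above (the SPIFFE IDs and values are illustrative assumptions):

```json
{
  "secret_key": "secret-catalog",
  "secret_name": "spiffe://example.org/catalog",
  "secret_validation_name": "spiffe://example.org"
}
```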

    hashtag
    Envoy Reference

    hashtag
    Fields

    hashtag
    cluster_key

    A unique key used to identify this particular cluster configuration. This key is used in or to handle routing of traffic to an endpoint.

    hashtag
    zone_key

    The zone in which this object will live. It will only be able to be referenced by objects or sent to Sidecars that live in the same zone.

    hashtag
    name

    The name of the service that this cluster is addressing. This field has different behavior depending on whether this cluster's instances will be pulled from service discovery or manually inserted.

    When instances are manually inserted, this field has no effect. When they are instead auto-populated, this field must match the announced service name from .

    hashtag
    require_tls

    If true, this cluster will only accept HTTPS connections. In this case, one of the or fields should be set. If false, this cluster will only accept plaintext HTTP connections.

    hashtag
    secret

    Configure SSL certificate configuration through Envoy's SDS (Secret Discovery Service)

    hashtag
    ssl_config

    for this cluster.

    hashtag
    instances

    An array of instances that this cluster will use to route requests. Can be either manually inserted, or automatically populated from service discovery.

    The order in which instances handle requests is governed by the field.

    hashtag
    circuit_breakers

    Default and high priority

    hashtag
    outlier_detection

    hashtag
    health_checks

    Array of .

    hashtag
    lb_policy

    .

    Defaults to least_request; supported options are: round_robin, least_request, ring_hash, random, maglev, and cluster_provided.

    hashtag
    checksum

    An API calculated checksum. Can be used to verify that the API contains the expected object before performing a write.

    route

    hashtag
    Summary

    Routes match the URI portion of the incoming request and route traffic to different shared_rules. This allows requests to routes like /api/v1 and /api/v2 to end up at entirely different hosts if desired. Route objects support matching, prefix rewriting, and redirection of requests.

    hashtag
    Example Object
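    A sketch of a route object based on the fields documented below (values illustrative; the inner shape of route_match is an assumption):

```json
{
  "route_key": "route-catalog",
  "domain_key": "domain-catalog",
  "zone_key": "zone-default-zone",
  "route_match": {
    "path": "/services/catalog/",
    "match_type": "prefix"
  },
  "prefix_rewrite": "/",
  "shared_rules_key": "shared-rules-catalog",
  "high_priority": false
}
```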

    hashtag
    Features

    See the for an explanation of all of the routing features and how to set them.

    hashtag
    Envoy Reference

    hashtag
    Fields

    hashtag
    route_key

    A unique key to identify this route configuration in the Fabric API.

    hashtag
    domain_key

    String to specify which domain object this route should attach to.

    hashtag
    zone_key

    The zone in which this object will live. It will only be able to be referenced by objects or sent to Sidecars that live in the same zone.

    hashtag
    path

    DEPRECATION: This field has been deprecated and will be removed in the next major version release. Use route_match instead.

    Explicit path string to match on. E.g. "/services/catalog/" or "/apps/ui/".

    hashtag
    route_match

    to match against incoming requests.

    hashtag
    prefix_rewrite

    When a match is found using the values configured in , the value of the :path header on the request will be replaced with this value for forwarding.

    hashtag
    redirects

    This field has no effect. To set redirects on the virtual host level, see the object.

    hashtag
    shared_rules_key

    Indicates the key of the object to use for specifying a default cluster to forward to.

    This field may be omitted if are defined directly.

    hashtag
    rules

    A list of to specify various more complex route matching and forwarding specifications.

    hashtag
    response_data

    A collection of annotations that should be applied to responses when handling a request.

    hashtag
    cohort_seed

    This field has no effect.

    hashtag
    retry_policy

    A that controls how this route will handle automatic retries to upstream clusters and govern timeouts.

    hashtag
    high_priority

    Defaults to false, which indicates this is normal traffic. If set to true, routes are considered which allows different handling of the request.

    hashtag
    filter_metadata

    A map from string to metadata that can be used to provide virtual host-specific configurations for . See the guide for info on setting this up.

    hashtag
    checksum

    An API calculated checksum. Can be used to verify that the API contains the expected object before performing a write.

    rule

    hashtag
    Summary

    Rules dictate the logic by which traffic is routed. Attributes from incoming requests are matched against preconfigured rule objects to determine where the outgoing upstream request should be routed. Rules are specified programmatically in routes and shared_rules as an array in the Rules attribute.

    hashtag
    Example object
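    A sketch of a rule object based on the fields documented below (values illustrative; the constraint entries follow the cluster_constraints fields described later in this reference):

```json
{
  "rule_key": "rule-catalog",
  "methods": ["GET", "POST"],
  "matches": [],
  "constraints": {
    "light": [
      {
        "constraint_key": "constraint-catalog",
        "cluster_key": "cluster-catalog",
        "weight": 1
      }
    ]
  }
}
```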

    hashtag
    Fields

    hashtag
    rule_key

    A unique key for each rule. When a request is routed by a rule, it appends the header "X-Gm-Rule" with the rule key.

    hashtag
    methods

    The supported request methods for this rule. Setting to an empty array will allow all methods.

    hashtag
    matches

    for this rule.

    hashtag
    constraints

    The constraints field defines arrays that map requests to clusters. Currently, the only implemented field is the light field, which is used to determine the Instance to which the live request will be sent and from which the response will be returned to the caller.

    Currently, the constraints field must set a light field that contains an array of .

    NOTE: constraints also contains a dark and a tap field, which currently have no effect. The dark array will be used in future versions to support traffic shadowing to Instances. Similarly, the tap array will determine an Instance to send a copy of the request to, comparing its response to the light response.

    hashtag
    cohort_seed

    This field has no effect.

    cluster_constraints

    hashtag
    Summary

    A ClusterConstraint describes a filtering of the Instances in a Cluster based on their Metadata. Instances in the keyed cluster with a superset of the specified Metadata will be included. The Weight of the ClusterConstraint is used to inform selection of one ClusterConstraint over another.

    hashtag
    Example object
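    A sketch of a ClusterConstraint object using the fields documented below (values illustrative):

```json
{
  "constraint_key": "constraint-catalog",
  "cluster_key": "cluster-catalog",
  "metadata": [
    {"key": "version", "value": "1.0"}
  ],
  "weight": 1
}
```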

    hashtag
    Fields

    hashtag
    constraint_key

    A string value to uniquely identify this constraint in the API.

    hashtag
    cluster_key

    The key of the that matching requests will be sent to.

    hashtag
    metadata

    Array of {"key": "", "value": } pairs that must match on the intended cluster.

    hashtag
    properties

    This field has no effect.

    hashtag
    response_data

    Array of

    hashtag
    weight

    Relative weight of this constraint expressed as an integer.

    retry_policy

    hashtag
    Summary

    A retry policy is a way for the Grey Matter Sidecar or Edge to automatically retry a failed request on behalf of the client. This is mostly transparent to the client, who only sees the status and body of the final request attempted (failed or succeeded). The only effects a successful retry should produce are a longer average request time and fewer failures.

    hashtag
    Example object
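    A sketch of a retry_policy object using the fields documented below (values illustrative):

```json
{
  "num_retries": 3,
  "per_try_timeout_msec": 1000,
  "timeout_msec": 5000
}
```

    With these values, up to 4 total requests (the original plus 3 retries) can be attempted, each allowed 1 second, within an overall 5-second budget.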

    hashtag
    Fields

    hashtag
    num_retries

    This is the max number of retries attempted. Setting this field to N will cause up to N retries to be attempted before returning a result to the user.

    Setting to 0 means only the original request will be sent and no retries are attempted. A value of 1 means the original request plus up to 1 retry will be sent, resulting in potentially 2 total requests to the server. A value of N will result in up to N+1 total requests going to the service.

    hashtag
    per_try_timeout_msec

    This is the timeout for each retry. The retry attempts can have longer or shorter timeouts than the original request. However, if per_try_timeout_msec is too long, it is possible that not all retries will be attempted, as that would violate the timeout_msec field.

    hashtag
    timeout_msec

    This is the total timeout for the entire chain: the initial request plus all retries. This should typically be set large enough to accommodate the request and all desired retries.

    CORS

    hashtag
    Summary

    Grey Matter supports the configuration of cross-origin resource sharing on a sidecar. CORS can be configured to allow an application to access resources at a different origin (domain, protocol, or port) than its own.

    For more information on CORS, see this CORS reference.

    hashtag
    Simple Requests

    Simple requests are classified as requests that don't require a CORS preflight check. This distinction separates requests that might be dangerous (i.e., those that modify server resources) from those that are most likely benign. A request is considered simple when all of the following criteria are true:

    • The method is GET, HEAD, or POST

    • Only CORS safe-listed headers are present

    • The Content-Type header is set to one of application/x-www-form-urlencoded, multipart/form-data, or text/plain

    A more comprehensive list and explanation can be found here.

    As an example, say an app running at http://localhost:8080 is trying to call a backend service with a Grey Matter sidecar at http://localhost:10808. Without a CORS configuration, this request would fail, because the app localhost:8080 is trying to access resources from a server at a different origin, localhost:10808. To solve this, the following CORS config is set on its domain:

    With this configuration, if a simple request comes in to the sidecar from the app, it will have an Origin header value of http://localhost:8080, and this request will succeed. The server will attach a header Access-Control-Allow-Origin: http://localhost:8080 to the response, which signals to the browser that this request is allowed.

    hashtag
    Preflight Requests

    Preflight requests are initiated by the browser using the OPTIONS HTTP method before sending a request, in order to determine if the real request is safe to send. The response to a preflight request contains information about what the server allows, and the browser determines whether or not to send the actual request based on this information.

    A preflight request carries three HTTP headers: access-control-request-method, access-control-request-headers, and the origin header. These correspond to fields of the cors_config, so those fields can be configured to determine how the Grey Matter Sidecar will respond to preflight requests. If a preflight request comes in to the Grey Matter Sidecar that does not meet the specification for one of these configured fields, the sidecar will send back a response telling the browser not to initiate the request.

    Building on the example from simple requests above, say the app running at http://localhost:8080 wants to send requests to the backend service at http://localhost:10808 with a content-type of application/json;charset=UTF-8. This particular content-type is outside of those allowed by CORS for simple requests, and would thus trigger a preflight request to determine whether the real request can be sent. For the CORS configuration to indicate that the request can be sent, it would need to allow the content-type header via the allowed_headers field:

    In the above configuration, CORS will allow requests with Origin header value http://localhost:8080 only and indicate that the content-type header can be set according to the request.

    hashtag
    Configuration

    To set up CORS, set the cors_config field on a domain object with the desired configuration; see the fields below.

    For an existing domain, run

    and add the desired cors_config object.

    hashtag
    Example object

    hashtag
    Fields

    hashtag
    allowed_origins

    This field specifies an array of string patterns that match allowed origins. The proxy will use these matchers to set the access-control-allow-origin header. This header will be set on any cross-origin response that matches one of the allowed_origins.

    Available matchers include:

    • exact

    • prefix

    • suffix

    • regex

    Example:

    A wildcard value * is allowed except when using the regex matcher.

    hashtag
    allow_credentials

    Specifies the content for the access-control-allow-credentials header that the proxy will set on any cross-origin response that matches one of the allowed_origins. This header specifies whether or not the upstream service allows credentials.

    hashtag
    exposed_headers

    Specifies the content for the access-control-expose-headers header that the proxy will set on any cross-origin response that matches one of the allowed_origins. This header lists the response headers that are exposed to the browser.

    hashtag
    max_age

    Specifies the content for the access-control-max-age header that the proxy will set on the preflight response. This header is an integer value specifying how long, in seconds, the result of a preflight request can be cached by the browser.

    hashtag
    allowed_methods

    Specifies the content for the access-control-allow-methods header that the proxy will set on the preflight response. This header specifies an array of methods allowed by the upstream service.

    hashtag
    allowed_headers

    Specifies the content for the access-control-allow-headers header that the proxy will set on the preflight response. This header specifies an array of headers allowed by the upstream service.

    hashtag
    Notes

    • When a request passes through both a gateway and an upstream service, the proxy applies the upstream service's CORS policy; the gateway's CORS policy is ignored.

    • Because CORS is a browser construct, curl can always make a request to the server, with or without CORS. However, it can be used to mimic a browser and verify how the proxy will react to CORS requests:

    header_constraint

    hashtag
    Summary

    hashtag
    Example object

    hashtag
    Fields

    hashtag
    name

    Header key to match on. Supports regular expressions.

    hashtag
    value

    Header value to match on. Supports regular expressions.

    hashtag
    case_sensitive

    If true, then the regex matching will be case sensitive. Defaults to false.

    hashtag
    invert

    If true, invert the regex match. This allows easier "not" expressions.

    For example, to match only X-Forwarded-Proto: "http":

    But to match anything NOT "https":

    listener

    hashtag
    Summary

    Any number of listener objects can be attached to each proxy in order to receive incoming traffic. Their main use is to specify the address, port, and protocol that will be used to receive incoming requests. A sidecar can have as many listeners as needed, but at least one listener must exist or it will not be able to receive any traffic.

    WARNING At least one listener must be allocated for a proxy to receive traffic. Additionally, the port exposed by at least one listener must match the port advertised to the service discovery mechanism in use.

    NOTE Listeners can also be configured directly in the proxy objects. When defined there, they cannot be shared among multiple proxies, but this saves the user from creating additional config objects.

    hashtag
    Usage

    A common usage pattern is 2-3 listeners on each sidecar. One listener is set up to listen on all network interfaces (0.0.0.0) and receives traffic from the rest of the mesh. This is the port that is advertised in service discovery. This listener would carry configuration for AuthN and AuthZ, instrumentation, etc.

    Another listener is configured for egress requests from the service. It is bound only to the loopback interface (127.0.0.1), and will thus only receive traffic from the local microservice. This listener would be set up with routes and filters to facilitate regular HTTP traffic out to the rest of the mesh.

    A third listener (or more, as needed) would handle specific traffic and protocols to dependencies. The example below shows a listener set up to handle TCP traffic to MongoDB listening on the default MongoDB port.
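Since the MongoDB example object itself is not reproduced on this page, the following is only a sketch of what such a TCP egress listener could look like. The key names follow the fields documented below, but the specific values and the tcp_proxy filter settings are illustrative assumptions, not a verified configuration:

```json
{
  "listener_key": "example-mongodb-egress",
  "zone_key": "default-zone",
  "name": "mongodb-egress",
  "ip": "127.0.0.1",
  "port": 27017,
  "domain_keys": [],
  "active_network_filters": [
    "envoy.tcp_proxy"
  ],
  "network_filters": {
    "envoy_tcp_proxy": {
      "cluster": "mongodb",
      "stat_prefix": "mongodb"
    }
  }
}
```

Binding to 127.0.0.1 keeps the listener reachable only from the local microservice, matching the egress pattern described above.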

    hashtag
    Example Object

    hashtag
    Secret Configuration for SPIFFE/SPIRE

    To configure the service to require SPIFFE/SPIRE on its ingress, you must set a secret on the listener. NOTE if you intend to use SPIFFE/SPIRE on a service ingress, do not set an ssl_config on the corresponding domain object. Any ssl_config set on the domain will override this secret set on the listener. An example secret object is as follows:

    This object will configure Envoy to use Secret Discovery Service to fetch SPIFFE certificates from the configured path, specified as an environment variable SPIRE_PATH in gm-proxy. For information on how Envoy's SDS works, see the Envoy documentation. The secret_key specifies the name of the secret to fetch. secret_name should be the SPIFFE Id of your certificate. secret_validation_name sets the validation context for the SDS secret config.

    hashtag
    Envoy Reference

    hashtag
    Fields

    hashtag
    listener_key

    A unique key to identify this listener configuration in the Fabric API. This key is used in proxy objects to attach physical listeners to Sidecars.

    hashtag
    zone_key

    The zone in which this object will live. It will only be able to be referenced by objects or sent to Sidecars that live in the same zone.

    hashtag
    name

    A unique name for this listener on the Sidecar. This does not need to be globally unique across the Fabric mesh, but needs to be unique for each Sidecar.

    hashtag
    active_network_filters

    Array of network filter names that should be active on this listener's filter chain. This list acts as a simple mechanism for turning specific filters on/off without needing to completely remove their configuration from the network_filters section.

    NOTE: The order of filters in this array dictates the evaluation order of the filters in the chain.

    hashtag
    network_filters

    Array of filter configurations to be used when a filter is listed in active_network_filters.

    hashtag
    active_http_filters

    Array of HTTP filter names that should be active on this listener's filter chain. This list acts as a simple mechanism for turning specific filters on/off without needing to completely remove their configuration from the http_filters section.

    NOTE: The order of filters in this array dictates the evaluation order of the filters in the chain.

    hashtag
    http_filters

    Array of filter configurations to be used when a filter is listed in active_http_filters.

    hashtag
    ip

    Network interface this listener will bind to. For example, "0.0.0.0" to listen for requests from anywhere on the network, or "127.0.0.1" to listen only for local requests.

    hashtag
    port

    Integer port this listener will bind to. Must be available on the host or the listener provisioning will fail.

    hashtag
    protocol

    DEPRECATION: This field has been deprecated and will be removed in the next major version release.

    This field has no effect.

    hashtag
    domain_keys

    Array of domain keys that will be linked to this listener.

    hashtag
    tracing_config

    Configuration object used to set up distributed tracing on this listener.

    hashtag
    secret

    Secret object used to set up SSL certificates through Envoy's Secret Discovery Service.

    hashtag
    checksum

    An API calculated checksum. Can be used to verify that the API contains the expected object before performing a write.

    secret

    hashtag
    Summary

    hashtag
    Example object

    hashtag
    Fields

    hashtag
    secret_key

    String key that uniquely identifies this secret configuration in the Secret Discovery Service.

    hashtag
    secret_name

    Secret names are identities that live within the cert pool of Envoy. A name should correspond to one certificate that Envoy has registered, and will be used when querying the SDS API.

    hashtag
    secret_validation_name

    ValidationNames are used to verify a certificate in the Envoy cert pool against a Certificate Authority.

    hashtag
    subject_names

    When performing 2-way SSL, Subject Alternative Names are required for client certificate verification. Without this configuration option, Envoy will not know which certificate to verify when it attempts to connect to its upstream/downstream host.

    hashtag
    ecdh_curves

    If specified, TLS connections established using secrets will only support the specified ECDH curves. If not specified, Envoy's default curves will be used.

    hashtag
    forward_client_cert_details

    This field specifies how to handle the x-forwarded-client-cert (XFCC) HTTP header.

    The possible options when forwarding client cert details are:

    • "SANITIZE"

    • "SANITIZE_SET"

    • "FORWARD_ONLY"

    • "APPEND_FORWARD"

    • "ALWAYS_FORWARD_ONLY"

    hashtag
    set_current_client_cert_details

    Valid only when forward_client_cert_details is APPEND_FORWARD or SANITIZE_SET and the client connection is mTLS. It specifies the fields in the client certificate to be forwarded. Note that in the x-forwarded-client-cert header, Hash is always set, and By is always set when the client certificate presents the URI type Subject Alternative Name value.

    match

    hashtag
    Summary

    Matches determine if a request "matches" a specified parameter. In rules they are used to see if individual requests should be routed to an upstream cluster.

    hashtag
    Example object

    hashtag
    Fields

    hashtag
    kind

    One of:

    • cookie: matches from.value against the Cookie key from.key in the request

    • header: matches from.value against the Header key from.key in the request

    • query: matches from.value against the Query key from.key in the request

    hashtag
    behavior

    One of:

    • "exact"

    • "regex"

    • "range"

    Defaults to exact

    hashtag
    from

    {"key": "", "value": ""} pair specifying the source of the match.

    hashtag
    to

    {"key": "", "value": ""} pair specifying the new location to set based on the match.

    cluster_constraints

    hashtag
    Summary

    A ClusterConstraint describes a filtering of the Instances in a Cluster based on their Metadata. Instances in the keyed cluster with a superset of the specified Metadata will be included. The Weight of the ClusterConstraint is used to inform selection of one ClusterConstraint over another.

    hashtag
    Example object

    hashtag
    Fields

    hashtag
    constraint_key

    A string value to uniquely identify this constraint in the API.

    hashtag
    cluster_key

    The key of the cluster that matching requests will be sent to.

    hashtag
    metadata

    Array of {"key": "", "value": ""} pairs that must match on the intended cluster.

    hashtag
    properties

    This field has no effect.

    hashtag
    response_data

    Array of response data objects.

    hashtag
    weight

    Relative weight of this constraint expressed as an integer.
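As a sketch of how relative weights play out, assuming each request is routed to a constraint with probability proportional to its weight (the keys and weights below are illustrative, mirroring the weighted constraints example in this document):

```python
# Illustrative only: shows how integer weights translate into a
# relative traffic split, assuming selection probability is
# proportional to weight.

def traffic_split(weights: dict) -> dict:
    total = sum(weights.values())
    return {key: weight / total for key, weight in weights.items()}

split = traffic_split({
    "example-service-1.0": 10,
    "example-service-1.1": 1,
})
# example-service-1.0 receives about 91% of requests,
# example-service-1.1 about 9%.
print(split)
```

Because only the ratio matters, weights of 10 and 1 behave the same as 20 and 2.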

    route_match

    hashtag
    Summary

    hashtag
    Example object

    hashtag
    Fields

    hashtag
    path

    String value that will be compared to the :path header on the incoming request using the chosen match_type.

    hashtag
    match_type

    Sets the type of string matching to perform on the path. Must be one of:

    • "prefix" : must match the beginning of the :path header

    • "exact" : must exactly match the :path header (query strings are not included in this match)

    • "regex" : interpreted as a regex that must match the :path header (query strings are not included in this match)

    retry_policy

    hashtag
    Summary

    A retry policy is a way for the Grey Matter Sidecar or Edge to automatically retry a failed request on behalf of the client. This is mostly transparent to the client, who only sees the status and body of the final request attempted (failed or succeeded). The only effects a client should see from a successful retry are a longer average request time and fewer failures.

    hashtag
    Example object

    hashtag
    Fields

    hashtag
    num_retries

    This is the max number of retries attempted. Setting this field to N will cause up to N retries to be attempted before returning a result to the user.

    Setting to 0 means only the original request will be sent and no retries are attempted. A value of 1 means the original request plus up to 1 retry will be sent, resulting in potentially 2 total requests to the server. A value of N will result in up to N+1 total requests going to the service.

    hashtag
    per_try_timeout_msec

    This is the timeout for each retry attempt. Retry attempts can have longer or shorter timeouts than the original request. However, if per_try_timeout_msec is set too long, it is possible that not all retries will be attempted, as doing so would violate the overall timeout_msec field.

    hashtag
    timeout_msec

    This is the total timeout for the entire chain: the initial request plus all retries. It should typically be set large enough to accommodate the original request and all desired retries.

    {
      "headers": [
        {
          "name": "test-new-header",
          "value": "yes"
        }
      ],
      "cookies": [
        {
          "name": "dev-cookie",
          "value": "false",
          "value_is_literal": true
        }
      ]
    }
    {
      "headers": [
        {
          "name": "test-new-header",
          "value": "yes"
        }
      ],
      "cookies": [
        {
          "name": "dev-cookie",
          "value": "false",
          "value_is_literal": true
        }
      ]
    }
    {
      "zone_key": "default-zone",
      "zone_name": "zone"
    }
    {
      "proxy_key": "catalog",
      "zone_key": "default",
      "name": "catalog",
      "domain_keys": [
        "catalog-domain"
      ],
      "listener_keys": [
        "catalog-listener"
      ],
      "listeners": null,
      "upgrades": "",
      "active_proxy_filters": [
        "gm.metrics",
        "gm.observables"
      ],
      "proxy_filters": {
        "envoy_rbac": null,
        "gm_impersonation": {},
        "gm_inheaders": {},
        "gm_listauth": {},
        "gm_metrics": {
          "metrics_port": 8081,
          "metrics_host": "0.0.0.0",
          "metrics_dashboard_uri_path": "/metrics",
          "metrics_prometheus_uri_path": "/prometheus",
          "prometheus_system_metrics_interval_seconds": 15,
          "metrics_ring_buffer_size": 4096,
          "metrics_key_function": "depth",
          "metrics_key_depth": "1"
        },
        "gm_oauth": {},
        "gm_observables": {
          "useKafka": true,
          "topic": "production-catalog-1.0",
          "eventTopic": "events",
          "kafkaServerConnection": "kafka-observables.observables.svc:9092"
        }
      },
      "checksum": "9830e988dd93d560426e3ddff6758ca2976565b9e064e68f99661a39b3b17239"
    }
    {
      "name": "X-Forwarded-Proto",
      "value": "http"
    }
    {
      "secret_key": "web-secret",
      "secret_name": "spiffe://greymatter.io/web_proxy/mTLS",
      "secret_validation_name": "spiffe://greymatter.io",
      "subject_names": "spiffe://greymatter.io/echo_proxy/mTLS",
      "ecdh_curves": [
        "X25519:P-256:P-521:P-384"
      ],
      "forward_client_cert_details": "SANITIZE",
      "set_current_client_cert_details": {
        "uri": false
      }
    }
    {
      "path": "/services/example/latest/",
      "match_type": "prefix"
    }


  • list: list all of a particular object in the Grey Matter API

  • get: retrieve an object from the Grey Matter API

  • create: create an object within the Grey Matter API

  • edit: edit an object from the Grey Matter API

  • delete: delete an object from the Grey Matter API

    $ greymatter --help
    NAME
        greymatter - Command line tool for interacting with the Grey Matter API
    
    USAGE
        greymatter [GLOBAL OPTIONS] <command> [COMMAND OPTIONS] [arguments...]
    
    VERSION
        v1.2.1
    
    COMMANDS
        list    list all of a particular object in the Grey Matter API
    
        get     retrieve an object from the Grey Matter API
    
        create  create an object within the Grey Matter API
    
        edit    edit an object from the Grey Matter API
    
        delete  delete an object from the Grey Matter API
    
        export-zone
                export a Zone from the Grey Matter API
    
        import-zone
                import a Zone to the Grey Matter API
    
    GLOBAL OPTIONS
        --api.header=header
                Specifies a custom header to send with every API request. Headers are given as
                name:value pairs. Leading and trailing whitespace will be stripped from the
                name and value. For multiple headers, this flag may be repeated or multiple
                headers can be delimited with commas.
    
        --api.host=host:port
                (default: localhost:80)
                The address (host:port) for API requests. If no port is given, it defaults to
                port 443 if --api.ssl is true and port 80 otherwise.
    
        --api.insecure
                (default: false)
                If true, don't validate server cert when using SSL for API requests
    
        --api.key=string
                (default: "none")
                [SENSITIVE] The auth key for API requests
    
        --api.prefix=value
                The url prefix for API requests. Forms the path part of <host>:<port><path>
    
        --api.ssl
                (default: true)
                If true, use SSL for API requests
    
        --api.sslCert=value
                Specifies the SSL cert to use for every API request.
    
        --api.sslKey=value
                Specifies the SSL key to use for every API request.
    
        --console.level=level
                (default: "info")
                (valid values: "debug", "info", "error", or "none")
                Selects the log level for console logs messages.
    
        --format=string
                (default: "json")
                The I/O format (json or yaml)
    
        --help  (default: false)
                Show a list of commands or help for one command
    
        --version
                (default: false)
                Print the version and exit
    
        Global options can also be configured via upper-case, underscore-delimited environment
        variables prefixed with "GREYMATTER_". For example, "--some-flag" becomes
        "GREYMATTER_SOME_FLAG". Command-line flags take precedence over environment variables.
    
    Run "greymatter help <command>" for more details on a specific command.
    $ greymatter delete domain domain-catalog
    [info] 2019/07/10 03:47:57 Preferring --api.key for authentication
    {
      "domain_key": "domain-catalog",
      "zone_key": "zone-default-zone",
      "name": "catalog",
      "port": 8080,
      "redirects": null,
      "gzip_enabled": false,
      "cors_config": null,
      "aliases": null,
      "force_https": false,
      "checksum": "82581e0c56c2ad385e84234fe118ccf8cf8deb1852a5aa318eab887e9a2717d2"
    }
    $ greymatter delete --help
    NAME
        delete - delete an object from the Grey Matter API
    
    USAGE
        greymatter [GLOBAL OPTIONS] delete [OPTIONS] <object type> <object key>
    
    VERSION
        v1.2.1
    
    DESCRIPTION
        object type is one of: zone, proxy, listener, domain, route, shared_rules, cluster
    
    GLOBAL OPTIONS
        --api.header=header
                Specifies a custom header to send with every API request. Headers are given as
                name:value pairs. Leading and trailing whitespace will be stripped from the
                name and value. For multiple headers, this flag may be repeated or multiple
                headers can be delimited with commas.
    
        --api.host=host:port
                (default: localhost:80)
                The address (host:port) for API requests. If no port is given, it defaults to
                port 443 if --api.ssl is true and port 80 otherwise.
    
        --api.insecure
                (default: false)
                If true, don't validate server cert when using SSL for API requests
    
        --api.key=string
                (default: "none")
                [SENSITIVE] The auth key for API requests
    
        --api.prefix=value
                The url prefix for API requests. Forms the path part of <host>:<port><path>
    
        --api.ssl
                (default: true)
                If true, use SSL for API requests
    
        --api.sslCert=value
                Specifies the SSL cert to use for every API request.
    
        --api.sslKey=value
                Specifies the SSL key to use for every API request.
    
        --console.level=level
                (default: "info")
                (valid values: "debug", "info", "error", or "none")
                Selects the log level for console logs messages.
    
        --format=string
                (default: "json")
                The I/O format (json or yaml)
    
        --help  (default: false)
                Show a list of commands or help for one command
    
        --version
                (default: false)
                Print the version and exit
    
        Global options can also be configured via upper-case, underscore-delimited environment
        variables prefixed with "GREYMATTER_". For example, "--some-flag" becomes
        "GREYMATTER_SOME_FLAG". Command-line flags take precedence over environment variables.
    
    OPTIONS
        --deep  (default: false)
                if true, delete the entire object graph below the specified object
    
        --help  (default: false)
                Show a list of commands or help for one command
    
        --key=string
                [deprecated] key of the object to delete
    
        --version
                (default: false)
                Print the version and exit
    
        Options can also be configured via upper-case, underscore-delimited environment
        variables prefixed with "GREYMATTER_DELETE_". For example, "--some-flag" becomes
        "GREYMATTER_DELETE_SOME_FLAG". Command-line flags take precedence over environment
        variables.
    {
      "interval_msec": 1000,
      "base_ejection_time_msec": 5000,
      "max_ejection_percent": 100,
      "consecutive_5xx": 3,
      "enforcing_consecutive_5xx": 100,
      "enforcing_success_rate": 0,
      "success_rate_minimum_hosts": 0,
      "success_rate_request_volume": 1,
      "success_rate_stdev_factor": 1900,
      "consecutive_gateway_failure": 1,
      "enforcing_consecutive_gateway_failure": 0
    }
    {
        "zone_key": "default-zone",
        "cluster_key": "catalog-service",
        "name": "service",
        "instances": [
            {
                "host": "localhost",
                "port": 8080
            }
        ],
        "circuit_breakers": {
          "max_connections": 500,
          "max_requests": 500
        },
        "outlier_detection": null,
        "health_checks": [],
        "lb_policy": "",
        "secret": {
            "secret_key": "",
            "secret_name": "",
            "secret_validation_name": "",
            "subject_alt_name": "",
            "ecdh_curves": null,
            "set_current_client_cert_details": {
                "uri": false
            },
            "checksum": ""
        }
    }
    "ssl_config": {
      "cipher_filter": "",
      "protocols": [],
      "cert_key_pairs": null,
      "trust_file": "",
      "sni": null
    }
    "secret" : {
      "secret_key": "secret-{{.service.serviceName}}-secret",
      "secret_name": "spiffe://{{ .Values.global.spire.trustDomain }}/{{.service.serviceName}}/mTLS",
      "secret_validation_name": "spiffe://{{ .Values.global.spire.trustDomain }}",
      "ecdh_curves": [
        "X25519:P-256:P-521:P-384"
      ]
    }
    {
      "rule_key": "rkey1",
      "methods": [
        "GET"
      ],
      "matches": [
        {
          "kind": "header",
          "from": {
            "key": "routeTo",
            "value": "passthrough-cluster"
          }
        }
      ],
      "constraints": {
        "light": [
          {
            "cluster_key": "passthrough-cluster",
            "weight": 1
          }
        ]
      }
    }
    "constraints" : {
      "light": [
        {
          "cluster_key": "example-service-1.0",
          "weight": 10
        },
        {
          "cluster_key": "example-service-1.1",
          "weight": 1
        }
      ]
    }
    {
      "constraint_key": "constraint-key-1",
      "cluster_key": "passthrough-cluster-3",
      "metadata": null,
      "properties": null,
      "response_data": {},
      "weight": 1
    }
    {
      "rule_key": "rkey1",
      "methods": [
        "GET"
      ],
      "matches": [
        {
          "kind": "header",
          "from": {
            "key": "routeTo",
            "value": "passthrough-cluster"
          }
        }
      ],
      "constraints": {
        "light": [
          {
            "cluster_key": "passthrough-cluster",
            "weight": 1
          }
        ]
      }
    }
    "constraints" : {
      "light": [
        {
          "cluster_key": "example-service-1.0",
          "weight": 10
        },
        {
          "cluster_key": "example-service-1.1",
          "weight": 1
        }
      ]
    }
    {
      "num_retries": 2,
      "per_try_timeout_msec": 60000,
      "timeout_msec": 60000
    }
    {
      "zone_key": "zone-default-zone",
      "domain_key": "domain-backend-service",
      "name": "*",
      "port": 10808,
      "cors_config": {
        "allowed_origins": [
          { "match_type": "exact", "value": "http://localhost:8080" }
        ],
        "allowed_headers": [],
        "allowed_methods": [],
        "exposed_headers": [],
        "max_age": 60
      }
    }
    {
      "zone_key": "zone-default-zone",
      "domain_key": "domain-backend-service",
      "name": "*",
      "port": 10808,
      "cors_config": {
        "allowed_origins": [
          { "match_type": "exact", "value": "http://localhost:8080" }
        ],
        "allowed_headers": ["content-type"],
        "allowed_methods": [],
        "exposed_headers": [],
        "max_age": 60
      }
    }
    greymatter edit domain <domain-name>
      "cors_config": {
        "allowed_origins": [],
        "allowed_headers": [],
        "allowed_methods": [],
        "exposed_headers": [],
        "max_age": 0,
        "allow_credentials": true
      }
      "allowed_origins": [
          { "match_type": "exact", "value": "http://localhost:8080" }
        ]
    $ curl -v 'http://localhost:9080/services/catalog/latest/' \
        -X OPTIONS \
        -H 'Access-Control-Request-Method: POST' \
        -H 'Access-Control-Request-Headers: content-type' \
        -H 'Origin: http://localhost:8080'
    *   Trying ::1...
    * TCP_NODELAY set
    * Connected to localhost (::1) port 9080 (#0)
    > OPTIONS /services/catalog/latest/ HTTP/1.1
    > Host: localhost:9080
    > User-Agent: curl/7.64.1
    > Accept: */*
    > Access-Control-Request-Method: POST
    > Access-Control-Request-Headers: content-type
    > Origin: http://localhost:8080
    >
    < HTTP/1.1 200 OK
    < access-control-allow-origin: http://localhost:8080
    < access-control-max-age: 60
    < date: Tue, 12 May 2020 20:11:13 GMT
    < server: envoy
    < content-length: 0
    <
    * Connection #0 to host localhost left intact
    * Closing connection 0
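The preflight exchange above can be sketched in plain Python. This is only an illustration of the CORS check against a `cors_config` object, not Grey Matter's or Envoy's actual logic; it assumes exact-match origins and case-insensitive header comparison:

```python
def preflight_allowed(cors_config, origin, request_headers):
    """Rough sketch of a CORS preflight check against a cors_config object."""
    origin_ok = any(
        o["match_type"] == "exact" and o["value"] == origin
        for o in cors_config["allowed_origins"]
    )
    allowed = {h.lower() for h in cors_config["allowed_headers"]}
    headers_ok = all(h.lower() in allowed for h in request_headers)
    return origin_ok and headers_ok

# The second cors_config example above, with "content-type" allowed.
cors_config = {
    "allowed_origins": [{"match_type": "exact", "value": "http://localhost:8080"}],
    "allowed_headers": ["content-type"],
}

print(preflight_allowed(cors_config, "http://localhost:8080", ["Content-Type"]))  # True
print(preflight_allowed(cors_config, "http://evil.example.com", ["Content-Type"]))  # False
```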
    {
      "name": "X-Forwarded-Proto",
      "value": "http"
    }
    {
      "name": "X-Forwarded-Proto",
      "value": "https",
      "invert": true
    }
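The `invert` flag negates the match, so the second matcher above selects any request whose `X-Forwarded-Proto` header is not `https`. A hedged sketch of how these two header matchers could be evaluated (illustrative Python, assuming standard Envoy-style header-matcher semantics):

```python
def header_matches(matcher, headers):
    """Return True if the request headers satisfy the matcher.

    "invert": true flips the result of the exact-value comparison.
    """
    matched = headers.get(matcher["name"]) == matcher["value"]
    return not matched if matcher.get("invert") else matched

exact = {"name": "X-Forwarded-Proto", "value": "http"}
inverted = {"name": "X-Forwarded-Proto", "value": "https", "invert": True}

print(header_matches(exact, {"X-Forwarded-Proto": "http"}))      # True
print(header_matches(inverted, {"X-Forwarded-Proto": "http"}))   # True
print(header_matches(inverted, {"X-Forwarded-Proto": "https"}))  # False
```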
    {
      "kind": "/services/example/latest/",
      "behavior": "prefix",
      "from": "",
      "to": ""
    }
    {
        "constraint_key": "constraint-key-1",
        "cluster_key": "passthrough-cluster-3",
        "metadata": null,
        "properties": null,
        "response_data": {},
        "weight": 1
    }
    {
      "num_retries": 2,
      "per_try_timeout_msec": 60000,
      "timeout_msec": 60000
    }
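With `num_retries` 2 and both timeouts set to 60000 ms, the overall route timeout caps total request time regardless of retries. A small illustration of the worst-case arithmetic, assuming `timeout_msec` is the overall cap and `per_try_timeout_msec` bounds each attempt (typical Envoy semantics):

```python
def worst_case_latency_ms(num_retries, per_try_timeout_msec, timeout_msec):
    # Each of the (num_retries + 1) attempts can take up to
    # per_try_timeout_msec, but timeout_msec caps the total.
    attempts = num_retries + 1
    return min(timeout_msec, attempts * per_try_timeout_msec)

print(worst_case_latency_ms(2, 60000, 60000))  # 60000
print(worst_case_latency_ms(2, 10000, 60000))  # 30000
```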
  • Path matching
    Prefix Rewriting
    Redirect
    routing documentation
    Envoy Route Reference
    domain key
    route_match
    Route match
    route_match
    domain
    shared_rules
    rules
    rules
    Configuration
    retry_policy
    high priority
    filters
    per route filter configuration

    "regex" : is interpreted as a regex that must match the :path header. (query strings are not included in this match)

    match_type
    path
    path
    path

    delete: a specific object and its configurations from the Grey Matter mesh.

    edit: the configuration of an existing object in the Grey Matter mesh.

    get: a specific object and its configurations from the Grey Matter mesh.

    list: objects and their configurations in the Grey Matter mesh.

    Grey Matter Support
    Base command
    Create
    service discovery
    docs
    SPIFFE Id
    Envoy Listener Reference
    proxy
    network filters
    network_filter
    enabled
    http filters
    http_filter
    enabled
    domain keys
    Configuration
    Configuration
    Secret discovery service
    {
      "zone_key": "default",
      "domain_key": "fibonacci",
      "route_key": "fibonacci-route",
      "path": "/",
      "prefix_rewrite": null,
      "redirects": null,
      "shared_rules_key": "",
      "rules": [
        {
          "rule_key": "default",
          "constraints": {
            "light": [
              {
                "constraint_key": "",
                "cluster_key": "fibonacci-service",
                "metadata": null,
                "properties": null,
                "response_data": {},
                "weight": 1
              }
            ],
            "dark": null,
            "tap": null
          }
        }
      ],
      "response_data": {},
      "cohort_seed": null,
      "retry_policy": {
        "num_retries": 3
      }
    }
    {
        "zone_key": "default-zone",
        "listener_key": "catalog-listener",
        "domain_keys": ["catalog"],
        "name": "catalog",
        "ip": "0.0.0.0",
        "port": 9080,
        "protocol": "http_auto",
        "tracing_config": null
    }
    "secret": {
        "secret_key": "secret-{{.service.serviceName}}-secret",
        "secret_name": "spiffe://{{ .Values.global.spire.trustDomain }}/{{.service.serviceName}}/mTLS",
        "secret_validation_name": "spiffe://{{ .Values.global.spire.trustDomain }}",
        "ecdh_curves": [
            "X25519:P-256:P-521:P-384"
        ]
    }
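A sketch of how the Helm-style template above might render. The trust domain and service name here are hypothetical values chosen for illustration, standing in for `.Values.global.spire.trustDomain` and `.service.serviceName`:

```python
# Hypothetical values for the template variables in the secret block above.
trust_domain = "quickstart.greymatter.io"
service_name = "catalog"

secret = {
    "secret_key": f"secret-{service_name}-secret",
    "secret_name": f"spiffe://{trust_domain}/{service_name}/mTLS",
    "secret_validation_name": f"spiffe://{trust_domain}",
    "ecdh_curves": ["X25519:P-256:P-521:P-384"],
}

print(secret["secret_name"])  # spiffe://quickstart.greymatter.io/catalog/mTLS
```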
    Routing Rules
    Retry Policies
    path
    Delete
    Edit
    Get
    List