# Service Deployment for Ingress/Egress Actions

This is an example of deploying a new service into the mesh and configuring it for ingress and egress actions using SPIFFE/SPIRE identities.

## Prerequisites

1. An existing Grey Matter deployment following the [Grey Matter Quickstart Install with Helm/Kubernetes](https://greymatter.gitbook.io/grey-matter-documentation/1.3/installation/installation-kubernetes) guide.
   1. This guide assumes you've followed the step to configure a load balancer for ingress access to the mesh. If so, you should have a URL to access the Grey Matter application (e.g. `a2832d300724811eaac960a7ca83e992-749721369.us-east-1.elb.amazonaws.com:10808`).
   2. Make note of your load balancer ingress URL and use it wherever this guide references `{your-gm-ingress-url}`.
2. [greymatter](https://greymatter.gitbook.io/grey-matter-documentation/1.3/installation/commands-cli) setup with a running Fabric mesh.

## Overview

1. Generate k8s deployment with the service and sidecar
2. Configure the sidecar for ingress actions
3. Configure the sidecar for egress actions

## Steps

### 1. Create Deployment

The service we will be using for this walkthrough is a simple egress/ingress service with two endpoints. The `/ingress` endpoint will return `Ingress request to simple-service successful!`. The `/egress` endpoint takes an environment variable `EGRESS_ROUTE`, generates a request to that route, and returns its response.

There are a few things to note before generating the deployment file:

1. The value of the pod label that Grey Matter Control is discovering, environment variable `GM_CONTROL_KUBERNETES_CLUSTER_LABEL`
2. The value of the kubernetes port name that Grey Matter Control is discovering, environment variable `GM_CONTROL_KUBERNETES_PORT_NAME`
3. The trust\_domain of the SPIRE server
4. The SPIRE server registrar service configuration

Using the Grey Matter Helm Charts, the first and second values default to `greymatter.io/control` and `proxy`. The deployment will therefore need the pod label `greymatter.io/control` with a value identifying the service to Grey Matter Control, and its sidecar container will need a port named `proxy`.

The SPIRE trust domain, by default, is `quickstart.greymatter.io`. This will be important for [generating the mesh configurations](#2-configure-the-deployment-for-ingress-actions). The SPIRE registrar service, by default, is configured to register entries with the SPIRE server for every pod that is created with the same label that Grey Matter Control is looking for, `greymatter.io/control`. It will generate SPIFFE identities for these pods in the form `spiffe://<trust_domain>/<greymatter.io/control-value>`. For example, for a pod in the default setup with label `greymatter.io/control: control-api`, the registrar will generate an entry with SPIFFE ID `spiffe://quickstart.greymatter.io/control-api`.

> Note: If the registrar service in your setup is configured without the `pod_label` specification, it will generate entries with the SPIRE server in a different form, which will determine both the way that the mesh is configured and the deployment. The entries in this setup will be of the form `spiffe://<trust_domain>/ns/<namespace>/sa/<service_account>`, and thus the secrets in the mesh configurations will need to reflect this format instead.
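The two registrar modes described above can be sketched as simple string composition. This is illustrative only (the registrar does this internally); the trust domain, label, namespace, and service account values below are the defaults used throughout this guide:

```shell
# Illustrative only: how the registrar composes SPIFFE IDs in each mode.
TRUST_DOMAIN="quickstart.greymatter.io"

# Default mode: pod_label is set, so the ID comes from the pod's
# greymatter.io/control label value.
CONTROL_LABEL="simple-service"
echo "spiffe://${TRUST_DOMAIN}/${CONTROL_LABEL}"

# Without pod_label: the ID comes from the pod's namespace and service account.
NAMESPACE="default"
SERVICE_ACCOUNT="simple-service"
echo "spiffe://${TRUST_DOMAIN}/ns/${NAMESPACE}/sa/${SERVICE_ACCOUNT}"
```

Whichever form your registrar produces is the form every `secret_name` and `subject_names` value in the mesh configurations below must use.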

Based on the defaults described, the deployment for the `simple-service` will look like the following:

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: simple-service
    greymatter.io/control: simple-service
  name: simple-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simple-service
      greymatter.io/control: simple-service
  template:
    metadata:
      labels:
        app: simple-service
        greymatter.io/control: simple-service
    spec:
      containers:
        - name: service
          image: "zoemccormick/simple-service:latest"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          env:
            - name: EGRESS_ROUTE
              value: http://localhost:10909/catalog/summary
        - name: sidecar
          image: "docker.greymatter.io/release/gm-proxy:1.4.2"
          imagePullPolicy: IfNotPresent
          ports:
            - name: metrics
              containerPort: 8081
            - name: proxy
              containerPort: 10808
          env:
            - name: ENVOY_ADMIN_LOG_PATH
              value: "/dev/stdout"
            - name: PROXY_DYNAMIC
              value: "true"
            - name: SPIRE_PATH
              value: "/run/spire/socket/agent.sock"
            - name: XDS_CLUSTER
              value: "simple-service"
            - name: XDS_HOST
              value: "control.default.svc"
            - name: XDS_NODE_ID
              value: "default"
            - name: XDS_PORT
              value: "50000"
            - name: XDS_ZONE
              value: "zone-default-zone"
          volumeMounts:
            - name: spire-socket
              mountPath: /run/spire/socket
              readOnly: false
      imagePullSecrets:
        - name: docker.greymatter.io
      volumes:
        - name: spire-socket
          hostPath:
            path: /run/spire/socket
            type: DirectoryOrCreate
```

> Note: we set the environment variable `EGRESS_ROUTE` for the service container to `http://localhost:10909/catalog/summary`. This will be important when [configuring the egress actions](#3-configure-the-deployment-for-egress-actions).

Save the deployment as `deployment.yaml` and apply the deployment:

```bash
kubectl apply -f deployment.yaml
```

Run `kubectl get pods -l=greymatter.io/control=simple-service` and make sure that the containers are running `2/2` before moving on to the mesh configuration.

### 2. Configure the deployment for ingress actions

Now, we will generate the mesh configurations in order to route from edge to the sidecar of the deployment and from the sidecar to the simple-service itself. At the end of this step, the `/ingress` endpoint of the simple service should be accessible through the edge and properly return `Ingress request to simple-service successful!`.

#### Domain

The first object to generate is the ingress domain of the service.

```javascript
{
  "zone_key": "zone-default-zone",
  "domain_key": "simple-service-domain",
  "name": "*",
  "port": 10808,
  "force_https": true
}
```

Save this file as `ingress-domain.json` and apply the domain object using the greymatter cli:

```bash
greymatter create domain < ingress-domain.json
```
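Config typos are a common source of mesh misconfiguration, and the CLI will accept what you pipe to it. An optional pre-flight check, assuming `python3` is available on your machine, is to confirm each file parses as JSON before applying it:

```shell
# Hypothetical pre-flight check: confirm a config file is valid JSON
# before handing it to greymatter. The /tmp path here is illustrative.
cat > /tmp/ingress-domain.json <<'EOF'
{
  "zone_key": "zone-default-zone",
  "domain_key": "simple-service-domain",
  "name": "*",
  "port": 10808,
  "force_https": true
}
EOF
python3 -m json.tool /tmp/ingress-domain.json > /dev/null && echo "valid JSON"
```

The same check applies to every JSON file saved in the steps below.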

#### Listener

Next, create the corresponding ingress listener of the service.

```javascript
{
  "zone_key": "zone-default-zone",
  "listener_key": "simple-service-listener",
  "domain_keys": [
    "simple-service-domain"
  ],
  "name": "ingress",
  "ip": "0.0.0.0",
  "port": 10808,
  "protocol": "http_auto",
  "secret": {
    "secret_key": "simple-service.identity",
    "secret_name": "spiffe://quickstart.greymatter.io/simple-service",
    "secret_validation_name": "spiffe://quickstart.greymatter.io",
    "subject_names": [
      "spiffe://quickstart.greymatter.io/edge"
    ],
    "ecdh_curves": [
      "X25519:P-256:P-521:P-384"
    ]
  }
}
```

The combination of the ingress domain and listener opens `0.0.0.0:10808` to accept connections. The `secret` set on the listener specifies the SPIFFE identity for the `simple-service` pod, `spiffe://quickstart.greymatter.io/simple-service`. The sidecar will fetch the certificate for this identity via [Envoy SDS](https://www.envoyproxy.io/docs/envoy/v1.15.0/configuration/security/secret). The `subject_names` field of the secret specifies that only incoming connections with certificate SAN equal to `spiffe://quickstart.greymatter.io/edge` will be accepted by the listener.

Save this file as `ingress-listener.json` and apply it:

```bash
greymatter create listener < ingress-listener.json
```

#### Proxy

The proxy object will indicate that the above domain and listener are meant to configure this specific sidecar. In order to link these objects to the deployment, the `name` field of this proxy object **must** equal the value of the deployment label `greymatter.io/control`, which in this case is `simple-service`.

```javascript
{
  "zone_key": "zone-default-zone",
  "proxy_key": "simple-service-proxy",
  "domain_keys": [
    "simple-service-domain"
  ],
  "listener_keys": [
    "simple-service-listener"
  ],
  "name": "simple-service",
  "listeners": null
}
```

> Note: the `domain_keys` and `listener_keys` fields take a list of strings. When we add a second domain and listener in [egress configurations](#3-configure-the-deployment-for-egress-actions), we will add their keys to this proxy.

Save this file as `proxy.json` and apply it:

```bash
greymatter create proxy < proxy.json
```

#### Clusters

There are two clusters necessary for ingress connections. The first is the `edge-to-simple-service-cluster`. This cluster is used to locate and route to the simple-service sidecar from edge. Mirroring the listener secret above, the `secret` on the `edge-to-simple-service-cluster` specifies that, when connecting to the sidecar, edge should present the SPIFFE certificate with identity `spiffe://quickstart.greymatter.io/edge` and should only connect to a peer presenting a certificate with SAN equal to `spiffe://quickstart.greymatter.io/simple-service`.

The `instances` field is left empty because Grey Matter Control will discover the sidecar with label `greymatter.io/control: simple-service`, and add the instances to this cluster because the `name` field matches. Thus, like the proxy object, it is important to note that the name field **must** equal the `greymatter.io/control` label.

```javascript
{
  "zone_key": "zone-default-zone",
  "cluster_key": "edge-to-simple-service-cluster",
  "name": "simple-service",
  "instances": [],
  "require_tls": true,
  "secret": {
    "secret_key": "edge.identity",
    "secret_name": "spiffe://quickstart.greymatter.io/edge",
    "secret_validation_name": "spiffe://quickstart.greymatter.io",
    "subject_names": [
      "spiffe://quickstart.greymatter.io/simple-service"
    ],
    "ecdh_curves": [
      "X25519:P-256:P-521:P-384"
    ]
  }
}
```
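The mirror relationship between the listener secret and this cluster secret is the heart of the mTLS handshake: each side's `secret_name` (the identity it presents) must appear in the other side's `subject_names` (the identities it accepts). A rough sanity check, illustrative only and assuming `python3` is available:

```shell
# Illustrative only: verify the two secrets above are a valid mTLS pair.
PAIRING=$(python3 - <<'EOF'
listener_secret = {
    "secret_name": "spiffe://quickstart.greymatter.io/simple-service",
    "subject_names": ["spiffe://quickstart.greymatter.io/edge"],
}
cluster_secret = {
    "secret_name": "spiffe://quickstart.greymatter.io/edge",
    "subject_names": ["spiffe://quickstart.greymatter.io/simple-service"],
}
# Each side must present an identity the other side accepts.
ok = (cluster_secret["secret_name"] in listener_secret["subject_names"]
      and listener_secret["secret_name"] in cluster_secret["subject_names"])
print("paired" if ok else "mismatched")
EOF
)
echo "$PAIRING"
```

If either side is missing the other's identity, the TLS handshake between edge and the sidecar will fail.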

The second cluster connects the sidecar to the `simple-service`, which is running on port `8080` as specified in the deployment. Because the two containers share a pod, this connection is plaintext over `localhost` and needs no `secret`. Because we know where the service is running relative to the sidecar (`localhost:8080`), we can hardcode the instance.

```javascript
{
  "zone_key": "zone-default-zone",
  "cluster_key": "simple-service-cluster",
  "name": "service",
  "instances": [
    {
      "host": "localhost",
      "port": 8080
    }
  ],
  "require_tls": false
}
```

Save these files as `edge-to-simple-service-cluster.json` and `simple-service-cluster.json`, respectively, and apply them:

```bash
greymatter create cluster < edge-to-simple-service-cluster.json
greymatter create cluster < simple-service-cluster.json
```

#### Shared Rules

The two clusters created above, `edge-to-simple-service-cluster` and `simple-service-cluster`, need two shared\_rules objects to allow routes to connect to them.

```javascript
{
  "zone_key": "zone-default-zone",
  "shared_rules_key": "edge-to-simple-service-rules",
  "name": "edge-to-simple-service",
  "default": {
    "light": [
      {
        "constraint_key": "",
        "cluster_key": "edge-to-simple-service-cluster",
        "metadata": null,
        "properties": null,
        "response_data": {},
        "weight": 1
      }
    ],
    "dark": null,
    "tap": null
  }
}
```

```javascript
{
  "zone_key": "zone-default-zone",
  "shared_rules_key": "simple-service-rules",
  "name": "service",
  "default": {
    "light": [
      {
        "constraint_key": "",
        "cluster_key": "simple-service-cluster",
        "metadata": null,
        "properties": null,
        "response_data": {},
        "weight": 1
      }
    ],
    "dark": null,
    "tap": null
  }
}
```

When specifying routes from edge to the simple-service sidecar, we'll use `edge-to-simple-service-rules`, and when specifying routes from the simple-service sidecar to the service, we'll use `simple-service-rules`.

Save these files as `edge-to-simple-service-rules.json` and `simple-service-rules.json`, respectively, and apply them:

```bash
greymatter create shared_rules < edge-to-simple-service-rules.json
greymatter create shared_rules < simple-service-rules.json
```

#### Routes

The last step in the ingress configuration is creating the routes. There will be three routes for ingress to the service.

The first two specify when and how edge should connect to the simple-service sidecar. These two routes will be configured on the edge sidecar, and when a request comes in with `path` prefix equal to `/services/simple-service/` or `/services/simple-service`, it will strip off that value from the path and forward the request to the cluster specified in the shared\_rules object that it links to.

```javascript
{
  "zone_key": "zone-default-zone",
  "domain_key": "edge",
  "route_key": "edge-to-simple-service-route",
  "path": "/services/simple-service/",
  "prefix_rewrite": "/",
  "shared_rules_key": "edge-to-simple-service-rules"
}
```

```javascript
{
  "zone_key": "zone-default-zone",
  "domain_key": "edge",
  "route_key": "edge-to-simple-service-route-slash",
  "path": "/services/simple-service",
  "prefix_rewrite": "/services/simple-service/",
  "shared_rules_key": "edge-to-simple-service-rules"
}
```
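The prefix strip performed by the first route can be mimicked with shell parameter expansion. This is illustrative only; the real rewrite happens inside Envoy:

```shell
# Illustrative only: mimic the first route's prefix_rewrite behavior.
REQUEST_PATH="/services/simple-service/ingress"
PREFIX="/services/simple-service/"
# Strip the matched prefix and rewrite it to "/".
REWRITTEN="/${REQUEST_PATH#"$PREFIX"}"
echo "$REWRITTEN"
```

The second route exists so that a request to the bare `/services/simple-service` path (no trailing slash) is first rewritten to `/services/simple-service/`, which the first route then handles.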

The last route configures the simple-service sidecar to route any incoming requests with path prefix `"/"` to the simple-service.

```javascript
{
  "zone_key": "zone-default-zone",
  "domain_key": "simple-service-domain",
  "route_key": "simple-service-route",
  "path": "/",
  "prefix_rewrite": "",
  "shared_rules_key": "simple-service-rules"
}
```

Save these files as `edge-route.json`, `edge-route-slash.json`, and `service-route.json` and apply them:

```bash
greymatter create route < edge-route.json
greymatter create route < edge-route-slash.json
greymatter create route < service-route.json
```

Once all of the configurations are applied, you should be able to access the simple-service at `https://{your-gm-ingress-url}/services/simple-service/` and the ingress route at `https://{your-gm-ingress-url}/services/simple-service/ingress`. Note that if you try to hit `https://{your-gm-ingress-url}/services/simple-service/egress`, the request will fail with `upstream connect error or disconnect/reset before headers. reset reason: connection termination`. This is because we have not yet configured the service for egress actions.

### 3. Configure the deployment for egress actions

For any service being deployed in place of `simple-service` that generates requests to other services in the mesh, there is another set of configurations necessary.

For this walkthrough, we have configured the `simple-service` with environment variable `EGRESS_ROUTE=http://localhost:10909/catalog/summary`. The example that this will show is the `simple-service` `/egress` endpoint generating a request to the Grey Matter Catalog service, which is also running inside the mesh, and returning the json response from a `GET` request of its `/summary` endpoint.

#### SPIFFE/SPIRE and egress explanation

In order to allow the simple-service to make requests inside the mesh, we need to open up a second domain/listener combo on the sidecar, this time on a different port than the ingress at `10808`. We will use `10909`. In a non-SPIFFE/SPIRE setup, this second domain/listener is not necessary.

This second domain/listener is necessary in a SPIFFE/SPIRE setup because all of the sidecars use SPIFFE certificates fetched via Envoy SDS rather than through a physical mount into the service or sidecar containers. The containers themselves cannot have, and do not need, certificates mounted into them. Because of this, any request generated from within a service (`service-A`) can only be `http`. However, all incoming requests to a service (say `service-B`) within the mesh must go through its sidecar, and the sidecar only accepts `https` requests with specific SPIFFE certificates.

The services `service-A` and `service-B` can talk plaintext to their own sidecars. All sidecars have access to their SPIFFE identities via Envoy SDS, so two sidecars can communicate with each other over `https`. Thus, the flow of a request generated by `service-A` to `service-B` looks like the following:

![Simple Egress Flow](https://1676458320-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LsNFVozLgvw3NDMzxBg%2Fsync%2F7db1336d6e12a819ed5abff922bf54f974acfc58.png?generation=1590075289917131\&alt=media)

With this in mind, we can break down the need for the second, "egress" [domain](#egress-domain)/[listener](#egress-listener) combo on port `10909`, and how the `EGRESS_ROUTE` is formed. With the sidecar for `service-A` (in this example the `simple-service` sidecar) listening for plaintext connections over localhost at `10909`, the `simple-service` itself can direct requests to its own sidecar at `http://localhost:10909`. The sidecar can then use `path_matching` on a configured [egress route](#egress-route) to forward specific requests to an [internal, or egress, cluster](#egress-cluster). As described below, this egress cluster will be configured with the correct SPIFFE identity for `service-A`, and the request will be sent to the ingress domain/listener for `service-B` at `10808`.

For this example, where `simple-service` wants to make a request to the Grey Matter Catalog service for its `/summary` endpoint, the flow will look like:

![Example Egress Flow](https://1676458320-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LsNFVozLgvw3NDMzxBg%2Fsync%2Fcd69c3a684c88dd5a7ea03c7c5c53b28fd9f1978.png?generation=1590075290365138\&alt=media)
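The `EGRESS_ROUTE` value decomposes cleanly into the hops in the flow above. The sketch below is illustrative only; the path rewrite at the end is what the egress route configured later in this step performs:

```shell
# Illustrative only: decompose EGRESS_ROUTE into the hops described above.
EGRESS_ROUTE="http://localhost:10909/catalog/summary"
REST="${EGRESS_ROUTE#http://}"    # localhost:10909/catalog/summary
SIDECAR="${REST%%/*}"             # localhost:10909 -> the plaintext egress listener
REQ_PATH="/${REST#*/}"            # /catalog/summary -> matched by the egress route
# The egress route strips the /catalog/ prefix before forwarding to Catalog:
UPSTREAM_PATH="/${REQ_PATH#/catalog/}"
echo "$SIDECAR $REQ_PATH -> $UPSTREAM_PATH"
```

So the service talks plaintext to its own sidecar at `localhost:10909`, and the Catalog sidecar ultimately receives a `GET /summary` over mTLS.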

#### Egress Domain

The egress domain object will have `force_https` set to false, as we want the sidecar to accept plaintext connections on `10909`. It will add the custom header `x-forwarded-proto: https` to requests coming from the domain.

```javascript
{
  "zone_key": "zone-default-zone",
  "domain_key": "simple-service-domain-egress",
  "name": "*",
  "port": 10909,
  "force_https": false,
  "custom_headers": [
    {
      "key": "x-forwarded-proto",
      "value": "https"
    }
  ]
}
```

Save this file as `egress-domain.json` and apply it:

```bash
greymatter create domain < egress-domain.json
```

#### Egress Listener

```javascript
{
  "zone_key": "zone-default-zone",
  "listener_key": "simple-service-listener-egress",
  "domain_keys": [
    "simple-service-domain-egress"
  ],
  "name": "egress",
  "ip": "0.0.0.0",
  "port": 10909,
  "protocol": "http_auto"
}
```

Save this file as `egress-listener.json` and apply it:

```bash
greymatter create listener < egress-listener.json
```

#### Egress Cluster

The egress cluster **must** have `name` equal to the `greymatter.io/control` pod label of the service it is trying to reach in order for Grey Matter Control to give it the instances for that service. In our case, this is the Grey Matter Catalog service, which has `greymatter.io/control` label `"catalog"`.

The `secret` on this cluster presents the SPIFFE certificate with id `spiffe://quickstart.greymatter.io/simple-service` and only connects to a peer presenting a certificate with SAN `spiffe://quickstart.greymatter.io/catalog`.

```javascript
{
  "zone_key": "zone-default-zone",
  "cluster_key": "simple-service-to-catalog-cluster",
  "name": "catalog",
  "instances": [],
  "require_tls": true,
  "secret": {
    "secret_key": "simple-service.identity",
    "secret_name": "spiffe://quickstart.greymatter.io/simple-service",
    "secret_validation_name": "spiffe://quickstart.greymatter.io",
    "subject_names": [
      "spiffe://quickstart.greymatter.io/catalog"
    ],
    "ecdh_curves": [
      "X25519:P-256:P-521:P-384"
    ]
  }
}
```

Save this file as `simple-service-to-catalog-cluster.json` and apply it:

```bash
greymatter create cluster < simple-service-to-catalog-cluster.json
```

#### Egress Shared Rules

To configure the egress route to send requests to the `simple-service-to-catalog-cluster`, we'll generate a shared\_rules object.

```javascript
{
  "zone_key": "zone-default-zone",
  "shared_rules_key": "simple-service-to-catalog-rules",
  "name": "simple-service-to-catalog",
  "default": {
    "light": [
      {
        "constraint_key": "",
        "cluster_key": "simple-service-to-catalog-cluster",
        "metadata": null,
        "properties": null,
        "response_data": {},
        "weight": 1
      }
    ],
    "dark": null,
    "tap": null
  }
}
```

Save this file as `simple-service-to-catalog-rules.json` and apply it:

```bash
greymatter create shared_rules < simple-service-to-catalog-rules.json
```

#### Egress Route

For this example, the egress route we want is to the Grey Matter Catalog service. The way we have chosen to specify this is using path prefix `/catalog` on the request. This route will take requests into the `simple-service-domain-egress` with path prefix match `/catalog`, strip the prefix, and forward the request to the cluster specified in the shared\_rules object `simple-service-to-catalog-rules`.

```javascript
{
  "zone_key": "zone-default-zone",
  "domain_key": "simple-service-domain-egress",
  "route_key": "simple-service-to-catalog-egress-route",
  "path": "/catalog/",
  "prefix_rewrite": "/",
  "shared_rules_key": "simple-service-to-catalog-rules"
}
```

Save this file as `simple-service-to-catalog-route.json` and apply it:

```bash
greymatter create route < simple-service-to-catalog-route.json
```

Now there are two updates to existing Grey Matter objects that need to be made. First, we know that the `simple-service-to-catalog-cluster` will send requests to the Catalog service using the SPIFFE certificate with id (or SAN) `spiffe://quickstart.greymatter.io/simple-service`. The Catalog service ingress listener `secret` will need this id added to its `subject_names` in order to accept the request.

```bash
greymatter edit listener listener-catalog
```

Locate the `secret` field. The `subject_names` should contain only the edge identity, `spiffe://quickstart.greymatter.io/edge`. Change the `subject_names` field to also allow our simple-service identity:

```javascript
  "subject_names": [
    "spiffe://quickstart.greymatter.io/edge",
    "spiffe://quickstart.greymatter.io/simple-service"
  ],
```

Lastly, we need to tell the [proxy](#proxy) object for the simple-service to add the egress domain and listener to the service configuration.

```bash
greymatter edit proxy simple-service-proxy
```

and add `simple-service-domain-egress` to the `domain_keys` and `simple-service-listener-egress` to the `listener_keys`. It should look like:

```javascript
{
  "zone_key": "zone-default-zone",
  "proxy_key": "simple-service-proxy",
  "domain_keys": [
    "simple-service-domain",
    "simple-service-domain-egress"
  ],
  "listener_keys": [
    "simple-service-listener",
    "simple-service-listener-egress"
  ],
  "name": "simple-service",
  "listeners": null
}
```

Now, the `simple-service` should be configured to successfully make requests to the Grey Matter Catalog service. Go to `https://{your-gm-ingress-url}/services/simple-service/egress` to test this out. The egress request should return a string of json, and if you hit the catalog `/summary` endpoint directly through edge, `https://{your-gm-ingress-url}/services/catalog/latest/summary`, you should see the same result.

Any internal service that your service needs to connect to in this way will need its own `service-a-to-service-b` cluster, shared\_rules, and route like the egress ones above, with the route attached to the egress domain.
