git installed
helm v3
envsubst (a dependency of our helm charts)
eksctl or an already running Kubernetes cluster.
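To confirm the prerequisites are in place, a quick sanity check such as the following should succeed (exact version output will vary by environment):

git --version
helm version --short   # should report v3.x
envsubst --version
eksctl version         # only needed if eksctl will provision the cluster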
NOTE: if you already have a Kubernetes cluster up and running, move to step 2. Just verify you can connect to the cluster with a command like
kubectl get nodes
For this deployment, we'll use eksctl to automatically provision a Kubernetes cluster for us. eksctl will use our preconfigured AWS credentials to create master and worker nodes to our specifications, and will leave us with kubectl configured to manipulate the cluster.
The region, node type/size, etc. can all be tuned to your use case; the values given here are simply examples.
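As a sketch of how those values can be tuned, the same cluster can also be described in an eksctl config file instead of command-line flags (the names and sizes below are just this guide's example values):

cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: production
  region: us-east-1
nodeGroups:
  - name: workers
    instanceType: m4.2xlarge
    desiredCapacity: 2
EOF

eksctl create cluster -f cluster.yaml --profile default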
Cluster provisioning usually takes between 10 and 15 minutes. When it is complete, you will see the following output:
When your cluster is ready, run the following to test that your kubectl configuration is correct:
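For example, assuming the example cluster created above, both of these should return without errors and show two Ready nodes:

kubectl cluster-info
kubectl get nodes   # each node should report STATUS Ready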
Though Helm is not the only way to install Grey Matter into Kubernetes, it does make some things very easy and reduces a large number of individual configurations to a few charts. For this step, we'll clone the public git repository that holds the Grey Matter Helm charts and cd into the resulting directory.
NOTE: this tutorial is using a release candidate, so only a specific branch is being pulled. The entire repository can be cloned if desired.
Before running this step, determine whether or not you wish to install Grey Matter Data. If so, determine whether you will use S3 for backing. If you do want to configure Grey Matter Data with S3, follow the S3 setup guide first; you will need the AWS credentials from that setup.
To set up credentials, we need to create a credentials.yaml file that holds some secret information like usernames and passwords. The helm-charts repository contains some convenience scripts to make this easier.
Run:
and follow the prompts. The email and password you are prompted for should match your credentials for the Decipher Nexus (nexus.greymatter.io). If you have decided to install Grey Matter Data persisting to S3, indicate that when prompted and provide the access credentials, region, and bucket name.
Note that if your credentials are not valid, you will see the following response:
To see the default configurations, check the global.yaml file in the root directory of your cloned repo. In general, for this tutorial you should use the default options, but there are a couple of things to note (an illustrative override is sketched after this list).
If you would like to install a Grey Matter Data that is external and reachable from the dashboard, set global.data.external.enabled to true.
If you are installing Grey Matter Data and have set up your S3 credentials, set global.data.external.uses3 to true.
If you plan to update ingress certificates or modify RBAC configurations in the mesh, set the global.rbac.edge values in global.yaml accordingly.
You can set global.environment to eks instead of kubernetes for reference, but we will also override this value with a flag during the installation steps below.
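As an illustration of the overrides above, these values can either be edited directly in global.yaml or passed to helm with --set at install time; the flags shown here are examples only, to be combined with the install commands in the next step as needed:

# Example only: enable an externally reachable Grey Matter Data backed by S3.
helm install data data \
  --set=global.data.external.enabled=true \
  --set=global.data.external.uses3=true \
  -f global.yaml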
Grey Matter is made up of a handful of components, each handling different pieces of the overall platform. Please follow each installation step in order.
Add the charts to your local Helm repository, install the credentials file, and install the Spire server.
Watch the Spire server pod.
Watch it until the READY status is 2/2, then proceed to the next step.
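If you would rather block until the pod is Ready instead of watching it, a kubectl wait along these lines works (assuming the server-0 pod name shown in the example output):

kubectl wait --for=condition=Ready pod/server-0 -n spire --timeout=300s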
Install the Spire agent and the remaining Grey Matter charts.
NOTE: for easy setup, access to this deployment was provisioned with quickstart SSL certificates. They can be found in the helm-charts repository at ./certs. To access the dashboard via the public access point, import the ./certs/quickstart.p12 file into your browser of choice - the password is password.
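If you want to inspect the quickstart bundle before importing it, an openssl command like the following (using the password noted above) prints the certificates it contains:

openssl pkcs12 -in ./certs/quickstart.p12 -info -nokeys -passin pass:password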
An AWS ELB will be created automatically because we set --set=edge.ingress.type=LoadBalancer (along with --set=global.environment=eks) during installation. The ELB is accessible through the randomly generated URL attached to the edge service:
You will need to use this value for EXTERNAL-IP in the CLI configuration below.
Visit the URL (e.g. https://a2832d300724811eaac960a7ca83e992-749721369.us-east-1.elb.amazonaws.com:10808/) in your browser to access the Intelligence 360 Application.
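Rather than copying the hostname by hand, you can pull it straight from the edge service with a jsonpath query; the EDGE_EXTERNAL_IP variable here is just for illustration:

# Capture the ELB hostname attached to the edge service.
EDGE_EXTERNAL_IP=$(kubectl get svc edge -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "https://${EDGE_EXTERNAL_IP}:10808/"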
If you intend to deploy services into your installation, or otherwise modify or explore the Grey Matter configurations, you will need to configure the Grey Matter CLI.
For this installation, the configurations will be as follows. Fill in the value of the edge service's external IP from the previous step for <EDGE-EXTERNAL-IP>, and the path to your helm-charts directory for <path/to/helm-charts>:
Run these in your terminal, and you should be able to use the CLI, e.g. greymatter list cluster.
You have now successfully installed Grey Matter!
If you're ready to shut down your cluster:
NOTE: this deletion actually takes longer to terminate all resources than the output indicates. Attempting to create a new cluster with the same name will fail for some time, until all resources are purged from AWS.
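To check whether the AWS resources are actually gone before reusing the cluster name, you can poll eksctl or CloudFormation, for example:

eksctl get cluster --region us-east-1 --profile default
aws cloudformation list-stacks --region us-east-1 --stack-status-filter DELETE_IN_PROGRESS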
If you would like to install Grey Matter without SPIFFE/SPIRE, set global.spire.enabled to false.
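In that case the install commands from the steps above would carry the extra override, roughly like this sketch (the spire server and agent installs are skipped entirely):

helm install fabric fabric --set=global.environment=eks --set=global.spire.enabled=false -f global.yaml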
If you see Error: could not find tiller, verify that you are using Helm version 3.2.4 and try again. If you need to manage multiple versions of Helm, we highly recommend using helmenv to easily switch between versions.

NOTE: notice that in the edge installation we are setting --set=edge.ingress.type=LoadBalancer; this value sets the service type for edge. The default is ClusterIP. In this example we want an AWS ELB to be created automatically for edge ingress (see below), so we set it to LoadBalancer. See the Kubernetes publishing services docs for guidance on what this value should be in your specific installation.
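If you do not want a cloud load balancer at all, one alternative sketch is to keep the default ClusterIP type and reach edge through a local port-forward (the service name and port follow the kubectl get svc edge output shown later in this guide):

# Install edge with the default ClusterIP service type, so no ELB is created.
helm install edge edge --set=global.environment=eks -f global.yaml

# Forward the edge ingress port to localhost for local testing.
kubectl port-forward svc/edge 10808:10808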
While these are being installed, you can use the kubectl command to check if everything is running. When all pods are Running or Completed, the install is finished and Grey Matter is ready to go.
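For example, watching the default namespace until every pod settles (press Ctrl-C to stop watching):

kubectl get pods -w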
eksctl create cluster \
--name production \
--version 1.17 \
--nodegroup-name workers \
--node-type m4.2xlarge \
--nodes=2 \
--node-ami auto \
--region us-east-1 \
--zones us-east-1a,us-east-1b \
--profile default

[ℹ] using region us-east-1
[ℹ] subnets for us-east-1a - public:192.168.0.0/19 private:192.168.64.0/19
[ℹ] subnets for us-east-1b - public:192.168.32.0/19 private:192.168.96.0/19
[ℹ] nodegroup "workers" will use "ami-0d373fa5015bc43be" [AmazonLinux2/1.15]
[ℹ] using Kubernetes version 1.15
[ℹ] creating EKS cluster "production" in "us-east-1" region
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --name=production'
[ℹ] CloudWatch logging will not be enabled for cluster "production" in "us-east-1"
[ℹ] you can enable it with 'eksctl utils update-cluster-logging --region=us-east-1 --name=production'
[ℹ] 2 sequential tasks: { create cluster control plane "production", create nodegroup "workers" }
[ℹ] building cluster stack "eksctl-production-cluster"
[ℹ] deploying stack "eksctl-production-cluster"
[ℹ] building nodegroup stack "eksctl-production-nodegroup-workers"
[ℹ] --nodes-min=2 was set automatically for nodegroup workers
[ℹ] --nodes-max=2 was set automatically for nodegroup workers
[ℹ] deploying stack "eksctl-production-nodegroup-workers"
[✔] all EKS cluster resource for "production" had been created
[✔] saved kubeconfig as "/home/user/.kube/config"
[ℹ] adding role "arn:aws:iam::828920212949:role/eksctl-production-nodegroup-worke-NodeInstanceRole-EJWJY28O2JJ" to auth ConfigMap
[ℹ] nodegroup "workers" has 0 node(s)
[ℹ] waiting for at least 2 node(s) to become ready in "workers"
[ℹ] nodegroup "workers" has 2 node(s)
[ℹ] node "ip-192-168-29-248.ec2.internal" is ready
[ℹ] node "ip-192-168-36-13.ec2.internal" is ready
[ℹ] kubectl command should work with "/home/user/.kube/config", try 'kubectl get nodes'
[✔] EKS cluster "production" in "us-east-1" region is ready

eksctl get cluster --region us-east-1 --profile default
eksctl get nodegroup --region us-east-1 --profile default --cluster production

git clone --single-branch --branch release-2.2 https://github.com/greymatter-io/helm-charts.git && cd ./helm-charts

Cloning into 'helm-charts'...
remote: Enumerating objects: 337, done.
remote: Counting objects: 100% (337/337), done.
remote: Compressing objects: 100% (210/210), done.
remote: Total 4959 (delta 225), reused 143 (delta 126), pack-reused 4622
Receiving objects: 100% (4959/4959), 1.09 MiB | 2.50 MiB/s, done.
Resolving deltas: 100% (3637/3637), done.

make credentials

./ci/scripts/build-credentials.sh
decipher email:
first.lastname@company.io
password:
Do you wish to configure S3 credentials for gm-data backing [yn] n
Setting S3 to false

"decipher" has been added to your repositories

Error: looks like "https://nexus.greymatter.io/repository/helm" is not a valid chart repository or cannot be reached: failed to fetch https://nexus.greymatter.io/repository/helm/index.yaml : 401 Unauthorized

helm dep up spire
helm dep up edge
helm dep up data
helm dep up fabric
helm dep up sense
make secrets
helm install server spire/server -f global.yaml

kubectl get pod -n spire -w

NAME READY STATUS RESTARTS AGE
server-0 2/2 Running 1 30s

helm install agent spire/agent -f global.yaml
helm install fabric fabric --set=global.environment=eks -f global.yaml
helm install edge edge --set=global.environment=eks --set=edge.ingress.type=LoadBalancer -f global.yaml
helm install data data --set=global.environment=eks --set=global.waiter.service_account.create=false -f global.yaml
helm install sense sense --set=global.environment=eks --set=global.waiter.service_account.create=false -f global.yaml

kubectl get svc edge
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
edge LoadBalancer 10.100.197.77 a2832d300724811eaac960a7ca83e992-749721369.us-east-1.elb.amazonaws.com 10808:32623/TCP,8081:31433/TCP 2m4s

export GREYMATTER_API_HOST=<EDGE-EXTERNAL-IP>:10808
export GREYMATTER_API_PREFIX=/services/control-api/latest
export GREYMATTER_API_SSL=true
export GREYMATTER_API_INSECURE=true
export GREYMATTER_API_SSLCERT=</path/to/helm-charts>/certs/quickstart.crt
export GREYMATTER_API_SSLKEY=</path/to/helm-charts>/certs/quickstart.key
export EDITOR=vim # or your preferred editor

make uninstall

eksctl delete cluster --name production

[ℹ] using region us-east-1
[ℹ] deleting EKS cluster "production"
[✔] kubeconfig has been updated
[ℹ] cleaning up LoadBalancer services
[ℹ] 2 sequential tasks: { delete nodegroup "workers", delete cluster control plane "prod" [async] }
[ℹ] will delete stack "eksctl-production-nodegroup-workers"
[ℹ] waiting for stack "eksctl-production-nodegroup-workers" to get deleted
[ℹ] will delete stack "eksctl-production-cluster"
[✔] all cluster resources were deleted

kubectl get pods

NAME READY STATUS RESTARTS AGE
catalog-5b54979554-hs98q 2/2 Running 2 91s
catalog-init-k29j2 0/1 Completed 0 91s
control-887b76d54-gbtq4 1/1 Running 0 18m
control-api-0 2/2 Running 0 18m
control-api-init-6nk2f 0/1 Completed 0 18m
dashboard-7847d5b9fd-t5lr7 2/2 Running 0 91s
data-0 2/2 Running 0 17m
data-internal-0 2/2 Running 0 17m
data-mongo-0 1/1 Running 0 17m
edge-6f8cdcd8bb-plqsj 1/1 Running 0 18m
internal-data-mongo-0 1/1 Running 0 17m
internal-jwt-security-dd788459d-jt7rk 2/2 Running 2 17m
internal-redis-5f7c4c7697-6mmtv 1/1 Running 0 17m
jwt-security-859d474bc6-hwhbr 2/2 Running 2 17m
postgres-slo-0 1/1 Running 0 91s
prometheus-0 2/2 Running 0 59s
redis-5f5c68c467-j5mwt 1/1 Running 0 17m
slo-7c475d8597-7gtfq 2/2 Running 0 91s