
ELK on EKS cluster

Introduction

This post provides instructions for setting up an EKS Kubernetes cluster with the Elastic Stack.

One of my projects required deployment to a Kubernetes cluster, and since it was a new project I decided to build it entirely in the cloud. AWS is my main cloud provider, so EKS was the obvious choice. My next task was to set up cross-cutting concerns for the application, and the first default requirement was a logging system. I decided to use Elasticsearch with Kibana for this purpose.

The following post describes the steps I took to build this environment.

EKS

I decided to use the CDK to manage my AWS environment. The prerequisites for the CDK are:

- Node.js (the CDK toolkit runs on it, even when you write stacks in Go)
- the AWS CLI, configured with credentials
- the Go toolchain

To generate a CDK project, run the following (I decided to use the Go version of the tool):

cdk init app --language go

Modify app.go

package main

import (
	"github.com/aws/aws-cdk-go/awscdk/v2"
	"github.com/aws/aws-cdk-go/awscdk/v2/awsautoscaling"
	"github.com/aws/aws-cdk-go/awscdk/v2/awsec2"
	"github.com/aws/aws-cdk-go/awscdk/v2/awseks"
	"github.com/aws/aws-cdk-go/awscdk/v2/awsiam"
	"github.com/aws/constructs-go/constructs/v10"
	"github.com/aws/jsii-runtime-go"
)

type AwsStackProps struct {
	awscdk.StackProps
}

func NewAwsStack(scope constructs.Construct, id string, props *AwsStackProps) awscdk.Stack {
	var sprops awscdk.StackProps
	if props != nil {
		sprops = props.StackProps
	}
	stack := awscdk.NewStack(scope, &id, &sprops)

	// Create VPC
	vpc := awsec2.NewVpc(stack, jsii.String("my-vpc"), &awsec2.VpcProps{})

	// IAM role for our EC2 worker nodes
	workerRole := awsiam.NewRole(stack, jsii.String("EKSWorkerRole"), &awsiam.RoleProps{
		AssumedBy: awsiam.NewServicePrincipal(jsii.String("ec2.amazonaws.com"), nil),
	})

	// Create Kubernetes cluster
	cluster := awseks.NewCluster(stack, jsii.String("my-cluster"), &awseks.ClusterProps{
		ClusterName:             jsii.String("my-cluster"),
		Vpc:                     vpc,
		Version:                 awseks.KubernetesVersion_V1_24(),
		DefaultCapacityInstance: awsec2.InstanceType_Of(awsec2.InstanceClass_T3, awsec2.InstanceSize_MEDIUM),
	})

	// Self-managed worker nodes in an Auto Scaling group
	onDemandASG := awsautoscaling.NewAutoScalingGroup(stack, jsii.String("OnDemandASG"), &awsautoscaling.AutoScalingGroupProps{
		Vpc:          vpc,
		Role:         workerRole,
		MinCapacity:  jsii.Number(1),
		MaxCapacity:  jsii.Number(10),
		InstanceType: awsec2.NewInstanceType(jsii.String("t3.medium")),
		MachineImage: awseks.NewEksOptimizedImage(&awseks.EksOptimizedImageProps{
			KubernetesVersion: jsii.String("1.24"),
			NodeType:          awseks.NodeType_STANDARD,
		}),
	})

	cluster.ConnectAutoScalingGroupCapacity(onDemandASG, &awseks.AutoScalingGroupOptions{})

	return stack
}

...

I configured the cluster with t3.medium instances for both the default capacity and the self-managed worker nodes (the EKS control plane itself is managed by AWS).

Deployment

The prerequisites for this part are:

- the AWS CLI
- kubectl
- eksctl
- helm

Deploy EKS cluster

cdk deploy

Deployment of the cluster can take around 15 minutes. After that, register the cluster context in your kubeconfig with aws eks update-kubeconfig --region us-east-1 --name my-cluster --role-arn <role arn>

Example:

aws eks update-kubeconfig --name my-cluster --region us-east-1 --role-arn arn:aws:iam::11111111111:role/AwsStack-myclusterMastersRoleA2A674AE-1K40LT54NLBLL
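To confirm that kubectl is now talking to the new cluster, you can list the worker nodes (a quick sanity check, assuming the stack above deployed successfully):

```shell
# The current context should point at the new EKS cluster
kubectl cluster-info

# Each instance from the default capacity and the ASG should show up as Ready
kubectl get nodes -o wide
```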

Create an OIDC provider for the cluster

eksctl utils associate-iam-oidc-provider --cluster=my-cluster --region us-east-1 --approve
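You can verify that the provider was registered by listing the IAM OIDC providers in the account; one entry should contain the cluster's OIDC issuer ID:

```shell
# One provider ARN should match the cluster's OIDC issuer
aws iam list-open-id-connect-providers
```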

Add Amazon EBS CSI driver

Elasticsearch needs persistent volumes, and the Kubernetes cluster needs a driver to provision them on EBS. In short, you need to install the EBS CSI driver add-on:

eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster my-cluster \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve \
  --role-only \
  --role-name AmazonEKS_EBS_CSI_DriverRole

eksctl create addon --name aws-ebs-csi-driver --cluster my-cluster --service-account-role-arn arn:aws:iam::111111111111:role/AmazonEKS_EBS_CSI_DriverRole --force
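Once the add-on is installed, its controller and node pods should be running in kube-system (the label selector below is a common one for the driver, but may vary with the add-on version):

```shell
# Controller and per-node pods of the EBS CSI driver should be Running
kubectl get pods -n kube-system -l "app.kubernetes.io/name=aws-ebs-csi-driver"
```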

Create EBS Storage Class

Create file storage-class.yaml with content

kind: "StorageClass"
apiVersion: "storage.k8s.io/v1"
metadata:
  name: "standard"
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: "kubernetes.io/aws-ebs"
parameters:
  type: "gp2"
  fsType: "ext4"

Apply the YAML file with kubectl:

kubectl apply -f storage-class.yaml
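You can check that the class was created and picked up as the default:

```shell
# "standard" should be listed with "(default)" next to its name
kubectl get storageclass
```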

ELK

Add the Elastic Helm repository and create a namespace for the ELK stack:

helm repo add elastic https://helm.elastic.co
helm repo update  
kubectl create namespace monitoring

Elasticsearch

Create a file values.yaml with the following content:

---
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"

# Shrink default JVM heap.
esJavaOpts: "-Xmx512m -Xms512m"

# Allocate smaller chunks of memory per pod.
resources:
  requests:
    cpu: "100m"
    memory: "1Gi"
  limits:
    cpu: "1000m"
    memory: "1Gi"

# Request smaller persistent volumes.
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  storageClassName: "standard"
  resources:
    requests:
      storage: 10Gi

Install Elasticsearch with a single replica:

helm install elasticsearch elastic/elasticsearch -n monitoring -f values.yaml --set replicas=1
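The chart deploys Elasticsearch as a StatefulSet; it can take a few minutes for the pod to bind its EBS volume and become ready. Assuming the chart's default naming (elasticsearch-master), you can wait for it with:

```shell
# Block until the single-replica StatefulSet has rolled out
kubectl rollout status statefulset/elasticsearch-master -n monitoring
```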

Kibana

Install Kibana:
helm install kibana elastic/kibana -n monitoring --set replicas=1

You’ll be able to log in to Kibana with the elastic username. To get the password, run the following command:

kubectl get secrets --namespace=monitoring elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
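Kubernetes stores secret values base64-encoded, which is why the command above pipes the output through base64 -d. A quick illustration with a made-up value:

```shell
# Secret data is base64-encoded; decoding recovers the plain text
echo 'cGFzc3dvcmQ=' | base64 -d
# prints: password
```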

Usage

To access Kibana, run the following command in a separate terminal window:

kubectl port-forward deployment/kibana-kibana 5601 -n monitoring

After that you can access Kibana at http://localhost:5601
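With the port-forward running, you can also check Kibana's status endpoint from the command line (substitute the password obtained earlier):

```shell
# A JSON status body means Kibana is up and can reach Elasticsearch
curl -s -u elastic:<password> http://localhost:5601/api/status
```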

The username is elastic, and the password can be obtained as described above.

Hope it helped somebody…

This post is licensed under CC BY 4.0 by the author.
