Set Up GKE and EKS Clusters and Run a Sample Kubernetes Job

Modern cloud infrastructure relies heavily on container orchestration. Platforms such as Google Kubernetes Engine (GKE) and Amazon Elastic Kubernetes Service (EKS) allow organizations to deploy scalable containerized applications without managing the Kubernetes control plane.

In this tutorial, we will:

- Create a managed Kubernetes cluster on GKE
- Create a managed Kubernetes cluster on EKS
- Deploy a sample Kubernetes Job
- Monitor the job's execution

This guide is designed for DevOps engineers, cloud architects, and developers who want hands-on experience running workloads on managed Kubernetes platforms.


What Are GKE and EKS?

Managed Kubernetes services simplify cluster operations by handling upgrades, scaling, and security.

Google Kubernetes Engine (GKE)

Google Kubernetes Engine is Google Cloud’s managed Kubernetes service. It automatically manages:

- The Kubernetes control plane (API server, etcd, scheduler)
- Node upgrades and auto-repair
- Cluster autoscaling


Amazon Elastic Kubernetes Service (EKS)

Amazon Elastic Kubernetes Service is AWS’s managed Kubernetes platform. It integrates tightly with AWS services such as:

- IAM for authentication and authorization
- Amazon VPC for pod and service networking
- Elastic Load Balancing for exposing services
- CloudWatch for logs and metrics


Prerequisites

Before starting, ensure you have:

- A Google Cloud account with billing enabled and a project created
- An AWS account with permissions to create EKS resources
- Basic familiarity with Kubernetes concepts such as pods, nodes, and jobs

Required tools:

- kubectl
- gcloud CLI
- aws CLI
- eksctl


Install kubectl (the commands below target Linux on amd64; adjust the URL for other platforms):

curl -LO https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

Verify installation:

kubectl version --client

Step 1: Create a GKE Cluster

First, authenticate with Google Cloud.

gcloud auth login

Set your project.

gcloud config set project YOUR_PROJECT_ID

Create a Kubernetes cluster.

gcloud container clusters create cloudbusket-gke \
--zone us-central1-a \
--num-nodes 2

Get cluster credentials.

gcloud container clusters get-credentials cloudbusket-gke \
--zone us-central1-a

Verify cluster nodes.

kubectl get nodes

You should see two worker nodes in the Ready state.


Step 2: Create an EKS Cluster

Now create an EKS cluster using eksctl.

Install eksctl if it is not already installed (Linux amd64 build shown).

curl -LO https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_Linux_amd64.tar.gz
tar -xzf eksctl_Linux_amd64.tar.gz
sudo mv eksctl /usr/local/bin/

Create the cluster.

eksctl create cluster \
--name cloudbusket-eks \
--region us-east-1 \
--nodes 2

Cluster creation may take 15–20 minutes, as eksctl provisions a CloudFormation stack, the control plane, and the node group.

Verify cluster access. eksctl automatically updates your kubeconfig and switches the current context to the new cluster.

kubectl get nodes

You should see worker nodes from AWS.
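With both clusters registered, kubectl talks to whichever context is currently active. A minimal sketch for switching between them: GKE contexts follow the gke_<project>_<zone>_<cluster> naming pattern, while eksctl registers its own context name (list yours with kubectl config get-contexts).

```shell
# GKE contexts are named gke_<project>_<zone>_<cluster>.
PROJECT="YOUR_PROJECT_ID"   # assumption: replace with your project ID
ZONE="us-central1-a"
CLUSTER="cloudbusket-gke"
GKE_CONTEXT="gke_${PROJECT}_${ZONE}_${CLUSTER}"
echo "$GKE_CONTEXT"

# With live clusters, list contexts and switch like so:
#   kubectl config get-contexts
#   kubectl config use-context "$GKE_CONTEXT"
```

Switching contexts is how you choose which cluster receives the job in the next step.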


Step 3: Deploy a Sample Kubernetes Job

Now that both clusters are running, let’s deploy a sample job.

Create a file named sample-job.yaml with the following content:

apiVersion: batch/v1
kind: Job
metadata:
  name: cloudbusket-demo-job
spec:
  template:
    spec:
      containers:
      - name: demo
        image: busybox
        command: ["echo", "Hello from CloudBusket Kubernetes Job"]
      restartPolicy: Never
  backoffLimit: 2

Deploy the job.

kubectl apply -f sample-job.yaml

Verify job execution.

kubectl get jobs

Check logs.

kubectl logs job/cloudbusket-demo-job

Expected output:

Hello from CloudBusket Kubernetes Job

Step 4: Monitor Job Execution

To check job status:

kubectl describe job cloudbusket-demo-job

View pods created by the job:

kubectl get pods
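Instead of polling by hand, you can block until the job finishes. A small sketch, using this tutorial's job name: kubectl wait returns once the named condition is met or the timeout expires.

```shell
# Sketch: block until a Job reports the "complete" condition,
# or fail after the given timeout (default 120s).
wait_for_job() {
  local name="$1" timeout="${2:-120s}"
  kubectl wait --for=condition=complete "job/${name}" --timeout="${timeout}"
}

# Usage (requires an active cluster context):
#   wait_for_job cloudbusket-demo-job 60s
```

This is handy in CI pipelines, where a script must not proceed until the job has succeeded.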

This demonstrates how workloads can run on both cloud providers using the same Kubernetes configuration.
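When you are finished experimenting, delete both clusters so you are not billed for idle nodes. A sketch that assembles the teardown commands with this tutorial's names and locations, so you can review them before running:

```shell
# Teardown commands for the two clusters created above.
GKE_DELETE="gcloud container clusters delete cloudbusket-gke --zone us-central1-a"
EKS_DELETE="eksctl delete cluster --name cloudbusket-eks --region us-east-1"
echo "$GKE_DELETE"
echo "$EKS_DELETE"

# Once you have confirmed the names match your clusters, run:
#   eval "$GKE_DELETE"
#   eval "$EKS_DELETE"
```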
