Deploy your first container on a Kubernetes cluster
If you have followed our previous blog post on creating a Kubernetes cluster on AWS EKS, or have set up a local cluster with a tool like minikube, you will already know that to manage all the resources and objects on your cluster you need the command-line tool kubectl, which interacts on your behalf with the kube-api-server to perform almost all operations on your cluster.
What are deployments and pods?
A deployment can be understood as a template that defines how your running containers should look, based on the specification provided. Each running instance created from a deployment is termed a pod (a pod wraps one or more containers). A sample deployment will look like below -
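The original manifest is not reproduced here, so the sketch below is a minimal reconstruction from the details in this post (name s3-streamer, label app=s3-streamer, container port 9999); the image reference and replica count are assumptions -

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: s3-streamer
  labels:
    app: s3-streamer
spec:
  replicas: 1                      # bump this to run multiple pods
  selector:
    matchLabels:
      app: s3-streamer
  template:
    metadata:
      labels:
        app: s3-streamer           # the label the service selector will match
    spec:
      containers:
        - name: s3-streamer
          image: <your-registry>/s3-streamer:latest   # placeholder image reference
          ports:
            - containerPort: 9999  # the port the Java app listens on
```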
This is a deployment manifest for a microservice named s3-streamer, written in Java, that streams AWS S3 objects into the cluster. More details about the service can be found here.
To apply this deployment, go to the directory where the above YAML file is saved and run the following command in your OS terminal -
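Assuming the manifest was saved as deployment.yaml (the filename referenced later in this post) -

```shell
kubectl apply -f deployment.yaml
```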
And see a similar output -
deployment.apps/s3-streamer created
Then you can run the get commands for deployments and pods to check their running status -
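A minimal version of those commands; the label filter assumes the app=s3-streamer label from the manifest -

```shell
kubectl get deployment s3-streamer
kubectl get pods -l app=s3-streamer
```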
You can further run describe commands on your resources to understand more about what’s happening, e.g. kubectl describe pod s3-streamer-78779d7c85-j7clz
What’s important to note here is that there can be multiple pods running for the same deployment, based on the replica count set; this comes in handy for load distribution and canary deployments. Another important point: the application, although running on port 9999, is currently a black box with no cluster endpoint assigned to it. It is not accessible over the cluster network and can only be reached, for now, by port-forwarding the pod or deployment directly. Each pod gets its own random IP, which we can’t rely on because pods are volatile.
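As a sketch of the port-forwarding workaround mentioned above (the local port 9999 is an arbitrary choice) -

```shell
# Forward local port 9999 to port 9999 of one pod of the deployment;
# the app is then reachable at http://localhost:9999/ while this runs
kubectl port-forward deployment/s3-streamer 9999:9999
```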
What is a Service?
As noted above, the deployed pods are not accessible over the cluster network because they have no known IP assigned (only random IPs from your cluster’s CIDR block). This is exactly the problem a service solves for you. It assigns a stable cluster DNS endpoint to your deployment, and all the pods running for that particular deployment are load-balanced under that single DNS name in a round-robin fashion. All of this networking is taken care of by the kube-proxy agent on your cluster.
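A minimal service manifest matching the details in this post (name s3-streamer, selector app=s3-streamer, port 9999); the ClusterIP type is the default and is spelled out here only for clarity -

```yaml
apiVersion: v1
kind: Service
metadata:
  name: s3-streamer
spec:
  type: ClusterIP          # internal-only endpoint (the default)
  selector:
    app: s3-streamer       # must match the pod labels from the deployment
  ports:
    - port: 9999           # port exposed on the service DNS name
      targetPort: 9999     # port the container listens on
```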
To apply this service run the following command in your OS terminal -
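Assuming the manifest was saved as service.yaml (the filename referenced later in this post) -

```shell
kubectl apply -f service.yaml
```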
And see a similar output -
service/s3-streamer created
Then you can run commands to get or describe your service to see the status -
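A minimal version of those commands -

```shell
kubectl get service s3-streamer
kubectl describe service s3-streamer
```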
Now this service is configured with a ClusterIP, and the application is available inside the cluster at http://service-name:port-number/, i.e. http://s3-streamer:9999/. We can also get an external IP attached, but we will cover that some other time.
The bridge in between
The deployment is an independent resource, and so is the service, so we need to make sure the two are intertwined. This is taken care of by the label in deployment.yaml and the selector in service.yaml. Through the selector app=s3-streamer in the service YAML, the service knows it needs to load-balance across all the pods carrying that same label.
To test this, we will create a throw-away pod from a busybox Docker image that has the curl utility installed, and drop straight into its shell -
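A sketch of such a throw-away pod; radial/busyboxplus:curl is one publicly available busybox image with curl baked in, and the pod name throwaway is arbitrary -

```shell
# --rm deletes the pod on exit; --restart=Never makes it a bare pod
kubectl run throwaway --rm -it --restart=Never \
  --image=radial/busyboxplus:curl -- sh
```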
Then run the curl command, keep the output handy, and exit the pod -
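The exact request path depends on the s3-streamer API, so the bucket/key style URL below is an assumption; only the host and port come from this post -

```shell
# From inside the throw-away pod's shell; hypothetical path with bucket abc, key xyz
curl -i "http://s3-streamer:9999/abc/xyz"
# then leave the pod
exit
```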
This resulted in a 400 status error because the service was looking for an AWS S3 bucket abc with object key xyz, which does not exist - but it confirms that we were able to reach the service and trigger the communication.
To verify that the s3-streamer service was invoked via the DNS name assigned on the local cluster network from inside our throw-away pod, we can check the logs generated by the s3-streamer pod, where the incoming request is recorded -
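The original log output is not reproduced here, but it can be retrieved with -

```shell
# Streams logs from one pod of the deployment
kubectl logs deployment/s3-streamer
```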
You can refer to the YAMLs on my GitHub repo as well -> Kubernetes deployment and service example yaml.
Next, I will write about how to set up these manifests for better configuration management, so the same application can be deployed across multiple environments (DEV, Staging, QA, Prod) with Kubernetes and Kustomize.