April 14, 2023 10:00 am GMT

Kubernetes-101: Services

We have encountered Pods, the lowest-level object where our containers live. We have encountered Deployments, which are abstractions on top of ReplicaSets, which in turn create a number of replicas of our Pods. These resources belong to the category of workload resources. Today we will encounter Services, which belong to the category of network resources.

Each Pod that we create in our cluster is assigned an IP-address that is available internally in our cluster. If we set up a Deployment with three replicas of a Pod, each Pod is assigned its own IP-address. How can we load balance traffic between our three Pods? One way would be to use our favorite third-party load balancer tool1, and add the IP-addresses of our Pods to the load balancer. Then we would expose the load balancer to the internet. What happens if one of our Pods is replaced by a new Pod? The new Pod would have a new IP-address, and we would need to provide this new IP-address to our load balancer to make sure the new Pod also receives traffic. This sounds tedious!

This is where the Service resource comes to the rescue. Let us dig deeper into what a Service is and how it can help us in this situation!

Services

What is a Service in Kubernetes? To begin answering that question I will cite the official documentation on this topic2:

An abstract way to expose an application running on a set of Pods as a network service.

This definition tells us two things:

  1. what the purpose of a Service is, and
  2. where the name Service comes from.

When you first encounter the Service resource type in Kubernetes it is easy to mix it up with the term service from microservices or Azure service or something similar. Here we are talking about a network service.3

In the introduction I mentioned that it could quickly become tedious to keep track of all the IP-addresses for our Pods. Each time a Pod is started or terminated we must make sure to either add or remove it from our load balancer. The Service resource simplifies this work for us by being the one who keeps track of the IP-addresses of our Pods. An illustration of what this means is shown below:

[Image: the Service keeps an up-to-date list of the IP-addresses of our Pods]

So the Service basically keeps a current up-to-date list of IP-addresses of Pods. Which Pods? We'll get the answer to that question in the next section.

Declaratively creating a simple Service

Without further ado, here is a basic Kubernetes manifest for a Service:

```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    tier: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```

A few things in this manifest should be pointed out:

  • As with all the manifests we have seen so far, this one has an .apiVersion, a .kind, a name in .metadata.name, and a specification in .spec
  • In .spec there are two sections that make the Service work
    • .spec.selector specifies the labels (key-value pairs) that the Service uses to know which Pods to send traffic to; in this case there is one label, tier: web
    • .spec.ports specifies the listener details for this Service, i.e. it listens on protocol: TCP, port: 80, and it will send this traffic to the targeted pods on targetPort: 80 (if the targetPort is the same as the port, you can just specify port and leave out targetPort)
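As a sketch of that shorthand, the same Service could be written without targetPort, since it defaults to the same value as port:

```yaml
# service.yaml (shorthand variant)
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    tier: web
  ports:
    - protocol: TCP
      port: 80 # targetPort defaults to the same value as port
```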

So now we have a Service, how do we know which Pods it will send traffic to?

Any Pod that has the labels (the key-value pairs) specified in the .spec.selector in the Service will be a target for the traffic coming into the Service. This is illustrated in the following image:

[Image: Pods whose labels match the Service's selector receive traffic from the Service]

In this example we have a single label, but we could use several labels to match with if we wish.
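As a sketch of matching on several labels (the app: nginx label here is a made-up example), a Pod must carry all of the listed labels to be targeted:

```yaml
# service.yaml (selector with two labels)
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    tier: web
    app: nginx # a Pod must have both labels to receive traffic
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```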

In the next section we will create a Service with a corresponding Deployment and Pods, but here we will create a Service in isolation to see how to inspect it using kubectl. We begin by creating the Service using kubectl apply:

```shell
$ kubectl apply -f service.yaml
service/nginx-service created
```

Next we list all of our Services:

```shell
$ kubectl get services
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP   5d20h
nginx-service   ClusterIP   10.105.138.93   <none>        80/TCP    15s
```

We can see our new nginx-service in the list, but there is also a Service named kubernetes. The number of Services you see in this list might vary depending on what type of Kubernetes cluster you are using. I am currently using a local Minikube cluster4.

As with many other Kubernetes objects, there is a short form for services, which is svc, so the previous command could have been shortened to kubectl get svc5.

We can also describe a given Service using kubectl describe:

```shell
$ kubectl describe service nginx-service
Name:              nginx-service
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          tier=web
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.105.138.93
IPs:               10.105.138.93
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
```

We see a relatively modest list of properties for our Service, but we can recognize the important parts that we specified in our manifest. The property named Type is interesting. Currently its value is set to ClusterIP - what does that mean? We'll explore this in the next section.

Publishing Services

A Service has a Type (or ServiceType). There are a few different types available. The three that I wish to highlight in this article are the following:

  • ClusterIP: exposes your Service with an internal IP-address reachable from within your cluster. This is the default type if you don't specify anything else. This is a good choice if you have an application that should only be available from within the cluster itself. For this type the Service gets a static IP inside of the cluster.
  • LoadBalancer: exposes your Service externally using a compatible cloud provider load balancer. The LoadBalancer type is more advanced as it will create resources in a cloud environment (AWS, Azure, GCP, etc), and that requires a bit of additional work to set up. If you are running your Kubernetes cluster inside of a cloud provider you might find this option interesting. Be aware that if you set up 20 different Services, each of the LoadBalancer type, you might end up with 20 load balancers in your cloud environment, and each load balancer comes with a cost. Thus, in a real production setup you might instead manually set up a single load balancer for your cluster, and then distribute the traffic in a different way. This is an advanced topic and not something we will touch on in this Kubernetes-101 series.
  • NodePort: exposes your Service on a static port on each of your cluster's Nodes (the virtual or physical machines that make up your cluster). All traffic reaching this port on each node will be forwarded to your Service.
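As an aside, by default Kubernetes picks the node port for you from the range 30000-32767, but you can request a specific port with the nodePort field. A minimal sketch (the value 30080 is an arbitrary example):

```yaml
# service.yaml (NodePort with an explicitly requested port)
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    tier: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080 # must be within the cluster's node port range (default 30000-32767)
```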

In this section we will focus on the NodePort type of Service. Let us modify our manifest from before, and add an explicit type to it:

```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort # we added this!
  selector:
    tier: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```

For our Service to be useful we will also add a Deployment that will create a few Pods for us. These Pods will be targeted by the Service. To repeat, the important part for the connection to work between a Service and a Pod is that the Pods are labeled with tier: web, because that is what we specified in .spec.selector of our Service. The manifest for the Deployment resource looks like this:

```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    tier: web
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: web
  template:
    metadata:
      labels:
        tier: web # important!
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
```

This manifest looks familiar from the previous parts about Deployments.

Now it is time for a brief intermission where I will mention a few things about manifest file structure. Above I have defined two files, service.yaml and deployment.yaml.

When I run kubectl apply on these files, I could do it in any order because there is no strict dependency between them. Granted, the Service will have nowhere to send traffic if the Deployment does not exist, and if the Deployment exists but the Service does not, there will be no (easy) way to send traffic to the Pods. But Kubernetes will not complain in either situation. However, there is a better way to apply the manifests. Two ways, actually:

  1. Place all related manifests in a directory, e.g. ./application, and then run kubectl apply -f ./application; all manifests in the ./application directory will be applied. This is the best approach for larger applications with several manifests in separate files.
  2. Place all related manifests in the same file, and separate the different manifests from each other by a line containing only ---. This could be the best approach for small applications with at most 2-3 manifests (perhaps with a Deployment and a Service).
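To illustrate method 2, a combined file (say, application.yaml, a hypothetical name) would contain the Service and the Deployment separated by a line with only ---:

```yaml
# application.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    tier: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
# the line below separates the two manifests
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    tier: web
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: web
  template:
    metadata:
      labels:
        tier: web
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
```

A single kubectl apply -f application.yaml would then create both objects.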

Now the intermission is over! I will use method 1 from the list above, and I place my service.yaml and deployment.yaml in a directory ./application. Then I use kubectl apply to create my Service and Deployment in one go:

```shell
$ tree
.
└── application
    ├── deployment.yaml
    └── service.yaml

1 directory, 2 files
$ kubectl apply -f ./application
deployment.apps/nginx-deployment created
service/nginx-service created
```

Do I have a working application now? Not quite. Due to a technicality with Minikube on Mac (I am using a Mac!) I have to perform one more step. Note that this is purely due to limitations with Minikube on a Mac. Minikube is not a production-grade cluster, so we must expect that there are a few limitations. Anyway, what I need to do is run the following command:

```shell
$ minikube service nginx-service --url
http://127.0.0.1:53904
!  Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
```

As the output indicates, I need to keep this terminal window open for this to work. If I visit http://127.0.0.1:53904 in my browser I see the Nginx welcome page.

In a future article we will revisit a Service of type NodePort deployed to a real Kubernetes cluster, and we will see that it works better than on Minikube.

Before we end this section: if we were using a better Kubernetes cluster, how would we know which port our NodePort Service uses? We can run kubectl describe to find out:

```shell
$ kubectl describe service nginx-service
Name:                     nginx-service
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 tier=web
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.104.193.42
IPs:                      10.104.193.42
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  32732/TCP  # <----------- here we can see it!
Endpoints:                172.17.0.3:80,172.17.0.4:80,172.17.0.5:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
```

It seems like port 32732 is the one to use. We can see that this is not the same port that Minikube used, but again, that is due to technicalities with Minikube.

Using a named port

There is a convenient feature in Pods that we have not seen before. I said that I would come back to Pods again whenever we had a reason to do so, didn't I?

We can give an exposed port a name. We can then refer to this name in our Service object. An example of what this looks like is this:

```yaml
# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    tier: web
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
          name: web-port # here we give our port a name
```

Here we have defined a port on our container in .spec.containers[].ports and we have given it the name web-port. The manifest for the Service that uses this named port is shown below:

```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    tier: web
  ports:
    - protocol: TCP
      port: 80
      targetPort: web-port # here we use the named port
```

The Service uses web-port as the targetPort value. This is convenient because if we need to update which port we expose on our Pod, we only update the Pod itself; we do not have to update any other object that references the named port.
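For example (a hypothetical change), if we switched the container to listen on port 8080, only the Pod manifest changes; a Service targeting web-port keeps working without modification:

```yaml
# pod.yaml (only this file changes)
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    tier: web
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 8080 # the port number changed...
          name: web-port # ...but the name stayed the same, so the Service needs no update
```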

Exposing several ports on your Service

In the Service manifests we have seen above, the .spec.ports part has always been a list with one element. Since it is a list, we can intuitively understand that we could expose several ports, not just a single port. This is useful if your Pods expose multiple ports for different reasons. I will not include an example of this, but it is good to keep in mind that it is indeed possible.

Summary

We have learned a lot about Kubernetes Services in this article. We now know the purpose of a Service. We know how to create a Service using a Kubernetes manifest. We found out that there are a few different types of Services, and we looked closer at the NodePort type. We briefly saw that we can give an exposed port a name on a Pod, and refer to this name from a Service. Finally we also realized that a Service could listen on more than one port, if there is a need for this.

In the next article we will introduce the concept of a Namespace. So far we have not mentioned that there is something called a Namespace, but without knowing it we have been using the default Namespace in our Kubernetes cluster. Namespaces are used to separate resources and applications from each other into logical compartments. If Namespaces did not exist we would have a hard time keeping track of our Pods, Deployments, Services, etc, once the number of each type of resource increases.

  1. I have never used anything that could be classified as a third-party load balancer tool, so I am not entirely sure what I am talking about here!

  2. Obtained from https://kubernetes.io/docs/concepts/services-networking/service/ on December 19.

  3. Maybe a network service and a service in a microservice architecture is the same thing? I think that they could be, but in my mind they are different entities.

  4. Read about how to get started with Minikube in the documentation at https://minikube.sigs.k8s.io/docs/

  5. As I've said before, it is common to set up different aliases to shorten these commands even more. Why is this common? Because when you work with Kubernetes you will use kubectl a lot, and then you will find that aliases simplify your life. When working with Kubernetes Services a useful alias could be alias kgs="kubectl get services".


Original Link: https://dev.to/mattiasfjellstrom/kubernetes-101-services-191d
