
The Future Of Kubernetes

Wondering what will happen to a technology once you learn it, live it, and use it daily is something that pops up a lot in a person's mind. Constant questions like "Will this technology stick around?" or "When should I start learning the new thing?" come to mind and throw us for a loop.

With Kubernetes, the question of whether the technology is going to stick around doesn't come up all that much, and for good reason.

In this blog post, you'll learn what I believe the future of Kubernetes is and how you should think about it for the next several years.

Kubernetes Will Be Irrelevant

When thinking about the direction that a platform can go in, I like to think about it with the end in mind. As we all know in technology, there's a beginning, a middle, and an end.

As for the beginning of Kubernetes, I would say that we're still in it. Organizations are still trying to adopt Kubernetes, and once it's adopted, engineers are still trying to make it run their workloads the way they're hoping for.

The middle of Kubernetes will be like all technology middle grounds. It'll be adopted, widely used in production, and just as much of a day-to-day operation as using the UI of a public cloud. Most of the kinks will be worked out and adoption rates will be steady, along with success rates.

At the end, Kubernetes won't be the end of orchestration. It'll just be another platform that was replaced by something else. Think about it like this: Managed Kubernetes Services like Azure Kubernetes Service and AWS EKS already have the tools to put any label on Kubernetes that they want. With Azure Container Apps and EKS Fargate, they could replace Kubernetes and call it something else. How? Because at the end of the day, an orchestration platform is an orchestration platform. It doesn't matter what name it has. The beginning of the end already exists. All that's needed is the name of the platform and a scalable approach.

I do believe that the end of Kubernetes will take a little longer to arrive than it does for typical technology. The reason is that, as popular as Kubernetes is, it's not the easiest platform to manage and deploy. Products and platforms aimed at making Kubernetes easier only started coming out in the past year or so, while Kubernetes hit the shelves in 2014, so as you can see there's a bit of a gap there. It almost feels like Kubernetes just came out, but at the same time, it's been out for a while.

I decided to write this section first because thinking with the end in mind is crucial to understanding how the rest of the future of Kubernetes, before the end, will play out. It's also crucial to understanding what you'll need before the end of Kubernetes.

The platform may become irrelevant, but the doors that the platform opened up for us won't be.

Environment Instead Of Platform

As of right now, Kubernetes is thought of as a mystery box. It's a platform where engineers, based on their experience level, sort of know what's happening inside of it, but not really. Many engineers take to the cloud and deploy Kubernetes workloads without understanding what's happening inside of, for example, the control plane, because they don't have to manage it.

They don't know about the underlying components, which can in turn become a negative experience later on, which we'll talk about in the upcoming sections.

Because of that negative experience, Kubernetes will be thought about less like a platform and more like a datacenter. It'll be thought of as the datacenter of the cloud.

With all of its moving parts, security needs, networking needs, and deployment needs, environments will be built out around the successful deployment of Kubernetes and its workloads.

What's Happening Underneath The Hood

When it comes to Kubernetes, there's a lot of abstraction. If you think about Managed Kubernetes Services like AKS and GKE, you don't have to worry about the control plane. That means you don't have to think about:

  • Scaling the control plane
  • The API server
  • Scheduler
  • Controller Manager
  • Etcd

These are arguably the most important parts of Kubernetes.

With Managed Kubernetes Services, you don't even have the ability to properly learn how all of the components work.
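
That said, the API server does expose health endpoints that work on managed clusters, which is about as close as you can get to peeking inside. A quick sketch (what each provider actually surfaces can vary):

```bash
# Ask the API server for a component-by-component readiness report,
# including checks like etcd connectivity.
kubectl get --raw='/readyz?verbose'

# Liveness of the API server itself.
kubectl get --raw='/livez'
```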

The question that I constantly ask myself is "How does someone troubleshoot if something is going wrong?" For example, let's say you're having a problem with your Kubernetes cluster and it turns out that the Container Network Interface (CNI) is having issues on the control plane, and therefore, Kubernetes worker nodes aren't able to connect.

If you don't know how the CNI works, and that worker nodes and other control planes won't connect to the primary control plane without the CNI working, how can you troubleshoot the problem? How could you know if it's something you did or something that the cloud provider did?
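
As a rough sketch of where you'd start on a problem like that, the usual first stops are the node conditions and the CNI pods themselves (the namespace and label selector depend on which CNI plugin your provider installs; the one below matches Calico):

```bash
# Are the worker nodes registering, and in what state?
kubectl get nodes -o wide

# Node conditions often call out networking directly, e.g.
# "NetworkUnavailable" or "container runtime network not ready".
kubectl describe node <node-name>

# Most CNI plugins run as pods in kube-system; check that they're healthy.
kubectl get pods -n kube-system -l k8s-app=calico-node
```

Without knowing what those outputs should look like on a healthy cluster, though, the commands alone don't get you far.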

Understanding what's going on underneath the hood is crucial for any engineer working with Kubernetes. Otherwise, you're flying blind.

Hybrid Solutions

At the time of writing this, there are four primary hybrid solutions:

  • Azure Stack HCI
  • Google Anthos
  • AWS Outposts
  • EKS Anywhere

They're all more or less doing the same thing at a high level: giving engineers the ability to run on-prem workloads as if they were in the cloud.

You may be thinking to yourself, "Why wouldn't I just run the workloads in the cloud?", and the answer could be one of a million things. Some common scenarios: organizations have legacy apps that they aren't ready to move to the cloud yet, latency concerns, security concerns, the lift and shift would be too great for the reward, or they simply want a combination of both on-prem and the cloud (the hybrid model).

The next question becomes "Why would they use these services then?", and I feel that the answer is the same as why people want to use OpenStack. It's like having a cloud, but on-prem.

For example, let's say you already have containerized applications that are running in the cloud. Perhaps you already know EKS or AKS, and you want to containerize legacy apps and orchestrate them, but you don't want to run them in the cloud. You could run them on-prem, with Kubernetes, the same way you run containerized apps in the cloud. It'll look and feel the same. The only difference is that it'll be on-prem instead of in the cloud. Engineering teams would be interacting with the apps and the interface the exact same way.

There will be organizations, at least for a very long time, that'll always have on-prem workloads. But just because they're on-prem doesn't mean they can't get the benefits of cloud-native.

Serverless Kubernetes

Before Managed Kubernetes Services like AKS, EKS, and GKE, engineers had to create, manage, and update control planes and worker nodes. With AKS, EKS, and GKE, control planes are now managed from a day-to-day perspective by the cloud provider. With those services, an engineer still has to worry about the worker nodes.

The next iteration of this would be worker nodes that aren't managed from a day-to-day perspective by engineers either. That's where Serverless Kubernetes comes into play.

There are a few services out there that do exactly this:

  • GKE AutoPilot
  • AWS Fargate
  • Azure Container Apps

Serverless Kubernetes, as in removing the need to run day-to-day operations for the control planes AND the worker nodes, is as close as we are to not even needing the Kubernetes platform anymore. At this point, the only things that engineers have to think about are Controllers, Custom Resource Definitions, and the Kubernetes API itself.

With Serverless Kubernetes, engineers are truly only worrying about managing their infrastructure with an API.
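
To make that concrete, here's roughly what standing up a GKE Autopilot cluster looks like (the cluster name and region are placeholders); notice that nodes never come up in the conversation:

```bash
# Create an Autopilot cluster: Google manages the control plane AND
# the worker nodes; you only reason about (and pay for) your Pods.
gcloud container clusters create-auto my-autopilot-cluster \
    --region us-central1

# From here on out it's pure Kubernetes API: deploy a workload and let
# the platform provision whatever capacity it needs underneath.
kubectl create deployment nginx --image=nginx
```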

However, there is a catch here. Even though you're not performing the day-to-day operations of the underlying infrastructure, things can still go wrong and you should still know about it. If the Kubernetes API, scheduler, or other components in the Kubernetes cluster are degrading for whatever reason, you need to know for troubleshooting purposes.

If you aren't managing it, it's not like there's anything you can do about it, but you can at least know so you aren't going down a rabbit hole that's out of your control, and so you can let management know what's happening.

It'll still be a waiting game for it to be fixed, but at least you'll have a root cause analysis for the organization.

As of right now, I believe Serverless Kubernetes is still too new and not used in production all that much, which makes how it'll be used an overall future prediction. I believe the primary concerns for organizations will be scale and latency. If the environment is completely out of your hands, you're relying on the underlying automation of these services to scale for you. A few folks at Microsoft, for example, have said that Azure Container Apps is great for testing right now, but not for production, as things like namespace segregation don't exist yet. If issues like that get fixed, Serverless Kubernetes might be a solid contender.

Creating Kubernetes Resources

When Infrastructure-as-Code and Configuration Management started to become more of a reality for engineering teams, the idea was never for developers to work with them unless they really had to. The idea was to keep them more for Sysadmins and infrastructure engineers who needed to create, update, manage, and delete environments.

Then, this idea started to change a bit. We began to see tools like Pulumi and the AWS CDK emerge, which allow you to perform Infrastructure-as-Code with a general-purpose programming language like Go or Python.

Now, engineers can choose whatever language they want to create, update, manage, and delete environments.
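
As a small illustration, here's what a minimal Pulumi program looks like in Go; the bucket is just an example resource, and this assumes Pulumi's AWS provider is configured:

```go
package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v5/go/aws/s3"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Declare an S3 bucket in ordinary Go; Pulumi's engine handles
		// the create/update/delete lifecycle for you.
		_, err := s3.NewBucket(ctx, "example-bucket", nil)
		return err
	})
}
```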

I believe the same will occur for creating and managing Kubernetes resources.

As of right now, the primary method is YAML. However, that's far from the only option available.

When you're interacting with Kubernetes, all you're doing is talking to an API. Because of that, any language that can talk to an API can create resources in Kubernetes.

For example, here's a link to some Go code that I wrote to create a Kubernetes Deployment and Service for an Nginx app.
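
I can't reproduce that exact code here, but a minimal sketch of the Deployment half with the official client-go library looks something like this (the name, namespace, and image are my own illustrative choices):

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load credentials from the local kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	replicas := int32(2)
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "nginx"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"app": "nginx"},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "nginx:1.23",
						Ports: []corev1.ContainerPort{{ContainerPort: 80}},
					}},
				},
			},
		},
	}

	// Everything above is just building a Go struct; this call is the
	// actual API request, the same POST an equivalent YAML apply makes.
	_, err = clientset.AppsV1().Deployments("default").Create(
		context.TODO(), deployment, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```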

The know-how to do this is already there, but not a lot of engineers are doing it. I believe this will become more relevant as people want to move away from YAML.

Dive Deeper

I've touched on this a bit throughout the blog post so far, but I'd like to call it out more directly.

There's far too much abstraction in today's world, and abstraction served a different purpose when it was originally introduced to technology.

Automation and repeatability were supposed to be implemented after the knowledge gained from manual efforts, as a solution for those manual efforts.

Now, platforms, systems, and overall environments are automated and abstracted away without the engineer ever truly understanding what's happening inside of the environment.

Without the underlying knowledge, there's no possible way for an engineer to properly troubleshoot, re-architect, or scale out environments.

I've heard time and time again, "Why do I need to know this if it's abstracted away?" or "Why do I need to know the underlying components of Kubernetes if I'm not working with them?"

The reason is that scaling out large environments, troubleshooting, and figuring out solutions to problems isn't always going to be as easy as clicking a few buttons or running some Terraform code.

There's an ongoing joke in today's engineering world that if folks can't find the answer on StackOverflow in 10 minutes, they say it cannot be done.

Engineers cannot continue to work in this fashion or it will destroy environments.

Kubernetes is no different. Engineers must dive as deep as they can into what's quickly becoming the datacenter of the cloud, just like Sysadmins had to dive deep into system components.

Without truly going deep into a topic, you can never really know that topic at all.

If you're curious about the prerequisites to Kubernetes and how to dive deep into them, check out this blog post.

Kubernetes Adoption and Incubators

For the past year or so, new Kubernetes projects, products, and incubators seem to have been popping up to solve certain problems in the Kubernetes landscape.

Some are focused on RBAC, others are focused on logging, some are focused on making Kubernetes easier, and everything in-between.

The CNCF has been working with a lot of these startups, and it's clear that the CNCF is putting a lot of time into Kubernetes in general, but also into the direction that Kubernetes is going via these products.

At some point, once a lot more of these startups pop up, there will be a tool belt, or some kind of best practices for your environment, made up of the products that end up making it out of the startup phase and into the enterprise phase. Perhaps this will all be under the umbrella of the CNCF or a parent company of sorts for all of the products.

Virtual Machines

This prediction is probably the one that gets the biggest "hmm, I'm not sure about this" reaction, even from me. However, I do believe it makes sense because of legacy applications.

Think about HashiCorp Nomad: it's an orchestrator that does the same thing as Kubernetes, except Kubernetes only supports containerized apps, while Nomad supports any type of app.

For Kubernetes to stay competitive, this will have to be thought about and ultimately figured out.

The good news is, there are solutions that help with this.

KubeVirt provides a platform on which engineers can build apps for both containers and virtual machines in the same environment. Essentially, it allows you to run virtual machines on Kubernetes.
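
To ground that a bit, a virtual machine in KubeVirt is just another custom resource that you apply like any other manifest. This trimmed-down example is modeled on KubeVirt's own demo images, and is a sketch rather than a production config:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 64Mi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
```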

This goes back to the hybrid approach from an earlier section. Some organizations aren't going to want to move everything to the cloud, and to take it a step further, some organizations aren't going to want to containerize certain legacy applications. However, what they will want is the power of Kubernetes to manage legacy applications.

This is where tools like KubeVirt come into play, and I'm certain that others will enter the market as well once Kubernetes is used more in production. If Kubernetes is truly a way to manage infrastructure with an API, that means we should be able to manage all infrastructure with it.

Cluster Management

Last but certainly not least, there's cluster management. With everything from:

  • Hybrid-cloud
  • Multi-cloud
  • Clusters on-prem
  • Clusters in OpenStack
  • All of the different products and solutions

there's a lot of infrastructure flying around all over the place.

Cluster management tools like Rancher and Azure Arc already exist, but they will become more relevant when organizations begin to incorporate multiple Kubernetes clusters. As of right now, fewer than 10% of organizations are running over 50 Kubernetes clusters. Because the number is so low, you have to imagine that a lot of organizations are running a few Kubernetes clusters at most, and that doesn't mean they're all in production.

Once Kubernetes becomes the environment instead of the platform, more and more clusters will be deployed, and cluster management tools won't be an option; they'll be needed. Those cluster management tools will also give you the ability to easily manage the internals of your clusters. For example, managing RBAC across all of your clusters at once instead of managing RBAC per cluster.
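
For contrast, here's what multi-cluster management looks like today without one of those tools: hopping between kubeconfig contexts and repeating every change per cluster (the context names and manifest file below are hypothetical):

```bash
# List every cluster context your kubeconfig knows about.
kubectl config get-contexts

# Switch to one cluster and apply an RBAC change...
kubectl config use-context prod-east
kubectl apply -f rbac-readonly.yaml

# ...then repeat for every other cluster. Tools like Rancher and
# Azure Arc exist to collapse this loop into a single control plane.
kubectl config use-context prod-west
kubectl apply -f rbac-readonly.yaml
```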


Original Link: https://dev.to/thenjdevopsguy/the-future-of-kubernetes-2l0b
