March 2, 2021 04:57 pm GMT

Why Is Kubernetes So Hard?

Introduction

Kubernetes (k8s) has been all the rage for the last few years because application orchestration has become a de facto table-stakes requirement for production workloads running containers. Containerising applications is relatively straightforward, and most DevOps engineers worth their salt can create a few Dockerfiles and build images in a pipeline that are ready to run. But where do you run your Docker containers? And which versions do you deploy? And how do all the containers talk to each other? This is where orchestration comes into play and where a few options are proposed by large vendors. There are two main options available at the time of this writing: Elastic Container Service (ECS) from Amazon Web Services (AWS) and Kubernetes which is offered by all the Infrastructure As A Service (IaaS) providers, including even AWS.

The hope is that orchestration will allow companies to deliver their containerised applications to test and integration environments quickly and painlessly. The ideal scenario is that it just works: you snap your fingers, wait a few minutes, and see your application running in front of you. Ideally, you'd only need to specify the minimum information necessary to run your application: name, framework, dependencies, and so forth, preferably read from existing configuration files you already have available. That's the hope, anyway.

Obviously, from the title, we are focusing on Kubernetes, partly because it is available everywhere and partly because the only other option is ECS, which only proves our thesis that Kubernetes is hard to use: AWS came up with its own solution that is supposed to be easier. But why is Kubernetes so hard to use? How long would it reasonably take to get your application(s) running in K8s? And why isn't it easier?

Kubernetes Infrastructure Is Hard

We always start with the infrastructure, even though most companies wouldn't build their own Kubernetes clusters themselves. If you were ever going to use a managed infrastructure service, K8s would be at the top of the list. I'm sure there are a few people who start up minikube on their laptops and say to themselves, "Wow, this is easy! I can do this myself!" This reminds me of people who start up an Elasticsearch container on their laptop and say, "Wow, we should implement this for our website!" Fast forward to a production launch six months or a year later, and the simple "We can do this ourselves" mantra turns into "I wish we didn't have to do this anymore."

If you were truly going to build your own Kubernetes cluster, you'd need to build all the control plane servers and services on bare metal or Virtual Machines (VMs) from an IaaS of your choice, and then tie them all together with some fancy networking configuration to separate control-plane traffic from container traffic. You'd need to configure and run all of the control plane software, get it all talking to itself, running stably, and monitored properly. Perversely, you'd be orchestrating the containers that orchestrate the application, but without a lot of orchestration! The fancy mirage presented when you run minikube or Docker Desktop on Windows hides all the inception of running a container orchestration system using containers.
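To get a feel for the surface area, here is a minimal sketch of a kubeadm ClusterConfiguration, one common path for bootstrapping your own control plane. The endpoint, version, and subnets below are illustrative placeholders, not recommendations:

```yaml
# cluster-config.yaml -- a minimal kubeadm bootstrap sketch.
# The endpoint, version, and subnet values are placeholders.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.4
controlPlaneEndpoint: "k8s-api.example.com:6443"  # a load balancer in front of your API servers
networking:
  podSubnet: "10.244.0.0/16"     # must agree with the CNI plugin you install separately
  serviceSubnet: "10.96.0.0/12"
etcd:
  local:
    dataDir: /var/lib/etcd       # all cluster state lives here; you own backing it up
```

Even this hides the hard parts: provisioning the machines, the load balancer, the networking plugin, the certificates, and the monitoring are all still on you.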

We haven't even gotten to the complications of setting up ingresses (which are usually just nginx instances) and load balancers that sit on top of or next to the control plane stack. A lot of the time, you'll feel like you are creating a whole infrastructure just for your infrastructure to run on (which isn't unusual, but definitely doesn't feel better than trying to orchestrate things yourself). We also haven't gotten into the Role-Based Access Control (RBAC) rules and network policies that need to be set to support more than a single application or stack running in one cluster. The number of configuration points and server-side setups mounts quickly, and we haven't even started orchestrating applications yet, which is the whole point of the orchestration system we're supposed to be setting up.
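To make the RBAC point concrete, here is a sketch of what scoping a single team to a single namespace looks like. The namespace, group, and role names are made up for illustration; the group itself has to come from whatever authentication system you wire in, which Kubernetes does not provide:

```yaml
# Grant a hypothetical "team-a" group limited rights in one namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: team-a-editor
rules:
  - apiGroups: ["", "apps"]            # "" is the core API group
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: team-a-editor-binding
subjects:
  - kind: Group
    name: team-a                       # supplied by your auth setup, not by k8s
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-editor
  apiGroup: rbac.authorization.k8s.io
```

Now multiply this by every team and namespace, then layer network policies on top.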

And let's suppose that you really do wade out into this deep North Atlantic Ocean of huge waves and death-inducing freezing waters, and build yourself a production-worthy ship that can orchestrate your containers into an actual application. You look back at your calendar and it's been six months or a year since you started, and you're just now deploying a control plane that says, "Hello World!" You think you're successful and you're about to celebrate when you check the releases section on the website, and now you have a new version of k8s to deploy!

I hear what you're saying: "We're a large company and we have lots of DevOps engineers who are in the top decile of engineering talent in the whole world. We can handle all the heavy lifting. You're just a whining, jealous baby." I see you, Datadog and Ticketmaster. (By the way, your accusations of jealousy might be correct. At the end of my good friend Justin Dean's keynote speech, where he shows the slides with all the team members, my picture should have been up there -- but I had left the team two years earlier.) For everyone else: we all just decide not to spend six months or a year trying to build our control plane, start up our IaaS provider's managed service, cross our fingers, and pray.

K8s YAML Ain't Markup Language

If you've skipped ahead and just started up a managed k8s cluster, you're still in for a long and tedious journey wading into a deep sea of confusing YAML. YAML is to text what James Joyce's Finnegans Wake is to English. If you close one eye, use only your left pinky and right thumb to follow some Braille, put your feet into ballet's fifth position, and then recite World War II codes under your breath, then you will easily see that YAML is quite a breeze to comprehend. Once you get the hang of it, it's like riding a bicycle over a frozen lake on centimetre-thin ice with rabid wolves chasing you. It's as easy as trying to crash the Ancient Aliens cocktail party held in Fort Knox on gold smuggling days.

Look, it's not actually that hard, right? Let's say a guy walks up to you on the street. He's a k8s expert and he's going to show you how easy the hello world web service deployment is. The conversation goes like this:

Him: "kind: Deployment"
You: "Oh, I see. Yes, I like it."
Him: "apiVersion: apps/v1beta1"
You: "Uh, okay. Isn't v1beta1 out of date? You can use v1 as of k8s 1.9. It's actually removed in 1.16, but I wonder how many people have never updated."
Him: "Start over."
You: "Wat."
Him: "kind: Deployment"
You: "Stop with the Kinds everywhere!"
Him: "apiVersion: apps/v1"
You: "This again."
Him: "spec:"
You: "Huh??"
Him: "selector:"
You: "No."
Him: "matchLabels:"
You: "Wat."
Him: "app: nginx"
You: "That's nearly the first thing I've understood about this so far."
Him: "spec:"
You: "Again?"
Him: "containers:"
You: "Okay, now we're getting somewhere."
Him: "image: nginx:1.14.2"
You: "Hmm."
Him: "ports:"
You: "Aiiiiieeee."
Him: "containerPort: 80"
You: "I'm going home. I quit. There must be a devops job I can get where I work on [Gatsby blogs](https://www.gatsbyjs.com/ "Gatsby nodejs frontend") all day."
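For the record, the fragments the two of you just stumbled through assemble into the canonical nginx Deployment example from the Kubernetes documentation:

```yaml
apiVersion: apps/v1          # apps/v1beta1 was removed in k8s 1.16
kind: Deployment
metadata:
  name: nginx-deployment
spec:                        # the Deployment's spec...
  replicas: 2
  selector:
    matchLabels:
      app: nginx             # must match the pod template labels below
  template:
    metadata:
      labels:
        app: nginx
    spec:                    # ...which contains the pod template's own spec
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
```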

And that's just trying to read and understand the file. Try reading two k8s YAML examples and then generating one yourself from scratch. Even better, try a daily code kata of writing working, deployable Kubernetes configurations.

I DARE YOU.

Copy-Paste Ain't Code

I'm still chuckling over the previous section. I have to chuckle, because this is the daily pain of my day-to-day existence, and facing that pain directly is like standing in front of a bus being driven by Keanu Reeves on the freeway. The only thing that keeps my nose going back to the grindstone is the realisation that working with Node.js would be worse. The problem is that the Kubernetes docs are pretty good. You copy-paste some hello world examples and the outputs look like they work. You start to get pretty good at using kubectl. You can see vague shapes and outlines in YAML. You're starting to gain confidence that you might be able to do something useful.

"Let's try to move our application into Kubernetes!" you yell into the air as you emerge dripping wet from your bathtub, wrapped only in a towel, like Archimedes sprinting through the streets of Syracuse. "We'll just copy-paste some sections from here and here and put them there and there, and we'll have our app running in no time," you breathlessly explain to your coworkers. "Does it work?!" they excitedly ask. "Not yet. I mean, no. I need to indent the section and remove one piece that is not used in this spec. Then I need to decide if we use a Deployment or a DaemonSet, but it's almost there. I swear!"

First of all, put on some clothes. I'm all for taking a bath while thinking about Kubernetes YAML files, but you need to get dressed afterward. Also, if you drop your MacBook Air into the bath with you, the results can be electrifying. I know. Second, here's a riddle for you: how many YAML files do you think you need to run and deploy your application? It's a good thing some people have ten fingers and ten toes, because that's probably how many you'll need. And they're all related, but not really. You can copy-paste sections around if you're adventurous and gullible, but you have no idea if the sections are compatible. There are only four required top-level fields (apiVersion, kind, metadata, and spec), all of which are gibberish, and everything goes under spec: (including another spec:). Most of the sections are duplicated, but only slightly. They vary microscopically in ways that matter macroscopically.
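To illustrate the "related but not really" problem: exposing a Deployment requires a separate Service whose selector must silently agree with labels defined in a different file. The names below assume a Deployment whose pods are labelled app: nginx, as in the earlier hello world:

```yaml
# service.yaml -- pairs with a Deployment defined in a different file.
# Nothing validates that this selector matches that Deployment's pod labels;
# if they drift apart, traffic just silently goes nowhere.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx         # must match the pod labels in the other file, by hand
  ports:
    - port: 80         # the port the Service listens on
      targetPort: 80   # the port on the pods; easy to get subtly wrong
```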

Copying and pasting is a wonderful art, and I've personally worked my entire adult career that way. I gleefully admit my whole output in life is like a ransom note cut from Stack Overflow and documentation examples. But piecing together this fragile web of text to do what really should be quite simple and obvious is tedious, error-prone, and too trial-and-error-y. It would be much better to express what you want and have that actually emit workable, executable code that produces the result you want: namely, your application running.

All this complaining about YAML is quite amusing, but it's really a symptom of the underlying cause: Kubernetes is so difficult to use because the interface has to be completely rigid. K8s configurations are not living, majestic trees; they are a bunch of dead, chopped wood. They are worse than chopped wood: they are whole petrified forests, vast piles of rocks with the imprint of thousands of years of growth rings preserved in them for millions of years.

No, they are worse than petrified forests! Kubernetes manifests are the punch cards of the twenty-first century. Each YAML file is a collection of holes poked into chopped-up wooden cards that we can't read or understand, that we shove blindly into the kubectl apply -f command, hoping we put them in the correct order and didn't make a single-hole mistake anywhere in the stack. Then, just like with the machines of yesteryear, we try to gain insight into what's happening by squinting at the blinking lights and the obscure output of the ticker tape.

Just as trying to reproduce Mozart or Beethoven on a pianola is tedious, laborious, error-prone, and ultimately unfulfilling, k8s manifests are frozen forever in time, impossible to write expressively, playing the same tune ad infinitum. The reason people still use v1beta1 even though v1 has been available for years is that nobody has generated new k8s configurations since then.

Doctor, Heal Thyself; or Debugging Yourself Is Hard

The great thing about k8s is that when something goes wrong, nobody knows. I can't count the number of times I've deployed something, worked on something else for a few hours, come back, and realised that the deployment had silently failed and nothing ever notified me. The error message was available somewhere: was it in the deployment logs or the pod logs? Is the ingress or the ingress deployment running? Where in the ten or dozens of Kind files did the log entry appear? And the root cause was often some unrelated issue: an errant and invisible whitespace character, not using double quotation marks when I should have, not using single quotation marks when I should have, or getting the blunt end of an indent from a copy-paste three weeks ago.
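One partial defence against the silent failure is to make rollouts fail loudly: readiness probes plus a progress deadline mark a bad rollout as Failed instead of letting it hang forever. This is a fragment of a Deployment spec, with illustrative image names, paths, and timings; it assumes your app serves some health endpoint:

```yaml
# Fragment of a Deployment spec: turn silent rollout failures into loud ones.
spec:
  progressDeadlineSeconds: 120       # mark the rollout Failed if no progress in 2 minutes
  template:
    spec:
      containers:
        - name: app
          image: my-app:1.2.3        # placeholder image
          readinessProbe:            # the pod receives no traffic until this passes
            httpGet:
              path: /healthz         # assumes your app exposes a health endpoint
              port: 8080
            periodSeconds: 5
          livenessProbe:             # restart the container if this keeps failing
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
```

With the deadline set, kubectl rollout status exits non-zero when the rollout stalls, so a pipeline can actually tell you something went wrong.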

There are, of course, tools and techniques and monitoring tools that help out; it's like Elon Musk's Mars orbiter MVP: Does it work? Absolutely! A thousand times, yes! What does it do? Almost anything you want! You have to know what to look for and where to look for it, then you have to know how to figure out what to do about it, then you have to figure out which line or lines in which of the ten or dozens of files to fix, and then you have to know how to fix it.

The other great thing about k8s is that you own the whole thing. Listen: friendo, pal, buddy, you chose this existence. You copy-pasted the code. The documentation examples work. I can run "Hello World!" on my laptop, so it's clearly all on you. You're the one who ran through the office dripping wet in a towel shouting "Kubernetes!" If the Hippocratic oath is "Do no harm," then maybe the DevOps oath is "Do no more harm than that which will get you fired."

And the last great thing about k8s is that there are tons of people and companies who claim to know what is going on and what to do, and they'll gladly take your money to show you whether that's true or not. Type "Kubernetes" into the search engines and see all the ads that pop up. This article is part of the problem, and also the solution, so stay with me.

The Solution, Finally

There are several ways to make Kubernetes easier to use:

  1. Don't use k8s: run, screaming, for your lives
  2. Train all your people to figure it out (come back to me when you're done; I still might be alive. Probably not.)
  3. Hire more people for your team to figure it out (I'm available, hit me up. Ha ha, just kidding.)
  4. Hire someone else to do it for you
  5. Wait longer for results, do more with less, and eventually settle on something that isn't horrible
  6. Find a solution that deploys your applications to environments for you, and get on with your actual business of, well, whatever business it is you actually do. Automation tools and services can help you get your application running without investing in the activities described above. Someone has to do it, but it had better not be you.

At Release, we work tirelessly to bring your application to life through an orchestrated, human interface. We write software to deal with all the complexity, difficulty, and strain so that no one else has to (unless they want to!). We create the engine that drives the Kubernetes vehicle, and we deliver solutions that our customers can use to get on with their business of doing business.

Photo by Chris Chow on Unsplash


Original Link: https://dev.to/rwilsonreleaseapp/why-is-kubernetes-so-hard-i42
