commit ee5822a1038abdf18b46836843dd81ce6433d36c
parent a57609537c49348be858734707341c6fcb6e82e5
Author: Paco Esteban <paco@e1e0.net>
Date: Fri, 8 May 2020 19:50:45 +0200
new article about managing k8s from OpenBSD
Diffstat:
3 files changed, 182 insertions(+), 1 deletion(-)
diff --git a/src/gophermap b/src/gophermap
@@ -12,6 +12,7 @@ Sometimes I write things so I do not forget ...
o--o-- Random (usually tech) stuff
+0Manage Kubernetes clusters from OpenBSD /manage-k8s-from-openbsd.md.txt e1e0.net 70
0Split window on current working directory on tmux /tmux-splitw-on-current-folder.md.txt e1e0.net 70
0upsc (NUT) Prometheus exporter /upsc-prometheus-exporter.md.txt e1e0.net 70
0Notes on Vim register /vim-registers.md.txt e1e0.net 70
@@ -48,5 +49,5 @@ Have any comments ?
Send an email to <comments@e1e0.net>
o- o -- -------------------------------------------------------- -- o --
-Last updated: Sat, 14 Mar 2020 22:14:43 +0000
+Last updated: Fri, 08 May 2020 17:49:00 +0000
o- o -- -------------------------------------------------------- -- o --
diff --git a/src/index.html b/src/index.html
@@ -1,4 +1,5 @@
<ul>
+<li><a href="/manage-k8s-from-openbsd.html" title="2020-05-08">Manage Kubernetes clusters from OpenBSD</a></li>
<li><a href="/tmux-splitw-on-current-folder.html" title="2020-03-14">Split window on current working directory on tmux</a></li>
<li><a href="/upsc-prometheus-exporter.html" title="2020-01-17">upsc (NUT) Prometheus exporter</a></li>
<li><a href="/vim-registers.html" title="2019-12-06">Notes on Vim registers</a></li>
diff --git a/src/manage-k8s-from-openbsd.md b/src/manage-k8s-from-openbsd.md
@@ -0,0 +1,179 @@
+# Manage Kubernetes clusters from OpenBSD
+2020-05-08
+
+_This should work with OpenBSD `6.7`. I write this while the source tree is
+locked for release, so even though I use `-current`, this is as close as
+`-current` gets to `-release`._
+
+## Intro
+
+Some of us have to suffer the pain of trendy tech and buzzwords even when
+they do not provide much benefit. But hey ! we have to be cool kids
+playing with cool tech, right ?
+
+Nowadays, it's containers _all the way down_. As I like to say, this solves
+some problems and brings others, but I digress and this can become a rant
+quicker than you think.
+
+In this article I want to talk about how I manage work infrastructure
+(all cloudy and containery) from the comfort of my OpenBSD-current workstation.
+
+## Objective
+
+Before I tried all this I had a Linux VM running on `vmd(8)` so I could have
+all the command line tools to work with Google Cloud Platform (from now on
+`gcp`) and Google Kubernetes Engine (from now on `gke`), which are the cloudy
+and containery providers we use at work.
+
+My goal was to have all the needed tools working on OpenBSD so I do not have to
+fire up the VM, and avoid the hassle of moving YAML files around.
+
+In my case I need those cli tools:
+
+* `gcloud`: Google Cloud SDK, for managing Compute Engine VMs, Cloud Storage
+ buckets, etc.
+* `kubectl`: to manage Kubernetes stuff directly.
+* `kustomize`: [This one][1] allows you to have a base configuration for the
+ Kubernetes YAML definitions and overlays that can modify that base. In our
+ case for different environments.
+* `fluxctl`: We have all those YAMLs on a git repository, and [flux][2] makes
+  it the _source of truth_. Before this, there were always sync problems
+  between what was on the repo and what was actually deployed on the
+ Kubernetes cluster.
+* `kubeseal`: We use [sealed secrets][3] to store sensitive data on the
+ repository.
+
+Luckily, there's a port for Google Cloud SDK, and the others are written in Go,
+and can be compiled for OpenBSD (with some tweaks).
+
+## Google Cloud SDK
+
+This is not the most used tool for me, but it is essential as it provides
+authentication for all the others.
+As I said, there's a port for it, so installing it is as simple as:
+
+ $ doas pkg_add google-cloud-sdk
+
+After that one needs to log in. Execute this command and follow the
+instructions:
+
+ $ gcloud init
+
+More info [here][4].
+
+If you manage more than one Google Cloud Project (as I do), the configuration
+files are placed in `~/.config/gcloud/configurations/`.
+
+You'll see there's a `config_default` file. You can copy that to
+`config_whatever` and edit the file (it's in _ini_ format) to fit your needs.
+Later on you can change projects with:
+
+ $ gcloud config configurations activate whatever
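+
+For reference, you can list the configurations you have and check which
+one is active with:
+
+```
+# list all configurations, the active one is flagged under IS_ACTIVE
+gcloud config configurations list
+# show the properties of the currently active configuration
+gcloud config list
+```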
+
+## kubectl
+
+There's no port for `kubectl` (yet, if you want to step in, I promise to test
+it, give feedback and maybe even commit it !), but it can be compiled and
+installed manually.
+
+I assume that you have a Go environment working.
+
+At first I tried to go the easy route, as some devs (_abieber@ and kn@_) told
+me that it was working; maybe this does the trick for you:
+
+ $ go get -u github.com/kubernetes/kubernetes/cmd/kubectl
+
+Unfortunately it did not for me. I had to delete some old stuff on
+`$GOPATH/src` that I think was outdated and the `-u` flag did not handle
+correctly for some reason. After that it compiled and installed perfectly on
+`$GOPATH/bin`. If you do not use `gke` as a provider you're all set here, but
+(there's always a but) after getting the credentials (more on that later) I
+got this error:
+
+ error: no Auth Provider found for name "gcp"
+
+For some reason it seems the auth provider I need fails to compile and gives no
+error at all.
+
+So, to solve this I took a peek at the FreeBSD port to see how they do things.
+Long story short, I downloaded the stable version they use in the port and used
+the same parameters they use to compile. Basically get the source tarball for
+`1.18.2` (at the time of writing), then go to `kubernetes-1.18.2/cmd/kubectl`
+and compile with those options:
+
+```
+go build -ldflags="\
+    -X k8s.io/component-base/version.gitMajor=1 \
+    -X k8s.io/component-base/version.gitMinor=18 \
+    -X k8s.io/component-base/version.buildDate=$(date +'%Y-%m-%dT%H:%M:%SZ') \
+    -X k8s.io/component-base/version.gitCommit=\"\" \
+    -X k8s.io/component-base/version.gitVersion=v1.18.2 \
+    -X k8s.io/client-go/pkg/version.gitVersion=v1.18.2"
+```
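+
+Put together, the whole dance looks roughly like this (I assume the GitHub
+archive tarball here; adjust the version to taste):
+
+```
+ftp https://github.com/kubernetes/kubernetes/archive/v1.18.2.tar.gz
+tar xzf v1.18.2.tar.gz
+cd kubernetes-1.18.2/cmd/kubectl
+# run the long `go build` invocation from above here, then install
+install -m 755 kubectl ~/go/bin/
+```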
+
+I have the impression that the only one needed is the last `-X`, but I
+couldn't be bothered checking further. So one can get the configuration for
+the auth
+provider as usual right ?
+
+ gcloud container clusters get-credentials my-cluster-name
+
+Wrong. For some reason this does not work. The error message urges you to use
+_"application default credentials"_, so a couple more steps are needed:
+
+ gcloud config set container/use_application_default_credentials true
+ gcloud auth application-default login
+
+And now finally `kubectl` is working. You'll have to repeat these last 3
+steps if you have more than one project or cluster to manage.
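+
+A quick sanity check that both the binary and the credentials work (your
+cluster output will differ, obviously):
+
+```
+kubectl version --client
+kubectl get nodes
+```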
+
+## kustomize
+
+If you have to suffer Kubernetes and don't know about [kustomize][5], take
+a look, you'll thank me later.
+
+It's out of the scope of this article to explain what it is and how to use it
+(which is a fancy way of saying RTFM).
+
+There's no port for this one either, but it's really easy, just "go get" it:
+
+ GO111MODULE=on go install sigs.k8s.io/kustomize/kustomize/v3
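+
+As a teaser, a typical run renders an overlay and pipes it straight into
+`kubectl` (the `overlays/production` layout is just an example):
+
+```
+kustomize build overlays/production | kubectl apply -f -
+```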
+
+## fluxctl
+
+Again, no port for this one either. I had to use the same technique as with
+`kubectl` because the "go get" was failing with a type mismatch in one of
+the dependencies, `k8s.io/client-go/transport/round_trippers.go`.
+
+I took a quick look at the code, but the offending lines were there since 2016,
+so I avoided the potential rabbit hole and went for the easy ride.
+
+Download the latest tarball (`1.19.0` at the time of writing), go to
+`flux-1.19.0/cmd/fluxctl` and then `go build`.
+
+That went flawlessly.
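+
+For completeness, the whole thing is something like this (again assuming the
+GitHub archive tarball):
+
+```
+ftp https://github.com/fluxcd/flux/archive/1.19.0.tar.gz
+tar xzf 1.19.0.tar.gz
+cd flux-1.19.0/cmd/fluxctl
+go build
+install -m 755 fluxctl ~/go/bin/
+```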
+
+## kubeseal
+
+This one is quite nice for managing sensitive data. It keeps the data on the
+source repo encrypted, and it can only be decrypted by the controller
+installed
+on the Kubernetes cluster. Again, it's out of the scope ... blah blah ...
+
+Really easy one. Just "go get" it and be happy:
+
+ go get -u github.com/bitnami-labs/sealed-secrets/cmd/kubeseal
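+
+A sketch of how it gets used (secret and file names are made up): generate a
+regular Secret manifest and pipe it through `kubeseal`, committing only the
+sealed output:
+
+```
+kubectl create secret generic my-secret \
+    --dry-run=client --from-literal=password=hunter2 -o yaml \
+    | kubeseal --format yaml > my-sealed-secret.yaml
+```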
+
+## Conclusion
+
+And finally, I can use all those _wonderful_ commands to manage that
+_fantastic_ infrastructure from OpenBSD.
+
+To be honest, they do at least a good job of working with each other and
+with other classic tools, which means they play quite nicely with the
+pipeline/redirection composition style of the shell.
+
+I really doubt that there are many OpenBSD users managing Kubernetes clusters
+there, but maybe this could be useful to somebody.
+
+_Have any comments ? Send an email to the [comments address][999]._
+
+[1]: https://kustomize.io/
+[2]: https://fluxcd.io/
+[3]: https://github.com/bitnami-labs/sealed-secrets
+[4]: https://cloud.google.com/sdk/docs/quickstart-linux#initialize_the_sdk
+[5]: https://kustomize.io
+[999]: mailto:comments@e1e0.net?Subject=Manage%20Kubernetes%20from%20OpenBSD