Title: Manage Kubernetes clusters from OpenBSD
Author: paco
Date: 2020-05-08
Type: article

_This should work with OpenBSD `6.7`.  I'm writing this while the source tree
is locked for release, so even though I use `-current`, this is as close as
`-current` gets to `-release`._

_Update 2020-06-05: we now have [a port for kubectl][6].  So, at least in
`-current`, things get a bit easier._

## Intro

Some of us have to suffer the pain of trendy tech and buzzwords even when
they do not provide much benefit.  But hey! we have to be cool kids playing
with cool tech, right?

Nowadays, it's containers _all the way down_.  As I like to say, this solves
some problems and brings others, but I digress, and this can become a rant
quicker than you think.

In this article I want to talk about how I manage work infrastructure (all
cloudy and containery) from the comfort of my OpenBSD-current workstation.

## Objective

Before I tried all this I had a Linux VM running on `vmd(8)` so I could have
all the command line tools to work with Google Cloud Platform (from now on
`gcp`) and Google Kubernetes Engine (from now on `gke`), which are the cloudy
and containery providers we use at work.

My goal was to have all the needed tools working on OpenBSD so I do not have
to fire up the VM, and to avoid the hassle of moving YAML files around.

In my case I need these CLI tools:

* `gcloud`: Google Cloud SDK, for managing Compute Engine VMs, Cloud Storage
    buckets, etc.
* `kubectl`: to manage Kubernetes stuff directly.
* `kustomize`: [This one][1] lets you have a base configuration for the
    Kubernetes YAML definitions and overlays that can modify that base.  In
    our case, for different environments.
* `fluxctl`: We have all those YAMLs in a git repository, and [flux][2] makes
    it the _source of truth_.  Before this, there were always sync problems
    between what was in the repo and what was actually deployed on the
    Kubernetes cluster.
* `kubeseal`: We use [sealed secrets][3] to store sensitive data in the
    repository.

Luckily, there's a port for the Google Cloud SDK, and the others are written
in Go and can be compiled for OpenBSD (with some tweaks).

## Google Cloud SDK

This is not the tool I use the most, but it is essential as it provides
authentication for all the others.
As I said, there's a port for it, so installing it is as simple as:

    $ doas pkg_add google-cloud-sdk

After that one needs to log in.  Execute this command and follow the
instructions:

    $ gcloud init

More info [here][4].

If you manage more than one Google Cloud project (as I do), the configuration
files are placed in `~/.config/gcloud/configurations/`.

You'll see there's a `config_default` file.  You can copy that to
`config_whatever` and edit the file (it's in _ini_ format) to fit your needs.
Later on you can change projects with:

    $ gcloud config configurations activate whatever
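
As a sketch, a `config_whatever` could look something like this (account,
project and zone are made up, of course):

    [core]
    account = you@example.com
    project = whatever-project-id

    [compute]
    region = europe-west1
    zone = europe-west1-b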

## kubectl

There's no port for `kubectl` ~~(yet, if you want to step in, I promise to
test it, give feedback and maybe even commit it!)~~ on `6.7`, but it can be
compiled and installed manually.  We have [a port now on -current][6] thanks
to Karlis Mikelsons and _@kn_.

I assume that you have a working Go environment.

At first I tried to go the easy route, as some devs (_abieber@ and kn@_) told
me that it was working; maybe this does the trick for you:

    $ go get -u github.com/kubernetes/kubernetes/cmd/kubectl

Unfortunately it did not work for me.  I had to delete some old stuff in
`$GOPATH/src` that I think was outdated and that the `-u` did not handle
correctly for some reason.  After that it compiled and installed perfectly
into `$GOPATH/bin`.  If you do not use `gke` as a provider you're all set
here, but (there's always a but) after getting the credentials (more on that
later) I got this error:

    error: no Auth Provider found for name "gcp"

For some reason it seems the auth provider I need fails to compile and gives
no error at all.

So, to solve this I took a peek at the FreeBSD port to see how they do
things.  Long story short, I downloaded the stable version they use in the
port and used the same parameters they use to compile.  Basically, get the
source tarball for `1.18.2` (at the time of writing), then go to
`kubernetes-1.18.2/cmd/kubectl` and compile with these options:

```
go build -ldflags="-X k8s.io/component-base/version.gitMajor=1 \
-X k8s.io/component-base/version.gitMinor=18 \
-X k8s.io/component-base/version.buildDate=$(date +'%Y-%m-%dT%H:%M:%SZ') \
-X k8s.io/component-base/version.gitCommit=\"\" \
-X k8s.io/component-base/version.gitVersion=v1.18.2 \
-X k8s.io/client-go/pkg/version.gitVersion=v1.18.2"
```
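
For reference, fetching and unpacking the source first goes something like
this (the exact tarball URL is an assumption on my part, grab the `1.18.2`
sources from wherever you prefer):

    $ ftp -o k8s-1.18.2.tar.gz https://github.com/kubernetes/kubernetes/archive/v1.18.2.tar.gz
    $ tar xzf k8s-1.18.2.tar.gz
    $ cd kubernetes-1.18.2/cmd/kubectl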

I have the impression that the only one needed is the last `-X`, but I
couldn't be bothered to check further.  So one can get the configuration for
the auth provider as usual, right?

    gcloud container clusters get-credentials my-cluster-name

Wrong.  For some reason this does not work.  The error message urges you to
use _"application default credentials"_, so a couple more steps are needed:

    gcloud config set container/use_application_default_credentials true
    gcloud auth application-default login

And now, finally, `kubectl` is working.  You'll have to repeat these last
three steps if you have more than one project or cluster to manage.
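
To check that everything is in place, something like this should do (assuming
your kubeconfig now points at the `gke` cluster):

    $ kubectl version --client
    $ kubectl get nodes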

## kustomize

If you have to suffer Kubernetes and don't know about [kustomize][5], take
a look; you'll thank me later.

It's out of the scope of this article to explain what it is and how to use it
(which is a fancy way of saying RTFM).

There's no port for this one either, but it's really easy, just "go get" it:

    GO111MODULE=on go install sigs.k8s.io/kustomize/kustomize/v3
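
Just as a teaser, the usual flow is to render an overlay and pipe it straight
into `kubectl` (the directory name here is hypothetical):

    $ kustomize build overlays/production | kubectl apply -f -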

## fluxctl

Again, no port for this one.  I had to use the same technique as with
`kubectl`, because the "go get" was failing with a type mismatch in one of
the dependencies, `k8s.io/client-go/transport/round_trippers.go`.

I took a quick look at the code, but the offending lines had been there since
2016, so I avoided the potential rabbit hole and went for the easy ride.

Download the latest tarball (`1.19.0` at the time of writing), go to
`flux-1.19.0/cmd/fluxctl` and then `go build`.
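
In shell terms, something like this (I'm assuming the GitHub release tarball
here):

    $ ftp -o flux-1.19.0.tar.gz https://github.com/fluxcd/flux/archive/1.19.0.tar.gz
    $ tar xzf flux-1.19.0.tar.gz
    $ cd flux-1.19.0/cmd/fluxctl
    $ go build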

That went flawlessly.

## kubeseal

This one is quite nice for managing sensitive data.  It keeps the data in the
source repo encrypted, and it can only be decrypted by the controller
installed on the Kubernetes cluster.  Again, it's out of the scope ... blah
blah ...

A really easy one.  Just "go get" it and be happy:

    go get -u github.com/bitnami-labs/sealed-secrets/cmd/kubeseal
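
The day-to-day use looks something like this: generate a regular Secret
manifest without applying it, seal it, and commit only the sealed version
(the secret name and literal are made up):

    $ kubectl create secret generic mysecret --dry-run=client -o json \
          --from-literal=password=hunter2 \
          | kubeseal --format yaml > mysealedsecret.yaml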

## Conclusion

And finally, I can use all those _wonderful_ commands to manage that
_fantastic_ infrastructure from OpenBSD.

To be honest, they at least do a good job of working with each other and with
other classic tools, which means they play quite nicely with the
pipeline/redirection composition ways of the shell.
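
For instance, chaining them with the usual suspects works as expected (the
deployment name is hypothetical):

    $ kubectl logs deploy/myapp | grep -i error | tail -n 20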

I really doubt that there are many OpenBSD users managing Kubernetes clusters
out there, but maybe this could be useful to somebody.

[1]: https://kustomize.io/
[2]: https://fluxcd.io/
[3]: https://github.com/bitnami-labs/sealed-secrets
[4]: https://cloud.google.com/sdk/docs/quickstart-linux#initialize_the_sdk
[5]: https://kustomize.io/
[6]: https://marc.info/?l=openbsd-ports-cvs&m=159136413409093&w=2