How to ssh into a Kubernetes container.

Intro

ssh is a fundamental, extremely useful and orthogonal system administration tool that integrates well into existing toolchains. It’s also useful in combination with Kubernetes.

In the following I’ll assume that:

  • you do not intend to throw away your tools just to stay kubernetically pure
  • you only want to start the ssh server on demand
  • one set of ssh host keys gets generated per pod; every container in the pod shares the same keys.

Howto

First, let’s add the ssh server to your Docker image.

RUN dnf -y --setopt=install_weak_deps=False install \
        openssh-server \
    && sed --in-place 's/#ListenAddress ::/#ListenAddress ::\nListenAddress localhost/' /etc/ssh/sshd_config \
    && dnf clean all

Note that above I’m setting up sshd to listen on localhost only - we’ll reach it through kubectl port-forward later, so there is no need to expose it on the pod’s IP.
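
Rebuild and push the image as usual, for example (using the same image URL placeholder as in the pod spec further down):

$ docker build -t your-image-URL-here .
$ docker push your-image-URL-here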

Create the ssh secrets. I create two different secrets:

  • one for ssh ingress (secrets for incoming connections)
  • one for ssh egress (secrets for outgoing connections)

Since I keep those secrets encrypted with ansible-vault, I’d do:

$ ansible-vault edit ssh-egress.yaml.vault
$ ansible-vault edit ssh-ingress.yaml.vault
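
If those vault files don’t exist yet, they can be created (encrypted from the start) with ansible-vault create:

$ ansible-vault create ssh-egress.yaml.vault
$ ansible-vault create ssh-ingress.yaml.vault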

… and enter the secrets. First the egress secret:

apiVersion: v1
kind: Secret
metadata:
  name: ssh-egress
type: Opaque
stringData:
  # Generated with ssh-keygen -b 2048 -t rsa -f id_rsa -q -N ''
  id_rsa: |-
    -----BEGIN RSA PRIVATE KEY-----
    Your secret key here
    -----END RSA PRIVATE KEY-----    
  id_rsa.pub: |-
    ssh-rsa your public key here    
  known_hosts: |-
    |1|in case you want to have prepared known_host entries...
    |1|then log into them from your own machine and copy those...
    |1|entries over here.    

… and the ingress secret:

apiVersion: v1
kind: Secret
metadata:
  name: ssh-ingress
type: Opaque
stringData:
  authorized_keys: |-
    # That's Joe's key
    ssh-rsa AAAAB... your public key(s) here    
  #
  # generated with: ssh-keygen -N '' -t rsa
  ssh_host_rsa_key: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    key contents here
    -----END OPENSSH PRIVATE KEY-----    
  ssh_host_rsa_key.pub: |
    ssh-rsa AAAA... key contents here    
  #
  # generated with: ssh-keygen -N '' -t ecdsa
  ssh_host_ecdsa_key: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    key contents here
    -----END OPENSSH PRIVATE KEY-----    
  ssh_host_ecdsa_key.pub: |
    ecdsa-sha2-nistp256 AAAA... key contents here    
  #
  # generated with: ssh-keygen -N '' -t ed25519
  ssh_host_ed25519_key: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    key contents here
    -----END OPENSSH PRIVATE KEY-----    
  ssh_host_ed25519_key.pub: |
    ssh-ed25519 AAAA... key contents here    

Note that my sshd will not load keys that lack a trailing newline. Therefore I’m using the somethingkey: | notation (which keeps the final newline) instead of the somethingkey: |- notation (which strips it) here.
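
If you want to generate all of that key material in one go before pasting it into the vaulted files, something along these lines works (a sketch; the ssh-keygen calls mirror the comments above, and /tmp/ssh-secrets is just a throwaway directory):

$ mkdir -m 700 /tmp/ssh-secrets && cd /tmp/ssh-secrets
# egress key pair
$ ssh-keygen -b 2048 -t rsa -f id_rsa -q -N ''
# ingress host keys
$ ssh-keygen -N '' -t rsa -f ssh_host_rsa_key
$ ssh-keygen -N '' -t ecdsa -f ssh_host_ecdsa_key
$ ssh-keygen -N '' -t ed25519 -f ssh_host_ed25519_key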

Now apply the secrets:

$ ansible-vault view ssh-ingress.yaml.vault | kubectl apply -f -
$ ansible-vault view ssh-egress.yaml.vault  | kubectl apply -f -
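
To double-check that both secrets made it into the cluster:

$ kubectl get secret ssh-egress ssh-ingress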

And use those secrets in a pod:

---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - image: your-image-URL-here
    name: my-pod
    # https://github.com/kubernetes/kubernetes/issues/34982
    lifecycle:
      postStart:
        exec:
          command:
          - bash
          - -c
          - mkdir /root/.ssh && chmod go-rwx /root/.ssh && cp -aL /root/.ssh-egress/* /root/.ssh && cp -aL /root/.ssh-ingress/authorized_keys /root/.ssh && cp -aL /root/.ssh-ingress/ssh_host_*_key* /etc/ssh/
    volumeMounts:
    - name: ssh-ingress
      # https://github.com/kubernetes/kubernetes/issues/34982
      mountPath: "/root/.ssh-ingress"
      readOnly: true
    - name: ssh-egress
      mountPath: "/root/.ssh-egress"
      readOnly: true
  imagePullSecrets:
  - name: regcred-your-handle-here
  volumes:
  - name: ssh-egress
    secret:
      secretName: ssh-egress
      # 0400 -> 256
      defaultMode: 256
      items:
      - key: id_rsa
        path: id_rsa
      - key: id_rsa.pub
        path: id_rsa.pub
      - key: known_hosts
        path: known_hosts
  - name: ssh-ingress
    secret:
      secretName: ssh-ingress
      # 0400 -> 256
      defaultMode: 256
      items:
      - key: authorized_keys
        path: authorized_keys
      - key: ssh_host_rsa_key
        path: ssh_host_rsa_key
      - key: ssh_host_rsa_key.pub
        path: ssh_host_rsa_key.pub
      - key: ssh_host_ecdsa_key
        path: ssh_host_ecdsa_key
      - key: ssh_host_ecdsa_key.pub
        path: ssh_host_ecdsa_key.pub
      - key: ssh_host_ed25519_key
        path: ssh_host_ed25519_key
      - key: ssh_host_ed25519_key.pub
        path: ssh_host_ed25519_key.pub

Apply that pod config and start sshd in the pod:
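
Assuming you saved the pod manifest above as my-pod.yaml (the filename is just an example):

$ kubectl apply -f my-pod.yaml

Once the pod is running, you can check that the postStart hook copied the key material into place:

$ kubectl exec my-pod -- ls -l /root/.ssh /etc/ssh

Then start sshd on demand: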

$ kubectl exec my-pod -- /usr/sbin/sshd

Port-forward a local port of your choice to that pod:

$ kubectl port-forward my-pod 2222:22
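
If you don’t care which local port gets used, recent kubectl versions can also pick a free one for you (note the leading colon):

$ kubectl port-forward my-pod :22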

Create a hosts alias for that pod (so that ssh won’t complain about changed host keys, as it would if you connected to localhost directly):

$ cat /etc/hosts | grep my-pod
127.0.0.1       localhost my-pod
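
Alternatively, if you’d rather not touch /etc/hosts, an entry in ~/.ssh/config achieves much the same thing (a sketch; HostKeyAlias keeps the pod’s host key separate from anything else you reach via localhost):

Host my-pod
    HostName localhost
    Port 2222
    User root
    HostKeyAlias my-pod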

Connect to the pod via ssh:

$ ssh root@my-pod -p 2222
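
Since the point of all this is reusing your existing tools, anything else that speaks ssh now works as well - for example copying a file into the pod (some-file is just a placeholder):

$ scp -P 2222 some-file root@my-pod:/tmp/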

You may want to kill sshd once you do not need it any more.
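
For example (assuming pkill from procps is available in the image):

$ kubectl exec my-pod -- pkill -x sshd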

Note that a lot of the trouble here (the postStart copy dance) is because Kubernetes’ way of setting permissions on secret volume mounts seems to be unworkable for ssh’s strict permission checks - see https://github.com/kubernetes/kubernetes/issues/34982 .

Please let me know if there’s progress on that issue and I’ll try to update this article.

Update: the article has been updated so that the ssh host keys are no longer baked into the container image.