Written: 2021-12-26 22:14 +0000
Updated: 2024-08-06 00:53 +0000
TinySSH for Docker Development Environments
Rootless SSH servers with TinySSH for local GitPod containers
Background
act works well enough for most GitHub Actions workflows, until something fails in the pipeline or interactive debugging is required. Naturally this is beyond the purview of act itself, but it is easy enough to set up shop using a project's Gitpod docker image. For the purposes of this post we'll consider numpy.
docker run -v $(pwd):/home/gitpod/f2py_skel --name f2py_npdev -it numpy/numpy-dev:latest
docker attach can be used in tandem with docker stop and docker start, but these have the disadvantage of only providing one synchronous view of the machine.
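For completeness, that workflow looks something like this (reusing the container name from above):
# Pause work, then pick up the same single interactive session later
docker stop f2py_npdev
docker start f2py_npdev
docker attach f2py_npdev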
An SSH server seems like the most natural solution. Rather than cover OpenSSH and its superuser-based configuration, we will instead consider the TinySSH project, which works well without superuser access.
TinySSH
We also need to be able to listen for incoming TCP connections and run tinysshd in response to each connection. This is a picture-perfect scenario for tcpserver, which is part of the ucspi-tcp set of tools.
sudo apt install tinysshd ucspi-tcp # for tcpserver
# On focal it is:
# sudo apt install tinysshd ucspi-tcp-ipv6
Although this assumes sudo access, compiling both ucspi-tcp and tinysshd from source is a remarkably simple process.
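As a rough sketch of such a build (the repository URL, tarball version and install targets below are from memory, so treat them as assumptions and adjust the install prefixes if sudo is off the table):
# TinySSH from source
git clone https://github.com/janmojzis/tinyssh.git
cd tinyssh && make && sudo make install
# ucspi-tcp from source
curl -O https://cr.yp.to/ucspi-tcp/ucspi-tcp-0.88.tar.gz
tar xzf ucspi-tcp-0.88.tar.gz && cd ucspi-tcp-0.88
make && sudo make setup check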
As tinyssh does not implement RSA or other older cryptographic algorithms, it is easiest to make a new key pair on the host machine.
# On the host
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_tiny
ssh-add ~/.ssh/id_ed25519_tiny
# Copy the public key
cat ~/.ssh/id_ed25519_tiny.pub | wl-copy # wayland
# or
cat ~/.ssh/id_ed25519_tiny.pub | xclip -selection clipboard # X11
Since tinyssh supports only key-based authentication, we need to set up authorized_keys correctly.
# In the container (the SSH server side)
mkdir -p $HOME/.ssh
touch $HOME/.ssh/authorized_keys
chmod 700 $HOME/.ssh/
echo $COPIED_KEY >> $HOME/.ssh/authorized_keys
chmod 600 $HOME/.ssh/authorized_keys
Finally, we can set up the server keys and run the server itself on a port which does not require superuser privileges.
# Generate the server key pair into a dedicated directory
tinysshd-makekey $HOME/tinysshkeydir
# -HRDl0: skip DNS/IDENT lookups; listen on all interfaces, port 2200, spawning tinysshd per connection
tcpserver -HRDl0 0.0.0.0 2200 /usr/sbin/tinysshd -v $HOME/tinysshkeydir
From the host, we need to know the IP to connect to.
# On the host
IP=$(docker inspect -f "{{ .NetworkSettings.IPAddress }}" f2py_npdev)
ssh gitpod@$IP -p 2200
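To avoid retyping the port and key every time, a host alias on the host machine is handy. This is only a sketch (the alias name is arbitrary, and the container IP can change between runs, so re-run the docker inspect above as needed):
# Append an alias to ~/.ssh/config on the host
cat >> ~/.ssh/config <<EOF
Host f2py_npdev
    HostName $IP
    Port 2200
    User gitpod
    IdentityFile ~/.ssh/id_ed25519_tiny
EOF
ssh f2py_npdev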
File Permissions
A common issue with docker images is that the user and group IDs owning the shared folders do not match those within the container. The most obvious fix is to force the IDs by passing --user $(id -u):$(id -g) when creating the container. However, a better approach is to change the group and user IDs from within the container itself; both are sketched below.
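For comparison, the --user route is a single flag on the original docker run (a sketch reusing the image and mount from earlier):
# Force the host uid/gid at container creation time
docker run --user $(id -u):$(id -g) -v $(pwd):/home/gitpod/f2py_skel --name f2py_npdev -it numpy/numpy-dev:latest
The rest of this section follows the in-container route.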
id # On the host, note the uid and gid
docker start f2py_npdev
docker attach f2py_npdev
# In the docker container (assumes the primary group is named after the user)
sudo groupmod -g $HOSTGID $USER
sudo usermod -u $HOSTUID $USER
exit # re-enter
When re-entering the container, the correct permissions will be set for the shared folder, but any existing files will need to be fixed up. We can also get the uid and gid of the shared folder directly with stat.
# In the container
export HOSTGID=$(stat -c "%g" $SHARED_VOLUME)
export HOSTUID=$(stat -c "%u" $SHARED_VOLUME)
export DOCKERUID=$(stat -c "%u" $HOME)
export DOCKERGID=$(stat -c "%g" $HOME)
For the fix-up, we can simply use the populated variables.
sudo find / -xdev -user "$DOCKERUID" -exec chown -h "$HOSTUID" {} \;
sudo find / -xdev -group "$DOCKERGID" -exec chgrp -h "$HOSTGID" {} \;
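A quick sanity check that the numeric ownership now lines up:
# The uid/gid of the shared folder should now match the current user
ls -lnd $SHARED_VOLUME
id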
Conclusions
Like many modern developers, I spend far too long messing around with continuous integration systems and jumping between development environments. numpy and other projects have opted into Gitpod as a lower-barrier-to-entry way to aid contributors. Often the docker images backing these do not have systemd or any service management set up, and so local environments based on such images can be a little off-putting.
Nothing in this post constitutes any kind of best practice. In an ideal world one would have either physical machines to access or well-configured virtual machines. Nevertheless, for edge cases, this works well enough in practice.