Setting up NFS on my home lab cluster


I wanted to set up NFS storage on my home lab VM cluster deployed on Hetzner Cloud. Previously I attached an extra disk (a Hetzner Cloud Volume), mounted it on the "server" VM, and used sshfs to "share" that mounted filesystem with the other VMs acting as clients.

While sshfs is relatively simple to set up if SSH already works, it is generally slower than NFS due to encryption overhead.

NFS is not encrypted by default; for encryption, something like Kerberos has to be layered on top. That makes it best suited for trusted networks and VPNs.

NFS follows a client-server architecture: the server exports directories, and clients mount those exported directories.


Run the following code on the “server” VM:

# NFS server
sudo apt update
sudo apt install nfs-kernel-server -y
sudo mkdir /nfs

Run the following command if you are using the root_squash option (which maps the root user on a client to an unprivileged user on the server):

# NFS server
sudo chown nobody:nogroup /nfs

Add this line to /etc/exports, the configuration file the NFS server uses to define which directories are shared with remote client machines and their access control options:

/nfs *(rw,sync,no_root_squash,no_subtree_check)
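
The options in parentheses are worth understanding: rw allows reads and writes, sync forces writes to disk before the server replies, no_root_squash lets a client's root act as root on the server, and no_subtree_check skips subtree verification on each request. The `*` exports to any host that can reach the server; on a shared network you may want to restrict this. A hedged variant (the 10.0.0.0/24 subnet is a placeholder for your own private network):

```
# /etc/exports — restrict to a private subnet and squash root (illustrative)
/nfs 10.0.0.0/24(rw,sync,root_squash,no_subtree_check)
```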

Enable the changes:

sudo exportfs -a
sudo systemctl restart nfs-kernel-server

We are done with the NFS server setup; now head over to the NFS clients.

# NFS client
sudo apt install nfs-common -y
sudo mkdir /nfs
sudo mount node-0:/nfs /nfs
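
To confirm the share actually mounted, /proc/mounts can be checked; a minimal sketch (the /nfs mount point matches the commands above):

```shell
#!/bin/sh
# Report whether an NFS filesystem is currently mounted at /nfs
# (mount point taken from the mount command above)
if grep -qs ' /nfs nfs' /proc/mounts; then
    echo "/nfs is mounted over NFS"
else
    echo "/nfs is not mounted"
fi
```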

To mount the share automatically at boot, we resort to creating a systemd service. A plain entry in /etc/fstab can fail because the network may not be up yet when local filesystems are mounted.
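
As an aside, systemd can also defer an fstab mount until the network is ready via mount options; a hedged example entry (server hostname node-0 as above):

```
# /etc/fstab — _netdev delays the mount until networking is up,
# x-systemd.automount mounts lazily on first access (illustrative)
node-0:/nfs  /nfs  nfs  defaults,_netdev,x-systemd.automount  0  0
```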

Create a systemd mount service at /etc/systemd/system/nfs-mount.service:

[Unit]
Description=Mount NFS Share
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/mount node-0:/nfs /nfs
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

To enable and run the service:

sudo systemctl enable nfs-mount.service 
sudo systemctl start nfs-mount.service 
sudo systemctl status nfs-mount.service