NFS share with external SSD Raid 1 disks

When planning to provide my Kubernetes cluster with a dynamic storage provisioning option, I decided to use NFS to be able to attach volumes to multiple nodes in a dynamic way. Considering SSD disks are inexpensive these days, I also decided to buy a couple to create a RAID 1 array and have a certain level of redundancy. The easiest way to connect those disks is using a SATA to USB adapter, as I’m using an Intel NUC that has a couple of USB 3 ports available.

So this guide assumes you have a couple of unformatted SSD disks and the corresponding cable adapters, and walks you through the whole process up to sharing the new disks using NFS.

So let’s get right to it:

Mount and format the external drives

Let’s start by properly attaching and formatting the SSD drives, one by one.

After connecting the drive to a USB port, you need to identify the device; for this purpose, use the lsblk command:

$ lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0                       7:0    0  55.5M  1 loop  /snap/core18/2409
loop1                       7:1    0  61.9M  1 loop  /snap/core20/1518
loop2                       7:2    0  67.8M  1 loop  /snap/lxd/22753
loop3                       7:3    0  67.2M  1 loop  /snap/lxd/21835
loop5                       7:5    0  61.9M  1 loop  /snap/core20/1494
loop6                       7:6    0    47M  1 loop  /snap/snapd/16010
loop7                       7:7    0 219.8M  1 loop  /snap/microk8s/3272
loop8                       7:8    0    47M  1 loop  /snap/snapd/16292
sda                         8:0    0 119.2G  0 disk  
├─sda1                      8:1    0   1.1G  0 part  /boot/efi
├─sda2                      8:2    0   1.5G  0 part  /boot
└─sda3                      8:3    0 116.7G  0 part  
  └─ubuntu--vg-ubuntu--lv 253:0    0 116.7G  0 lvm   /
sdb                         8:16   0 223.6G  0 disk  

The command will display all devices and partitions. In this case, the newly attached drive is identified as “sdb”, with no partitions.

If this is a used drive and any partitions are listed, we will delete them all for a fresh start.

$ sudo gdisk /dev/sdb
GPT fdisk (gdisk) version 1.0.5

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

To delete the existing partition, use the d option.

Command (? for help): d
Partition number (1-2): 1

Command (? for help): d
Using 2

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING PARTITIONS!!

Do you want to proceed? (Y/N): Y

Now it’s time to create a fresh partition.

$ sudo gdisk /dev/sdb
GPT fdisk (gdisk) version 1.0.5

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Create a new partition using the n option and select partition number 1.

Command (? for help): n
Partition number (1-128, default 1): 1

You can go with all the default options so the new partition uses the whole disk.

First sector (34-11721045134, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-11721045134, default = 11721045134) or {+-}size{KMGTP}:
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300):

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING PARTITIONS!!

Do you want to proceed? (Y/N): Y
OK; writing new GUID partition table (GPT) to /dev/sdb.
The operation has completed successfully.

Now that the partition is ready, you need to format it with a file system. In this case we’ll be using ext4, which is a common choice on Linux systems.

$ sudo mkfs.ext4 /dev/sdb1

Be careful when specifying the device and partition so you don’t overwrite any other disk.
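
If in doubt, a cheap sanity check is to list just the target device and confirm it is the blank external drive before formatting:

$ lsblk -f /dev/sdb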

At this point the drive has a proper filesystem and is ready to be used. As this is an external disk, let’s mount it at a specific location and make that mount persistent across reboots.

$ sudo mkdir /media/usbssd1

$ sudo blkid
/dev/sdb1: UUID="acc9fb76-a7bb-4801-9740-831055ebed29" TYPE="ext4" PARTLABEL="Linux filesystem" PARTUUID="868d68e4-aea2-4671-b348-5d610108c240"

Take note of the UUID for this device and partition, and edit fstab to make the mount permanent.

$ sudo nano /etc/fstab

Add the new mount to the end of the file.

# External device
UUID=acc9fb76-a7bb-4801-9740-831055ebed29 /media/usbssd1 ext4 defaults 0 1

Run the following commands to mount the drive and adjust permissions on the new mount point.

$ sudo mount -a
$ sudo chmod 1777 /media/usbssd1

At this point, the first drive is ready to be used and automatically mounts after reboot. As we need two disks to create a RAID 1 array, repeat all these steps with the second drive connected to the USB port. Keep in mind that the device identifier will change (e.g. sdc), so run the commands with the proper device identifier.
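
As a quick recap, and assuming the second drive shows up as sdc and you mount it at /media/usbssd2, the second pass boils down to something like this (your device name may differ, so confirm it with lsblk first):

$ lsblk
$ sudo gdisk /dev/sdc        # delete old partitions if any, then create one new partition
$ sudo mkfs.ext4 /dev/sdc1
$ sudo mkdir /media/usbssd2
$ sudo blkid                 # note the UUID of /dev/sdc1 and add it to /etc/fstab
$ sudo mount -a
$ sudo chmod 1777 /media/usbssd2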

Create the RAID 1 disk array

So what is a RAID disk array? RAID stands for Redundant Array of Independent Disks: a set of independent disks used by the operating system as a single unit, with different levels of redundancy. RAID 1 keeps an exact copy of the data on each disk, so the array can tolerate losing one of the drives without any data loss. In my case this is more than enough for the data I’ll be storing and the available budget.

First install mdadm and examine the devices to be used in the RAID array.

$ sudo apt install mdadm
$ sudo mdadm --examine /dev/sdb /dev/sdc

The output of this command should be something like:

/dev/sdb:
   MBR Magic : aa55
Partition[0] :    468862127 sectors at            1 (type ee)
/dev/sdc:
   MBR Magic : aa55
Partition[0] :    468862127 sectors at            1 (type ee)

Now examine the specific partitions:

$ sudo mdadm --examine /dev/sdb1 /dev/sdc1
mdadm: No md superblock detected on /dev/sdb1.
mdadm: No md superblock detected on /dev/sdc1.

This output is consistent with the lack of an existing RAID setup on those partitions.

Before moving forward with creating the RAID array, we need to unmount the partitions and remove their entries from /etc/fstab. You can’t create a RAID logical device on mounted partitions.

$ sudo umount /media/usbssd1
$ sudo umount /media/usbssd2
$ sudo nano /etc/fstab
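
In /etc/fstab, remove or comment out the two entries added earlier for the individual drives; they should look roughly like this (the second UUID will be whatever blkid reported for your second drive):

# External device
# UUID=acc9fb76-a7bb-4801-9740-831055ebed29 /media/usbssd1 ext4 defaults 0 1
# UUID=<uuid-of-second-drive> /media/usbssd2 ext4 defaults 0 1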

Now let’s create the RAID 1 logical drive:

$ sudo mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1

mdadm: /dev/sdb1 appears to contain an ext2fs file system
       size=234430020K  mtime=Fri May 20 15:36:59 2022
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: /dev/sdc1 appears to contain an ext2fs file system
       size=234430020K  mtime=Fri May 20 15:36:59 2022
Continue creating array? 

This process takes a while; you can check the progress with this command:

$ cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdc1[1] sdb1[0]
      234297920 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  0.8% (1883712/234297920) finish=26.7min speed=144900K/sec
      bitmap: 2/2 pages [8KB], 65536KB chunk

unused devices: <none>

The progress percentage will advance as time elapses. An alternative is:

$ sudo mdadm --detail /dev/md0

/dev/md0:
           Version : 1.2
     Creation Time : Fri May 20 15:56:49 2022
        Raid Level : raid1
        Array Size : 234297920 (223.44 GiB 239.92 GB)
     Used Dev Size : 234297920 (223.44 GiB 239.92 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri May 20 15:58:51 2022
             State : clean, resyncing 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

     Resync Status : 4% complete

              Name : nucworker2:0  (local to host nucworker2)
              UUID : dd8951a9:f6691c6a:06662569:8fe8bad0
            Events : 24

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
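
If you prefer to follow the resync without re-running either command, you can wrap it in watch, which refreshes the output periodically:

$ watch -n 10 cat /proc/mdstat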

Once the status is “clean”, you can move forward with creating the corresponding file system:

$ sudo mkfs.ext4 /dev/md0
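
It’s also worth persisting the array definition so the device is reassembled consistently after a reboot; on Ubuntu/Debian the usual way is to append the scan output to mdadm.conf and refresh the initramfs:

$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
$ sudo update-initramfs -u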

Finally, mount the new device and edit fstab to make it permanent after reboot:

$ sudo mkdir /media/ssdraid1
$ sudo chmod 1777 /media/ssdraid1
$ sudo blkid
/dev/md0: UUID="253e1c45-2df4-490d-8d7d-2d0e35ca7876" TYPE="ext4"

$ sudo nano /etc/fstab

Append at the end of the file:
# RAID device
UUID=253e1c45-2df4-490d-8d7d-2d0e35ca7876 /media/ssdraid1 ext4 defaults 0 1
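
Since the array lives on external USB disks, you may optionally add the nofail mount option so the system can still boot if the drives happen to be disconnected; in that case the line would be:

UUID=253e1c45-2df4-490d-8d7d-2d0e35ca7876 /media/ssdraid1 ext4 defaults,nofail 0 1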

$ sudo mount -a

Install NFS and share the RAID 1 device

Once again, what is NFS? It stands for Network File System, and it’s a proven way of creating a network drive accessible to any allowed node, with support for simultaneous reads and writes from multiple clients.

Let’s install the NFS server:

$ sudo apt install nfs-kernel-server

Then create a directory on the RAID drive to be shared:

$ cd /media/ssdraid1/
$ sudo mkdir nfs-share-dir
$ sudo chown -R nobody:nogroup /media/ssdraid1/nfs-share-dir/
$ sudo chmod 777 nfs-share-dir

Create the NFS export for the new directory:

$ sudo nano /etc/exports

#Add this line at the end of the file
/media/ssdraid1/nfs-share-dir 192.168.68.0/24(rw,sync,no_subtree_check)

$ sudo exportfs -a
$ sudo systemctl restart nfs-kernel-server
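
You can confirm the directory is being exported with the expected options by listing the active exports:

$ sudo exportfs -v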

In this case I’m sharing the directory with my entire network segment, but you can be more restrictive if only a couple of nodes should be allowed to mount the NFS share.
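
For example, to export only to two specific client IPs instead of the whole subnet, the entry could look like this (the addresses below are just placeholders for your own nodes):

/media/ssdraid1/nfs-share-dir 192.168.68.10(rw,sync,no_subtree_check) 192.168.68.11(rw,sync,no_subtree_check)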

To test that everything is working as expected, log into a different node and mount the newly created share:

$ sudo apt install nfs-common

# Create a mountpoint
$ sudo mkdir /media/testnfs

$ sudo mount 192.168.68.54:/media/ssdraid1/nfs-share-dir /media/testnfs

Try writing some data and validating it on the NFS server; everything should work as expected.
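
Something as simple as this is enough for a quick test (the file name is arbitrary):

# On the client
$ echo "hello from the client" > /media/testnfs/test.txt

# On the NFS server
$ cat /media/ssdraid1/nfs-share-dir/test.txt
hello from the client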

You can permanently mount the share on any node as previously explained, or mount it on demand. Multiple nodes can mount the same share and read and write shared files without any issue, with the confidence that the data has a reasonable level of redundancy should one of the drives fail.
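
If you want a client to mount the share automatically at boot, an /etc/fstab entry along these lines works (adjust the server IP and mount point to your setup):

# NFS share from the RAID server
192.168.68.54:/media/ssdraid1/nfs-share-dir /media/testnfs nfs defaults 0 0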

Let me know in the comments section if this solution worked for you, or share any improvements or alternative steps. Cheers!

