How to Set Up GlusterFS Distributed Storage Across Raspberry Pi Nodes
GlusterFS is a scalable, distributed filesystem that pools storage from multiple machines into a single volume. If one node goes down, your data remains available on the others. It runs as a system service (glusterd) and mounts volumes through FUSE, so it is installed directly on each host rather than in Docker.
Prerequisites
- At least 2 Raspberry Pi devices (3 recommended for redundancy)
- Raspberry Pi OS (64-bit) on each node
- All Pis on the same local network with SSH enabled
Example setup: pi-node1 at 192.168.1.101, pi-node2 at 192.168.1.102, pi-node3 at 192.168.1.103.
Step 1: Configure Hostnames
On every Pi, add all nodes to /etc/hosts:
sudo nano /etc/hosts
192.168.1.101 pi-node1
192.168.1.102 pi-node2
192.168.1.103 pi-node3
Set each Pi's hostname to match, e.g. on the first node:
sudo hostnamectl set-hostname pi-node1
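Before moving on, it's worth confirming that every node can reach every other node by name. A quick check, run from each Pi (substitute the hostnames of the other two nodes):
ping -c 2 pi-node2
ping -c 2 pi-node3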
Step 2: Install GlusterFS on All Nodes
Run on every Pi:
sudo apt update && sudo apt upgrade -y
sudo apt install -y glusterfs-server
sudo systemctl enable glusterd
sudo systemctl start glusterd
Verify: sudo systemctl status glusterd should show active (running).
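It's also worth confirming that all nodes run the same GlusterFS version, since mismatched versions can cause peers to be rejected:
gluster --version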
Step 3: Create Storage Bricks
On each node, create the directory GlusterFS will use as its brick:
sudo mkdir -p /gluster/brick1
For better performance and SD card longevity, consider using a USB SSD mounted at this path instead.
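If you go the SSD route, here is a minimal sketch of formatting and mounting a drive at the brick path. It assumes the SSD shows up as /dev/sda1 and holds no data you care about; check with lsblk first, and prefer a UUID= entry in /etc/fstab for robustness:
# WARNING: erases /dev/sda1 -- verify the device name with lsblk first
sudo mkfs.ext4 /dev/sda1
echo '/dev/sda1 /gluster/brick1 ext4 defaults,noatime 0 2' | sudo tee -a /etc/fstab
sudo mount /gluster/brick1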
Step 4: Probe Peer Nodes
From pi-node1, connect to the other nodes:
sudo gluster peer probe pi-node2
sudo gluster peer probe pi-node3
Verify with:
sudo gluster peer status
You should see both peers listed as Peer in Cluster (Connected).
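For a compact view that includes the local node as well:
sudo gluster pool list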
Step 5: Create a Replicated Volume
From pi-node1, create a volume replicated across all three nodes:
sudo gluster volume create shared-vol replica 3 \
pi-node1:/gluster/brick1/vol \
pi-node2:/gluster/brick1/vol \
pi-node3:/gluster/brick1/vol
If using only 2 nodes, change replica 3 to replica 2 and list two brick paths; note that gluster will warn you that two-way replication is prone to split-brain (a third replica or an arbiter brick avoids this). Also, if your bricks live on the root partition (i.e. the SD card), gluster refuses to create the volume unless you append force to the command. Start the volume:
sudo gluster volume start shared-vol
sudo gluster volume info shared-vol
You should see Status: Started.
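More fully, the gluster volume info output should look roughly like this (abridged; your Volume ID will differ):
Volume Name: shared-vol
Type: Replicate
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: pi-node1:/gluster/brick1/vol
Brick2: pi-node2:/gluster/brick1/vol
Brick3: pi-node3:/gluster/brick1/vol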
Step 6: Mount the Volume
On the node where you want to access shared storage:
sudo apt install -y glusterfs-client
sudo mkdir -p /mnt/shared
sudo mount -t glusterfs pi-node1:/shared-vol /mnt/shared
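A quick way to confirm the mount worked; df should list the volume with its replicated capacity:
df -h /mnt/shared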
To persist across reboots:
echo 'pi-node1:/shared-vol /mnt/shared glusterfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab
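Note that pi-node1 in this entry is only used to fetch the volume layout at mount time; once mounted, the client talks to all bricks directly. Still, if pi-node1 happens to be down at boot the mount fails, so you can list fallback servers with the GlusterFS FUSE client's backup-volfile-servers mount option. If you want that resilience, use this variant of the fstab line instead:
echo 'pi-node1:/shared-vol /mnt/shared glusterfs defaults,_netdev,backup-volfile-servers=pi-node2:pi-node3 0 0' | sudo tee -a /etc/fstab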
Step 7: Test Replication
Write a file from pi-node1:
echo "Hello from GlusterFS" | sudo tee /mnt/shared/test.txt
SSH into pi-node2 and check the brick directly (reading from brick paths is fine for verification, but always write through the mount point, never into the bricks themselves):
cat /gluster/brick1/vol/test.txt
You should see Hello from GlusterFS. To test fault tolerance, stop glusterd on pi-node3, write a new file from pi-node1, then restart pi-node3. After a short healing period, the file replicates automatically. Check status with sudo gluster volume heal shared-vol info.
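As a concrete command sequence for that fault-tolerance test (the file name is just an example, and note that on some setups brick processes keep running after glusterd stops, so rebooting pi-node3 is an even more realistic failure):
# on pi-node3: simulate a failure
sudo systemctl stop glusterd
# on pi-node1: write while the node is down
echo "written during outage" | sudo tee /mnt/shared/failover-test.txt
# on pi-node3: recover, then watch the self-heal finish (zero entries = healed)
sudo systemctl start glusterd
sudo gluster volume heal shared-vol info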
Troubleshooting
- Peer probe fails: Ensure glusterd is running on the target node and port 24007 is not blocked.
- Volume creation fails with "brick already in use": Use a fresh path like /gluster/brick2/vol.
- Mount hangs: Verify the glusterfs-client package is installed and the volume is started.
- Files not replicating: Run sudo gluster volume heal shared-vol info to check for split-brain issues.
- Slow performance: SD cards have limited I/O. Use USB 3.0 SSDs for brick storage.
- "Transport endpoint is not connected": Remount with sudo umount /mnt/shared && sudo mount -t glusterfs pi-node1:/shared-vol /mnt/shared.
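When a symptom isn't covered above, the logs usually explain it. On Raspberry Pi OS these live under /var/log/glusterfs/ by default; the mount log name is derived from the mount point, so /mnt/shared logs to mnt-shared.log:
sudo tail -n 50 /var/log/glusterfs/glusterd.log
sudo tail -n 50 /var/log/glusterfs/mnt-shared.log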
Conclusion
GlusterFS gives your Raspberry Pi cluster a resilient shared storage layer. With replicated volumes, your data survives node failures and is accessible from any machine on the network.