James Coates

Computer Science student. Sydney, Australia.



Collapsing the Core: Rebuilding the Newport Network on 10GbE

· Technology


The Newport network started as a flat gigabit setup with a single switch and a consumer router. Over the past year it evolved into a segmented UniFi deployment with VLANs, managed switching, and a growing compute stack. What it had not done, until this month, was address the fundamental throughput constraint at the centre of it all.

This post documents the transition to a collapsed core architecture built around the USW Aggregation as the root switch, with 10 Gigabit Ethernet forming the backbone between every critical node in the lab.

Gateway and External Connectivity

The internet connection enters the property through an NBN NTD (Network Termination Device) provided by Launtel. The NTD connects via a 10GbE RJ45 link to a Ubiquiti UCG Fiber (UniFi Cloud Gateway Fiber), which serves as the network gateway, router, and firewall.

The UCG Fiber handles inter-VLAN routing, DNS, DHCP, and all the Traffic Rules that keep the IoT VLAN isolated from the rest of the network. Its SFP+ ports provide the 10GbE uplink into the switching backbone using DAC (Direct Attach Copper) cables.

Launtel deserves a brief mention. They are one of the few Australian ISPs offering a genuinely transparent service: no lock-in contracts, real-time speed tier changes through their web portal, and actual humans answering when you call support. The NBN connection itself runs at 500/50 Mbps on the current plan, which is more than adequate for the household's external bandwidth requirements. The 10GbE backbone is not about internet speed. It is about internal east-west traffic between the servers and storage.

The Switching Tier

The switching layer is where the architecture changed most significantly.

Root Switch: USW Aggregation

The USW Aggregation is a compact 8-port 10GbE SFP+ switch designed for exactly this role. It sits at the centre of the network and provides the high-speed distribution point for every 10GbE-capable device. In a collapsed core topology, a single switch handles both the core switching layer and the aggregation of access-layer uplinks. The USW Aggregation is purpose-built for this pattern in small to mid-sized UniFi deployments.

Every 10GbE connection in the network terminates here: the UCG Fiber uplink, the Plex server's SFP+ NIC, the access switch uplink, and soon the TrueNAS server.

Access Switch: US-48-500W

The US-48 PoE 500W is the workhorse of the physical layer. It provides 48 gigabit PoE ports and connects back to the USW Aggregation via a 10GbE SFP+ uplink. This switch supports the following downstream devices:

The U6 Pro APs are placed based on RF site survey work documented in earlier posts, with manual 2.4 GHz and 5 GHz dBm tuning to manage co-channel interference across the double-brick construction. The 2.4 GHz radios stay on HT20 channel width, and legacy roaming protocols remain disabled.

Compute and Storage

Plex Media Server (Lenovo SFF)

The primary compute node is a Lenovo small form factor desktop repurposed as a headless Ubuntu Server instance. It connects to the USW Aggregation via a 10GbE SFP+ NIC, making it the first server in the lab to operate at full backbone speed.

The server runs several Docker containers alongside Plex itself:

The NVMe drive handles all database writes and Docker layer operations. Media files are read sequentially from the external HDDs during playback, and the 10GbE link ensures that even high-bitrate 4K remuxes come nowhere near saturating the connection between the server and any client on the network.
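To put rough numbers on that claim, a back-of-envelope utilisation sketch (the 80 Mbps figure stands in for a high-bitrate 4K remux and is an assumption, not a measurement from this network):

```shell
#!/bin/bash
# Rough link utilisation for concurrent Plex streams.
# 80 Mbps per stream is an assumed high-end 4K remux bitrate.
utilisation_pct() {
  local streams=$1 mbps=$2 link_gbps=$3
  # streams * mbps as a percentage of the link's capacity in Mbps
  awk -v s="$streams" -v m="$mbps" -v g="$link_gbps" \
    'BEGIN { printf "%.1f", s * m / (g * 1000) * 100 }'
}

echo "2 remux streams on 1GbE:  $(utilisation_pct 2 80 1)%"
echo "2 remux streams on 10GbE: $(utilisation_pct 2 80 10)%"
```

Even on gigabit the streams themselves were never the problem; the point of the backbone is that they now coexist with bulk transfers without contention.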

TrueNAS Server

The storage backbone of the lab is a dedicated TrueNAS machine built on server-grade hardware:

The TrueNAS box currently connects via a dual 1GbE LAG using LACP (Link Aggregation Control Protocol). This is the last bottleneck in the architecture: LACP hashes each flow to a single member link, so any individual transfer still tops out at 1 Gbps. A 10GbE SFP+ upgrade is pending, which will bring the storage node up to full backbone speed and remove the throughput ceiling on large file transfers between the Plex server and the ZFS pools.
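A rough sense of what the pending upgrade buys, sketched under two assumptions: about 90% of line rate is usable in practice, and a single transfer on the LAG is pinned to one member link:

```shell
#!/bin/bash
# Back-of-envelope wall-clock time to move a 1 TiB dataset over each
# link option. The 90% efficiency factor is an assumption.
SIZE_GIB=1024   # 1 TiB

transfer_minutes() {
  local gbps=$1
  # usable rate in bytes/sec, then total minutes for the dataset
  awk -v size="$SIZE_GIB" -v gbps="$gbps" \
    'BEGIN { rate = gbps * 0.9 * 1e9 / 8;
             printf "%.0f", size * 1024^3 / rate / 60 }'
}

echo "Single stream on the 2x1GbE LAG (1 Gbps cap): $(transfer_minutes 1) min"
echo "LAG best case, multiple streams (2 Gbps):     $(transfer_minutes 2) min"
echo "10GbE SFP+ uplink:                            $(transfer_minutes 10) min"
```

A pool-to-pool copy that ties up the LAG for most of an evening drops to well under twenty minutes on the 10GbE link.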

All four drives are CMR (Conventional Magnetic Recording) rather than SMR. This is a deliberate choice. SMR drives suffer from severe write performance degradation under sustained workloads due to their shingled write architecture, making them unsuitable for ZFS pools that rely on consistent write throughput during scrubs and resilver operations.

Data Integrity

The backup strategy runs on a three-day automated cycle to TrueNAS. Homebridge configurations from the Raspberry Pi and the Plex database from the Lenovo SFF are both mirrored to dedicated ZFS datasets on the TrueNAS server.

Both machines run essentially the same script. The Raspberry Pi version:

#!/bin/bash
# Mirror the Pi's root filesystem to a ZFS dataset on TrueNAS over SSH.

SOURCE_DIR="/"
DEST_USER="root"
DEST_IP="truenas.local"
DEST_DIR="/mnt/MasterNAS/tank/Archival/RaspPiBackup/"
LOG_FILE="/home/doorpi/nas_full_backup.log"
SSH_KEY="/home/doorpi/.ssh/id_ed25519"

echo "Starting full system backup at $(date)" >> "$LOG_FILE"

# Archive mode + compression over keyed SSH; --delete keeps the mirror
# exact. The exclusions skip virtual filesystems and mount points.
rsync -avz --delete \
  --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
  -e "ssh -i $SSH_KEY" \
  "$SOURCE_DIR" "$DEST_USER@$DEST_IP:$DEST_DIR" >> "$LOG_FILE" 2>&1

echo "Backup finished at $(date)" >> "$LOG_FILE"
echo "----------------------------------------" >> "$LOG_FILE"

The Plex server version is identical in structure, with paths adjusted for the host:

#!/bin/bash

SOURCE_DIR="/"
DEST_USER="root"
DEST_IP="<TRUENAS_IP>"
DEST_DIR="/mnt/MasterNAS/tank/Archival/PlexPCBackup/"
LOG_FILE="/root/nas_full_backup.log"
SSH_KEY="/root/.ssh/id_ed25519"

echo "Starting Plex OS backup at $(date)" >> "$LOG_FILE"

rsync -avz --delete \
  --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
  -e "ssh -i $SSH_KEY" \
  "$SOURCE_DIR" "$DEST_USER@$DEST_IP:$DEST_DIR" >> "$LOG_FILE" 2>&1

echo "Backup finished at $(date)" >> "$LOG_FILE"
echo "----------------------------------------" >> "$LOG_FILE"

Both scripts do the same thing. They run rsync in archive mode (-a preserves permissions, ownership, timestamps, and symlinks), with compression (-z) over SSH using a dedicated ED25519 key with no passphrase. The --delete flag ensures the remote mirror stays in sync by removing files on the TrueNAS side that no longer exist on the source. The exclusion list strips out the kernel virtual filesystems (/dev, /proc, /sys), runtime state (/tmp, /run), and mount points (/mnt, /media) that should never be part of a backup. On the Plex host, the /mnt/* exclusion is what prevents the 8 TB of media on the external drives from being pulled into a backup sized for the OS SSD.

Each script logs its start time, the full rsync output, and its finish time to a local log file. That log is the first thing to check when a backup fails silently, which is the default failure mode for any unattended cron job.
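A minimal sketch of that check, written as a function that could be pointed at either log (the function name and the tail-of-log heuristic are mine, not part of the original scripts):

```shell
#!/bin/bash
# Report whether the most recent backup run reached its completion line.
# A log that ends mid-rsync output means the job died part-way through.
check_last_backup() {
  local log=$1
  if tail -n 5 "$log" | grep -q "Backup finished"; then
    echo "ok"
  else
    echo "incomplete"
  fi
}

# e.g. on the Pi:
# check_last_backup /home/doorpi/nas_full_backup.log
```

Run by hand or from another cron entry, it turns the silent failure mode into something that can be noticed.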

Automation

Both scripts are scheduled in the root crontab (sudo crontab -e) and run every three days in the evening. (Strictly, cron's 1-31/3 step restarts at each month boundary, so the gap around the turn of the month can be shorter than three days.)

45 18 1-31/3 * * /home/doorpi/nas_backup.sh
45 19 1-31/3 * * /root/nas_backup.sh

The Pi runs at 18:45, the Plex server at 19:45. The hour gap keeps the two jobs from contending for the same TrueNAS write bandwidth simultaneously. Both run as root because a full system mirror needs to read files that only root can access, including /etc/shadow, service persistence directories, and daemon-owned paths under /var.

The three-day cadence balances recovery point granularity against the realities of the Pi 3B's 100 Mbps Ethernet ceiling. Running the backup more frequently would tighten the recovery point objective but would contend with the household's evening bandwidth usage. Three days keeps the worst-case data loss window acceptably small while keeping the backup jobs clear of peak household usage.


What This Changes in Practice

The shift from gigabit to 10GbE on the internal backbone is not about peak throughput for any single operation. It is about headroom. When the Plex server is streaming a 4K remux to two clients while Gluetun is saturating a VPN tunnel and the TrueNAS box is running a weekly scrub, none of those workloads are contending for the same gigabit pipe any more. The backbone has enough capacity to absorb all of them simultaneously without any single flow degrading another.

The collapsed core model also simplifies management. There is one point of truth for the high-speed fabric. Every 10GbE device plugs into the USW Aggregation. Every gigabit device plugs into the US-48. The logical topology matches the physical one, which makes troubleshooting and capacity planning considerably more straightforward than a mesh of uplinks between multiple switches would be.


