Collapsing the Core: Rebuilding the Newport Network on 10GbE
The Newport network started as a flat gigabit setup with a single switch and a consumer router. Over the past year it evolved into a segmented UniFi deployment with VLANs, managed switching, and a growing compute stack. What it had not done, until this month, was address the fundamental throughput constraint at the centre of it all.
This post documents the transition to a collapsed core architecture built around the USW Aggregation as the root switch, with 10 Gigabit Ethernet forming the backbone between every critical node in the lab.
Gateway and External Connectivity
The internet connection enters the property through an NBN NTD (Network Termination Device) provided by Launtel. The NTD connects via a 10GbE RJ45 link to a Ubiquiti UCG Fiber (UniFi Cloud Gateway Fiber), which serves as the network gateway, router, and firewall.
The UCG Fiber handles inter-VLAN routing, DNS, DHCP, and all the Traffic Rules that keep the IoT VLAN isolated from the rest of the network. Its SFP+ ports provide the 10GbE uplink into the switching backbone using DAC (Direct Attach Copper) cables.
Launtel deserves a brief mention. They are one of the few Australian ISPs that offer a genuinely transparent service with no lock-in contracts, real-time speed tier changes through their web portal, and actual humans when you call support. The NBN connection itself runs at 500/50 Mbps on the current plan, which is more than adequate for the household's external bandwidth requirements. The 10GbE backbone is not about internet speed. It is about internal east-west traffic between the servers and storage.
The Switching Tier
The switching layer is where the architecture changed most significantly.
Root Switch: USW Aggregation
The USW Aggregation is a compact 8-port 10GbE SFP+ switch designed for exactly this role. It sits at the centre of the network and provides the high-speed distribution point for every 10GbE-capable device. In a collapsed core topology, a single switch handles both the core switching layer and the aggregation of access-layer uplinks. The USW Aggregation is purpose-built for this pattern in small to mid-sized UniFi deployments.
Every 10GbE connection in the network terminates here: the UCG Fiber uplink, the Plex server's SFP+ NIC, the access switch uplink, and soon the TrueNAS server.
Access Switch: US-48-500W
The US-48 PoE 500W is the workhorse of the physical layer. It provides 48 gigabit PoE ports and connects back to the USW Aggregation via a 10GbE SFP+ uplink. This switch supports the following downstream devices:
- Two U6 Pro Access Points (one upstairs, one downstairs), providing dual-band 802.11ax wireless coverage across both levels of the house, powered directly via PoE from the switch
- Ethernet feed to the study, serving the desktop workstation and any client devices that benefit from a wired connection
- Raspberry Pi 3B, running Homebridge on a dedicated wired connection for stability
The U6 Pro APs are placed based on RF site survey work documented in earlier posts, with manual 2.4 GHz and 5 GHz dBm tuning to manage co-channel interference across the double-brick construction. The 2.4 GHz radios remain locked to 20 MHz (HT20) channel width, and legacy roaming protocols remain disabled.
Compute and Storage
Plex Media Server (Lenovo SFF)
The primary compute node is a Lenovo small form factor desktop repurposed as a headless Ubuntu Server instance. It connects to the USW Aggregation via a 10GbE SFP+ NIC, making it the first server in the lab to operate at full backbone speed.
- 16 GB DDR4 RAM
- 1 TB M.2 NVMe SSD for the operating system, Plex database, and Docker metadata
- 2x 4 TB external HDDs mounted for media storage
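Keeping the media drives under /mnt is deliberate, since that path is excluded from the OS backups described later. A hypothetical fstab sketch for the two external drives (labels and mount points here are illustrative, not the actual ones):

```
# /etc/fstab — illustrative entries; labels and paths are assumptions
LABEL=media1  /mnt/media1  ext4  defaults,nofail  0  2
LABEL=media2  /mnt/media2  ext4  defaults,nofail  0  2
```

The `nofail` option keeps the server booting even if a USB-attached drive is absent, which matters on a headless box.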
The server runs several Docker containers alongside Plex itself:
- Gluetun with a Mullvad VPN tunnel for all torrent traffic, ensuring download activity is routed through an encrypted exit node regardless of the state of any other network configuration
- Tautulli for Plex usage analytics and monitoring
- Network Optimiser running in read-only mode for passive network performance monitoring
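A quick way to confirm Gluetun is doing its job is to compare the exit IP seen from inside the container with the host's WAN IP. A sketch, assuming the container is named `gluetun`, that its image ships `wget`, and using Mullvad's am.i.mullvad.net check endpoint; adjust names and tools to the real deployment:

```shell
#!/bin/bash
# Succeeds (exit 0) when the two IPs differ, i.e. container traffic
# exits through the VPN rather than the host's WAN address.
vpn_leak_check() {
  local vpn_ip="$1" wan_ip="$2"
  [ -n "$vpn_ip" ] && [ "$vpn_ip" != "$wan_ip" ]
}

# Guarded so the sketch is harmless on hosts without Docker.
if command -v docker >/dev/null 2>&1; then
  vpn_ip=$(docker exec gluetun wget -qO- https://am.i.mullvad.net/ip 2>/dev/null)
  wan_ip=$(curl -fsS https://am.i.mullvad.net/ip 2>/dev/null)
  if vpn_leak_check "$vpn_ip" "$wan_ip"; then
    echo "OK: container exit IP ($vpn_ip) differs from WAN IP ($wan_ip)"
  else
    echo "WARNING: possible VPN leak" >&2
  fi
fi
```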
The NVMe drive handles all database writes and Docker layer operations. Media files are read sequentially from the external HDDs during playback, and the 10GbE link means even high-bitrate 4K remuxes come nowhere near saturating the path between the server and any client on the network.
TrueNAS Server
The storage backbone of the lab is a dedicated TrueNAS machine built on server-grade hardware:
- 16-core Intel Xeon processor
- 128 GB ECC RAM, which TrueNAS leverages heavily for its ARC (Adaptive Replacement Cache) to accelerate ZFS read operations
- 2x 8 TB Seagate IronWolf Pro drives (7200 RPM, CMR)
- 2x 4 TB Western Digital drives (7200 RPM, CMR)
- Dedicated IPMI interface for out-of-band management
The TrueNAS box currently connects via a dual 1GbE LAG negotiated with LACP (Link Aggregation Control Protocol). This is the last bottleneck in the architecture: LACP balances whole flows across the two links, so any single transfer still tops out at 1 Gbps. A 10GbE SFP+ upgrade is pending, which will bring the storage node up to full backbone speed and remove the throughput ceiling on large file transfers between the Plex server and the ZFS pools.
All four drives are CMR (Conventional Magnetic Recording) rather than SMR. This is a deliberate choice. SMR drives suffer from severe write performance degradation under sustained workloads due to their shingled write architecture, making them unsuitable for ZFS pools that rely on consistent write throughput during scrubs and resilver operations.
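Scrub and pool health can be checked from a shell as well as from the TrueNAS UI. A minimal sketch that inspects a `zpool status` report for a healthy state (the pool name `tank` is taken from the dataset paths used by the backup scripts below):

```shell
#!/bin/bash
# Check a `zpool status` report (read from stdin) for a healthy pool:
# ONLINE state and no known data errors.
pool_ok() {
  local report
  report=$(cat)
  grep -q 'state: ONLINE' <<<"$report" &&
    grep -q 'errors: No known data errors' <<<"$report"
}

# On the TrueNAS host:
#   zpool status tank | pool_ok && echo "tank healthy"
```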
Data Integrity
The backup strategy runs on a three-day automated cycle to TrueNAS. Homebridge configurations from the Raspberry Pi and the Plex database from the Lenovo SFF are both mirrored to dedicated ZFS datasets on the TrueNAS server.
Both machines run essentially the same script. The Raspberry Pi version:
#!/bin/bash
# Mirror the Pi's entire filesystem to a ZFS dataset on TrueNAS over SSH.
SOURCE_DIR="/"
DEST_USER="root"
DEST_IP="truenas.local"
DEST_DIR="/mnt/MasterNAS/tank/Archival/RaspPiBackup/"
LOG_FILE="/home/doorpi/nas_full_backup.log"
SSH_KEY="/home/doorpi/.ssh/id_ed25519"

echo "Starting full system backup at $(date)" >> "$LOG_FILE"

# Archive mode over SSH; virtual filesystems, runtime state, and mount
# points are excluded so only the real on-disk system is mirrored.
rsync -avz --delete \
  --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
  -e "ssh -i $SSH_KEY" \
  "$SOURCE_DIR" "$DEST_USER@$DEST_IP:$DEST_DIR" >> "$LOG_FILE" 2>&1

echo "Backup finished at $(date)" >> "$LOG_FILE"
echo "----------------------------------------" >> "$LOG_FILE"
The Plex server version is identical in structure, with paths adjusted for the host:
#!/bin/bash
# Same structure as the Pi script; only the destination dataset,
# log path, and key location change.
SOURCE_DIR="/"
DEST_USER="root"
DEST_IP="<TRUENAS_IP>"
DEST_DIR="/mnt/MasterNAS/tank/Archival/PlexPCBackup/"
LOG_FILE="/root/nas_full_backup.log"
SSH_KEY="/root/.ssh/id_ed25519"

echo "Starting Plex OS backup at $(date)" >> "$LOG_FILE"

rsync -avz --delete \
  --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
  -e "ssh -i $SSH_KEY" \
  "$SOURCE_DIR" "$DEST_USER@$DEST_IP:$DEST_DIR" >> "$LOG_FILE" 2>&1

echo "Backup finished at $(date)" >> "$LOG_FILE"
echo "----------------------------------------" >> "$LOG_FILE"
Both scripts do the same thing. They run rsync in archive mode (-a recurses and preserves permissions, ownership, timestamps, and symlinks), with compression (-z) over SSH using a dedicated Ed25519 key with no passphrase. The --delete flag ensures the remote mirror stays in sync by removing files on the TrueNAS side that no longer exist on the source. The exclusion list strips out the kernel virtual filesystems (/dev, /proc, /sys), runtime state (/tmp, /run), and mount points (/mnt, /media) that should never be part of a backup. On the Plex host, the /mnt/* exclusion is what prevents the 8 TB of media on the external drives from being pulled into a backup sized for the OS SSD.
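One subtlety: the exclusion list leans on bash brace expansion, which turns the single `--exclude={...}` token into one `--exclude=` flag per pattern before rsync ever sees it. Under a plain POSIX `sh` the braces pass through literally and the excludes silently stop working, which is why the `#!/bin/bash` shebang matters. A quick demonstration:

```shell
#!/bin/bash
# Brace expansion happens before the command runs, so rsync receives
# one --exclude flag per pattern rather than a single literal token.
printf '%s\n' --exclude={"/dev/*","/proc/*","/sys/*"}
# → --exclude=/dev/*
#   --exclude=/proc/*
#   --exclude=/sys/*
```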
Each script logs its start time, the full rsync output, and its finish time to a local log file. That log is the first thing to check when a backup fails silently, which is the default failure mode for any unattended cron job.
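The scripts log output but not rsync's exit status, which is the single most useful datum when a run fails silently. A small wrapper (a sketch, not the deployed script) that records the status line and propagates it so cron can still flag failures:

```shell
#!/bin/bash
# Run a command, append its output and exit status to a log, and
# return the status so the caller (or cron) can react to failures.
run_logged() {
  local log="$1"; shift
  echo "Starting '$*' at $(date)" >> "$log"
  "$@" >> "$log" 2>&1
  local status=$?
  echo "Finished with exit status $status at $(date)" >> "$log"
  return "$status"
}

# Example use in a backup script:
#   run_logged /root/nas_full_backup.log rsync -avz ... 
```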
Automation
Both scripts are scheduled in the root crontab (sudo crontab -e) and run in the evening on every third day of the month:
45 18 1-31/3 * * /home/doorpi/nas_backup.sh
45 19 1-31/3 * * /root/nas_backup.sh
The Pi runs at 18:45, the Plex server at 19:45. The hour gap keeps the two jobs from contending for the same TrueNAS write bandwidth simultaneously. Both run as root because a full system mirror needs to read files that only root can access, including /etc/shadow, service persistence directories, and daemon-owned paths under /var.
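One quirk of `1-31/3`: cron step values restart at the month boundary, so the gap between the run on the 31st and the run on the 1st is a single day (or two after a 30-day month). If a strict 72-hour cadence ever matters, a guard inside the script can key off days since the Unix epoch instead, with cron invoking it daily (e.g. `45 18 * * *`). A sketch:

```shell
#!/bin/bash
# True every-third-day check, independent of month boundaries:
# run the payload only when days-since-epoch is divisible by 3.
days_since_epoch=$(( $(date +%s) / 86400 ))
if (( days_since_epoch % 3 == 0 )); then
  echo "backup day"   # the rsync payload would run here
else
  echo "skipping (next run in $(( 3 - days_since_epoch % 3 )) day(s))"
fi
```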
The three-day cadence represents a balance between recovery point granularity and the realities of the Pi 3B's 100 Mbps Ethernet ceiling. Running the backup more frequently would provide a tighter recovery point objective, but each extra run is another long transfer squeezed through that 100 Mbps link during the household's evening usage. Three days keeps the worst-case data loss window small enough to be acceptable without multiplying those slow jobs.
Internal Network Summary
- NBN NTD — ISP termination, 10GbE RJ45 to UCG Fiber
- UCG Fiber — Gateway, router, firewall, 10GbE SFP+ DAC to USW Aggregation
- USW Aggregation — Root switch (collapsed core), 10GbE SFP+ backbone
- US-48-500W — Access switch, PoE distribution, 10GbE SFP+ uplink to USW Aggregation
- Lenovo SFF (Plex) — Primary compute, 10GbE SFP+ to USW Aggregation
- TrueNAS — Storage node, dual 1GbE LAG (10GbE pending)
- Raspberry Pi 3B — Homebridge, automation, 100 Mbps to US-48
- 2x U6 Pro — Wireless access points, PoE from US-48
What This Changes in Practice
The shift from gigabit to 10GbE on the internal backbone is not about peak throughput for any single operation. It is about headroom. When the Plex server is streaming a 4K remux to two clients while Gluetun is saturating a VPN tunnel and the TrueNAS box is running a weekly scrub, none of those workloads are contending for the same gigabit pipe any more. The backbone has enough capacity to absorb all of them simultaneously without any single flow degrading another.
The collapsed core model also simplifies management. There is one point of truth for the high-speed fabric. Every 10GbE device plugs into the USW Aggregation. Every gigabit device plugs into the US-48. The logical topology matches the physical one, which makes troubleshooting and capacity planning considerably more straightforward than a mesh of uplinks between multiple switches would be.
Technology Stack
Networking
- Ubiquiti UCG Fiber (gateway)
- USW Aggregation (10GbE root switch)
- US-48-500W (48-port PoE access switch)
- 2x U6 Pro (802.11ax access points)
- 10GbE SFP+ DAC cabling
- Launtel (NBN ISP, 500/50 Mbps)
Compute
- Lenovo SFF (headless Ubuntu Server, 16 GB DDR4, 1 TB NVMe, 10GbE SFP+ NIC)
- TrueNAS (16-core Xeon, 128 GB ECC RAM, 2x 8 TB IronWolf Pro, 2x 4 TB WD, dual 1GbE LAG with IPMI)
- Raspberry Pi 3B (Homebridge, 100 Mbps wired)
Software
- Docker (Gluetun with Mullvad VPN, Tautulli, Network Optimiser)
- TrueNAS with ZFS datasets and snapshot retention
- Plex Media Server
- Homebridge