Having had a wee issue with Jellyfin Media Server’s database this week after an upgrade, I decided to start backing things up with Borgmatic rather than face another 24-hour database rebuild. Borgmatic is a handy wrapper script that automates BorgBackup.
Jellyfin Borgmatic Config
# List of source directories to backup.
# Path to BorgBackup repository
# Retention policy for how many backups to keep.
# List of checks to run to validate your backups.
# Custom preparation scripts to run.
- systemctl stop jellyfin
- systemctl start jellyfin
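Pieced together, the full config.yaml follows the standard borgmatic layout and looks roughly like this – the repository path and retention counts below are placeholders rather than my real values:

location:
    # List of source directories to backup.
    source_directories:
        - /etc/jellyfin
        - /var/lib/jellyfin

    # Path to BorgBackup repository (placeholder path).
    repositories:
        - /mnt/backups/jellyfin.borg

retention:
    # Retention policy for how many backups to keep (example numbers).
    keep_daily: 7
    keep_weekly: 4

consistency:
    # List of checks to run to validate your backups.
    checks:
        - repository
        - archives

hooks:
    # Custom preparation scripts to run.
    before_backup:
        - systemctl stop jellyfin
    after_backup:
        - systemctl start jellyfin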
This is a very simple configuration, which does the following steps:
Stops the Jellyfin server
Runs Borg on both the configuration & data directories – /etc/jellyfin & /var/lib/jellyfin.
Checks the repo & existing archives for consistency
Restarts the Jellyfin server.
Now, whenever the SQLite 🤮 databases backing the frontend decide to have a shitfit, it should be a relatively simple matter to restore to the last good backup. In my case I have a cronjob set to run every night. Once someone adds proper MySQL support, I will migrate over to a proper database server instance. 😉
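The nightly job is nothing fancy – just a cron entry that runs borgmatic – and a restore is a case of stopping Jellyfin, extracting the last archive, and starting it back up. Something along these lines (the schedule and paths are only illustrative):

# /etc/cron.d/borgmatic – run the prune/create/check cycle every night at 02:00
0 2 * * * root /usr/bin/borgmatic --verbosity 1

# Restoring after a database tantrum:
systemctl stop jellyfin
cd / && borgmatic extract --archive latest    # extracts into the current directory
systemctl start jellyfin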
So, it’s time to finish off the upgrades to the core storage server on my network. Now that a new motherboard, CPU & RAM have been obtained (Gigabyte GA-X58-USB3, Core i7 950, 12GB), along with new SAS/SATA HBAs for the disk rack, I can get everything fitted into place.
Proper branded LSI HBA cards are expensive, so I went with the cheaper option & obtained a pair of Dell H200 RAID cards. These ship with custom Dell firmware, but luckily can be crossflashed with a standard LSI firmware to become an LSI 9211-8i card – providing 8 lanes of either SAS or SATA connectivity on a pair of SFF-8087 ports. Flashing these cards was very simple, once I managed to work my way into the EFI shell on my main machine, which I was using to do the flashing. Find all the firmware files & required software here:
One thing I left out from the flashing was a BIOS – this means the boot process is sped up, but it also means the system BIOS cannot see the disks connected to the cards, so they’re not bootable. This isn’t a problem however, as I never plan on booting from the data storage disk array.
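For anyone wanting to repeat this, the rough sequence from the EFI shell looked something like the following – the firmware filename and SAS address here are placeholders, so use whatever comes in the firmware package and the address printed on your own card:

Shell> sas2flash.efi -listall                     # confirm the card is visible & note its SAS address
Shell> sas2flash.efi -o -e 6                      # erase the existing Dell firmware (don't reboot after this step!)
Shell> sas2flash.efi -o -f 2118it.bin             # flash the LSI 9211-8i IT-mode firmware
Shell> sas2flash.efi -o -sasadd 5xxxxxxxxxxxxxxx  # write the card's original SAS address back

The BIOS image would normally be flashed alongside the firmware with -b mptsas2.rom – that’s the step I skipped.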
The SAS2008 RoC (RAID on Chip) on these cards dissipates around 8.5W, so some active cooling is required to keep temperatures in check. I have attached a 40mm fan to each card’s factory heatsink, using M3x25mm screws. Getting the screws to grab the heatsink was the tricky bit – I needed to crimp the outer corners of the fins together slightly, so that when the screws are driven in, the gap is forced back open and grips the threads. The fans will be connected to spare headers on the motherboard for speed monitoring.
It was a struggle finding a motherboard with the required number of high-lane-count PCIe slots. Even on modern motherboards, there aren’t many within a reasonable price range that have more than a single x16 slot, and since I’m going with the new HBAs, a single slot is no longer enough. The motherboard I managed to obtain has a pair of x16 slots and an x4 slot (x16 physical), along with 3 x1 slots. The only downside is there’s no onboard graphics on this motherboard, so an external card is required. Another cheapie from eBay sorted this issue out.
Since I need to use the x16 slots for the disk controllers, this card will have to go into the x4 slot.
Here the board has been installed into the new chassis, along with its IO shield. Both HBA cards are jacked into the x16 slots, with the SAS/SATA loom cables attached. I did have to grab longer cables – the originals I had were only 500mm, definitely not long enough to reach the ports on these cards, so 1m cables are used. The fans are plugged in with extensions to a pair of the headers on the motherboard, but the board doesn’t seem to want to read RPM from them. Nevermind. While the fans are a little close to the adjacent cards, the heatsinks run just about warm to the touch, so there’s definitely enough airflow – not forgetting the trio of 120mm fans in the bulkhead just out of shot, creating a breeze right through the chassis.
Since the onboard SATA ports are in a better position, I was able to attach the boot SSD to the caddy properly, which helps tidy things up a bit. These slot into the 5¼” bays on the front of the chassis, above the disk cage.
To take up the excess cable length, and tidy things up, the data loom to the disk cage is cable-tied to self-adhesive saddles on the side of the chassis. This arrangement also helps cooling air flow.
With the new components, and the cabling tied up, things inside the chassis look much cleaner. I’ve rationalised the power cabling to the disk backplanes down to a pair of SATA power looms.
So I figured it was time to get a hardware update sorted for my network’s core storage server, which I have posted about before. The way I had the drives anchored to steel rails really doesn’t make moving or replacing disks easy, so a proper case needed to be sourced.
ServerCaseUK stocked 16-bay 4U chassis units, so one of these was ordered. These have 4 internal backplanes, with SFF-8087 Mini-SAS connections, so hooking into my existing 16-channel HBA card would be simple. In the current setup, the multi-lane cables are routed out via SFF-8088 connectors to the drive array, so this will tidy things up considerably.
The main data links are via these SFF-8087 connectors, each carrying 4 lanes of SATA.
Power is provided by 4x Molex connections, via SATA power adaptors (the good kind, which don’t create fire). There’s a 5th Molex hidden down the side of the last fan, which powers all 3 120mm fans.
The disks are kept cool by 3x 120mm hot-swap fans on the dividing wall. These don’t create much noise, and are always at full speed.
Here’s the back of the case after transplanting the motherboard & HBA from the old chassis. There’s a new 750W EVGA modular power supply, since I’ll be expanding the disk array as well. The boot SSD is currently sat on the bottom of the case since I don’t have a data cable long enough to mount it in the proper place as yet.
Here’s the fan controller, which takes care of the dual high speed Delta fans on the back wall of the chassis. This has a pair of temperature sensors – one on the HBA card’s heatsink, and the other on the fan wall monitoring the exhaust air temp of the drive array, to control the speed of the two fans. Temperatures are kept at around 30°C at all times.
Since the HBA card’s fan failed a while back, it’s had a couple of fans attached. The centrifugal one here works a little better than a massive 80mm axial fan, and is a little quieter. This is always run at full speed from a spare motherboard header. The temperature sensor feeding the fan controller can be seen here bonded to the heatsink. The 4 SFF-8087 cables are going off to the disk backplanes.
As mentioned before, there are a pair of 80mm Delta high-speed fans on the back wall of the case, to provide some extra cooling air flow just in case overheating manages to set in. These are usually spooled down to low RPM to keep them quiet.
Since space was getting a little tight, and I had some slots spare on the HBA, I decided to add some more disks to bring the active members up to 12 from 9 – increasing the available disk space from 28TB to 40TB.
     Creation Time : Wed Mar 11 16:01:01 2015
     Used Dev Size : 3906887488 (3725.90 GiB 4000.65 GB)
       Update Time : Mon Nov 18 14:13:35 2019
    Number   Major   Minor   RaidDevice State
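The grow itself was the usual mdadm routine – roughly the following, assuming the array is /dev/md0 and the three new disks appeared as /dev/sdj, /dev/sdk & /dev/sdl (device names will obviously differ on your system):

mdadm --add /dev/md0 /dev/sdj /dev/sdk /dev/sdl     # add the new disks as spares
mdadm --grow /dev/md0 --raid-devices=12 --backup-file=/root/md0-grow.backup    # reshape from 9 to 12 members
cat /proc/mdstat                                    # keep an eye on the reshape progress
resize2fs /dev/md0                                  # finally grow the filesystem (assuming ext4 sat directly on the array)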
The space expansion improves things on that front; I will also be adding a couple more spare disks to bring the number of disks up to the full 16, just in case of any failures.
There are still a couple of issues with this setup:
The motherboard & CPU are ancient. The current Intel Core 2 Quad, running 8GB of RAM, limits data throughput and, critically, the speed of mdadm data checks & rebuilds. The Core 2 Quad also runs at roughly the same temperature as the Sun’s core when under high load.
The SATA HBA is running 4 controllers on an expander, through a PCIe x4 link, which is a little slow due to congestion on the expander itself – although RAID6 brings some write-speed penalties of its own.
These are issues I will address shortly, with a replacement motherboard on the way!
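In the meantime, the check & rebuild speed can at least be watched and nudged a little with the standard md tunables – something like this, assuming the array is /dev/md0:

cat /proc/mdstat                                             # shows reshape/check/rebuild progress & current speed
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max     # current kernel-wide resync speed limits (KiB/s)
sysctl -w dev.raid.speed_limit_min=50000                     # raise the floor so checks don't crawl along
echo check > /sys/block/md0/md/sync_action                   # kick off a manual consistency check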