
Website Hosting Updates!

Over the past few weeks, the host I’ve been with for over 3 years, OVH, announced a rather large price increase of 20% because of Brexit – the current universal excuse to squeeze the customer for more cash. This change has pushed the price of my dedicated server solution with them to over £45 a month. Some napkin calculations gave me a figure of about £18 a month in extra power to run a small server locally, so I’ve decided to bring the hosting back onto my local network & run it from my domestic internet link, which at 200Mbit/s DL & 20Mbit/s UL should be plenty fast enough to handle the modest levels of traffic I usually get.

Obviously, some hardware was required for this, so I obtained this beauty cheap on eBay:

HP ProLiant MicroServer Gen 8

This is a Gen 8 HP ProLiant MicroServer, very small & quiet – perfect for the job. It came with 4GB of RAM installed from the factory, and a Celeron G1610T running at 2.3GHz. Both are a little limited, so some upgrades will be made to the system.

Disk Bays

4 SATA drive bays are located behind the magnetically-locked front door; there’s a 250GB boot disk in here, along with a pair of 500GB disks in RAID1 to handle the website files & databases. For my online file hosting site, the server has a backend NFS link direct to Volantis – my 28TB storage server. This arrangement keeps the large file storage off the web server’s disks & on a NAS, where it should be.
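
For the curious, the backend link is nothing exotic – just a standard NFS mount on the web server. A minimal sketch of the sort of fstab entry involved (the export path & mount point here are placeholders, not the real ones) would be:

volantis:/export/files   /srv/files   nfs   rw,hard,noatime,vers=3   0 0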

Extra RAM

First thing is a RAM upgrade to the full supported capacity of 16GB. Being a ProLiant server machine, it doesn’t take anything of a standard flavour: its requirement is unbuffered ECC DDR3, PC3-10600E or PC3-12800E (the E here denoting ECC). This memory is both eye-wateringly expensive & difficult to find anywhere in stock. It’s much cheaper & easier to find the ECC Registered variety, but alas that isn’t compatible.

Over the past 48 hours or so, I’ve been migrating everything over to the new baby server, with a couple of associated teething problems, but everything seems to have gone well so far. The remaining job to get everything running as it should is an external mail relay – sending any kind of email from a dynamic IP on a domestic ISP usually gets it spam-binned by the big providers instantly, regardless of whether it’s actually spam or not. More to come on that setup & configuring Postfix to use an external SMTP relay server soon!
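
As a preview of that relay setup, pointing Postfix at an external SMTP relay mostly comes down to a handful of lines in main.cf. A rough sketch, with the relay hostname & credentials file as placeholders rather than my actual provider:

# /etc/postfix/main.cf – relay sketch only, hostname & port are placeholders
relayhost = [smtp.relay.example.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous

The sasl_passwd map gets built with postmap /etc/postfix/sasl_passwd, then Postfix reloaded afterwards.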

If anyone does find something weird going on with the blog, do let me know via the contact page or comments!


Project Volantis – Storage Server Rebuild

For some time now I’ve been running a large disk array to store all the essential data for my network. The current setup has 10x 4TB disks in a RAID6 array under Linux MD.

Up until now the disks have been running in external Orico 9558U3 USB3 drive bays, through a PCIe x1 USB3 controller. However in this configuration there have been a few issues:

  • Congestion over the USB3 link. RAID rebuild speeds were severely limited to ~20MB/s in the event of a failure, and general data transfer was equally slow.
  • Drive dock reliability. The drive bays run a USB3-SATA controller with a port expander; a single drive failure would cause the controller to reset all disks on its bus, so instead of losing a single disk from the array, 5 would disappear at the same time.
  • Cooling. The factory-fitted fans in these bays are total crap – and very difficult to get at to change. A fan failure quickly allows the disks to heat up to temperatures that would cause failure.
  • Limited upgrade options. These bays are pretty expensive for what they are, and adding more disks to the USB3 bus would likely strangle the bandwidth even further.
  • Disk failures difficult to locate. The USB3 interface doesn’t pass the disk serial numbers on to the host OS, so working out which disk has actually failed is difficult (see the example below).
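
As a side note on that last point: once the disks sit behind a controller that passes serial numbers through, matching a failed md member back to a physical drive is as simple as something like the following (the column list is just an example):

lsblk -o NAME,SIZE,SERIAL,MODEL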

To remedy these issues, a proper SATA controller solution was required. Proper hardware RAID controllers are incredibly expensive, so they’re out of the question, and since I’m already using Linux MD RAID, I didn’t need a hardware controller anyway.

16-Port HBA

A quick search for suitable HBA cards showed me the IOCrest 16-port SATAIII controller, which is pretty low cost at £140. This card breaks out the SATA ports into standard SFF-8086 connectors, with 4 ports on each. Importantly the cables to convert from these server-grade connectors to standard SATA are supplied, as they’re pretty expensive on their own (£25 each).
This card gives me the option to expand the array to 16 disks eventually, although the active array will probably be kept at 14 disks with 2 hot spares; that will give a total capacity after parity of (14 − 2) × 4TB = 48TB.

SATA HBA

Here’s the card installed in the host machine, with the array running. One thing I didn’t expect was for the card to be crusted with activity LEDs. There appears to be one LED for each pair of disks, plus a couple of others which I would expect indicate activity on the backhaul link to PCIe. (I can’t be certain, as there isn’t any proper documentation anywhere for this card. It certainly didn’t come with any ;)).
I’m not too impressed with the fan that’s on the card – it’s a crap sleeve-bearing type, so I’ll be keeping a close eye on it for failure & will replace it with a high-quality ball-bearing fan when it finally croaks. The heatsink is definitely oversized for the job: with nothing installed above it, the card barely gets warm, which is definitely a good thing for life expectancy.

Update 10/02/17 – The stock fan is now dead as a doornail after only 4 months of continuous operation. Replaced with a high quality ball-bearing 80mm Delta fan to keep things running cool. As there is no speed sense line on the stock fan, the only way to tell it was failing was by the horrendous screeching noise of the failing bearings.

SCSI Controller

Above is the final HBA, installed in the PCIe x1 slot – a parallel SCSI U320 card that handles the tape backup drives. This card sits very close to the cooling fan of the SATA card and does make it run warmer, but not excessively so. Unfortunately the card is too long for the other PCIe socket – it fouls on the DIMM slots.

Backup Drives

The tape drives are LTO2 300/600GB for large file backup & DDS4 20/40GB DAT for smaller stuff. These were had cheap on eBay, with a load of tapes. Newer LTO drives aren’t an option due to cost.

The main disk array is currently built with 9 disks in service plus a single hot spare in case of disk failure; this gives a total size after parity of (9 − 2) × 4TB = 28TB:

/dev/md0:
        Version : 1.2
  Creation Time : Wed Mar 11 16:01:01 2015
     Raid Level : raid6
     Array Size : 27348211520 (26081.29 GiB 28004.57 GB)
  Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
   Raid Devices : 9
  Total Devices : 10
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Nov 14 14:28:59 2016
          State : active 
 Active Devices : 9
Working Devices : 10
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

           Name : Main-PC:0
           UUID : 266632b8:2a8a3dd3:33ce0366:0b35fad9
         Events : 773938

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       32        1      active sync   /dev/sdc
       9       8       96        2      active sync   /dev/sdg
      10       8      112        3      active sync   /dev/sdh
      11       8       16        4      active sync   /dev/sdb
       5       8      176        5      active sync   /dev/sdl
       6       8      144        6      active sync   /dev/sdj
       7       8      160        7      active sync   /dev/sdk
       8       8      128        8      active sync   /dev/sdi

      12       8        0        -      spare   /dev/sda

The disks used are Seagate ST4000DM000 Desktop HDDs, which at this point have ~15K hours on them, and show no signs of impending failure.
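
Those hour counts come straight from SMART. Something along these lines (the device name is just an example) pulls out the power-on hours & reallocated sector counts worth keeping an eye on:

smartctl -A /dev/sdd | grep -E 'Power_On_Hours|Reallocated_Sector'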

USB3 Speeds

Here’s a screenshot with the disk array fully loaded running over USB3. The aggregate speed on the md0 device is only 21,795KB/s (around 21MB/s). Extremely slow indeed.

This card is structured similarly to the external USB3 bays – a PCI Express bridge glues 4 Marvell 9215 4-port SATA controllers into a single x8 card. Bus contention may become an issue with all 16 ports in use, but so far, with 9 active devices, the performance increase is impressive. Adding another disk to the active array would certainly give everything a workout, as rebuilding with an extra disk will hammer both reads from the existing disks & writes to the new one.
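
Rebuild progress & per-device throughput are easy enough to watch from /proc/mdstat, and the MD rebuild speed caps can be lifted if the controller has the headroom. Roughly, with the values purely as examples:

# Watch the rebuild/reshape progress
watch cat /proc/mdstat

# Raise the MD rebuild speed limits (KB/s per device) – example values only
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=500000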

HBA Speeds

With all disks on the new controller, I’m sustaining read speeds of 180MB/s (pulling data off over the network). Write speeds are always going to be pretty pathetic with RAID6, as parity calculations have to be done; with Linux MD this is done by the host CPU, which is currently a Core2Duo E7500 at 2.96GHz. With this setup I get 40-60MB/s writes to the array with large files.
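
Those write figures come from big sequential copies; a quick & dirty way to reproduce the test is a direct-I/O dd run against the array, with the path & size below purely as examples:

# Rough sequential write test against the array mount point
dd if=/dev/zero of=/mnt/array/testfile bs=1M count=8192 oflag=direct status=progress
rm /mnt/array/testfile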

Disk Array

Since I don’t have a suitable case with built-in drive bays (again, they’re expensive), I’ve had to improvise with some steel strip to hold the disks in a stack. 3 DC-DC converters provide the regulated 12v & 5v for the disks from the main unregulated 12v system supply. Both the host system & the disks run from my central battery-backed 12v system, which acts as a large UPS for the lot.

The SATA power splitters were custom made; the connectors are Molex 67926-0001 IDC SATA power connectors, with 18AWG cable providing power to 4 disks in a string.

IDT Insertion Tool

These require the use of a special tool if you value your sanity, which is a bit on the expensive side at £25+VAT, but doing it without one is very difficult. You do get a very well made tool for the price though: the handle is anodised aluminium & the tool head itself is 300-series stainless steel.


Cheap eBay Molex-SATA Power Adaptors

Molex to Dual SATA Power

To do some upgrades to my NAS, I needed some SATA power adaptors to split the PSU out to the planned 16 disk drives. eBay has these for very little money; however, there’s a good reason for them being so cheap.

Wire Marking

The marking on the wire tells me it’s 18AWG, which should be good for 9.5A at an absolute maximum. However these adaptors are extremely light.

Wire Comparison

Here’s the cheapo eBay wire compared to proper 18AWG wire. The cores in the eBay adaptor are tiny – I’d guess about 24AWG, only good for about 3A. As disk drives pull about 2A from the +12v rail on startup to spin the platters up to speed, this thin wire is going to cause quite a voltage drop & possibly prevent the disks from operating correctly.
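
As a rough worked example (the resistance figures are approximate): 24AWG is around 84mΩ per metre, against roughly 21mΩ per metre for genuine 18AWG. Two disks spinning up at ~2A each is 4A through the adaptor, so over 30cm of cable (60cm of conductor out & back) that’s about 4A × 0.084Ω/m × 0.6m ≈ 0.2v lost in the thin stuff, roughly four times the drop of proper 18AWG, before the connector contacts are even considered.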


Raspberry Pi Timelapse – Resequencing Images

Sometimes while taking timelapse video on the Pi, it misses frames for no apparent reason. I have been playing with various combinations of disks & SATA cases to see what the bottleneck is; oddly enough, a faster drive actually made the problem worse!

Really Bad Frame Skipping

Here’s an example of some really bad frame skipping; this is with a frame interval of 1250ms, which has worked fine in the past. The disk used is a 750GB WD Black 7200RPM, so disk access time shouldn’t be an issue.

Since frame skipping is only rarely a problem in the timelapse videos I do, I’ve been searching for something to automatically renumber all the frames for processing into video. After writing my own script, which was a bit crusty, I came across a very handy script on SourceForge. It required a couple of small modifications to work correctly for what I want, but here’s the slightly modified version.

[snippet id=”1770″]

With the small modifications, it renumbers the images correctly for processing by AVConv.
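
For anyone who just wants the gist without pulling down the SourceForge script, the core idea is simply copying the frames into a gap-free numbering that AVConv’s image-sequence input will accept. A minimal sketch along those lines – the directory names & frame rate are placeholders, and this is not the snippet above:

#!/bin/bash
# Copy frames into a gap-free sequence so AVConv's %05d pattern can read them
n=0
mkdir -p resequenced
for f in frames/*.jpg; do
    cp "$f" "$(printf 'resequenced/img%05d.jpg' "$n")"
    n=$((n+1))
done
# Then assemble into video at 25fps
avconv -r 25 -i resequenced/img%05d.jpg -c:v libx264 timelapse.mp4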

More scripting to come when I sort out an automatic transcode kludge!

73s for now


USB-IDE/SATA Adaptor

Front

This is a device to use an IDE or SATA interface drive via a USB connection. Here is the front of the device, with the 2.5″ form factor IDE interface at the bottom.

PCB Top

PCB removed from the casing. USB cable exits the top, 12v DC power jack to the left.
SATA interface below the DC Jack.
The Molex connector below the SATA interface is the power output for the drive in use. This unit has a built-in 5v regulator.

PCB Bottom

Bottom of the PCB showing the interface IC.

Drive Adaptor

Adaptor that plugs into the unit’s 44-pin 2.5″ form factor IDE interface, converting it to standard 40-pin 3.5″ IDE.

Power Cable

Power pigtail with standard Molex & SATA power plugs.


Western Digital 160GB 2.5″ HDD

Top Of Drive With Label

This is a Western Digital drive, recently removed from my laptop after it died of a severe head crash.
The top of the drive can be seen here.

Top Removed From Drive

Here the cover has been removed from the drive, showing the platter, head arm & magnet. The yellow piece at the top left is the head parking ramp.

Head Arm of Drive

The head assembly of the drive is shown here. The head itself is on the left-hand end of the arm, in the plastic parking ramp. The other end of the arm holds the voice coil part of the head motor, surrounded by the magnet.

Bottom Of Drive with PCB

Bottom of drive, with controller PCB. SATA interface socket at bottom.

PCB removed from bottom of drive. Spindle motor connections & connections to the head unit can be seen on the bottom of the drive unit.

Controller PCB. Supports the cache, interface & motor controller ICs.

Closeup of the motor driver IC. This controls the speed of the spindle motor precisely at 5,400RPM, and also drives the voice coil motor that controls the position of the head arm over the platters.

Interface IC closeup. This IC receives signals from the head assembly & processes them for transmission over the SATA bus. It also holds the drive firmware and controls the motor driver IC & all other functions of the drive.

Cache Memory IC.