
OpenVPN Server Speed Tweaks

I’ve been running my own VPN so I can access my home-based servers from anywhere with an internet connection (not to mention the personal privacy & increased security it brings in this day & age of Government snooping).

I’m on a pretty quick connection from Virgin Media here in the UK, currently the fastest they offer:

Virgin Media

To do these tests, I used the closest test server to my VPN host machine, in this case Paris. This keeps the variables to a minimum. Testing without the VPN connection gave me this:

Paris Server Speed

I did expect a lower general speed to a server further away; this will have much to do with my ISP’s traffic management, network congestion, etc. So I now have a baseline to test my VPN throughput against.
The problem I’ve noticed with OpenVPN’s stock config is that connections are painfully slow – running over UDP on the usual port of 1194, the throughput was pretty pathetic:

Stock Config Speed

I did some reading on the subject; the first possible solution was to set the send/receive buffers to a specific value, rather than letting the system handle them. I also added options to have the server push these values to the clients, saving me the trouble of having to reissue all the client configurations.

# Fix the socket send/receive buffers at 384KB, rather than letting the OS decide
sndbuf 393216
rcvbuf 393216

# Push the same buffer sizes to connecting clients
push "sndbuf 393216"
push "rcvbuf 393216"

Unfortunately this option on its own didn’t work as well as I’d like; downstream speeds only jumped to 25Mb/s. In the stock config, the tunnel MTU & mssfix settings aren’t touched. Setting the tunnel MTU lower than the host link MTU (in my case the standard 1500) prevents packet fragmentation, and mssfix tells the client TCP sessions to limit the packet sizes they send, so that after OpenVPN has done its encryption & encapsulation the packets still don’t exceed the set size. This also helps prevent packet fragmentation.

# Keep the tunnel MTU below the 1500-byte link MTU & clamp TCP MSS to suit
tun-mtu 1400
mssfix 1360

VPN Tweaked

After adjusting these settings, the download throughput over the VPN link has shot up to 136Mb/s. Upload throughput hasn’t changed as this is limited by my connection to Virgin Media. Some more tweaking is no doubt possible to increase speeds even further, but this is fine for me at the moment.
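
If you want to take the speedtest servers out of the equation entirely, iperf3 run between a client & the VPN host gives a cleaner measure of raw tunnel throughput. A minimal sketch, assuming the server’s tunnel-side address is 10.8.0.1 (adjust for your own VPN subnet):

# On the VPN host:
iperf3 -s

# On a client, across the tunnel (-R reverses direction, i.e. measures server -> client / download):
iperf3 -c 10.8.0.1 -R -t 30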

 


Behringer DEQ2496 Mastering Processor

Bootscreen

I was recently given this unit, along with another Behringer sound processor, to repair, as both units had booting problems. This first one is a rather swish Mastering Processor, which has many features I’ll leave to Behringer to explain 😉

Input Board & Relays

All the inputs are on the back of this 19″ rackmount bit of kit; there’s nothing much on this PCB other than the connectors & a couple of switching relays.

Main Processor PCB

All the magic is done on the main processor PCB, which is host to 3 Analog Devices DSP processors:

ADSP-BF531 Blackfin DSP. This one is probably handling most of the audio processing, as it’s the most powerful DSP onboard at 600MHz. There’s a ROM above this on the board for the firmware & a single RAM chip. On the right are a pair of ADSP-21065 DSP processors at a lower clock rate of 66MHz. To the left is some glue logic to interface the user controls & dot-matrix LCD.

PSU Module

The PSU in this unit is a pretty standard looking SMPS, with some extra noise filtering & shielding. The main transformer is underneath the mu-metal shield in the centre of the board.


Project Volantis – Storage Server Rebuild

For some time now I’ve been running a large disk array to store all the essential data for my network. The current setup has 10x 4TB disks in a RAID6 array under Linux MD.

Up until now the disks have been running in external Orico 9558U3 USB3 drive bays, through a PCIe x1 USB3 controller. However in this configuration there have been a few issues:

  • Congestion over the USB3 link. RAID rebuild speeds were severely limited to ~20MB/s in the event of a failure. General data transfer was equally slow.
  • Drive dock general reliability. The drive bays run a USB3 – SATA controller with a port expander; a single drive failure would cause the controller to reset every disk on its bus. Instead of losing a single disk from the array, 5 would disappear at the same time.
  • Cooling. The factory-fitted fans in these bays are total crap – and very difficult to get at to change. A fan failure quickly allows the disks to heat up to temperatures that would cause failure.
  • Upgrade options are difficult. These bays are pretty expensive for what they are, and adding more disks to the USB3 bus would likely strangle the bandwidth even further.
  • Disk failures are difficult to locate. The USB3 interface doesn’t pass the disk serial number through to the host OS, so working out which disk has actually failed is a pain (see the sketch after this list).
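
For context, with the disks on a native SATA controller, a failed md member can be mapped back to a physical drive by its serial number. A rough sketch (device names are just examples):

# Find which member the array has marked as failed/removed
mdadm --detail /dev/md0

# Read the serial number straight off the suspect disk
smartctl -i /dev/sdg | grep -i serial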

To remedy these issues, a proper SATA controller solution was required. Proper hardware RAID controllers are incredibly expensive, so they’re out of the question, and since I’m already using Linux MD RAID, I didn’t need a hardware controller anyway.

16-Port HBA

A quick search for suitable HBA cards showed me the IOCrest 16-port SATAIII controller, which is pretty low cost at £140. This card breaks out the SATA ports into standard SFF-8086 connectors, with 4 ports on each. Importantly the cables to convert from these server-grade connectors to standard SATA are supplied, as they’re pretty expensive on their own (£25 each).
This card gives me the option to expand the array to 16 disks eventually, although the active array will probably be kept at 14 disks with 2 hot spares; this will give a total capacity of 48TB after parity.
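
When the time does come to add disks, growing an MD RAID6 array is a case of adding the new members & reshaping onto them. A rough sketch, assuming a new disk has appeared as /dev/sdm (hypothetical device name):

# Add the new disk – it joins the array as a spare initially
mdadm --add /dev/md0 /dev/sdm

# Reshape to use 10 active members instead of 9 (this takes a long time)
mdadm --grow /dev/md0 --raid-devices=10

# Keep an eye on reshape progress
cat /proc/mdstat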

SATA HBA

Here’s the card installed in the host machine, with the array running. One thing I didn’t expect was the card to be crusted with activity LEDs. There appears to be one LED for each pair of disks, plus a couple others which I would expect are activity on the backhaul link to PCIe. (I can’t be certain, as there isn’t any proper documentation anywhere for this card. It certainly didn’t come with any ;)).
I’m not too impressed with the fan that’s on the card – it’s a crap sleeve-bearing type, so I’ll be keeping a close eye on this for failure & will replace it with a high quality ball-bearing fan when it finally croaks. The heatsink is definitely oversized for the job; with nothing installed above it, the card barely gets warm, which is definitely a good thing for life expectancy.

Update 10/02/17 – The stock fan is now dead as a doornail after only 4 months of continuous operation. Replaced with a high quality ball-bearing 80mm Delta fan to keep things running cool. As there is no speed sense line on the stock fan, the only way to tell it was failing was by the horrendous screeching noise of the failing bearings.

SCSI Controller

Above is the final HBA, installed in the PCIe x1 slot – a parallel SCSI U320 card that handles the tape backup drives. This card is very close to the cooling fan of the SATA card, and does make it run warmer, but not excessively so. Unfortunately the card is too long for the other PCIe socket – it fouls on the DIMM slots.

Backup Drives

The tape drives are LTO2 300/600GB for large file backup & DDS4 20/40GB DAT for smaller stuff. These were picked up cheap on eBay, along with a load of tapes. Newer LTO drives aren’t an option due to cost.

The main disk array is currently built from 9 disks in service with a single hot spare in case of disk failure; this gives a total size after parity of 28TB (output from mdadm --detail /dev/md0):

/dev/md0:
        Version : 1.2
  Creation Time : Wed Mar 11 16:01:01 2015
     Raid Level : raid6
     Array Size : 27348211520 (26081.29 GiB 28004.57 GB)
  Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
   Raid Devices : 9
  Total Devices : 10
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Nov 14 14:28:59 2016
          State : active 
 Active Devices : 9
Working Devices : 10
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

           Name : Main-PC:0
           UUID : 266632b8:2a8a3dd3:33ce0366:0b35fad9
         Events : 773938

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       32        1      active sync   /dev/sdc
       9       8       96        2      active sync   /dev/sdg
      10       8      112        3      active sync   /dev/sdh
      11       8       16        4      active sync   /dev/sdb
       5       8      176        5      active sync   /dev/sdl
       6       8      144        6      active sync   /dev/sdj
       7       8      160        7      active sync   /dev/sdk
       8       8      128        8      active sync   /dev/sdi

      12       8        0        -      spare   /dev/sda

The disks used are Seagate ST4000DM000 Desktop HDDs, which at this point have ~15K hours on them, and show no signs of impending failure.
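
Power-on hours & reallocated sector counts are easy to keep an eye on now the disks sit on a native SATA controller. A quick check might look like this (the drive name is just an example):

# Report power-on hours & any reallocated sectors for one member disk
smartctl -A /dev/sdd | grep -E 'Power_On_Hours|Reallocated_Sector'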

USB3 Speeds

Here’s a screenshot with the disk array fully loaded, running over USB3. The aggregate speed on the md0 device is only 21795KB/s (~21MB/s). Extremely slow indeed.
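
For anyone wanting to check their own array, the same sort of per-disk & aggregate throughput can be watched from the terminal; a couple of examples (iostat comes from the sysstat package):

# Per-device & md0 throughput in MB/s, refreshed every 5 seconds
iostat -m 5

# Rebuild/reshape progress & speed
cat /proc/mdstat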

This card is structured similarly to the external USB3 bays – a PCI Express bridge glues 4 Marvell 9215 4-port SATA controllers into a single x8 card. Bus contention may become an issue with all 16 ports in use, but so far, with 9 active devices, the performance increase is impressive. Adding another disk to the active array would certainly give everything a workout, as reshaping onto an extra disk hammers both reads from the existing disks & writes to the new one.

HBA Speeds

With all disks on the new controller, I’m sustaining read speeds of 180MB/s (pulling data off over the network). Write speeds are always going to be pretty pathetic with RAID6, as parity calculations have to be done; with Linux MD this is handled by the host CPU, currently a Core2Duo E7500 at 2.96GHz. With this setup I get 40-60MB/s writes to the array with large files.
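
For a rough sequential test that takes the network out of the picture, a large write & read can be timed with dd. A minimal sketch, assuming the array is mounted at /mnt/array (the path is just an example):

# Sequential write test: 4GB, bypassing the page cache
dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=4096 oflag=direct

# Sequential read test of the same file, then clean up
dd if=/mnt/array/ddtest of=/dev/null bs=1M iflag=direct
rm /mnt/array/ddtest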

Disk Array

Since I don’t have a suitable case with built-in drive bays (again, they’re expensive), I’ve had to improvise with some steel strip to hold the disks in a stack. 3 DC-DC converters provide the regulated 12v & 5v for the disks from the main unregulated 12v system supply. Both the host system & the disks run from my central battery-backed 12v system, which acts like a large UPS.

The SATA power splitters were custom made; the connectors are Molex 67926-0001 IDC SATA power connectors, with 18AWG cable providing power to 4 disks in a string.

IDT Insertion Tool

These connectors require a special insertion tool if you value your sanity, which is a bit on the expensive side at £25+VAT, but terminating them without it is very difficult. You do get a very well made tool for the price though: the handle is anodised aluminium & the tool head itself is 300-series stainless steel.


Belkin F5U021 4-Port USB Hub

Top

This is an old USB 1.1 hub that was recently retired from service on some servers. Top of the unit visible here.

Bottom Label

Bottom label shows that this is a model F5U021 hub, a rather old unit.

PCB Front

The PCB is here removed from the casing; the indicator LEDs are along the bottom edge of the board & the power supply is on the left. Connectors on the top edge are external power, USB host, & the 4 USB outputs. The yellow devices are polyswitch fuses for the 500mA at 5v that each port must supply.

USB Hub IC

This is the USB Hub Controller IC, which is a Texas Instruments TUSB2046B device. Power filter capacitors next to the USB ports are visible here also, along with 2 of the polyswitches.

Power Supply

The power supply section of the unit supplies regulated 5v to the ports & regulated 3.3v to the hub controller IC. The large TO-220 device is the 5v regulator; the smaller IC just under the power selector switch is the 3.3v regulator for the hub IC. The switch selects between host power & external power for the hub.