I wrote a few weeks ago about replacing the hot water circulating pump on the boat with a new one, and mentioned that we’d been through several pumps over the years. After every replacement, an autopsy of the pump has revealed the failure mode: the first pump failed through old age & the limited life of its carbon brushes. The second failed due to thermal shock, when an airlock in the system caused the boiler to go a bit nuts through lack of water flow; the ceramic rotor in this one simply cracked.
The last pump, though, was mechanically worn, the bearings polished down just enough to cause the rotor to stick. This wear is caused by sediment in the system, which comes from corrosion of its various components: the radiators & skin tanks are steel, the engine block cast iron, the back boiler stainless steel & the Webasto heat exchanger aluminium, with various bits of copper pipe & hose tying the system together.
The use of dissimilar metals in a system is not particularly advisable, but in the case of the boat, it’s unavoidable. The antifreeze in the water does have anti-corrosive additives, but we were still left with the problem of all the various oxides of iron floating around the system acting like an abrasive. To solve this problem without having to go to the trouble of doing a full system flush, we fitted a magnetic filter:
This is just an empty container, with a powerful NdFeB magnet inserted into the centre. As the water flows in a spiral around the magnetic core, aided by the offset pipe connections, the magnet pulls all the magnetic oxides out of the water. It’s fitted into the circuit at the last radiator, where it’s accessible for the mandatory maintenance.
Now that the filter has been in for about a month, I decided it would be a good time to see how much muck had been pulled out of the circuit. I was rather surprised to see a 1/2″ thick layer of sludge coating the magnetic core! The disgusting water in the bowl below is what drained out of the filter before the top was pulled. (The water in the circuit isn’t generally this colour; I knocked some sludge loose from the filter core while isolating it.)
If all goes well, the level of sludge in the system will over time be reduced to a very low level, with the corrosion inhibitor helping things along. This should result in far fewer expensive pump replacements!
For some time now I’ve been running a large disk array to store all the essential data for my network. The current setup has 10x 4TB disks in a RAID6 array under Linux MD.
Up until now the disks have been running in external Orico 9558U3 USB3 drive bays, through a PCIe x1 USB3 controller. However, in this configuration there have been a few issues:
Congestion over the USB3 link. RAID rebuild speeds were severely limited, to ~20MB/s, in the event of a failure; general data transfer was equally slow.
Drive dock general reliability. The drive bays run a USB3 – SATA controller with a port expander; a single drive failure would cause the controller to reset all disks on its bus. Instead of losing a single disk in the array, 5 would disappear at the same time.
Cooling. The factory fitted fans in these bays are total crap – and very difficult to get at to change. A fan failure quickly allows the disks to heat up to temperatures that would cause failure.
Upgrade options difficult. These bays are pretty expensive for what they are, and adding more disks to the USB3 bus would likely strangle the bandwidth even further.
Disk failures difficult to locate. The USB3 interface doesn’t pass the disk serial numbers through to the host OS, so working out which physical disk has actually failed is difficult.
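For comparison, once the disks hang off a native SATA controller the kernel exposes persistent device names that embed the model & serial number, which makes matching a failed array member to a physical drive straightforward. A minimal sketch, assuming Linux with udev; the listing & serial below are placeholders, not real values:

```shell
# What an entry under /dev/disk/by-id/ looks like (illustrative values):
sample='ata-ST4000DM000_Z30XXXXX -> ../../sdb'
# The serial is the part after the last underscore, before the arrow:
serial=$(echo "$sample" | sed 's/ ->.*//; s/.*_//')
echo "$serial"
# On a live system: ls -l /dev/disk/by-id/
# or read it straight from the drive with smartmontools:
#   smartctl -i /dev/sdb | grep 'Serial Number'
```

The serial printed on the drive label can then be matched against the failed device reported by mdadm.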
To remedy these issues, a proper SATA controller solution was required. Proper hardware RAID controllers are incredibly expensive, so they were out of the question; since I’m already using Linux MD RAID, I didn’t need a hardware controller anyway.
A quick search for suitable HBA cards showed me the IOCrest 16-port SATAIII controller, which is pretty low cost at £140. This card breaks out the SATA ports into standard SFF-8086 connectors, with 4 ports on each. Importantly the cables to convert from these server-grade connectors to standard SATA are supplied, as they’re pretty expensive on their own (£25 each).
This card gives me the option to expand the array to 16 disks eventually, although the active array will probably be kept at 14 disks with 2 hot spares, giving a total capacity after parity of 48TB.
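The arithmetic behind that figure, as a quick sketch: RAID6 spends two disks’ worth of space on parity, so usable capacity is (active disks - 2) × disk size:

```shell
# RAID6 usable capacity: two disks' worth of space goes to parity.
disks=14       # planned active disks (hot spares don't count)
parity=2       # RAID6 parity overhead, in disks
size_tb=4      # per-disk capacity
usable=$(( (disks - parity) * size_tb ))
echo "${usable}TB usable"
```

The same sum for 9 in-service disks gives (9 - 2) × 4TB = 28TB.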
Here’s the card installed in the host machine, with the array running. One thing I didn’t expect was the card to be crusted with activity LEDs. There appears to be one LED for each pair of disks, plus a couple others which I would expect are activity on the backhaul link to PCIe. (I can’t be certain, as there isn’t any proper documentation anywhere for this card. It certainly didn’t come with any ;)).
I’m not too impressed with the fan that’s on the card – it’s a crap sleeve-bearing type, so I’ll be keeping a close eye on it for failure & will replace it with a high quality ball-bearing fan when it finally croaks. The heatsink is definitely oversized for the job; with nothing installed above it, the card barely gets warm, which is definitely a good thing for life expectancy.
Update 10/02/17 – The stock fan is now dead as a doornail after only 4 months of continuous operation. Replaced with a high quality ball-bearing 80mm Delta fan to keep things running cool. As there is no speed sense line on the stock fan, the only way to tell it was failing was by the horrendous screeching noise of the failing bearings.
Above is the final HBA, installed in the PCIe x1 slot – a parallel SCSI U320 card that handles the tape backup drives. This card sits very close to the cooling fan of the SATA card, and does make it run warmer, but not excessively so. Unfortunately the card is too long for the other PCIe socket – it fouls on the DIMM slots.
The tape drives are LTO2 300/600GB for large file backup & DDS4 20/40GB DAT for smaller stuff. These were picked up cheap on eBay, with a load of tapes. Newer LTO drives aren’t an option due to cost.
The main disk array is currently built as 9 disks in service with a single hot spare in case of disk failure; this gives a total size after parity of 28TB:
Linux MD Detail:
    Creation Time : Wed Mar 11 16:01:01 2015
    Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
      Update Time : Mon Nov 14 14:28:59 2016
    Number   Major   Minor   RaidDevice   State
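As a sanity check on those numbers: mdadm reports the Used Dev Size in 1KiB blocks, and converting it recovers the GiB figure shown (the full report on a live system comes from mdadm --detail /dev/md0):

```shell
# 'Used Dev Size' from mdadm --detail is in 1KiB blocks:
blocks=3906887360
# 1GiB = 1048576 KiB; round to two places as mdadm does:
gib=$(awk "BEGIN{printf \"%.2f\", $blocks/1048576}")
echo "${gib} GiB"
```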
The disks used are Seagate ST4000DM000 Desktop HDDs, which at this point have ~15K hours on them, and show no signs of impending failure.
Here’s a screenshot with the disk array fully loaded running over USB3. The aggregate speed on the md0 device is only 21795KB/s. Extremely slow indeed.
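That headline figure converts as below; on a live array, /proc/mdstat and the md sysctls are the places to watch & tune rebuild speed (the sysctl value shown is illustrative, not a recommendation):

```shell
# The aggregate USB3 figure in MB/s (using 1MB = 1024KB):
kbs=21795
mbs=$(( kbs / 1024 ))
echo "${mbs}MB/s"
# On a live array, rebuild progress & speed appear in /proc/mdstat:
#   cat /proc/mdstat
# and the per-device rebuild speed floor/ceiling are tunable:
#   sysctl -w dev.raid.speed_limit_min=50000   # KB/s, illustrative value
```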
This card is structured similarly to the external USB3 bays – a PCI Express bridge glues 4 Marvell 9215 4-port SATA controllers into a single x8 card. Bus contention may become an issue with all 16 ports in use, but so far, with 9 active devices, the performance increase is impressive. Adding another disk to the active array will certainly give everything a workout, as rebuilding with an extra disk will hammer both reads from the existing disks & writes to the new one.
With all disks on the new controller, I’m sustaining read speeds of 180MB/s (pulling data off over the network). Write speeds are always going to be pretty pathetic with RAID6, as parity calculations have to be done; with Linux MD this is done by the host CPU, currently a Core2Duo E7500 at 2.96GHz. With this setup I get 40-60MB/s writes to the array with large files.
Since I don’t have a suitable case with built-in drive bays (again, they’re expensive), I’ve had to improvise with some steel strip to hold the disks in a stack. 3 DC-DC converters provide the regulated 12v & 5v for the disks from the main unregulated 12v system supply. Both the host system & the disks run from my central battery-backed 12v system, which acts as a large UPS.
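A rough power budget for that 12v supply, using assumed (not measured) per-drive figures; a typical 3.5″ desktop drive pulls roughly 2A from the 12v rail during spin-up, far more than once it’s spinning:

```shell
# Assumed ballpark figures for 3.5" desktop drives (not measured here):
disks=10
spinup_amps=2      # 12v rail draw per disk during spin-up, approximate
running_watts=8    # rough per-disk draw once spinning
echo "spin-up surge: $(( disks * spinup_amps ))A at 12v"
echo "running load:  $(( disks * running_watts ))W"
```

The spin-up surge is what sizes the DC-DC converters, unless staggered spin-up is available.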
The SATA power splitters were custom made, the connectors are Molex 67926-0001 IDC SATA power connectors, with 18AWG cable to provide the power to 4 disks in a string.
These connectors require the use of a special crimp tool if you value your sanity, which is a bit on the expensive side at £25+VAT, but doing the job without one is very difficult. You do get a very well made tool for the price though: the handle is anodised aluminium & the tool head itself is 300-series stainless steel.
Here’s a cheap PSU from the treasure trove of junk that is eBay, rated at a rather beefy 400W of output at 12v – 33A! These industrial-type PSUs from name brands like TDK-Lambda or Puls are usually rather expensive, so I was interested to find out how much of a punishment these cheap Chinese versions will take before grenading. In my case this PSU is to be pushed into float-charging a large lead-acid battery bank, which, when discharged, will try to pull as many amps from the charger as can be provided.
These PSUs are universal input, the mains voltage range selectable by a switch on the other side of the PSU, below. The output voltage is also trimmable from the factory setting, an important feature for battery charging, as the output needs to be sustained at 13.8v rather than the flat 12v as shipped.
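The 13.8v figure comes straight from lead-acid chemistry: float voltage is commonly quoted at around 2.3v per cell (an assumed typical figure; check the battery datasheet), and a nominal 12v bank has 6 cells:

```shell
cells=6             # cells in a nominal 12v lead-acid bank
float_per_cell=2.3  # volts per cell, typical quoted float voltage
float=$(awk "BEGIN{printf \"%.1f\", $cells * $float_per_cell}")
echo "${float}v float"
```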
Mains connections & the low voltage outputs are on beefy screw terminals. The output voltage adjustment potentiometer & output indicator LED are on the left side.
The cooling fan for the unit, which pulls air through the casing rather than blowing into it, is a cheap sleeve-bearing 60mm fan. No surprises here. I’ll probably replace this with a high-quality ball-bearing fan, to save the PSU from inevitable fan failure & overheating.
The PCB tracks are generously laid out on the high current output side, but there are some primary/secondary clearance issues in a couple of places. Lindsay Wilson over at Imajeenyus.com did a pretty thorough work-up on the fineries of these PSUs, so I’ll leave most of the in-depth stuff to his link. There’s also a modification of this PSU for a wider voltage adjustment range, which I haven’t done in this case as the existing adjustment is plenty wide enough for battery charging duty.
The PCB is laid out in the usual fashion for these PSUs, with the power path taking a U-route across the board. Mains input is lower left, with some filtering. Main diode bridge in the centre, with the voltage selection switch & then the main filter caps. Power is then switched into the transformer by the pair of large transistors on the right before being rectified & smoothed on the top left.
The pair of main switching devices are mounted to the casing with thermal compound & an insulating pad. To bridge the gap there’s a chunk of aluminium which also provides some extra heatsinking.
The PSU is controlled by a jelly-bean TL494 PWM controller IC. No active PFC in this cheap supply so the power factor is going to be very poor indeed.
Input protection & filtering is rather simple, with the usual fuse, MOV, filter capacitor & common-mode choke.
Beefy 30A dual diodes on the DC output side, mounted in the same fashion as the main switching transistors.
Current measurement is done by these large wire links in the current path, selectable for different models with different output ratings.
The output capacitors were just floating around in the breeze, one of them having already broken its solder joints in shipping! After reflowing the pads on all the capacitors, some hot glue was flowed around them to stop any further movement.
This supply has now been in service for a couple of weeks at a constant 50% load, with the occasional hammering to recharge the battery bank after a power failure. At 13A the supply barely even gets warm, while at a load high enough to make 40A-rated cable get uncomfortably warm (I didn’t manage to get a current reading, as my instruments don’t currently go high enough), the PSU was hot around the power semiconductors, but seemed to cope at full load perfectly well.