Here’s a piece of tech that has become all the more important recently, as device battery capacities have grown: a quick charger. This unit supports Qualcomm’s Quick Charge 3.0 standard, in which the device being charged can negotiate a higher-power link with the charger by increasing the bus voltage past the usual 5v.
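As a rough sketch of how that negotiation works: in QC3’s continuous mode the device pulses the D+/D- lines to request 200mV increments or decrements, and the charger steps its output within its rated window. The function and limits below are illustrative only (this unit tops out at 12v):

```python
# Sketch of Quick Charge 3.0 "continuous mode" voltage negotiation.
# The device requests 200 mV increments/decrements by pulsing D+/D-;
# the charger steps its output, clamped to its own rated window.
# Function name and limits are illustrative, not from any real API.

STEP_V = 0.2          # QC3 step size: 200 mV per pulse
V_MIN, V_MAX = 3.6, 12.0

def apply_pulses(v_bus, pulses):
    """Apply a list of 'inc'/'dec' requests, clamping to charger limits."""
    for p in pulses:
        if p == "inc":
            v_bus = min(V_MAX, round(v_bus + STEP_V, 1))
        elif p == "dec":
            v_bus = max(V_MIN, round(v_bus - STEP_V, 1))
    return v_bus

# Start at the USB default of 5v and request 9v: (9 - 5) / 0.2 = 20 steps
v = apply_pulses(5.0, ["inc"] * 20)
```

Requests past the charger’s limits are simply ignored, which is why the clamping matters: a device can keep pulsing, but the bus never leaves the rated window.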
The casing feels rather nice on this unit, sturdy & well designed. All the legends on the case are laser marked, apart from the front side logo which is part of the injection moulding.
The power capacity of this charger is pretty impressive: the two QC3 outputs supply 3.6-6.5v at 3A, up to 12v at 1.5A. Standard USB charging on the other 3 ports is limited to 4.8A in total.
Two of the 5 USB ports – the QC3 pair – are colour coded blue; the other 3 are standard 5v ports. The only thing that doesn’t make sense in the ratings is the overall current rating of the 5v supply (4.8A) versus the rated current of each of those ports (2.4A) – three ports at 2.4A comes to 7.2A, well over the 4.8A total.
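The arithmetic makes the mismatch obvious – the per-port figures must be maxima for a single loaded port, not a simultaneous guarantee:

```python
# Label ratings from this charger: three standard 5v ports at 2.4A each,
# but the 5v rail is only rated 4.8A overall.
PORTS = 3
PER_PORT_A = 2.4
RAIL_LIMIT_A = 4.8

sum_of_ports = PORTS * PER_PORT_A      # 7.2A if every port is maxed out
shared_budget = RAIL_LIMIT_A / PORTS   # only 1.6A each with all three loaded
```

So with all three standard ports in use at once, each device gets well under its advertised 2.4A.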
The casing is glued together at the seam, but it gave in to some percussive attack with a screwdriver handle. The inside of this supply is mostly hidden by the large heatspreader on the top.
This is a nicely designed board: the creepage distances are at least 8mm between the primary & secondary sides, and the bottom also has a conformal coating, with extra silicone around the primary-side switching transistor pins, presumably to reduce the chance of the board flashing over between the closely-spaced pins.
On the lower 3 USB ports can be seen the 3 SOT-23 USB charge control ICs. These are probably similar to the Texas Instruments TPS2514 controllers, which I’ve experimented with before; however, I can’t read the part numbers due to the conformal coating. The other semiconductors on this side of the board are part of the voltage feedback circuits for the SMPS. The 5v supply optocoupler is at the bottom centre of the board.
Desoldering the pair of primary side transistors allowed me to easily remove the heatspreader from the supply. There’s thermal pads & grease over everything to get rid of the heat. Here can be seen there are two transformers, forming completely separate supplies for the standard USB side of things & the QC3 side. Measuring the voltages on the main filter capacitors showed me the difference – the QC3 supply is held at 14.2v, and is managed through other circuits further on in the power chain. There’s plenty of mains filtering on the input, as well as common-mode chokes on the DC outputs before they reach the USB ports.
Here’s where the QC3 magic happens, a small DC-DC buck converter for each of the two ports. The data lines are also connected to these modules, so all the control logic is located on these too. The TO-220 device to the left is the main rectifier.
Here’s another random gadget for teardown, this time an IR remote control repeater module. These would be used where you need to operate a DVD player, set top box, etc. in a different room from the TV you happen to be watching. An IR receiver sends its signal down to the repeater box, which then drives IR LEDs to repeat the signal.
Not much to say about the exterior of this module: the IR input is on the left, where up to 3 receivers can be connected. The outputs are on the right, where up to 6 repeater LEDs can be plugged in. Connections are made through standard 3.5mm jacks.
Not much inside this one at all: there are 6 transistors, each driving an LED output. This “dumb” configuration keeps things very simple, as no signal processing has to be done. Power is provided either by a 12v input, which is fed through a 7805 linear regulator, or directly from USB.
For some time now I’ve been running a large disk array to store all the essential data for my network. The current setup has 10x 4TB disks in a RAID6 array under Linux MD.
Up until now the disks have been running in external Orico 9558U3 USB3 drive bays, through a PCIe x1 USB3 controller. However in this configuration there have been a few issues:
Congestion over the USB3 link. RAID rebuild speeds were severely limited, to ~20MB/s, in the event of a failure. General data transfer was equally slow.
Drive dock general reliability. The drive bays run a USB3-SATA controller with a port expander, so a single drive failure would cause the controller to reset all disks on its bus. Instead of losing a single disk in the array, 5 would disappear at the same time.
Cooling. The factory fitted fans in these bays are total crap – and very difficult to get at to change. A fan failure quickly allows the disks to heat up to temperatures that would cause failure.
Upgrade options difficult. These bays are pretty expensive for what they are, and adding more disks to the USB3 bus would likely strangle the bandwidth even further.
Disk failure difficult to locate. The USB3 interface doesn’t pass on the disk serial number to the host OS, so working out which disk has actually failed is difficult.
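That last point is worth a quick illustration: with a native SATA controller, Linux exposes the model & serial in /dev/disk/by-id, so a dead drive can be matched to the label on the physical unit. The entry below is a made-up example in the usual ata-&lt;model&gt;_&lt;serial&gt; format (the serial is fictional):

```python
# A /dev/disk/by-id entry encodes the drive's model string and serial,
# which is exactly the information the USB3 bays were hiding.
# This name is a hypothetical example, not a real drive of mine.
name = "ata-ST4000DM000-1F2168_Z301ABCD"

# Strip the "ata-" transport prefix, then split model from serial.
model, serial = name.split("-", 1)[1].rsplit("_", 1)
```

With this, mdadm can report a failed device and the serial leads straight to the right caddy – no guesswork.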
To remedy these issues, a proper SATA controller solution was required. Proper hardware RAID controllers are incredibly expensive, so they’re out of the question, and since I’m already using Linux MD RAID, I didn’t need a hardware controller anyway.
A quick search for suitable HBA cards showed me the IOCrest 16-port SATAIII controller, which is pretty low cost at £140. This card breaks out the SATA ports into standard SFF-8086 connectors, with 4 ports on each. Importantly the cables to convert from these server-grade connectors to standard SATA are supplied, as they’re pretty expensive on their own (£25 each).
This card gives me the option to expand the array to 16 disks eventually, although the active array will probably be kept at 14 disks with 2 hot spares, this will give a total capacity of 48TB.
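RAID6 always gives up two disks’ worth of space to parity, whatever the array size, so the capacity figures work out as follows (a trivial check, with sizes in decimal TB as the drives are sold):

```python
# RAID6 usable capacity: two disks' worth of parity regardless of
# array size, so usable space is (n - 2) * disk size.
def raid6_capacity_tb(n_active, disk_tb=4):
    assert n_active >= 4, "RAID6 needs at least 4 disks"
    return (n_active - 2) * disk_tb

current = raid6_capacity_tb(9)    # 9 active disks today -> 28TB
planned = raid6_capacity_tb(14)   # 14 active disks eventually -> 48TB
```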
Here’s the card installed in the host machine, with the array running. One thing I didn’t expect was the card to be crusted with activity LEDs. There appears to be one LED for each pair of disks, plus a couple others which I would expect are activity on the backhaul link to PCIe. (I can’t be certain, as there isn’t any proper documentation anywhere for this card. It certainly didn’t come with any ;)).
I’m not too impressed with the fan that’s on the card – it’s a crap sleeve bearing type, so I’ll be keeping a close eye on it for failure & will replace it with a high quality ball-bearing fan when it finally croaks. The heatsink is definitely oversized for the job: with nothing installed above it, the card barely gets warm, which is definitely a good thing for life expectancy.
Update 10/02/17 – The stock fan is now dead as a doornail after only 4 months of continuous operation. Replaced with a high quality ball-bearing 80mm Delta fan to keep things running cool. As there is no speed sense line on the stock fan, the only way to tell it was failing was by the horrendous screeching noise of the failing bearings.
Here’s the final HBA, installed in the PCIe x1 slot above – a parallel SCSI U320 card that handles the tape backup drives. This card is very close to the cooling fan of the SATA card, and does make it run warmer, but not excessively so. Unfortunately the card is too long for the other PCIe socket – it fouls on the DIMM slots.
The tape drives are LTO2 300/600GB for large file backup & DDS4 20/40GB DAT for smaller stuff. These were picked up cheap on eBay, with a load of tapes. Newer LTO drives aren’t an option due to cost.
The main disk array is currently built as 9 disks in service with a single hot spare in case of disk failure; this gives a total size after parity of 28TB:
Linux MD Detail:
     Creation Time : Wed Mar 11 16:01:01 2015
     Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
       Update Time : Mon Nov 14 14:28:59 2016
    Number   Major   Minor   RaidDevice State
The disks used are Seagate ST4000DM000 Desktop HDDs, which at this point have ~15K hours on them, and show no signs of impending failure.
Here’s a screenshot with the disk array fully loaded running over USB3. The aggregate speed on the md0 device is only 21795KB/s. Extremely slow indeed.
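To put that number into perspective: a rebuild has to read or write an entire 4TB disk, so at that rate the array would sit degraded for a couple of days (rough figures, using decimal TB as the drives are sold):

```python
# Rough rebuild-time estimate at the USB3-limited speed: the whole
# 4TB disk has to be processed at the observed ~21.8MB/s.
DISK_MB = 4_000_000      # ~4TB in MB (decimal, as drives are sold)
SPEED_MBS = 21.795       # observed aggregate: 21795KB/s

hours = DISK_MB / SPEED_MBS / 3600   # roughly 51 hours of degraded running
```

Two-plus days with no redundancy to spare in a RAID6 that has already lost a disk is not a comfortable place to be.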
This card is structured similarly to the external USB3 bays – a PCI Express bridge glues 4 Marvell 9215 4-port SATA controllers into a single x8 card. Bus contention may become an issue with all 16 ports in use, but so far, with 9 active devices, the performance increase is impressive. Adding another disk to the active array would certainly give everything a workout, as rebuilding with an extra disk will hammer both reads from the existing disks & writes to the new one.
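A back-of-envelope contention check, on the assumption that each Marvell controller hangs off a single PCIe 2.0 lane behind the bridge (I can’t confirm the internal layout, given the total lack of documentation for this card):

```python
# Worst-case per-disk bandwidth if each 4-port controller shares one
# PCIe 2.0 lane. The lane figure is approximate usable throughput,
# and the x1-per-controller layout is an assumption about this card.
LANE_MBS = 500           # ~usable PCIe 2.0 x1 bandwidth
PORTS_PER_CTRL = 4

per_disk_floor = LANE_MBS / PORTS_PER_CTRL   # with all 4 ports busy at once
```

Even that worst case comfortably beats the ~20MB/s the whole array managed over USB3.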
With all disks on the new controller, I’m sustaining read speeds of 180MB/s (pulling data off over the network). Write speeds are always going to be pretty pathetic with RAID6, as parity calculations have to be done. With Linux MD this is done by the host CPU, which is currently a Core2Duo E7500 at 2.96GHz; with this setup, I get 40-60MB/s writes to the array with large files.
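The reason those parity calculations hurt: every RAID6 write updates two parity blocks, P (a plain XOR across the stripe) and Q (which needs Galois-field multiplies – the expensive part). A minimal sketch of just the P side shows the recovery idea:

```python
# P parity in RAID6 is a byte-wise XOR across the data blocks of a
# stripe; losing any one block, it can be rebuilt by XORing the
# survivors with P. (Q parity, not shown, covers a second failure.)

def xor_blocks(blocks):
    """XOR equal-length byte-strings together (the P parity of a stripe)."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

stripe = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]   # 3 tiny data blocks
p = xor_blocks(stripe)

# Lose the middle block: XOR of the survivors plus P gets it back.
recovered = xor_blocks([stripe[0], stripe[2], p])
```

Every write means re-running this (plus the Galois-field maths for Q) over the stripe on the host CPU, which is why an old Core2Duo caps the array at 40-60MB/s.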
Since I don’t have a suitable case with built-in drive bays (again, they’re expensive), I’ve had to improvise with some steel strip to hold the disks in a stack. 3 DC-DC converters provide the regulated 12v & 5v for the disks from the main unregulated 12v system supply. Both the host system & the disks run from my central battery-backed 12v system, which acts as a large UPS for this.
The SATA power splitters were custom made, the connectors are Molex 67926-0001 IDC SATA power connectors, with 18AWG cable to provide the power to 4 disks in a string.
These require the use of a special tool if you value your sanity, which is a bit on the expensive side at £25+VAT, but doing the job without one is very difficult. You do get a very well made tool for the price though: the handle is anodised aluminium & the tool head itself is 300-series stainless steel.