It’s been a good nine months now since I originally blogged about my efforts to reduce the power usage of the in-house server. Back then, that involved taking one of my biggest sites from Umbraco 7 to Umbraco 9, where huge performance improvements in the CMS allowed me to run the already quite efficient server processor in a lower power state, cutting about 20% off power consumption. As I alluded to at the end of that post, I did move to Umbraco 10.2 not long afterwards, though the power savings from 9.2 to 10.2 were much smaller so it never got much of a mention at the time. Much more recently, however, I’ve been heavily revisiting this ‘green’ project, this time really getting under the bonnet of the infrastructure, so I thought it time for a little update.
In December, I finally got round to adding an Uninterruptible Power Supply (UPS) into the mix for the in-house server and associated networking infrastructure. Or to be precise, servers! There were actually both a Windows-based server and a Kubuntu Linux server running off mini PCs powering the home network setup. The Windows one mostly handles websites, media serving, email and .NET workloads, while the other runs TVHeadend (for distributing TV over IP around the house to any device), PiHole (for stripping out heavy advertising and trackers) and an internal video processor, talked to by the Windows server, that was just impossible to get running natively on Windows. (Yes, this is all still my hobby stuff if you’re wondering – I’m honestly not running Google from here.) So to keep the bare minimum running in the event of a power failure, this new UPS needed to cover two mini PCs, the Openreach VDSL modem, a network switch and the house router, which also has an Android phone constantly tethered to it to provide an emergency 4G backup.

In the first UPS tests it could run all of this for just shy of 60 minutes, which is more than enough for the average power cut and plenty of time to switch over to my backup hosting in another location if an outage looked likely to go on longer. But this is where my interest in cutting power was really piqued. Reducing power consumption both cuts energy costs and makes the whole setup greener, which has been on everyone’s radar especially in recent times, but now every watt saved also increases the length of time any battery backup will last. So… time to start looking at what could be trimmed!
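The maths behind that last point is simple enough to sketch in a few lines of Python. The battery capacity below is a hypothetical figure (my UPS doesn’t quote its usable watt-hours), chosen so the numbers line up with the “just shy of 60 minutes” observed, and the model assumes runtime scales linearly with load, which real batteries only approximate:

```python
def runtime_minutes(battery_wh: float, load_w: float) -> float:
    """Rough UPS runtime estimate in minutes for a constant load."""
    return battery_wh / load_w * 60

# Hypothetical: ~40 Wh of usable capacity at an assumed ~40 W total load
print(runtime_minutes(40, 40))  # 60.0 minutes
print(runtime_minutes(40, 35))  # every 5 W trimmed buys several more minutes
```

Crude as it is, it makes the incentive obvious: the runtime gain from each watt shaved grows as the total load falls.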
The first and biggest candidate, of course, was the Linux server. This has run for the past few years off a low-powered Lenovo mini PC, the sort you see bought in bulk in big corporate settings like the NHS, so it was quite efficient to start with, usually drawing about 10-15 W (measured at the mains socket using a plug-through meter) when the quad-‘core’ i3 processor was in its lowest power state. This is probably the point where a lot of people would chip in with “why not use a Raspberry Pi here?”, especially given the PiHole project name came from the fact it’s light enough to run on one. I do actually have a Pi, having gotten one long ago before they became like gold dust, and it does indeed sip considerably less power than this. However, as mentioned above, this server does a lot of video processing and demuxing of broadcast terrestrial and satellite streams to send across the internal network, and once these are fully in use that’s just too much strain for the Pi to handle.
That said, running a server which is idle 95% of the time solely so it has enough headroom for the 5% of the time it needs it is both annoying and wasteful. And that’s where plan B came in: why not just move this onto a Hyper-V (Microsoft’s virtualisation solution) instance within the other, already running, Windows box? Both machines have a lot of idle time, so this would be a much better way to balance that wasted load. It took quite a bit of fiddling, mainly as the Linux box was previously running on some very old legacy BIOS boot options and a historic drive configuration. That’s very easy to clone to a Hyper-V Generation 1 machine, but Generation 1 instances have a lot of limits on hardware passthrough, and I also quickly found I’d get some weird graphical corruption in the VM on first boot (fixable by switching out of and back into the GUI – yes, I have a GUI on Linux, sue me – but still untidy). Generation 2 solved that problem, but also requires a UEFI-bootable drive to work. Thankfully there’s a tool available for Ubuntu, which the Linux server was already running, called boot-repair, normally designed for converting a system from legacy boot to UEFI boot natively. After doing this, the updated disk image would boot perfectly within a Generation 2 Hyper-V machine, leaving only a few software configuration niggles to sort out – most of them down to Linux being Linux and renaming my network interface with the move to virtual hardware, but you expect to have to fix some things. That’s one whole server removed, or 10-15+ watts off the overall load, and with no huge increase in demand on the Windows machine either.
But having been able to do this, why stop there? Those who remember the previous blog post may recall a graph showing the drop in usage after turning down the Windows server’s speed settings, recorded from a smart plug energy monitor sitting in front of all these devices. You’ll notice there’s no such graph this time. The reason? That smart plug takes about 2 W just to keep itself on a WiFi connection and ready for commands, whether its switched output is on or off. While that isn’t a lot, running it 24/7 on devices I’d never plan to turn off anyway makes for a very wasteful energy monitor (one which is notoriously inaccurate anyway, and doesn’t even show the usage of the plug itself) – so that’s now gone! I do still have some graphs from the main smart meter of course, but as these cover the whole house’s usage I’ve only been able to track the impact during the ‘dead’ hours overnight, which doesn’t make for particularly obvious graphics here.
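To put the plug’s overhead in perspective, a quick back-of-envelope calculation (the unit price is purely an assumption for illustration):

```python
def annual_kwh(watts: float) -> float:
    """Convert a continuous draw in watts to kWh used per year."""
    return watts * 24 * 365 / 1000

plug = annual_kwh(2)  # the monitor's own ~2 W standby draw
print(plug)           # 17.52 kWh a year just to watch other devices
print(plug * 0.30)    # running cost at an assumed £0.30/kWh
```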
Next I started looking at the USB devices connected to both machines. Satellite television broadcasts come from a dedicated SatIP box which the Linux server can call over the network and then process or forward on, while terrestrial broadcasts came from four USB dongles connected to the Linux box, each able to tune into a set of channels. The original plan had been to still plug these into the Windows machine and try to get them passed through into the Hyper-V box as mentioned above – something which may not even be possible, as even Gen 2 Hyper-V machines will only pass through certain USB devices – but in the end I decided not to bother and dispensed with the dongles completely. As every channel on terrestrial is also available on satellite (and indeed sometimes in HD on satellite only, thanks to the extra capacity there), I just stuck with passing through satellite. It’s harder to gauge the exact savings on USB devices due to the highly variable loads involved, but given the draw USB devices can place on a port, that’s likely another 2 W under heavy use. And as I was no longer having to split a TV aerial signal across four USB tuners, the signal booster could also be dispensed with, so another 1-2 W there.

It’s also at this point I realised one of the devices connected to a USB port on the Windows server, which I’d completely forgotten about, was an Amazon Fire TV Stick. The port here was being used solely for power rather than data, to save a mains plug, coming off one of the higher-power USB3 ports. While the Fire TV Stick is very variable in its use, Amazon’s own pages state it can use upwards of 2 W just sitting in standby – part of the reason it won’t even run off the lower power provided by the USB2 ports on most TVs. Regardless, as it’s not something that actually needed to be connected to the server anyway, out that came as well.
And the final candidate for removal was the small network switch, which I only list under the USB section as I was again using a USB port for power rather than a mains plug. With some clever rethinking and replanning of the network cabling, it was possible to connect everything directly to the main router and dispense with this switch completely, another 1-2 W saving – and a few fewer wires in the nest of wires behind the TV as an extra benefit too!
After all the hardware changes, even a little rethinking of how some of the software works can make a difference, albeit one that’s much harder to quantify. In the initial Hyper-V setup I’d dedicated a secondary USB LAN adapter to being the interface through which the now-virtual Linux box connects to the rest of the network, with the main motherboard LAN continuing to serve the underlying Windows server. On the face of things this works fine, but delving deeper reveals hidden issues. Historically the Windows server would pull broadcast video streams from the Linux server, and at certain times of day that can be up to 20 channels streaming over the network at once. This made sense when there were two physical servers, but under the new virtual setup it meant data was going out of the virtual server’s NIC to the router and back in via the other NIC – a huge amount of data sent needlessly across the network, which in itself consumes power. The USB protocol also has a lot of processing overhead which doesn’t usually show up, but becomes very noticeable when you push that much data through a USB LAN adapter, tying up a lot of processor cycles (the good old vague ‘Software Interrupts’ in Windows Task Manager). So to remove all this, a second ‘virtual’ NIC was added, visible only internally to Windows and Linux. All communication between the host Windows and guest Linux servers can now bypass the wider network completely and effectively talk directly within the host machine, dropping processor and network load massively and ultimately saving even more power.
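To give a feel for the scale of the hairpinning problem, here’s a rough sketch. The per-stream bitrate is an assumption (broadcast streams vary a lot by channel and resolution), but the shape of the problem holds regardless:

```python
streams = 20         # peak simultaneous channels being pulled
mbit_per_stream = 8  # assumed average broadcast bitrate, varies per channel

# Hairpinned traffic crosses the physical LAN twice: out of the
# virtual server's NIC to the router, then back in via the host's NIC.
hairpin_mbit = streams * mbit_per_stream * 2
print(hairpin_mbit)  # 320 Mbit/s crossing the LAN needlessly

# Over an internal-only virtual NIC, that traffic never leaves the host.
```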
These may all seem like obsessively small numbers when presented here, and you’re probably still only looking at a total of around 30 W saved. But when you consider that adds up 24 hours a day, 365 days a year, it starts to make a real difference. And when at the end of it the whole setup is simplified yet still running pretty much everything it did before, with little noticeable impact on performance, it just shows what thinking a little differently can do. The proof of the pudding came in the re-test with the UPS: backup times on the second test went from just shy of 60 minutes to about 110 minutes, close to a 100% increase. I’m still not convinced I’ve actually halved the power consumption of everything connected to it, so it’s possible some of that improvement is down to the battery conditioning a little over time. Either way, there’s been a noticeable improvement in offline resiliency, energy costs and wider green credentials – all pluses in my book!
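Annualising that 30 W, and sanity-checking the UPS result, looks something like this (the runtime model again assumes runtime is simply inversely proportional to load, which is only roughly true of real batteries):

```python
def annual_kwh(watts: float) -> float:
    """Convert a continuous draw in watts to kWh used per year."""
    return watts * 24 * 365 / 1000

print(annual_kwh(30))           # 262.8 kWh a year no longer burned

# If runtime ~ 1/load, the load implied by the second UPS test:
implied_ratio = 60 / 110
print(round(implied_ratio, 2))  # 0.55 – load roughly 55% of before
```

That implied ratio matching “not quite halved” is what suggests the rest of the runtime gain came from the battery itself.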
Where next? Well, the server box is getting on for four years old now, and processor tech has continued to improve in that time, especially at the low-power end of the scale I’m aiming at, as Intel and AMD have had to remain competitive with already low-power ARM chips. So the server itself may get replaced with something that can do more for less – just with a keen eye kept on those power figures so they don’t creep back up again, because as I’ve shown, even the smallest changes can make a big difference!