blarg?

Pony Tails

Here’s a veteran systems administration move that not a lot of people know about. Though to be honest I haven’t really asked around to find out if people know about it or not; that’s not so much beside the point as it is back in the cheap seats on the point’s reunion tour, obviously. It’s also counterintuitive, this year’s soi-disant-intelligentsia shorthand for “fashionable”, so between that and the exclusivity of ignorance there’s so much pseudointellectual cachet going on here that I almost feel I’m cheapening it by telling anyone. Let’s just agree that if you read this article before Friday you can tell anyone you hear it from later that you liked it before it went mainstream. Mail me a postcard, I’ll send you the single when it comes out on vinyl.

That was a little florid, so let me cut to the chase, which is this: don’t use power bars with surge protectors for your personal computer. They’re fine almost anywhere else, though you also want to keep them away from any attached external drives and their various connective tissues: USB hubs and network switches, for example. But your computer, at least, you want either plugged into a decent UPS or directly into the wall.

All but the very cheapest power bars include surge protectors now, and if you’re trying to protect sensitive equipment you’d think that’s exactly what you’d want between your relatively delicate computer and the outside world. But protecting a computer from a surge by cutting the power is a lot like protecting someone from secondhand smoke by putting them in a chokehold. Strictly speaking your surge protector may be protecting your hardware, but in exchange you’re assuming a lot of risk involving your data.

Specifically, you’re accepting the risk of killing your box in the middle of an important write or, worse, with an unknown quantity of uncommitted data still in your drive’s onboard cache, just gone, never to be seen again. And on the off chance you’re (foolishly, in my opinion) using RAID at home, the risk to your data actually gets worse, not better: a write that dies partway across the array can leave your disks disagreeing with each other, with no way to tell which one is right.

Surge protection is harmless in the case of your appliances, game consoles or printers, and we all need more sockets than we have. But in terms of data preservation, if you don’t have a UPS you’re far better off plugging straight into the wall and letting your power supply take the abuse.
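If you’re curious what your own exposure looks like, here’s a minimal sketch, assuming a Linux box with an ATA drive at /dev/sda; adjust the device path to suit.

# ask the drive whether its onboard write cache is on; "write-caching = 1"
# means a power cut can eat data the OS already believes is safely on disk
sudo hdparm -W /dev/sda

# flush pending filesystem buffers out to the drive right now
sync

# the paranoid option: turn the drive's write cache off entirely,
# trading write performance for durability
sudo hdparm -W 0 /dev/sda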

First off, my colleague Donna wrote up a bit about the work we’ve been doing for the last few months. It’s been a pleasure to work with her, and I don’t really think of her as a crony but nobody tell her I said so.

The second thing is a way to get all the linuxes. That’s right, all of them; specifically a way to get a variety of them running in a single headless virtual machine on your OS of choice. You start with an Ubuntu .ISO and VirtualBox.
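An aside: if you’d rather do the initial setup from the command line too, something like this should get you started. “Prime” is my hypothetical VM name throughout, and attaching the .iso and a virtual disk is easiest from the VirtualBox GUI afterwards.

# create and register the VM, give it some memory, and put it on NAT
# behind the Intel e1000 NIC; the port-forwarding script below assumes
# that NIC type
VBoxManage createvm --name Prime --register
VBoxManage modifyvm Prime --memory 1024 --nic1 nat --nictype1 82540EM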

Install Ubuntu on a suitably capacious VM and make sure sshd is running and starts by default; on a stock Ubuntu guest, installing the openssh-server package takes care of both. Then shut the VM down, quit VirtualBox, and do two things. First, set yourself up with this script:

#!/bin/sh
# The NAT port-forwarding rules are read when the VM starts up, so set
# them first, while it's powered off, and then bring it up headless.
# "e1000" here matches the Intel virtual NIC; if your VM uses the default
# PCNet adapter instead, substitute "pcnet" for "e1000" throughout.
VBoxManage setextradata Prime "VBoxInternal/Devices/e1000/0/LUN#0/Config/guestssh/Protocol" TCP
VBoxManage setextradata Prime "VBoxInternal/Devices/e1000/0/LUN#0/Config/guestssh/GuestPort" 22
VBoxManage setextradata Prime "VBoxInternal/Devices/e1000/0/LUN#0/Config/guestssh/HostPort" 2222
VBoxManage setextradata Prime "VBoxInternal/Devices/e1000/0/LUN#0/Config/guesthttp/Protocol" TCP
VBoxManage setextradata Prime "VBoxInternal/Devices/e1000/0/LUN#0/Config/guesthttp/GuestPort" 80
VBoxManage setextradata Prime "VBoxInternal/Devices/e1000/0/LUN#0/Config/guesthttp/HostPort" 8080
VBoxManage startvm Prime --type headless

(My VM’s name is “Prime” in this example, to clarify. Yours may not be.)
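If you don’t remember what you called yours, VirtualBox will remind you:

VBoxManage list vms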

Then read this article by Ted Dziuba about running several versions of Linux, simultaneously and non-virtualized, on the same machine. It’s pretty cool, and that should set you up with All The Linuxes, should you happen to want all the linuxes.

From that you can SSH to localhost:2222 for Ubuntu and schroot between whatever other linuxes you desire; X-forwarding will help you here. And I wonder if you can add Android to that list? Hmm. Hmmmmm.
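Day to day that looks something like the following, where the username and the “squeeze” chroot name are hypothetical stand-ins for whatever you actually set up.

# from the host: SSH into the Ubuntu guest, forwarding X while you're at it
ssh -X -p 2222 you@localhost

# inside the guest: list the chroots schroot knows about...
schroot -l

# ...and run a command inside one of them
schroot -c squeeze -- uname -a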

Next up, if you’re making changes to Firefox and don’t/won’t/can’t get at their Tryserver test harness, I just found out (duh, of course) that all their tests are in their source tree anyway. Add these lines to the end of your Makefile (mind that the recipe lines need to be indented with real tabs) and you can run the whole test harness locally with one command.

test-me:
    echo 'Running automated tests in 10 seconds. This can take a long time - hit control-C to end.' && sleep 10
    $(MAKE) -f $(topsrcdir)/obj-ff-dbg/Makefile crashtest
    $(MAKE) -f $(topsrcdir)/obj-ff-dbg/Makefile jstestbrowser
    $(MAKE) -f $(topsrcdir)/obj-ff-dbg/Makefile reftest
    $(MAKE) -f $(topsrcdir)/obj-ff-dbg/Makefile mochitest-plain
    $(MAKE) -f $(topsrcdir)/obj-ff-dbg/Makefile xpcshell-tests

Configure, make, make test-me, then wait. This is a run-overnight kind of thing – it will stomp on your machine pretty hard – but at least it will tell you if you broke anything. I was briefly tempted to call that target “trouble” or “come-at-me-bro” rather than “test-me”, but, I think wisely, elected not to.
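For the record, the whole overnight dance looks like this; the log file name is my own invention.

# build, then kick off the suite and keep a log to read in the morning
./configure && make
make test-me 2>&1 | tee test-me.log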

Finally, I broke down and installed Fedora on my little netbook, and to my surprise it’s awfully pretty. I miss apt-get, but the new Gnome UI is actually great, wildly better and more discoverable than Win7. It’s a respectable little computer now, all things considered. Except, of course, that my wireless doesn’t work, and if I put an SD card in, it won’t suspend anymore.

“Sysadmin” is a portmanteau of “administration” and “Sisyphus”, apparently.

The Door

I alluded to some fictional future tech the other day, specifically ARM-powered MacBook Airs. My reasonings, let me show you them.

  • With OS X 10.6, Apple announced Grand Central Dispatch, a framework for managing multithreaded programs across multiple cores, which they released, surprisingly, under the Apache open-source license. It gives programmers an easy way to take good advantage of multi-core processors without the usual agonies of threading. You might not think this is a huge deal when we’re talking about the usual two or four cores on most modern machines, but
  • Apple is one of the very few licensees of Imagination Technologies’ SGX543MP2-16 ARM chips. In terms of performance, the cutting edge there is not quite as fast as your current Atoms, but there are sixteen general-purpose GPU cores in those chips, plus a pair of 3D GPUs and 2D and crypto acceleration thrown in for kicks.
  • One of the neatest things about these chips is that you can power down individual cores to save energy, fast enough that you can do it between frames of video playback. Relatedly, this is something PA Semi was also very good at before Apple bought them: aggressive power management on ARM-based systems. In terms of pure processing power ARM is not as fast as the best processors Intel has to offer, but in per-watt terms x86 doesn’t even come close. That plus sixteen cores plus GCD is going to be a hard act to follow for anyone in the portable space stuck in Intel-land.
  • It was reported earlier this year that Microsoft has asked Intel to produce a 16-core Atom chip, despite the fact that they’re pushing towards ARM as well.

… and Apple has their annual Worldwide Developers Conference coming up in June. My predictions are as follows:

  • Apple’s next generation of laptop hardware will run ARM chips, likely starting with the Airs. They’ve pulled this switch off before in their move from PPC to Intel, and their insistence on total vertical control of the development environment is what lets them do it; the App Store model is only going to make that easier. They’ll announce this at WWDC, and it will look a lot like the PPC-to-Intel move did: if you’re using Xcode, the next version of Xcode will have a checkbox in it saying “ARM” that you’ll click and be fine. If not, you’re basically 100% fucked.
  • At some point late in the year we’ll learn that Adobe doesn’t develop for Macs with Xcode. They’ve got their own proprietary thing, because that’s the sort of thing they’d do.
  • Windows 8 (definitely the ARM support, and probably all of it) is going to ship late. Microsoft is going to be in a lot of trouble in the laptop space late next year, because without ARM support they won’t be able to sell a product with competitive battery life.
  • In the longer, vaguer term, processing power per watt is going to be the most important computing metric of the next decade. Virtualized services running on ARM blades are going to displace everything that doesn’t require screamingly fast sequential computing as close to the bare metal as possible, which is to say “almost all of it”. In two years your more expensive 2U servers will have several hundred processor cores in them while consuming less power than your beefier 2U servers do today.
  • Either Steve Ballmer loses his job by 2012, or Microsoft continues its long slide into irrelevance.

We’ll know in a few months!