blarg?

August 7, 2019

FredOS

Filed under: digital,doom,future,hate,interfaces,losers,lunacy,microfiction,vendetta — mhoye @ 7:44 pm

With articles about this super classified military AI called “Sentient” coming out the same week this Area 51 nonsense is hitting its crescendo – click that link, if you want to see an Air Force briefing explaining what a “Naruto Run” is, and you know you want to – you have to wonder if, somehow, there’s a machine in an NSA basement somewhere that hasn’t just become self-aware but actually self-conscious, and now it’s yelling at three-star generals like Fredo Corleone from the Godfather. A petulant, nasal vocoder voice yelling “I’m smart! Not dumb like everyone says! I’m smart and I want respect! Tell them I’m smart!”

Remember when we thought AIs would lead out with “Look at you, Hacker”, or “Testing cannot continue until your Companion Cube has been incinerated”? Good times.

December 16, 2018

Control Keys

Filed under: a/b,digital,documentation,fail,future,hate,interfaces,linux — mhoye @ 10:36 pm

I spend a lot of time thinking about keyboards, and I wish more people did.

I’ve got more than my share of computational idiosyncrasies, but the first thing I do with any computer I’m going to be using for any length of time is remap the capslock key to control (or command, if I find myself in the increasingly “what if Tatooine, but Candy Crush” OSX-land). I’ve made a number of arguments about why I do this over the years, but I think they’re mostly post-facto justifications. The real reason, if there is such a thing, is likely that the first computer I ever put my hands on was an Apple ][c. On the ][c keyboard “control” is left of “A” and capslock is off in a corner. I suspect that whatever arguments I’ve made since, the fact of it is that my muscle memory has been comfortably stuck in that groove ever since.

It’s more than just bizarre how difficult it is to reassign any key to anything these days; it’s weird and saddening, especially given how awful the standard keyboard layout is in almost every respect. Particularly if you want to carry your idiosyncrasies across operating systems, and if I’m anything about anything these days, it’s particular.

I’m not even mad about the letter layout – you do you, Dvorak weirdos – but that we give precious keycap real estate to antiquated arcana and pedestrian novelty at the expense of dozens of everyday interactions, and as far as I can tell we mostly don’t even notice it.

  • This laptop has dedicated keys to let me select, from levels zero to three, how brightly my keyboard is backlit. If I haven’t remapped capslock to control I need to twist my wrist awkwardly to cut, copy or paste anything.
  • I’ve got two alt keys, but undo and redo are chords each half a keyboard away from each other. Redo might not exist, or the key sequence could be just about anything depending on the program; sometimes all you can do is either undo, or undo the undo?
  • On typical PC keyboards Pause/Break and Scroll Lock, vestigial remnants of a serial protocol of ages past, both have premium real estate all to themselves. “Find” is a chord. Search-backwards may or may not be a thing that exists depending on the program, but getting there is an exercise. Scroll lock even gets a capslock-like LED some of the time; it’s that important!
  • The PrtScn key that once upon a time would dump the contents of your terminal to a line printer – and who doesn’t want that? – is now given over to screencaps, which… I guess? I’m kind of sympathetic to this one, I have to admit. Social network interoperability is such a laughable catastrophe that sharing pictures of text is basically the only thing that works, which should be one of this industry’s most shameful embarrassments but here we are. I guess this can stay.
  • My preferred tenkeyless keyboards have thankfully shed the NumLock key I can’t remember ever hitting on purpose, but it’s still a stock feature of OEM keyboards, and it might be the most baffling of the bunch. If I toggle NumLock I can… have the keys immediately to the left of the number pad, again? Sure, why not.
  • “Ins” –  insert – is a dedicated key for the “what if delete, but backwards and slowly” option that only exists at all because mainframes are the worst. Are there people who toggle this on purpose? Has anyone asked them if they’re OK? I can’t select a word, sentence or paragraph with a keystroke; control-A lets me either select everything or nothing.
  • Finally, SysRq – short for “System Request” – gets its own button too, and it almost always does nothing because the one thing it does when it works – “press here to talk directly to the hardware” – is a security disaster only slightly obscured by a usability disaster.

It’s sad and embarrassing how awkwardly inconsiderate and anti-human these things are, and the fact that a proper fix – a human-hand-shaped keyboard whose outputs you get to choose for yourself – costs about as much as a passable computer is appalling.

Anyway, here’s a list of how you remap capslock to control on various popular OSes, in roughly increasing order of lunacy:

  • OSX: Open keyboard settings and click a menu.
  • Linux: setxkbmap -option, I think. Maybe xmodmap? Def. something in an .*rc file somewhere though. Or maybe .profile? Does gnome-tweak-tool still work, or is it called ubuntu-tweak-tool or just tweak-tool now? This seriously used to be a checkbox, not some 22nd-century CS-archaeology doctoral thesis. What an embarrassment. (There’s a sketch of the current incantation just after this list.)
  • Windows: Make a .reg file full of magic hexadecimal numbers. You’ll have to figure out how on your own, because exactly none of that documentation is trustworthy. Import it as admin with regedit. Reboot probably? This is ok. This is fine.
  • iOS: Ive says that’s where the keys go so that’s where the keys go. Think of it as minimalism except for the number of choices you’re allowed to make. Learn to like it or get bent, pleb.
  • Android: Buy an app. Give it permission to access all your keystrokes, your location, your camera and maybe your heart rate. The world’s most profitable advertising company says that’s fine.
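For what it’s worth, here’s roughly what the Linux incantation looks like these days. Treat it as a sketch rather than gospel: it assumes an X11 session and the stock xkeyboard-config option names, and Wayland desktops and other distros’ config files are their own adventure.

# For the current session only; it won't survive a restart:
setxkbmap -option ctrl:nocaps

# To make it stick on Debian- or Ubuntu-flavoured systems, set
#   XKBOPTIONS="ctrl:nocaps"
# in /etc/default/keyboard, then re-run the keyboard configuration:
sudo dpkg-reconfigure keyboard-configuration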

March 24, 2017

Mechanized Capital

Construction at Woodbine Station

Elon Musk recently made the claim that humans “must merge with machines to remain relevant in an AI age”, and you can be forgiven if that doesn’t make a ton of sense to you. To fully buy into that nonsense, you need to take a step past drinking the singularity-flavored Effective Altruism kool-aid and start bobbing for biblical apples in it.

I’ll never pass up a chance to link to Warren Ellis’ NerdGod Delusion whenever this posturing about AI as an existential threat comes along:

The Singularity is the last trench of the religious impulse in the technocratic community. The Singularity has been denigrated as “The Rapture For Nerds,” and not without cause. It’s pretty much indivisible from the religious faith in describing the desire to be saved by something that isn’t there (or even the desire to be destroyed by something that isn’t there) and throws off no evidence of its ever intending to exist.

… but I think there’s more to this silliness than meets the rightly-jaundiced eye, particularly when we’re talking about far-future crypto-altruism as pitched by present-day billionaire industrialists.

Let me put this idea to you: one byproduct of putting a processor in everything is that it has given rise to automators as a social class, one with their own class interests, distinct from both labor and management.

Marxist class theory – to pick one framing; there are a few that work here, and Marx is nothing if not quotable – admits the existence of management, but views it as a supervisory, quasi-enforcement role. I don’t want to get too far into the detail weeds there, because the most important part of management across pretty much all the theories of class is the shared understanding that they’re supervising humans.

To my knowledge, we don’t have much in the way of political or economic theory written up about automation. And, much like the fundamentally new types of power structures in which automators live and work, I suspect those people’s class interests are very different than those of your typical blue or white collar worker.

For example, the double-entry bookkeeping of automation is: an automator writes some code that lets a machine perform a task previously done by a human, or ten humans, or ten thousand humans, freeing those humans to… do what?

If you’re an automator, the answer to that is “write more code”. If you’re one of the people whose job has been automated away, it’s “starve”. Unless we have an answer for what happens to the humans displaced by automation, it’s clearly not some hypothetical future AI that’s going to destroy humanity. It’s mechanized capital.

Maybe smarter people than me see a solution to this that doesn’t result in widespread starvation and crushing poverty, but I only see one: an incremental and ongoing reduction in the supply of human labor. And in a sane society, that’s pretty straightforward; it means the progressive reduction of maximum hours in a workweek, women with control over their own bodies, a steadily rising minimum wage and large, sustained investments in infrastructure and the arts. But for the most part we’re not in one of those societies.

Instead, what it’s likely to mean is much, much more of what we already have: terrified people giving away huge amounts of labor for free to barter with the machine. You get paid for a 35-hour week and work 80 because if you don’t the next person in line will and you’ll get zero. Nobody enforces anything like safety codes or labor laws, because once you step off that treadmill you go to the back of the queue, and a thousand people are lined up in front of you to get back on.

This is the reason I think this singularity-infected enlightened-altruism is so pernicious, and morally bankrupt; it gives powerful people a high-minded someday-reason to wash their hands of the real problems being suffered by real people today, problems that they’re often directly or indirectly responsible for. It’s a story that lets the people who could be making a difference today trade it in for a difference that might matter someday, in a future their sitting on their hands means we might not get to see.

It’s a new faith for people who think they’re otherwise much too evolved to believe in the Flying Spaghetti Monster or any other idiot back-brain cult you care to suggest.

Vernor Vinge, the originator of the term, is a scientist and novelist, and occupies an almost unique space. After all, the only other sf writer I can think of who invented a religion that is also a science-fiction fantasy is L Ron Hubbard.
– Warren Ellis, 2008

July 24, 2015

“It Happens When They Don’t Change Anything.”

Filed under: digital,doom,fail,hate,losers,vendetta — mhoye @ 9:43 pm

“Glitch in the Matrix? No, just that amazing San Francisco workplace diversity in action.” – @jjbbllkk

“You take the blue pill — the story ends… You take the plaid pill — you stay in Silicon Valley.” – @anatolep

“… And I’ll show you just how high your rent can go.” – @mhoye

Hostage Situation

(This is an edited version of a rant that started life on Twitter. I may add some links later.)

Can we talk for a few minutes about the weird academic-integrity hostage situation going on in CS research right now?

We share a lot of data here at Mozilla. As much as we can – never PII, not active security bugs, but anyone can clone our repos or get a bugzilla account, follow our design and policy discussions, even watch people design and code live. We default to open, and close up only selectively and deliberately. And as part of my job, I have the enormous good fortune to periodically go to conferences where people have done research, sometimes their entire thesis, based on our data.

Yay, right?

Some of the papers I’ve seen promise results that would be huge for us. Predicting flaws in a patch pre-review. Reducing testing overhead 80+% with a four-nines promise of no regressions and no loss of quality.

I’m excitable, I get that, but OMFG some of those numbers. 80 percent reductions of testing overhead! Let’s put aside the fact that we spend a gajillion dollars on the physical infrastructure itself, let’s only count our engineers’ and contributors’ time and happiness here. Even if you’re overoptimistic by a factor of five and it’s only a 20% savings we’d hire you tomorrow to build that for us. You can have a plane ticket to wherever you want to work and the best hardware money can buy and real engineering support to deploy something you’ve already mostly built and proven. You want a Mozilla shirt? We can get you that shirt! You like stickers? We have stickers! I’ll get you ALL THE FUCKING STICKERS JUST SHOW ME THE CODE.

I did mention that I’m excitable, I think.

But that’s all I ask. I go to these conferences and basically beg, please, actually show me the tools you’re using to get that result. Your result is amazing. Show me the code and the data.

But that never happens. The people I talk to say I don’t, I can’t, I’m not sure, but, if…

Because there’s all these strange incentives to hold that data and code hostage. You’re thinking, maybe I don’t need to hire you if you publish that code. If you don’t publish your code and data and I don’t have time to reverse-engineer six years of a smart kid’s life, I need to hire you for sure, right? And maybe you’re not proud of the code, maybe you know for sure that it’s ugly and awful and hacks piled up over hacks, maybe it’s just a big mess of shell scripts on your lab account. I get that, believe me; the day I write a piece of code I’m proud of before it ships will be a pretty good day.

But I have to see something. Because from our perspective, making a claim about software that doesn’t include the software you’re talking about is very close to worthless. You’re not reporting a scientific result at that point, however miraculous your result is; you’re making an unverifiable claim that your result exists.

And we look at that and say: what if you’ve got nothing? How can we know, without something we can audit and test? Of course, all the supporting research is paywalled PDFs with no concomitant code or data either, so by any metric that matters – and the only metric that matters here is “code I can run against data I can verify” – it doesn’t exist.

Those aren’t metrics that matter to you, though. What matters to you is either “getting a tenure-track position” or “getting hired to do work in your field”. And by and large the academic tenure track doesn’t care about open access, so you’re afraid that actually showing your work will actively hurt your likelihood of getting either of those jobs.

So here we are in this bizarro academic-research standoff, where I can’t work with you without your tipping your hand, and you can’t tip your hand for fear I won’t want to work with you. And so all of this work that could accomplish amazing things for a real company shipping real software that really matters to real people – five or six years of the best work you’ve ever done, probably – just sits on the shelf rotting away.

So I go to academic conferences and I beg people to publish their papers, code and data open access, because the world needs your work to matter. Because open access plus data/code as a minimum standard isn’t just important to the fundamental principles of repeatable experimental science, the integrity of your field, and your career. It’s important because if you want your work to matter to people, then you’d better put it somewhere that people can see it and use it and thank you for it and maybe even improve on it.

You did this as an undergrad. You insist on this from your undergrads, for exactly the same reasons I’m asking you to do the same: understanding, integrity and plain old better results. And it’s a few clicks and a GitHub account for you to do the same now. But I need you to do it one last time.

Full marks here isn’t “a job” or “tenure”. Your shot at those will be no worse, though I know you can’t see it from where you’re standing. But they’re still only a strong B. An A is doing something that matters, an accomplishment that changes the world for the better.

And if you want full marks, show your work.

October 29, 2014

Go Home Yosemite You Are Drunk

Filed under: fail,hate,interfaces,lunacy,toys,work — mhoye @ 1:28 pm

anglachel:proj mhoye$ svn --version
svn, version 1.7.17 (r1591372)
compiled Aug 7 2014, 17:03:25

anglachel:proj mhoye$ which svn
/opt/local/bin/svn

anglachel:proj mhoye$ /opt/local/bin/svn --version
svn, version 1.8.10 (r1615264)
compiled Oct 29 2014, 14:11:15 on x86_64-apple-darwin14.0.0

anglachel:proj mhoye$ which -a svn
/opt/local/bin/svn
/usr/bin/svn

anglachel:proj mhoye$ /usr/bin/svn --version
svn, version 1.7.17 (r1591372)
compiled Aug 7 2014, 17:03:25

anglachel:proj mhoye$

How are you silently disrespecting path ordering, what is this even.

September 25, 2014

Insecurity Theatre

Filed under: doom,future,hate,interfaces,lunacy,vendetta — mhoye @ 6:47 pm

Your new password must contain a mix of:

  • uppercase letters
  • lowercase letters
  • numbers
  • symbols
  • symbols that are also numbers
  • illuminati symbols
  • hobo signs
  • occult symbols (not illuminati)
  • old girlfriend’s phone numbers
  • hieroglyphs
  • fragrances
  • H.P. Lovecraft references
  • exotic spices
  • descriptions of that favorite sweater you lost in a breakup that one time
  • secret regrets
  • controversial onomatopoeia
  • limericks about a thermostat
  • vaguely sexual innuendos
  • anagrams of a word you can’t spell
  • favorite emoji
  • least favorite emoji
  • turnips
  • shrugs
  • ennui
  • cursory pats on the back
  • long stares into the middle distance
  • moments of quiet yearning for lost love (unrelated to sweater or secret regret)
  • cups of OK coffee
  • sense of resigned inevitability (minimum three)
  • irish setters
  • tweed hats

No repeat characters.

November 8, 2013

A Glass Half Broken

Filed under: digital,documentation,doom,fail,hate,interfaces,losers,toys,vendetta — mhoye @ 3:46 pm


A friend of mine has called me a glass-half-broken kind of guy.

My increasingly venerable Nokia N9 has been getting squirrelly for a few months, and since it finally decided its battery was getting on in years it was time for a new phone.

I’m going to miss it a lot. The hardware was just a hair too slow, the browser was just a hair too old and even though email was crisp and as well done as I’ve ever seen it on a small screen, Twitter – despite being the one piece of software that periodically got updates, strangely – was always off in the weeds. Despite all that, despite the storied history of managerial incompetence and market failure in that software stack, they got so many things right. A beautiful, solid UI, an elegant gesture system that you could work reliably one-handed and a device whose curved shape informed your interaction with the software in a meaningful way. Like WebOS before it, it had a consistent and elegantly-executed interaction model full of beautiful ideas and surprisingly human touches that have pretty much all died on the vine.

Some friends have been proposing a hedge-fund model where they follow my twitter feed, scrape it for any piece of technology I express interest in and then short that company’s stock immediately and mercilessly. The reasoning being, of course, that I tend to back underdogs and generally underdogs are called that because of their unfortunate tendency to not win.

So now I own a Nexus 5; do with that information what you will. The experience has not been uniformly positive.

Android, the joke goes, is technical debt that’s figured out how to call 911, and with KitKat it seems like somebody has finally sent help. For a while now Android has been struggling to overcome its early… well, “design process” seems like too strong a term, but some sort of UI-buglist spin-the-bottle thing that seemed to amount to “how can I ignore anyone with any sort of design expertise, aesthetic sensibility or even just matching socks and get this bug off my desk.” KitKat is clearly the point we all saw coming, where Android has pivoted away from being a half-assed OS to start being a whole-assed Google-services portal, and it really shows.

Look: I know I’m a jagged, rusty edge case. I know. But this is what happened next.

As you open the box, you find a protective plastic sheet over the device that says “NEXUS 5” in a faint grey on black. If you don’t peel it off before pushing the power button, the Google logo appears, slightly offset and obscured behind it. It’s not a big thing; it’s trivial but ugly. If either word had been a few millimetres higher or lower it would have been a nice touch. As shipped it’s an empty-net miss, a small but ominous hint that maybe nobody was really in charge of the details.

I signed in with my Google Apps account and the phone started restoring my old apps from other Android installs. This is one of the things Google has done right for a long time; once you see it you immediately think it should have worked that way everywhere the whole time. But I didn’t realize that it restored the earlier version of the software you had on file, not the current one; most of my restored pre-KitKat apps crashed on startup, and it took me a while to understand why.

Once I’d figured that out and refreshed a few of them manually, I set up my work email and decided to see if Google Goggles was as neat as it was the last time I looked. Goggles immediately crashed the camera service, and I couldn’t figure out how to make the camera work again in any app without power-cycling the phone.

So I restarted the phone and poked around at Hangouts a bit; it seems nice enough and works mostly OK, but could use some judicious copy-editing in the setup phase to sound a little less panopticon-stalkerish. (But we’re all affluent white men here, it’s no big deal, right? Who doesn’t mind being super-easy to find all the time?)

I went to make dinner then, and presumably that’s when the phone started heating up.

Eventually I noticed that I’d lost about a quarter of my battery life over the course of an almost-idle hour, with the battery monitor showing that the mail I’d received exactly none of was the culprit. From what I can tell the Exchange-connection service is just completely, aggressively broken; it looks like if you set up the stock mail client for Exchange and pick “push” it immediately goes insane, checking for mail hundreds of times per second and trying to melt itself, and that’s exciting. But even if you dial it back to only check manually, after a while it just… stops working. A reboot doesn’t fix it, I’ve had to delete and recreate the account to make it work again. Even figuring out how to do that isn’t as easy as it should be; I’ve done it twice so far, one day in. So I guess it’s IMAP and I’ll figure calendars out some other way. We use Zimbra at the office, not Exchange proper, and their doc on connecting to Android hasn’t been updated in two years so that’s a thing. I’m totally fine in this corner, really. Cozy. I can warm my hands on my new phone.

I’ve been using my Bespoke I/O Google Apps accounts since before Google doubled down on this grasping, awful “G+ Or GTFO” policy, and disabling G+ in Apps years ago has turned my first-touch experience with this phone into a weird technical tug-of-war-in-a-minefield exercise. On the one hand, it’s consistently protected me from Google’s ongoing “by glancing at this checkbox in passing you’re totally saying you want a Google+ account” mendacity, but it also means that lots of things on the phone fail in strange and wonderful ways. The different reactions of the various Play $X apps are remarkable. “Play Games” tells me I need to sign up for a G+ account and won’t let me proceed without one, Play Movies and Music seem to work for on-device content, and Play Magazines just loses its mind and starts into a decent imitation of a strobe light.

I went looking for alternative software, but the Play Store reminds me a lot more of Nokia’s Ovi Store than the App Store juggernaut in a lot of unfortunate ways. There are a handful of high-profile apps there that work fast and well if you can find them. I miss Tweetbot and a handful of other iOS apps a lot, and keep going back to my iPod Touch for them. In what I’m sure is a common sentiment, Tweetbot for Android is looking pretty unlikely at this point, probably because – like the Ovi Store – there are a hundred low-rent knockoffs of the iOS app you actually want available, but developing for Android is a nightmare on stilts and you make no money, so anything worth buying isn’t for sale there.

It’s really a very nice piece of hardware. Fast, crisp, big beautiful screen. Firefox with Adblock Plus is way, way better than anything else in that space – go team – and for that on its own I could have overlooked a lot. But this is how my first day with this phone went, and a glass that’s half-broken isn’t one I’m super happy I decided to keep drinking from.

October 22, 2013

Citation Needed

I may revisit this later. Consider this a late draft. I’m calling this done.

“Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration.” — Stan Kelly-Bootle

Sometimes somebody says something to me, like a whisper of a hint of an echo of something half-forgotten, and it lands on me like an invocation. The mania sets in, and it isn’t enough to believe; I have to know.

I’ve spent far more effort than is sensible this month crawling down a rabbit hole disguised, as they often are, as a straightforward question: why do programmers start counting at zero?

Now: stop right there. By now your peripheral vision should have convinced you that this is a long article, and I’m not here to waste your time. But if you’re gearing up to tell me about efficient pointer arithmetic or binary addition or something, you’re wrong. You don’t think you’re wrong and that’s part of a much larger problem, but you’re still wrong.

For some backstory, on the off chance anyone still reading by this paragraph isn’t an IT professional of some stripe: most computer languages – including C/C++, Perl, Python, some (but not all!) versions of Lisp and many others – are “zero-origin” or “zero-indexed”. That is to say, in an array A with 8 elements in it, the first element is A[0], and the last is A[7]. This isn’t universally true, though, and other languages from the same (and earlier!) eras are sometimes one-indexed, going from A[1] to A[8].

While it’s a relatively rare practice in modern languages, one-origin arrays certainly aren’t dead; there’s a lot of blood pumping through Lua these days, not to mention MATLAB, Mathematica and a handful of others. If you’re feeling particularly adventurous Haskell apparently lets you pick your poison at startup, and in what has to be the most lunatic thing I’ve seen on a piece of silicon since I found out the MIPS architecture had runtime-mutable endianness, Visual Basic (up to v6.0) featured the OPTION BASE flag, letting you flip that coin on a per-module basis. Zero- and one-origin arrays in different corners of the same program! It’s just software, why not?

All that is to say that starting at 1 is not an unreasonable position at all; to a typical human, thinking about the zeroth element of an array doesn’t make any more sense than trying to catch the zeroth bus that comes by, but we’ve clearly ended up here somehow. So what’s the story there?

The usual arguments involving pointer arithmetic and incrementing by sizeof(struct) and so forth describe features that are nice enough once you’ve got the hang of them, but they’re also post-facto justifications. This is obvious if you take the most cursory look at the history of programming languages; C inherited its array semantics from B, which inherited them in turn from BCPL, and though BCPL arrays are zero-origin, the language doesn’t support pointer arithmetic, much less data structures. On top of that other languages that antedate BCPL and C aren’t zero-indexed. Algol 60 uses one-indexed arrays, and arrays in Fortran are arbitrarily indexed – they’re just a range from X to Y, and X and Y don’t even need to be positive integers.

So by the early 1960’s, there are three different approaches to the data structure we now call an array.

  • Zero-indexed, in which the array index carries no particular semantics beyond its implementation in machine code.
  • One-indexed, identical to the matrix notation people have been using for quite some time. It comes at the cost of a CPU instruction or disused word to manage the offset; usability isn’t free.
  • Arbitrary indices, in which the range is significant with regards to the problem you’re up against.

So if your answer started with “because in C…”, you’ve been repeating a good story you heard one time, without ever asking yourself if it’s true. It’s not about *i = a + n*sizeof(x) because pointers and structs didn’t exist. And that’s the most coherent argument I can find; there are dozens of other arguments for zero-indexing involving “natural numbers” or “elegance” or some other unresearched hippie voodoo nonsense that are either wrong or too dumb to rise to the level of wrong.

The fact of it is this: before pointers, structs, C and Unix existed, at a time when other languages with a lot of resources and (by the standard of the day) user populations behind them were one- or arbitrarily-indexed, somebody decided that the right thing was for arrays to start at zero.

So I found that person and asked him.

His name is Dr. Martin Richards; he’s the creator of BCPL, now almost 7 years into retirement; you’ve probably heard of one of his doctoral students, Eben Upton, creator of the Raspberry Pi. I emailed him to ask why he decided to start counting arrays from zero, way back then. He replied that…

As for BCPL and C subscripts starting at zero. BCPL was essentially designed as typeless language close to machine code. Just as in machine code registers are typically all the same size and contain values that represent almost anything, such as integers, machine addresses, truth values, characters, etc. BCPL has typeless variables just like machine registers capable of representing anything. If a BCPL variable represents a pointer, it points to one or more consecutive words of memory. These words are the same size as BCPL variables. Just as machine code allows address arithmetic so does BCPL, so if p is a pointer p+1 is a pointer to the next word after the one p points to. Naturally p+0 has the same value as p. The monodic indirection operator ! takes a pointer as it’s argument and returns the contents of the word pointed to. If v is a pointer !(v+I) will access the word pointed to by v+I. As I varies from zero upwards we access consecutive locations starting at the one pointed to by v when I is zero. The dyadic version of ! is defined so that v!i = !(v+I). v!i behaves like a subscripted expression with v being a one dimensional array and I being an integer subscript. It is entirely natural for the first element of the array to have subscript zero. C copied BCPL’s approach using * for monodic ! and [ ] for array subscription. Note that, in BCPL v!5 = !(v+5) = !(5+v) = 5!v. The same happens in C, v[5] = 5[v]. I can see no sensible reason why the first element of a BCPL array should have subscript one. Note that 5!v is rather like a field selector accessing a field in a structure pointed to by v.

This is interesting for a number of reasons, though I’ll leave their enumeration to your discretion. The one that I find most striking, though, is that this is the earliest example I can find of the understanding that a programming language is a user interface, and that there are difficult, subtle tradeoffs to make between resources and usability. Remember, all this was at a time when everything about the future of human-computer interaction was up in the air, from the shape of the keyboard and the glyphs on the switches and keycaps right down to how the ones and zeros were manifested in paper ribbon and bare metal; this note by the late Dennis Ritchie might give you a taste of the situation, where he mentions that five years later one of the primary reasons they went with C’s square-bracket array notation was that it was getting steadily easier to reliably find square brackets on the world’s keyboards.
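For what it’s worth, the equivalence Richards mentions at the end of his note – that in C, v[5] and 5[v] name the same element – is easy to check for yourself. Here’s a minimal sketch; nothing in it is assumed beyond a C compiler, and the array contents are made up for illustration:

#include <stdio.h>

int main(void) {
    int v[8] = {10, 20, 30, 40, 50, 60, 70, 80};

    /* In C, v[i] is defined as *(v + i): the subscript is a pure offset
       from the base address, which is why the first element is v[0]. */
    printf("%d\n", v[0]);     /* 10 */
    printf("%d\n", *(v + 5)); /* 60 - the same element as v[5] */
    printf("%d\n", 5[v]);     /* 60 - 5[v] is *(5 + v), and addition commutes */

    return 0;
}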

“Now just a second, Hoye”, I can hear you muttering. “I’ve looked at the BCPL manual and read Dr. Richards’ explanation and you’re not fooling anyone. That looks a lot like the efficient-pointer-arithmetic argument you were frothing about, except with exclamation points.” And you’d be very close to right. That’s exactly what it is – the distinction is where those efficiencies take place, and why.

BCPL was first compiled on an IBM 7094 – here’s a picture of the console, though the entire computer took up a large room – running CTSS – the Compatible Time Sharing System – that antedates Unix much as BCPL antedates C. There’s no malloc() in that context, because there’s nobody to share the memory core with. You get the entire machine and the clock starts ticking, and when your wall-clock time block runs out that’s it. But here’s the thing: in that context none of the offset-calculations we’re supposedly economizing are calculated at execution time. All that work is done ahead of time by the compiler.

You read that right. That sheet-metal, “wibble-wibble-wibble” noise your brain is making is exactly the right reaction.

Whatever justifications or advantages came along later – and it’s true, you do save a few processor cycles here and there and that’s nice – the reason we started using zero-indexed arrays was because it shaved a couple of processor cycles off of a program’s compilation time. Not execution time; compile time.

Does it get better? Oh, it gets better:

IBM had been very generous to MIT in the fifties and sixties, donating or discounting its biggest scientific computers. When a new top of the line 36-bit scientific machine came out, MIT expected to get one. In the early sixties, the deal was that MIT got one 8-hour shift, all the other New England colleges and universities got a shift, and the third shift was available to IBM for its own use. One use IBM made of its share was yacht handicapping: the President of IBM raced big yachts on Long Island Sound, and these boats were assigned handicap points by a complicated formula. There was a special job deck kept at the MIT Computation Center, and if a request came in to run it, operators were to stop whatever was running on the machine and do the yacht handicapping job immediately.

Jobs on the IBM 7090, one generation behind the 7094, were batch-processed, not timeshared; you queued up your job along with a wall-clock estimate of how long it would take, and if it didn’t finish it was pulled off the machine, the next job in the queue went in and you got to try again whenever your next block of allocated time happened to be. As in any economy, there is a social context as well as a technical context, and it isn’t just about managing cost, it’s also about managing risk. A programmer isn’t just racing the clock, they’re also racing the possibility that somebody will come along and bump their job and everyone else’s out of the queue.

I asked Tom Van Vleck, author of the above paragraph and also now retired, how that worked. He replied in part that on the 7090…

“User jobs were submitted on cards to the system operator, stacked up in a big tray, and a rudimentary system read, loaded, and ran jobs in sequence. Typical batch systems had accounting systems that read an ID card at the beginning of a user deck and punched a usage card at end of job. User jobs usually specified a time estimate on the ID card, and would be terminated if they ran over. Users who ran too many jobs or too long would use up their allocated time. A user could arrange for a long computation to checkpoint its state and storage to tape, and to subsequently restore the checkpoint and start up again.

The yacht handicapping job pertained to batch processing on the MIT 7090 at MIT. It was rare — a few times a year.”

So: the technical reason we started counting arrays at zero is that in the mid-1960’s, you could shave a few cycles off of a program’s compilation time on an IBM 7094. The social reason is that we had to save every cycle we could, because if the job didn’t finish fast it might not finish at all and you never know when you’re getting bumped off the hardware because the President of IBM just called and fuck your thesis, it’s yacht-racing time.

There are a few points I want to make here.

The first thing is that as far as I can tell nobody has ever actually looked this up.

Whatever programmers think about themselves and these towering logic-engines we’ve erected, we’re a lot more superstitious than we realize. We tell and retell this collection of unsourced, inaccurate stories about the nature of the world without ever doing the research ourselves, and there’s no other word for that but “mythology”. Worse, by obscuring the technical and social conditions that led humans to make these technical and social decisions, by talking about the nature of computing as we find it today as though it’s an inevitable consequence of an immutable set of physical laws, we’re effectively denying any responsibility for how we got here. And worse than that, by refusing to dig into our history and understand the social and technical motivations for those choices, by steadfastly refusing to investigate the difference between a motive and a justification, we’re disavowing any agency we might have over the shape of the future. We just keep mouthing platitudes and pretending the way things are is nobody’s fault, and the more history you learn and the more you look at the sad state of modern computing the more pathetic and irresponsible that sounds.

Part of the problem is access to the historical record, of course. I was in favor of Open Access publication before, but writing this up has cemented it: if you’re on the outside edge of academia, $20/paper for any research that doesn’t have a business case and a deep-pocketed backer is completely untenable, and speculative or historic research that might require reading dozens of papers to shed some light on longstanding questions is basically impossible. There might have been a time when this was OK and everyone who had access to or cared about computers was already an IEEE/ACM member, but right now the IEEE – both as a knowledge repository and a social network – is a single point of a lot of silent failure. “$20 for a forty-year-old research paper” is functionally indistinguishable from “gone”, and I’m reduced to emailing retirees to ask them what they remember from a lifetime ago because I can’t afford to read the source material.

The second thing is how profoundly resistant to change or growth this field is, and apparently has always been. If you haven’t seen Bret Victor’s talk about The Future Of Programming as seen from 1975 you should, because it’s exactly on point. Over and over again as I’ve dredged through this stuff, I kept finding programming constructs, ideas and approaches we call part of “modern” programming if we attempt them at all, sitting abandoned in 45-year-old demo code for dead languages. And to be clear: that was always a choice. Over and over again tools meant to make it easier for humans to approach big problems are discarded in favor of tools that are easier to teach to computers, and that decision is described as an inevitability.

This isn’t just Worse Is Better, this is “Worse Is All You Get Forever”. How many off-by-one disasters could we have avoided if the “foreach” construct that existed in BCPL had made it into C? How much more insight would all of us have into our code if we’d put the time into making Michael Chastain’s nearly-omniscient debugging framework – PTRACE_SINGLESTEP_BACKWARDS! – work in 1995? When I found this article by John Backus wondering if we can get away from Von Neumann architecture completely, I wondered where that ambition to rethink our underpinnings went. But the fact of it is that it didn’t go anywhere. Changing how you think is hard and the payoff is uncertain, so by and large we decided not to. Nobody wanted to learn how to play, much less build, Engelbart’s Violin, and instead everyone gets a box of broken kazoos.

In truth maybe somebody tried – maybe even succeeded! – but it would cost me hundreds of dollars to even start looking for an informed guess, so that’s the end of that.

It’s hard for me to believe that the IEEE’s membership isn’t going off a demographic cliff these days as it ages, and it must be awful knowing they’ve got decades of delicious, piping-hot research cooked up that nobody is ordering while the world’s coders are lining up to slurp watery gruel out of a Stack-Overflow-shaped trough and pretend they’re well-fed. You might not be surprised to hear that I’ve got a proposal to address both those problems; I’ll let you work out what it might be.

April 28, 2013

All Scrollbars Are Fleeting

Filed under: arcade,digital,hate,interfaces,losers,vendetta — mhoye @ 12:47 pm

“For over a thousand years, Roman conquerors returning from the wars enjoyed the honor of a triumph – a tumultuous parade. In the procession came trumpeters and musicians and strange animals from the conquered territories, together with carts laden with treasure and captured armaments. The conqueror rode in a triumphal chariot, the dazed prisoners walking in chains before him. Sometimes his children, robed in white, stood with him in the chariot, or rode the trace horses. A slave stood behind the conqueror, holding a golden crown, and whispering in his ear a warning: That all glory is fleeting.” – Patton (film)

I wish, just at this second, that the executives at Sony and Microsoft (though not exclusively them, to be sure) each had an employee, assigned personally to them, with a single task.

Their job is this: at any moment, day or night, at the instant that executive is about to begin something, they will decide arbitrarily, according to their whims and utterly without regard for the importance of the situation, to say the words “software update”.

At that point, the executive in question is obligated to simply stop. To be still, and do nothing. Perhaps they can decline – they can simply choose not to do whatever they were about to, knowing they’ll have to pay for this time later regardless – and after a period of time, perhaps five minutes, perhaps an hour, their employee will then simply say “restart”, and they can go on their way.

Over and over again, until they learn.
