

I may revisit this later. Consider this a late draft. I’m calling this done.

“Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration.” — Stan Kelly-Bootle

Sometimes somebody says something to me, like a whisper of a hint of an echo of something half-forgotten, and it lands on me like an invocation. The mania sets in, and it isn’t enough to believe; I have to know.

I’ve spent far more effort than is sensible this month crawling down a rabbit hole disguised, as they often are, as a straightforward question: why do programmers start counting at zero?

Now: stop right there. By now your peripheral vision should have convinced you that this is a long article, and I’m not here to waste your time. But if you’re gearing up to tell me about efficient pointer arithmetic or binary addition or something, you’re wrong. You don’t think you’re wrong and that’s part of a much larger problem, but you’re still wrong.

For some backstory, on the off chance anyone still reading at this point isn’t an IT professional of some stripe: most computer languages – including C/C++, Perl, Python, some (but not all!) versions of Lisp, and many others – are “zero-origin” or “zero-indexed”. That is to say, in an array A with 8 elements in it, the first element is A[0], and the last is A[7]. This isn’t universally true, though; other languages from the same (and earlier!) eras are sometimes one-indexed, going from A[1] to A[8].

While it’s a relatively rare practice in modern languages, one-origin arrays certainly aren’t dead; there’s a lot of blood pumping through Lua these days, not to mention MATLAB, Mathematica and a handful of others. If you’re feeling particularly adventurous, Haskell apparently lets you pick your poison at startup, and in what has to be the most lunatic thing I’ve seen on a piece of silicon since I found out the MIPS architecture had runtime-mutable endianness, Visual Basic (up to v6.0) featured the OPTION BASE flag, letting you flip that coin on a per-module basis. Zero- and one-origin arrays in different corners of the same program! It’s just software, why not?

All that is to say that starting at 1 is not an unreasonable position at all; to a typical human, thinking about the zeroth element of an array doesn’t make any more sense than trying to catch the zeroth bus that comes by, but we’ve clearly ended up here somehow. So what’s the story there?

The usual arguments involving pointer arithmetic and incrementing by sizeof(struct) and so forth describe features that are nice enough once you’ve got the hang of them, but they’re also post-facto justifications. This is obvious if you take the most cursory look at the history of programming languages; C inherited its array semantics from B, which inherited them in turn from BCPL, and though BCPL arrays are zero-origin, the language is typeless and word-oriented – there are no structs, and no sizeof to scale a pointer by. On top of that, other languages that antedate BCPL and C aren’t zero-indexed. Fortran arrays start at one, and arrays in Algol 60 are arbitrarily indexed – they’re just a range from X to Y, and X and Y don’t even need to be positive integers.

So by the early 1960s, there are three different approaches to the data structure we now call an array (a sketch of the address arithmetic each implies follows the list):

  • Zero-indexed, in which the array index carries no particular semantics beyond its implementation in machine code.
  • One-indexed, identical to the matrix notation people have been using for quite some time. It comes at the cost of a CPU instruction to manage the offset; usability isn’t free.
  • Arbitrary indices, in which the range is significant with regard to the problem you’re up against.
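To make that cost concrete, here’s a minimal C-flavored sketch of the address arithmetic each scheme implies. The function names and the four-byte word size are mine, purely for illustration; nothing like this appears in any of the languages above.

#include <stdio.h>

/* Illustrative only: the address a compiler (or CPU) has to compute
   for subscript i, given the array's base address. */
static unsigned addr_zero_origin(unsigned base, unsigned i) {
    return base + i * 4;                   /* A[0] sits exactly at base */
}

static unsigned addr_one_origin(unsigned base, unsigned i) {
    return base + (i - 1) * 4;             /* one extra subtraction, somewhere */
}

static unsigned addr_arbitrary(unsigned base, int lo, int i) {
    return base + (unsigned)(i - lo) * 4;  /* Fortran-style range from lo up */
}

int main(void) {
    printf("%u %u %u\n",
           addr_zero_origin(1000, 3),      /* 1012 */
           addr_one_origin(1000, 3),       /* 1008 */
           addr_arbitrary(1000, -5, 3));   /* 1032 */
    return 0;
}

The whole argument about cost comes down to that “- 1” in the middle function: somebody has to pay for it, and the interesting question turns out to be whether it’s paid by the CPU, the compiler, or the programmer.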

So if your answer started with “because in C…”, you’ve been repeating a good story you heard one time, without ever asking yourself if it’s true. It’s not about i = a + n*sizeof(x), because pointers and structs didn’t exist yet. And that’s the most coherent argument I can find; there are dozens of other arguments for zero-indexing involving “natural numbers” or “elegance” or some other unresearched hippie voodoo nonsense that are either wrong or too dumb to rise to the level of wrong.

The fact of it is this: before pointers, structs, C and Unix existed, at a time when other languages with a lot of resources and (by the standard of the day) user populations behind them were one- or arbitrarily-indexed, somebody decided that the right thing was for arrays to start at zero.

So I found that person and asked him.

His name is Dr. Martin Richards; he’s the creator of BCPL, now almost seven years into retirement; you’ve probably heard of one of his doctoral students, Eben Upton, creator of the Raspberry Pi. I emailed him to ask why he decided to start counting arrays from zero, way back then. He replied:

As for BCPL and C subscripts starting at zero. BCPL was essentially designed as a typeless language close to machine code. Just as in machine code registers are typically all the same size and contain values that represent almost anything, such as integers, machine addresses, truth values, characters, etc. BCPL has typeless variables just like machine registers capable of representing anything. If a BCPL variable represents a pointer, it points to one or more consecutive words of memory. These words are the same size as BCPL variables. Just as machine code allows address arithmetic so does BCPL, so if p is a pointer p+1 is a pointer to the next word after the one p points to. Naturally p+0 has the same value as p. The monodic indirection operator ! takes a pointer as its argument and returns the contents of the word pointed to. If v is a pointer !(v+i) will access the word pointed to by v+i. As i varies from zero upwards we access consecutive locations starting at the one pointed to by v when i is zero. The dyadic version of ! is defined so that v!i = !(v+i). v!i behaves like a subscripted expression with v being a one-dimensional array and i being an integer subscript. It is entirely natural for the first element of the array to have subscript zero. C copied BCPL’s approach using * for monodic ! and [ ] for array subscription. Note that, in BCPL v!5 = !(v+5) = !(5+v) = 5!v. The same happens in C, v[5] = 5[v]. I can see no sensible reason why the first element of a BCPL array should have subscript one. Note that 5!v is rather like a field selector accessing a field in a structure pointed to by v.
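That identity still holds, and you can watch it work in any C compiler today; here’s a tiny, entirely legal (if inadvisable) program demonstrating the v[5] = 5[v] equivalence Dr. Richards describes:

#include <stdio.h>

/* C defines a[i] as *(a + i); addition commutes, so *(a + i) == *(i + a),
   which makes i[a] legal C as well. */
int main(void) {
    int v[8] = {10, 11, 12, 13, 14, 15, 16, 17};

    printf("%d\n", v[5]);      /* 15 */
    printf("%d\n", *(v + 5));  /* 15: what v[5] actually means */
    printf("%d\n", 5[v]);      /* 15: BCPL's 5!v, wearing C's syntax */
    return 0;
}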

Dr. Richards’ reply is interesting for a number of reasons, though I’ll leave their enumeration to your discretion. The one that I find most striking, though, is that this is the earliest example I can find of the understanding that a programming language is a user interface, and that there are difficult, subtle tradeoffs to make between resources and usability. Remember, all this was at a time when everything about the future of human-computer interaction was up in the air, from the shape of the keyboard and the glyphs on the switches and keycaps right down to how the ones and zeros were manifested in paper ribbon and bare metal; this note by the late Dennis Ritchie might give you a taste of the situation, where he mentions that one of the primary reasons C went with square-bracket array notation five years later was that it was getting steadily easier to reliably find square brackets on the world’s keyboards.

“Now just a second, Hoye”, I can hear you muttering. “I’ve looked at the BCPL manual and read Dr. Richards’ explanation and you’re not fooling anyone. That looks a lot like the efficient-pointer-arithmetic argument you were frothing about, except with exclamation points.” And you’d be very close to right. That’s exactly what it is – the distinction is where those efficiencies take place, and why.

BCPL was first compiled on an IBM 7094 (here’s a picture of the console, though the entire computer took up a large room) running CTSS – the Compatible Time Sharing System – which antedates Unix much as BCPL antedates C. There’s no malloc() in that context, because there’s nobody to share the memory core with. You get the entire machine and the clock starts ticking, and when your wall-clock time block runs out, that’s it. But here’s the thing: in that context, none of the offset calculations we’re supposedly economizing are done at execution time. All that work is done ahead of time by the compiler.

You read that right. That sheet-metal, “wibble-wibble-wibble” noise your brain is making is exactly the right reaction.

Whatever justifications or advantages came along later – and it’s true, you do save a few processor cycles here and there and that’s nice – the reason we started using zero-indexed arrays was because it shaved a couple of processor cycles off of a program’s compilation time. Not execution time; compile time.

Does it get better? Oh, it gets better:

IBM had been very generous to MIT in the fifties and sixties, donating or discounting its biggest scientific computers. When a new top of the line 36-bit scientific machine came out, MIT expected to get one. In the early sixties, the deal was that MIT got one 8-hour shift, all the other New England colleges and universities got a shift, and the third shift was available to IBM for its own use. One use IBM made of its share was yacht handicapping: the President of IBM raced big yachts on Long Island Sound, and these boats were assigned handicap points by a complicated formula. There was a special job deck kept at the MIT Computation Center, and if a request came in to run it, operators were to stop whatever was running on the machine and do the yacht handicapping job immediately.

Jobs on the IBM 7090, one generation behind the 7094, were batch-processed, not timeshared; you queued up your job along with a wall-clock estimate of how long it would take, and if it didn’t finish it was pulled off the machine, the next job in the queue went in and you got to try again whenever your next block of allocated time happened to be. As in any economy, there is a social context as well as a technical context, and it isn’t just about managing cost, it’s also about managing risk. A programmer isn’t just racing the clock, they’re also racing the possibility that somebody will come along and bump their job and everyone else’s out of the queue.

I asked Tom Van Vleck, author of the above paragraph and also now retired, how that worked. He replied in part that on the 7090…

“User jobs were submitted on cards to the system operator, stacked up in a big tray, and a rudimentary system read, loaded, and ran jobs in sequence. Typical batch systems had accounting systems that read an ID card at the beginning of a user deck and punched a usage card at end of job. User jobs usually specified a time estimate on the ID card, and would be terminated if they ran over. Users who ran too many jobs or too long would use up their allocated time. A user could arrange for a long computation to checkpoint its state and storage to tape, and to subsequently restore the checkpoint and start up again.

The yacht handicapping job pertained to batch processing on the MIT 7090 at MIT. It was rare — a few times a year.”

So: the technical reason we started counting arrays at zero is that in the mid-1960’s, you could shave a few cycles off of a program’s compilation time on an IBM 7094. The social reason is that we had to save every cycle we could, because if the job didn’t finish fast it might not finish at all and you never know when you’re getting bumped off the hardware because the President of IBM just called and fuck your thesis, it’s yacht-racing time.

There are a few points I want to make here.

The first thing is that as far as I can tell nobody has ever actually looked this up.

Whatever programmers think about themselves and these towering logic-engines we’ve erected, we’re a lot more superstitious than we realize. We tell and retell this collection of unsourced, inaccurate stories about the nature of the world without ever doing the research ourselves, and there’s no other word for that but “mythology”. Worse, by obscuring the technical and social conditions that led humans to make these technical and social decisions, by talking about the nature of computing as we find it today as though it’s an inevitable consequence of an immutable set of physical laws, we’re effectively denying any responsibility for how we got here. And worse than that, by refusing to dig into our history and understand the social and technical motivations for those choices, by steadfastly refusing to investigate the difference between a motive and a justification, we’re disavowing any agency we might have over the shape of the future. We just keep mouthing platitudes and pretending the way things are is nobody’s fault, and the more history you learn and the more you look at the sad state of modern computing, the more pathetic and irresponsible that sounds.

Part of the problem is access to the historical record, of course. I was in favor of Open Access publication before, but writing this up has cemented it: if you’re on the outside edge of academia, $20/paper for any research that doesn’t have a business case and a deep-pocketed backer is completely untenable, and speculative or historic research that might require reading dozens of papers to shed some light on longstanding questions is basically impossible. There might have been a time when this was OK and everyone who had access to or cared about computers was already an IEEE/ACM member, but right now the IEEE – both as a knowledge repository and a social network – is a single point of a lot of silent failure. “$20 for a forty-year-old research paper” is functionally indistinguishable from “gone”, and I’m reduced to emailing retirees to ask them what they remember from a lifetime ago because I can’t afford to read the source material.

The second thing is how profoundly resistant to change or growth this field is, and apparently has always been. If you haven’t seen Bret Victor’s talk about The Future Of Programming as seen from 1975 you should, because it’s exactly on point. Over and over again as I’ve dredged through this stuff, I kept finding programming constructs, ideas and approaches we call part of “modern” programming if we attempt them at all, sitting abandoned in 45-year-old demo code for dead languages. And to be clear: that was always a choice. Over and over again tools meant to make it easier for humans to approach big problems are discarded in favor of tools that are easier to teach to computers, and that decision is described as an inevitability.

This isn’t just Worse Is Better, this is “Worse Is All You Get Forever”. How many off-by-one disasters could we have avoided if the “foreach” construct that existed in BCPL had made it into C? How much more insight would all of us have into our code if we’d put the time into making Michael Chastain’s nearly-omniscient debugging framework – PTRACE_SINGLESTEP_BACKWARDS! – work in 1995? When I found this article by John Backus wondering if we can get away from Von Neumann architecture completely, I wondered where that ambition to rethink our underpinnings went. But the fact of it is that it didn’t go anywhere. Changing how you think is hard and the payoff is uncertain, so by and large we decided not to. Nobody wanted to learn how to play, much less build, Engelbart’s Violin, and instead everyone gets a box of broken kazoos.

In truth maybe somebody tried – maybe even succeeded! – but it would cost me hundreds of dollars to even start looking for an informed guess, so that’s the end of that.

It’s hard for me to believe that the IEEE’s membership isn’t going off a demographic cliff these days as their membership ages, and it must be awful knowing they’ve got decades of delicious, piping-hot research cooked up that nobody is ordering while the world’s coders are lining up to slurp watery gruel out of a Stack-Overflow-shaped trough and pretend they’re well-fed. You might not be surprised to hear that I’ve got a proposal to address both those problems; I’ll let you work out what it might be.

Bricks

I was going to write this to an internal mailing list, following this week’s PRISM excitement, but I’ve decided to put it here instead. It was written (and cribbed from other stuff I’ve written elsewhere) in response to an argument that encrypting everything would somehow solve a scary-sounding though imprecisely-specified problem, a claim you may not be surprised to find out I think is foolish.

I’ve written about this elsewhere, so forgive me, but: I think that it’s a profound mistake to assume that crypto is a panacea here.

Backstory time: in 1993, the NSA released SHA, the Secure Hashing Algorithm; you’ve heard of it, I’m sure. Not long afterwards – about two years, as it turns out – they came back and said no, stop, don’t use that. Use SHA-1 instead, here you go.

No explanation, nothing. But nobody else could even begin to make a case either way, so SHA-1 it is.

It’s 2004 before somebody manages to generate one, just one, collision in what’s now called SHA-0, and they do that by taking a theoretical attack that gets you close to a collision, generalizing it, and running it for around 80,000 CPU hours on a machine with 256 Itanium-2 processors, flat out on this one job for two weeks.

That hardware straight up didn’t exist in 1993. That was the year the original Doom came out, for what it’s worth, so it’s very likely that the “significant weakness” was found by a person or team of people scribbling on a whiteboard. And note: they found the weaknesses in that algorithm within a couple of years of publication, when it would take the public-facing crypto community more than a decade to discover that those holes – or indeed any holes at all – were even a theoretical possibility.

Now, wash that tender morsel down with this quote from an article in Wired quoting James Bamford, longtime writer about all things NSA:

“According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: “Everybody’s a target; everybody with communication is a target.””

“Many average computer users in the US”? Welp. That’s SSL, then.

So odds are good that what we here in the public and private sectors consider to be strong crypto isn’t much more of an impediment for the NSA than ROT-13. In the public sector AES-128 is considered sufficient for information up to level “secret” only; AES-256 is for “top secret”, and both are part of the NSA’s Suite B series of cryptographic algorithms, outlined here.

Suite A is unlikely to ever see the light of day, not even so much as the algorithms’ names. The important thing this suggests is that the NSA may internally have a class break for their recommended Suite B crypto algorithms – or at least an attack that makes decryption computationally feasible for a small set of people that includes themselves – and likely one for anything weaker or with known design flaws.

The problem that needs to be addressed here is a policy problem, not a technical one. And that’s actually great news, because if you’re getting into a pure-math-and-computational-power arms race with the NSA, you’re gonna have a bad time.

Don't Interrupt

We took Arthur’s Science Fair Trouble out of the library for Maya the other day, and let me tell you: I had always suspected that most of what adults tell you is bullshit, but children’s books live at some horrible Venn overlap of Moore and Sturgeon’s respective Laws where 90% of everything is not only crap but getting twice as crappy every year and a half or so.

I had to go over this book carefully with Maya after I read it, to explain to her why every single part of it is wrong. The description from the dust cover reads:

Arthur has to do a science fair project, but all of the good ideas are taken: Buster is building a rocket, Muffy is growing crystals, and Francine is making a bird feeder. Arthur learns a valuable lesson when he finds his father’s old solar system project in the attic and tries to use it for his own science fair project.

That’s right: Arthur’s in a pickle, because all the good science ideas have been done by other children doing wholly original work. But when Arthur instead decides to update his father’s old solar system project (repainting it) and present that, he feels, we are told, terribly guilty, finally breaking down after winning first prize to admit the work wasn’t wholly his. He is suitably chastised, of course.

I don’t think Maya understood my rant about why verifying old assumptions was incredibly valuable, not merely per se but particularly in light of Pluto’s redefined status and the inclusion of Eris and Ceres in the “Dwarf Planet” category as well.

I had to explain to her that Arthur was illustrating the evolution of cosmology by repurposing and updating older (handmade by his father!) demonstration materials, which is not only great on its own, but vastly better scientific and expository work than that of his classmates, none of whom showed any insight into why assembling premanufactured toys might not count as science.

“Maya, the people harassing Arthur for this are lazy, ignorant people saying dumb things to make Arthur feel bad, and Arthur is wrong to feel bad about his work. Building on top of each others’ work is the only reason we have this world of incredible, miraculous wonder we live in, and don’t let anyone tell you otherwise.”

I don’t think it stuck, but I’ll keep repeating it.

I was thinking about this today when this quote from Mark Twain on plagiarism started making the rounds:

Mark Twain, letter to Helen Keller, after she had been accused of plagiarism for one of her early stories (17 March 1903), published in Mark Twain’s Letters, Vol. 1 (1917) edited by Albert Bigelow Paine, p. 731:

Oh, dear me, how unspeakably funny and owlishly idiotic and grotesque was that “plagiarism” farce! As if there was much of anything in any human utterance, oral or written, except plagiarism! The kernel, the soul — let us go further and say the substance, the bulk, the actual and valuable material of all human utterances — is plagiarism. For substantially all ideas are second-hand, consciously and unconsciously drawn from a million outside sources, and daily used by the garnerer with a pride and satisfaction born of the superstition that he originated them; whereas there is not a rag of originality about them anywhere except the little discoloration they get from his mental and moral calibre and his temperament, and which is revealed in characteristics of phrasing. When a great orator makes a great speech you are listening to ten centuries and ten thousand men — but we call it his speech, and really some exceedingly small portion of it is his. But not enough to signify. It is merely a Waterloo. It is Wellington’s battle, in some degree, and we call it his; but there are others that contributed. It takes a thousand men to invent a telegraph, or a steam engine, or a phonograph, or a photograph, or a telephone or any other important thing — and the last man gets the credit and we forget the others. He added his little mite — that is all he did. These object lessons should teach us that ninety-nine parts of all things that proceed from the intellect are plagiarisms, pure and simple; and the lesson ought to make us modest. But nothing can do that.

Which is all to say: Constant vigilance!

According to Wolfram Alpha, there are 2.9 x 10^6 dietary calories in a cubic meter of cheese, 142829% of your recommended daily caloric intake.

Furthermore, there are 8.468 x 10^47 cubic meters in a cubic light year. From this, we can conclude that there are 2.455 x 10^54 dietary calories in a cubic light year of cheese.

According to NASA the sun produces 3.8 x 10^33 ergs/sec, or roughly 3.8 x 10^26 joules/sec. Over the course of a year that adds up to approximately 1.2 x 10^34 joules of energy.

One dietary calorie or “kilocalorie” equals about 4184 joules. Doing the math, we conclude it will take roughly 8.6 x 10^23 years for our sun to generate the same amount of energy as is contained in a cubic light year of cheese.

Be warned, however, that at 977 kilograms per cubic meter, or 8.27 x 10^50 kilograms per cubic light year, the Schwarzschild radius of a cubic light year of cheese would be 1.23 x 10^24 meters, significantly greater than the 9.46 x 10^15 meters in a light year. From this we can conclude that a cubic light year of cheese, should it somehow manifest itself, would immediately collapse into a black hole.

So while you would think a cubic light year of cheese would be the obvious choice over the sun, if you are presented with a choice between them, the numbers suggest you would be far better off choosing the sun.

These numbers assume cheese of approximately constant density. Swiss cheeses require much more sophisticated modelling.
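If you’d like to check the arithmetic yourself rather than trusting Wolfram (or me), here’s a quick back-of-the-envelope program; this wasn’t part of the original calculation, the physical constants are standard reference values, and the cheese figures are the ones quoted above.

#include <stdio.h>

int main(void) {
    const double kcal_per_m3     = 2.9e6;    /* dietary calories per m^3 of cheese */
    const double m3_per_ly3      = 8.468e47; /* cubic meters per cubic light year */
    const double joules_per_kcal = 4184.0;
    const double sun_watts       = 3.8e26;   /* solar output, joules/sec */
    const double sec_per_year    = 3.156e7;

    double cheese_joules = kcal_per_m3 * m3_per_ly3 * joules_per_kcal; /* ~1.03e58 J */
    double sun_joules_yr = sun_watts * sec_per_year;                   /* ~1.2e34 J/year */
    printf("years of sunshine per cubic light year of cheese: %.2g\n",
           cheese_joules / sun_joules_yr);                             /* ~8.6e23 */

    /* Schwarzschild radius r = 2GM/c^2 for the cheese */
    const double G = 6.674e-11, c = 2.998e8;
    double mass_kg = 977.0 * m3_per_ly3;                               /* ~8.27e50 kg */
    printf("Schwarzschild radius: %.3g meters\n",
           2.0 * G * mass_kg / (c * c));                               /* ~1.23e24 m */
    return 0;
}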

(This article has been updated to reflect a comment from Jin, seen below, who notes that Wolfram returns dietary calorie units, which is to say kilocalories, rather than simply calories. The original claim, that it would take the sun 1.7 x 10^17 years to generate the same amount of energy as is contained in a cubic light-year of cheese was inaccurate, and has been corrected above. The author sincerely regrets any inconvenience this may have caused.)

I don’t think I’m actually done with this, so just pretend it’s a late draft. I might try to tighten it up later, but here you go; I hope you’re interested. Yeah, this is still about Portal 2, so bear with me. It’s not like Gears Of War deserves to be dissected like this, you know?

I’ve been spending some time chasing this idea around in the bowels of the Aperture Science facility, taking copious notes as I wander through the middle bits of Portal 2 again. There’s some important context here that it may help to be familiar with, but just playing through Portal 1 and 2 should be plenty.

It’s probably because I’m sentimental, but to my mind an important thing about Quest- or FPS-RPGs that doesn’t get much attention, at least as far as video games are concerned, is that you actually are playing a role. Video games differ fundamentally from most narratives (and are closer to real life, in this sense) in that you are being allowed to shape a story and participate in a universe that you don’t fully own and can’t fully command; the character whose role you play predates your presence in that space, and has a story that is in some sense theirs, reaching forward and back beyond your brief manipulation of their limbs and choices. Sometimes you need to take the time, wherever your character finds themselves – a dungeon, a running firefight, a ruined building or an open field – to do something that’s not relevant to your goals, or even to you personally, just to do some justice to the character you’re playing.

I found a lot of the “Rat Man’s Dens” on my first playthrough, being the sort of person who looks for the seams. Specifically, I found that corner of the facility where one of the radios, rather than playing the tinny Aperture-marimba, is playing The National’s “Exile Vilify”.

Did you find it? What did you do, then? It occurred to me as I sat there that this is the first piece of music we’ve really heard, in-game. But maybe, and maybe worse, there’s a decent chance that this slow lament about the burdens of alienation might actually be the only song Chell has ever heard.

I wondered what that might do to a person, how suspicious they’d be to have found that thing in that place, and how they’d react. Is it even possible to guess how somebody might feel in that situation? I crouched down to stare at the radio, listening to it all the way through before going back to finish that test. It seemed appropriate. I doubt it had any effect on the game at all (but who can know, with Valve?) but I have a sense that my participation in the game was improved somehow by it, and it’s hard to argue with that metric.

Anyway, let’s get back on track here.

So apropos of nothing, or at least it was at the time, a few months ago I wrote about the implications of the cave in Plato’s well-known metaphor having its own agency. It’s odd that the idea would find some traction in a discussion about the plot of a video game but, I guess, where else?

The idea of immortality which appears in syncretistic religions of antiquity was introduced in late antiquity. The mysteries represented the myth of the abduction of Persephone from her mother Demeter by the king of the underworld Hades, in a cycle with three phases, the “descent” (loss), the “search” and the “ascent”, with main theme the “ascent” of Persephone and the reunion with her mother.

– Wikipedia on the Eleusinian Mysteries.

Here’s a question for you: how many protagonists are there in Portal 2? Chell, GLaDOS and Wheatley… three, right? And you’re resurrected in the midst of Aperture Science’s protracted decay, to be dropped into this forgotten, sealed-off subterranean wing of Aperture after GLaDOS and Wheatley’s first confrontation, to struggle back up the mine shaft and restore the status quo ante.

That’s the game, to a certain superficial approximation. And all of that has to be wrong; there are hundreds of little details in-game that put the lie to it. Portal 2 isn’t a simple or superficial game, not at all.

Though Demeter is often described simply as the goddess of the harvest, she presided also over the sanctity of marriage, the sacred law, and the cycle of life and death. She and her daughter Persephone were the central figures of the Eleusinian Mysteries that predated the Olympian pantheon.

– Wikipedia on Demeter

The first problem, as I mentioned earlier, is all these little things that are where they really shouldn’t be. At the very bottom of Test Shaft 09, once you’ve passed Abandonment Seal Zulu Bunsen and entered Aperture’s antechambers, you start to see the signs that these sealed-off and abandoned facilities aren’t nearly as sealed off or abandoned as you think. All the lights are still on, doors are still powered, and they’re still controlled by devices with clean, white lines and modern-era lens-blade Aperture logos on the side. Likewise the hazmat warning labels on the pipes and vats as you ascend from the depths: with modern warnings and modern logos, this isn’t the long-abandoned facility it seems to be at first glance.

There are other problems, like: in the last room before you ascend back through the containment door to modern Aperture, what activates that lift? You don’t. There are no switches, no panels; the door just closes and up you go. The same thing happens in the moments before you meet Wheatley again; there are stairs everywhere else, but here, for no architectural reason, a lift you don’t actuate yourself hoists you up to the entrance of the next chamber.

You can see where I’m going with this by now. There aren’t three protagonists here; there are four. Portal 2 doesn’t make sense unless you consider the Aperture Science facility itself as an agent in its own right.

And it gets weirder, because it seems likely that the Aperture facility is the manifestation of its creator, Cave Johnson.

When Wheatley slams you down the shaft that drops you into the bowels of Aperture, it’s worth asking: why is that shaft even there? There’s no structural reason for it, and when you get to the bottom of it, there’s nothing else down there with you. It has to be something else. Another question worth taking a good hard look at is, what are you actually doing while you’re down there?

In Greek mythology, Tartarus is both a deity and a place in the underworld. In ancient Orphic sources and in the mystery schools, Tartarus is also the unbounded first-existing entity from which the Light and the cosmos are born.

– Wikipedia on Tartarus

It’s pretty well-established that GLaDOS is the electronic (and likely very much unwilling) reincarnation of Caroline, Cave Johnson’s personal assistant. It’s not much of a stretch to say that Chell is in all likelihood Caroline’s daughter, and likely by Cave. Indeed, partway through your ascent you get a disturbing glimpse of Chell’s backstory when you come across a slew of science fair projects: the one with the hugely overgrown potato (whose shape bears a more-than-passing resemblance to that of GLaDOS, with its roots threading up into the ceiling) has two noteworthy details, one being the line that it involved a “special ingredient from daddy’s work”, and the other being that it’s signed “Chell”.

“For the record you are adopted and that’s terrible. Just work with me.”

– GLaDOS to Chell, Portal 2.

The chronology here is ambiguous, but Chell would have to have been between about six and ten years old to have made the potato-battery project. Cave Johnson’s last recorded message in the Aperture test spheres said unambiguously that “If I die before you people can pour me into a computer, I want Caroline to run this place. Now she’ll argue, she’ll say she can’t – she’s modest like that. But you make her! Hell, put her in my computer, I don’t care.” If Bring Your Daughter To Work Day was when everything went wrong, it’s likely that Caroline, forcibly decanted into GLaDOS, had already been a victim of that process. GLaDOS stands for Genetic Lifeform and Disk Operating System; it’s not clear what being forced to be that genetic component entails, but the fact that GLaDOS physically resembles a bound, blindfolded and gagged woman is, I think, telling, and an important part of the story.

“Sorry boys, she’s married – to science!”

– Cave Johnson, introducing Caroline in his first recorded message.

The timing seems wrong – Chell is clearly a lot older than 10, likely in her mid-to-late twenties in-game, and it’s not clear when Cave Johnson died of the moon-rock poisoning he suffered. “Daddy’s work” seems to imply that Chell’s father was still alive at the time, but it’s possible it just means “from the place my dad worked”, rather than that he made it himself. Either way, it’s pretty clear given the chronology that Chell really was adopted, but not by any other parents; she was adopted by Aperture. And Aperture – with its vast, relentless complexity, its advanced technology including “brain mapping”, and its mad-genius CEO – is, in a very real sense, both a deity and a place.

One day they woke me up
So I could live forever
It’s such a shame the same will never happen to you.
You’ve got your short sad life left,
(That’s what I’m counting on.)
I used to want you dead but now I only want you gone.

– lyrics from Want You Gone, Portal 2’s concluding song, written by Jonathan Coulton and sung by Ellen McLain as GLaDOS

What you’re really doing as you ascend through the history of Aperture from the bottom of Test Shaft 09 is resurrecting Aperture itself; resurrecting Cave, and reconnecting him to Caroline again, forever. And even though Cave ordered Caroline forcibly decanted into GLaDOS, he may not have wanted the same for his daughter, and now that the reawakened Caroline knows who she really is and who you are, she may not actually want that either.

And that’s why you’re ultimately sent away, and why Portal 2 is a weirder, creepier game than it first appears; while you’ve been solving all of these puzzle-tests, you’ve also been resurrecting your doomed parents to their respective (terrible, captive) immortalities, in the end being sent away so that “the same will never happen to you”. You make the last ascension serenaded by the facility itself; they’re left alone together as you emerge from the facility to a blue sky and a field full of tall wheat. It’s sometime in early autumn – harvest season – and you’re off to see the world, with your scorched old Companion Cube as a last going-away present from your parents.

I’m really interested in video games as narrative, and the possibilities virtual spaces open up to be examined through the lenses and terminologies of the various schools of literary criticism that are content to call anything that hits them in the eyes a text. There’s a lot of ground in that field to cover, and some of the best games are happy to give you a glimpse of the scope of the worlds they’re embedded in and the forces that shape them, a larger sense of who the protagonists are, and hint at the broad brushstrokes and hidden grammars of a story you’re barely a part of.

Portal 2 is great for this.

If you’re paying really close attention, there are a few interesting discontinuities in Portal 2. Some of them are… maybe more obvious than they should be. The low-hanging fruit comes while you’re fighting through Wheatley’s tests in the latter third of the game: when you first meet back up with her halfway up Test Shaft 09, GLaDOS tells you that she “literally doesn’t have the energy to lie to you”; later on she reverses herself on the claim that she didn’t stockpile test chambers when she’s called on it. Another one that might just be a continuity error comes up when you emerge from the last of Test Shaft 09’s pumping rooms; the walls below are marked “1982”, but stepping through the door leads you to a vitrification order dated 1961. Continuity seems pretty clear at that point, so maybe this is nothing?

But maybe it’s something, or a hint at something. Because at the very bottom of the mine, in the doorway out of the fifties-era Aperture Science offices – the ones where the first picture of Cave and his runner-up contractor-of-the-year awards hang – the sliding door is apparently controlled by a little white device with little square lights. And if you look closely, you’ll see it inscribed not with the fifties-era Aperture Science logo you’d expect, but with the most recent lens-blade Aperture Science logo, the one we all know and love. There’s no hint I can find anywhere else in the narrative that it has any right to be there, but there it is, and the implications for the story, both main- and back-, are pretty large.

I do like me some understanding of a good story. So, self, I said to myself, why not just ask?

So I sent some email to Chet Faliszek asking him: is it there on purpose, or is that an oversight?

And I got some email back just now from Erik Wolpaw and Mr. Faliszek saying:

Mike,

As you probably know, the answer to that seemingly innocent question would necessarily include partial answers to several even bigger questions. Nice try, though. Glad you liked the game!

Erik

I don’t know that I expected anything else, but there it is, and my slow-clap processor is running pretty hot right now. Whatever it means to the story, there’s functioning, modern-era Aperture Science technology deployed at the very, very bottom of Test Shaft 09, making a sliding door work.

First off, my colleague Donna wrote up a bit about the work we’ve been doing for the last few months. It’s been a pleasure to work with her, and I don’t really think of her as a crony but nobody tell her I said so.

The second thing is a way to get all the linuxes. That’s right, all of them; specifically a way to get a variety of them running in a single headless virtual machine on your OS of choice. You start with an Ubuntu .ISO and VirtualBox.

Install Ubuntu on a suitably capacious VM, make sure sshd is running and starts by default, then shut the VM down and quit VirtualBox. Then do two things; first, set yourself up with this script (the port-forwarding rules have to be in place before the VM starts, which is why they come first):

#!/bin/sh
# NAT port-forwarding rules are read when the VM boots, so set them before
# starting the machine. "Prime" is the VM's name, and "e1000" matches the
# emulated Intel NIC this VM uses.
VBoxManage setextradata Prime "VBoxInternal/Devices/e1000/0/LUN#0/Config/guestssh/Protocol" TCP
VBoxManage setextradata Prime "VBoxInternal/Devices/e1000/0/LUN#0/Config/guestssh/GuestPort" 22
VBoxManage setextradata Prime "VBoxInternal/Devices/e1000/0/LUN#0/Config/guestssh/HostPort" 2222
VBoxManage setextradata Prime "VBoxInternal/Devices/e1000/0/LUN#0/Config/guesthttp/Protocol" TCP
VBoxManage setextradata Prime "VBoxInternal/Devices/e1000/0/LUN#0/Config/guesthttp/HostPort" 8080
VBoxManage setextradata Prime "VBoxInternal/Devices/e1000/0/LUN#0/Config/guesthttp/GuestPort" 80
VBoxManage startvm Prime --type headless

(My VM’s name is “Prime” in this example, to clarify. Yours may not be.)

Then read this article by Ted Dziuba about running several versions of Linux, simultaneously and non-virtualized, on the same machine. It’s pretty cool, and that should set you up with All The Linuxes, should you happen to want all the linuxes.

From that you can SSH to localhost:2222 for Ubuntu and schroot between whatever other linuxes you desire. X-forwarding will help you here, and I wonder if you can add Android to that list? Hmm. Hmmmmm.

Next up: if you’re making changes to Firefox and don’t/won’t/can’t get at their Tryserver test harness, I just found out (duh, of course) that all their tests are in the source tree anyway. Add these lines to the end of your Makefile, and you can run the whole test harness locally with one command.

# Note: recipe lines below must be indented with real tabs, not spaces.
test-me:
    echo 'Running automated tests in 10 seconds. This can take a long time - hit control-C to end.' && sleep 10
    $(MAKE) -f $(topsrcdir)/obj-ff-dbg/Makefile crashtest
    $(MAKE) -f $(topsrcdir)/obj-ff-dbg/Makefile jstestbrowser
    $(MAKE) -f $(topsrcdir)/obj-ff-dbg/Makefile reftest
    $(MAKE) -f $(topsrcdir)/obj-ff-dbg/Makefile mochitest-plain
    $(MAKE) -f $(topsrcdir)/obj-ff-dbg/Makefile xpcshell-tests

Configure, make, make test-me, then wait. This is a run-overnight kind of thing – it will stomp on your machine pretty hard – but at least it will tell you if you broke anything. I was briefly tempted to call that target “trouble” or “come-at-me-bro” rather than “test-me”, but, I think wisely, elected not to.

Finally, I broke down and installed Fedora on my little netbook, and to my surprise it’s awfully pretty. I miss apt-get, but the new Gnome UI is actually great, wildly better and more discoverable than Win7. It’s actually a respectable little computer now, all things considered. Except, of course, my wireless doesn’t work, and if I put an SD card in, it won’t suspend anymore.

“Sysadmin” is a portmanteau of “administration” and “Sisyphus”, apparently.

Ideas get lodged in my head, and if they’re interesting enough – not necessarily “good”, mind you, but “interesting” – then I basically can’t do anything useful until I’ve gnawed away at them for hours. If it’s OCD that applies only to the inside of your head, is there even a word for that? Obsessive Compulsive Extrospection? Intramania? Let’s watch what happens as my friend Dave pursues his secret hobby of sneaking up on me and sticking broomhandles through the spokes of my brainwheels.

14:23 <@humph> mhoye: http://vimeo.com/20950590
14:31 < mhoye> what what
14:32 < mhoye> is he projecting directly onto the sensor?
14:32 < mhoye> That is so great.
14:37 <@humph> yeah
14:37 <@humph> seemed like you might like that
14:37 <@humph> that's what I do with software, done with cameras and lenses
14:38 < mhoye> Shadows on the cave.
14:38 < mhoye> I've never heard the shadows-on-cave-walls parable end with "We need a smarter cave".
14:39 < mhoye> But maybe that's an avenue of inquiry that's overdue.
14:43 < mhoye> About every third conversation I have with you makes me want to go sit in a dark corner for an hour or four just to turn the ideas over in my head, and then go write somebody else's doctoral thesis.
14:43 < mhoye> But I CANT because I have OTHER THINGS TO DO, dammit.
14:49 < mhoye> i don't even like you.

[...]

15:17 <mhoye> GAH
15:17 <mhoye> SERIOUSLY I AM TRYING TO DO WORK HERE
15:18 <mhoye> AND NOW ALL I CAN THINK ABOUT IS WHAT ARE THE IMPLICATIONS OF THE CAVE HAVING AGENCY IN THAT METAPHOR

I don’t think I’m being unreasonable about this at all.

The Light At The End

I was wondering the other day why investment banking, which is in theory a competitive service industry, appears to be so insanely profitable. A notion occurred to me, but not being an expert in the field it’s hard for me to evaluate its veracity. It’s got a certain sinister elegance to it, though, and if you’ll bear with me for a minute I just want to put this idea in your head.

The 2001 Nobel Prize in Economics went to Akerlof, Spence, and Stiglitz for their “analyses of markets with asymmetric information”, that is to say, the economic effects of the other guy knowing something you don’t. Akerlof’s classic paper on the subject is The Market For Lemons, of which Wikipedia provides a good summary, per usual. The more cynical among you are rightly saying, well yes, the economic effect of making a deal with somebody who knows way more than you do is that you lose your shirt, but that’s microeconomics; we’re talking macro here. There are no easy buckets on this court.

In any case, one thing I haven’t found in my cursory n00b investigation is something on the economic effect of what I will politely call an asymmetric understanding of the basic principles of modern markets and the naive company’s place in them. Which is to say, let’s imagine… I start like that because from what I can tell, “let’s imagine” is the traditional way of starting any argument about economics. Which probably tells you something about economics, now that I think about it. Seriously, try googling the name of your favorite economist plus “let’s imagine”, and count the Google hits. It’s eerie.

Anyway, you all know what derivatives – specifically, futures – are, right? The idea is that you can set up a long-term contract to sell a thing at some fixed price, locking the price in and letting the buyer at the other end absorb the risk, reaping the potential benefits or losses of a fluctuating marketplace. This lets our entirely imaginary A-One Flour Co. say “for the next five years we’re selling you this much flour for this price every year”, and whatever happens to the market price of flour, the extra profit or unexpected loss gets absorbed by whoever’s on the other side of that futures contract.

That, in short, is why futures are traded – there’s both risk and potential profit involved, ownership can change, etcetera. But our imagined A-One Flour, a company with one major input of “wheat” and a single output of “flour”, may choose to engage in the same sort of transaction on the wheat-purchasing end, to give themselves some stability on the supply side as well – a sensible move now that there’s a lot less flexibility available to them in terms of revenue. So they agree to buy a fixed amount of wheat for a fixed price over the course of the next few years, from some commodities trader whose hope is that the cost of wheat will drop, ensuring him some profit on the deal.

Now let’s say I’ve been watching all this, or more realistically I’ve had my computers watching all this. I see what the A-One people are up to, and because they’re traded commodities and I can, I buy both of those futures contracts.

Now: what just happened to A-One Flour? They no longer control, in a very real sense, the amount of money coming in, the amount of money going out, or who they buy from or sell to. They get wheat from me, they sell flour to me, and they’ve effectively been reduced from controlling their destiny to merely operating their machinery. They went looking for stability, and effectively traded control to get it. I own the complete set, in the Boardwalk & Park Place sense, of contracts for their material, and thus financial, inputs and outputs, and this effectively means that I’m the one who’s really in charge of the company. All that without a single share of A-One Flour changing hands.
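If it helps to see the mechanics, here’s a toy model with entirely made-up numbers – none of this comes from any real market, and the prices are mine: A-One’s margin is bolted down no matter what the market does, while every swing in the wheat-to-flour spread lands on the books of whoever holds both contracts.

#include <stdio.h>

/* Toy model, invented numbers: A-One sells flour forward at 12 and buys
   wheat forward at 7. Its margin never moves; the holder of both
   contracts absorbs every market swing. */
int main(void) {
    const double flour_fixed = 12.0, wheat_fixed = 7.0;
    const double market_flour[] = {10.0, 12.0, 15.0};
    const double market_wheat[] = { 9.0,  7.0,  5.0};

    for (int i = 0; i < 3; i++) {
        double a_one_margin = flour_fixed - wheat_fixed;     /* always 5.0 */
        double holder_net = (market_flour[i] - flour_fixed)  /* flour leg */
                          + (wheat_fixed - market_wheat[i]); /* wheat leg */
        printf("scenario %d: A-One margin %.1f, contract-holder net %+.1f\n",
               i, a_one_margin, holder_net);
    }
    return 0;
}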

Better still, if I can pull the same trick with B-One Bread Co., and pair up those futures contracts profitably? That’s a pipe that spews money. And maybe even better than that, this is de-facto inside information about how profitable (or not!) A-One is going to be in the next year or three. So I have this great arms-length way to engage in what would normally be insider trading, knowing what’s going to happen to A-One long before shareholders or the public does. And it’s an oversimplified example, sure, but I’d be surprised if it wasn’t already a well-understood process in some of the taller office buildings of the world.

I haven’t thought of a better way to make money recently, but I’ll let you know if I do.

A friend of mine recently expressed some shock when I told him that I have no problem at all with my daughter playing video games, but I’d rather she not watch television. “Really?”, he said.

Life Skills

Yeah, really. And the more TV hits me in the eyes the more convinced I am that I’m entirely in the right.

From a practical standpoint, video games have a lot of things going for them. They’re either in the house or they’re not, for one; you don’t worry too much about your kid stumbling over something with wildly objectionable content. More importantly, the content I find most objectionable about television is the advertising. Video games don’t, by and large, spend eight minutes of every half hour of use shivving advertising into your child’s eyes, which is unambiguously a win.

And they’re participatory! You can play games with your child, either by taking turns or cooperatively, and more and more of these games can be fun, rewarding experiences for all involved. When was the last time you were done watching television and thought, we did that? We beat the bad guys together, we finished that quest together, we win?

And if my daughter is ever going to drive a Lamborghini into a concrete wall at 250mph I’d rather it be in Gran Turismo, frankly.

More philosophically but also of tier-one importance to me is that video games (especially of the open-world variety) don’t just offer you a choice, but the act of playing them forces you to make choices. There’s no detached voyeurism here and you are not, either in which games you have or in actually playing them, absolved of your own agency in this process.

I’m sure that McLuhanites or some other school of metamedia junkies have some better word for this, but medical and crime-scene dramas are just about the canonical example of what I’ve been referring to, for lack of a better term, as “agency porn”. Pretty, driven people with morals and ideals and goals on the screen, having these heavy emotional relationships the viewer can turn off with a button, doing ostensibly important work you’ll never do and periodically getting splattered with entrails that don’t belong to anyone you care about; pornography of a life of decision and consequences, instead of sex.

A Fistful Of Noodle

These things are consumed without the least input or interaction, uncritically. And I am 100% convinced that if you watch enough of these it skews your view of the world. I don’t think it’s a coincidence that the startling rise of helicopter parenting, overprotectionism and the general pushback to letting kids have any kind of personal freedom has happened at the same time as these viscerally vivid crime dramas about child abductions and serial killers have moved towards being on TV 24/7.

I want no part of any of that. I mean, it’s hardly news that if you pick the right channels, you can watch CSI-alikes that make A Clockwork Orange’s “ultraviolence” look like a pillowfight from noon to midnight on any given day, but just as an aside: Christmas day of 2009, A&E decided to run a 24-hour CSI marathon. 24 hours of murder-porn on Christmas day; way to go, A&E. I’m not saying it was better when I was a kid, because it wasn’t, but when I was a kid it also wasn’t possible to watch formulaic murder-porn nonstop through the Christmas holidays.

Sure, there are games like the Grand Theft Auto or Gears Of War series out there, but they’re big-kid games you don’t get free with basic cable. (In GTA3, you can just walk down to the hospital, take an ambulance and drive around picking people up and driving them back to the ER, if that’s what you really want to do. Which might be where all the chum they grind through in those medical dramas comes from, now that I think about it.) And I am not even a little opposed to the existence of games like the (awesome) God Of War series or (the awesome) Assassin’s Creed 2; I’m just saying that there’s a distinction to be made between pornography, art and harmless, healthy fun, as much in violence and its various portrayals as in sex, and an age to start finding out about all of it.

But it is critically important to me that Maya knows that what she sees on the screen is there by choice, and that she engages media in a way that allows and encourages choice. I think those choices are deeply hidden by regular television and I firmly believe that worse than the greed, the obscene violence and routine debasement, worse than the crappy writing and the idiotic commercials is the habit of passive acceptance cultivated by the viewer’s perfect inability to engage.

Science!

And I want to introduce her to this stuff on mom and dad’s schedule, deliberately, not by some accident of numbed channel surfing. And besides, when she thinks she’s ready (maybe, maybe not, maybe almost…) for something Dad doesn’t approve of? That’ll probably be a negotiation and a half, and an interesting day for sure. But she’ll have to go after it, it’s not just going to roll in here on its own.

Which will be kind of the point.
