blarg?

March 27, 2019

Defined By Prosodic And Morphological Properties

Filed under: academia,academic,awesome,interfaces,lunacy,science — mhoye @ 5:09 pm


I am fully invested in these critical advances in memetic linguistics research:

[…] In this paper, we go beyond the aforementioned prosodic restrictions on novel morphology, and discuss gradient segmental preferences. We use morphological compounding to probe English speakers’ intuitions about the phonological goodness of long-distance vowel and consonant identity, or complete harmony. While compounding is a central mechanism for word-building in English, its phonology does not impose categorical vowel or consonant agreement patterns, even though such patterns are attested cross-linguistically. The compound type under investigation is a class of insult we refer to as shitgibbons, taking their name from the most prominent such insult which recently appeared in popular online media (§2). We report the results of three online surveys in which speakers rated novel shitgibbons, which did or did not instantiate long-distance harmonies (§3). With the ratings data established, we follow two lines of inquiry to consider their source: first, we compare shitgibbon harmony preferences with the frequency of segmental harmony in English compounds more generally, and conclude that the lexicon displays both vowel and consonant harmony (§4); second, we attribute the lack of productive consonant harmony in shitgibbons to the attested cross-linguistic harmonies, which we implement as a locality bias in MaxEnt grammar (§5).

You had me at “diddly-infixation”.

August 15, 2018

Time Dilation

Filed under: academic,digital,documentation,interfaces,lunacy,mozilla,science,work — mhoye @ 11:17 am


[ https://www.youtube.com/embed/JEpsKnWZrJ8 ]

I riffed on this a bit over on Twitter some time ago; this has been sitting in the drafts folder for too long, and it’s incomplete, but I might as well get it out the door. Feel free to suggest additions or corrections if you’re so inclined.

You may have seen this list of latency numbers every programmer should know, and I trust we’ve all seen Grace Hopper’s classic description of a nanosecond at the top of this page, but I thought it might be a bit more accessible to talk about CPU-scale events in human-scale transactional terms. So: if a single CPU cycle on a modern computer was stretched out as long as one of our absurdly tedious human seconds, how long do other computing transactions take?

If a CPU cycle is 1 second long, then:

  • Getting data out of L1 cache is about the same as getting your data out of your wallet: about 3 seconds.
  • At 9 to 10 seconds, getting data from L2 cache is roughly like asking your friend across the table for it.
  • Fetching data from the L3 cache takes a bit longer – it’s roughly as fast as having an Olympic sprinter bring you your data from 400 meters away.
  • If your data is in RAM you can get it in about the time it takes to brew a pot of coffee; this is how long it would take a world-class athlete to run a mile to bring you your data, if they were running backwards.
  • If your data is on an SSD, though, you can have it in six to eight days, equivalent to having it delivered from the far side of the continental U.S. by bicycle, about as fast as that has ever been done.
  • In comparison, platter disks are delivering your data by horse-drawn wagon, over the full length of the Oregon Trail. Something like six to twelve months, give or take.
  • Network transactions are interesting – platter disk performance is so poor that fetching data from your ISP’s local cache is often faster than getting it from your platter disks; at two to three months, your data is being delivered to New York from Beijing, via container ship and then truck.
  • In contrast, a packet requested from a server on the far side of an ocean might as well have been requested from the surface of the moon, at the dawn of the space program – about eight years, from the beginning of the Apollo program to Armstrong, Aldrin and Collins’s successful return to Earth.
  • If your data is in a VM, things start to get difficult – a virtualized OS reboot takes about the same amount of time as has passed between the Renaissance and now, so you would need to ask Leonardo Da Vinci to secretly encode your information in one of his notebooks, and have Dan Brown somehow decode it for you in the present? I don’t know how reliable that guy is, so I hope you’re using ECC.
  • That’s all if things go well, of course: a network timeout is roughly comparable to the elapsed time between the dawn of the Sumerian Empire and the present day.
  • In the worst case, if a CPU cycle is 1 second, cold booting a racked server takes approximately all of recorded human history, from the earliest Indonesian cave paintings to now.
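
If you want to play with these numbers yourself, here’s a minimal sketch of the scaling. The cycle time assumes a roughly 3 GHz clock, and the latency figures are the usual order-of-magnitude ballparks, not measurements; adjust to taste.

```python
CYCLE_NS = 0.3  # one CPU cycle on a ~3 GHz machine, in nanoseconds (assumption)

# (event, approximate real-world latency in nanoseconds) -- ballpark figures
LATENCIES_NS = [
    ("L1 cache hit",                  1),
    ("L2 cache hit",                  3),
    ("L3 cache hit",                 12),
    ("RAM access",                  100),
    ("SSD random read",         150_000),
    ("Platter disk seek",     8_000_000),
    ("Transoceanic packet",  80_000_000),
]

def human_scale(ns):
    """Stretch one CPU cycle out to one second and express the result."""
    seconds = ns / CYCLE_NS
    for unit, size in (("years", 31_557_600), ("days", 86_400),
                       ("hours", 3_600), ("minutes", 60)):
        if seconds >= size:
            return "%.1f %s" % (seconds / size, unit)
    return "%.1f seconds" % seconds

for event, ns in LATENCIES_NS:
    print("%22s: %s" % (event, human_scale(ns)))
```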

June 8, 2017

A Security Question

To my shame, I don’t have a certificate for my blog yet, but as I was flipping through some referer logs I realized that I don’t understand something about HTTPS.

I was looking into the fact that sometimes – about 1% of the time – I see non-S HTTP referers from Twitter’s t.co URL shortener, which I assume means that somebody’s getting man-in-the-middled somehow, and there’s not much I can do about it. But then I realized the implications of my not having a cert.

My understanding of how this works, per RFC 7231, is that:

A user agent MUST NOT send a Referer header field in an unsecured HTTP request if the referring page was received with a secure protocol.

Per the W3C as well:

Requests from TLS-protected clients to non-potentially-trustworthy URLs, on the other hand, will contain no referrer information. A Referer HTTP header will not be sent.

So, if that’s true and I have no certificate on my site, then in theory I should never see any HTTPS entries in my referer logs? Right?

Except: I do. All the time, from every browser vendor, feed reader or type of device, and if my logs are full of this then I bet yours are too.
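
For what it’s worth, checking your own logs for this is a quick job. Here’s a minimal sketch, assuming the stock Apache/nginx “combined” log format – where the referer is the second-to-last quoted field – and a file named access.log; both are assumptions, so adapt to your setup.

```python
import re
from collections import Counter

# The quoted fields in a combined-format line: request, referer, user-agent.
QUOTED = re.compile(r'"([^"]*)"')
schemes = Counter()

with open("access.log") as log:
    for line in log:
        fields = QUOTED.findall(line)
        if len(fields) < 2:
            continue  # not a combined-format line
        referer = fields[-2]
        # "-" means no referer was sent at all
        schemes[referer.split(":", 1)[0] if ":" in referer else "(none)"] += 1

print(schemes.most_common())  # e.g. [('https', ...), ('http', ...), ('(none)', ...)]
```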

What am I not understanding here? It’s not possible, there is just no way for me to believe that it’s two thousand and seventeen and I’m the only person who’s ever noticed this. I have to be missing something.

What is it?

FAST UPDATE: My colleagues refer me to this piece of the puzzle I hadn’t been aware of, and Francois Marier’s longer post on the subject. Thanks, everyone! That explains it.

SECOND UPDATE: Well, it turns out it doesn’t completely explain it. Digging into the data and filtering out anything referred via Twitter, Google or Facebook, I’m left with two broad buckets. The first is almost entirely made up of feed readers; it turns out that most and maybe almost all feed aggregators do the wrong thing here. I’m going to have to look into that, because it’s possible I can solve this problem at the root.

The second is one really persistent person using Firefox 15. Who are you, guy? Why don’t you upgrade? Can I help? Email me if I can help.

December 15, 2016

Even the dedication to reason and truth might, for all we know, change drastically.

Filed under: academic,documentation,doom,future,interfaces,vendetta — mhoye @ 12:19 pm

The following letter, written by Carl Sagan, is one of the appendices of the “Expert Judgement on Markers To Deter Inadvertent Human Intrusion into the Waste Isolation Pilot Plant” document, completed in 1993.

It’s on page 331, and it hurts to read.

Dr. D. Richard Anderson
Performance Assessment Division
6342 Sandia National Laboratories
Albuquerque, New Mexico 87185

Dear Dr. Anderson:

Many thanks for your kind invitation to participate in the panel charged with making recommendations on signing to the far future about the presence of dangerous long-lived radioactive waste repositories (assuming the waste hasn’t all leached out by then). It is an interesting and important problem, and I’m sorry that my schedule will not permit me to participate. But I can, in a few sentences, tell you my views on the matter; perhaps you would be kind enough to pass them on to the members of the panel:

Several half-lives of the longest-lived radioisotopes in question constitute a time period longer than recorded human history. No one knows what changes that span of time will bring. Social institutions, artistic conventions, written and spoken language, scientific knowledge and even the dedication to reason and truth might, for all we know, change drastically. What we need is a symbol invariant to all those possible changes. Moreover, we want a symbol that will be understandable not just to the most educated and scientifically literate members of the population, but to anyone who might come upon this repository. There is one such symbol. It is tried and true. It has been used transculturally for thousands of years, with unmistakable meaning. It is the symbol used on the lintels of cannibal dwellings, the flags of pirates, the insignia of SS divisions and motorcycle gangs, the labels of bottles of poisons — the skull and crossbones. Human skeletal anatomy, we can be reasonably sure, will not unrecognizably change in the next few tens of thousands of years. You might very well wish also to include warnings in major human languages (being careful not to exclude Chinese and Arabic), and to attach a specification of the radioisotopes in question — perhaps by circling entries in a periodic table with the appropriate isotopic atomic numbers emphasized. It might be useful to include on the signs their own radioactive markers so that the epoch of radioactive waste burial can be calculated (or maybe a sequence of drawings of the Big Dipper moving around the Pole Star each year so that, through the precession of the equinoxes, the epoch of burial, modulo 26,000 years, could be specified). But all this presumes much about future generations. The key is the skull and crossbones.

Unless a more powerful and more direct symbol can be devised, I think the only reason for not using the skull and crossbones is that we believe the current political cost of speaking plainly about deadly radioactive waste is worth more than the well-being of future generations.

With best wishes,

      Cordially,

      Carl Sagan

September 22, 2016

Falsehoods Programmers Believe About Economics

Filed under: academic,documentation,doom,interfaces,want — mhoye @ 8:38 am


Late update: This post has been added to this excellent list of falsehoods programmers believe, and I’m pretty proud of that.

Two similar jokes rolled past me late last week, the first when I mentioned that running a Java program in a JVM in a Linux VM in a container on AWS is a very inefficient way of generating waste heat, and that I could save a lot of time and effort by cutting out the middleman and just setting money on fire.

For the second, a friend observed that startups are an extremely inefficient way of transferring wealth from venture capitalists to Bay Area landlords; there’s a disruptive opportunity here to shortcut that process and just give venture capital directly to the Bay Area rentier class for a nominal service fee. I suggested he call his startup “olygarchr”, or maybe “plutocrysii”; you heard it here first, in two years YCombinator will be obsolete.

With that in mind and in the spirit of the now-classic Falsehoods Programmers Believe About Time and Falsehoods Programmers Believe About Names, I asked for this on Twitter the other day and got some pretty good feedback. But I guess if I want something written up, I’ll be the one writing it up.

I may add some more links to this over the next little while, but for now here you go.

Falsehoods Programmers Believe About Economics

  1. Economics is simple.
  2. Econ-101 is a comprehensive overview of the field.
  3. Economics is morally neutral.
  4. Economics is racially- and gender-neutral.
  5. The efficient markets hypothesis is true.
  6. Classical economics is empirically grounded.
  7. Politics is an entirely unrelated field.
  8. Externalities are the same as inefficiencies.
  9. Pareto efficiency exists.
  10. Information symmetry exists.
  11. People are rational actors.
  12. OK, sure, people, I get it. But I’m a rational actor.
  13. “Rational” to me is the same as “rational” to everyone else.
  14. Rational actors exist at all.
  15. Advertising doesn’t influence or distort markets.
  16. Ok, fine, but advertising doesn’t influence me.
  17. Just-so stories make predictive economic models.
  18. Just-so stories make effective public policy.
  19. Price is an indication of cost.
  20. Price is an indication of value.
  21. The system works for me, therefore the system works for everyone.
  22. Wealth is an indication of worth.

November 15, 2015

The Thousand Year Roadmap

Filed under: academic,documentation,future,interfaces,lunacy,mozilla,work — mhoye @ 10:33 am

I made this presentation at Seneca’s FSOSS a few weeks ago; some of these ideas have been rattling around in my brain for a while, but it was the first time I’d ever run through it. I was thoroughly caffeinated at the time so all of my worst verbal tics are on display, right as usual um right pause right um. But if you want to have my perspective on why free and open source software matters, why some institutions and ideas live and others die out, and how I think you should design and build organizations around your idea so that they last a few hundred years, here you go.

There are some mistakes I made, and now that I’m watching it I can see them – I meant to say “merchants” rather than “farmers”, and there’s a handful of others I may come back here to note later. But I’m still reasonably happy with it.

August 30, 2015

Opaque Symbology

Filed under: academic,documentation,interfaces,lunacy — mhoye @ 8:22 pm

A collection of highway traffic signage unused pending an economical symbolic representation:

  • Warning: Ornery Local Stereotype
  • Completely Unremarkable Natural Phenomenon: 2 KM
  • Something Will Happen And The Right Decision Will Seem Obvious In Hindsight But Nobody In The Car Will Ever Let You Live Down What You Did, Next 500 M
  • If There Were No Eternal Consciousness In The Next 10 KM, If At The Bottom Of Everything There Were Only A Wild Ferment, A Power That Twisting In Dark Passions Produced Everything Great Or Inconsequential, If An Unfathomable, Insatiable Emptiness Lay Hid Beneath Everything, What Would The Next 10 KM Be But Despair
  • Meese
  • Duck Season Rabbit Season Duck Season Rabbit Season Duck Season Rabbit Season Duck Season Rabbit Season Duck Season Rabbit Season Duck Season Rabbit Season Duck Season Rabbit Season Duck Season Rabbit Season Duck Season Rabbit Season
  • Iield, Yield, Theild. Hield, Shield, Wield
  • Jarring And Inappropriate Pop-Culture References Next 12 Parsecs
  • I Don’t Care For Your Tone Young Man
  • Now Entering Eldritch Nether Regions
  • I Wouldn’t Call Them Slow Children Playing But Honestly They’re Not The Brightest Of Sparks Rachel It’s A Highway Who Lets Their Kids Do That Somebody Should Call Somebody My God
  • Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Red Rum Turn Right Next Exit
  • Property Is Theft Therefore Theft Is Property Therefore This Road Sign Is Mine Now No You Shut Up That Is How It Works Doug
  • OMG LIEK WOAH NEXT 3 KM OMG OMG
  • Yo Dog I Heard You Like Hairpin Turns So We Put A Hairpin Turn In Your Hairpin Turn So You Can Die While You Die
  • Locus Of Shame
  • Caution But Telling You Why Would Ruin The Surprise
  • Slow: Ennui

July 24, 2015

Hostage Situation

(This is an edited version of a rant that started life on Twitter. I may add some links later.)

Can we talk for a few minutes about the weird academic-integrity hostage situation going on in CS research right now?

We share a lot of data here at Mozilla. As much as we can – never PII, not active security bugs, but anyone can clone our repos or get a bugzilla account, follow our design and policy discussions, even watch people design and code live. We default to open, and close up only selectively and deliberately. And as part of my job, I have the enormous good fortune to periodically go to conferences where people have done research, sometimes their entire thesis, based on our data.

Yay, right?

Some of the papers I’ve seen promise results that would be huge for us. Predicting flaws in a patch pre-review. Reducing testing overhead 80+% with a four-nines promise of no regressions and no loss of quality.

I’m excitable, I get that, but OMFG some of those numbers. 80 percent reductions of testing overhead! Let’s put aside the fact that we spend a gajillion dollars on the physical infrastructure itself, let’s only count our engineers’ and contributors’ time and happiness here. Even if you’re overoptimistic by a factor of five and it’s only a 20% savings we’d hire you tomorrow to build that for us. You can have a plane ticket to wherever you want to work and the best hardware money can buy and real engineering support to deploy something you’ve already mostly built and proven. You want a Mozilla shirt? We can get you that shirt! You like stickers? We have stickers! I’ll get you ALL THE FUCKING STICKERS JUST SHOW ME THE CODE.

I did mention that I’m excitable, I think.

But that’s all I ask. I go to these conferences and basically beg, please, actually show me the tools you’re using to get that result. Your result is amazing. Show me the code and the data.

But that never happens. The people I talk to say I don’t, I can’t, I’m not sure, but, if…

Because there’s all these strange incentives to hold that data and code hostage. You’re thinking, maybe I don’t need to hire you if you publish that code. If you don’t publish your code and data and I don’t have time to reverse-engineer six years of a smart kid’s life, I need to hire you for sure, right? And maybe you’re not proud of the code, maybe you know for sure that it’s ugly and awful and hacks piled up over hacks, maybe it’s just a big mess of shell scripts on your lab account. I get that, believe me; the day I write a piece of code I’m proud of before it ships will be a pretty good day.

But I have to see something. Because from our perspective, making a claim about software that doesn’t include the software you’re talking about is very close to worthless. You’re not reporting a scientific result at that point, however miraculous your result is; you’re making an unverifiable claim that your result exists.

And we look at that and say: what if you’ve got nothing? How can we know, without something we can audit and test? Of course, all the supporting research is paywalled PDFs with no concomitant code or data either, so by any metric that matters – and the only metric that matters here is “code I can run against data I can verify” – it doesn’t exist.

Those aren’t metrics that matter to you, though. What matters to you is either “getting a tenure-track position” or “getting hired to do work in your field”. And by and large the academic tenure track doesn’t care about open access, so you’re afraid that actually showing your work will actively hurt your likelihood of getting either of those jobs.

So here we are in this bizarro academic-research standoff, where I can’t work with you without your tipping your hand, and you can’t tip your hand for fear I won’t want to work with you. And so all of this work that could accomplish amazing things for a real company shipping real software that really matters to real people – five or six years of the best work you’ve ever done, probably – just sits on the shelf rotting away.

So I go to academic conferences and I beg people to publish their papers, results and data open access, because the world needs your work to matter. Because open access plus data/code as a minimum standard isn’t just important to the fundamental principles of repeatable experimental science, the integrity of your field, and your career. It’s important because if you want your work to matter to people, then you’d better put it somewhere that people can see it and use it and thank you for it and maybe even improve on it.

You did this as an undergrad. You insist on this from your undergrads, for exactly the same reasons I’m asking this of you: understanding, integrity and plain old better results. And it’s a few clicks and a GitHub account for you to do the same now. But I need you to do it one last time.

Full marks here isn’t “a job” or “tenure”. Your shot at those will be no worse, though I know you can’t see it from where you’re standing. But they’re still only a strong B. An A is doing something that matters, an accomplishment that changes the world for the better.

And if you want full marks, show your work.

October 22, 2013

Citation Needed

I may revisit this later. Consider this a late draft. I’m calling this done.

“Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration.” — Stan Kelly-Bootle

Sometimes somebody says something to me, like a whisper of a hint of an echo of something half-forgotten, and it lands on me like an invocation. The mania sets in, and it isn’t enough to believe; I have to know.

I’ve spent far more effort than is sensible this month crawling down a rabbit hole disguised, as they often are, as a straightforward question: why do programmers start counting at zero?

Now: stop right there. By now your peripheral vision should have convinced you that this is a long article, and I’m not here to waste your time. But if you’re gearing up to tell me about efficient pointer arithmetic or binary addition or something, you’re wrong. You don’t think you’re wrong and that’s part of a much larger problem, but you’re still wrong.

For some backstory, on the off chance anyone still reading by this paragraph isn’t an IT professional of some stripe: most computer languages – including C/C++, Perl, Python, some (but not all!) versions of Lisp, and many others – are “zero-origin” or “zero-indexed”. That is to say, in an array A with 8 elements in it, the first element is A[0], and the last is A[7]. This isn’t universally true, though, and other languages from the same (and earlier!) eras are sometimes one-indexed, going from A[1] to A[8].

While it’s a relatively rare practice in modern languages, one-origin arrays certainly aren’t dead; there’s a lot of blood pumping through Lua these days, not to mention MATLAB, Mathematica and a handful of others. If you’re feeling particularly adventurous Haskell apparently lets you pick your poison at startup, and in what has to be the most lunatic thing I’ve seen on a piece of silicon since I found out the MIPS architecture had runtime-mutable endianness, Visual Basic (up to v6.0) featured the OPTION BASE flag, letting you flip that coin on a per-module basis. Zero- and one-origin arrays in different corners of the same program! It’s just software, why not?

All that is to say that starting at 1 is not an unreasonable position at all; to a typical human, thinking about the zeroth element of an array doesn’t make any more sense than trying to catch the zeroth bus that comes by, but we’ve clearly ended up here somehow. So what’s the story there?

The usual arguments involving pointer arithmetic and incrementing by sizeof(struct) and so forth describe features that are nice enough once you’ve got the hang of them, but they’re also post-facto justifications. This is obvious if you take the most cursory look at the history of programming languages; C inherited its array semantics from B, which inherited them in turn from BCPL, and though BCPL arrays are zero-origin, the language doesn’t support pointer arithmetic, much less data structures. On top of that, other languages that antedate BCPL and C aren’t zero-indexed. Algol 60 uses one-indexed arrays, and arrays in Fortran are arbitrarily indexed – they’re just a range from X to Y, and X and Y don’t even need to be positive integers.

So by the early 1960s, there are three different approaches to the data structure we now call an array.

  • Zero-indexed, in which the array index carries no particular semantics beyond its implementation in machine code.
  • One-indexed, identical to the matrix notation people have been using for quite some time. It comes at the cost of a CPU instruction or disused word to manage the offset; usability isn’t free.
  • Arbitrary indices, in which the range is significant with regards to the problem you’re up against.
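
To make the cost in that second bullet concrete, here’s a toy sketch of the subscript-to-address arithmetic – word-addressed memory assumed, with the `address` helper and all the numbers invented for illustration:

```python
def address(base, index, origin=0):
    """Machine address of element `index` in a word array starting at `base`."""
    return base + (index - origin)

v = 1000  # pretend v points at memory word 1000
print(address(v, 0))            # 1000: with zero-origin, element 0 is the base itself
print(address(v, 5))            # 1005: the subscript maps straight onto the offset
print(address(v, 5, origin=1))  # 1004: one-origin pays a "-1" adjustment on every access
```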

So if your answer started with “because in C…”, you’ve been repeating a good story you heard one time, without ever asking yourself if it’s true. It’s not about i = a + n*sizeof(x), because pointers and structs didn’t exist. And that’s the most coherent argument I can find; there are dozens of other arguments for zero-indexing involving “natural numbers” or “elegance” or some other unresearched hippie voodoo nonsense that are either wrong or too dumb to rise to the level of wrong.

The fact of it is this: before pointers, structs, C and Unix existed, at a time when other languages with a lot of resources and (by the standard of the day) user populations behind them were one- or arbitrarily-indexed, somebody decided that the right thing was for arrays to start at zero.

So I found that person and asked him.

His name is Dr. Martin Richards; he’s the creator of BCPL, now almost 7 years into retirement; you’ve probably heard of one of his doctoral students, Eben Upton, creator of the Raspberry Pi. I emailed him to ask why he decided to start counting arrays from zero, way back then. He replied that…

As for BCPL and C subscripts starting at zero. BCPL was essentially designed as typeless language close to machine code. Just as in machine code registers are typically all the same size and contain values that represent almost anything, such as integers, machine addresses, truth values, characters, etc. BCPL has typeless variables just like machine registers capable of representing anything. If a BCPL variable represents a pointer, it points to one or more consecutive words of memory. These words are the same size as BCPL variables. Just as machine code allows address arithmetic so does BCPL, so if p is a pointer p+1 is a pointer to the next word after the one p points to. Naturally p+0 has the same value as p. The monodic indirection operator ! takes a pointer as it’s argument and returns the contents of the word pointed to. If v is a pointer !(v+I) will access the word pointed to by v+I. As I varies from zero upwards we access consecutive locations starting at the one pointed to by v when I is zero. The dyadic version of ! is defined so that v!i = !(v+I). v!i behaves like a subscripted expression with v being a one dimensional array and I being an integer subscript. It is entirely natural for the first element of the array to have subscript zero. C copied BCPL’s approach using * for monodic ! and [ ] for array subscription. Note that, in BCPL v!5 = !(v+5) = !(5+v) = 5!v. The same happens in C, v[5] = 5[v]. I can see no sensible reason why the first element of a BCPL array should have subscript one. Note that 5!v is rather like a field selector accessing a field in a structure pointed to by v.
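
If the BCPL notation is hard to parse, here’s a toy model of what Dr. Richards is describing – memory as a flat array of typeless words, with the monadic ! as indirection. Everything here (the memory contents, the deref name, the values) is invented for illustration:

```python
# Memory as a flat array of typeless words; a "pointer" is just a word
# holding an index into it.
memory = list(range(100, 200))  # word i holds the value 100 + i

def deref(p):
    """BCPL's monadic '!': the contents of the word p points at."""
    return memory[p]

v = 10               # a pointer to word 10
print(deref(v + 5))  # !(v+5), i.e. BCPL's v!5 -> 115
print(deref(5 + v))  # !(5+v), i.e. 5!v -- addition commutes, same word -> 115
print(deref(v + 0))  # v!0 is the word v itself points at -> 110
```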

This is interesting for a number of reasons, though I’ll leave their enumeration to your discretion. The one that I find most striking, though, is that this is the earliest example I can find of the understanding that a programming language is a user interface, and that there are difficult, subtle tradeoffs to make between resources and usability. Remember, all this was at a time when everything about the future of human-computer interaction was up in the air, from the shape of the keyboard and the glyphs on the switches and keycaps right down to how the ones and zeros were manifested in paper ribbon and bare metal; this note by the late Dennis Ritchie might give you a taste of the situation, where he mentions that five years later one of the primary reasons they went with C’s square-bracket array notation was that it was getting steadily easier to reliably find square brackets on the world’s keyboards.

“Now just a second, Hoye”, I can hear you muttering. “I’ve looked at the BCPL manual and read Dr. Richards’ explanation and you’re not fooling anyone. That looks a lot like the efficient-pointer-arithmetic argument you were frothing about, except with exclamation points.” And you’d be very close to right. That’s exactly what it is – the distinction is where those efficiencies take place, and why.

BCPL was first compiled on an IBM 7094 – here’s a picture of the console, though the entire computer took up a large room – running CTSS – the Compatible Time Sharing System – that antedates Unix much as BCPL antedates C. There’s no malloc() in that context, because there’s nobody to share the memory core with. You get the entire machine and the clock starts ticking, and when your wall-clock time block runs out that’s it. But here’s the thing: in that context none of the offset calculations we’re supposedly economizing are calculated at execution time. All that work is done ahead of time by the compiler.

You read that right. That sheet-metal, “wibble-wibble-wibble” noise your brain is making is exactly the right reaction.

Whatever justifications or advantages came along later – and it’s true, you do save a few processor cycles here and there and that’s nice – the reason we started using zero-indexed arrays was because it shaved a couple of processor cycles off of a program’s compilation time. Not execution time; compile time.

Does it get better? Oh, it gets better:

IBM had been very generous to MIT in the fifties and sixties, donating or discounting its biggest scientific computers. When a new top of the line 36-bit scientific machine came out, MIT expected to get one. In the early sixties, the deal was that MIT got one 8-hour shift, all the other New England colleges and universities got a shift, and the third shift was available to IBM for its own use. One use IBM made of its share was yacht handicapping: the President of IBM raced big yachts on Long Island Sound, and these boats were assigned handicap points by a complicated formula. There was a special job deck kept at the MIT Computation Center, and if a request came in to run it, operators were to stop whatever was running on the machine and do the yacht handicapping job immediately.

Jobs on the IBM 7090, one generation behind the 7094, were batch-processed, not timeshared; you queued up your job along with a wall-clock estimate of how long it would take, and if it didn’t finish it was pulled off the machine, the next job in the queue went in and you got to try again whenever your next block of allocated time happened to be. As in any economy, there is a social context as well as a technical context, and it isn’t just about managing cost, it’s also about managing risk. A programmer isn’t just racing the clock, they’re also racing the possibility that somebody will come along and bump their job and everyone else’s out of the queue.

I asked Tom Van Vleck, author of the above paragraph and also now retired, how that worked. He replied in part that on the 7090…

“User jobs were submitted on cards to the system operator, stacked up in a big tray, and a rudimentary system read, loaded, and ran jobs in sequence. Typical batch systems had accounting systems that read an ID card at the beginning of a user deck and punched a usage card at end of job. User jobs usually specified a time estimate on the ID card, and would be terminated if they ran over. Users who ran too many jobs or too long would use up their allocated time. A user could arrange for a long computation to checkpoint its state and storage to tape, and to subsequently restore the checkpoint and start up again.

The yacht handicapping job pertained to batch processing on the MIT 7090 at MIT. It was rare — a few times a year.”

So: the technical reason we started counting arrays at zero is that in the mid-1960’s, you could shave a few cycles off of a program’s compilation time on an IBM 7094. The social reason is that we had to save every cycle we could, because if the job didn’t finish fast it might not finish at all and you never know when you’re getting bumped off the hardware because the President of IBM just called and fuck your thesis, it’s yacht-racing time.

There are a few points I want to make here.

The first thing is that as far as I can tell nobody has ever actually looked this up.

Whatever programmers think about themselves and these towering logic-engines we’ve erected, we’re a lot more superstitious than we realize. We tell and retell this collection of unsourced, inaccurate stories about the nature of the world without ever doing the research ourselves, and there’s no other word for that but “mythology”. Worse, by obscuring the technical and social conditions that led humans to make these technical and social decisions, by talking about the nature of computing as we find it today as though it’s an inevitable consequence of an immutable set of physical laws, we’re effectively denying any responsibility for how we got here. And worse than that, by refusing to dig into our history and understand the social and technical motivations for those choices, by steadfastly refusing to investigate the difference between a motive and a justification, we’re disavowing any agency we might have over the shape of the future. We just keep mouthing platitudes and pretending the way things are is nobody’s fault, and the more history you learn and the more you look at the sad state of modern computing the more pathetic and irresponsible that sounds.

Part of the problem is access to the historical record, of course. I was in favor of Open Access publication before, but writing this up has cemented it: if you’re on the outside edge of academia, $20/paper for any research that doesn’t have a business case and a deep-pocketed backer is completely untenable, and speculative or historic research that might require reading dozens of papers to shed some light on longstanding questions is basically impossible. There might have been a time when this was OK and everyone who had access to or cared about computers was already an IEEE/ACM member, but right now the IEEE – both as a knowledge repository and a social network – is a single point of a lot of silent failure. “$20 for a forty-year-old research paper” is functionally indistinguishable from “gone”, and I’m reduced to emailing retirees to ask them what they remember from a lifetime ago because I can’t afford to read the source material.

The second thing is how profoundly resistant to change or growth this field is, and apparently has always been. If you haven’t seen Bret Victor’s talk about The Future Of Programming as seen from 1975 you should, because it’s exactly on point. Over and over again as I’ve dredged through this stuff, I kept finding programming constructs, ideas and approaches we call part of “modern” programming if we attempt them at all, sitting abandoned in 45-year-old demo code for dead languages. And to be clear: that was always a choice. Over and over again tools meant to make it easier for humans to approach big problems are discarded in favor of tools that are easier to teach to computers, and that decision is described as an inevitability.

This isn’t just Worse Is Better, this is “Worse Is All You Get Forever”. How many off-by-one disasters could we have avoided if the “foreach” construct that existed in BCPL had made it into C? How much more insight would all of us have into our code if we’d put the time into making Michael Chastain’s nearly-omniscient debugging framework – PTRACE_SINGLESTEP_BACKWARDS! – work in 1995? When I found this article by John Backus wondering if we can get away from Von Neumann architecture completely, I wonder where that ambition to rethink our underpinnings went. But the fact of it is that it didn’t go anywhere. Changing how you think is hard and the payoff is uncertain, so by and large we decided not to. Nobody wanted to learn how to play, much less build, Engelbart’s Violin, and instead everyone gets a box of broken kazoos.

In truth maybe somebody tried – maybe even succeeded! – but it would cost me hundreds of dollars to even start looking for an informed guess, so that’s the end of that.

It’s hard for me to believe that the IEEE’s membership isn’t going off a demographic cliff these days as their membership ages, and it must be awful knowing they’ve got decades of delicious, piping-hot research cooked up that nobody is ordering while the world’s coders are lining up to slurp watery gruel out of a Stack-Overflow-shaped trough and pretend they’re well-fed. You might not be surprised to hear that I’ve got a proposal to address both those problems; I’ll let you work out what it might be.

June 8, 2013

Crypto Is Not A Panacea

Filed under: academic,digital,doom,future,interfaces,science,vendetta,work — mhoye @ 9:36 am


I was going to write this to an internal mailing list, following this week’s PRISM excitement, but I’ve decided to put it here instead. It was written (and cribbed from other stuff I’ve written elsewhere) in response to an argument that encrypting everything would somehow solve a scary-sounding though imprecisely-specified problem, a claim you may not be surprised to find out I think is foolish.

I’ve written about this elsewhere, so forgive me, but: I think that it’s a profound mistake to assume that crypto is a panacea here.

Backstory time: in 1993, the NSA released SHA, the Secure Hash Algorithm; you’ve heard of it, I’m sure. Very soon afterwards – months, I think? – they came back and said no, stop, don’t use that. Use SHA-1 instead, here you go.

No explanation, nothing. But nobody else could even begin to make a case either way, so SHA-1 it is.

It’s 2005 before somebody manages to generate one, just one, collision in what’s now called SHA-0, and they do that by taking a theoretical attack that gets you close to a collision, generalizing it and running it for around 80,000 CPU hours on a machine with 256 Itanium-2 processors running this one job flat out for two weeks.
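
(If you want to sanity-check that figure, the arithmetic works out:)

```python
# 256 processors running one job flat out for two weeks:
processors, days = 256, 14
print(processors * days * 24)  # 86016 CPU-hours -- "around 80,000", near enough
```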

That hardware straight up didn’t exist in 1993. That was the year the original Doom came out, for what it’s worth, so it’s very likely that the “significant weakness” they found was found by a person or team of people scribbling on a whiteboard. And, note, they found the weaknesses in that algorithm in the weeks after publication – holes that it would take the public-facing crypto community more than a decade to discover were even a theoretical possibility.

Now, wash that tender morsel down with this quote from a Wired article by James Bamford, longtime writer about all things NSA:

According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: “Everybody’s a target; everybody with communication is a target.”

“Many average computer users in the US”? Welp. That’s SSL, then.

So odds are good that what we here in the public and private sectors consider to be strong crypto isn’t much more of an impediment for the NSA than ROT-13. In the public sector AES-128 is considered sufficient for information up to level “secret” only; AES-256 is for “top secret”, and both are part of the NSA’s Suite B series of cryptographic algorithms, outlined here.

Suite A is unlikely to ever see the light of day, not even so much as the names of its algorithms. The important thing that this suggests is that the NSA may internally have a class break for their recommended Suite B crypto algorithms, or at least an attack that makes decryption computationally feasible for a small set of people that includes themselves, and indeed for anything weaker, or with known design flaws.

The problem that needs to be addressed here is a policy problem, not a technical one. And that’s actually great news, because if you’re getting into a pure-math-and-computational-power arms race with the NSA, you’re gonna have a bad time.
