blarg?

November 9, 2018

The Evolution Of Open

Filed under: digital,future,interfaces,linux,losers,mozilla,science,toys,vendetta,work — mhoye @ 5:00 pm

This started its life as a pair of posts to the Mozilla governance forum, about the mismatch between private communication channels and our principles of open development. It’s a little long-winded, but I think it broadly applies not just to Mozilla but to open source in general. This version of it interleaves those two posts into something I hope is coherent, if kind of rambly. Ultimately the only point I want to make here is that the nature of openness has changed, and while it doesn’t mean we need to abandon the idea as a principle or as a practice, we can’t ignore how much has changed or stay mired in practices born of a world that no longer exists.

If you’re up for the longer argument, well, you can already see the wall of text under this line. Press on, I believe in you.

Even though open source software has essentially declared victory, I think that openness as a practice – not just code you can fork but the transparency and accessibility of the development process – matters more than ever, and is in a pretty precarious position. I worry that if we – the Royal We, I guess – aren’t willing to grow and change our understanding of openness and the practical realities of working in the open, and build tools to help people navigate those realities, that it won’t be long until we’re worse off than we were when this whole free-and-open-source-software idea got started.

To take that a step further: if some of the aspirational goals of openness and open development are the ideas of accessibility and empowerment – that reducing or removing barriers to participation in software development, and granting people more agency over their lives thereby, is self-evidently noble – then I think we need to pull apart the different meanings of the word “open” that we use as if the same word meant all the same things to all the same people. My sense is that a lot of our discussions about openness are anchored in the notion of code as speech, of people’s freedom to move bits around and about the limitations placed on those freedoms, and I don’t think that’s enough.

A lot of us got our start when an internet connection was a novelty, computation was scarce and state was fragile. If you – like me – are a product of this time, “open” as in “open source” is likely to be a core part of your sense of personal safety and agency; you got comfortable digging into code, standing up your own services and managing your own backups pretty early, because that was how you maintained some degree of control over your destiny, how you avoided the indignities of data loss, corporate exploitation and community collapse.

“Open” in this context inextricably ties source control to individual agency. The checks and balances of openness in this context are about standards, data formats, and the ability to export or migrate your data away from sites or services that threaten to go bad or go dark. This view has very little to say about – and is often hostile to the idea of – granular access restrictions and the ability to impose them, those being the tools of this worldview’s bad actors.

The blind spots of this worldview are the products of a time where someone on the inside could comfortably pretend that all the other systems that had granted them the freedom to modify this software simply didn’t exist. Those access controls were handled, invisibly, elsewhere; university admission, corporate hiring practices or geography being just a few examples of the many, many barriers between the network and the average person.

And when we’re talking about blind spots and invisible social access controls, of course, what we’re really talking about is privilege. “Working in the open”, in a world where computation was scarce and expensive, meant working in front of an audience that was lucky enough to go to university or college, whose parents could afford a computer at home, who lived somewhere with broadband or had one of the few jobs whose company opened low-numbered ports to the outside world; what it didn’t mean was doxxing, cyberstalking, botnets, gamergaters, weaponized social media tooling, carrier-grade targeted-harassment-as-a-service and state-actor psy-op/disinformation campaigns rolling by like bad weather. The relentless, grinding day-to-day malfeasance that’s the background noise of this grudgefuck of a zeitgeist we’re all stewing in just didn’t inform that worldview, because it didn’t exist.

In contrast, a more recent turn on the notion of openness is one of organizational or community openness; that is, openness viewed through the lens of the accessibility and the experience of participation in the organization itself, rather than unrestricted access to the underlying mechanisms. Put another way, it puts the safety and transparency of the organization and the people in it first, and considers the openness of work products and data retention as secondary; sometimes (though not always) the open-source nature of the products emerges as a consequence of the nature of the organization, but the details of how that happens are community-first, code-second (and sometimes code-sort-of, code-last or code-never). “Openness” in this context is about accessibility and physical and emotional safety, about the ability to participate without fear. The checks and balances are principally about inclusivity, accessibility and community norms; codes of conduct and their enforcement.

It won’t surprise you, I suspect, to learn that environments that champion this brand of openness are much more accessible to women, minorities and otherwise marginalized members of society who make up a vanishingly small fraction of old-school open source culture. The Rust and Python communities are doing good work here, and the team at Glitch have done amazing things by putting community and collaboration ahead of everything else. But a surprising number of tool-and-platform companies, often in “pink-collar” fields, have taken the practices of open community building and turned themselves into something that, code or no, looks an awful lot like the best of what modern open source has to offer. If you can bring yourself to look past the fact that you can’t fork their code, Salesforce – Salesforce, of all the damn things – has one of the friendliest, most vibrant and supportive communities in all of software right now.

These two views aren’t going to be easy to reconcile, because the ideas of what “accountability” looks like in both contexts – and more importantly, the mechanisms of accountability built in to the systems born from both contexts – are worse than just incompatible. They’re not even addressing something the other worldview is equipped to recognize as a problem. Both are in some sense of the word open, both are to a different view effectively closed and, critically, a lot of things that look like quotidian routine to one perspective look insanely, unacceptably dangerous to the other.

I think that’s the critical schism in the dialogue: the wildly mismatched understandings of the nature of risk and freedom. Seen in that light, the recent surge of attention being paid to federated systems feels like a weirdly reactionary appeal to how things were better in the old days.

I’ve mentioned before that I think it’s a mistake to think of federation as a feature of distributed systems, rather than as consequence of computational scarcity. But more importantly, I believe that federated infrastructure – that is, a focus on distributed and resilient services – is a poor substitute for an accountable infrastructure that prioritizes a distributed and healthy community.  The reason Twitter is a sewer isn’t that Twitter is centralized, it’s that Jack Dorsey doesn’t give a damn about policing his platform and Twitter’s board of directors doesn’t give a damn about changing his mind. Likewise, a big reason Mastodon is popular with the worst dregs of the otaku crowd is that if they’re on the right instance they’re free to recirculate shit that’s so reprehensible even Twitter’s boneless, soporific safety team can’t bring themselves to let it slide.

That’s the other part of federated systems we don’t talk about much – how much the burden of safety shifts to the individual. The cost of evolving federated systems that require consensus to interoperate is so high that structural flaws are likely to be there for a long time, maybe forever, and the burden of working around them falls on every endpoint to manage for themselves. IRC’s (Remember IRC?) ongoing borderline-unusability is a direct product of a notion of openness that leaves admins few better tools than endless spammer whack-a-mole. Email is (sort of…) decentralized, but can you imagine using it with your junkmail filters off?

I suppose I should tip my hand at this point, and say that as much as I value the source part of open source, I also believe that people participating in open source communities deserve to be free not only to change the code and build the future, but to be free from the brand of arbitrary, mechanized harassment that thrives on unaccountable infrastructure, federated or not. We’d be deluding ourselves if we called systems that are just too dangerous for some people to participate in at all “open” just because you can clone the source and stand up your own copy. And I am absolutely certain that if this free software revolution of ours ends up in a place where asking somebody to participate in open development is indistinguishable from asking them to walk home at night alone, then we’re done. People cannot be equal participants in environments where they are subject to wildly unequal risk. People cannot be equal participants in environments where they are unequally threatened. And I’d have a hard time asking a friend to participate in an exercise that had no way to ablate or even mitigate the worst actions of the internet’s worst people, and still think of myself as a friend.

I’ve written about this before:

I’d like you to consider the possibility that that’s not enough.

What if we agreed to expand what freedom could mean, and what it could be. Not just “freedom to” but a positive defense of opportunities to; not just “freedom from”, but freedom from the possibility of.

In the long term, I see that as the future of Mozilla’s responsibility to the Web; not here merely to protect the Web, not merely to defend your freedom to participate in the Web, but to mount a positive defense of people’s opportunities to participate. And on the other side of that coin, to build accountable tools, systems and communities that promise not only freedom from arbitrary harassment, but even freedom from the possibility of that harassment.

More generally, I still believe we should work in the open as much as we can – that “default to open”, as we say, is still the right thing – but I also think we and everyone else making software need to be really, really honest with ourselves about what open means, and what we’re asking of people when we use that word. We’re probably going to find that there’s not one right answer. We’re definitely going to have to build a bunch of new tools.  But we’re definitely not going to find any answers that matter to the present day, much less to the future, if the only place we’re looking is backwards.

[Feel free to email me, but I’m not doing comments anymore. Spammers, you know?]

November 8, 2018

A Summer Of Code Question

Filed under: digital,documentation,future,interfaces,mozilla,work — mhoye @ 1:43 pm

This is a lightly edited response to a question we got on IRC about how to best apply to participate in Google’s “Summer Of Code” program. This isn’t company policy, but I’ve been the one turning the crank on our GSOC application process for the last while, so maybe it counts as helpful guidance.

We’re going to apply as an organization to participate in GSOC 2019, but that process hasn’t started yet. This year it kicked off in the first week of January, and I expect about the same in 2019.

You’re welcome to apply to multiple positions, but I strongly recommend that each application be a focused effort; if you send the same generic application to all of them it’s likely they’ll all be disregarded. I recognize that this seems unfair, but we get a tidal wave of redundant applications for any position we open, so we have to filter them aggressively.

Successful GSOC applicants generally come in two varieties – people who put forward a strong application to work on projects that we’ve proposed, and people who have put together their own GSOC proposal in collaboration with one or more of our engineers.

The latter group is comparatively rare – they’re generally people we’ve worked through some bugs and had some useful conversations with, who’ve done the work of identifying the “good GSOC project” bugs and worked out with the responsible engineers whether they’d be open to collaboration, what a good proposal would look like, etc.

None of those bugs or conversations are guarantees of anything, perhaps obviously – some engineers just don’t have time to mentor a GSOC student, some of the things you’re interested in doing won’t make good GSOC projects, and so forth.

One of the things I hope to do this year is get better at clarifying what a good GSOC project proposal looks like, but broadly speaking they are:

  • Nice-to-have features, but non-blocking and non-critical-path. A struggling GSOC student can’t put a larger project at risk.
  • Few (good) or no (better) dependencies on external factors, whether they’re code, social context or other people’s work. A good GSOC project is well-contained.
  • Clearly defined yes-or-no deliverables, both overall and as milestones throughout the summer. We need GSOC participants to be able to show progress consistently.
  • Finally, broad alignment with Mozilla’s mission and goals, even if it’s in a supporting role. We’d like to be able to draw a straight line between the project you’re proposing and Mozilla being incrementally more effective or more successful. It doesn’t have to move any particular needle a lot, but it has to move the needle a bit, and it has to be a needle we care about moving.

It’s likely that your initial reaction to this is “that is a lot, how do I find all this out, what do I do here, what the hell”, and that’s a reasonable reaction.

The reason that this group of applicants is comparatively rare is that people who choose to go that path have mostly been hanging around the project for a bit, soaking up the culture, priorities and so on, and have figured out how to navigate from “this is my thing that I’m interested in and want to do” to “this is my explanation of how my thing fits into Mozilla, both from product engineering and an organizational mission perspective, and this is who I should be making that pitch to”.

This is not to say that it’s impossible, just that there’s no formula for it. Curiosity and patience are your most important tools, if you’d like to go down that road, but if you do we’d definitely like to hear from you. There’s no better time to get started than now.

August 15, 2018

Time Dilation

Filed under: academic,digital,documentation,interfaces,lunacy,mozilla,science,work — mhoye @ 11:17 am


[ https://www.youtube.com/embed/JEpsKnWZrJ8 ]

I riffed on this a bit over at twitter some time ago; this has been sitting in the drafts folder for too long, and it’s incomplete, but I might as well get it out the door. Feel free to suggest additions or corrections if you’re so inclined.

You may have seen this list of latency numbers every programmer should know, and I trust we’ve all seen Grace Hopper’s classic description of a nanosecond at the top of this page, but I thought it might be a bit more accessible to talk about CPU-scale events in human-scale transactional terms. So: if a single CPU cycle on a modern computer was stretched out as long as one of our absurdly tedious human seconds, how long do other computing transactions take?

If a CPU cycle is 1 second long, then:

  • Getting data out of L1 cache is about the same as getting your data out of your wallet; about 3 seconds.
  • At 9 to 10 seconds, getting data from L2 cache is roughly like asking your friend across the table for it.
  • Fetching data from the L3 cache takes a bit longer – it’s roughly as fast as having an Olympic sprinter bring you your data from 400 meters away.
  • If your data is in RAM you can get it in about the time it takes to brew a pot of coffee; this is how long it would take a world-class athlete to run a mile to bring you your data, if they were running backwards.
  • If your data is on an SSD, though, you can have it in six to eight days, equivalent to having it delivered from the far side of the continental U.S. by bicycle, about as fast as that has ever been done.
  • In comparison, platter disks are delivering your data by horse-drawn wagon, over the full length of the Oregon Trail. Something like six to twelve months, give or take.
  • Network transactions are interesting – platter disk performance is so poor that fetching data from your ISP’s local cache is often faster than getting it from your platter disks; at two to three months, your data is being delivered to New York from Beijing, via container ship and then truck.
  • In contrast, a packet requested from a server on the far side of an ocean might as well have been requested from the surface of the moon, at the dawn of the space program – about eight years, from the beginning of the Apollo program to Armstrong, Aldrin and Collins’ successful return to earth.
  • If your data is in a VM, things start to get difficult – a virtualized OS reboot takes about the same amount of time as has passed between the Renaissance and now, so you would need to ask Leonardo Da Vinci to secretly encode your information in one of his notebooks, and have Dan Brown somehow decode it for you in the present? I don’t know how reliable that guy is, so I hope you’re using ECC.
  • That’s all if things go well, of course: a network timeout is roughly comparable to the elapsed time between the dawn of the Sumerian Empire and the present day.
  • In the worst case, if a CPU cycle is 1 second, cold booting a racked server takes approximately all of recorded human history, from the earliest Indonesian cave paintings to now.
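If you want to run the numbers yourself, here’s a rough back-of-the-envelope sketch. It assumes a roughly 3GHz clock, so one real cycle is about a third of a nanosecond, and the latency figures in it are ballpark, order-of-magnitude guesses rather than measurements of any particular machine:

    # Back-of-the-envelope only: stretch one CPU cycle out to one second and
    # see what everything else becomes. Assumes a ~3GHz clock (one cycle is
    # roughly a third of a nanosecond); the latencies are rough guesses.

    CYCLE_NS = 1 / 3.0               # one cycle at ~3GHz, in nanoseconds
    SCALE = 1.0 / CYCLE_NS           # "human seconds" per real nanosecond

    LATENCIES_NS = {                 # approximate, illustrative figures
        "L1 cache hit": 1,
        "L2 cache hit": 3,
        "L3 cache hit": 12,
        "RAM access": 100,
        "SSD random read": 175_000,
        "Spinning disk seek": 10_000_000,
        "Packet across an ocean": 80_000_000,
    }

    def humanize(seconds):
        """Express a duration in whichever unit keeps the number readable."""
        units = (("years", 365 * 24 * 3600), ("days", 24 * 3600),
                 ("hours", 3600), ("minutes", 60), ("seconds", 1))
        for name, size in units:
            if seconds >= size:
                return "%.1f %s" % (seconds / size, name)
        return "%.2f seconds" % seconds

    for name, ns in LATENCIES_NS.items():
        print("%25s: %s" % (name, humanize(ns * SCALE)))

Nothing precise, obviously, but it makes it easy to swap in your own numbers and see how the human-scale picture changes.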

August 13, 2018

Licensing Edgecases

Filed under: digital,documentation,interfaces,linux,mozilla,work — mhoye @ 4:37 pm

While I’m not a lawyer – and I’m definitely not your lawyer – licensing questions are on my plate these days. As I’ve been digging into one, I’ve come across what looks like a strange edge case in GPL licensing compliance that I’ve been trying to understand. Unfortunately it looks like it’s one of those Affero-style, unforeseen edge cases that (as far as I can find…) nobody’s tested legally yet.

I spent some time trying to understand how the definition of “linking” applies in projects where, say, different parts of the codebase use disparate, potentially conflicting open source licenses, but all the code is interpreted. I’m relatively new to this area, but generally speaking, outside of copying and pasting, “linking” appears to be the critical threshold for whether or not the obligations imposed by the GPL kick in, and I don’t understand what that means for, say, Javascript or Python.

I suppose I shouldn’t be surprised by this, but it’s strange to me how completely the GPL seems to be anchored in early Unix architectural conventions. Per the GPL FAQ, unless we’re talking about libraries “designed for the interpreter”, interpreted code is basically data. Using libraries counts as linking, but in the eyes of the GPL any amount of interpreted code is just a big, complicated config file that tells the interpreter how to run.

At a glance this seems reasonable but it seems like a pretty strange position for the FSF to take, particularly given how much code in the world is interpreted, at some level, by something. And honestly: what’s an interpreter?

The text of the license and the interpretation proposed in the FAQ both suggest that as long as all the information that a program relies on to run is contained in the input stream of an interpreter, the GPL – and if their argument sticks, other open source licenses – simply… doesn’t apply. And I can’t find any other major free or open-source licenses that address this question at all.

It just seems like such a weird place for an oversight. And given the often-adversarial nature of these discussions, given the stakes, there’s no way I’m the only person who’s ever noticed this. You have to suspect that somewhere in the world some jackass with a very expensive briefcase has an untested legal brief warmed up and ready to go arguing that a CPU’s microcode is an “interpreter” and therefore the GPL is functionally meaningless.

Whatever your preferred license of choice, that really doesn’t seem like a place we want to end up; while this interpretation may be technically correct it’s also very-obviously a bad-faith interpretation of both the intent of the GPL and that of the authors in choosing it.

The position I’ve taken at work is that “are we technically allowed to do this” is a much, much less important question than “are we acting, and seen to be acting, as good citizens of the larger Open Source community”. So while the strict legalities might be blurry, seeing the right thing to do is simple: we treat the integration of interpreted code and codebases the same way we’d treat C/C++ linking, respecting the author’s intent and the spirit of the license.

Still, it seems like something the next generation of free and open-source software licenses should explicitly address.

September 13, 2017

Durable Design

Filed under: awesome,digital,documentation,future,interfaces,toys — mhoye @ 10:47 am

[Image: Flip]

It seems like a small thing, but it’s an engineering detail I’ve always had a lot of respect for.

That picture is of a Flip video camera with the lid off, a product from about nine years ago. It was a decent little video camera at a time when phones weren’t up to it, storing a bit over an hour of 720p video with decent sound. The company that made them, Pure Digital Technologies, was bought by Cisco in 2009 for about $590M and shut down less than two years later. Their last product – which ultimately never shipped – could stream video live to the Web, something we wouldn’t really see from a pocket-sized device until Periscope and (now-dead) Meerkat took a run at it five years later.

The thing I wanted to call attention to, though, is the shape of that case. The Flip shipped with a custom rectangular battery that had the usual extra charging smarts in it and you could charge off USB, like all civilized hardware that size. But it also gave you the option of putting in three absolutely standard, available-everywhere AAA batteries instead, after that exotic square thing finally died.

You only get to run the camera about two-thirds as long, sure. But long after they’ve stopped making those custom batteries or supporting the device itself, the fact of the matter is: you can still run it at all. It may not be the best thing around, but it’s also not in a landfill. It still does everything it said it would; my kids can make movies with it and they’re good fun. It didn’t suddenly become junk just because the people who made it aren’t around anymore.

I’ve often wondered what those product meetings looked like at Pure Digital. Who pushed for that one extra feature that might give their product a few extra years of life, when so many market forces were and are pushing against it. What did they see, that convinced them to hold the line on a feature that few people would ever use, or even notice? You see it less and less every day, in software and hardware alike – the idea that longevity matters, that maybe repair is better than replace.

If you’re still out there, whoever made this what it was: I noticed. I think it matters, and I’m grateful. I hope that’s worth something.

September 12, 2017

Cleaning House

Filed under: comics,digital,documentation,interfaces,mozilla,work — mhoye @ 3:32 pm

Current status:


[Image: Current Status]

When I was desk-camping in CDOT a few years ago, one thing I took no small joy in was the combination of collegial sysadminning and servers all named after cities or countries that made a typical afternoon’s cubicle chatter sound like a rapidly-developing multinational diplomatic crisis.

Change management when you’re module owner of Planet Mozilla and de-facto administrator of a dozen or so lesser planets is kind of like that. But way, way better.

Over the next two weeks or so I’m going to be cleaning up Planet Mozilla, removing dead feeds and culling the participants list down to people still actively participating in the Mozilla project in some broadly-defined capacity. As well, I’ll be consuming (sorry, decommissioning) a number of uninhabited, under- or unused lesser planets and rolling any stray debris back into Planet Mozilla proper.

With that in mind, if anything goes missing that you expected to survive a transition like that, feel free to email me or file a bug. Otherwise, if any of your feeds break I am likely to be the cause of that, and if you find a planet you were following has vanished you can take some solace in the fact that it was probably delicious.

June 9, 2017

Trimming The Roster

Filed under: digital,documentation,interfaces,mozilla,work — mhoye @ 1:25 pm

This is a minor administrative note about Planet Mozilla.

In the next few weeks I’ll be doing some long-overdue maintenance and cleaning out dead feeds from Planet and the various sub-Planet blogrolls to help keep them focused and helpful.

I’m going to start by scanning existing feeds and culling any that error out every day for the next two weeks. After that I’ll go down the list of remaining feeds individually, confirm their authors’ ongoing involvement in Mozilla, and ask for tagged feeds wherever possible. “Involved in Mozilla” can mean a lot of things – the mission, the many projects, the many communities – so I’ll be happy to take a yes or no and leave it at that.
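(For the curious, the scan itself is nothing exotic. A minimal sketch of the kind of liveness check involved might look something like the following; the feed URLs are made up, and this is illustrative rather than the actual Planet tooling.)

    # Illustrative only: check whether a list of (hypothetical) feed URLs
    # still responds and still parses as a syndication feed.
    import feedparser   # pip install feedparser
    import requests     # pip install requests

    FEEDS = [
        "https://example.org/blog/feed/",
        "https://example.com/~someone/atom.xml",
    ]

    def feed_is_healthy(url):
        """True if the URL responds with 200 and parses as a feed with entries."""
        try:
            resp = requests.get(url, timeout=15)
        except requests.RequestException:
            return False
        if resp.status_code != 200:
            return False
        parsed = feedparser.parse(resp.content)
        # 'bozo' flags malformed feeds; this check is deliberately strict.
        return not parsed.bozo and len(parsed.entries) > 0

    for url in FEEDS:
        print("ok     " if feed_is_healthy(url) else "errored", url)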

The process should be pretty painless – with a bit of luck you won’t even notice – but I thought I’d give you a heads up regardless. As usual, leave a comment or email me if you’ve got questions.

June 8, 2017

I’m Walking, Yes Indeed

Filed under: arcade,awesome,digital,interfaces,toys — mhoye @ 10:00 pm

They’re called “walking simulators”, which I guess is a pejorative in some circles, but that certain type of game that’s only a little bit about the conventions of some gaming subgenre – puzzles, platforming, whatever – and mostly about exploration, narrative and atmosphere is one of my favorite things.

Over the last year or two, I suspect mostly thanks to the recent proliferation of free-to-use, high-quality game engines, excellent tutorials and the generally awesome state of consumer hardware, we’re currently in a golden age of this type of game.

One of the underappreciated things that blogging did for writing as a craft was free it from the constraints of the industries around it; you don’t need to fit your article to a wordcount or column-inch slot; you write as much or as little as you think your subject requires, and click publish, and that’s OK. It was, and I think still is, generally underappreciated how liberating that has been.

Today the combination of Steam distribution, arbitrary pricing and free-to-use engines has done much the same thing for gaming. Some of the games I’ve listed here are less than half an hour long, others much longer; either way, they’re as long as they need to be, but no more. A stroll through a beautifully-illustrated story doesn’t need to be drawn out, diluted or compressed to fit a market niche precisely anymore, and I thought all of these were a good way to spend however much time they took up.

Plenty of well-deserved superlatives have already been deployed for The Stanley Parable, and it is absolutely worth your time. But two short games by its creators – the free Dr. Langeskov, The Tiger, and The Terribly Cursed Emerald: A Whirlwind Heist and the much longer The Beginner’s Guide – are radically different, but both excellent. Dr. Langeskov is brief and polished enough to feel like a good joke; The Beginner’s Guide feels more like exploring the inside of a confession than a game, a unique and interesting experience; I enjoyed them both quite a bit.

Firewatch is, in narrative terms, kind of mechanical – despite its many accolades, you eventually get the sense that you’re turning the handle on the dialogue meat grinder and you know what’s coming out. But it’s still affecting, especially in its quieter moments, and the environment and ambience are unquestionably beautiful. It’s worth playing just to explore. I’d be happy to wander through Firewatch again just to see all the corners of the park I missed the first time around, and there’s a tourist mode in which you can find recordings that explore the production process that I enjoyed quite a bit more than I’d expected.

“Homesick” is very much the opposite of Firewatch, a solitary and mostly monochromatic struggle through environmental and psychological decay, set in a rotting institution in what we eventually learn is an abandoned industrial sacrifice zone. The story unfolds through unexpected puzzles and mechanisms, and ends up being as much a walkthrough of the experience of mental illness as of the environment. Homesick isn’t a difficult game to play, but it’s a difficult game to experience; I’m cautiously recommending it on those terms, and I don’t know of any game I can compare it to.

“Lifeless Planet” is a slow exploration of a marooned FTL expedition to an alien world discovering the abandoned ruins of a fifties-era Soviet settlement. It’s not graphically spectacular, but somehow there is something I found really great about the slow unfolding of it, the pacing and puzzles of this well, if obliquely, told story. I found myself enjoying it far more than I would have expected.

Another space-exploration type game, though (supposedly?) much more sophisticated, Event[0] was generally very well received – Procedurally generated dialog! An AI personality influenced by the player’s actions! – but I played through it and found it… strangely boring? I suspect my gameplay experience was sabotaged by my Canadianness here, because I went into it knowing that the AI would react to your tone, and it turns out that if you consistently remember your manners the machine does whatever you want. The prime antagonist of the game is this ostensibly-secretive-and-maybe-malevolent AI, but if you say please and thank you it turns out to be about as menacing as a golden retriever. Maybe the only reason I found it boring is because I’m boring? Could be, I guess, but I bet there’s a lesson in there somewhere.

The most striking of the bunch, though, the one that’s really stuck with me and that I absolutely recommend, is Everybody’s Gone To The Rapture, essentially an exploration of a small, inexplicably abandoned English village near an observatory in the aftermath of something Iain Banks once referred to as an “Outside-Context Problem”. It is all of interesting, beautiful and relentlessly human, investing you in not just the huge what-just-happened question but the lives and relationships of the people confronting it and trying to live through it. If walking simulators appeal to you – if exploring a story the way you’d explore an open-world game appeals to you – then I don’t want to tell you anything more about it so that you can experience it for yourself.

I’ve played a few other games I’m looking forward to telling you about – some of the best 2D-platformer and Sierra-like games ever made are being made right now – but that’s for another day. In the meantime, if you’ve got some other games that fit in to this genre that you love, I’d love to hear about them.

A Security Question

To my shame, I don’t have a certificate for my blog yet, but as I was flipping through some referer logs I realized that I don’t understand something about HTTPS.

I was looking into the fact that sometimes – about 1% of the time – I see non-S HTTP referers from Twitter’s t.co URL shortener, which I assume means that somebody’s getting man-in-the-middled somehow, and there’s not much I can do about it. But then I realized the implications of my not having a cert.

My understanding of how this works, per RFC7231, is that:

A user agent MUST NOT send a Referer header field in an unsecured HTTP request if the referring page was received with a secure protocol.

Per the W3C as well:

Requests from TLS-protected clients to non-potentially trustworthy URLs, on the other hand, will contain no referrer information. A Referer HTTP header will not be sent.

So, if that’s true and I have no certificate on my site, then in theory I should never see any HTTPS entries in my referer logs? Right?

Except: I do. All the time, from every browser vendor, feed reader or type of device, and if my logs are full of this then I bet yours are too.

What am I not understanding here? It’s not possible, there is just no way for me to believe that it’s two thousand and seventeen and I’m the only person who’s ever noticed this. I have to be missing something.

What is it?

FAST UPDATE: My colleagues refer me to this piece of the puzzle I hadn’t been aware of, and Francois Marier’s longer post on the subject. Thanks, everyone! That explains it.

SECOND UPDATE: Well, it turns out it doesn’t completely explain it. Digging into the data and filtering out anything referred via Twitter, Google or Facebook, I’m left with two broad buckets. The first is almost entirely made up of feed readers; it turns out that most, and maybe almost all, feed aggregators do the wrong thing here. I’m going to have to look into that, because it’s possible I can solve this problem at the root.

The second is one really persistent person using Firefox 15. Who are you, guy? Why don’t you upgrade? Can I help? Email me if I can help.
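In case anyone wants to do the same kind of spelunking, here’s a minimal sketch of the sort of filtering described above. It assumes an Apache/nginx “combined” log format, and the filename and filter list are made up for illustration:

    # Illustrative only: pull HTTPS referers out of an access log, ignoring
    # the big social/search referrers, and count what's left by host.
    import re
    from collections import Counter
    from urllib.parse import urlparse

    LOGFILE = "access.log"   # hypothetical path
    IGNORE = {"twitter.com", "t.co", "google.com", "facebook.com"}

    # In combined log format the referer is the second-to-last quoted field.
    referer_re = re.compile(r'"([^"]*)" "[^"]*"$')

    https_referers = Counter()
    with open(LOGFILE) as log:
        for line in log:
            match = referer_re.search(line.strip())
            if not match:
                continue
            referer = match.group(1)
            if not referer.startswith("https://"):
                continue
            host = urlparse(referer).hostname or ""
            if any(host == d or host.endswith("." + d) for d in IGNORE):
                continue
            https_referers[host] += 1

    for host, count in https_referers.most_common(20):
        print("%6d  %s" % (count, host))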

May 1, 2017

Wooden Shoes As A Service

Filed under: academia,digital,doom,future,interfaces,vendetta — mhoye @ 10:57 pm


In international trade, the practice of selling state-subsidized goods far below cost – often as a way of crushing local producers of competing goods – is called “dumping”:

Under the Tariff Act of 1930, U.S. industries may petition the government for relief from imports that are sold in the United States at less than fair value (“dumped”) or which benefit from subsidies provided through foreign government programs. Under the law, the U.S. Department of Commerce determines whether the dumping or subsidizing exists and, if so, the margin of dumping or amount of the subsidy; the USITC determines whether there is material injury or threat of material injury to the domestic industry by reason of the dumped or subsidized imports.

To my knowledge there’s not much out there in the way of comparable prohibitions around services. Until recently, I think, the idea wouldn’t have made much sense. How do you “dump” services? You couldn’t, particularly not at any kind of scale.

If you put your black hat on for a minute, though, and think of commerce and trade agreements as extensions of state policy: another way to put that might be, how do you subject a services-based economy to the same risks that dumping poses to a goods-based economy?

Unfortunately, I think software has given us a pretty good answer to that: you dig into deep pockets and fund aggressively growing, otherwise-unsustainable service companies.

Now a new analysis of Uber’s financial documents suggests that ride subsidies cost the company $2 billion in 2015. On average, the analysis suggests, Uber passengers paid only 41% of the cost of their trips for the fiscal year ended in September 2015.

In other words: given enough subsidy, a software startup can become an attack vector on a services-based economy. A growing gig economy is a sign of extreme economic vulnerability being actively exploited.

I don’t know what to do about it, but I think this is new. Certainly the Canadian Special Import Measures Act only mentions services as a way to subsidize the offending company, not as the thing being sold, and all the recent petitions I can find in Canada and the U.S. both involve actual stuff, nothing delivered or mediated by software. At the very least, this is an interesting, quasi-guerilla way to weaponize money in trans-national economic conflicts.

For industries not yet established, the USITC may also be asked to determine whether the establishment of an industry is being materially retarded by reason of the dumped or subsidized imports.

I have a theory that the reason we’re not calling this out as an act of trade war – the reason we can’t see it at all, as far as I can tell – is that the people worst affected are individuals, not corporations. The people losing out are individuals, working on their own, who have no way to petition the state for redress at that scale, when the harm done in aggregate is functionally invisible without a top-down view of the field.

It’d be easy to make this sound isolationist and xenophobic, and that’s not what I intend – I like cool things and meeting people from other places, and international trade seems like the way the world gets to have that. But we know to put a stop to that when trade policies turn into weapons by another name. And down here at street level, I can’t see much of a difference between “foreign subsidies artificially undercut price of steel ingots” and “foreign subsidies artificially undercut price of cab rides”.

