Speed Camera

In the past, I’ve talked about how certain errors can reveal a human in what we may imagine is an entirely automated process. I’ve also shown quite a few errors that reveal the absence of a human just as clearly. Here’s a photograph attached to a speeding ticket given by an automated speed camera that shows the latter.

Photograph of a tow-truck towing a car down a road.

The Daily WTF published this photograph, which was sent in by Thomas, one of their readers. The photograph came attached to this summons, which arrived in the mail and explained that Thomas had been caught traveling 72 kilometers per hour in a 60 KPH zone. The photograph above was included as evidence of his crime. He was asked to pay a fine or show up in court to contest it.

Of course, Thomas should never have been fined or threatened. It’s obvious from the picture that Thomas’ car is being towed. Somebody was going 72 KPH, but it was the tow-truck driver, not Thomas! Anybody who looked at the image could see this.

In fact, Thomas was the first person to see the image. The photograph was taken by a speed camera: a radar gun measured a vehicle moving in excess of the speed limit and triggered a camera, which took a photograph. A computer subsequently analyzed the image to read the license plate number and look up the driver in a vehicle registration database. The system then printed a fine and summons notice and mailed it to the vehicle’s owner. The Daily WTF editor points out that proponents of automated systems like this one often guarantee human oversight in their implementation. This error reveals that the human oversight applied to this particular speed camera is either very little or none at all.
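
To make concrete just how little room there is for a person in that chain, here is a minimal sketch of what such a pipeline might look like. It is purely illustrative: the plate reader, the registration database, and every name in it are hypothetical stand-ins, not the actual system.

    from dataclasses import dataclass

    SPEED_LIMIT_KPH = 60

    @dataclass
    class Owner:
        name: str
        address: str

    # Hypothetical stand-ins for the real components: a plate reader and a
    # vehicle registration database. Both are toys for illustration only.
    def read_license_plate(photo: bytes) -> str:
        return "ABC-123"

    REGISTRATION_DB = {"ABC-123": Owner("Thomas", "123 Example St.")}

    def process_trigger(radar_reading_kph: int, photo: bytes) -> None:
        """Handle one radar trigger from start to finish, with no human review."""
        if radar_reading_kph <= SPEED_LIMIT_KPH:
            return  # under the limit: nothing happens

        plate = read_license_plate(photo)   # OCR the plate from the photo
        owner = REGISTRATION_DB[plate]      # look up the registered owner

        # Print and mail the summons. Note the step that is missing: nobody
        # ever looks at the photo and notices the car is on a tow truck.
        summons = (f"{owner.name}: photographed at {radar_reading_kph} KPH in a "
                   f"{SPEED_LIMIT_KPH} KPH zone. Pay the fine or contest it in court.")
        print(f"Mailing to {owner.address}: {summons}")

    process_trigger(72, photo=b"...")

The revealing part is not any single step but the shape of the whole thing: the photograph is treated as an attachment to be mailed, never as evidence to be examined.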

Of course, Thomas will be able to avoid paying the fine — the evidence that exonerates him is literally printed on his court summons. But it will take work and time. The completely automated nature of this system, revealed by this error, has deep implications for the way that justice is carried out. It is a system in which people are watched, accused, fined, and processed without any direct human oversight. That has some benefits — e.g., computers are unlikely to let people of a certain race, gender, or background off more easily than others.

But in addition to creating the possibility of new errors, the move from a human to a non-human process has important economic, political, and social consequences. Police departments can give more tickets with cameras — and generate more revenue — than they ever could with officers in squad cars. But no camera will excuse a man speeding to the hospital with a wife in labor or a hurt child in the passenger seat. As work to rule or “rule-book slowdowns” — types of labor protest in which workers cripple production by following the rules to the letter — show, many rules are only productive for society because they are selectively enforced. The complex calculus that goes into deciding when not to apply the rules, second nature to humans, remains impossibly out of reach for most computerized expert systems. Errors like the one described here remind us of this increasingly important fact.

More Google News

In the very first thing I wrote about Revealing Errors — an article published in the journal Media/Culture — one of my core examples was Google News. In my discussion, I described how the fact that Google News aggregates articles without human intervention can become quite visible through the site’s routine mistakes — errors that human editors would never commit. I gave the example of the aggregation of two articles: one from Al Jazeera on how, “Iran offers to share nuclear technology,” and another from the Guardian on how, “Iran threatens to hide nuclear program.” Were they really discussing the same event? Maybe. But few humans would have made the same call that Google News did.

Google News Share/Hide
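
To get a rough sense of how a machine might lump those two headlines together: shallow similarity measures that simply count shared words see a great deal of overlap (“Iran”, “nuclear”) and nothing of the opposite meanings. The sketch below uses plain keyword overlap; it is my own illustration of the general idea, not Google’s actual clustering algorithm.

    # Keyword-overlap similarity: a crude stand-in for the kind of shallow
    # measure that can cluster contradictory headlines. Not Google's algorithm.
    def tokens(headline: str) -> set:
        stopwords = {"to", "the", "of", "a"}
        return {w.lower() for w in headline.split() if w.lower() not in stopwords}

    def jaccard(a: str, b: str) -> float:
        ta, tb = tokens(a), tokens(b)
        return len(ta & tb) / len(ta | tb)

    h1 = "Iran offers to share nuclear technology"
    h2 = "Iran threatens to hide nuclear program"

    # The two headlines share "Iran" and "nuclear": 2 of the 8 distinct words,
    # for a similarity of 0.25, even though they mean opposite things.
    print(jaccard(h1, h2))

A human editor making the same comparison would weigh “share” against “hide” far more heavily than any word count.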

Yesterday, I saw this article from Network World that described an error that is even more egregious and that was, apparently, predicted by the article’s author ahead of time.

In this case, Google listed a parody by McNamara, the article’s author, as the top story about the recent lawsuit filed by the MBTA (the Boston mass transit authority) against security researchers at MIT. In the past, McNamara has pointed to other examples of Google News being duped by obvious spoofs. This long list of examples includes a story about Congress enlisting the help of YouTube to grill the Attorney General (it was listed as the top story on Google News) and this story (which I dug up) about Paris Hilton’s genitals being declared a wonder of the modern world!

McNamara has devoted an extraordinary amount of time to finding and discussing other shortcomings of Google News. For example, he’s talked about Google News’ trouble filtering out highly local takes on stories of broader interest when presenting them to its general audience, about its sluggishness and inability to react to changing news circumstances, and about the sometimes hilarious and wildly inappropriate mismatches between images and stories on the Google News website. Here’s one example I dug up. Imagine what it looked like before it was censored!

Google News Porn

As McNamara points out repeatedly, all of these errors are only possible because Google News employs no human editors. Computers remain pretty horrible at sorting images for relevance to news stories and discerning over-the-top parody from the real thing — two tasks that most humans don’t have too much trouble with. The more generally inappropriate errors wouldn’t have made it past a human for multiple reasons!

As I mentioned in my original Revealing Errors article, the decision to use a human editor is an important one with profound effects on the way that users are exposed to news and, as a result, experience and understand one important part of the world around them. Google News’ frequent mistakes give us repeated opportunities to consider the way that our choice of technology — and of editors — frames this understanding.

Olympics Blue Screen of Death

Thanks to everyone who pointed me to the Blue Screen of Death (BSoD) that could be seen projected onto part of the roof of the Bird’s Nest stadium during the opening ceremony of the Beijing Olympics this week, right next to the torch as it was being lit! Here’s a photograph of the opening ceremony from an article on Gizmodo (there are more photos there) that shows the snafu pretty clearly.

BSOD at Olympics Opening

In the not-so-recent past, a stadium like the Bird’s Nest would have been lit with a large number of lights fitted with gels to add color and texture. As the desire for computer control grew, expensive, specialized, computer-controlled theatrical lighting equipment was introduced to help automate these systems.

Of course, another way to get flexibility, coordination, and programmability at low cost is to skip specialized lighting control systems altogether and simply hook a computer up to a powerful general-purpose video projector. Then, if you want green light projected, all you have to do is change the background of the projected screen to green. If you want a blue-green gradient, that’s just as easy, and there are no gels to change. Apparently, that’s exactly what the Bird’s Nest’s designers did.
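
For a sense of just how simple this approach is, here is a minimal sketch in Python using the pygame library. I have no idea what software the Bird’s Nest crew actually used; this just shows that with a projector, a “gel change” becomes a pixel change.

    # Minimal sketch of using an ordinary computer and projector as a stage
    # light. Assumes the pygame library; purely illustrative of the idea.
    import pygame

    pygame.init()
    screen = pygame.display.set_mode((0, 0), pygame.FULLSCREEN)
    width, height = screen.get_size()

    # A blue-to-green gradient: no gels to swap, just different pixel values.
    for y in range(height):
        t = y / max(height - 1, 1)
        color = (0, int(255 * t), int(255 * (1 - t)))  # blue at top, green at bottom
        pygame.draw.line(screen, color, (0, y), (width, y))

    pygame.display.flip()

    # Keep projecting until a key is pressed or the window is closed.
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT or event.type == pygame.KEYDOWN:
                running = False
        pygame.time.wait(50)
    pygame.quit()

And, of course, if the computer driving the projector blue-screens, the “light” dutifully projects the blue screen instead, which is exactly what the stadium roof ended up showing.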

Unfortunately, with that added flexibility comes the opportunity for new kinds of errors. If the computer controlling your lights is running Windows, for example, your lighting system is susceptible to all of Windows’ familiar failure modes. Apparently, using a video projector for this type of lighting is an increasingly common trick. If it had worked correctly for the Olympic organizers, we might never have known!

OSCON Keynote

This year, I was invited to give a keynote presentation on revealing errors at the annual O’Reilly Open Source conference. The keynotes this year were all short form — 15 minutes — but I tried to squeeze in what I could. Although I was “helped” in this regard by the fact that I talk too quickly in general, I think the talk hit the core themes of the project and offered a few key examples that will be familiar to RE’s regular readers.

I’m happy with the result: a couple thousand people showed up for the talk despite the fact that it was at 8:45 AM after the biggest “party night” of the conference!

For those who missed it for whatever reason, you can watch the video recording that O’Reilly made, which I’ve embedded below.

A larger version of the Flash video, as well as a QuickTime version, is over on blip.tv, and I’ve created an Ogg Theora version for all my freedom-loving readers.