Beef Panties

Many of the gems from the newspaper correction blog Regret the Error qualify as revealing errors. One particularly entertaining example was this Reuters syndicated wire story on the recall of beef, whose opening paragraph explained that (emphasis mine):

Quaker Maid Meats Inc. on Tuesday said it would voluntarily recall 94,400 pounds of frozen ground beef panties that may be contaminated with E. coli.

ABC News Beef Panties Article

Of course the article was talking about beef patties, not beef panties.

This error can be blamed, at least in part, on a spellchecker. I talked about spellcheckers before when I discussed the Cupertino effect, which happens when someone spells a word correctly but is prompted to change it to an incorrect word because the spellchecker does not contain the correct word in its dictionary. The Cupertino effect explains why the New Zealand Herald ran a story with Saddam Hussein’s name rendered as Saddam Hussies and Reuters ran a story referring to Pakistan’s Muttahida Quami Movement as the Muttonhead Quail Movement.

What’s going on in the beef panties example seems to be a little different and more subtle. Both “patties” and “panties” are correctly spelled words that are one letter apart. The typo that changes patties to panties is, unlike swapping Cupertino in for cooperation, an easy one for a human to make. Single letter typos in the middle of a word are easy to make and easy to overlook.

As nearly all word processing programs have come to include spellcheckers, writers have become accustomed to them. We look for the red squiggly lines underneath words indicating a typo and, if we don’t see any, we assume we’ve got things right. We do so because this is usually a correct assumption: spelling errors, and the typos that produce them, are the most common type of error that writers make.

In a sense though, the presence of spellcheckers has made one class of misspellings — those that result in correctly spelled but incorrect words — more likely than before. By making most errors easier to catch, spellcheckers lead us to spend less time proofreading and, in the process, make a smaller class of errors — in this case, swapped words — more likely than they used to be. The result is errors like “beef panties.”
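To make the mechanism concrete, here is a minimal sketch (in Python, with a toy word list of my own invention) of how a dictionary-based spellchecker decides what to flag. The only test is membership in the dictionary, so a typo that lands on another real word passes silently.

```python
# A toy dictionary-based spellchecker. Real spellcheckers are far more
# sophisticated, but the core limitation sketched here is the same: a word
# is only flagged if it is absent from the dictionary.
DICTIONARY = {"frozen", "ground", "beef", "patties", "panties", "recall"}

def flag_misspellings(text):
    """Return the words a simple spellchecker would underline in red."""
    return [word for word in text.lower().split() if word not in DICTIONARY]

print(flag_misspellings("frozen ground beef pattis"))   # ['pattis']  -- caught
print(flag_misspellings("frozen ground beef panties"))  # []  -- sails right through
```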

Although we’re not always aware of them, the affordances of technology change the way we work. We proofread differently when we have a spellchecker to aid us. In a way, the presence of a successful error-catching technology makes certain types of errors more likely.

One could make an analogy with the arguments made against some security systems. There’s a strong argument in the security community that the creation of a bad security system can actually make people less safe. If one creates a new high-tech electronic passport validator, border agents might stop checking the pictures as closely or asking tough questions of the person in front of them. If the system is easy to game, it can end up making the border less safe.

Error-checking systems eliminate many errors. In doing so, they can create affordances that make others more likely! If the error checking system is good enough, we might stop looking for errors as closely as we did before and more errors of the type that are not caught will slip through.

Send in the Clones

Earlier in the summer, Iran released this image to the international community — purportedly a photograph of rocket tests carried out recently.

Iran missiles (original image)

There was an interesting response from a number of people who pointed out that the images appeared to have been manipulated. Eventually, the image ended up on the blog Photoshop Disasters (PsD), which released this marked-up image highlighting the fact that certain parts of the image seemed similar to each other. Identical, in fact: they had been cut and pasted.

Iran missile image marked up by PsD

The blog joked that the photos revealed a “shocking gap in that nation’s ability to use the clone tool.”

The clone tool — sometimes called the “rubber stamp tool” — is a feature available in a number of photo-manipulation programs including Adobe Photoshop, GIMP and Corel Photopaint. The tool lets users easily replace part of a picture with information from another part. The Wikipedia article on the tool offers a good visual example and this description:

The applications of the cloning tool are almost unlimited. The most common usage, in professional editing, is to remove blemishes and uneven skin tones. With a click of a button you can remove a pimple, mole, or a scar. It is also used to remove other unwanted elements, such as telephone wires, an unwanted bird in the sky, and a variety of other things.

Of course, the clone tool can also be used to add things in — like the clouds of dust and smoke at the bottom of the images of the Iranian test. Used well, the clone tool can be invisible and leave little or no discernible mark. This invisible manipulation can be harmless or, as in the case of the Iranian missiles, it can be used for deception.

The clone tool makes perfect copies. Too perfect. And these impossibly perfect reproductions can become revealing errors. By introducing unnaturally identical regions within an image, the clone tool introduces errors. In doing so, it can reveal both the person manipulating the image and their tools. Through their careless use of the tool, the Iranian government’s deception, and their methods, were revealed to the world.
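To see why “too perfect” is detectable at all, here is a rough sketch (my own illustration, not the method Photoshop Disasters or anyone else actually used, and with a hypothetical filename) of a program that hunts for byte-identical blocks of pixels in an image:

```python
# Find pairs of byte-identical pixel blocks in an image. Requires Pillow and
# NumPy. Outside of large flat regions (clear sky, solid color), natural
# photographs almost never contain exact duplicates, so a long list of
# matches is a strong hint that the clone tool has been at work.
import hashlib
import numpy as np
from PIL import Image

def find_exact_duplicate_blocks(path, block=16):
    pixels = np.asarray(Image.open(path).convert("RGB"))
    height, width, _ = pixels.shape
    seen, duplicates = {}, []
    for y in range(0, height - block + 1, block):
        for x in range(0, width - block + 1, block):
            digest = hashlib.md5(pixels[y:y + block, x:x + block].tobytes()).hexdigest()
            if digest in seen:
                duplicates.append((seen[digest], (x, y)))
            else:
                seen[digest] = (x, y)
    return duplicates  # pairs of (x, y) block positions with identical content

print(find_exact_duplicate_blocks("iran-missiles.jpg"))  # hypothetical filename
```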

But the Iranian government is hardly the only one caught manipulating images through careless use of the clone tool. Here’s an image, annotated by PsD again, of the 20th Century Fox Television logo with “evident clone tool abuse!”

20th Century Fox Image Manipulation

And here’s an image from Brazilian Playboy where an editor using a clone tool has become a little overzealous in their removal of blemishes.

Missing navel on Playboy Brazil model

Now we’re probably not shocked to find out that Playboy deceptively manipulates images of their models — although the resulting disregard for anatomy drives the extreme artificiality of their productions home in a rather stark way.

In aggregate though, these images (a tiny sample of what I could find with a quick look) help speak to the extent of image manipulation in photographs that, by default, most of us tend to assume are unadulterated. Looking for the clone tool, and for other errors introduced by the process of image manipulation, we can get a hint of just how mediated the images through which we view the world are — and we have reason to be shocked.

Here’s a final example from Google maps that shows the clear marks of the clone tool in a patch of trees — obviously cloned to the trained eye — on what is supposed to be an unadulterated satellite image of land in the Netherlands.

Clone tool artifacts in a patch of trees on a Google Maps satellite image of the Netherlands

Apparently, the surrounding area is full of similar artifacts. Someone has edited out and papered over much of the area — by hand — with the clone tool because someone with power is trying to hide something visible on that satellite photograph. Perhaps they have a good reason for doing so. Military bases, for example, are often hidden in this way to avoid enemy or terrorist surveillance. But it’s only through the error revealed by sloppy use of the clone tool that we’re in any position to question the validity of these reasons or realize the images have been edited at all.

Google News and the UAL Stock Fiasco

I’ve beaten up on Google News before, but something happened this week that made me (and many of you who emailed me) think it was worth revisiting the topic.

On September 9th, a glitch in the Google News crawler caused Google News to redisplay an old article from 2002 announcing that UAL — the company that owns and runs United Airlines — was filing for bankruptcy. The re-publication of this article as news started off a chain reaction that caused UAL’s stock price to plummet from more than USD $11 per share to nearly $3 in 13 minutes! After trading was halted and the company allowed to make a statement, the stock mostly (but not completely) recovered by the end of the day. During that period, USD $1.14 billion of shareholder wealth evaporated.

Initially, officials suspected stock manipulation, but the fiasco seems to have been traced back to a set of automated systems and “honest” technical mistakes. There was no single error behind it but rather several broken systems working in concert.

The mess started when the Chicago Tribune, which published an article about UAL’s bankruptcy back in 2002, started getting increased traffic to that old article for reasons that are not clear. As a result, the old article became listed as a “popular news story” on their website. Seeing the story on the popular stories list, a program running on computers at Google downloaded the article. For reasons Google tried to explain, their program (or “crawler”) was not able to correctly identify the article as coming from 2002 and, instead, classified it as a new story and listed it on their website accordingly. Elsewhere, the Tribune claimed that they had already notified Google of this issue. Google denies this.

What happened next is somewhat complicated but was carefully detailed by the Sun-Times. It seems that a market research firm called Income Securities Advisers, Inc. was monitoring Google News, saw the story (or, in all probability, just the headline “UAL files for bankruptcy”) and filed an alert which was then picked up by the financial news company Bloomberg. At any point, clicking on and reading the article would have made it clear that the story was old. Of course, enough people didn’t click and check before starting a sell-off that snowballed, evaporating UAL’s market capital before anyone realized what was actually going on. The president of the research firm, Richard Lehmann, said, “It says something about our capital markets that people make a buy-sell decision based on a headline that flashes across Bloomberg.”

Even more intriguing, there’s a Wall Street Journal report that claims that the sell-off was actually kick-started by automated trading programs that troll news aggregators — like Bloomberg and Google News. These programs look for key words and phrases and start selling a company’s shares when they sense “bad” news. Such programs exist and, almost certainly, would have been duped by this chain of events.
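For the curious, here is a deliberately naive sketch of what such a headline-scanning trader might look like. It is an illustration of the general idea, not any firm’s actual system.

```python
# A toy "news trader" that keys its decision off words in a headline.
# Real systems are far more elaborate, but the essential weakness shown
# here is the same: nothing checks the date or substance of the story.
BAD_NEWS_PHRASES = ("files for bankruptcy", "chapter 11", "sec investigation")

def react_to_headline(ticker, headline, sell_order):
    if any(phrase in headline.lower() for phrase in BAD_NEWS_PHRASES):
        sell_order(ticker)  # sell first, read the article never

react_to_headline("UAL", "UAL files for bankruptcy",
                  sell_order=lambda ticker: print("SELL", ticker))
```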

While UAL has mostly recovered, the market and many outside of it learned quite a few valuable lessons about the technology that they are trusting their investments and their companies to. Investors understand that the computer programs they use to manage and coordinate their markets are hugely important; financial services companies spend billions of dollars building robust, error-resistant systems. Google News, on the other hand, quietly became part of this market infrastructure without Google, most investors, or companies realizing it — that’s why officials initially suspected intentional market manipulation and why Google and the Tribune were so surprised.

There are several automated programs — including news-reading automated trading systems — that have become very powerful market players. Most investors and the public never knew about these because they are designed to work just like humans do — just faster. When they work, they make money for the people running them because they can stay just ahead of the pack on known market moves. These systems were revealed because they made mistakes that no human would make. In the process they lost (if only temporarily) more than a billion dollars!

Our economy is mediated by and, in many ways, resting in the hands of technologies — many of which we won’t know about until they fail. If we’re wise, we’ll learn from errors and think hard about the way that we use technology and about the power, and threat, that invisible and unaccountable technologies might pose to our economy and beyond.

Google Miscalculator

This post on a search engine blog pointed out a series of very strange and incorrect search results returned by Google’s search engine. Google’s search engine is a very complicated “black box,” and many of the errors described highlight and reveal some aspect of its technology.

My favorite was this error from Google Calculator:

Error showing 1.14 as a result for eight days a week

The error, which has been fixed, occurred when users searched for the phrase “eight days a week” — the name of a Beatles song, film, and sitcom.

Google Calculator is a feature of Google’s search engine that looks at search strings and, if it thinks you are asking a math question or a units conversion, will give you the answer. You can, for example, search for 5000 times 23 or 10 furlongs per fortnight in kph or 30 miles per gallon in inverse square millimeters — Google Calculator will give you the right answers. While it would be obvious to any human that “eight days a week” was a figure of speech, Google thought it was a math problem! It happily converted 1 week to 7 days and then divided 8 by 7: roughly 1.14.
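The arithmetic is easy to reproduce; the whole “calculation” amounts to treating the phrase as the ratio of eight days to one week:

```python
# "Eight days a week" read as a unit conversion: (8 days) / (1 week).
days = 8
days_per_week = 7
print(days / days_per_week)  # 1.1428571428571428, i.e. roughly 1.14
```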

Clearly, the error reveals the absence of human judgment — but we knew that about Google’s search engine already. More intriguing is what this, combined with a series of other Google Calculator errors, might reveal about Google’s black-box software.

When Google launched its Calculator feature, it reminded me of GNU Units — a piece of free/open source software written by volunteers and distributed with an expectation that those who modify it will share with the community. After playing with Google Calculator for a little while, I tried a few “bugs” that had always bothered me in Units. In particular, I tried converting between Fahrenheit and Celsius. Units converts temperature differences (for example, a change in temperature of so many degrees). It does not take into account the fact that the two scales have different zero points, so it often gives people an unexpected (and apparently incorrect) answer. Sure enough, Google Calculator had the same bug.
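The two behaviors are easy to tell apart. Here is a minimal sketch of the difference as the post describes it: converting the size of a degree versus converting an actual temperature reading.

```python
# Converting a temperature *difference* ignores the zero point; converting a
# temperature *reading* must account for it. The first is the behavior the
# post describes in Units (and, apparently, Google Calculator); the second
# is what most people expect.
def celsius_interval_to_fahrenheit(delta_c):
    return delta_c * 9 / 5

def celsius_reading_to_fahrenheit(temp_c):
    return temp_c * 9 / 5 + 32

print(celsius_interval_to_fahrenheit(100))  # 180.0 -- a 100 degree C *change*
print(celsius_reading_to_fahrenheit(100))   # 212.0 -- boiling point of water
```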

Now it’s possible that Google implemented their system similarly and ran into similar bugs. But it’s also quite likely that Google just took GNU Units and, without telling anyone, plugged it into their system. Google might look bad for using Units without credit and without assisting the community, but how would anyone ever find out? Google’s Calculator software ran on Google’s private servers!

If Google had released a perfect calculator, nobody would have had any reason to suspect that Google might have borrowed from Units. One expects unit conversion by different pieces of software to be similar — even identical — when it’s working. Identical bugs and idiosyncratic behaviors, however, are much less likely and much more suspicious.

Given the phrase “eight days a week”, Units says “1.1428571.”

Speed Camera

In the past, I’ve talked about how certain errors can reveal a human in what we may imagine is an entirely automated process. I’ve also shown quite a few errors that reveal the absence of a human just as clearly. Here’s a photograph attached to a speeding ticket given by an automated speed camera that shows the latter.

Photograph of a tow-truck towing a car down a road.

The Daily WTF published this photograph which was sent in by Thomas, one of their readers. The photograph came attached to this summons which arrived in the mail and explained that Thomas had been caught traveling 72 kilometers per hour in a 60 KPH speed zone. The photograph above was attached as evidence of his crime. He was asked to pay a fine or show up in court to contest it.

Thomas should never have been fined or threatened. It’s obvious from the picture that Thomas’ car is being towed. Somebody was going 72 KPH, but it was the tow-truck driver, not Thomas! Anybody who looked at the image could see this.

In fact, Thomas was the first person to see the image. The photograph was taken by a speed camera: a radar gun measured a vehicle moving in excess of the speed limit and triggered a camera which took a photograph. A computer subsequently analyzed the image to read the license plate number and look up the driver in a vehicle registration database. The system then printed a fine notice and summons and mailed them to the vehicle’s owner. The Daily WTF editor points out that proponents of these automated systems often guarantee human oversight in the implementation of these systems. This error reveals that the human oversight in the application of this particular speed camera is either very little or none at all.
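Laid out as a program, the pipeline described above looks something like the sketch below. Every function name here is a hypothetical stand-in for a real component; the point is the step that is not there.

```python
# A hypothetical sketch of the automated ticketing pipeline. Note that no
# step asks a person to look at the photograph before the fine is mailed.
def read_license_plate(photo):
    return "ABC-123"                 # stand-in for the OCR step

def lookup_registered_owner(plate):
    return "Thomas"                  # stand-in for the registration database

def mail_fine_notice(owner, photo, speed, limit):
    print(f"Mailing fine to {owner}: {speed} km/h in a {limit} km/h zone")

def handle_radar_trigger(measured_speed, speed_limit, photo):
    if measured_speed <= speed_limit:
        return
    plate = read_license_plate(photo)
    owner = lookup_registered_owner(plate)
    # A human review of the photo here would have spotted the tow truck.
    mail_fine_notice(owner, photo, measured_speed, speed_limit)

handle_radar_trigger(72, 60, photo="car-being-towed.jpg")
```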

Of course, Thomas will be able to avoid paying the fine — the evidence that exonerates him is literally printed on his court summons. But it will take work and time. The completely automated nature of this system, revealed by this error, has deep implications for the way that justice is carried out. The system is one where people are watched, accused, fined, and processed without any direct human oversight. That has some benefits — e.g., computers are unlikely to let people of a certain race, gender, or background off more easily than others.

But in addition to creating the possibility of new errors, the move from a human to a non-human process has important economic, political, and social consequences. Police departments can give more tickets with cameras — and generate more revenue — than they could ever do with officers in squad cars. But no camera will excuse a man speeding to the hospital with a wife in labor or a hurt child in the passenger seat. As work-to-rule or “rule-book slowdowns” — types of labor protest where workers cripple production by following rules to the letter — show, many rules are only productive for society because they are selectively enforced. The complex calculus that goes into deciding when not to apply the rules, second nature to humans, is still impossibly out of reach for most computerized expert systems. This is an increasingly important fact we are reminded of by errors like the one described here.

More Google News

In the very first thing I wrote about Revealing Errors — an article published in the journal Media/Culture — one of my core examples was Google News. In my discussion, I described how the fact that Google News aggregates articles without human intervention can become quite visible through the site’s routine mistakes — errors that human editors would never commit. I gave the example of the aggregation of two articles: one from Al Jazeera on how, “Iran offers to share nuclear technology,” and another from the Guardian on how, “Iran threatens to hide nuclear program.” Were they really discussing the same event? Maybe. But few humans would have made the same call that Google News did.

Google News Share/Hide

Yesterday, I saw this article from Network World that described an error that is even more egregious and that was, apparently, predicted by the article’s author ahead of time.

In this case, Google listed a parody by McNamara as the top story about the recent lawsuit filed by the MBTA (the Boston mass transit authority) against security researchers at MIT. In the past, McNamara has pointed to other examples of Google News being duped by obvious spoofs. This long list of possible examples includes a story about Congress enlisting the help of YouTube to grill the Attorney General (it was listed as the top story on Google News) and this story (which I dug up) about Paris Hilton’s genitals being declared a wonder of the modern world!

McNamara has devoted an extraordinary amount of time to finding and discussing other shortcomings of Google News. For example, he’s talked about the fact that Google News has trouble filtering out highly local takes on stories of broader interest when presenting them to the general Google News audience, about its sluggishness and inability to react to changing news circumstances, and about the sometimes hilarious and wildly inappropriate mismatches of images on the Google News website. Here’s one example I dug up. Imagine what it looked like before it was censored!

Google News Porn

As McNamara points out repeatedly, all of these errors are only possible because Google News employs no human editors. Computers remain pretty horrible at sorting images for relevance to news stories and discerning over-the-top parody from the real thing — two tasks that most humans don’t have too much trouble with. The more generally inappropriate errors wouldn’t have made it past a human for multiple reasons!

As I mentioned in my original Revealing Errors article, the decision to use a human editor is an important one with profound effects on the way that users are exposed to news and, as a result, experience and understand one important part of the world around them. Google News’ frequent mistakes give us repeated opportunities to consider the way that our choice of technology — and of editors — frames this understanding.

Olympics Blue Screen of Death

Thanks to everyone who pointed me to the Blue Screen of Death (BSoD) that could be seen projected onto part of the roof of the Bird’s Nest stadium during the opening ceremony of the Beijing Olympics this week, right next to the torch and during its lighting! Here’s a photograph of the opening ceremony from an article on Gizmodo (there are more photos there) that shows the snafu pretty clearly.

BSOD at Olympics Opening

In the not-so-recent past, a stadium like the Bird’s Nest would have been lit up using a large number of lights with gels to add color and texture. As the need for computer control grew, expensive, specialized, computer-controlled theatrical lighting equipment was introduced to help automate these systems.

Of course, another way to maximize flexibility, coordination, and programmability at a low cost is to skip the lighting control systems altogether and just hook up a computer to a powerful general-purpose video projector. Then, if you want a green light projected, all you have to do is change the background on the screen being projected to green. If you want a blue-green gradient, it’s just as easy and there are no gels to change. Apparently, that’s exactly what the Bird’s Nest’s designers did.

Unfortunately, with that added flexibility comes the opportunity for new errors. If the computer controlling your lights is running Windows, for example, your lighting system will be susceptible to all of the same failure modes as any other Windows machine. Apparently, using a video projector for this type of lighting is an increasingly common trick. If it had worked correctly for the Olympic organizers, we might never have known!

OSCON Keynote

This year, I was invited to give a keynote presentation on revealing errors at the annual O’Reilly Open Source conference. The keynotes this year were all short form — 15 minutes — but I tried to squeeze in what I could. Although I was “helped” in this regard by the fact that I talk too quickly in general, I think the talk hit the core themes of the project and offered a few key examples that will be familiar to RE’s regular readers.

I’m happy with the result: a couple thousand people showed up for the talk despite the fact that it was at 8:45 AM after the biggest “party night” of the conference!

For those who missed it for whatever reason, you can watch a video recording that O’Reilly made, which I’ve embedded below.

A larger version of the Flash video, as well as a QuickTime version, is over on blip.tv, and I’ve created an Ogg Theora version for all my freedom-loving readers.

Lost in Machine Translation

While I’ve been traveling over the last week or so, loads of people sent me a link to this wonderful image of a sign in China reading “Translate Server Error” which has been written up all over the place. Thanks everyone!

Billboard saying “Translate Server Error”

It’s pretty easy to imagine the chain of events that led to this revealing error. The sign is describing a restaurant (the Chinese text, 餐厅, means “dining hall”). In the process of making the sign, the producers tried to translate the Chinese text into English with a machine translation system. The translation software did not work and produced the error message, “Translate Server Error.” Unfortunately, because the software’s user didn’t know English, they thought that the error message was the translation, and the error text went onto the sign.

This class of error is extremely widespread. When users employ machine translation systems, it’s because they want to communicate with people with whom they do not have a language in common. What that means is that the users of these systems are often in no position to understand the output (or input, depending on which way the translation is going) of such systems and have to trust the translation technology and its designers to get things right.
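One way this failure happens is when a translation service reports its errors in-band, as ordinary text, to a caller who cannot tell the difference. Here is a hypothetical sketch of that pattern; the function and its behavior are my own invention, not any real translation API.

```python
# A hypothetical translation client that returns its error message as if it
# were a translation. A user who cannot read the output language has no way
# to tell the two apart.
def translate(text, target_language="en"):
    try:
        raise ConnectionError("translation backend unreachable")  # simulated outage
    except ConnectionError:
        return "Translate server error"  # error reported in-band, as plain text

sign_text = translate("餐厅")
print(sign_text)  # "Translate server error" -- and straight onto the sign
```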

Here’s another one of my favorite examples that shows a Chinese menu selling stir-fried Wikipedia.

Chinese menu listing “stir-fried Wikipedia”

It’s not entirely clear how this error came about, but it seems likely that someone did a search for the Chinese word for a type of edible fungus and its translation into English. The most relevant and accurate page very well might have been an article on the fungus on Wikipedia. Unfamiliar with Wikipedia, the user then mistook the name of the website for the English name of the fungus. There have been several distinct sightings of “wikipedia” on Chinese menus.

There are a few errors revealed in these examples. Of course, there are errors in the use of language and in the broken translation server itself. Machine translation tools are powerful intermediaries that determine (often with very little accountability) the content of one’s messages. The authors of the translation software might design their tool to prefer certain terminology and word choices over others or to silently censor certain messages. When the software is generating reasonable-sounding translations, the authors and readers of machine-translated texts are usually unaware of the ways in which messages are being changed. By revealing the presence of a translation system or process, errors like these hint at that power.

Of course, one might be able to recognize a machine translation system simply by the roughness and nature of a translation. In this particular case, the server itself came explicitly into view; it was mentioned by name! In that sense, the most serious failure was not that the translation server broke or that Wikipedia was misused, but rather that each system failed to communicate the basic fact that there was an error in the first place.

Tyson Homosexual

Thanks to everyone who pointed me to the flub below. It was reported all over the place today.

Screenshot showing Tyson Homosexual instead of Tyson Gay

The error occurred on One News Now, a news website run by the conservative Christian American Family Association that provides Christian conservative news and commentary. One of the things they do, apparently, is offer a version of the standard Associated Press news feed. Rather than just republishing it, they run the feed through a computer program that rewrites the language so it more accurately reflects their values and choice of terminology.

The error is a pretty straightforward variant of the clbuttic effect — a runaway filter trying to clean up text by replacing offensive terms with theoretically more appropriate ones. Among other substitutions, AFA/ONN replaced the term “gay” with “homosexual.” In this case, they changed the name of champion sprinter and U.S. Olympic hopeful Tyson Gay to “Tyson Homosexual.” In fact, they did it quite a few times, as you can see in the screenshot below.
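For readers who missed the earlier clbuttic discussion (overzealous profanity filters turning “classic” into “clbuttic”), a minimal sketch of this kind of blind substitution shows how easily it mangles proper names. This is an illustration of the general technique, not ONN’s actual code.

```python
# A toy word-replacement filter of the kind described above: it substitutes
# terms everywhere they appear, with no check for names or word boundaries.
# (The real filter evidently also preserved capitalization.)
import re

SUBSTITUTIONS = {"gay": "homosexual", "ass": "butt"}

def sanitize(text):
    for term, preferred in SUBSTITUTIONS.items():
        text = re.sub(term, preferred, text, flags=re.IGNORECASE)
    return text

print(sanitize("Tyson Gay wins 100m title"))  # "Tyson homosexual wins 100m title"
print(sanitize("a classic performance"))      # "a clbuttic performance"
```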

Screenshot showing Tyson Homosexual instead of Tyson Gay.

Now, from a technical perspective, the technology this error reveals is identical to the one behind the clbuttic mistake. What’s different, however, are the values that the error reveals.

AFA doesn’t advertise the fact that it changes words in its AP stories — it just does it. Most of its readers probably never know the difference or realize that the messages and terminology being communicated to them are being intentionally manipulated. AFA prefers the term “homosexual,” which sounds clinical, to “gay,” which sounds much less serious. Their substitution, and the error it created, reflects a set of values that AFA and ONN have about the terminology around homosexuality.

It’s possible that AFA/ONN readers already know about AFA’s values. This error provides an important reminder and shows, quite clearly, the importance that AFA gives to terminology. It reveals their values and some of the actions they are willing to take to protect them.