Transparency

I caught this revealing error on the always entertaining Photoshop Disasters and thought it was too good to resist pointing out here:

Bag of Jasmin Rice

The picture, of course, is a bag of Tao brand jasmine rice for sale in Germany. The error is pretty obvious if you understand a little German: the phrase transparentes sichtfeld literally means transparent field of view. In this case, the phrase is a note written by the graphic designer of the rice bag’s packaging that was never meant to be read by a consumer. It was meant to tell someone involved in the bag’s manufacture that the pink background on which the text is written should remain unprinted (i.e., left as transparent plastic) so that customers get a view directly onto the rice inside the bag.

The error, of course, is that the pink background and the text were never removed. This was possible, in part, because the pink background doesn’t look horribly out of place on the bag. A more important factor, however, is that the person printing the bag and bagging the rice almost certainly didn’t speak German.

In this sense, this error bears a lot of similarity to some errors I’ve written up before — e.g., the Welsh autoresponder and the Translate server error restaurant. And as in those cases, there are takeaways here about all the things we take for granted when communicating using technology — things we often don’t realize until a language barrier lets an error like this thrust hidden processes into view.

This error revealed a bit of the process through which these bags of rice are produced and a little about the people, and the division of labor, that helped bring the rice to us. Ironically, the error is revealing precisely because of the way the bag fails to reveal its contents.

Deals, Failure, and Fun

I’ve found that the always entertaining FAILblog is a rich source of revealing errors. Here’s a nice example.

Every reader of FAILblog can chuckle at the idea that an item is being offered for $69.98 instead of an original $19.99 as part of a clearance sale. The idea that one can “Save $-49” is icing on the cake. Of course, most readers will immediately assume that no human was involved in the production of this sign; it’s hard to imagine that any human even read the sign before it went up on the shelf!

The sign was made by a computer program working from a database or a spreadsheet with a column for the name of the product, a column for the original price, and a column for the sale price. Subtracting the sale price from the original gives the “savings” and, with that data in hand, the sign is printed. The idea of negative savings is a mistake that only a computer will make and, with the error, the sign-producing computer program is revealed.
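To make the failure concrete, here is a minimal sketch of the kind of sign-printing script this error suggests. It is my own guess, with invented field names, not the store’s actual software: it subtracts one price from the other and never asks whether the “savings” make sense.

```python
# Hypothetical sketch of an automated clearance-sign generator.
# Field names are invented; the point is the unchecked subtraction.
def make_sign(row):
    savings = row["original_price"] - row["sale_price"]
    return (f"CLEARANCE: {row['product']}\n"
            f"Was ${row['original_price']:.2f}, now ${row['sale_price']:.2f}\n"
            f"Save ${savings:.2f}!")

print(make_sign({"product": "Sample item",
                 "original_price": 19.99,
                 "sale_price": 69.98}))
# Prints "Save $-49.99!", negative savings no human sign-maker would let through.
```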

Errors like this, and FAILblog’s work in general, highlight one of the reasons that I think errors are such a great way to talk about technology. FAILblog is incredibly popular, with millions of people checking in to see the latest pictures and videos of screw-ups, mistakes, and failures. For whatever reason — sadism, schadenfreude, reflection on things that are surprisingly out of place, or the comfort of knowing that others have it worse — we all know that a good error can be hilarious and entertaining.

My own goal with Revealing Errors centers on a type of technology education. I want to reveal hidden technology as a way of giving people insight into the degree to which, and the ways in which, our lives are technologically mediated. In the process, I hope to lay the groundwork for talking about the power that this technology has.

But if people are going to want to read anything I write, it should also be entertaining. Errors are appropriate for a project like mine because they give a view into closed systems, hidden intermediaries, and technological black boxes. But they are also great for the project because they are intrinsically interesting!

Faces of Google Street View

This error was revealed and written up by Fred Beneson and first published on his blog.

Google Streetview Blurred Face Example

After receiving criticism for the privacy-violating “feature” of Google Street View that enabled anyone to easily identify people who happened to be on the street as Google’s car drove by, the search giant started blurring faces.

What is interesting, and what Mako would consider a “Revealing Error,” is when the auto-blur algorithm cannot distinguish between a face in an advertisement and a regular human’s face. For the ad, the model has been compensated to have his likeness (and privacy) commercially exploited for the brand being advertised. On the other hand, there is a legal grey area as to whether Google can do the same for random people on the street, and rather than face more privacy criticism, Google chooses to blur their identities to avoid raising the question of whether it has the right to do so, at least in America.
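As an illustration of why a detector can’t tell a billboard face from a passerby’s face, here is a minimal face-blurring sketch, assuming OpenCV and its bundled Haar-cascade detector. This is not Google’s actual pipeline; it only shows that the algorithm sees face-like pixels, wherever they appear.

```python
# Minimal face-blurring sketch (not Google's code) using OpenCV's bundled
# Haar cascade. A face printed on an advertisement and a face on the
# sidewalk look the same to the detector, so both get blurred.
import cv2

def blur_faces(in_path, out_path):
    image = cv2.imread(in_path)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        image[y:y + h, x:x + w] = cv2.GaussianBlur(
            image[y:y + h, x:x + w], (51, 51), 0)
    cv2.imwrite(out_path, image)

# blur_faces("street_view_tile.jpg", "street_view_tile_blurred.jpg")
```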

So who cares that the advertisement has been modified? The advertiser, probably. If a 2002 case is any indication, advertisers do not like it when their carefully placed and expensive Manhattan advertisements get digitally altered. While the advertisers lost a case against Sony for changing (and charging for) advertisements in the background of Spiderman scenes set in Times Square, it’s clear that they were expecting their ads to actually show up in whatever work happened to be created in that space. There are interesting copyright implications here, too, as it demonstrates an implicit desire by big media for work like advertising to be reappropriated and recontextualized because it serves the goal of getting a name “out there.”

To put my undergraduate philosophy degree to use, I believe these cases bring up deep ethical and ontological questions about the right to control and exhibit realities (Google Street View being one reality, Spiderman’s Times Square being another) as they relate to the real reality. Is it just the difference between a fictional and a non-fictional reality? I don’t think so, as no one uses Google Maps expecting to retrieve information that is fictional. Regardless, expect these kinds of issues to come up more and more frequently as Google increases its resolution and virtual worlds merge ever closer with real worlds.

Send in the Clones

Earlier in the summer, Iran released this image to the international community — purportedly a photograph of rocket tests carried out recently.

Iran missiles (original image)

There was an interesting response from a number of people who pointed out that the images appeared to have been manipulated. Eventually, the image ended up on the blog Photoshop Disasters (PsD), which released this marked-up image highlighting the fact that certain parts of the image seemed similar to each other. Identical, in fact; they had been cut and pasted.

Iran missile image marked up by PsD

The blog joked that the photos revealed a “shocking gap in that nation’s ability to use the clone tool.”

The clone tool — sometimes called the “rubber stamp tool” — is a feature available in a number of photo-manipulation programs including Adobe Photoshop, GIMP and Corel Photopaint. The tool lets users easily replace part of a picture with information from another part. The Wikipedia article on the tool offers a good visual example and this description:

The applications of the cloning tool are almost unlimited. The most common usage, in professional editing, is to remove blemishes and uneven skin tones. With a click of a button you can remove a pimple, mole, or a scar. It is also used to remove other unwanted elements, such as telephone wires, an unwanted bird in the sky, and a variety of other things.

Of course, the clone tool can also be used to add things in — like the clouds of dust and smoke at the bottom of the images of the Iranian test. Used well, the clone tool can be invisible and leave little or no discernible mark. This invisible manipulation can be harmless or, as in the case of the Iranian missiles, it can be used for deception.

The clone tool makes perfect copies. Too perfect. And these impossibly perfect reproductions can become revealing errors. By introducing unnatural repetition within an image, the clone tool introduces errors. In doing so, it can reveal both the person manipulating the image and their tools. Through its careless use of the tool, the Iranian government revealed its deception, and its methods, to the world.
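For the curious, here is a rough sketch of how one might hunt for this kind of careless cloning. It is my own toy example, not the method PsD used; real image forensics is far more sophisticated, but exact duplicate blocks are often enough to give a sloppy clone job away.

```python
# Toy clone detector: flag pixel blocks that repeat exactly elsewhere in the
# image. Nearly flat regions are skipped because they repeat naturally, and
# only grid-aligned, pixel-identical copies are caught.
import hashlib
import numpy as np
from PIL import Image

def find_exact_duplicate_blocks(path, block=16):
    img = np.asarray(Image.open(path).convert("L"))
    seen = {}        # block hash -> first (row, col) where it appeared
    duplicates = []  # pairs of block positions with identical pixels
    rows, cols = img.shape
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            tile = img[r:r + block, c:c + block]
            if tile.std() < 2:   # skip nearly uniform blocks (plain sky, etc.)
                continue
            key = hashlib.md5(tile.tobytes()).hexdigest()
            if key in seen:
                duplicates.append((seen[key], (r, c)))
            else:
                seen[key] = (r, c)
    return duplicates

# find_exact_duplicate_blocks("missile_test_photo.jpg")
```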

But the Iranian government is hardly the only one caught manipulating images through careless use of the clone tool. Here’s an image, annotated by PsD again, of the 20th Century Fox Television logo with “evident clone tool abuse!”

20th Century Fox Image Manipulation

And here’s an image from Brazilian Playboy where an editor using a clone tool has become a little overzealous in their removal of blemishes.

Missing navel on Playboy Brazil model

Now we’re probably not shocked to find out that Playboy deceptively manipulates images of their models — although the resulting disregard for anatomy drives the extreme artificiality of their productions home in a rather stark way.

In aggregate though, these images (a tiny sample of what I could find with a quick look) help speak to the extent of image manipulation in photographs that, by default, most of us tend to assume are unadulterated. Looking for the clone tool, and for other errors introduced by the process of image manipulation, we can get a hint of just how mediated the images through which we view the world are — and we have reason to be shocked.

Here’s a final example from Google maps that shows the clear marks of the clone tool in a patch of trees — obviously cloned to the trained eye — on what is supposed to be an unadulterated satellite image of land in the Netherlands.

Clone tool artifacts in trees on a Google Maps satellite image of the Netherlands

Apparently, the surrounding area is full of similar artifacts. Someone has edited out and papered over much of the area — by hand — with the clone tool because someone with power is trying to hide something visible in that satellite photograph. Perhaps they have a good reason for doing so. Military bases, for example, are often hidden in this way to avoid enemy or terrorist surveillance. But it’s only through the error revealed by sloppy use of the clone tool that we’re in any position to question the validity of these reasons or to realize the images have been edited at all.

Google Miscalculator

This post on a search engine blog pointed out a series of very strange and incorrect search results returned by Google’s search engine. Google’s search engine is a very complicated “black box,” and many of the errors described highlight and reveal some aspect of its technology.

My favorite was this error from Google Calculator:

Error showing 1.16 as a result for eight days a week

The error, which has since been fixed, occurred when users searched for the phrase “eight days a week” — the name of a Beatles song, film, and sitcom.

Google Calculator is a feature of Google’s search engine that looks at search strings and, if it thinks you are trying to ask a math question or a units conversion, will give you the answer. You can, for example, search for 5000 times 23 or 10 furlongs per fortnight in kph or 30 miles per gallon in inverse square millimeters — Google Calculator will give you the right answers. While it would be obvious to any human that “eight days a week” is a figure of speech, Google thought it was a math problem! It happily converted 1 week to 7 days and then divided 8 by 7: roughly 1.14.
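Here is a tiny sketch of how a calculator that understands units but not idioms could arrive at the same answer (my own reconstruction, not Google’s code):

```python
# Toy "calculator" that treats "eight days a week" as the ratio
# (8 days) / (1 week), just as Google Calculator apparently did.
NUMBERS = {"a": 1, "one": 1, "eight": 8}
DAYS_PER_UNIT = {"day": 1, "days": 1, "week": 7, "weeks": 7}

def naive_calc(query):
    first_count, first_unit, second_count, second_unit = query.lower().split()
    numerator = NUMBERS[first_count] * DAYS_PER_UNIT[first_unit]
    denominator = NUMBERS[second_count] * DAYS_PER_UNIT[second_unit]
    return numerator / denominator

print(naive_calc("eight days a week"))  # 1.1428571428571428
```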

Clearly, the error reveals the absence of human judgment — but we knew that about Google’s search engine already. More intriguing is what this, combined with a series of other Google Calculator errors, might reveal about Google’s black-box software.

When Google launched its Calculator feature, it reminded me of GNU Units — a piece of free/open source software written by volunteers and distributed with an expectation that those who modify it will share with the community. After playing with Google Calculator for a little while, I tried a few “bugs” that had always bothered me in Units. In particular, I tried converting between Fahrenheit and Celsius. Units converts the size of the degrees (which is what you want for a change in temperature, for example) but does not take into account the fact that the two scales have different zero points, so it often gives people an unexpected (and apparently incorrect) answer. Sure enough, Google Calculator had the same bug.
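The bug is easier to see with the two conversions side by side. This is just an illustration of the arithmetic, not code from Units or Google:

```python
# Converting a temperature *interval* only rescales the degree; converting a
# temperature *reading* must also shift the zero point.
def fahrenheit_interval_to_celsius(delta_f):
    return delta_f * 5 / 9            # what a pure unit converter computes

def fahrenheit_reading_to_celsius(temp_f):
    return (temp_f - 32) * 5 / 9      # what most people actually want

print(fahrenheit_interval_to_celsius(98.6))  # about 54.8, the surprising answer
print(fahrenheit_reading_to_celsius(98.6))   # 37.0, normal body temperature
```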

Now it’s possible that Google implemented its system similarly and ran into similar bugs. But it’s also quite likely that Google just took GNU Units and, without telling anyone, plugged it into its system. Google might look bad for using Units without credit and without assisting the community, but how would anyone ever find out? Google’s Calculator software ran on Google’s private servers!

If Google had released a perfect calculator, nobody would have had any reason to suspect that Google might have borrowed from Units. One expects unit conversion by different pieces of software to be similar — even identical — when it’s working. Identical bugs and idiosyncratic behaviors, however, are much less likely and much more suspicious.

Given the phrase “eight days a week”, Units says “1.1428571.”

Speed Camera

In the past, I’ve talked about how certain errors can reveal a human in what we may imagine is an entirely automated process. I’ve also shown quite a few errors that reveal the absence of a human just as clearly. Here’s a photograph attached to a speeding ticket given by an automated speed camera that shows the latter.

Photograph of a tow-truck towing a car down a road.

The Daily WTF published this photograph which was sent in by Thomas, one of their readers. The photograph came attached to this summons which arrived in the mail and explained that Thomas had been caught traveling 72 kilometers per hour in a 60 KPH speed zone. The photograph above was attached as evidence of his crime. He was asked to pay a fine or show up in court to contest it.

Thomas should never have been fined or threatened. It’s obvious from the picture that Thomas’ car is being towed. Somebody was going 72 KPH, but it was the tow-truck driver, not Thomas! Anybody who looked at the image could see this.

In fact, Thomas was the first person to see the image. The photograph was taken by a speed camera: a radar gun measured a vehicle moving in excess of the speed limit and triggered a camera, which took a photograph. A computer subsequently analyzed the image to read the license plate number and look up the driver in a vehicle registration database. The system then printed a fine notice and summons and mailed them to the vehicle’s owner. The Daily WTF editor points out that proponents of these automated systems often guarantee human oversight in their implementation. This error reveals that the human oversight in the application of this particular speed camera is either very limited or nonexistent.
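To make the absence of a person concrete, here is a hypothetical sketch of such a pipeline. Every name and number in it is invented; the point is that no step asks anyone to actually look at the photograph.

```python
# Hypothetical, fully automated enforcement pipeline (all details invented).
def read_license_plate(photo):
    return photo["plate_in_frame"]     # stand-in for the OCR step

def lookup_registered_owner(plate):
    return {"name": "Thomas"}          # stand-in for the registry lookup

def process_speed_event(radar_speed_kph, limit_kph, photo):
    if radar_speed_kph <= limit_kph:
        return None
    owner = lookup_registered_owner(read_license_plate(photo))
    fine = 10 * (radar_speed_kph - limit_kph)   # invented fine schedule
    # The photo is attached as "evidence" but never examined by a person.
    return {"summons_to": owner["name"], "fine": fine, "evidence": photo}

print(process_speed_event(72, 60, {"plate_in_frame": "THOMAS-CAR"}))
# The towed car's plate is the one in frame, so its owner gets the ticket.
```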

Of course, Thomas will be able to avoid paying the fine — the evidence that exonerates him is literally printed on his court summons. But it will take work and time. The completely automated nature of this system, revealed by this error, has deep implications for the way that justice is carried out. The system is one where people are watched, accused, fined, and processed without any direct human oversight. That has some benefits — e.g., computers are unlikely to let people of a certain race, gender, or background off more easily than others.

But in addition to creating the possibility of new errors, the move from a human to a non-human process has important economic, political, and social consequences. Police departments can give more tickets with cameras — and generate more revenue — than they could ever do with officers in squad cars. But no camera will excuse a man speeding to the hospital with his wife in labor or a hurt child in the passenger seat. As work-to-rule or “rule-book slowdowns” — types of labor protest where workers cripple production by following rules to the letter — show, many rules are only productive for society because they are selectively enforced. The complex calculus that goes into deciding when not to apply the rules, second nature to humans, is still impossibly out of reach for most computerized expert systems. This is an increasingly important fact we are reminded of by errors like the one described here.

Picture of a Process

I enjoyed seeing this image in an article in The Register.

finger shown in Google book

The picture is a screenshot from Google Books showing a page from an 1855 issue of The Gentleman’s Magazine. The latex-clad fingers belong to one of the people whose job it is to scan books for Google’s book project.

Information technologies often hide the processes that bring us the information we interact with. Revealing errors give a picture of what these processes look like or involve. In an extremely literal way, this error shows us just such a picture.

We can learn quite a lot from this image. For example, since the fingers are not pressed against glass, we might conclude that Google is not using a traditional flatbed scanner. Instead, it is likely that they are using a system similar to the one the Internet Archive has built, which is designed specifically for scanning books.

But perhaps the most important thing that this error reveals is something we know, but often take for granted — the human involved in the process.

The decision about where to automate a process, and where to leave it up to a human, is sometimes a very complicated one. Human involvement in a process can prevent and catch many types of errors but can cause new ones. Both choices introduce risks and benefits. For example, a human banker may catch obvious errors and detect suspicious use that a computer without “common sense” might miss. On the other hand, a human banker might commit fraud to try to enrich themselves with others’ money — something a machine would never do.

In our interactions with technological systems, we rarely reflect on the fact, and the ways, that the presence of humans in these systems helps determine the behavior, quality, and reliability of a technology, and the nature and degree of trust that we place in it.

In our interactions with complex processes through simple and abstract user interfaces, it is often only through errors — distinctly human errors, if not usually quite as clearly human as this one — that information workers’ important presence is revealed.