GPS Errors and Pilgrimage to Lourde

Photograph of a man standing at the street sign for Lourde in France.

The Telegraph ran an article about a sizable — and growing — number of Catholic pilgrims arriving in a small village in the Pyrenean foothills. With 94 residents, the village has no hotels or shops — a fact that has left some of the new arrivals a bit confused. It does have a small statue of the Virgin Mary, at which some pilgrims have worshiped. Most pilgrims have noted that the place seems curiously quiet for Catholicism’s third-largest pilgrimage site.

The village is Lourde. Without an “s”. The pilgrims, of course, are looking for Lourdes. The statue some pilgrims have prostrated themselves in front of is not the famous Statue of Our Lady at the Grotto of Massabielle but a simple village statue of the Virgin. Lourde is 92 kilometers (57 miles) to the east of the larger and more famous city with the very similar name.

Given the similar names, pilgrims have apparently been showing up at Lourde for as long as the residents of the smaller village can remember. But villagers report a large uptick in confused pilgrims in recent years. To blame, apparently, is the growing popularity of GPS navigation systems.

Pilgrims have typed “L-O-U-R-D-E” into their GPS navigation devices and forgotten the final “S”. Indeed, with clunky on-screen keyboards and automatic completion, it is often much easier to enter the name of the tiny village than the name of the more likely destination. One letter off and only 92 kilometers away in the same country, it’s an easy mistake to make: the affordances of many GPS navigation systems make it slightly easier to ask to go to Lourde than to Lourdes. Apparently, twenty or so cars of pilgrims show up in Lourde each day — sometimes carrying as many people as live in the town itself!
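
To see why, consider a minimal sketch of prefix-based completion (the place names and the ranking rule here are illustrative, and not how any particular device actually works):

places = ["Lourde", "Lourdes", "Lourmarin", "Louviers"]

def complete(prefix, names):
    # Offer every place name starting with what the driver has typed so far,
    # shortest (i.e., exact) matches first.
    return sorted((n for n in names if n.startswith(prefix)), key=len)

print(complete("Lourde", places))  # ['Lourde', 'Lourdes']

Stop typing one letter early, accept the first suggestion, and the device dutifully sets course for the wrong town.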

The GPS navigation systems, of course, will happily route drivers to either town and do not know or care that Lourde is rarely the destination a driver navigating from across Europe wants. A GPS is designed to show drivers their next turn, so a driver won’t know they’re off course until they reach their destination. The systems assume that destinations were entered correctly. A human navigator asked for directions would never point a person to the smaller village. Indeed, they would probably not know it even exists.

A municipal councilor in Lourde suggested that, “the GPS is not at fault. People are.” Of course, she’s correct. Pilgrims typed in the name of their destination incorrectly. But the reason there’s an increase in people making this particular mistake is because the technology people use to navigate in their cars has changed dramatically over the last decade in a way that makes this mistake more likely. A dwindling number of people pore over maps or ask a passer-by or a gas station attendant for directions. On the whole, navigation has become more effective and more convenient. But not without trade-offs and costs.

GPS technology frames our experience of navigation in ways that are profound, even as we usually take it for granted. Unlike a human, the GPS will never suggest a short detour that leads us to a favorite restaurant or a beautiful vista we’ll be driving by just before sunset. As in the case of Lourde, it will make mistakes no human would (the reverse is also true, of course). In this way, the twenty cars of confused pilgrims showing up in Lourde each day can remind us of the power that technologies have over some of the little tasks in our lives.

Transparency

I caught this revealing error on the always entertaining Photoshop Disasters and thought it was too good to resist pointing out here:

Bag of Jasmin Rice

The picture, of course, is of a bag of Tao brand jasmine rice for sale in Germany. The error is pretty obvious if you understand a little German: the phrase transparentes Sichtfeld literally means “transparent field of view.” In this case, the phrase is a note written by the graphic designer of the rice bag’s packaging that was never meant to be read by a consumer. The phrase is supposed to indicate to someone involved in the bag’s manufacture that the pink background on which the text is written is supposed to remain unprinted (i.e., as transparent plastic) so that customers get a view directly onto the rice inside the bag.

The error, of course, is that the pink background and the text were never removed. This was possible, in part, because the pink background doesn’t look horribly out of place on the bag. A more important factor, however, is that the person printing the bag and bagging the rice almost certainly didn’t speak German.

In this sense, this error bears a lot of similarity to some errors I’ve written up before — e.g., the Welsh autoresponder and the “Translate server error” restaurant. As in those cases, there are takeaways here about all the things we take for granted when communicating using technology — things we often don’t notice until a language barrier lets an error like this one thrust hidden processes into view.

This error revealed a bit of the processes through which these bags of rice are produced and a little bit about the people and the division of labor that helped bring it to us. Ironically, this error is revealing precisely through the way that the bag fails to reveal its contents.

Akamai and SSL

SSL stands for “Secure Sockets Layer” and refers to a protocol for using the web in a secure, encrypted manner. Every time you connect to a website with an address prepended with https://, instead of just http://, you’re connecting over SSL. Almost all banks and e-commerce sites, for example, use SSL exclusively.

SSL helps provide security for users in at least two ways. First, it helps keep communication encoded in such a way that only you and the site you are communicating with can read it. The Internet is designed in a way that makes messages susceptible to eavesdropping; SSL helps prevent this. But sending coded messages only offers protection if you trust that the person you are communicating in code with really is who they say they are. For example, if I’m banking, I want to make sure the website I’m using really is my bank’s and not some phisher trying to get my account information. The fact that we’re talking in a secret code will protect me from eavesdroppers but won’t help me if I can’t trust the person I’m talking in code with.

To address this, web browsers come with a list of trusted organizations that verify or vouch for websites. When one of these trusted organizations vouches that a website really is who they say they are, they offer what is called a “certificate” that attests to this fact. A certificate for revealingerrors.com would help users verify that they really are viewing Revealing Errors, and not some intermediary, impostor, or stand-in. If someone were to redirect traffic meant for Revealing Errors to an intermediary, users connecting using SSL would get an error message warning them that the certificate offered is invalid and that something might be awry.
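
Here is a minimal sketch of that check in Python, roughly what every browser does on every https connection. The hostname is only an example:

import socket
import ssl

hostname = "revealingerrors.com"  # illustrative
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    # If the certificate offered is for some other name (an intermediary,
    # impostor, or stand-in), wrap_socket raises ssl.SSLCertVerificationError:
    # the programmatic equivalent of the browser warning described above.
    with context.wrap_socket(sock, server_hostname=hostname) as conn:
        print(conn.getpeercert()["subject"])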

That bit of background provides the first part of the explanation for this error message.

whitehouse.gov error message claiming the host is a248.e.akamai.net

In this image, a user attempted to connect to the Whitehouse.gov website over SSL — visible from the https in the URL bar. Instead of a secure version of the White House website, however, the user saw an error explaining that the certificate attesting to the identity of the website was not from the United States White House, but rather from some other website called a248.e.akamai.net.

This is a revealing error, of course. The SSL system, normally represented by little more than a lock icon in the status bar of a browser, is thrust awkwardly into view. But this particularly revealing error has more to tell. Who is a248.e.akamai.net? Why is their certificate being offered to someone trying to connect to the White House website?

a248.e.akamai.net is the name of a server that belongs to a company called Akamai. Akamai, while unfamiliar to most Internet users, serves between 10 and 20 percent of all web traffic. The company operates a vast network of servers around the world and rents space on these servers to customers who want their websites to work faster. Rather than serving content from their own computers in centralized data centers, Akamai’s customers can distribute content from locations close to every user. When a user goes to, say, Whitehouse.gov, their computer is silently redirected to one of Akamai’s copies of the Whitehouse website. Often, the user will receive the web page much more quickly than if they had connected directly to the Whitehouse servers. And although Akamai’s network delivers more than 650 gigabits of data per second around the world, it is almost entirely invisible to the vast majority of its users. Nearly anyone reading this uses Akamai repeatedly throughout the day and never realizes it. Except when Akamai doesn’t work.
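
The silent redirection typically happens in the domain name system, where a customer’s hostname is an alias for a name on Akamai’s network. A small sketch in Python (the hostname is illustrative, and results vary by site and resolver):

import socket

# Resolve a name and show the alias (CNAME) chain the resolver reports.
# Names fronted by Akamai often resolve through hosts like
# "a248.e.akamai.net" or names ending in "akamaiedge.net".
canonical, aliases, addresses = socket.gethostbyname_ex("www.example.com")
print("canonical name:", canonical)
print("aliases:", aliases)
print("addresses:", addresses)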

Akamai is an invisible Internet intermediary on a massive scale. But because SSL is designed to detect and highlight hidden intermediaries, Akamai has struggled to make SSL work with their service. Although Akamai offers a way for customers to use its network with SSL, many customers do not take advantage of it. The result is that SSL remains one place where, through error messages like the one shown above, Akamai’s normally hidden network is thrust into view. An attempt to connect to a popular website over SSL will often reveal Akamai. The White House is hardly the only victim; Microsoft’s Bing search engine launched with an identical SSL error revealing Akamai’s behind-the-scenes role.

Akamai plays an important role as an intermediary for a large chunk of all activity online. Not unlike Google, Akamai has an enormous power to monitor users’ Internet usage and to control or even alter the messages that users send and receive. But while Google repeatedly — if not often enough — has its feet held to the fire by privacy and civil liberties advocates, Akamai is mostly ignored.

We appreciate the power that Google has because they are visible — right there in our URL bar — every time we connect to Google Search, GMail, Google Calendar, or any of Google’s growing stable of services. On the other hand, Akamai’s very existence is hidden and their power is obscured. But Akamai’s role as an intermediary is no less important due to its invisibility. Errors provide one opportunity to highlight Akamai’s role and the power they retain.

Deals, Failure, and Fun

I’ve found that the always entertaining FAILblog is a rich source for revealing errors. Here’s a nice example.

Every reader of FAILblog can chuckle at the idea of a clearance sale offering an item for $69.98 instead of its original $19.99. The idea that one can “Save $-49” is icing on the cake. Of course, most readers will immediately assume that no human was involved in the production of this sign; it’s hard to imagine that any human even read the sign before it went up on the shelf!

The sign was made by a computer program working from a database or a spreadsheet with a column for the name of the product, a column for the original price, and a column for the sale price. Subtracting the sale price from the original gives the “savings” and, with that data in hand, the sign is printed. The idea of negative savings is a mistake that only a computer will make and, with the error, the sign-producing computer program is revealed.
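
A minimal sketch of that logic, using the prices from the sign (the code itself is invented for illustration):

# The sign printer knows arithmetic but not plausibility: nothing stops
# the "savings" from going negative.
product = {"original": 19.99, "sale": 69.98}

savings = product["original"] - product["sale"]
print(f"Was ${product['original']:.2f}, now ${product['sale']:.2f}. "
      f"Save ${int(savings)}!")
# Prints: Was $19.99, now $69.98. Save $-49!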

Errors like this, and FAILblog’s work in general, highlight one of the reasons that I think that errors are such a great way to talk about technology. FAILblog is incredibly popular, with millions of people checking in to see the latest pictures and videos of screw-ups, mistakes, and failures. For whatever reason — sadism, schadenfreude, reflection on things that are surprisingly out of place, or the comfort of knowing that others have it worse — we all know that a good error can be hilarious and entertaining.

My own goal with Revealing Errors centers on a type of technology education. I want to reveal hidden technology as a way of giving people insight into the degree to which, and the ways in which, our lives are technologically mediated. In the process, I hope to lay the groundwork for talking about the power that this technology has.

But if people are going to want to read anything I write, it should also be entertaining. Errors are appropriate for a project like mine because they give a view into closed systems, hidden intermediaries, and technological black boxes. But they are also great for the project because they are intrinsically interesting!

Quorum of the Twelve Apostates

A number of people (including the New York Times) wrote about a costly error at Brigham Young University last week that was originally reported by the Utah Valley Daily Herald. The error itself was subtle. First, it is important to realize that Brigham Young is a private university owned by the Church of Jesus Christ of Latter-day Saints (i.e., the Mormon Church or LDS for short). The front of the Daily Universe — the BYU university newspaper — featured a photograph of a group of men who form one of the most important governing bodies in the LDS church with the heading, “Quorum of the Twelve Apostates.”

Quorum of the Twelve Apostates

The caption should have said the “Quorum of the Twelve Apostles,” which is the name of the governing body in question. An apostle, of course, is a messenger or ambassador, although the term is most often used to refer to Jesus’ twelve closest disciples. The term apostle is used in the LDS church to refer to a special high rank of priest. An apostate is something else entirely; the term refers to a person who is disloyal and unfaithful to a cause — particularly to a religion.

Shocked that the paper was labeling the highest priests in the church as disloyal and unfaithful, the university pulled thousands of copies of the paper (18,500 by one report) from news stands around campus. New editions of the paper with a fixed caption were produced and swapped in at what must have been an enormous cost to BYU and the Daily Universe.

The source of the error, says the university’s spokesperson, was a spellchecker. Working under a tight deadline, the person spell-checking the captions ran across a misspelled version of “apostles” in the text. In a rush, they clicked the first term in the suggestion list, which, unfortunately, happened to be a similarly spelled near-antonym of the word they wanted.

From a technical perspective, this error is a version of the Cupertino effect although the impact was much more strongly felt than most examples of Cupertino. Like Cupertino, BYU’s small disaster can teach us a whole lot about the power and effect of technological affordances. The spell-checking algorithm made it easier for the Daily Universe’s copy editor to write “apostate” than it was to write “apostle” and, as a result, they did exactly that. A system with different affordances would have had different effects.
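
Neither the paper’s actual spellchecker nor the exact typo is public, but a sketch using Python’s difflib shows how ranking by spelling similarity alone can put the near-antonym first (the misspelling and word list are illustrative):

import difflib

# Suggestions are ranked purely by how similar they look to the typo;
# meaning never enters into it.
dictionary = ["apostates", "apostles", "apostolic", "apostrophe"]
print(difflib.get_close_matches("apostats", dictionary))
# Prints: ['apostates', 'apostles']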

The affordances in our technological systems are constantly pushing us toward certain choices and actions over others. In an important way, the things we produce and say and the ways we communicate are the product of these affordances. Through errors like BYU’s, we get a glimpse of these usually hidden affordances in everyday technologies.

The Case of the Welsh Autoresponder

Last year, I talked about some of the dangers of machine translation that resulted in a Chinese restaurant advertised as “Translate Server Error” and another restaurant serving “Stir Fried Wikipedia.” This article from the BBC a couple of months ago shows that embarrassing translation errors are hardly limited to either China or to machine translation systems.

Mistranslated Welsh road sign

The English half of the sign is printed correctly and says, “No entry for heavy goods vehicles. Residential site only.” Clearly enough, the point of the sign is to prohibit truck drivers from entering a residential neighborhood.

Since the sign was posted in Swansea, Wales, the bottom half of the sign is written in Welsh. The translation of the Welsh is, “I am not in the office at the moment. Send any work to be translated.”

It’s not too hard to piece together what happened. The bottom half of the sign was supposed to be a translation of the English. Unfortunately, the person ordering the sign didn’t speak Welsh. When they sent the text off to be translated, they received a quick response from an email autoresponder explaining that the email’s intended recipient was temporarily away and would be back soon — in Welsh.

Unfortunately, the representative of the Swansea council thought that the autoresponse message — which is, coincidentally, about the right length — was the translation. And onto the sign it went. The autoresponse system was clearly, and widely, revealed by the blunder.
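
An autoresponder, after all, is just a few lines of logic that answer every incoming message with the same canned text, whoever sent it and whatever it asked. A minimal sketch (the address is invented; the Welsh is the out-of-office reply quoted in the BBC story):

AUTOREPLY = ("Nid wyf yn y swyddfa ar hyn o bryd. "
             "Anfonwch unrhyw waith i'w gyfieithu.")

def handle_incoming(message):
    # No human ever reads `message`; every sender gets the same reply.
    return {"to": message["from"], "body": AUTOREPLY}

request = {"from": "signs@swansea.example",
           "body": "Please translate: No entry for heavy goods vehicles."}
print(handle_incoming(request)["body"])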

One thing we can learn from this mishap is simply to be wary of hidden intermediaries. Our communication systems are long and complex; every message passes through dozens of computers with a possibility of error, interception, surveillance, or manipulation at every step. Although the representative of the Swansea council thought they were getting a human translation, they, in fact, never talked to a human at all. Because the Swansea council didn’t expect a computerized autoresponse, they didn’t consider that the response was not sent by the recipient.

Another important lesson, one also present in the Chinese examples, is that users can only interpret a system’s responses correctly if those responses come in a language they understand. In the translation context — where users plan to use output they cannot themselves read — this is often impossible. Whenever a person has someone, or some system, translate into a language they do not speak, they open themselves up to these types of errors. A user who does not understand the output of a system is put completely at the whim of that system. The fact that we usually do understand our technology’s output provides a set of “sanity checks” that keep this power in check. We are so susceptible to translation errors because those checks are necessarily removed.

Show Me the Code

A while ago, Mark Pilgrim wrote about being prompted with a license agreement that looked like this.

Adobe Reader 8 license agreement showing HTML code.

If, like most people, you have trouble parsing the agreement, that’s because it’s not the text of the license agreement that’s being shown but the “marked up” XHTML code. Of course, users are only supposed to see the processed output of the code and not the code itself. Something went wrong here and Mark was shown everything. The result is useless.
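
The gap between the two views is easy to demonstrate. In this sketch, the license wording is invented and the “renderer” is as crude as can be:

from html.parser import HTMLParser

markup = ("<h2>License Agreement</h2>"
          "<p>By clicking <b>Accept</b>, you agree to the terms below.</p>")

class Renderer(HTMLParser):
    # Keep the text, drop the tags.
    def handle_data(self, data):
        print(data, end="")
    def handle_endtag(self, tag):
        if tag in ("h2", "p"):
            print()  # end block elements with a line break

Renderer().feed(markup)
# Mark was shown the `markup` string; users are supposed to see:
# License Agreement
# By clicking Accept, you agree to the terms below.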

Conceptually, computer science can be boiled down to a process of abstraction. In an introductory undergraduate computer science course, students are first taught syntax or the mechanics of writing code that computers can understand. After that, they are taught abstraction. They’ll continue to be taught abstraction, in one way or another, until they graduate. In this sense, programming is just a process of taking complex tasks and then hiding — abstracting — that complexity behind a simplified set of interfaces. Then, programmers build increasingly complex tools on top of these interfaces and the whole cycle repeats. Through this process of abstracting abstractions, programmers build up systems of almost unfathomable complexity. The work of any individual programmer becomes like a tiny cog in a massive, intricate machine.
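
A single high-level line of code illustrates the stack (the URL is only an example):

import urllib.request

# One innocuous call resting on many hidden layers: URL parsing, DNS
# lookup, TCP, TLS certificate checks, HTTP. Each layer is abstracted
# behind the next, and each was written by different programmers.
with urllib.request.urlopen("https://example.com/") as response:
    print(response.status, len(response.read()), "bytes")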

Mark’s error is interesting because it shows a ruptured black box — an acute failure of abstraction. Of course, many errors, like the dialog shown below, show us very little about the software we’re using.

Unknown Error dialog

With errors like Mark’s, however, users are quite literally presented with a view of parts of the system that the programmer was trying to hide.

Here’s another photo I’ve been showing in my talks that shows a crashed ATM displaying bits of the source code of the application running on the ATM — a bit of unintentional “open sourcing.”

Crashed ATM displaying source code

These examples are embarrassing for authors of the software that caused them but are reasonably harmless. Sometimes, however, the window we get into a broken black box can be shocking.

In talks, I’ve mentioned a configuration error on Facebook that resulted in the accidental publication of the Facebook source code. Apparently, people looking at the code found little pieces like these (the comments were written by Facebook’s authors):

$monitor = array( '42107457' => 1, '9359890' => 1);
// Put baddies (hotties?) in here

/* Monitoring these people's profile viewage.
Stored in central db on profile_views.
Helpful for law enforcement to monitor stalkers and stalkees.
*/

The first block describes a list of “baddies” and “hotties” represented by user ID numbers that Facebook’s authors have singled out for monitoring. The second stanza should be self-explanatory.

Facebook has since taken steps to avoid future errors like this and, as a result, we’re much less likely to get further views into their code. Of course, we have every reason to believe that this code, or other code like it, still runs on Facebook. But as long as Facebook’s black box works better than it has in the past, we may never again know exactly what’s going on.

Like Facebook’s authors, many technologists don’t want us knowing what our technology is doing. Sometimes, as with Facebook, there is a reason: the technology we use is doing things we would be shocked and unhappy to hear about. Errors like these provide a view into some of what we might be missing and reasons to be discomforted by the fact that technologists work so hard to keep us in the dark.

Lorem Ipsum Dolor Sit Amet

I was browsing this store for work clothes in Germany a few weeks back when I noticed something funny in the bottom corner. I’ve highlighted the snafu in the screenshot below with a big red arrow.

lorem ipsum screenshot

The arrow points to a paragraph that is definitely not in German. In fact, it’s Latin. Well, almost Latin.

The paragraph is a famous piece of Latin nonsense text that starts with, and is usually referred to as, lorem ipsum. Lorem ipsum has apparently been in existence (in one form or another), and in use by the printing and publishing industry, for centuries. Although it was originally derived from a text by Cicero, the Latin is meaningless.

The story behind lorem ipsum is rooted in the fact that when presented with text, people tend to read it. For this reason, and because sometimes text for a document doesn’t exist until late in the process, many text and layout designers do what’s called Greeking. In Greeking, a designer inserts fake or “dummy” text that looks like real text but, because it doesn’t make any sense, lets viewers focus on the layout without the distraction of “real” words. Lorem ipsum was the printing industry’s standard dummy text. It continues to be popular in the world of desktop and web publishing.

In fact, lorem ipsum is increasingly popular. The rise of computers and computer-based web and print publishing has made it much easier and more common for text layout and design to be prototyped and much more likely that a document’s designer is not the same person or firm that publishes the final version. While both design and publishing would have been done in print houses half a century ago, today’s norm is for web, graphic, print and layout designers to give their clients pages or layouts with dummy text — often the lorem ipsum text itself. Clients — the “real” text’s producers, that is — are expected to replace the dummy text with the real text before printing or uploading their document to the web.

We can imagine what happened in this example. The clothing shop hired a web design firm that turned over the greeked layout to the store owners and managers. The store managers replaced the greeked text with information about their products and services. Not being experts — or just because they were careless — they missed a few spots and some of the greeked text ended up published to the world by mistake.
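
A sketch of the workflow that makes such slips so easy (the slot names and the German copy are invented):

LOREM = "Lorem ipsum dolor sit amet, consectetur adipiscing elit."

# The designer hands over a layout with dummy text in every slot...
page = {"headline": LOREM, "product_copy": LOREM, "footer": LOREM}

# ...and the client replaces most, but not all, of it.
page["headline"] = "Arbeitskleidung für Profis"
page["product_copy"] = "Robuste Jacken und Hosen für jeden Einsatz."

for slot, text in page.items():
    print(f"{slot}: {text}")  # "footer" still ships as lorem ipsum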

A quick look around the web shows that this shop is in good company. Although lorem ipsum is often preferred because the spacing makes the text “look like” English from a distance, many other dummy texts are both used and abused. Here’s an example from an auto advertisement.

car advertisement with dummy text

Thanks to the rapid and radical changes in roles introduced by desktop publishing — changes in the structure and division of labor that are usually invisible — you can see accidentally published lorem ipsum text all over the web, and in all sorts of places in the printed world as well. We don’t often reflect on the changes in the human and technological systems behind web and desktop publishing. Errors like these give us an opportunity to do so.

Revealing Errors in Zagreb

I’m going to be giving another revealing errors talk this week at the cultural center Mama in Zagreb, Croatia. The talk is scheduled for 14:00 on January 10th and will be part of the weekly skill sharing meeting. It should be a lot of fun and there will be time to chat and grab a coffee or something afterward. Please join if you can and feel free to contact me if you have any questions.

Faces of Google Street View

This error was revealed and written up by Fred Benenson and first published on his blog.

Google Streetview Blurred Face Example

After receiving criticism for the privacy-violating “feature” of Google Street View that enabled anyone to easily identify people who happened to be on the street as Google’s car drove by, the search giant started blurring faces.

What is interesting, and what Mako would consider a “Revealing Error,” is when the auto-blur algorithm cannot distinguish between a face in an advertisement and a regular human’s face. The model in the ad has been compensated to have his likeness (and privacy) commercially exploited for the brand being advertised. On the other hand, there is a legal grey area as to whether Google can do the same for random people on the street, and rather than face more privacy criticism, Google chooses to blur their identities to avoid raising the question of whether it has the right to do so, at least in America.
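
The behavior is easy to reproduce in miniature. Google’s actual pipeline is not public, but a sketch using OpenCV’s stock face detector fails in the same way (the file names are illustrative; it requires the opencv-python package):

import cv2

image = cv2.imread("street_view_frame.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# The detector has no idea whether a face belongs to a passer-by or to a
# model on a billboard; it finds, and we blur, both.
for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
    face = image[y:y+h, x:x+w]
    image[y:y+h, x:x+w] = cv2.GaussianBlur(face, (51, 51), 0)

cv2.imwrite("street_view_frame_blurred.jpg", image)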

So who cares that the advertisement has been modified? The advertiser, probably. If a 2002 case is any indication, advertisers do not like it when their carefully placed and expensive Manhattan advertisements get digitally altered. While the advertisers lost their case against Sony for changing (and charging for) advertisements in the background of Spiderman scenes located in Times Square, it’s clear that they expected their ads to actually show up in whatever work happened to be created in that space. There are interesting copyright implications here, too, as the case demonstrates an implicit desire by big media for work like advertising to be reappropriated and recontextualized, because doing so serves the point of getting a name “out there.”

To put my undergraduate philosophy degree to use, I believe these cases raise deep ethical and ontological questions about the right to control and exhibit realities (Google Street View being one reality, Spiderman’s Times Square being another) as they relate to the real one. Is it just the difference between a fictional and a non-fictional reality? I don’t think so, as no one uses Google Maps expecting to retrieve information that is fictional. Regardless, expect these kinds of issues to come up more and more frequently as Google increases its resolution and virtual worlds merge closer with real ones.