Truth-Teller App from Washington Post Could Alter Nature of Political Dialog

Several weeks ago, I posted a tongue-in-cheek wish list for Web 2.0 improvements that helped tell truth from lies.

It turns out the Washington Post had already been working on a Truth-Teller Application under a grant from the John S. and James L. Knight Foundation. The prototype of the app made its debut in late January.

According to the Washington Post, the goal of Truth Teller is to fact-check speeches in as close to real time as possible. The inspiration came during the last Republican primary campaign, when Steven Ginsberg, the Post’s national political editor, attended a rally for Michele Bachmann in an Iowa parking lot. Ginsberg recalls:

“For about 45 minutes she said a lot of things that I knew to not be true, and nobody else there knew that.”

Ginsberg thought there must be a way to offer people in the crowd a real-time accounting of politicians’ misstatements. He consulted with Cory Haik and others [1] at the Post. The Truth Teller App is their attempt to offer such a service.

They based the prototype on a combination of several technologies. It generates a transcript from video using speech-to-text technology, matches the text to a database, and then displays, in real time, what’s true and what’s false.
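The Post has not published Truth Teller’s internals, but the pipeline described above can be sketched in miniature. Everything below is illustrative: the fact database, the matching rule and the function names are hypothetical stand-ins, and a real system would feed in live speech-to-text output rather than a hard-coded string.

```python
# Illustrative sketch of a real-time fact-check pipeline.
# The fact database and the exact-substring matching rule are
# hypothetical; the Post's actual implementation is not public.

FACT_CHECKS = {
    "obamacare calls for death panels": "FALSE",
    "the top marginal tax rate is 39.6 percent": "TRUE",
}

def normalize(text):
    """Lowercase and strip punctuation so transcript text matches database keys."""
    return "".join(ch for ch in text.lower()
                   if ch.isalnum() or ch.isspace()).strip()

def check_transcript(transcript):
    """Return (claim, verdict) pairs for any known claims found in the transcript."""
    cleaned = normalize(transcript)
    return [(claim, verdict) for claim, verdict in FACT_CHECKS.items()
            if claim in cleaned]

# A speech-to-text engine would supply the transcript; here it is hard-coded.
for claim, verdict in check_transcript("Obamacare calls for death panels!"):
    print(f"{verdict}: {claim}")  # prints "FALSE: obamacare calls for death panels"
```

A production system would of course need fuzzy matching rather than exact substrings, since no two phrasings of a claim are identical.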

For the prototype, the Post focused on the looming debate over tax reform but hopes to expand its database to incorporate more issues in the future.

“It’s a proof of concept, a prototype in the truest sense,” says Cory Haik, Executive Producer for Digital News at the Post.

To test Truth Teller from The Washington Post, visit truthteller.washingtonpost.com. You can play videos from President Barack Obama, Speaker of the House John Boehner and other politicians and instantly see which statements are true, false or misleading.

Kaila Stein, writing in the American Journalism Review, explains: “Haik realized that everyone at that rally probably had a phone in their hands, and that a program capable of detecting false claims on the spot could help people sort out fact from fiction. She envisioned a product like Shazam, a popular app that can recognize a song based on its sound; however, instead of identifying song and artist, Haik’s app would distinguish between political truth and lies.”

Fact checking is hardly a new concept for news organizations, but doing it in real time is new. It could fundamentally change the nature of political dialog. As I pointed out in another post on February 18, misinformation can be difficult to correct once the rumor mill of the Internet begins and search engines dutifully record millions of comments on it. Hearing or seeing something repeated so often and in such volume can make people think it is true when, in fact, it is not.

“Cognitively, it is much easier for people to accept a given piece of information than to evaluate its truthfulness. This stacks the deck in favor of accepting misinformation rather than properly rejecting it. … Researchers have found that misinformation is “sticky” and is often resistant to correction. Retractions are often ineffective and can sometimes backfire, strengthening incorrect beliefs.”

From Misinformation and Its Correction: Continued Influence and Successful Debiasing
By Stephan Lewandowsky, Ullrich K. H. Ecker, Colleen M. Seifert, Norbert Schwarz and John Cook

The Post hopes to release a functional version of the app by the end of this year and continue refining it after that. According to Stein, Haik and Ginsberg see their innovation as a game-changer. “My hope,” Ginsberg says, “is that, in its realized form, it fundamentally alters the political discourse in America.”

______________________________

[1] The Washington Post Truth Teller team:
Cory Haik, Executive Producer for Digital News
Steven Ginsberg, National Political Editor
Joey Marburger, Mobile Design Director
Yuri Victor, UX Director
Siva Ghatti, Director, Application Development
Ravi Bhaskar, Principal Software Engineer
Gaurang Sathaye, Principal Software Engineer
Julia Beizer, Mobile Projects Editor
Sara Carothers, Producer

Erosion of Trust in Information Fosters Polarization in Politics

A familiar thread running through many of these posts is trust. A good friend who is a very successful businessman once told me that “If you don’t have trust, you don’t have a business.” I have come to believe that saying with all my heart and soul. I think every copywriter, reporter and CEO should have it tattooed on his or her navel.

Trust is the currency of communication.

When we don’t trust the information someone is sending us, we don’t trust him, her or them. That distrust divides us. We may win elections or business deals with bad information, but we lose something larger – the relationships upon which long-term success is built.

Recent surveys (Pew, Gallup, Nielsen and Lab42 studies) indicate that the credibility of advertising and media is severely eroding. Both have fallen to about 25 percent. Said another way, three in four people automatically discount what they read, see or hear through the media, whether it’s programming, news or advertising. By the way, that also is roughly the same percentage of people who falsify information on social media profiles.

How can we restore trust?

A good place to start is over in that far corner of the ring called truth and fairness. If you don’t believe “truth” is obtainable because it is too subjective, then let’s strive for fairness and balance.

I asked several friends, “What would you do to restore trust in the sources of information?” Here are some of their suggestions:

  1. Stop exaggerating to make your point. Yes, exaggeration sometimes gets attention. But it undermines acceptance.
  2. Acknowledge limitations of your information or knowledge.
  3. Be honest, open and fair. Don’t try to twist the facts to make a point. Selective regurgitation is not the way to get the gist of something right.
  4. Don’t withhold information that materially changes the meaning of something.
  5. Support your case with specifics. But don’t misrepresent their meaning to suit your ends. We’ve all seen too many election ads that take quotes out of context to twist the true meaning of what someone said. We’ve all seen too many people waving documents that purport to prove something is true when it is false.
  6. Cite original sources. Do your research. Don’t repeat rumors. And don’t just trust what a friend of a friend of a friend of a friend said. By the time something is filtered through a newspaper reporter who is quoted in a blog which is reposted in a tweet and then distributed in an email rant, the original meaning may have been lost. I had a conversation with my barber before the last election in which he claimed “Obama is a known communist.” Hmmmm. I thought he was a Democrat. So I asked the barber what made him think that. “Somebody wrote a book about it. Everyone knows it.” “What’s the name of the book?”  “I can’t remember.” “Well, can you tell me one thing he’s done that is communistic?” No response.
  7. Make it clear what is fact and what is your opinion of the facts.
  8. Acknowledge different sides of an argument and hold all sides to the same standard of truthfulness. Try to illuminate, not obfuscate. Nothing is more frustrating than when someone doesn’t acknowledge your point of view, but keeps spouting sound bites to make his or her point of view. This does nothing to advance the discussion, but leads to isolationism and gridlock.
  9. Don’t repeat falsehoods, even in jest. A surprising number of people get their news these days from “comedy news shows” that blur the distinction between fact and fantasy.
  10. Be suspicious of ad hominem attacks and avoid generalizations. Treat the other side with respect.

Counterfeiting the Currency of Communication

The partisan pursuit of self-interest often gets in the way of these principles. Unfortunately, when people cross these ethical lines, they undermine the trust that binds people together. People begin to trust only those that share their world view. Compromise is victimized. Politics become polarized. Winning arguments by counterfeiting the currency of communication is a prescription for disaster. The government won’t let people counterfeit its currency. Why do so many human beings willingly counterfeit their own?

Beyond Web 2.0: Proposed Specs to Foster Truth in Elections

Web 2.0 was all about personalizing the web experience. It was great in some ways. I found friends on social networks that owed me money from 40 years ago.

But frankly, in the last election, Web 2.0 failed me. All the attack ads made me wish the web offered a way to tell when someone was:

  • Distorting an opponent’s stance
  • Exaggerating egregiously
  • Misrepresenting facts
  • Quoting out of context
  • Putting true facts in a false light
  • Withholding information that changed meaning
  • Embarking on flights of unrestrained falsity

So I went down to the local bar, cranked up the karaoke machine, sang a chorus of “Your Cheatin’ Heart” and mapped out the future of the Internet on the back of a napkin. Below, my personal wish list:

Web 2.1 would have Bullshit Daemons. (In Geek-speak, a daemon is a background process that handles user requests.) I’d like to be able to cursor over suspicious claims like “Obamacare calls for death panels” and have the daemon:

  • Figure out that Obamacare was actually H.R. 3962 from the first session of the 111th Congress
  • Locate the text of the original bill
  • Scan all 1,990 pages of it for any phrases related to “death panel,” such as “medical review board”
  • Determine (if found) whether such a board had the power to refuse funding for life-saving medical procedures
  • Send me the results and supporting documentation, and if the claim is false…
  • Put a big flashing red “Bullshit” warning across my screen.
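Satire aside, the daemon’s core step is mechanical enough to sketch. The sample bill text and phrase list below are hypothetical stand-ins; an actual daemon would fetch the full text of H.R. 3962 from a congressional source and run continuously in the background.

```python
# Toy sketch of the "Bullshit Daemon" steps above. The phrase list and
# sample bill text are stand-ins; a real daemon would download and scan
# the actual 1,990-page bill.

SUSPECT_PHRASES = ["death panel", "medical review board"]

def scan_bill(bill_text, phrases=SUSPECT_PHRASES):
    """Return each phrase found in the bill text, mapped to its first position."""
    text = bill_text.lower()
    hits = {}
    for phrase in phrases:
        idx = text.find(phrase)
        if idx != -1:
            hits[phrase] = idx
    return hits

def verdict(claim_phrase, bill_text):
    """Flag the claim as BS when its language never appears in the bill."""
    return "BULLSHIT" if not scan_bill(bill_text, [claim_phrase]) else "found in bill"

sample_bill = "SEC. 1. The Secretary shall convene a medical review board..."
print(verdict("death panel", sample_bill))  # prints "BULLSHIT"
```

The hard part, of course, is the step this sketch skips: deciding whether a “medical review board,” if found, actually has the power the claim alleges. That requires human judgment, which is why the Post pairs its software with an editorial database.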

Web 2.2 would have Blabber Daemons that would automatically:

  • Tag all references to false and misleading information with the aforementioned Bullshit (BS) warning
  • Filter BS out of my searches
  • Email my contacts what I found
  • Post the findings on social networks

Web 2.3 would feature Zippo Daemons that would find all images of the biggest fibber each day on the Web. It would then retouch the images to make pants (or skirts) appear on fire.

Web 2.4 would have an Elementary Education Daemon. It would let me highlight phrases like “Guns don’t kill people, people do.” The daemon would then:

  • Go to the FBI web site
  • Examine the latest Uniform Crime Report
  • Calculate the number of Americans who die from gunshots every day
  • Obtain autopsy photos of each victim
  • Email them to the entire NRA mailing list

Web 2.5 would feature a “What’s That On Your Shoe?” Daemon. This daemon would sniff out the consequences of lies used to justify ill-advised public policies, wars, boondoggles, and massive tax expenditures. It would then send out reminders to all registered voters before the first Tuesday in November.

Web 2.6 would introduce a “Get Real” Daemon that would highlight false flattery and pious platitudes, e.g., when someone calls America a “peace-loving nation.” The daemon would search the Internet for all wars waged in the user’s lifetime, list them by nation, rank order the list and present it to the user. When I ran this search, I found that we’ve been in some kind of war for the last 66 years straight: The Cold War (1947-92), The Korean War (1950-53), The Vietnam War (1955-75), The Contra Wars (1981-90), Grenada (1983), Star Wars (1984-93), Panama (1989), The War on Drugs (1972-present), Gulf War I (1990-91), Gulf War II (2003-present), The War in Afghanistan (2001-present), The War on Terror (2001-present) plus covert wars. Let’s get real; Switzerland is peace loving.

Web 2.7 would have a Roto-Rooter Daemon because campaigning has become such a cesspool. This daemon would attach a Scarlet L to search results on all candidates who misrepresent the truth.

Web 2.8 would feature a Black-Hole Daemon that explained where my tax dollars went. This proposal may not be technically feasible.

Web 2.9 would introduce the Give-It-A-Rest Daemon. After a hard day of trying to figure out campaign claims, this daemon would program a personal robot to massage the pain in my neck, soothe my hyperactive grumble gland, and fetch a cold beer.

That last part sounds more rewarding. Maybe we should just skip from 2.0 to 2.9.

How Search Engines Can Help Perpetuate Misinformation

Before we get into this, I want to acknowledge that search engines put a world of relevant information at our fingertips and help people find answers faster than ever before. They’re great. I love ’em. I use ’em. But I also see a dark side to them.

Ask anyone a question. If they don’t know the answer, in all likelihood they will Google it from a smartphone. Voilà! Answers! Are they accurate? Are they true? Those are much bigger questions.

A frequently quoted book, Prioritizing Web Usability (2006) by Jakob Nielsen, claims 93 percent of Web searchers never go past the first page of results. Yet Google and other search engines often return millions of pages.

At one time, an army of professional authors, editors, reviewers, librarians and fact checkers helped verify and screen information before dishing it up to readers. Today, that verification process applies to only a tiny fraction of all the information put online. Anyone can self-publish anything. “No experience necessary” often equates to “no truth or accuracy required.”

Limitations of Search Engines and Human Brains

Search engines simply report all references to a phrase on the Internet; they make no attempt to determine the truth or accuracy of claims. Yet most people assume the truth of something published. Why?

A 2012 report called Misinformation and Its Correction: Continued Influence and Successful Debiasing, published in Psychological Science in the Public Interest (a journal of the Association for Psychological Science) by Stephan Lewandowsky, Ullrich Ecker, Colleen Seifert, Norbert Schwarz and John Cook of the Universities of Western Australia, Michigan and Queensland [1], concludes that, “Cognitively, it is much easier for people to accept a given piece of information than to evaluate its truthfulness.” (Comment: this is especially true when search engine results stretch to thousands or millions of pages.)

The Stickiness of Misinformation

This fascinating report surveys academic literature relating to why we believe certain things we read or hear – even though they may be false. It begins with a discussion of several public policy issues, such as health care reform, vaccinations, and justifications for wars. It also discusses why misinformation is “sticky,” i.e., how hard it is to correct misinformation once it becomes rooted.

According to the report, disinformation in the U.S. healthcare debate peaked in 2009 when Sarah Palin used the phrase “death panels” on her Facebook page. “Within five weeks,” the report continues, “86% of Americans had heard the claim and half either believed it or were unsure about its veracity.”

Mainstream news media and fact-checkers reported that Palin’s characterization of provisions in the proposed law was false. But even today, four years later, a Google search for the term yields 35,800,000 results (in 0.16 seconds)! A scan of the first 20 pages of posts in the Google search revealed:

  • A few were dedicated to exposing “the myth” of death panels, including (to be fair) the very first result, from Wikipedia.
  • Most posts conflicted with each other, i.e., a large number claimed the law would create “death panels” and a large number claimed it would not.
  • A large percentage was posted within the last few months, indicating that many people are trying to resurrect the term or keep the debate going, and that the authors of the paper are correct – misinformation is sticky.

Existing Beliefs Influence Belief in New Information

Determining the validity of information requires hard work and an open mind. The problem, say the authors of the Misinformation report, is that most people don’t seek information that contradicts their view of the world. Said another way, they tend to like information that supports their view.

Even when directly confronted with retractions and conflicting facts, many people cling to their original beliefs by saying something like, “Well, we’re all entitled to our opinions.” In fact, say the authors, conflicting information often serves to strengthen belief in erroneous information.

How The Search for Truth is Getting More Difficult

Think of the Internet as a giant information archive. When topics such as healthcare become politicized, social networks, blogs and circular references turn the Internet into an echo chamber. Millions of references can accumulate in days as people report on reports of other reports, filtering information and putting their own spin on things along the way.

While search engines dutifully record the location of information, they can’t help us determine the truth of it. The sheer volume of conflicting information that they present makes the search for truth like looking for diamonds in a garbage dump.


[1] Click here to learn more about the Authors of Misinformation Report.

How Context Impacts Interpretation

WARNING: This image is NOT what most people assume it is. It is an example of how even the “literal” can “lie.” The context in which something appears can turn meaning around 180 degrees.

FatherDaughter

Copyright © 2013 Rehak Creative Services, Inc.

In the 1970s, I spent much of my spare time with a Nikon F2 wandering through a Chicago neighborhood called Uptown. It was a pretty rough neighborhood at the time – a cauldron of poor Hispanics, African-Americans, Whites who had migrated up from the South and (reportedly) the nation’s single largest concentration of American Indians. Gangs and poverty ruled the neighborhood. Bars, flophouses and halfway houses dotted the streets.

The Chicago Tribune published many of my photos, but refused to publish this one. I took it on a cold morning when I ducked inside a store to change rolls of film. As I closed the camera, I turned and saw this pair staring at me. I immediately dropped to my knee and clicked off five frames with my motor drive as the Black man withdrew the cigarette from his mouth.

Eager to learn more about these two and to obtain model releases, I engaged them in conversation and found that my photo was NOT what it appeared to be. The man had adopted the girl after marrying her mother. Several days later, I brought prints from my negatives to the family as a gift. I met the mother and learned that she had been a single mom who moved to the city from West Virginia to find work. Instead, she found herself living on the streets, cold and hungry. The Black man had taken her and her daughter in, provided them with food and shelter, and eventually married the mother. It seemed to be a very loving, interracial family.

“What’s going on here?”

When the Tribune editors saw the image, their jaws dropped. “What’s going on here?” they asked. I told them the story, but they refused to publish the image even after they knew the story behind it. They feared “it would start a race war.”

For more than 35 years, the image remained unpublished until today. One of my clients, an African-American, saw it a few years ago and almost became physically ill from what the image implied. I told her the story behind it and we remained good friends, but the encounter taught me the editors had been right.

Sometimes even an unaltered documentary image can create a false impression. Because of the social context in which we live, most people see this as a pimp and a child prostitute, not as a loving father and adopted daughter. What was your first impression? Did you leap to the wrong conclusion? Most people do. They see it within a cultural context that is filled with racial distrust. They see the hat. They see the gleam in the man’s eye, the smile on his lips, the leer on the young girl’s face, and they assume the worst.

I learned a powerful lesson from this image. Words and images taken out of context can misrepresent the true meaning of something innocent. They can inflame the reader, fuel prejudice, and ultimately harm society. I publish this example, not to do any of those things, but in the hope that it will teach others how images can mislead.

Sometimes, the reader’s past causes him/her to misinterpret the meaning. Sometimes, people simply jump to the wrong conclusion because of personal experience, prejudice or media conditioning. And sometimes, “authors” deliberately mislead readers by withholding information that would allow them to interpret things properly. When that happens, there’s no way readers can get to the truth.