The Misinformation Age | Reflections & Notes

Cailin O’Connor and James Owen Weatherall. The Misinformation Age: How False Beliefs Spread.


REFLECTIONS


I began this book hoping to discover formulaic insights to better our private and public understanding of truth, as I am persuaded that at the heart of all of our problems is, ultimately, a crisis of epistemological proportions. What I got was delightfully more than I expected, and depressingly less than I hoped for.

“The Vegetable Lamb of Tartary,” a story about a plant that grows lambs, illustrates just how vulnerable we are to false claims. In short, we don’t think empirically, evidentially, or even thoughtfully. We think socially and emotionally. On its face, this should not be surprising, for, after all, we are social animals. However, this reality poses significant problems for how we live and operate in the real, physical, tangible world as our social circles grow paradoxically larger through globalization and smaller through our tribal affinities. Layer onto that complexity capitalist greed and the marketer’s exploitation of our vulnerabilities, and we have, well, what we have now: a hot mess of “fake news,” “alternative facts,” conspiracy thinking, and no accountability for people verbalizing absolute nonsense.

But all that is good news.

Our mere ability to identify what is “fake” and “conspiratorial” and “nonsense” actually shows how far our scientific thinking has advanced. The fact that we can identify what “propaganda” is, and how it navigates its way through our systems, is knowledge we need in order to combat falsehood. O’Connor and Weatherall’s illustrations, through figures and descriptions, of how that works, both strategically and statistically, were really helpful in quantifying and parsing out how people take advantage of our epistemic vulnerabilities. This awareness is fantastic for combating misinformation and outright deception.

But here’s the bad news.

Underlying all of this are the less calculable human elements of fear, greed, pride, willful ignorance, and intentional deception. The harder endeavor that we must advance is to build trust, to commit ourselves to trustworthiness, and to actually care about our human existence as a mutually shared responsibility. Math won’t get us there. Once again, we need something more transcendent, more “spiritual.”

For those of you interested in the “how,” I would encourage you to skip to the end (beginning around page 179) and consider a different perspective on the “marketplace of ideas” and “free speech” as well as a reimagining of the democratic experiment. The proposals may grate against our liberal sensibilities, but they pose a serious question: if we live by lies, propaganda, and misinformation, are we really free?

The challenge is to find new mechanisms for aggregating values that capture the ideals of democracy, without holding us all hostage to ignorance and manipulation. (p.186)

Reading this book was my attempt at emancipating myself from that ignorance and manipulation. I commend it to you for yours.


NOTES


INTRODUCTION
The Vegetable Lamb of Tartary

cf. Sir John Mandeville; an Italian friar named Odoric; the Vegetable Lamb of Tartary

In 1683, a German naturalist named Engelbert Kaempfer, on direct order from Sweden’s King Charles XI, undertook a systematic search in Asia Minor and established conclusively that there simply are no Vegetable Lambs in the world. (2)

| How could it happen that, for centuries, European scholars could assert–with apparent certainty and seriousness–that lambs grew on trees? How could a belief with no supporting evidence, a belief that should have appeared–given all available experience concerning both plants and animals, and, indeed, regular exposure to lambs–simply absurd, nonetheless persist for centuries? (2)

| More important, are the mechanisms by which such beliefs were formed and spread, even among so-called experts, still present? (2) What are today’s Vegetable Lambs, and how many of us believe in them? (3)

This is a book about belief. It is a book about truth and knowledge, science and evidence. But most of all, it is a book about false beliefs. How do we form beliefs–especially false ones? How do they persist? Why do they spread? Why are false beliefs so intransigent, even in the face of overwhelming evidence to the contrary? And, perhaps most important, what can we do to change them? (6)

| This may sound like a truism, but it is worth saying: Our beliefs about the world matter. (6)

To understand why false beliefs persist and spread, we need to understand where beliefs come from in the first place. Some come from our personal experience with the world. (7)

…to focus on individual psychology, or intelligence, is (7) to badly misdiagnose how false beliefs persist and spread. It leads us to the wrong remedies. Many of our beliefs–perhaps most of them–have a more complex origin: we form them on the basis of what other people tell us. We trust and learn from one another. Is there mercury in large fish after all? Most of us haven’t the slightest idea how to test mercury levels. We must rely on information from others. (8)

This means that even our society’s most elite experts are susceptible to false belief based on the testimony of others. (8)

The ability to share information and influence one another’s beliefs is part of what makes humans special. It allows for science and art–indeed, culture of any sort. But it leads to a conundrum. How do we know whether to trust what people tell us? (8)

When we open channels for social communication, we immediately face a trade-off. If we want to have as many true beliefs as possible, we should trust everything we hear. This way, every true belief passing through our social network also becomes part of our belief system. And if we want to minimize the number of false beliefs we have, we should not believe anything. (9)

…widespread falsehood is a necessary, but harmful, corollary to our most powerful tools for learning truths. (9)

We live in an age of misinformation–an age of spin, marketing, and downright lies. (9)

Much of this misinformation takes the form of propaganda. (9)

Political propaganda, however, is just part of the problem. Often more dangerous–because we are less attuned to it–is industrial propaganda. (10)

A classic example of the latter is the campaign by tobacco companies during the second half of the twentieth century to disrupt and undermine research demonstrating the link between smoking and lung cancer. (10)

cf. Naomi Oreskes and Erik Conway, Merchants of Doubt.

In this book we argue that social factors are essential to understanding the spread of beliefs, including–especially–false beliefs. We describe important mechanisms by which false beliefs spread and discuss why, perhaps counterintuitively, these very same mechanisms are often invaluable to us in our attempts to reach the truth. It is only through a proper understanding of these social effects that (11) one can fully understand how false beliefs with significant, real-world consequences persist, even in the face of evidence of their falsehood. And during an era when fake news can dominate real news and influence elections and policy, this sort of understanding is a necessary step toward crafting a successful response. (12)

Our ability to successfully evaluate evidence and form true beliefs has as much to do with our social conditions as our individual psychology. (15)

One of our key arguments in this book is that we cannot understand changes in our political situation by focusing only on individuals. We also need to understand how our networks of social interaction have changed, and why those changes have affected our ability, as a group, to form reliable beliefs. (16)

One of the most surprising conclusions from the models we study in this book is that it is not necessary for propagandists to produce fraudulent results to influence belief. Instead, by exerting influence on how legitimate, independent scientific results are shared with the public, the would-be propagandist can substantially affect the public’s beliefs about scientific facts. This makes responding to propaganda particularly difficult. Merely sussing out industrial or political funding or influence in the production of science is not sufficient. We also need to be attuned to how science is publicized and shared. (17)

…the effects of propaganda can occur even in the absence of a propagandist. If journalists make efforts to be “fair” by presenting results from two sides of a scientific debate, they can bias what results the public sees in deeply misleading ways. (17)

ONE
What Is Truth?

cf. May 1985, Joe Farman, Brian Gardiner, and Jonathan Shanklin, British Antarctic Survey (BAS); ozone.

cf. The Gospel of John, Pilate, “What is truth?”

The idea of truth presents many old, difficult philosophical problems. Can we uncover truths about the natural world? Are there reliable methods for doing so? Can we ever really know anything? (25)

And as the imperial tradition running from Pilate to Trump suggests, those in power have long understood their importance. (26)

Suppose that, having observed some kind of regularity in the world, you would like to draw a general inference about it. For concreteness: Suppose you observe that the sun has risen every morning of your life. Can you infer that the sun always rises? Or, from the fact that you (growing up in the Northern Hemisphere, say) have only ever seen white swans, that every swan is white? (27)

| Hume’s answer was an emphatic “no.” No number of individual instances of a regularity can underwrite a general inference of that sort. (27)

This has become known as the “Problem of Induction.” Hume concluded that we cannot know anything about the world with certainty, because all inferences from experience fall prey to the Problem of Induction. The fact is that science can always be wrong. (27)

We cannot be absolutely certain about the existence of an ozone hole, about whether CFCs caused it, or even about whether ozone is essential for protecting human health. The reason we cannot be certain is that all of the evidence we have for this claim is ultimately inductive–and as Hume taught us, inductive evidence cannot produce certainty. (28)

| And it is not merely that we cannot be certain. Scientists have often been wrong in the past. The history of science is littered with crumpled-up theories that scientists once believed, on the basis of a great deal of evidence, but which they now reject. For nearly two thousand years, scientists believed bad air, or “miasma,” emanating from rotting organic matter was the chief cause of disease–until the nineteenth century, when they came to believe that the diseases previously attributed to miasma are caused by microorganisms (i.e., germs). A thousand years of precision measurements and careful mathematical arguments had established, beyond a shadow of doubt, that the earth stands still and that the sun, planets, and the stars all move around the stationary earth–until a series of scientists, from Copernicus to Newton, questioned and then overturned this theory. And then for centuries after that, Newton’s theory of gravitation was accepted as the true explanation of the motions of the moon around the earth and the earth around the sun. But today even Newton’s theory has been left behind for Einstein’s theory of relativity. (28)

[Larry Laudan and P. Kyle Stanford] … Their argument is sometimes called the “pessimistic meta-induction”: a careful look at the long history of scientific error should make us confident that current theories are also erroneous. (28)

Perhaps we can never be certain about anything, but that does not mean we cannot be more or less confident–or that we cannot gather evidence and use it to make informed decisions. … With the right sorts of evidence we might become so confident that the line between this sort of evidentially grounded belief and absolute certainty is, for our purposes, meaningless. (29)

| Ultimately, we care about truth (at least scientific truth) inasmuch as true beliefs allow us to act successfully in the world. We care about knowledge because of the role that what we know–or at least, what we strongly believe to be true–plays in the choices we make, either individually or collectively. And recognizing this relationship between our beliefs and our choices is the key, not to solving the Problem of Induction, but to setting it aside. (29)

We ignore demands for certainty from industry, and regulate. As Hume himself put it, “A wise man…proportions his belief to the evidence.” [This is from An Enquiry Concerning Human Understanding, Section X.1 (Hume 1748).] (29)

And on reflection, although scientists have come to reject many past theories, it remains true that those theories were often highly effective within the contexts that they had been developed and tested. (30)

Philosophers and statisticians over the past century and a half have developed ways of thinking about the relationship between belief, action, and evidence that captures this pragmatism. The basic idea is that beliefs come in degrees, which measure, roughly, how likely we think something is to be true. And the evidence we gather can and should influence these degrees of belief. The character of that evidence can make us more or less confident. (30)

There is a formula, known as Bayes’ rule, that allows you to calculate what your degree of belief, or credence, should be after learning of some evidence, taking into account what you believed before you saw the evidence and how likely the evidence was.
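A minimal numerical sketch of the rule (the drug-trial numbers below are invented for illustration, not taken from the book):

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E), where
# P(E) = P(E | H) * P(H) + P(E | not-H) * (1 - P(H)).

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Credence in hypothesis H after observing evidence E."""
    prob_of_evidence = (likelihood_if_true * prior
                        + likelihood_if_false * (1 - prior))
    return likelihood_if_true * prior / prob_of_evidence

# Hypothetical example: you are 20% confident a drug works. A trial comes out
# positive 90% of the time if it works, but 30% of the time even if it does not.
print(bayes_update(0.2, 0.9, 0.3))  # ~0.43: one positive trial raises your credence
```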

All normal science, Kuhn argued, occurs within some paradigm, with its own rules for identifying and solving problems and its own standards of evidence. As an example, today when we see a glass fall to the floor and shatter, we see an object pulled down by the force of gravity. Before the paradigm of Newtonian gravitation, we did not see any such thing. We saw the glass as something made of earth, which therefore tended to move toward the earth, returning to its own level in a strict hierarchy of elements. (32)

| A scientific revolution is a change of paradigm: a radical discontinuity, not only in background theory, but in scientists’ whole way of seeing the world. Changes of paradigm could change not only theory, but also what counts as evidence–and in some cases, Kuhn argued, even the results of experiments changed when paradigms changed. (32)

…if Kuhn was right that paradigms structure scientists’ worldviews and if all of our usual evidence gathering and analysis happens, by necessity, within a paradigm, then this picture was fatally flawed. The “evidence” alone could not lead us to scientific theories. There was apparently another ingredient to science–one that ultimately (32) had more to do with the scientists than with the world they were supposedly trying to understand. (33)

Scientists, from this perspective, were members of a society, and their behaviors were determined by that society’s rites and rituals. (33)

It is true that, like all of us, scientists cannot isolate themselves from their cultural contexts. (34)

But the mere observation that a scientist or a group of scientists holds certain cultural or political views does not undermine the evidence and arguments they produce to support those views. (34)

More, it is very important to distinguish between two ways in which politics might affect science. One consists in the sorts of subtle influences we have been considering here, wherein background cultural views affect the assumptions scientists make and the problems they consider. … But there is another way in which politics and science can mix–one that has a strikingly different, and far more nefarious, character. (35)

…there is a kind of political interference in science that is apparent in the case of the Acid Rain Peer Review Panel, and which seems importantly different in kind from anything that advocates of CFC regulation were ever accused of: explicit and intentional manipulation of scientific reports. (41)

By the early 1990s there was a broad perception among many scientists, and also some philosophers, politicians, and journalists, that academics in the humanities were agitating to undermine science. These scientists began to push back. The result was a confrontation, ostensibly over the legitimacy of scientific knowledge, that came to be known as the “science wars.” (41)

The picture of “truth” and “falsity” that we have sketched in this chapter is one according to which our beliefs play a particular role in guiding action. We seek to hold beliefs that are “true” in the sense of serving as guides for making successful choices in the future; we generally expect such beliefs to conform with and be supported by the available evidence. (43) … (When we say, in what follows, that a belief is “true,” this is all we mean; likewise, a “false” belief is one that does not bear this relationship to evidence and successful action.) (44)

This means that if scientists claim they are gathering evidence and that evidence is convincing, we have little choice but to take their word for it. And whether we accept what scientists tell us depends on the degree to which we trust scientists to accurately gather and report their evidence, and to responsibly update their beliefs in light of it. (44)

From this perspective, the real threat to science is not from the ways in which it is influenced by its cultural context, nor the philosophical and social critiques that draw those influences out. Rather, the real threat is from those people who would manipulate scientific knowledge for their own interests or obscure it from the policy makers who might act on it. We are good at dismissing philosophical worries and acting when necessary. We are much less good at protecting ourselves from those who peddle misinformation. (45)

TWO
Polarization and Conformity

We often associate scientific discovery with lone geniuses… But real discoveries are far more complicated and almost invariably involve many people. Most scientific advances result from the slow accumulation of knowledge in a community. (48)

Usually, a model is some sort of simplified or otherwise tractable system that scientists can manipulate and intervene on, to better learn about a messier or more complex system that we ultimately care about. (51)

Bayesian belief updating gives us a model of how individual beliefs change. But as we have just seen in the case of methylmercury, science often needs to be understood on the level of a community, (51) not an individual. How do groups of scientists…share knowledge, evidence, and belief? How do they reach consensus? What do these processes tell us about science? (52)

The basic setup of Bala and Goyal’s model is that there is a group of simple agents–highly idealized representations of scientists, or knowledge seekers–who are trying to choose between two actions and who use information gathered by themselves and by others to make this choice. (53)

Over a series of rounds, each scientist in the model chooses one action or the other. They make their choices on the basis of what they currently believe about the problem, and they record the results of their actions. To begin with, the scientists are not sure about which action is more likely to yield the desired outcome. But as they make their choices, they gradually see what sorts of outcomes each action yields. These outcomes are the evidence they use to update their beliefs. Importantly, each scientist develops beliefs based not only on the outcomes of their own actions, but also on those of their colleagues and friends. (53)

A crucial assumption in this model is that evidence is probabilistic, meaning that when the scientists investigate the world…the results are not always the same. (54)

This process continues stepwise (try actions, update credences, try actions, update credences) until the scientists have converged on a consensus. (58)
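A stripped-down sketch of those rounds in code, to make the dynamics concrete. This is my own simplification of the Bala-Goyal setup the authors describe, with invented parameters (action B succeeds 60 percent of the time, action A 50 percent):

```python
import random

def run_community(network, n_rounds=200, p_good=0.6, p_bad=0.5, trials=10):
    """Toy Bala-Goyal-style community.

    network maps each agent to the set of agents whose results they see
    (their neighbors, including themselves). Each agent holds a credence
    that action B (success rate p_good) beats action A (p_bad). Agents who
    currently favor B test it; everyone then updates by Bayes' rule on
    every result they can see.
    """
    credence = {agent: random.random() for agent in network}

    def bayes(prior, successes, n):
        like_b_better = p_good**successes * (1 - p_good)**(n - successes)
        like_a_better = p_bad**successes * (1 - p_bad)**(n - successes)
        return like_b_better * prior / (like_b_better * prior + like_a_better * (1 - prior))

    for _ in range(n_rounds):
        results = {agent: sum(random.random() < p_good for _ in range(trials))
                   for agent, c in credence.items() if c > 0.5}
        for agent, neighbors in network.items():
            for other in neighbors:
                if other in results:
                    credence[agent] = bayes(credence[agent], results[other], trials)
    return credence
```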

What we want to understand is this: Under what circumstances do networks of scientists converge to false beliefs? (59)

the social spread of knowledge is a double-edged sword. It gives us remarkable capabilities, as a species, to develop sophisticated knowledge about the world, but it also opens the door to the spread of false belief. We see this in the models as well: especially when scientists tackle hard problems, they can all come to agree on the wrong thing. This happens when a few scientists get a string of misleading results and share them with their colleagues. Scientists who might have been on track to believe the true thing can be derailed by their (62) peers’ misleading evidence. When this happens, the scientists would have been better off not getting input from others. (63)

This trade-off, where connections propagate true beliefs but also open channels for the spread of misleading evidence, means that sometimes it is actually better for a group of scientists to communicate less, especially when they work on a hard problem. This phenomenon, in which scientists improve their beliefs by failing to communicate, is known as the “Zollman effect,” after Kevin Zollman, who discovered it. (63)
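Reusing the run_community sketch above, the Zollman effect can be eyeballed by comparing a fully connected community with a sparsely connected one and counting how often each ends up unanimous on the better action. Again, the parameters here are mine; making the problem harder (B only slightly better than A) is what brings the effect out:

```python
agents = range(6)
complete = {a: set(agents) for a in agents}                    # everyone sees everyone
cycle = {a: {a, (a - 1) % 6, (a + 1) % 6} for a in agents}     # each sees two neighbors

for name, net in (("complete", complete), ("cycle", cycle)):
    runs = [run_community(net, p_good=0.55) for _ in range(200)]
    right = sum(all(c > 0.5 for c in run.values()) for run in runs)
    print(name, right, "of 200 communities ended up favoring the better action")
```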

Another way to put this is that some temporary diversity of beliefs is crucial for a scientific community. If everyone starts out believing the same thing, they can fail to try out better options. It is important for at least a few people to test different possibilities so that the group will eventually find the best one. One way to maintain this diversity of beliefs for a long enough time is to limit communication, so that researchers’ beliefs do not influence one another too much while they test different theories. (63)

[via: Is the sharing of “data” the same? Especially important during a global pandemic, yes?]

The term “polarization” originated in physics to describe the way some electromagnetic waves propagate in two oppositely oriented ways. By the mid-nineteenth century, political pundits had embraced this metaphor, of two opposite ways of being, to describe disagreements in a state dominated by two parties. (69)

In an ideal science, thinkers adopt beliefs that are supported by evidence, regardless of their social consequences. (70)

| In fact, this is not how science works. Scientists are people; like anyone else, they care about their communities, their friends, and their country. They have religious and political beliefs. They value their jobs, their economic standing, and their professional status. And these values come into play in determining which beliefs they support and which theories they adopt. (70)

Usually, when we encounter evidence, it is not perfectly certain. In such cases, there is a different rule that can be used to update your be-(71)liefs, called “Jeffrey’s rule,” after Princeton philosopher Dick Jeffrey, who proposed it. Jeffrey’s rule takes into account an agent’s degree of uncertainty about some piece of evidence when determining what the agent’s new credence should be. (72)
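Jeffrey’s rule itself is compact: where Bayes’ rule treats the evidence E as certain, Jeffrey’s rule mixes the two conditional credences, weighted by how confident you now are that E actually occurred. A sketch, in standard notation rather than the book’s:

```python
def jeffrey_update(p_H_given_E, p_H_given_not_E, q):
    """New credence in H when the evidence E is itself uncertain.

    q is your updated probability that E really occurred:
        P_new(H) = P(H | E) * q + P(H | not-E) * (1 - q)
    With q = 1 this collapses to ordinary Bayesian conditioning on E.
    """
    return p_H_given_E * q + p_H_given_not_E * (1 - q)

# If the report is genuine, your credence should become 0.8; if it is noise,
# you would sit at 0.5. Trusting the source 50/50 lands you at 0.65:
print(jeffrey_update(0.8, 0.5, 0.5))
```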

…”confirmation bias”–reasoning by which we tend to confirm our current beliefs–and it is a variety of what is sometimes called “motivated reasoning.” (75)

But the models of polarization based on Jeffrey’s rule that we have described strongly suggest that psychological biases are not necessary for polarization to result. Notice that our agents do not engage in confirmation bias at all–they update on any evidence that comes from a trusted source. Even if people behave very reasonably upon receiving evidence from their peers, they can still end up at odds. (76)
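A rough sketch of that mechanism, heavily simplified from the authors’ model and with invented parameters: each agent discounts a peer’s evidence in proportion to how far apart their credences are, then folds the discounted evidence in with a Jeffrey-style mixture. No confirmation bias is built in, yet with enough distrust the community splits into stable camps.

```python
import random

def polarization_sketch(n_agents=20, n_rounds=300, p_good=0.6, p_bad=0.5,
                        trials=5, distrust=3.0):
    """Agents trust a peer's study less the further apart their credences are.
    With distrust=0 everyone tends to converge together; with a large enough
    distrust multiplier, some agents end near 0 while the rest end near 1."""
    credence = [random.random() for _ in range(n_agents)]

    def bayes(prior, successes, n):
        like_b = p_good**successes * (1 - p_good)**(n - successes)
        like_a = p_bad**successes * (1 - p_bad)**(n - successes)
        return like_b * prior / (like_b * prior + like_a * (1 - prior))

    for _ in range(n_rounds):
        # agents who currently favor B run a small study of it
        results = {i: sum(random.random() < p_good for _ in range(trials))
                   for i, c in enumerate(credence) if c > 0.5}
        for i in range(n_agents):
            for j, successes in results.items():
                trust = max(0.0, 1 - distrust * abs(credence[i] - credence[j]))
                full_update = bayes(credence[i], successes, trials)
                # Jeffrey-style mixture: discounted evidence moves you only part way
                credence[i] = trust * full_update + (1 - trust) * credence[i]
    return sorted(round(c, 2) for c in credence)

print(polarization_sketch())  # typically a cluster near 0.0 and a cluster near 1.0
```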

Sometimes, polarization happens over a moral/social position. The abortion debate, for instance, is obviously extremely contentious, and most of the debate is not over facts but over whether it is inexcusably wrong to abort unwanted fetuses. (76)

The take-away is that if we want to develop successful scientific theories to help us anticipate the consequences of our choices, mistrusting those who hold different beliefs is toxic. It can create polarized camps that fail to listen to the real, trustworthy evidence coming from the opposite side. In general, it means that a smaller proportion of the community ultimately arrives at true beliefs. (77)

| Of course, the opposite can also happen: sometimes, too much trust can lead you astray, especially when agents in a community have strong incentives to convince you of a particular view. (77)

Ultimately, as we will see, when assessing evidence from others, it is best to judge it on its own merits, rather than on the beliefs of those who present it. (77)

cf. https://www.researchgate.net/publication/323884492_Misinformation_or_Expressive_Responding

…there is another explanation that draws on a large literature in psychology, concerning a phenomenon known as “conformity bias.” (80)

More than a third of study participants agreed with the others in the group. They chose to go against the evidence of their own senses in order to conform with what the others in the group did. (81)

cf. https://www.simplypsychology.org/asch-conformity.html

While conformity seems to vary across cultures and over time, it reflects two truths about human psychology: we do not like to disagree with others, and we often trust the judgments of others over our own. (81)

Suppose you have a group of people who are trying to make a judgment about something where there are two possible answers and only one of them is correct. If each person is individually more likely than not to get the correct answer, the probability that the whole group will get the right answer by voting increases as you add more and more voters. This suggests that there are cases when it is actually a good idea to accept your own fallibility and go with the majority opinion: by aggregating many fallible voices, you increase the chances of getting the right answer. Those who have watched the game show Who Wants to Be a Millionaire? will be familiar with this effect. Contestants who poll the audience for the answer to a question can expect correct feedback 91 percent of the time, compared with those who ask a single friend and get the right answer 65 percent of the time. (82)
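The “aggregating many fallible voices” point is the Condorcet jury theorem, and the arithmetic is easy to check. The 65 percent reliability figure is borrowed from the passage above; the group sizes are my own:

```python
from math import comb

def majority_correct(p, n):
    """Probability that a majority of n independent voters, each right with
    probability p, picks the correct answer (n odd, so no ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101):
    print(n, round(majority_correct(0.65, n), 3))
# roughly 0.65, 0.85, and better than 0.99: the majority of a large group of
# modestly reliable voters is far more reliable than any one of them
```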

UCLA economists Sushil Bikhchandani, David Hirshleifer, and Ivo Welch, for instance, have described a phenomenon known as an “information cascade,” by which a belief can spread through a group despite the presence of strong evidence to the contrary. In these cases, incorrect statements of belief can snowball as people’s judgments are influenced by others in their social environment. (82)

The point is simply that even in a case where conforming might seem like a generally good thing–because others might have information we lack–the whole group can end up behaving in a highly irrational way when our actions or statements of belief come under social influence. (83)
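A toy cascade along these lines (a naive variant in which each agent simply tallies the earlier public choices together with their own private signal; the Bikhchandani-Hirshleifer-Welch model is more careful about the inference, but herds for essentially the same reason):

```python
import random

def cascade(n_agents=50, signal_accuracy=0.6, truth=1):
    """Each agent gets a private signal (right with prob signal_accuracy),
    sees every earlier public choice, and goes with the overall tally,
    falling back on their own signal only when the tally is tied."""
    choices = []
    for _ in range(n_agents):
        signal = truth if random.random() < signal_accuracy else 1 - truth
        tally = sum(1 if c == 1 else -1 for c in choices)
        tally += 1 if signal == 1 else -1
        choices.append((1 if tally > 0 else 0) if tally != 0 else signal)
    return choices

# Once one option gets a lead of two, no later private signal can flip the
# tally, so everyone herds, sometimes on the wrong answer:
wrong = sum(cascade()[-1] != 1 for _ in range(1000))
print(wrong, "of 1000 simulated groups ended up herded on the wrong option")
```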

But the variations we have discussed so far have been based on the assumption that what each individual cares about is the truth… At least in some settings, it seems we also care about agreeing with other people. In fact, in some cases we are prepared to deny our beliefs, or the evidence of our senses, to better fit in with those around us. (84)

…people care about conformity but also about truth. What about models in which we combine the two elements? Even for these partially truth-seeking scientists, conformity makes groups of scientists worse at figuring out what is true. (86)

| First, the greater scientists’ desire to conform, the more cases there are in which some of them hold correct beliefs but do not act on them. (86)

…as philosopher Aydin Mohseni and economist Cole Williams argue, knowing about conformity can also hurt (88) scientists’ ability to trust each other’s statements. (89)

Thus far, we have been thinking of “desire to conform” as the main variable in these models. But really, we should be looking at the trade-off between the desire to conform and the benefits of successful actions. (89)

This means that any desire to conform could swamp the costs of holding a false belief. (89)
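A toy illustration of that trade-off (the payoff weights are mine): an agent announces whichever action scores best on a payoff that mixes the practical value of being right with the social value of matching their peers. When the stakes are low and the pull to conform is strong, the announcement stops tracking the agent’s actual credence.

```python
def announced_action(credence_B, peers_choosing_B, stakes=1.0, conformity=2.0):
    """Pick the action with the higher mixed payoff: `stakes` weights the
    expected value of acting on a true belief, `conformity` weights
    agreement with one's peers."""
    value_B = stakes * credence_B + conformity * peers_choosing_B
    value_A = stakes * (1 - credence_B) + conformity * (1 - peers_choosing_B)
    return "B" if value_B > value_A else "A"

# 70% sure that B is better, but only 10% of peers say so: with low stakes and
# a strong desire to conform, the agent publicly backs A anyway.
print(announced_action(0.7, 0.1))  # -> A
```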

When beliefs are not very important to action, they can come to take the role of a kind of social signal. They tell people what group you belong to–and help you get whatever benefits might accrue from membership in that group. (90)

| For example, an enormous body of evidence supports the idea that the biological species in our world today evolved via natural selection. This is the cornerstone of modern biology, and yet whether or not we accept evolution–irrespective of the evidence available–has essentially no practical consequences for most of us. On the other hand, espousing one view or the other can have significant social benefits, depending on whom we wish to conform with. (90)

A man who says he does not believe in evolution tells you something not just about his beliefs but about where he comes from and whom he identifies with. (91)

As we have argued, these social effects are often independent of, though sometimes exacerbated by, individual psychological tendencies. When we use the beliefs of others to ground our judgment of the evidence they share, we can learn to ignore those who might provide us with crucial information. When we try to conform to others in our social networks, we sometimes ignore our best judgment when making decisions, and, in doing so, halt the spread of true belief. (92)

THREE
The Evangelization of Peoples

In December 1952, Reader’s Digest published an article titled “Cancer by the Carton,”  which presented the growing evidence of a link between cigarette smoking and lung cancer. (93)

As Oreskes and Conway document in Merchants of Doubt, the key idea behind the revolutionary new strategy–which they call the “Tobacco Strategy”–was that the best way to fight science was with more science. (95)

Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the mind of the public.

At the core of the new strategy was the Tobacco Industry Research Committee (TIRC),…

cf. “A Frank Statement to Cigarette Smokers.”

The term “propaganda” originated in the early seventeenth century, when Pope Gregory XV established the Sacra Congregatio de Propaganda Fide–the Sacred Congregation for the Propagation of the Faith. The Congregation was charged with spreading Roman Catholicism through missionary work across the world and, closer to home, in heavily Protestant regions of Europe. (Today the same body is called the Congregation for the Evangelization of Peoples.) … The Congregation’s activities within Europe were more than religious evangelization: they amounted to political subversion, promoting the interests of France, Spain, and the southern states of the Holy Roman Empire in the Protestant strongholds of northern Europe and Great Britain. (97)

| It was this political aspect of the Catholic Church’s activities that led to the current meaning of propaganda as the systematic, often biased, spread of information for political ends. (97)

The idea that industry–including tobacco, sugar, corn, healthcare, energy, pest control, firearms, and many others–is engaged in propaganda, far beyond advertising and including influence and information campaigns addressed at manipulating scientific research, legislation, political discourse, and public understanding, can be startling and deeply troubling. Yet the consequences of these activities are all around us. (98)

| Did (or do) you believe that fat is unhealthy–and the main contributor to obesity and heart disease? The sugar industry invested heavily in supporting and promoting research on the health risks of fat, to deflect attention from the greater risks of sugar. Who is behind the long-term resistance to legalizing marijuana for recrea-(98)tional use? Many interests are involved, but alcohol trade groups have taken a particularly strong and effective stand. There are many examples of such industry-sponsored beliefs, from the notion that opioids prescribed for acute pain are not addictive to the idea that gun owners are safer than people who do not own guns. (99)

[Edward Bernays] writes,

those who manipulate this unseen mechanism of society constitute an invisible government which is the true ruling power of our country. We are governed, our minds molded, our tastes formed, our ideas suggested, largely by men we have never heard of. [Bernays, Propaganda 1928, 9.]

This might sound like the ramblings of a conspiracy theorist, but in fact it is far more nefarious: it is an invitation to the conspiracy, drafted by one of its founding fathers, and targeted to would-be titans of industry who would like to have a seat on his shadow council of thought leaders. (99)

If he is right, then the very idea of a democratic society is a chimera: the will of the people is something to (99) be shaped by hidden powers, making representative government meaningless. Our only hope is to identify the tools by which our beliefs, opinions, and preferences are shaped, and look for ways to re-exert control–but the success of the Tobacco Strategy shows just how difficult this will be. (100)

With just this modification to the framework, we find that policy (102) makers’ beliefs generally track scientific consensus. (103)

Now consider what happens when we add a propagandist to the mix. (103)

The first is a tactic that we called “biased production.” This strategy, which may seem obvious, involves directly funding, and in some cases performing, industry-sponsored research. If industrial forces control the production of research, they can select what gets published and what gets discarded or ignored. The result is a stream of results that are biased in the industry’s favor. (104)

By 1986, according to their own estimates, they had spent more than $130 million on sponsored research, resulting in twenty-six hundred published articles. (104)

Figure 10 gives an example of what this might look like. In (a) we see that the policy makers have different credences (scientists’ credences are omitted from the figure for simplicity’s sake). In (b) both the propagandist and the scientists test their beliefs. Scientists “flip the coin” ten times each. The propagandist, in this example, (105) has enough funding to run five studies, each with ten test subjects, and so sees five results. Then, the scientists share their results, and the propagandist shares just the two bolded results, that is, the ones in which B was not very successful. In (c) the policy makers have updated their beliefs. (106)

We find that this strategy can drastically influence policy makers’ beliefs. Often, in fact, as the community of scientists reaches consensus on the correct action, the policy makers approach certainty that the wrong action is better. Their credence goes in precisely the wrong direction. Worse, this behavior is often stable, in the sense that no matter how much evidence the scientific community produces, as long as the propagandist remains active, the policy makers will never be convinced of the truth. (106)
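A numerical sketch of that tug-of-war, in the spirit of the figure described above but with parameters of my own choosing: action B really is better (success rate 0.6 versus 0.5 for A), five independent scientists share every study they run, and the propagandist runs ten honest studies per round but shares only those in which B came out at or below chance.

```python
import random

def binomial(p, n):
    return sum(random.random() < p for _ in range(n))

def bayes(prior, successes, n, p_good=0.6, p_bad=0.5):
    like_b = p_good**successes * (1 - p_good)**(n - successes)
    like_a = p_bad**successes * (1 - p_bad)**(n - successes)
    return like_b * prior / (like_b * prior + like_a * (1 - prior))

def policy_maker_credence(n_rounds=100, n_scientists=5, n_industry=10, trials=10):
    """Credence of a policy maker who sees every scientist's study but only
    the industry studies selected for sharing."""
    credence = 0.5
    for _ in range(n_rounds):
        honest = [binomial(0.6, trials) for _ in range(n_scientists)]
        industry = [binomial(0.6, trials) for _ in range(n_industry)]
        shared = [s for s in industry if s <= trials // 2]  # only B-unflattering results
        for successes in honest + shared:
            credence = bayes(credence, successes, trials)
    return credence

print(round(policy_maker_credence(), 3))
# Every study was run honestly; the bias lies entirely in which of the
# industry's results get shared, yet the policy maker drifts toward action A.
```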

Experiments that do not yield exciting results often go unpublished, or are relegated to minor journals where they are rarely read. Results that are ambiguous or unclear get left out of papers altogether. The upshot is that what gets published is never a perfect reflection of the experiments that were done. (This practice is sometimes referred to as “publication bias” or the “file drawer effect,” and it causes its own problems for scientific understanding.) (107)

One way to think about what is happening in these models is that there is a tug-of-war between scientists and the propagandist for the hearts and minds of policy makers. (107)

“selective sharing.” Selective sharing involves searching for and promoting research that is conducted by independent scientists, with no direct intervention by the propagandist, that happens to support the propagandist’s interests. (110)

This strategy makes use of a fundamental public misunderstanding of how science works. Many people think of individual scientific studies as providing proof, or confirmation, of a hypothesis. But the probabilistic nature of evidence means that real science is far from this ideal. Any one study can go wrong, a fact Big Tobacco used to its advantage. (111)

The propagandist does not do science. They just take advantage of the fact that the data produced by scientists have a statistical distribution, and there will generally be some results suggesting that the wrong action is better. (112)

The basic mechanism behind selective sharing is similar to that behind biased production: there is a tug-of-war. Results shared by (112) scientists tend to pull in the direction of the true belief, and results shared by the propagandist pull in the other direction. The difference is that how hard the propagandist pulls no longer depends on how much money they can devote to running their own studies, but only on the rate at which spurious results appear in the scientific community. (113)

| For this reason, the effectiveness of selective sharing depends on (113) the details of the problem in question. If scientists are gathering data on something where the evidence is equivocal–say, a disease in which patients’ symptoms vary widely–there will tend to be more results suggesting that the wrong action is better. And the more misleading studies are available, the more material the propagandist has to publicize. (114)

How much data is needed to publish a paper varies dramatically from field to field. Some fields, such as particle physics, demand extremely high thresholds of data quantity and quality for publication of experimental results, while other fields, such as neuroscience and psychology, have been criticized for having lower standards. (114)

[via: Why?]

| Why would sparse data gathering help the propagandist? The answer is closely connected to why studies with fewer participants are better for the propagandist in the biased production strategy. If every scientist “flips their coin” one hundred times for each study, the propagandist will have very few studies to publicize, compared with a situation in which each scientist flips their coin, say, five times. The lower the scientific community’s standards, the easier it is for the propagandist in the tug-of-war for public opinion. (114)
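The arithmetic behind this is plain binomial probability (the success rates below are invented): the fewer flips per study, the more often an honestly run study comes out pointing the wrong way, and every such study is raw material for the propagandist.

```python
from math import comb

def prob_study_misleads(n_trials, p_true=0.6):
    """Probability that an honest study of n_trials flips yields more failures
    than successes, i.e. looks like evidence against the genuinely better action."""
    return sum(comb(n_trials, k) * p_true**k * (1 - p_true)**(n_trials - k)
               for k in range((n_trials - 1) // 2 + 1))

for n in (5, 10, 50, 100):
    print(n, round(prob_study_misleads(n), 3))
# misleading results get rapidly rarer as studies get bigger, which is why
# low evidential standards play into the propagandist's hands
```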

…under real-world circumstances, where a funding agency has a fixed pot of money to devote to a scientific field, funding more scientists is not always best. Our models suggest that it is better to give large pots of money to a few groups, which can use the money to run studies with more data, than to give small pots of money to many people who can each gather only a few data points. The latter distribution is much more likely to generate spurious results for the propagandist. (115)

…papers showing a novel effect are easier to publish than those showing no effect. Thus there are strong personal incentives to adopt standards that sometimes lead to spurious, but surprising, results. (116)

[Bennett] Holman and [Justin] Bruner contend that industry can influence science without biasing scientists themselves by engaging in what they call “industrial selection.” (119)

Once scientists have produced a set of impressive results, they are more likely to get funding from governmental sources such as the National Science Foundation. (This is an academic version of the “Matthew effect.”) (121)

When the propagandist consistently shares misleading data, they bias the sample that generic scientists in the network update on. Although unbiased scientists’ results favoring B tend to drive their credences up, the propagandist’s results favoring A simultaneously drive them down, leading to indefinite uncertainty about the truth. (124)

The propaganda strategies we have discussed so far all involve the manipulation of evidence. Either a propagandist biases the total evidence on which we make judgments by amplifying and promoting results that support their agenda; or they do so by funding scientists whose methods have been found to produce industry-friendly results–which ultimately amounts to the same thing. (125)

They can succeed without manipulating any individual scientist’s methods or results, by biasing the way evidence is shared with the public, biasing the distribution of scientists in the network, or biasing the evidence seen by scientists. (125)

But manipulating the evidence we use is not the only way to manipulate our behavior. For instance, propagandists can play on our emotions, as advertising often does. (125)

The newer salesmanship, understanding the group structure of society and principles of mass psychology, would first ask: “Who is it that influences the eating habits of the world?” The answer, obviously, is: “The physicians.” The new salesman will then suggest to physicians to say publicly that it is wholesome to eat bacon. He knows as a mathematical certainty, that large numbers of persons will follow the advice of their doctors. [Bernays, Propaganda, p.76]

 

But in 1957 most scientists were not worried about global warming. It was widely believed that the carbon dioxide introduced by human activity would be absorbed by the ocean, minimizing the change in atmospheric carbon dioxide–and global temperature. (129)

…[Roger] Revelle and [Hans] Suess estimated how long it took for carbon dioxide to be absorbed by the oceans. They found that the gas would persist in the atmosphere longer than most other scientists had calculated. They also found that as the ocean absorbed more carbon dioxide, its ability to hold the carbon dioxide would degrade, causing it to evaporate out at higher rates. (129)

cf. Earth in the Balance, Al Gore

[Begin at minute 38:02 for the relevant section referenced on p. 133, and minute 45:24 for the specific quote “I read where Senator Gore’s mentor had disagreed with some of the scientific data that is in his book. How do you respond to those criticisms of that sort?”]

But Gore himself had elevated Revelle’s status by basing his environmentalism on Revelle’s authority. (133)

…this extreme case shows most clearly a pattern that has played a persistent (133) role in the history of industrial propaganda in science. It shows that how we change our beliefs in light of evidence depends on the reputation of the evidence’s source. The propagandist’s message is most effective when it comes from voices we think we can trust. (134)

…in 2009 Fred Singer, in collaboration with the Heartland Institute, a conservative think tank, established a group called the Nongovernmental International Panel on Climate Change (NIPCC). (134)

[via: WHAT!? Also, O’Connor and Weatherall cite Merchants of Doubt multiple times.]

…the second Assessment Report of the IPCC (not the NIPCC!), published in 1995. This report included, for the first time, a chapter devoted to what is known as “fingerprinting,” a set of methods for distinguishing climate change caused by human activity from that produced by sources such as sun cycles or volcanic activity. (135)

These models suggest that one way to influence the opinions of members of a group is to find someone who already agrees with them on other topics and have that person share evidence that supports your preferred position. … In other words, a scientist might think: “I am not so sure about actions A and B, but I am certain that action Z is better than Y. If another scientist shares my opinion on action Z, I will also trust that person’s evidence on actions A and B.” (138)

| These sorts of effects can help to explain how weaponized reputation sometimes works. We look to people who have been successful in solving other problems and trust them more when evaluating their evidence. (138)

smallpox variolation. A version of modern-day inoculation, (139) this typically involved scratching a person’s arm and rubbing a scab, or fluid, from a smallpox pustule into the wound. Although a small percentage of people died from the resulting infection, the vast majority experienced a very mild form of smallpox and subsequently developed immunity. (140)

In a network with uniform beliefs, if a central individual changes belief, that person exerts strong conformist influence on peripheral individuals, who will likely also change their beliefs. (143)

| Targeting influential people to spread a new practice or belief is just one way propagandists take advantage of our conformist tendencies. (143)

If a few members of the group change practices, there is pressure on the rest to change, and once they all agree, conformity should keep the whole group there. (144)

FOUR
The Social Network

…science can be thought of as an extreme case of something we are all trying to do in our daily lives. Most of us are not trained as scientists, and even fewer have jobs in which they are paid to do research. But we are often trying to figure stuff out about the world–and to do this, we use the (150) same basic kinds of reasoning that scientists do. We learn from our experience–and, crucially, we learn from the experiences of others. (151)

So fake news has been with us for a long time. And yet something has changed–gradually over the past decade, and then suddenly during the lead-up to the 2016 UK Brexit vote and US election. (153)

Although the stories surely influenced public opinion and likely contributed to the march toward war, their impact was limited by Gilded Age media technology. (154)

| In the past decade, these limitations have vanished. In February 2016, Facebook reported that the 1.59 billion people active on its website are, on average, connected to one another by 3.59 degrees of separation. Moreover, this number is shrinking: in 2011, it was 3.74. And the distribution is skewed, so that more people are more closely connected than the average value suggests. … Information posted and widely shared on Facebook and Twitter has the capacity to reach huge proportions of the voting public in the United States and other Western democracies. (154)

| Even if fake news is not new, it can now spread as never before. This makes it far more dangerous. But does anyone actually believe the outrageous stories that get posted, shared, and liked on social media? (154)

Perhaps some people find them funny or unbelievable, or share them ironically. Others may share them because, even though they know the content is false, the stories reflect their feelings about a topic. (154)

But some people do believe fake news. (154)

There is a famous aphorism in journalism, often attributed to a nineteenth-century New York Sun editor, either John B. Bogart or Charles A. Dana:

If a dog bites a man it is not news, but if a man bites a dog it is.

The industry takes these as words to live by: we rarely read about the planes that do not crash, the chemicals that do not harm us, the shareholder meetings that are uneventful, or the scientific studies that confirm widely held assumptions. (155)

Novelty makes things salient, and salience sells papers and attracts clicks. It is what we care about. But for some subjects, including science as well as politics and economics, a novelty bias can be deeply problematic. (156)

…since few journalists relish being accused of bias, pressures remain for journalists to present both sides of disagreements (or at least appear to). (158)

| Fairness sounds great in principle, but it is extremely disruptive to the public communication of complex issues. (158)

Indeed, norms of fairness have long been recognized as a tool for propagandists: the tobacco industry, for instance, often invoked (158) the Fairness Doctrine to insist that its views be represented on television and in newspaper articles. (159)

Ultimately, the mere existence of contrarians is not a good reason to share their views or give them a platform. (159)

[via: I couldn’t find the apology, but found this article to which the note below refers: https://www.nytimes.com/2004/10/30/news/study-puts-civilian-toll-in-iraq-at-over-100000.html]

The New York Times was widely criticized for presenting this consensus without adequate scrutiny or skepticism, and the editors took the highly unusual step of issuing an apology in 2004. In this case, reporting only the consensus view and stories that were broadly in line with it had dire consequences. (160)

| Fair enough. So how, then, are journalists to tell the difference–especially when they are not experts in the relevant material? For one, stories that are ultimately about current or historical events have a very different status from stories about science. It is not, and should not be, journalists’ role to referee scientific disagreements; that is what peer review and the scientific process are for, precisely because expert judgment is often essential. (160)

Our point, rather, is that the mere existence of contrarians or (apparent) controversy is not itself a story, nor does it justify equal time for all parties to a disagreement. And the publication of a surprising or contrary-to-expectation research article is not in and of itself newsworthy. (161)

It is particularly important for journalists and politicians to carefully vet the sources of their information. (161)

Journalists reporting on science need to rely not on individual scientists (even when they are well-credentialed or respected), but on the consensus views of established, independent organizations, such as the National Academy of Sciences, and on international bodies, such as the Intergovernmental Panel on Climate Change. (161)

[via: I do find this point to be accurate, but still susceptible to the same group epistemology discussed earlier, yes?]

Here is another manifestation of a theme that has come up throughout this book. Individual actions that, taken on their own, are justified, conducive to truth, and even rational, can have troubling consequences in a broader context. Individuals exercising judgment over whom to trust, and updating their beliefs in a way that is responsive to those judgments, ultimately contribute to polarization and the breakdown of fruitful exchanges. Journalists looking for true stories that will have wide interest and readership can ultimately spread misinformation. Stories in which every sentence is true and impeccably sourced can nonetheless contribute to fake news and false belief. (168)

…let us simply stipulate for the sake of argument that Russian agents did in fact hack the DNC servers and release some or all of the emails they stole. What was their purpose in doing so? If gathering intelligence were the whole goal, it would make little sense to release the hacked emails. The character and scale of the release suggest a different motive. … In other words, the emails have produced discord and mistrust–and in doing so, they have eroded an American political institution. Perhaps this was the point. (169)

If we consider beliefs across a range of issues in determining whom to trust, then establishing the existence of shared beliefs in one arena (opinions on gun laws or LGBTQ rights) provides grounds for trust on issues in other arenas. So if someone wanted to convince you of something you are currently uncertain about, perhaps the best way for that person to do it is to first establish a commonality–a shared interest or belief. The affinity groups appear to have played just this role. (173)

Economist Charles Goodhart is known for “Goodhart’s law,” which has been glossed by anthropologist Marilyn Strathern as “When a measure becomes a target, it ceases to be a good measure.” In other words, whenever there are interests that would like to game an instrument of measurement, they will surely figure out how to do it–and once (174) they do, the measurement is useless. A classic example occurred in Hanoi, Vietnam, under French colonial rule. In the early 1900s, the city was so overrun with rats that the colonial government offered a bounty for every rat tail returned to them. The problem was that people began cutting off rats’ tails and then simply releasing the rats to breed and make more rats, with more tails. (175)

| We should expect a similar response from fake news sources. As soon as we develop algorithms that identify and block fake news sites, the creators of these sites will have a tremendous incentive to find creative ways to outwit the detectors. Whatever barriers we erect against the forces of propaganda will immediately become targets for these sources to overcome. (175)

This framework paints a dreary picture of our hopes for defeating fake news. The better we get at detecting and stopping it, the better we should expect propagandists to get at producing and disseminating it. That said, the only solution is to keep trying. (175)

[via: Ugh. I got all the way to page 175 to get “Well, whatcha gonna do?!”]

If we hope to have a just and democratic society whose policies are responsive to the available evidence, then we must pay attention to the changing character of propaganda and influence, and develop suitable responses. (176)

…whatever else we do, we also need to think about interventions that take networks into account. (176)

Because other people expect them to conform, it is easy to infer that they must have good reasons for taking such a socially risky position. We might call this the “maverick effect”: when Arizona senator (and self-styled maverick) John McCain says that climate change is real, his statement (178) has much more impact on the right than the same one made by Al Gore. (179)

One general takeaway from this book is that we should stop thinking that the “marketplace of ideas” can effectively sort fact from fiction. In 1919 Justice Oliver Wendell Holmes dissented from the Supreme Court’s decision in Abrams v. United States to uphold the Sedition Act of 1918. The defendants had distributed leaflets denouncing US attempts to interfere in the Russian Revolution. While the court upheld their sentences, Holmes responded that “the ultimate good desired is better reached by free trade in ideas. … The best test of truth is the power of the thought to get itself accepted in the competition of the market.” (179)

Through discussion, one imagines, the wheat will be separated from the chaff, and the public will eventually adopt the best ideas and beliefs and discard the rest. Unfortunately, this marketplace is a fiction, and a dangerous one. (179)

One solution is for scientific communities to raise their standards. Another is for groups of scientists to band together, when public interest is on the line, and combine their results before publishing. (180)

Another clear message is that we must abandon industry funding of research. (181)

We currently have a legislative framework that limits the ability of certain industries–tobacco and pharmaceuticals–to advertise their products and to spread misinformation. … We also have defamation and libel laws that prohibit certain forms of (inaccurate) claims about individuals. We think these legislative frameworks should be extended to cover (182) more general efforts to spread misinformation. (183)

[via: “Truth” regulations vs. free speech.]

But the goal here is not to limit speech. It is to prevent speech from illegitimately posing as something it is not, and to prevent damaging propaganda from getting amplified on social media sites. If principles of free speech are compatible with laws against defamatory lies about individuals, surely they are also compatible with regulating damaging lies dressed up as reported fact on matters of public consequence. Lying media should be clearly labeled as such, for the same reason that we provide the number of calories on a package of Doritos or point out health effects on a cigarette box. And social media sites should remain vigi-(183)lant about stopping the spread of fake news on their platforms or, at the very least, try to ensure that this “news” is clearly labeled as such. (184)

We conclude this book with what we expect will be the most controversial proposal of all. … We believe that any serious reflection on the social dynamics of false belief and propaganda raises an unsettling question: Is it time to reimagine democracy? (184)

cf. Science, Truth, and Democracy (2001); Science in a Democratic Society (2011)

Vulgar democracy is the majority-rules picture of democracy, where we make decisions about what science to support, what constraints to place on it, and ultimately what policies to adopt in light of that science by putting them to a vote. The problem, he argues, is simple: Most of the people voting have no idea what they are talking about. Vulgar democracy is a “tyranny of ignorance”–or, given what we have argued here, a tyranny of propaganda. Public beliefs are often worse than ignorant: they are actively misinformed and manipulated. (185)

| As we have argued throughout this book, it is essential that our policy decisions be informed by the best available evidence. What this evidence says is simply not up for a vote. (185)

| There is an obvious alternative to vulgar democracy that is equally unacceptable. Decisions about science and policy informed by science could be made by expert opinion alone, without input from those whose lives would be affected by the policies. As Kitcher points out, this would push onto scientific elites decisions that they are not qualified to make, because they, too, are substantially ignorant: not about the science, but about what matters to the people whose lives would be affected by policy based on that science. (185)

| Kitcher proposes a “well-ordered science” meant to navigate between vulgar democracy and technocracy in a way that rises to the ideals of democracy. Well-ordered science is the science we would have if decisions about research priorities, methodological protocols, and ethical constraints on science were made via considered and informed deliberation among ideal and representative citizens able to adequately communicate and understand both the relevant (185) science and their own preferences, values, and priorities. (186)

Proposing our own form of government is, of course, beyond the scope of this book. But we want to emphasize that that is the logical conclusion of the ideas we have discussed. And the first step in that process is to abandon the notion of a popular vote as the proper way to adjudicate issues that require expert knowledge. (186)

| The challenge is to find new mechanisms for aggregating values that capture the ideals of democracy, without holding us all hostage to ignorance and manipulation. (186)
