Saturday, November 08, 2014

Be the change?

"Be the change you wish to see in the world"--It's no wonder this saying (let’s call it BTC) has become so popular. From its sense of immediacy to its spiritual turn of phrase, BTC hits all the right notes. It doesn't hurt that it is commonly attributed to Gandhi, even though, as writer Brian Morton has noted, the closest Gandhi came to BTC was a passage including these words: "If we could change ourselves, the tendencies in the world would also change." 

Although BTC echoes some of Gandhi’s themes, its phrasing and emphasis are notably different. What is clear is that its concise form delivers a potent message about the potential for transformation--and this provides us with a window into contemporary values.

BTC suggests that if, for example, you wish for more patience in the world, you should be more patient yourself. Presumably if you succeed in becoming more patient, then you have increased the global level of patience. Furthermore, your example may encourage others. This appears plausible, and in cases like this BTC seems to provide good guidance.

What if you seek a reduction in greenhouse gas emissions? You can’t “be” less greenhouse gas emission, but following the spirit of BTC, you should aim to produce fewer emissions yourself. But suppose you want an improvement in the human rights situation in Burma? How can you “be” such a change? Here, BTC offers little guidance. It is sometimes interpreted to mean “If you want to see change in the world, then do something.” But this is too broad. BTC is more than an encouragement to take action--it’s about personal change as a vehicle for transforming the world.

Like many sayings, BTC is a mixed bag. To its credit, it encourages each of us to examine the role we play in larger-scale problems, and it calls us to take action. But troublingly, BTC hints that any problem can be solved by changing individual behaviours. Thus, the problem of greenhouse gas emissions could be solved if we carpooled and used public transportation, used less energy to heat and cool our homes, and so forth. But while individual lifestyle choices are clearly important, the problem is much more extensive and complex than this. Greenhouse gas emissions can be attributed not only to individual choices but also to large-scale industrial and agricultural operations, not to mention military activities. It might be argued that such factors can ultimately be traced to individual lifestyle choices: companies only produce what customers demand, governments simply respond to the public will. But this is simplistic: companies may be driven by the market, but they also affect the market through advertising and political influence; governments respond not only to the public will, but also to powerful people and corporations.

Regardless, we still face the question of how best to effect change. BTC encourages each of us to change ourselves. But even in problems where individual behaviour plays a central role, broader issues are often involved. For example, automobiles are a major producer of greenhouse gases. But our reliance on them is part of an intricate web of social and economic factors, such as urban sprawl, public transit options, tax policies, and government regulations. If complex problems are seen largely as personal lifestyle issues, structural factors may not get the attention they deserve. At its worst, a focus on personal lifestyle can become a fetish, and broader issues may be ignored.

During the American civil rights movement, many wished to see an end to racial discrimination. BTC would suggest that those people should have worked to end their own discriminatory behaviour. But discrimination was more than an individual behaviour; it was an entrenched institutional problem. The civil rights movement used political action such as protest marches to press for structural changes in American society, such as desegregation of schools and the outlawing of discriminatory employment practices. Of course, in many cases personal transformation and political change go hand in hand. However, BTC tends to encourage--and reflect--a belief that personal transformation is sufficient.

Another interesting aspect of BTC is the distinctly spiritual tone in the words "be the change", echoing the transformative themes so common in religious traditions. The notion that transcendent meaning may be found in personal change need not stand in opposition to an understanding of the mechanisms by which broader change can be effected. But BTC conflates personal change with change in the world, hinting that in some mystical sense they are the same thing.

Equating personal and societal change obscures the mechanisms by which change is actually brought about. To understand these mechanisms, we need to pay closer attention to the messy connections between our efforts and their outcomes, including the role of other contributing factors, barriers to change, possible synergies, and the unanticipated consequences that our actions may produce. A commitment to change requires considering the pros and cons of various choices in the face of inevitable uncertainty. Unfortunately, BTC may cut this process short.

The popularity of BTC reflects a concern about our role in the problems of the world and a desire to bring about change, but it also reflects our society’s preoccupation with self-improvement, which sometimes veers into self-absorption. BTC deftly substitutes personal transformation for global change. The risk is that even as it empowers us to transform ourselves, BTC threatens to further disengage us from the world.

Thursday, June 14, 2012

Rethinking data

"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay." — Sherlock Holmes in The Adventure of the Copper Beeches.

Data may be the preeminent obsession of our age[1]. We marvel at the ever-growing quantity of data on the Internet, and fortunes are made when Google sells shares for the first time on the stock market. We worry about how corporations and governments collect, protect, and share our personal information. A beloved character on a television science fiction show is named Data. We spend billions of dollars to convert the entire human genome into digital data, and having completed that, barely pause for breath before launching similar and even larger bioinformatic endeavours. All this attention being paid to data reflects a real societal transformation as ubiquitous computing and the Internet refashion our economy and, in some respects, our lives. However, as with other important transformations—think of Darwin's theory of natural selection, and the revolutionary advances in genetics and neuroscience—misinterpretation, misapplication, hype, and fads can develop. In this post, I'd like to examine the current excitement about data and where we may be going astray.

Big Data

Writing in the New York Times, Steve Lohr points out that larger and larger quantities of data are being collected—a phenomenon that has been called "Big Data":
In field after field, computing and the Web are creating new realms of data to explore — sensor signals, surveillance tapes, social network chatter, public records and more. And the digital data surge only promises to accelerate, rising fivefold by 2012, according to a projection by IDC, a research firm. 
Widespread excitement is being generated by the prospect of corporations, governments, and scientists mining these immense data sets for insights. In 2008, a special issue of the journal Nature was devoted to Big Data. Microsoft Research's 2009 book, The Fourth Paradigm: Data-Intensive Scientific Discovery, includes these reflections by Craig Mundie (p.223):
Computing technology, with its pervasive connectivity via the Internet, already underpins almost all scientific study. We are amassing previously unimaginable amounts of data in digital form—data that will help bring about a profound transformation of scientific research and insight. 
The enthusiasm in the lay press is even less restrained. Last November, Popular Science had a special issue all about data. It has a slightly breathless feel—one of the articles is titled "The Glory of Big Data"—which is perhaps not so surprising in a magazine whose slogan is "The Future Now".

Data Science

Along with the growth in data, there has been a tremendous growth in analytical and computational tools for drawing inferences from large data sets. Most prominently, techniques from computer science, in particular data mining and machine learning, have frequently been applied to big data. These approaches can often be applied automatically—which is to say, without the need to make much in the way of assumptions, and without explicitly specifying models. What is more, they tend to be scalable—it is feasible (in terms of computing resources and time) to apply them to enormous data sets. Such approaches are sometimes seen as black boxes in that the link between the inputs and the outputs is not entirely clear. To some extent these characteristics stand in contrast with statistical techniques, which have been less optimized for use with very large data sets and which make more explicit assumptions based on the nature of the data and the way they were collected. Fitted statistical models are interpretable, if sometimes rather technical.
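To make the contrast concrete, here is a minimal sketch in Python (the data and variable names are invented for illustration; this is not drawn from any particular study). A machine-learning method is fitted almost automatically and judged by its predictions, while a statistical model makes its structure explicit and yields coefficients that can be read off and interpreted.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2 * X[:, 0] - X[:, 1] + rng.normal(size=500)    # a simple, known relationship

# Machine-learning approach: fit automatically, treat the model as a black box
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
predictions = forest.predict(X)                      # useful output, little insight into "why"

# Statistical approach: an explicit linear model whose fitted coefficients
# estimate the contribution of each variable
ols = sm.OLS(y, sm.add_constant(X)).fit()
print(ols.params)                                    # roughly [0, 2, -1, 0]: interpretable

Real analyses are richer than this on both sides, but the sketch captures the trade-off described above: automation and scalability on one hand, explicit assumptions and interpretability on the other.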

In an article on big data, Sameer Chopra suggests that organizations should "embrace traditional statistical modeling and machine learning approaches". Some have argued that a new discipline is forming, dubbed data science[2], which combines these and other techniques for working with data. In 2010, Mike Loukides at O'Reilly Media wrote a good summary of data science, except for this odd claim:
Using data effectively requires something different from traditional statistics, where actuaries in business suits perform arcane but fairly well-defined kinds of analysis.
Leaving aside the confusion between statistics and actuarial science (not to mention stereotyped notions of typical attire), what is curious is the suggestion that "traditional statistics" has little role to play in the effective use of data. Chopra is more diplomatic: machine learning "lends itself better to the road ahead". Now, in many cases, a fast and automatic method may indeed be just what's needed. Consider the recommendations we have come to expect from online stores. They may not be perfect, but they can be quite convenient. Unfortunately, the successes of computing-intensive approaches in some applications have encouraged some grandiose visions. In an emphatic piece titled "The End of Theory: The Data Deluge Makes the Scientific Method Obsolete", Chris Anderson, the editor in chief of Wired magazine, writes:
This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity.
Furthermore:
We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.
Anderson proposes that instead of taking a scientific approach, we can just "throw the numbers" into a machine and through computational alchemy transform data into knowledge. (Similar thinking shows up in commonplace references to "crunching" numbers, a metaphor I have previously criticized.) The suggestion that we should "forget" the theory developed by experts in the relevant field seems particularly unwise. Theory and expert opinion are always imperfect, but that doesn't mean they should be casually discarded.

Anderson's faith in big data and blind computing power can be challenged on several grounds. Take selection bias, which can play havoc with predictions. As an example, consider the political poll conducted by The Literary Digest magazine, just before the 1936 presidential election. The magazine sent out 10 million postcard questionnaires to its subscribers, and received about 2.3 million back. In 1936, that was big data. The results clearly pointed to a victory by the Republican challenger, Alf Landon. In fact, Franklin Delano Roosevelt won by a landslide. The likely explanation for this colossal failure: for one thing, subscribers to The Literary Digest were not representative of the voting population of the United States; for another, the 23% who responded to the questionnaire were likely quite different from those who did not. This double dose of selection bias resulted in a very unreliable prediction. Today, national opinion polls typically survey between 500 and 3,000 people, but those people are selected randomly and great efforts are expended to avoid bias. The moral of this story is that, contrary to the hype, bigger data is not necessarily better data. Carefully designed data collection can trump sheer volume of data. Of course, it all depends on the situation.
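A small simulation makes the mechanism vivid. The numbers below are invented (they are not the actual 1936 electorate), but the structure is the one described above: when the people who end up in the sample differ systematically from everyone else, an enormous sample can be beaten by a modest random one.

import numpy as np

rng = np.random.default_rng(0)

N = 1_000_000                                   # hypothetical voting population
landon = rng.random(N) < 0.38                   # assume 38% actually support Landon
true_share = landon.mean()

# Selection bias: suppose Landon supporters are far more likely to end up
# responding to the magazine's questionnaire
p_respond = np.where(landon, 0.004, 0.001)
respondents = rng.random(N) < p_respond
big_biased_estimate = landon[respondents].mean()

# A much smaller simple random sample, free of selection bias
srs = rng.choice(N, size=1000, replace=False)
small_random_estimate = landon[srs].mean()

print(f"true Landon share:       {true_share:.3f}")
print(f"huge but biased sample:  {big_biased_estimate:.3f} (n = {int(respondents.sum()):,})")
print(f"random sample of 1,000:  {small_random_estimate:.3f}")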

Selection biases can also be induced during data analysis when cases with missing data are excluded, since the pattern of missingness often carries information. More generally, bias can creep into results in any number of ways, and extensive lists of biases have been compiled. A related concern is captured by the well-known principle of Garbage In, Garbage Out. Anderson refers to measurements taken with "unprecedented fidelity". It is true that in some areas impressive technical improvements in measurement have been made, but data quality issues are much broader and rarely absent. They can never be ignored, and can sometimes completely derail an analysis.

Another limitation of Anderson's vision concerns the goals of data analysis. When the goal is prediction, it may be quite sufficient to algorithmically sift through correlations between variables. Notwithstanding the previously noted hazards of prediction, such an approach can be very effective. But data analysis is not always about prediction. Sometimes we wish to draw conclusions about the causes of phenomena. Such causal inference is best achieved through experimentation, but here a problem arises: big data is mostly observational. Anderson tries to sidestep this by claiming that with enough data "Correlation is enough":
Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.
But on the contrary, investigations of cause and effect (mechanistic explanations) are central to both natural and social science. And in applied fields such as government policy, it is often of fundamental importance to understand the likely effect of interventions. Correlations alone don't answer such questions. Suppose, for example, there is a correlation between A and B. Does A affect B? Does B affect A? Does some third factor C affect both A and B? This last situation is known as confounding (for a good introduction, see this article [pdf]). A classic example concerns a positive correlation between the number of drownings each month and ice cream sales. Of course this is not a causal relationship. The confounding factor here is the season: during the warmer periods of the year when people consume more ice cream, there are far more water activities and hence drownings. When a confounding factor is not taken into account, estimates of the effect of one factor on another may be biased. Worse, this bias does not go away as the quantity of data increases—big data can't help us here. Finally, confounding cannot be handled automatically; expert input is indispensable in any kind of causal analysis. We can't do without theory.
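The logic of confounding is easy to demonstrate with a toy simulation (the numbers are made up; only the structure matters). A third variable drives both series, producing a strong correlation between two things that have no causal connection, and the association largely disappears once the confounder is accounted for.

import numpy as np

rng = np.random.default_rng(1)

months = 240                                                # 20 years of monthly data
warmth = np.sin(np.linspace(0, 2 * np.pi * 20, months))     # seasonal cycle (the confounder)

ice_cream = 100 + 50 * warmth + rng.normal(0, 10, months)   # sales driven by warm weather
drownings = 10 + 4 * warmth + rng.normal(0, 1, months)      # drownings driven by warm weather

# The raw correlation looks impressive...
print(np.corrcoef(ice_cream, drownings)[0, 1])

# ...but it vanishes once the confounder is removed: regress warmth out of
# both series and correlate the residuals
ice_resid = ice_cream - np.polyval(np.polyfit(warmth, ice_cream, 1), warmth)
dro_resid = drownings - np.polyval(np.polyfit(warmth, drownings, 1), warmth)
print(np.corrcoef(ice_resid, dro_resid)[0, 1])

No amount of additional monthly data would rescue the naive analysis; only recognizing the role of the season does.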

Big data affords many new possibilities. But just being big does not eliminate the problems that have plagued the analysis of much smaller data sets. Appropriate use of data still requires careful thought—about both the content area of interest and the best tool for the job.

Thinking about Data

It is also useful to think more broadly about the concept of data. Let's start with an examination of the word data itself, to see what baggage it carries.

We are inconsistent in how we talk about data. The words data and information are often used synonymously (think of "data processing" and "information processing"). Notions of an information hierarchy have been around for a long time. One model goes by the acronym DIKW, representing an ordered progression from Data to Information to Knowledge and eventually Wisdom. Ultimately, these are epistemological questions, and easy answers are illusory.

Nevertheless, if what we mean by data is the kind of thing stored on a memory stick, then data can be meaningless noise, the draft of a novel, a pop song, the genome of a virus, a blurry photo taken by a cellphone, or a business's sales records. Each of these types of information and an endless variety of others can be stored in digital memory: on one level all data are equivalent. Indeed the mathematical field of information theory sets aside the meaning or content of data, and focuses entirely on questions about encoding and communicating information. In the same spirit, Chris Anderson argues that we need "to view data mathematically first and establish a context for it later."

But when we consider the use of data, it makes no sense to think of all data as equivalent. The complete lyrics of all of the songs by the Beatles are not the same as a CT scan. Data are of use to us when they are "about" something. In philosophy this is the concept of intentionality, which is an aspect of consciousness. By themselves, the data on my memory stick have no meaning. A human consciousness must engage with the data for them to be meaningful. When this takes place, a complex web of contextual elements comes into play. Depending on who is reading them, the Beatles' lyrics may call to mind the music, the cultural references, the history of rock and roll, and diverse personal associations. A radiologist who examines a CT scan will recognize various anatomical features and perhaps concerning signs of pathology. Judgements of quality may also arise, whether in mistranscribed lyrics or a poorly performed CT scan.

The word data is the plural of the Latin word datum, meaning "something given". So the data are the "givens" in a problem. But in many cases, it might be helpful to think of data as taken rather than given. For example, when you take a photograph, you have a purpose in mind: you actively choose a scene, include some features and exclude others, and adjust the settings of the camera. The quality of the resulting image depends on how steady your hand is and how knowledgeable you are about the principles of photography. Even when a photograph is literally given to you by someone else, it was still taken by somebody. The camera never lies, but the photograph may be misunderstood or misrepresented.

When a gift is given to you, it is easy to default to the passive role of recipient. The details of how the gift was selected and acquired may be entirely unknown to you. A dealer in fine art would carefully investigate a newly acquired work to determine its provenance and authenticity. Similarly, when you receive data from an outside source, it is important to take an active role. At the very least, you should ask questions. Chris Anderson claims that "With enough data, the numbers speak for themselves." But on their own, the numbers never speak for themselves, any more than a painting stolen during WWII will whisper the secret of its rightful ownership. One common source of received data today is administrative data, that is, data collected as part of an organization's routine operations. Rather than taking such data at face value, it is important to investigate the underlying processes and context.

It is also possible to make use of received data to design a study. For example, to investigate the effect of a certain exposure, cases of a rare outcome may be selected from a data set and matched with controls, that is, individuals who are similar except that they did not experience that outcome. (This is a matched case-control study.) Appropriate care must be taken in how the cases and controls are selected, and in ensuring that any selection effects in the original database do not translate into bias in the analysis. Tools for the valid and efficient analysis of such observational studies have been investigated by epidemiologists and statisticians for over 50 years.
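As a rough illustration of the matching step, here is a minimal sketch in Python (the data frame and its column names are hypothetical; real matching procedures take far more care with matching criteria and with avoiding new selection effects). Each case of the outcome is paired with one control of the same sex and similar age, drawn without replacement.

import pandas as pd

def match_controls(df, outcome="outcome", age="age", sex="sex", caliper=2):
    # Pair each case with one control of the same sex and similar age
    cases = df[df[outcome] == 1]
    pool = df[df[outcome] == 0].copy()
    pairs = []
    for case_id, case in cases.iterrows():
        candidates = pool[(pool[sex] == case[sex]) &
                          (pool[age].sub(case[age]).abs() <= caliper)]
        if candidates.empty:
            continue                              # no suitable control; the case is dropped
        control_id = candidates.index[0]
        pairs.append((case_id, control_id))
        pool = pool.drop(control_id)              # each control is used only once
    return pairs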

When we collect the data ourselves, we have an opportunity to take an active role from the start. In an experiment, we manipulate independent variables and measure the resulting values of dependent variables. Careful experimental design lets us accurately and efficiently obtain results. In many cases, however, true experiments are not possible. Instead, observational studies, where there is no manipulation of independent variables, are used. Numerous designs for observational studies exist, including case-control (as mentioned above), cohort, and cross-sectional. Again, careful design is vital to avoid bias, and to efficiently obtain results.

Conclusion

Excitement over a new development, be it a discovery, a trend, or a way of thinking, can sometimes spill over, like popcorn jumping from a popper. This may give rise to related, but nevertheless distinct ideas. In the heat of the excitement (and not infrequently a good deal of hype), it's important to evaluate the quality of the ideas. Exaggerated claims may not be hard to identify, but they are also frequently pardoned as merely an excess of enthusiasm.

Still, the underlying bad idea may, in subtler form, gradually gain acceptance. The costs may only be appreciated much later. Today it is easy to see how damaging ideas like social Darwinism, the malignant offspring of a very good idea, proved to be. But at the time, it may have seemed like a plausible extrapolation from a brilliant new theory.

The role of data in our societies and our own lives is becoming increasingly central. We live in a world where the quantity of data is exploding and truly gargantuan data sets are being generated and analyzed. But it is important that we not become hypnotized by their immensity. It is all too easy to see data as somehow magical, and to imagine that big data combined with computational brute force will overcome all obstacles.

Let's enjoy the popcorn, but turn down the heat a little.

_____________________________ 
1. ^In this post, I won't worry too much about whether to treat data as singular or plural. It strikes me as a little bit like the question of whether to talk about bacteria or a bacterium. While the distinction is sometimes important, people can get awfully hung up on it, with little benefit. 
2. ^ See this interesting history of data science.

Thursday, September 29, 2011

Too much of nothing

Is more placebo better?

A friend of mine pointed me to the above TED talk, by Ben Goldacre. It's an entertaining presentation with lots of interesting content, although Goldacre's discussion of the placebo effect—"one of the most fascinating things in the whole of medicine" (6:32)—is a little weak. At 6:47, he says:

We know for example that two sugar pills a day are a more effective treatment for getting rid of gastric ulcers than one sugar pill a day. Two sugar pills a day beats one pill a day. And that's an outrageous and ridiculous finding, but it's true.
Notice that the claim is not about pain, but about actually healing the ulcers.

The source of this claim is apparently a 1999 study by de Craen and co-authors titled "Placebo effect in the treatment of duodenal ulcer" [free full text/pdf]. It's a systematic review based on 79 randomized trials comparing various drugs to placebo, taken either four times a day or twice a day depending on the study. (Note that Goldacre refers to twice a day versus once a day; I'm uncertain of the reason for the difference.) From each trial, the authors extracted the results in the placebo group only, obtaining the following results:

The pooled 4 week healing rate of the 51 trials with a four times a day regimen was 44.2% (805 of 1821 patients) compared with 36.2% (545 of 1504 patients) in the 28 trials with a twice a day regimen
This difference of eight percentage points was statistically significant, and remained so even when several different statistical models were used.
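As a rough check on that arithmetic (a sketch using scipy, not the statistical models the authors actually fitted), a simple chi-squared test on the pooled two-by-two table tells the same story:

from scipy.stats import chi2_contingency

#         healed  not healed
table = [[805, 1821 - 805],     # four times a day
         [545, 1504 - 545]]     # twice a day

chi2, p, dof, expected = chi2_contingency(table)
print(f"four-times-a-day healing rate: {805 / 1821:.1%}")
print(f"twice-a-day healing rate:      {545 / 1504:.1%}")
print(f"p-value: {p:.2g}")                       # far below the conventional 0.05 threshold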

However, the authors are up-front about a key limitation of the study: "We realize that the comparison was based on nonrandomized data." Even though the data were obtained from randomized trials, none of the trials individually compared a four-times-a-day placebo regimen to a twice-a-day placebo regimen, so the analysis is a nonrandomized comparison. What if there were important differences between the patients, the study procedures, or the overall medical care provided in the four-times-a-day trials and the two-times-a-day trials? The authors discuss various attempts to adjust for gender, age, smoking, and type of comparator drug, but report that this made little difference. But they acknowledge that:

Although we adjusted for a number of possible confounders, we can not rule out that in this nonrandomized comparison the observed difference was caused by some unrecognized confounding factor or factors.
The strength of a randomized comparison is that important differences between groups are unlikely—even when it comes to unrecognized factors. Although the authors go on to consider other possible biases, their bottom line is:
... we speculate that the difference between regimens was induced by the difference in frequency of placebo administration.
The results of this study are intriguing, but they're hardly definitive.



Sunday, September 25, 2011

The placebo defect

Suppose a clinical trial randomizes 100 patients to receive an experimental drug in the form of pills and an equal number of patients to receive identical pills except that they contain no active ingredient, that is, placebo. The results of the trial are as follows: 60 of the patients who received the experimental drug improved, compared to 30 of the patients who received the placebo. The drug clearly works better than the placebo.[1] But 30% of the patients who received the placebo did get better. There seems to be a placebo effect, right?

Wrong. The results from this trial provide no information about whether or not there is a placebo effect. To determine whether there is a placebo effect, you would need to compare the outcomes of patients who received placebo with the outcomes of patients who received no treatment. And not surprisingly, trials with a no-treatment arm are quite rare.

But there are some. In a landmark paper published in the New England Journal of Medicine in 2001 (free full text), Asbjørn Hróbjartsson and Peter Gøtzsche identified 130 trials in which patients were randomly assigned to either placebo or no treatment. Their conclusions?
We found little evidence in general that placebos had powerful clinical effects. Although placebos had no significant effects on objective or binary outcomes, they had possible small benefits in studies with continuous subjective outcomes and for the treatment of pain.
How could that be? Returning to our hypothetical trial, recall that among the patients who received placebo, 30% improved. The question is, how many would have improved had they not received placebo? If the answer is 10%, then there is a placebo effect of 20 percentage points. But if the answer is 30%, then there is no placebo effect at all. What Hróbjartsson and Gøtzsche found was that in most cases there was no significant placebo effect. The exception—and it is an interesting one—was in studies with continuous subjective outcomes and for the treatment of pain. It is not hard to imagine how a placebo effect could operate in such cases. The expectation of an effect can strongly influence an individual's subjective experience and assessment of pain, satisfaction, and so forth.

A study published this summer provides a nice illustration. Wechsler and colleagues randomized patients with asthma to receive an inhaler containing a bronchodilator medication (albuterol), a placebo inhaler, sham acupuncture, or no intervention. When patients were asked to rate their improvement, the results were as follows:

Self-rated improvement was similar between the active-medication, placebo, and sham-acupuncture groups, and significantly greater than in the no-intervention group.

When an objective measure of respiratory function (maximum forced expiratory volume in one second, FEV1) was used, the results were as follows:


The objective measure of improvement was similar between the placebo, sham-acupuncture, and no-intervention groups, and significantly less than in the active-medication group.

At least in this study, it appears that a placebo effect can operate when the outcome of interest is self-rated improvement, but not when an objective outcome is used. This finding is in accordance with what Hróbjartsson and Gøtzsche originally reported, as well as with an update of their review published in 2004 (free pdf).

Indeed the notion of a placebo effect in the case of objectively-measured outcomes has always seemed a little shaky, and the putative mechanisms rather speculative. So why has the placebo effect commanded so much attention?

Fascination with the placebo effect

Although placebos had probably been used clinically long before[2], it was a 1955 paper by Henry Beecher, published in the Journal of the American Medical Association under the title The Powerful Placebo, that brought widespread attention to the placebo effect. Beecher's analysis of 15 placebo-controlled trials for a variety of conditions showed that 35% of the patients who received placebo improved, and he referred to this as "real therapeutic effects" of placebo. As discussed above, this mistakenly attributes clinical improvement among patients who received placebo to an effect of the placebo itself, without considering other possible causes such as the natural course of the illness. Unfortunately, Beecher's error was not widely understood, and the mystique of the placebo was cemented.

Over the years, the placebo effect has received a tremendous amount of attention in both the academic and popular press. A search of PubMed, a publicly-accessible database of citations of biomedical publications, reveals 527 publications with the words "placebo effect" in the title, going back to 1953. This number is particularly impressive given that not all articles on the topic—for instance, Beecher's paper itself—include the words "placebo effect" in their title. A Google search of "placebo effect" reports "about 5,220,000 results". Why has so much attention been given to such a dubious notion?

One reason may be our fascination with mind-body interactions. Conventional medicine, perhaps influenced by the philosophy of René Descartes, has tended to treat the mind and body as entirely separate. It is clear that this is not so, perhaps most obviously with regard to mental health. Perhaps in reaction, some fuzzy thinking has developed around the idea of mind-body interactions. New-age and alternative-medicine movements have often entailed beliefs about how positive attitudes can heal the body, and conversely how negative ones can lead to illness. While this may contain elements of truth, at its worst it fosters dogmatic thinking and pseudoscience.

Curiously, however, in more scientific circles recent developments in neurobiology have also encouraged interest in the placebo effect. Advances in understanding of how the brain works have led to research efforts to understand the mechanism of action of the placebo effect. This is more than a little odd, given the fairly sparse evidence for such an effect! An article in Wired magazine asserts that "The fact that taking a faux drug can powerfully improve some people's health—the so-called placebo effect—has long been considered an embarrassment to the serious practice of pharmacology." Note that the article takes for granted "the fact" that the placebo effect works.

Indeed, the term "the placebo effect" itself is part of the problem. By labeling it as an effect, we lend it credence. Arguing against the placebo effect seems to put one at an immediate disadvantage. Hasn't everyone heard of the placebo effect? How could anyone deny such an established fact?

______________________________
1. ^Relative to the sample size, the difference is large enough that we can safely rule out chance as an explanation. In statistical terms, Fisher's exact test of the hypothesis that the improvement rates in the two groups are equal gives a p-value < 0.001 (a quick check appears below).
2. ^For some historical background, see The Problematic Placebo, by Stanley Scheindlin [pdf].
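For the curious, the calculation in note 1 is easy to reproduce (a sketch using scipy on the hypothetical counts above):

from scipy.stats import fisher_exact

#         improved  not improved
table = [[60, 40],     # experimental drug
         [30, 70]]     # placebo

odds_ratio, p = fisher_exact(table)
print(p)                # well below 0.001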



Saturday, August 27, 2011

Rethinking property

Suppose Alison has been playing a game of solitaire, but has left the room. A little while later, Trevor, aged 4, notices the cards lying on the table and reaches for them. Another member of the family calls out, "Trevor, don't touch those, they're Alison's!" A straightforward case of teaching a young person about property rights, isn't it?

Perhaps not. The deck of cards may belong to the family rather than just Alison. And in any case, it's really not the ownership of the cards themselves that's the issue, it's their arrangement on the table. If that arrangement is significantly disturbed, the game will be ruined even if the cards themselves are not at all damaged. So why do we construe this as an issue of property rights?

I believe the reason is that we find it much easier to express property claims than to describe the real issue, which is respect for other people. Perhaps we might have said, "Trevor, don't touch those, Alison is playing a game of solitaire!" The trouble is, Trevor may not know what solitaire is, or what that has to do with touching the cards. In fact, it may not even be clear that Alison is still playing. Perhaps she got tired of playing, and just abandoned the game. In that case, the arrangement of the cards wouldn't matter to her—but more about that later. Suffice it to say that the complexities in this and many other situations can easily get out of hand, and we fall back on the simple formulation: "Don't touch that, it's not yours."

Unfortunately, we tend to get fooled by our own simplification. In the Western tradition of political philosophy, property rights have been a central focus, as exemplified by the writings of thinkers like Hobbes and Locke. Today many see the notion of property rights as a sacrosanct and even supreme principle. But this obsession with private property has its costs. If access to resources depends on ownership, then it is natural to equate property with security. This psychological dynamic plays out in an individual obsession with accumulation. On a broader scale, our economic systems treat continual growth as the only option. Considerations of sustainability are given little attention.

The limitations of the private property model are readily apparent when it comes to land. Suppose you own a piece of land with a beautiful tree on it. I own adjacent land which I dig up, and in so doing I damage the roots of the tree so that it dies. Similarly, if I pollute my land with toxic chemicals, they may seep into nearby land and water. If I dump radioactive waste on my land, even if nearby areas are unaffected, the land may be rendered permanently unusable. Long after my life has passed, my footprint on the planet may continue. Such considerations lead to ideas of stewardship, which have a long history. The notion that property comes with responsibility seems to be an attempt to mitigate the more anti-social tendencies that ownership can promote.

Of course in the short term, stewardship may be motivated by self-interest. A classic scenario that examines these issues is the tragedy of the commons. Suppose there is pasture land where people take their cattle to graze. It has been argued that if the land is held in common, it will be overused and the land will be exhausted—to everyone's detriment. If the land is privately held, the owner has an interest in wise use of the land. This scenario has connections to the history of English agriculture and the enclosure of public lands, but the interpretation continues to be debated. Furthermore, the supposed wise use of a resource by a single owner has plenty of counterexamples. Particularly in the case of non-renewable resources, the use is often anything but wise.

Let us now return to the case of Alison and Trevor. Alison left the room, but did she intend to return to her game? If she didn't, then by leaving the cards lying on the table she isn't exercising good stewardship over the cards themselves. If anyone else wants to play another game or use the table for a different purpose, they'll first have to tidy up the cards. The ownership of the cards (or the table) is not the main point. Instead, the key issue is respect for other people.

More broadly speaking, conflicts that are framed as property issues often go far beyond ownership and involve more fundamental issues of respect, tolerance, and basic human needs. Our society's traditions, conventions, and language can easily corral us into thinking that property rights are supreme. As we go through life, and as we raise our children, we should keep this in mind.

Friday, August 12, 2011

The landscape of probability


We all know that the probability of some condition can lie anywhere between a sure thing (which we represent as a probability of 1) and a flat-out impossibility (0). But it turns out there are several other points of interest along the way. Let's take a tour.

When we say that something is a sure thing, we mean it is bound to be so. For example, the probability that a bachelor is unmarried is 1. This is a logical sure thing because, by the definition of 'bachelor', it couldn't be otherwise. (It could also be called an apodictic sure thing, though that's pretty much guaranteed to sound pretentious—but I'm getting ahead of myself.) Now consider the statement that an object with a positive electrical charge is attracted to an object with a negative electrical charge. This is a physical sure thing: though there may be a universe where this isn't true, it is true in ours.

Now let's move from sure things to things that are pretty much guaranteed. For example, it's pretty much guaranteed that the sun will rise tomorrow. Not a sure thing though: who knows what strange astronomical events might come to pass overnight? More about this in a moment.

Nevertheless, most of the time, when we think about things that we consider likely, we're not thinking of things that are so overwhelmingly likely as the sun rising tomorrow. Likely things just have better than even odds.

Even odds (a probability of ½) is of course the sweet spot where the probability of some condition is exactly equal to the probability of its opposite. This can be interpreted as an expression of perfect uncertainty about the condition, and this was how Pierre-Simon Laplace used it in working out a solution to the Sunrise problem.
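Laplace's answer, known as the rule of succession, starts from exactly this point of perfect uncertainty: after observing s successes in n trials, the probability of success on the next trial is (s + 1) / (n + 2). A minimal sketch:

def rule_of_succession(successes, trials):
    # Laplace's rule, assuming a uniform prior over the unknown probability
    return (successes + 1) / (trials + 2)

print(rule_of_succession(0, 0))            # 0.5 -- no data at all, perfect uncertainty

days = 5000 * 365                          # roughly the span Laplace assumed in his example
print(rule_of_succession(days, days))      # ~0.9999995 -- pretty much guaranteed, not 1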

But as we continue our stroll, we find ourselves in the realm of unlikely things. Note that they don't have to be terribly unlikely. Something with a probability of 0.49 is (just slightly) unlikely, in that it has worse than even odds.

As the probabilities get thinner and thinner, we soon find ourselves encountering things that ain't gonna happen. These are the opposite of sure things. I'd love to win $50,000,000 in the lottery, but it ain't gonna happen. Well, of course it is gonna happen, but not to me (in all likelihood). Not that it's a physical impossibility of course, much less a logical impossibility.

I'd just have to be awfully lucky.



Wednesday, June 01, 2011

The fragmentary nature of television

For the past couple of months, I've been reading and thinking about phenomenology, a philosophical movement concerned with the nature of conscious experience. My guide has been the wonderful Introduction to Phenomenology by Robert Sokolowski. I haven't found it easy to understand phenomenology, nor have I found it easy to explain to other people, but I am finding it to be a very rich source of insight. As I was watching a television show the other night (Vampire Diaries, if you must know), I suddenly made a connection with something I had read in Sokolowski's book:
One of the dangers we face is that with the technological expansion of images and words, everything seems to fall apart into mere appearances. ... it seems that we now are flooded by fragments without any wholes, by manifolds bereft of identities, and by multiple absences without any enduring real presence. We have bricolage and nothing else, and we think we can even invent ourselves at random by assembling convenient and pleasing but transient identities out of the bits and pieces we find around us.
It struck me that television is a perfect example of this point. Now it's easy to criticize TV, but my main goal here is to understand a particular aspect of television, its fragmentary nature. If we can better understand the phenomenon of television in our culture, we may be able to approach it more wisely.

Perhaps the most fragmented experience of television is channel surfing—a series of disconnected images, sounds, and representations—a toothpaste commercial / a football game / a lion in a savannah / a riot / a soap opera / ... But even when one stays on one channel, watching television is a fragmented experience. Camera angles switch constantly, and our attention is pulled away by commercials and, sometimes, by streaming lines of news updates and stock prices at the bottom of the screen.

But the fragmentation runs much deeper. Consider the constructed narrative of reality TV, a Frankenstein's monster of dismembered and reassembled footage. More fragmented still is one of the parents of reality TV: the news. Like reality TV, a television news segment consists of a patchwork of content assembled to tell a story. A number of these segments are themselves assembled into a news broadcast. To allow time for advertisements, and to keep things lively, each segment is typically only a few minutes long. What gets left out is context: current events are presented with a minimum of historical, political, and cultural background.

As another example, consider the situation comedy. Granted, this is fiction designed to entertain, and the outlandish characters and situations sitcoms portray are not meant to be taken seriously. Such elements are easily set aside. More disconcerting are some of the elements that at first sight seem mundane. While sitcoms often represent people in apparently ordinary situations, there are jarring absences. For example, in some sitcoms nobody seems to have to work. In others, ordinary standards of politeness don't seem to apply. Bizarrely, a laugh track is added in place of an absent audience. Episodes are totally disconnected from each other, so that events have no long-term consequences. One-dimensional characters substitute for authentic identities. It is as if the characters are trapped in an endless loop, condemned to repeatedly play out their fates under a variety of starting conditions, never learning, never changing.

The examples go on. Consider sports on television. Or dramas. Or talk shows. In each case, we can see "fragments without any wholes ... manifolds bereft of identities ... multiple absences without any enduring real presence."

Sokolowski argues that because of this "we think we can even invent ourselves at random by assembling convenient and pleasing but transient identities out of the bits and pieces we find around us." Is he right? Does the fragmentary nature of television lead to a fragmented sense of ourselves? And if so, what is the impact?

