In a series of ground-breaking articles over the past several years, Nicholas Christakis (a sociologist here at Harvard) and James Fowler (a political scientist at UC San Diego) have shown that health behaviors seem to flow through social networks. Using data from the Framingham Heart Study, they've shown apparent contagion effects for obesity, smoking, depression, and even isolation. Recently, however, Ethan Cohen-Cole and Jason M. Fletcher (two economists) have used data from Add Health to demonstrate apparently "implausible social network effects" for acne, height, and headaches. The economists also show that after controlling for "environmental confounders" the effects disappear, suggesting that apparent contagion through networks really just captures similar structural conditions (e.g., the fact that I become obese after you do is really because we both live in a neighborhood where yet another fast food restaurant has opened).
Although Cohen-Cole and Fletcher's criticisms might seem fairly damning, there are a number of serious problems with concluding from their results that social network effects do not exist:
(1) Different data: Cohen-Cole and Fletcher use a different dataset than Christakis and Fowler. The two studies differ in setting (high schools versus a whole community), population (teenagers versus the general adult population), time span (less than a decade versus roughly three), and so forth. It's entirely possible that contagion effects do not exist for teenagers in high schools but do exist for the residents of Framingham, Massachusetts. This would make sense if, for example, social imprinting effects were weaker among teenagers, whose high-school friendships may be ephemeral, than among adults with more durable relationships.
(2) Presence vs. absence: The presence of putatively implausible contagion effects for variables such as height and acne does not demonstrate the absence of plausible contagion effects for variables such as obesity and happiness. Another way to think about it: studies showing that smoking causes cancer remain just as valid even if, say, frequent use of a home fireplace does not cause cancer. Although the mechanisms are broadly similar (inhaling smoke), there may be important differences (the substance being burned and the dose inhaled).
(3) Definition of "implausible": Cohen-Cole and Fletcher assert that network effects for acne, headaches, and height are "implausible." Is this really the case, though? All the Add Health data on acne, headaches, and height are self-reported; thus, if I have a friend who complains of headaches, I may well find it easier to complain of headaches myself, whether because of changing norms or a desire to be more like my friend. Or perhaps my friend has found an effective way to prevent headaches (such as a daily dose of aspirin), and I've adopted my friend's behavior. Or headaches may be associated with stress, and what is actually spreading through the network is stress, with headaches along for the ride. The same points apply to acne. What about height, though? Self-reported height could also spread through networks: if my friends are tall (or short), I'm likely to nudge my reported height up (or down) accordingly. We've all known short guys who have added a few inches and tall women who've subtracted a few. In addition, even if height in Add Health were accurately measured, there is evidence that height during adolescence is influenced by diet, exercise, and smoking habits. To the extent that any of these flow through networks, then so will height.
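To see how self-reporting alone could generate an apparent network effect, here is a minimal, purely hypothetical simulation (invented numbers, not the Add Health data): actual height is drawn independently of the friend's height, but each respondent nudges their reported height toward their friend's report. A naive regression of the ego's report on the friend's report then recovers a spurious "contagion" coefficient, even though actual heights are unrelated.

```python
# Purely hypothetical simulation: actual heights are independent across
# friends, but each respondent shades his or her *reported* height toward
# the friend's report, producing an apparent network effect in self-reports.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "actual_self": rng.normal(170, 8, n),    # cm; drawn independently of the friend
    "actual_friend": rng.normal(170, 8, n),
})
# The friend reports roughly truthfully; the ego shades 30% of the way
# toward the friend's reported height.
df["friend_report"] = df["actual_friend"] + rng.normal(0, 1, n)
df["own_report"] = (0.7 * df["actual_self"]
                    + 0.3 * df["friend_report"]
                    + rng.normal(0, 1, n))

for outcome in ["actual_self", "own_report"]:
    fit = smf.ols(f"{outcome} ~ friend_report", data=df).fit()
    print(f"{outcome:12s} on friend_report: b = {fit.params['friend_report']:.3f}")
```

In this toy setup the coefficient for actual height is essentially zero while the coefficient for reported height is around 0.3; the same logic applies to any self-reported outcome, headaches and acne included.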
(4) Flaws with fixed effects: Cohen-Cole and Fletcher used fixed effects to "control" for time-invariant unobserved differences between schools. Although this is superficially a good way to adjust for all stable differences between schools, it is well known that the fixed effects estimator is inefficient and inflates the standard errors (intuitively, because only within-school variation is used to estimate the coefficients). This is especially true when the number of waves is small (as with Add Health, which consists of only three waves of data, albeit with many individuals per school). More importantly, with a great deal of inefficiency it is not only the standard errors that suffer: the point estimates themselves can become fragile and unstable, resulting in erroneous inferences (much as a biased coefficient would). For this reason I'd recommend that Cohen-Cole and Fletcher re-do their analyses using fixed effects vector decomposition, which can improve the efficiency of the estimates while retaining some of the advantages of the traditional fixed effects estimator.
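To make the bias-versus-efficiency trade-off concrete, here is a minimal sketch on simulated data (this is not Cohen-Cole and Fletcher's specification; every variable name and parameter below is invented for illustration). Friends' outcomes are correlated mostly because of a shared, unobserved school environment, plus a modest genuine contagion effect of 0.2. Pooled OLS overstates the contagion coefficient because it absorbs the school-level confounding; adding school fixed effects removes that bias but, because it relies only on within-school variation, produces a much larger standard error.

```python
# Hypothetical sketch: pooled OLS vs. school fixed effects on simulated
# friend/ego data. Nothing here reproduces Cohen-Cole and Fletcher's models;
# the numbers are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, n_per_school = 30, 50
school = np.repeat(np.arange(n_schools), n_per_school)

# Unobserved, stable school-level environment shared by ego and friend.
school_env = rng.normal(0.0, 1.0, n_schools)[school]

# Friend's outcome: mostly shared environment, a little idiosyncratic noise.
friend = school_env + rng.normal(0.0, 0.3, len(school))

# Ego's outcome: the same environment plus a modest true contagion effect (0.2).
ego = 0.2 * friend + school_env + rng.normal(0.0, 1.0, len(school))

df = pd.DataFrame({"ego": ego, "friend": friend, "school": school})

pooled = smf.ols("ego ~ friend", data=df).fit()
fe = smf.ols("ego ~ friend + C(school)", data=df).fit()

for name, fit in [("pooled OLS", pooled), ("school fixed effects", fe)]:
    b, se = fit.params["friend"], fit.bse["friend"]
    print(f"{name:22s} coefficient = {b:.3f}, std. error = {se:.3f}")
```

The point is not that fixed effects are wrong, only that the efficiency they sacrifice is exactly what can turn a real but modest effect into an apparent null.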
(5) Neglected information: Another problem is that Cohen-Cole and Fletcher neglect important ancillary evidence that supports the interpretation of the Framingham data given by Christakis and Fowler; to wit, there is substantial evidence from social psychology that people automatically mimic one another in myriad ways, adopting each other's emotions, behaviors, tone of voice, and body posture. Social mimicry is a highly plausible mechanism that buttresses the network-diffusion interpretation of the data.
(6) Relevance of networks: Even if network effects are not causal, such descriptive information is still highly relevant. Most importantly, knowing the social patterning of behaviors and attitudes is extraordinarily useful for maximizing the impact of social interventions. For example, in the case of obesity, clinicians could provide information on the benefits of dietary change and then offer incentives for overweight patients to recruit others in their social networks. In this way, the intervention would efficiently reach the population most in need of it.
Understanding social causes is complex and necessarily requires an accumulation of different kinds of knowledge. This is how epidemiologists established that smoking causes cancer: a series of observational studies combined with information on the substances in cigarette smoke, not to mention a dash of qualitative evidence from clinicians. There was no single "smoking gun" (pun intended). In the same way, social network effects will not be ruled out by a single study applying a fixed effects estimator to a handful of variables gathered from a few thousand teenagers living in late-20th-century America. Rather, understanding the degree to which social network effects are causal will require results from various sources, including large longitudinal datasets (e.g., Add Health and Framingham) as well as additional studies of contagion mechanisms (such as the findings on social mimicry from social psychology).