Duncan Watts, unlike many scientists of more recent generations, has a far from linear history. Born in Canada in 1971, after a spell in Scotland he grew up in Australia, on a farm in Queensland, where he attended high school (Toowoomba Grammar School). He says that as a child he wanted to be an astronaut and, later, an astronomer. But after starting university he found astronomy deadly boring, and instead took a degree in physics at the Australian Defence Force Academy, where he also became an officer. After a period of service in the Royal Australian Navy, he moved to the United States, to Cornell University, for a PhD in the Department of Theoretical and Applied Mechanics.
This perhaps explains why he doesn't look at all like a nerd, but rather like someone you wouldn't want to pick a fight with.
It is no exaggeration to say that the work of Watts and Strogatz is (together with that of others, above all Albert-László Barabási and Réka Albert) at the root of the revival of interest in network theory and, with it, in the pioneering work of sociologists such as Milgram and Granovetter. I write this not to show off, but because this is the bridge that explains Watts's move from physics to sociology (Watts is among those who, without any concession to French-style postmodernism, think that the soft sciences are actually tougher than the hard sciences) and also (together, probably, with a stratospheric annual salary) his move from academia to the position of director of the Human Social Dynamics group at Yahoo.
This new book summarizes much of the research carried out with his new group (which gives him access to an enormous base of data), but also work done in academia. The connective tissue is the thesis that common sense is not a good guide for decisions with few precedents in complex situations, though I have to say it is a rather thin thread. The book is still well worth reading, for its documentation of a great many experiments and for its many interesting points of reflection: it is an important book.
Before moving on to the usual round of quotations from the book, I think it is worth offering a few live talks by Duncan Watts.
The first is very easy to follow, because the slides are shown as well. It starts shortly after the 10-minute mark.
In this second one (which I think is part of a book tour) Watts tells a bit of his own story and background.
The last, a polemic against Malcolm Gladwell's The Tipping Point, is interesting because, besides revealing Watts the polemicist on a theme also developed in the book, it confirms my thesis that it's better not to pick a fight with him!
Quotations: these are my personal notes, which you are under no obligation to read, but if you're curious you will certainly find something useful and stimulating in them. As usual, the references are to Kindle positions:
The paradox of common sense, therefore, is that even as it helps us make sense of the world, it can actively undermine our ability to understand it. 
Common sense, in other words, is not so much a worldview as a grab bag of logically inconsistent, often contradictory beliefs, each of which seems right at the time but carries no guarantee of being right any other time. 
The large scale and disruptive nature of economic and urban development plans make them especially prone to failure [...] 
There are also so many more corporations than governments that it’s always possible to find success stories, thereby perpetuating the view that the private sector is better at planning than the government sector. But as a number of management scholars have shown in recent years, corporate plans—whether strategic bets, mergers and acquisitions, or marketing campaigns—also fail frequently, and for much the same reasons that government plans do. In all these cases, that is, a small number of people sitting in conference rooms are using their own commonsense intuition to predict, manage, or manipulate the behavior of thousands or millions of distant and diverse people whose motivations and circumstances are very different from their own. 
[...] because we seek to explain these events only after the fact, our explanations place far too much emphasis on what actually happened relative to what might have happened but didn’t. Moreover, because we only try to explain events that strike us as sufficiently interesting, our explanations account only for a tiny fraction even of the things that do happen. The result is that what appear to us to be causal explanations are in fact just stories— descriptions of what happened that tell us little, if anything, about the mechanisms at work. Nevertheless, because these stories have the form of causal explanations, we treat them as if they have predictive power. In this way, we deceive ourselves into believing that we can make predictions that are impossible, even in principle. 
[...] common sense is wonderful at making sense of the world, but not necessarily at understanding it. 
[A quotation from James Duesenberry] “economics is all about choice, while sociology is about why people have no choices.” 
Finally, people digest new information in ways that tend to reinforce what they already think. In part, we do this by noticing information that confirms our existing beliefs more readily than information that does not. And in part, we do it by subjecting disconfirming information to greater scrutiny and skepticism than confirming information. Together, these two closely related tendencies—known as confirmation bias and motivated reasoning respectively—greatly impede our ability to resolve disputes [...] 
Rather, just as Paul Lazarsfeld’s imagined reader of the American Soldier found every result and its opposite equally obvious, once we know the outcome we can almost always identify previously overlooked aspects of the situation that then seem relevant. 
[...] in environments where individual contributions are hard to separate from those of the team, financial rewards can encourage workers to ride on the coattails of the efforts of others, or to avoid taking risks, thereby hampering innovation. 
So how do we get from the micro choices of individuals to the macro phenomena of the social world? Where, in other words, do families, firms, markets, cultures, and societies come from, and why do they exhibit the particular features that they exhibit? This is the micro-macro problem. 
Historically, science has done its best to dodge this question, opting instead for a division of labor across the scales. 
When it comes to social phenomena, however, we do speak of “social actors” like families, firms, markets, political parties, demographic segments, and nation-states as if they act in more or less the same way as the individuals that comprise them. Families, that is, “decide” where to go on vacation, firms “choose” between business strategies, and political parties “pursue” legislative agendas. Likewise, advertisers speak of appealing to their “target demographic,” Wall Street traders dissect the sentiment of “the market,” politicians speak about “the will of the people,” and historians describe a revolution as a “fever gripping society.” 
Introducing social influence into human decision making, in other words, increased not just inequality but unpredictability as well. Nor could this unpredictability be eliminated by accumulating more information about the songs any more than studying the surfaces of a pair of dice could help you predict the outcome of a roll. Rather, unpredictability was inherent to the dynamics of the market itself. 
Contagion—the idea that information, and potentially influence, can spread along network ties like an infectious disease—is one of the most intriguing ideas in network science. 
By effectively concentrating all the agency into the hands of a few individuals, “special people” arguments like the law of the few reduce the problem of understanding how network structure affects outcomes to the much simpler problem of understanding what it is that motivates the special people. 
[...] how much these explanations really explain, versus simply describe. 
For problems of economics, politics, and culture—problems that involve many people interacting over time—the combination of the frame problem and the micro-macro problem means that every situation is in some important respect different from the situations we have seen before. 
[...] rather than producing doubt, the absence of “counterfactual” versions of history tends to have the opposite effect—namely that we tend to perceive what actually happened as having been inevitable. This tendency, which psychologists call creeping determinism, is related to the better-known phenomenon of hindsight bias, the after-the-fact tendency to think that we “knew it all along.” [...] Creeping determinism, however, is subtly different from hindsight bias and even more deceptive. [1681-1687]
The only way to identify attributes that differentiate successful from unsuccessful entities is to consider both kinds, and to look for systematic differences. Yet because what we care about is success, it seems pointless—or simply uninteresting—to worry about the absence of success. Thus we infer that certain attributes are related to success when in fact they may be equally related to failure.
This problem of “sampling bias” is especially acute when the things we pay attention to—the interesting events—happen only rarely. 
[...] narrative sentences, meaning sentences that purport to be describing something that happened at a particular point in time but do so in a way that invokes knowledge of a later point. 
[...] stories that are constrained by certain historical facts and other observable evidence. 
[...] we are bad at distinguishing predictions that we can make reliably from those that we can’t. 
Simple systems are those for which a model can capture all or most of the variation in what we observe. 
Nobody really agrees on what makes a complex system “complex” but it’s generally accepted that complexity arises out of many interdependent components interacting in nonlinear ways. 
Until it is actually realized, all we can say about the future stock price is that it has a certain probability of being within a certain range—not because it actually lies somewhere in this range and we’re just not sure where it is, but in the stronger sense that it only exists at all as a range of probabilities. Put another way, there is a difference between being uncertain about the future and the future itself being uncertain. The former is really just a lack of information—something we don’t know—whereas the latter implies that the information is, in principle, unknowable. The former is the orderly universe of Laplace’s demon, where if we just try hard enough, if we’re just smart enough, we can predict the future. The latter is an essentially random world, where the best we can ever hope for is to express our predictions of various outcomes as probabilities. 
[...] what is relevant cannot be known until later. 
[...] just as commonsense explanations of the past confuse stories with theories—the topic of the last chapter—so too does commonsense intuition about the future tend to conflate predictions with prophecies. 
Predictions about complex systems, in other words, are highly subject to the law of diminishing returns: The first pieces of information help a lot, but very quickly you exhaust whatever potential for improvement exists. 
The one method you don’t want to use when making predictions is to rely on a single person’s opinion—especially not your own. The reason is that although humans are generally good at perceiving which factors are potentially relevant to a particular problem, they are generally bad at estimating how important one factor is relative to another. 
The real problem with relying on experts, however, is not that they are appreciably worse than nonexperts, but rather that because they are experts we tend to consult only one at a time. 
According to Raynor, the problem with most companies is that their senior management, meaning the board of directors and the top executives, spends too much time managing and optimizing their existing strategies—what he calls operational management—and not enough thinking through strategic uncertainty. 
The Mullet Strategy is also an example of “crowdsourcing,” a term coined in a 2006 Wired article by Jeff Howe to describe the outsourcing of small jobs to potentially very large numbers of individual workers. 
Facebook, meanwhile, publishes a “gross national happiness” index based on users’ status updates [...] 
[...] the shift from “predict and control” to “measure and react” is not just technological—although technology is needed—but psychological. 
[...] sometimes even a bad plan is better than no plan at all. [2927: or is it?]
According to Scott, the central flaw in this “high modernist” philosophy was that it underemphasized the importance of local, context-dependent knowledge in favor of rigid mental models of cause and effect. As Scott put it, applying generic rules to a complex world was “an invitation to practical failure, social disillusionment, or most likely both.” The solution, Scott argued, is that plans should be designed to exploit “a wide array of practical skills and acquired intelligence in responding to a constantly changing natural and human environment.” This kind of knowledge, moreover, is hard to reduce to generally applicable principles precisely because “the environments in which it is exercised are so complex and non-repeatable that formal procedures of rational decision making are impossible to apply.” In other words, the knowledge on which plans should be based is necessarily local to the concrete situation in which it is to be applied. 
The bright-spot approach is also similar to what political scientist Charles Sabel calls bootstrapping [...] three practices—identifying failure points, tracing problems to root causes, and searching for solutions outside the confines of existing routines— [...] [2998-3006]
[...] the formal rules that officially govern behavior in organizations and even societies are rarely enforced in practice, and in fact are probably impossible to enforce both consistently and comprehensively. [...] Yet the rules nevertheless serve a larger, social purpose of providing a rough global constraint on acceptable behavior. [3114-3121]
Oliver Wendell Holmes used to defend freedom of speech—not because he was fighting for the rights of individuals per se, but because he believed that allowing everyone to voice their opinion served the larger interest of creating a vibrant, innovative, and self-regulating society. 
Unlike regular markets, which are characterized by large numbers of buyers and sellers, publicly visible prices, and a high degree of substitutability, the labor market for CEOs is characterized by a small number of participants, many of whom are already socially or professionally connected, and operates almost entirely out of public scrutiny. The result is something like a self-fulfilling prophecy. 
[...] arguments about the so-called redistribution of wealth are mistaken in assuming that the existing distribution is somehow the natural state of things, from which any deviation is unnatural, and hence morally undesirable. In reality, every distribution of wealth reflects a particular set of choices that a society has made: to value some skills over others; to tax or prohibit some activities while subsidizing or encouraging other activities; and to enforce some rules while allowing other rules to sit on the books, or to be violated in spirit. All these choices can have considerable ramifications for who gets rich and who doesn’t—as recent revelations about explicit and implicit government subsidies to student lenders and multinational oil companies exemplify. But there is nothing “natural” about any of these choices, which are every bit as much the product of historical accident, political expediency, and corporate lobbying as they are of economic rationality or social desirability. 
Nature and Nature’s laws lay hid in night: God said, Let Newton be! and all was light. [3518: a quotation from Alexander Pope]
It was Spencer, in fact, not Darwin, who coined the phrase “survival of the fittest.” 
[...] homophily principle—the idea that “birds of a feather flock together.” 
[...] homogeneous social circles can also lead to a more balkanized society in which differences of opinion lead to political conflict rather than exchanges of ideas among equals. 
[...] scientific procedures—of theory, observation, and experiment—that incrementally and iteratively chip away at the mysteries of the world. 
One way to understand the entire project of what Rawls called political liberalism (Rawls 1993), along with the closely related idea of deliberative democracy (Bohman 1998; Bohman and Rehg 1997), is, in fact, as an attempt to prescribe a political system that can offer procedural justice to all its members without presupposing that any particular point of view—whether religious, moral, or otherwise—is correct. The whole principle of deliberation, in other words, presupposes that common sense is not to be trusted, thereby shifting the objective from determining what is “right” to designing political institutions that don’t privilege any one view of what is right over any other. 
[Max Weber] effectively defined rational behavior as behavior that is understandable [...] 
The definition of “methodological individualism” is typically traced to the early twentieth century in the writings of the Austrian economist Joseph Schumpeter (1909, p. 231); however, the idea goes back much earlier, at least to the writings of Hobbes, and was popular among the thinkers of the Enlightenment, for whom an individualistic view of action fit perfectly with their emerging theories of rational action. See Lukes (1968) and Hodgson (2007) for a discussion of the intellectual origins of methodological individualism, as well as a scathing critique of its logical foundations. 
[Metis] meaning the collection of formal decision procedures, informal rules of thumb, and trained instinct that characterized the performance of experienced professionals. 
[...] disorganized and organized complexity (Weaver 1958), where the former correspond to systems of very large numbers of independent entities, like molecules in a gas. Weaver’s point was that disorganized complexity can be handled with the same kinds of tools that apply to simple systems, albeit in a statistical rather than deterministic way. By organized complexity, however, he means systems that are neither simple nor subject to the helpful averaging properties of disorganized systems. 
Raynor actually distinguishes three kinds of management: functional management, which is about optimizing daily tasks; operational management, which is focused on executing existing strategies; and strategic management, which is focused on the management of strategic uncertainty. (Raynor 2007, pp. 107–108) 
See Heath and Heath (2010) for their definition of bright spots. See Marsh et al. (2004) for more details of the positive deviance approach. Examples of positive deviance can be found at http://www.positivedeviance.org/. 
Possibly this disconnect between espoused and revealed preferences implies only that people do not understand the consequences of their actions; but it may also imply that abstract questions about “privacy” are less meaningful than concrete tradeoffs in specific situations. A second, more troubling problem is that regardless of how people “really” feel about revealing particular pieces of information about themselves, they are almost certainly unable to appreciate the ability of third parties to construct information profiles about them, and thereby infer other information that they would not feel comfortable revealing.