Introduction to complexity

This text is intended as a brief introduction to complexity science, aimed at piquing the interest of a newcomer to the field.

 

Why Complexity Science?

In Thailand, there is a particularly interesting species of firefly. As you know, fireflies normally tend to flutter about, flashing in a way that – while enchanting and beautiful – seems pretty random. But the special thing about these Thai fireflies is that they are synchronized: they all flash at the same time. There are huge colonies with thousands and thousands of fireflies, all going on and off at exactly the same time. In 1965, John Buck became the first person to study these fireflies[1]. He and his wife captured a bunch of them, brought them back to their hotel room, and released them. They first flickered randomly. Then duos and trios formed. They flickered closer and closer together. Until finally the whole room was synchronized: all fireflies flashing at exactly the same time. Order suddenly appearing from chaos.

How do they do this? No one knows – there are literally dozens of theories.

Fireflies are not the only insects to manage complex organization. While less enchanting to watch, the common ant is really quite a wonder of organizational ability. A common idea about ant organization, still held by some laymen, is that the queen ant decides everything: passing out orders, controlling the anthill like a great dictator. But on closer inspection, this theory quickly dissolves: the ants don’t really seem to mind the queen much, and her job seems mainly to be functioning as the colony’s glorified uterus.

So who’s giving out the orders then? Well, if you look closely at ants, they don’t really seem to be following any orders. In fact, they seem to be dumb as a box of rocks. For instance, hundreds or even thousands of ants can literally get stuck following each other around a tree, walking in a circle – around and around – until they all starve to death. Another illustration of their mental inability is that two ants can often be found stuck for hours pulling on opposite sides of a tiny branch; they both find it worth pulling, and neither seems to mind that it’s not actually going anywhere.

Yet, despite their mental limitations, ants have been around for some 100 million years and are arguably among the most successful animals on the planet. It might not surprise you that there are more ants than humans on the planet, but it should surprise you that there are more of them even measured by mass: for each human, there are enough ants to match the weight, and plenty to spare. This raises the question: how can such stupid animals be so enormously successful?

The secret is that ants might seem stupid if you look closely – but only if you look closely. If you zoom in, they’re dumb as bean dip. But if you zoom out, they’re suddenly absolutely brilliant.

In fact, on the colony level, ants can perform a great number of feats that seemingly more intellectually promising animals cannot. Ants build bridges to cross chasms. They build anti-flooding systems when anticipating storms. Ants farm, keep gardens and organize war. They even maintain advanced climate control, and run massive public works projects on a scale that makes the New Deal pale in comparison. How can so many such stupid animals be, collectively, so intelligent?

Both these examples really boil down to one question: how can order come out of disorder? This is arguably the big mystery of science.

Science has had hundreds of years of success since Galileo and Newton by using a reductionist approach: taking things apart and looking at the pieces. And this approach has proved greatly successful – the pieces and individuals are important. Through it, we have learned more and more about our universe via the sciences we’ve uncovered: physics, chemistry, evolution, and so on. We are at a point where we can dissect an organism down through its organs, cells and molecules, to the very subatomic particles that constitute its most basic building blocks. But as we become more and more educated about these things, there increasingly seems to be something missing.

Why is there even so much structure and order to be found when studying the universe? Why can each of these levels of structure be studied, often using amazingly simple formulas and rules?

Any order is fighting a perpetual battle against chaos and randomness: a constant bombardment of cosmic rays, the threat of bacteria and viruses, and asteroid strikes. The moment any order is achieved in this universe, there are forces trying to tear it down. And yet that is exactly what is happening – order appears from chaos.

This gaping hole in science has led many people to seek religion. It concerns fundamental questions that we have somehow left unanswered – even chosen not to look at. While science has built solid foundations for the origin of species and the first nanoseconds of the universe, this fundamental question remains unanswered – seen by some as a last outpost of enchantment; the last beacon of the unknown; the last hope of something else, something more.

This is one way to see the grand ambition of complexity science: the ambition not only to take apart the structures of nature and study their pieces, but also to go in the other direction – to understand how the pieces interact to create a higher level of order. To go upward: from interacting cells to organs, to organisms, to societies. To understand how order can appear from apparent chaos; how ants can be so successful; how fireflies can coordinate their flashing; how social structures emerge in human societies.

 

What is complexity theory?

Taking a complexity theoretic approach to a system is not always necessary. Certain systems are – for whatever reasons – organized in such a way that reductionist methods will work well.

Take for instance a car. We’ve designed cars to be structurally organized in such a way that reduction works amazingly well. While cars are exceptionally complicated, we can – given enough time – understand how a car works by simply disassembling it into its components and seeing what everything does. Each component plays its part and interacts with the rest of the system in a limited and ordered way. This is really the reason we can build such incredibly effective systems: it’s quite amazing that we can build a spaceship that can land with high precision on Mars, some 60 million kilometers away. These types of systems are indeed very complicated, but they can nonetheless be understood and explained by dissecting them into their components. In other words, we can use a reductionist approach, deconstructing the system into its constituent parts to fully understand it. These systems are complicated but not complex, and can therefore be placed at the very left in Figure 2.

In the other corner of the figure, we have ants. Ants seem very simple in comparison to a car. Each ant is more or less the same, and they are – as we have already established – pretty simple-minded. Yet, if we try to understand an ant colony by disassembling it into pieces – i.e. studying the behavior of a single ant in isolation – we will have a very hard time figuring out how the ant colony functions. The behavior of the ant makes very little sense on its own.

So how does an ant colony function without being functionally structured? Let’s see if we can understand this better by looking at some aspect of how an ant colony works. For instance: how do ants, being blind and basically brainless, manage to organize an effective invasion of your picnic basket?

As you may know, ants communicate by pheromones. They notice pheromones laid out by other ants and simply follow the trail. If you study a colony of ants looking for food – which some researchers indeed do (for example http://icouzin.princeton.edu/) – you’ll first see the ants scurrying about totally randomly. Sooner or later, by accident, one of these ants will randomly end up in your picnic basket, discovering your package of Tasty Chocolate Chip Cookies. Having made this wonderful discovery, the ant returns home with its precious piece of pastry, leaving a pheromone trail behind. Another ant accidentally walks into this trail, picks it up, and rediscovers the cookies. This ant will bring another piece back home – simultaneously making the pheromone track even stronger. Soon, there’s a veritable highway of pheromones and ants scurrying for your picnic basket.

[Figure: ants forming a pheromone trail]

 

In other words, the first tiny successful accident results in a positive feedback loop, reinforcing itself until it dominates the behavior of the entire system. In this way, the behavior of each individual ant can be random and chaotic, while the outcome as a whole is certain and determined: your cookies will be found, and they will be eaten. Error, in other words, is part of the system architecture; the behavior is built on accidents that are bound to happen.
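To make this concrete, here is a minimal sketch of such a feedback loop – our own toy illustration with made-up parameter values, in the spirit of classic ant-foraging models rather than code from any actual study: two equally good paths lead to the cookies, each ant picks a path with a probability that grows with the pheromone on it, and whichever path gets an early accidental advantage ends up dominating.

```python
import random

# Minimal sketch of pheromone-based path choice (made-up parameters, in the
# spirit of classic ant-foraging models). Two equally good paths lead to the
# cookies; each ant picks a path with probability that grows with the amount
# of pheromone on it, and reinforces whichever path it used.

pheromone = [0.1, 0.1]     # a tiny, equal amount on both paths to start with
EVAPORATION = 0.01         # fraction of pheromone that evaporates per ant
DEPOSIT = 1.0              # pheromone left behind by one returning ant

def choose_path():
    # Attractiveness grows faster than linearly with the pheromone level,
    # which is what lets one path eventually win outright.
    a, b = (p ** 2 for p in pheromone)
    return 0 if random.random() < a / (a + b) else 1

for ant in range(2000):
    path = choose_path()
    pheromone[path] += DEPOSIT                          # positive feedback
    pheromone = [p * (1 - EVAPORATION) for p in pheromone]

share = pheromone[0] / sum(pheromone)
print(f"Share of pheromone on path 0 after 2000 ants: {share:.2f}")
# Run it a few times: the result is almost always close to 0 or 1 - one path
# wins by accident, and the accident is then amplified until it dominates.
```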

Another thing we may observe – and this is characteristic of complex systems – is that the same dynamic shows up in very different settings. Think of cities. City neighborhoods often have a distinct personality; they are dominated by certain types of stores and certain types of people. How does this come about? How come, in Gothenburg, Andra Långgatan has all the cheap bars and Haga the expensive cafés?

This is another example of a positive feedback loop. Say that sociology student Anna is looking for a book about Complexity Theory. When she goes to store A to look for it, she happens to pass by stores B and C. If she sees something interesting, she might go in for a look. This simple action – swerving when seeing something interesting – is the very foundation of neighborhoods. If these stores sell things that also interest her, they will benefit from this accident; in other words, a positive feedback loop is in action. Relevant bookstores will fare better, resulting in certain neighborhoods becoming dominated by certain themes: because of Anna’s accidental swerving, the neighborhood with bookstore A might soon turn into a Complexity neighborhood, with heaps of nerdy bookstores and bars named El Farol[2].

In other words, both cities and anthills are organized by a behavior that emerges from a large number of accidents and random acts. There is a rule, or sense of direction, embedded in these systems. But where is this rule? How can we see it?

What is both fascinating and uncomfortable is that this is simply the wrong question: everyone and nobody embodies this rule. The instructions aren’t anywhere. We could never pick out and study one of the ants and somehow find this rule. The rule comes out of the way the colony lives and behaves. The rule can only be seen between the ants – in their multitude and relations; in what emerges from their interaction. Just like thoughts in your brain, the rule cannot be located in any specific place – a thought cannot be found in any specific nerve cell; it’s in all neurons and in none at the same time.

This is exceedingly difficult to think about, for two reasons. Firstly, it seems that our brains simply aren’t wired to handle this type of mass dynamics: the chains of causality are simply too long for us to process. Secondly, it is fundamentally different from everything we have been doing in science for the last few hundred years: it constitutes a fundamental scientific paradigm shift. We have yet to develop the scientific tools and concepts needed to approach this type of system. Luckily, such tools are being developed.

 

Approaching Complex Systems

Next to ants, birds are a favorite go-to example for complexity scientists. Just like ants, they were commonly thought to have a leader – a special bird flying up front, making all the important in-flight decisions. Having gone through how ant colonies are organized, we of course understand that this is not the case: the behavior of bird flocks emerges from interactions; they follow a rule that is part of everyone and nobody in the flock.

While scientists were still rather content with their understanding of bird flocks as controlled by a totalitarian leader, a 3D animator named Craig Reynolds was trying to solve a practical problem: how to make realistic bird flocks for a movie production. Having an actual flock of birds performing in the movie studio turned out to be rather impractical, so the movie people decided to go with animations instead – which landed the problem on Craig’s table.

Since Craig wasn’t a scientist, he didn’t really know about the leader theory – and in any case, such a thing seemed terribly difficult to program. So, like any good programmer, he just did the simplest thing he could think of: simulating the birds as simple interacting particles, each following a few simple rules. But what rules should the birds follow? Well, naturally, they should avoid collisions. Secondly, they should stick together with the flock and avoid going off on their own – otherwise it wouldn’t really be much of a flock. And thirdly, they should try to go in more or less the same direction as the rest of the flock, so that it all looks a bit more coherent. Anything else? Well, those three should suffice!

Much to Craig’s surprise, this approach worked. In fact, it worked really well.

What he had done had more implications than just solving the problem of bird feces on a movie set: he had found a way to recreate the emergent rule implicit in the behavior of birds. He had found a way to make the same rule – or at least a similar-looking one – emerge from the interaction of virtual birds: a way to experiment with emergence of rules.
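To give a flavor of how little is needed, here is a minimal sketch of the three rules – our own toy version with made-up weights, radii and speeds, not Reynolds’ original code:

```python
import numpy as np

# Toy boids: each bird adjusts its velocity using three local rules.
# All weights, radii and speeds are made-up illustrative values.
N = 50
pos = np.random.rand(N, 2) * 100        # positions in a 100 x 100 area
vel = np.random.randn(N, 2)             # random initial velocities

def step(pos, vel, radius=15.0, w_sep=0.5, w_coh=0.01, w_ali=0.1, v_max=3.0):
    new_vel = vel.copy()
    for i in range(N):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        neighbours = (dist > 0) & (dist < radius)
        if neighbours.any():
            # 1. Separation: steer away from neighbours that are too close
            close = (dist > 0) & (dist < radius / 3)
            if close.any():
                new_vel[i] += w_sep * (pos[i] - pos[close].mean(axis=0))
            # 2. Cohesion: steer toward the local centre of the flock
            new_vel[i] += w_coh * (pos[neighbours].mean(axis=0) - pos[i])
            # 3. Alignment: match the average heading of the neighbours
            new_vel[i] += w_ali * (vel[neighbours].mean(axis=0) - vel[i])
        # keep speeds within a sensible range
        speed = np.linalg.norm(new_vel[i])
        if speed > v_max:
            new_vel[i] *= v_max / speed
    return pos + new_vel, new_vel

for _ in range(200):                    # let the flock self-organize
    pos, vel = step(pos, vel)
```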

While this is indeed cause for celebration, we should remember that we don’t actually know whether the behavior of Craig’s birds has anything to do with the behavior of real birds. There may be a large set of behavioral rules that all lead to similar flock behavior. For all we know, different breeds of birds may follow totally different rules. All we’ve actually learned is that these particular rules lead to flocking behavior. By itself, this may not be a very valuable piece of knowledge.

Certain emergent rules are very easy to recreate. Traffic congestion is – as any city driver may have observed – a very robust phenomenon, and emerges in a large number of systems. You can literally give 30 complexity science students an hour and the task of writing a model of traffic, and there’s a good chance that they all – no matter how tired from studying all night – will end up with traffic congestion in their models. Other emergent rules are very hard to recreate. For instance, the dynamics of water have proved amazingly difficult to recreate from the interactions of water molecules.
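As an illustration of just how robust congestion is, here is a sketch in the spirit of the classic Nagel–Schreckenberg cellular-automaton traffic model – our simplified version, with made-up parameter values: cars accelerate when they can, brake to avoid hitting the car in front, and occasionally slow down at random, and jams appear out of nowhere.

```python
import random

# Sketch of a Nagel-Schreckenberg-style traffic model (illustrative values).
ROAD_LENGTH = 100   # number of cells on a circular road
N_CARS = 35         # density high enough for jams to appear
V_MAX = 5           # maximum speed (cells per time step)
P_SLOW = 0.3        # probability of random braking

cars = {pos: 0 for pos in random.sample(range(ROAD_LENGTH), N_CARS)}  # pos -> speed

def step(cars):
    new_cars = {}
    occupied = sorted(cars)
    for idx, pos in enumerate(occupied):
        v = cars[pos]
        ahead = occupied[(idx + 1) % len(occupied)]
        gap = (ahead - pos - 1) % ROAD_LENGTH
        v = min(v + 1, V_MAX)               # 1. accelerate if possible
        v = min(v, gap)                     # 2. brake to avoid a collision
        if v > 0 and random.random() < P_SLOW:
            v -= 1                          # 3. random slowdown
        new_cars[(pos + v) % ROAD_LENGTH] = v   # 4. move
    return new_cars

for _ in range(100):
    cars = step(cars)

jammed = sum(1 for v in cars.values() if v == 0)
print(f"Cars standing still after 100 steps: {jammed} of {N_CARS}")
```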

Craig’s birds – which he called boids – exemplify how modeling can be used to understand complex systems. While we humans, as we have already observed, are not great at long chains of causality, such chains are the bread and butter of computers. In this way, computers can help us bridge the gap of mass dynamics and better understand complex systems. Because of this, computer modeling has become a central part of complexity theory.

 

[Figure: boids]

Another tool that complexity science has provided us with is a set of words and concepts that we can use to better understand the world. By creating a concept, something abstract can become available and accessible. These concepts will be another focus of this course, and we’ll mention some here, just to give you a feel for it.

We have already happened upon some of these words: emergence – how larger organization develops from simple interactions – and positive feedback loops – the process through which small differences are amplified over time. Another example is tipping points, popularized by Malcolm Gladwell and by the debate on the greenhouse effect, describing how certain factors, once they pass a certain threshold, can suddenly become self-reinforcing and come to dominate an entire system. For example, many worry that if global warming passes a certain threshold, the climate will spiral out of control – and Sweden may suddenly become habitable.

Lock-in is a phenomenon that can perhaps be connected to negative feedback loops: it is the inability of a system to coordinate to leave a local optimum. Since there is no central authority in complex systems, coordination can be exceptionally difficult. Because of this, a group of lemmings may walk off a cliff[3] – even though no lemming in the group wants to. Similarly, a whole society may collectively head toward the metaphorical cliff of environmental collapse, with its constituent citizens realizing it, yet being totally unable to change the collective direction. Since the direction the society is heading in is emergent, embedded in the activities of everyone and nobody, it may be next to impossible to change without a large effort of collective reorganization. Indeed, even if all the ants became aware that their colony was engaged in unsustainable foraging practices, they would most likely be unable to change it[4].

The QWERTY keyboard layout is an epitomizing – and much less depressing – example of a lock-in. The QWERTY layout was designed to avoid the jamming of mechanical parts in early typewriters, by putting keys that are commonly used in succession as far away from each other as possible. This, of course, also means that the layout is optimized for maximal finger movement – which is terrible for both typing comfort and speed. The advantage of the design disappeared as typewriters became more sophisticated, and is now only a distant memory, while the downsides of the design remain with us. Yet, this very text is written on a QWERTY keyboard.

Positive feedback loops and tipping points can sometimes act through cascades – the phenomenon that from time to time leads to rolling power outages in the US: one lonely and tired power station somewhere out in the boonies gets overloaded as Hillbilly Bill powers up his moonshine machine, leading to a sudden power surge that overloads the next station, in turn leading to an even larger surge. At the end of the day, through this cascade of collapse, Mr. Bill may have managed to single-handedly wipe out the electricity supply of the entire Western hemisphere!
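A toy version of such a cascade might look like this – entirely made-up numbers, just to show the mechanism: each station has a fixed capacity, and when one station fails, its load is dumped on the surviving stations, which may in turn push some of them over their own limits.

```python
import random

# Toy cascade model (illustrative numbers): a station fails when its load
# exceeds its capacity, and a failed station's load is redistributed evenly
# among the surviving stations.
N = 50
load = [random.uniform(0.6, 0.99) for _ in range(N)]   # current load per station
capacity = [1.0] * N                                   # identical capacities
failed = set()

load[0] += 0.5              # the initial accident: station 0 gets overloaded

changed = True
while changed:
    changed = False
    for i in range(N):
        if i not in failed and load[i] > capacity[i]:
            failed.add(i)
            survivors = [j for j in range(N) if j not in failed]
            if survivors:
                extra = load[i] / len(survivors)
                for j in survivors:                    # dump the load elsewhere
                    load[j] += extra
            changed = True

print(f"Stations knocked out by one overload: {len(failed)} of {N}")
# Run it a few times: sometimes the overload stays local, sometimes it
# cascades through most of the grid.
```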

 

Complexity in social systems

As you may have noticed by now, complexity is all about collectivity – about crowds and interactions. But at the turn of the twentieth century – around the dawn of sociology – the collective crowd was not exactly in vogue. This was a time when the elites feared a growing militant working class: the masses were banging on the door to social power. Influential intellectuals of the time, such as Thomas Carlyle, Gustave Le Bon, and Nietzsche, all viewed crowds as the epitome of irrationality and stupidity: they caused riots and destruction, posing a threat to organized society.

But an important blow against this perspective was to come from a rather unexpected direction – from one of their own. In fact, from one of the most ardent elitists of them all: the British intellectual Sir Francis Galton, today lovingly remembered as the founder of eugenics.

One day, as Sir Francis Galton was visiting a local market – most likely amused by the bartering of the common folk – he happened upon a guessing game that caught his attention. It was a competition to guess the weight of a particularly large ox, with a prize promised to the best guesser. A large and diverse crowd of farmers and workers had gathered around the spectacle, eager to try their luck. After the game, an ever-curious Francis asked the organizer for the tickets with the guesses. He assumed not only that the individual guesses would be of rather poor quality – the crowd looked rather unschooled in the hard science of bovine weight approximation – but that the collective guess would be even worse. As a natural starting point, he therefore looked at the average guess of the crowd, confident that it would be way off. The average guess was 1197 pounds. The actual weight of the ox was 1198 pounds. That’s a pretty good guess. And it was not a one-off; it turned out to be a reproducible pattern: the average guess of a group is usually better than the guess of any of the group’s members. To his credit, Francis was more of a scientist than an elitist, and published this finding, describing the phenomenon sometimes called the wisdom of crowds.
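The statistical heart of the phenomenon is easy to reproduce in a toy simulation – made-up guesses, with only the ox’s weight taken from the story above: as long as the individual errors are independent and not all biased in the same direction, they largely cancel out in the average.

```python
import random

# Toy illustration (made-up numbers): many noisy, independent guesses,
# each individually poor, average out to something surprisingly accurate.
TRUE_WEIGHT = 1198                      # pounds, as in Galton's ox
guesses = [TRUE_WEIGHT + random.gauss(0, 150) for _ in range(800)]

crowd_estimate = sum(guesses) / len(guesses)
typical_individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"Crowd estimate:           {crowd_estimate:.0f}")
print(f"Crowd error:              {abs(crowd_estimate - TRUE_WEIGHT):.0f}")
print(f"Typical individual error: {typical_individual_error:.0f}")
# The crowd's error shrinks roughly as 1/sqrt(N) - but only as long as the
# individual errors are independent and not systematically biased.
```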

 

[Figure: crowds]

These types of effects are very reliable. In fact, a similar phenomenon both constitutes a valuable lifeline in “Who Wants to Be a Millionaire?” and forms the foundation of Google. It is very much the same mysterious crowd effect that allows Google to organize the web in a bottom-up way. The algorithm that does this is called PageRank, and it functions as a kind of voting system where each link between two websites counts as a vote. The basic rule is that if a lot of websites link to you, and a lot of websites link to those websites, your website becomes more visible on Google. So who decided which page ends up at the top of a Google search? Again, everyone and nobody: the intelligence of Google is emergent.
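A stripped-down sketch of the idea – a toy power-iteration version on a made-up link graph, not Google’s actual production algorithm (though the 0.85 damping factor is the value used in the original PageRank paper):

```python
# Toy PageRank by power iteration on a tiny, made-up link graph.
# Each page's score is a sum of "votes" from the pages that link to it,
# weighted by how important those pages are themselves.
links = {                       # page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

DAMPING = 0.85                  # value from the original PageRank paper
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):             # iterate until the scores settle
    new_rank = {p: (1 - DAMPING) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = rank[page] / len(outgoing)      # split this page's vote
        for target in outgoing:
            new_rank[target] += DAMPING * share
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda x: -x[1]):
    print(f"{page}: {score:.3f}")   # C ends up on top: most pages "vote" for it
```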

But there is one central difference here: unlike the ants, we – the Internet’s users and producers – are aware of the emergence. And when we understand how it works, we can interact with the emergent dynamics to gain an advantage in our competition with others. This is indeed the reason online spam bots put Viagra links all over the Internet.

Because of this, Google’s only option is to make its algorithm more complicated, to try to detect and punish this type of adaptation. In turn, spammers become more sophisticated in their attempts to get around the algorithm. The very complicatedness of the algorithm is in fact part of the game, since Google wants it to be as hard as possible for spammers to guess what the algorithm does. If the spammers can figure out how it works, they can find the weak points and exploit them. In other words, what we have here is a good old-fashioned evolutionary arms race. But it is a very particular type of arms race: one toward increased complicatedness in a complex system. As such a system ceases to be only complex, and starts to combine complexity with high levels of structural complicatedness, it will increasingly diverge from the properties of pure complex systems. This arms race dynamic can ultimately even lead the system to leave the realm of complex systems altogether and become something qualitatively different. And this is not something that can only be observed at Google, but in almost any emergent system where adapting, living things are involved.

We owe one of the best descriptions of such systems – like so many other things – to the British mathematician and logician Charles Lutwidge Dodgson, in one of the most brilliant and successful academic works of the 19th century. I am of course talking about Alice in Wonderland, written under the pseudonym Lewis Carroll.

In Alice in Wonderland, they play an interesting game of croquet. Instead of the usual croquet equipment, they use flamingos for mallets, hedgehogs for balls, and soldiers for hoops. This, of course, rather affects the game dynamics. The flamingos aren’t too interested in being used as mallets, and suddenly bend their necks so the players don’t know whether they even hit the ball, let alone how they hit it. And anyway, the ball might walk away of its own accord, being in fact a hedgehog. The hoops might too, being in fact soldiers. The whole thing just ends up being a muddle, with everything moving and nothing being fixed, and there is really nothing the players can do about it.

That this game becomes such a total muddle is distinctly linked to the game pieces being alive: had the balls just been uneven, the mallets wobbly or the lawn bumpy, the players could have learned and adapted. But with living things, this becomes impossible: the hedgehogs and flamingos will in turn adapt to the adaptations of the players, and the game never settles.

This metaphorical game of croquet is in fact more profound than it may initially seem. What it implies is that when the interacting components of a complex system are not merely rule-obeying agents, as in the case of birds or ants, we suddenly find ourselves in an even more complex, intriguing situation than before.

[Figure: Alice’s game of croquet]

Exactly this often happens to be the case when it comes to complex social systems. The basic components of social systems are people: individuals with a unique ability to interpret, describe and understand the realities that surround them, to internalize representations and narratives of what they experience, and then act upon these interpretations. This means that we are suddenly not only dealing with emergence from the bottom up, as in Craig’s model of boids, where the individual boids interact with each other and higher-level patterns emerge from this interaction. Now we are also dealing with emergence from the top down, since the emerging structures and patterns in turn have an impact on the lower-level interactions: they change the rules of the game. Humans generally do not only follow rules: we constantly adapt to new situations and adopt new behavior. In this sense, there is a dynamic relation between the emergent structures and the underlying levels. The system theorist Karl Marx once expressed this in a beautiful quote:

“Men make their own history, but they do not make it as they please; they do not make it under self-selected circumstances, but under circumstances existing already, given and transmitted from the past. The tradition of all dead generations weighs like a nightmare on the brains of the living.”

However, making this even more complex, most social systems are in fact not only characterized by bottom-up and top-down emergence. Higher-level structures can also be the emergent product of interactions at their own level or across levels. This means that social systems are often closely intertwined with other systems, and affected by causal mechanisms and feedback processes both within and between many levels. Think of an anthill where the collective behavior of the ants not only emerges from below, but is also affected by changes in the environment and in surrounding systems.

In much the same way, humans tend to exist within several systems and on several levels at the same time. We occupy positions in structures of, for example, gender, class and ethnicity, and these different systems are entangled: they are deeply connected to each other and difficult to neatly separate. This means that most social systems can perhaps more correctly be described as a messy entanglement of causation and emergence across all levels, as surrounding systems constantly impact each other. In other words, these systems are not like an onion or a Russian doll that can easily be peeled into separate layers to be studied in isolation. They resemble, rather, a mango: a messy entanglement of gummy strings and smudgy threads, where any effort to disentangle it merely results in sticky hands and a massacred fruit.

So, where does this leave us?

As we have argued, most social systems are undeniably complex systems. They are characterized by complexity-related system dynamics such as positive and negative feedback, tipping points and emergence. Just think of how a single person setting himself on fire may, under the right circumstances, trigger a large-scale, global uprising, removing decades-old dictatorships from power. Or think of how a society can for a long time be locked into a suboptimal technical solution (such as the QWERTY keyboard above), until a new innovation suddenly breaks through, leading to a cascade of further innovations that transforms the entire socio-technical system – much like the way the introduction of the petrol engine profoundly changed how society was organized.

But at the same time, social systems are not only complex; they are also very complicated, with their multi-level organization and bewildering array of qualitatively different, interacting entities. So while they do exhibit complexity-related system dynamics, we cannot simply use the same approach as when dealing with ants or a flock of birds and create a simple computer simulation. There are basically two interrelated reasons why this is not possible.

Firstly, most social systems are, as we have seen, open systems, and therefore very difficult to separate from their environment and study in isolation: we cannot just cut them off at the limbs, since they consist of so many different levels with processes that constantly interact and co-evolve. This means that the decision of where to make the cut – what to consider part of the system and what part of the environment – is essentially a subjective one, based on certain ideas, conceptions and interests. By cutting out the system, we also frame it in a certain way.

Secondly, how do you operationalize individuals as rule-obeying agents in a model? It is quite difficult to imagine how we could create a complete computer model of, for instance, a group of people interacting with each other. Humans are, just like the flamingos in Carroll’s game of croquet, adaptive, and their behavior changes dynamically in ways that simple rules generally cannot encapsulate. In other words, humans are themselves complex systems.

This is essentially where the field at the intersection of complexity science and sociology stands today. We know that most social systems are complex systems. We know that they consist of sometimes millions of interacting individuals, making up chains of causation that are too long for our unaided cognitive abilities to comprehend. Our brains are simply not made for following this type of mass dynamics, where so many actors constantly interact and impact each other. We clearly need formal computer models to help us out here. They can be understood as a form of thought experiment that helps us zoom in on complexity and narrativize mass dynamics. But we also need more “informal”, traditional sociological perspectives and theories that are rich in context and focus more on understanding processes and patterns than on exact formalization and reduction of the system. This is also one of the major issues in the field at this moment: how can we combine traditional sociological theory with computer models to help us understand the complexity of social systems?

We realize that this short essay has perhaps not provided many clear-cut answers to that question. But we hope that it has at least helped open up new ways of thinking, enabling you to see old problems in a new light. We also hope to have shown why we need more systems thinking in contemporary sociology, and how it can help us understand the complexity of everyday life.

 

Recommended literature

James Surowiecki (2004): The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations

Malcolm Gladwell (2001): The Tipping Point

Steven Johnson (2001): Emergence: The Connected Lives of Ants, Brains, Cities, and Software

Mark Buchanan (2007): The Social Atom

Footnotes

[1] http://www.jstor.org/stable/2808377

[2] Arthur, B. 1994. Inductive Reasoning and Bounded Rationality. http://www.jstor.org/stable/2117868

[3] It should be noted that the idea that lemmings collectively commit suicide is a misconception: they are a strongly migratory species that will sometimes collectively cross terrain that is simply too difficult for part or all of the group – such as a very wide river.

[4] Ants are sometimes involved in unsustainable foraging practices – this is how ant insecticides work!