Complexity Digest May 2014
Nobody has figured out how to spot the most influential spreaders of information in a real-world network. Now that looks set to change, with important implications, not least for the superspreaders themselves.
Who are we, and how do we relate to each other? Luciano Floridi, one of the leading figures in contemporary philosophy, argues that the explosive developments in Information and Communication Technologies (ICTs) are changing the answer to these fundamental human questions.
As the boundaries between life online and offline break down, and we become seamlessly connected to each other and surrounded by smart, responsive objects, we are all becoming integrated into an “infosphere”. Personas we adopt in social media, for example, feed into our ‘real’ lives so that we begin to live, as Floridi puts it, “onlife”. Following the revolutions led by Copernicus, Darwin, and Freud, this metaphysical shift represents nothing less than a fourth revolution.
The problem of targeted network immunization can be defined as that of finding a subset of nodes in a network to immunize or vaccinate in order to minimize a tradeoff between the cost of vaccination and the final (stationary) expected infection under a given epidemic model. Although computing the expected infection is a hard computational problem, simple and efficient mean-field approximations have been put forward in the literature in recent years. The optimization problem can be recast as a constrained one in which the constraints enforce local mean-field equations describing the average stationary state of the epidemic process. For a wide class of epidemic models, including the susceptible-infected-removed and the susceptible-infected-susceptible models, we define a message-passing approach to network immunization that allows us to study the statistical properties of epidemic outbreaks in the presence of immunized nodes as well as to find (nearly) optimal immunization sets for a given choice of parameters and costs. The algorithm scales linearly with the size of the graph, and it can be made efficient even on large networks. We compare its performance with topologically based heuristics, greedy methods, and simulated annealing on both random graphs and real-world networks.
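The local mean-field equations that serve as constraints can be illustrated with a small sketch. The following is not the authors' message-passing algorithm, but a plain quenched mean-field fixed-point iteration for the stationary SIS infection probabilities, with immunized nodes pinned to zero; the graph and parameters are purely illustrative.

```python
import numpy as np

def sis_mean_field(A, beta, mu, immunized, iters=500):
    # quenched mean-field SIS: stationary probabilities p_i satisfy
    #   p_i = s_i / (1 + s_i),  s_i = (beta/mu) * sum_j A_ij p_j,
    # with p_i held at 0 on immunized nodes
    p = np.full(len(A), 0.5)
    p[list(immunized)] = 0.0
    lam = beta / mu  # effective spreading rate
    for _ in range(iters):
        s = lam * (A @ p)
        p = s / (1.0 + s)
        p[list(immunized)] = 0.0
    return p

# star graph on 5 nodes: hub 0 connected to leaves 1..4
A = np.zeros((5, 5))
A[0, 1:] = A[1:, 0] = 1.0
print(sis_mean_field(A, 0.8, 1.0, immunized=set()).sum())
print(sis_mean_field(A, 0.8, 1.0, immunized={0}).sum())  # 0.0
```

Immunizing the hub disconnects the leaves, so the mean-field fixed point collapses to zero infection; an immunization algorithm searches for the cheapest node set with this kind of effect.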
Following Holland, complex adaptive systems (CASs) are collections of interacting, autonomous, learning decision makers embedded in an interactive environment. Modeling CASs is challenging for a variety of reasons including the presence of heterogeneity, spatial relationships, nonlinearity, and, of course, adaptation. The challenges of modeling CASs can largely be overcome by using the individual-level focus of agent-based modeling. Agent-based modeling has been used successfully to model CASs in many disciplines. Many of these models were implemented using agent-based modeling software such as Swarm, Repast 3, Repast Simphony, Repast for High-Performance Computing, MASON, NetLogo, or StarLogo. All of these options use modular imperative architectures with factored agents, spaces, a scheduler, logs, and an interface. Many custom agent-based models also use this kind of architecture. This paper's contribution is to introduce and apply a theoretical formalism for analyzing modular imperative agent-based models of CASs. This paper includes an analysis of three example models to show how the formalism is useful for predicting the execution time and space requirements for representations of common CASs.
Evolution of online social networks is driven by the need of their members to share and consume content, resulting in a complex interplay between individual activity and attention received from others. In a context of increasing information overload and limited resources, discovering which behavioral patterns are most successful at attracting attention is very important. To shed light on the matter, we look into the patterns of activity and popularity of users in the Yahoo Meme microblogging service. We observe that a combination of different types of social and content-producing activity is necessary to attract attention, and that the efficiency of users, namely the average attention received per piece of content published, follows a well-defined temporal trend for many users. The analysis of the user efficiency time series reveals different classes of users whose distinct activity patterns give insight into the types of behavior that pay off best in terms of attention gathering. In particular, sharing content with high spreading potential and then supporting the attention it raises with social activity emerges as a frequent pattern among users gaining efficiency over time.
We study percolation on networks, which is widely used as a model of the resilience of networked systems such as the Internet to attack or failure and as a simple model of the spread of disease over human contact networks. We reformulate percolation as a message passing process and use the resulting equations to show, among other things, that for sparse networks, which includes most networks observed in the real world, the percolation threshold is given by the inverse of the leading eigenvalue of the so-called non-backtracking matrix. Like most message passing calculations, our results are exact on networks that have few small loops but, as we show, they also provide bounds on the percolation behavior of networks that do contain loops.
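The threshold result is easy to experiment with. The sketch below (an illustrative toy, not the paper's code) builds the non-backtracking matrix of a small undirected graph, whose entry for the directed-edge pair (u→v, v→w) is 1 whenever w ≠ u, and returns the inverse of its leading eigenvalue; for a k-regular graph such as K4 this recovers the known threshold 1/(k-1).

```python
import numpy as np
from itertools import combinations

def non_backtracking_matrix(edges):
    # each undirected edge {u, v} contributes two directed edges ("darts")
    darts = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    idx = {d: i for i, d in enumerate(darts)}
    B = np.zeros((len(darts), len(darts)))
    for u, v in darts:
        for x, w in darts:
            if x == v and w != u:  # v->w continues u->v without backtracking
                B[idx[(u, v)], idx[(x, w)]] = 1.0
    return B

def percolation_threshold(edges):
    # p_c = 1 / (leading eigenvalue of the non-backtracking matrix)
    lam = max(abs(np.linalg.eigvals(non_backtracking_matrix(edges))))
    return 1.0 / lam

# complete graph K4 is 3-regular, so the leading eigenvalue is k - 1 = 2
K4 = list(combinations(range(4), 2))
print(round(percolation_threshold(K4), 3))  # 0.5
```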
Social media systems rely on user feedback and rating mechanisms for personalization, ranking, and content filtering. However, when users evaluate content contributed by fellow users (e.g., by liking a post or voting on a comment), these evaluations create complex social feedback effects. This paper investigates how ratings on a piece of content affect its author’s future behavior. By studying four large comment-based news communities, we find that negative feedback leads to significant behavioral changes that are detrimental to the community. Not only do authors of negatively-evaluated content contribute more, but also their future posts are of lower quality, and are perceived by the community as such. Moreover, these authors are more likely to subsequently evaluate their fellow users negatively, percolating these effects through the community. In contrast, positive feedback does not carry similar effects, and neither encourages rewarded authors to write more, nor improves the quality of their posts. Interestingly, the authors that receive no feedback are most likely to leave a community. Furthermore, a structural analysis of the voter network reveals that evaluations polarize the community the most when positive and negative votes are equally split.
This paper describes the deployment of a large-scale study designed to measure human interactions across a variety of communication channels, with high temporal resolution and spanning multiple years: the Copenhagen Networks Study. Specifically, we collect data on face-to-face interactions, telecommunication, social networks, location, and background information (personality, demographics, health, politics) for a densely connected population of 1,000 individuals, using state-of-the-art smartphones as social sensors. Here we provide an overview of the related work and describe the motivation and research agenda driving the study. Additionally, the paper details the data types measured and the technical infrastructure, in terms of both backend and phone software, as well as an outline of the deployment procedures. We document the participant privacy procedures and their underlying principles. The paper concludes with early results from data analysis, illustrating the importance of a multi-channel, high-resolution approach to data collection.
We analyze the flow of information in multiplex networks by means of the communicability function. First, we generalize this measure from its definition for simple graphs to multiplex networks. Then, we study its relevance for the analysis of real-world systems by examining a social multiplex in which information flows through formal and informal channels, and an air transportation system in which the layers represent different airlines. We find that communicability, which is essential to the good performance of these complex systems, emerges at a systemic operating point of the multiplex at which the layers operate in a coordinated way, very differently from the state represented by a collection of unconnected networks.
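For a simple graph, the communicability between nodes p and q is G_pq = (e^A)_pq, a count of all walks between them with length-k walks downweighted by 1/k!. One common way to extend it to a multiplex, which may differ in detail from the generalization defined in the paper, is to apply the same matrix exponential to the supra-adjacency matrix, as in this illustrative sketch:

```python
import numpy as np

def communicability(A):
    # G = exp(A) via eigendecomposition (A symmetric):
    # G_pq sums all walks from p to q, weighting length-k walks by 1/k!
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(w)) @ V.T

# two layers on the same 3 nodes; the supra-adjacency matrix places the
# layer adjacencies on the diagonal and couples each node's replicas
# across layers with strength gamma
L1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # path 0-1-2
L2 = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]], float)  # star at node 2
gamma = 1.0
supra = np.block([[L1, gamma * np.eye(3)], [gamma * np.eye(3), L2]])
G = communicability(supra)
# cross-layer communicability: node 0 in layer 1 to node 2 in layer 2
print(G[0, 5] > 0)  # True
```

Varying gamma from 0 (unconnected layers) upward is one way to probe the coordinated operating point the abstract refers to.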
Understanding the mechanisms and processes underlying the dynamics of collective violence is of considerable current interest. Recent studies indicated the presence of robust patterns characterizing the size and timing of violent events in human conflicts. Since the size and timing of violent events arise as the result of a dynamical process, we explore the possibility of unifying these observations. By analyzing available catalogs of violent events in Iraq (2003-2005), Afghanistan (2008-2010) and Northern Ireland (1969-2001), we show that the inter-event time distributions (calculated for a range of minimum sizes) approximately obey a simple scaling law which holds for more than three orders of magnitude. This robust pattern suggests a hierarchical organization in size and time, providing a unified picture of the dynamics of violent conflicts.
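The test behind such a scaling law can be sketched: take the inter-event times between events above a minimum size, rescale each set by its mean, and check that the rescaled distributions collapse onto a single curve across thresholds. The catalog below is synthetic (Poisson times with Pareto-like sizes), purely to illustrate the rescaling procedure, not the conflict data.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic catalog: Poisson event times, heavy-tailed event sizes
times = np.cumsum(rng.exponential(1.0, size=20000))
sizes = (1.0 - rng.random(20000)) ** (-1.0 / 1.5)  # Pareto tail, alpha = 1.5

def rescaled_intervals(times, sizes, s_min):
    # inter-event times between events of size >= s_min, rescaled by mean
    dt = np.diff(times[sizes >= s_min])
    return dt / dt.mean()

# a scaling law means these rescaled distributions share one shape
for s_min in (1.0, 2.0, 4.0):
    tau = rescaled_intervals(times, sizes, s_min)
    print(len(tau), round(np.median(tau), 2))
```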
In the last few years there have been many efforts in econophysics to study how network theory can facilitate our understanding of complex financial markets. These efforts consist mainly of studies of correlation-based hierarchical networks. This is somewhat surprising: research on financial markets rests on the assumption that they are complex systems and thus behave nonlinearly, which numerous studies confirm, making the reliance on correlations, which capture only linear dependencies, puzzling. In this paper we introduce a way to incorporate nonlinear dynamics and dependencies into hierarchical networks for studying financial markets, using mutual information and its dynamical extension, the mutual information rate. We show that this approach leads to results different from those of the correlation-based approach used in most studies, on the basis of 91 companies listed on the New York Stock Exchange 100 between 2003 and 2013, using minimal spanning trees and planar maximally filtered graphs.
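A minimal version of the pipeline can be sketched as follows. This is an illustration of the general approach, not the paper's estimator: mutual information is approximated with a simple histogram plug-in, the MI-to-distance map is just a monotone transform, and the data are synthetic, with two return series sharing a common factor and a third that is independent.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    # plug-in MI estimate from a 2D histogram of the two return series
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

def mst_edges(D):
    # Prim's algorithm on a dense distance matrix
    n = len(D)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        _, i, j = min((D[i, j], i, j) for i in in_tree
                      for j in range(n) if j not in in_tree)
        edges.append((i, j))
        in_tree.add(j)
    return edges

rng = np.random.default_rng(0)
z = rng.normal(size=2000)                     # common "market" factor
returns = [z + 0.3 * rng.normal(size=2000),   # two tightly coupled series
           z + 0.3 * rng.normal(size=2000),
           rng.normal(size=2000)]             # independent series
n = len(returns)
M = np.array([[mutual_information(returns[i], returns[j]) for j in range(n)]
              for i in range(n)])
D = np.exp(-M)                  # monotone map: high MI -> short distance
np.fill_diagonal(D, np.inf)     # exclude self-distances
print(mst_edges(D))
```

The minimal spanning tree links the two factor-driven series directly, since their high mutual information gives them the shortest distance.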
Networks and Network Analysis for Defence and Security discusses relevant theoretical frameworks and applications of network analysis in support of the defence and security domains. This book details real-world applications of network analysis to support defence and security. Shocks to regional, national and global systems stemming from natural hazards, acts of armed violence, terrorism and serious and organized crime have significant defence and security implications. Today, nations face an uncertain and complex security landscape in which threats target the physical, social, economic and cyber domains. Threats to national security, such as those against critical infrastructures, stem not only from man-made acts but also from natural hazards. Hurricane Katrina (2005), Fukushima (2011) and Hurricane Sandy (2012) are examples highlighting the vulnerability of critical infrastructures to natural hazards and the crippling effect such events have on the social and economic well-being of a community and a nation. In this dynamic and complex threat landscape, network analysis has emerged as a key enabler in supporting defence and security. With the advent of “big data” and increasing processing power, network analysis can reveal insights into structural and dynamic properties, thereby facilitating greater understanding of complex networks, their entities, interdependencies, and vulnerabilities, and producing insights for creative solutions. This book is well positioned to inform defence, security and intelligence professionals and researchers about leading methodologies and approaches.
– Complex Systems Society
Gottfried Mayer, Founding Editor
Carlos Gershenson, Editor-in-Chief