From CSULA CS Wiki

Summaries originally compiled by R. Abbott. Feel free to change a summary if you think it does not represent the paper adequately.

Abbott, "Emergence and Systems Engineering: Putting complex systems to work"

From the paper. Our focus in this paper will be on the relationship between emergence and design. Emergence will lead us to consider the issue of entities, what they are and what it takes for them to persist. We also examine the sorts of environments that support emergence.

Section 2 discusses the notion of design in complex systems and explores what it means to say that a system is complex.

Section 3 extends the framework developed in section 2 to define emergence. It shows how emergence as we understand the term intuitively differs from the information theoretic sense. It also shows how emergence is intimately related to systems engineering. The notion of downward entailment (as contrasted with downward causation), developed in a previous paper, is reviewed.

Section 4 discusses how emergence is connected to our notion of entities. It discusses the question of whether entities are objectively real. (We conclude that they are.)

Section 5 discusses the relationship between thoughts and things—and in particular between thoughts, requirements, designs, and things. It discusses computer science’s success in developing languages that help us externalize our thoughts. It attributes a significant part of that success to the fact that the languages in which we express our thoughts are also the languages we use to control computers. Systems engineering isn’t so fortunate.

Section 6 discusses dissipative systems, a kind of entity intermediate between static and dynamic entities, and the kind of entities engineers tend to build. A major difference between dissipative systems and dynamic entities is that dynamic entities are designed from the core to be self-sustaining, with whatever additional functionality they have built on top of their ability to sustain themselves.

Section 7 discusses service-oriented designs. It argues that this is not a fad but a fundamental design principle used by nature.

Section 8 discusses feasibility ranges, arguing that all emergent properties have them and that it would serve us well to pay more attention to them.

Section 9 discusses modeling and simulation. It makes the point that we aren’t nearly as good at it as we need to be. It also makes the points (a) that, because of multi-scalar phenomena, modeling has built-in limitations and (b) that even if we were much better at it, we would still not necessarily know how to use it to model emergence.

Section 10 discusses innovative environments. An innovative environment is one that may be thought of as emergence-friendly. This section suggests some properties that innovative environments may be expected to have and that seem likely to foster emergence.


Axelband, "Stability: A Contribution Complex Systems can make to System of Systems Engineering?"

Treats the development process as a complex (chaotic) system and asks (but doesn't really answer) whether complex systems theory can help make it more stable.


Berryman, "Optimizing genetic algorithm strategies for evolving networks"

From the paper. This paper explores the use of genetic algorithms for the design of networks, where the demands on the network fluctuate in time. For varying network constraints, we find the best network using the standard genetic algorithm operators such as inversion, mutation and crossover. We also examine how the choice of genetic algorithm operators affects the quality of the best network found. Such networks typically contain redundancy in servers, where several servers perform the same task and pleiotropy, where servers perform multiple tasks. We explore this trade-off between pleiotropy versus redundancy on the cost versus reliability as a measure of the quality of the network.
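The standard operators the paper names (inversion, mutation, crossover) can be illustrated with a minimal sketch. Everything here is a hypothetical toy, not the paper's model: the genome is a bitstring of candidate network links, and the fitness function is an invented stand-in for the cost-versus-reliability trade-off.

```python
import random

random.seed(1)

GENOME_LEN = 12  # each bit: include candidate edge i in the network (hypothetical encoding)

def fitness(genome):
    # Invented trade-off: reliability rewards redundant edges, cost penalizes each edge.
    reliability = sum(genome) / GENOME_LEN
    cost = 0.4 * sum(genome) / GENOME_LEN
    return reliability - cost

def mutate(genome, rate=0.05):
    # Flip each bit independently with small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # One-point crossover: child takes a prefix of one parent, suffix of the other.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def inversion(genome):
    # Reverse a randomly chosen segment of the genome.
    i, j = sorted(random.sample(range(GENOME_LEN), 2))
    return genome[:i] + genome[i:j][::-1] + genome[j:]

def evolve(pop_size=20, generations=30):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # elitist selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            children.append(inversion(mutate(crossover(a, b))))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Swapping which operators are applied (e.g. dropping `inversion`) and comparing the best fitness found is the kind of operator-choice experiment the paper describes, though on a far simpler objective.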

Bhavnani, "Adaptive Agents, Natural Resources, and Civil War"

From the paper. This article adds agency to greed-based explanations of civil war by creating an artificial landscape populated with agents (government, rebel, and peasant) and natural resource deposits (alluvial and kimberlite diamonds). We model the incidence of civil war as contingent upon: (i) the government and rebel allocation of revenue—investment in extractive and military capacity, short-term robbery, or spending on social welfare; (ii) peasant support for the government or rebels; and (iii) the nature of the physical landscape — the type, size, and location of resource deposits. Using this exploratory model, we begin to explain contradictory findings from quantitative research on natural resources and civil war, find the relationship between export agriculture and civil war to be determined largely by government strategy, and highlight the importance of measurement, distinguishing between conflict onset, the number of independent conflict episodes, and disparate measures of conflict duration.

Boardman, "System of Systems – the meaning of of"

From the paper. We present distinguishing characteristics (i.e. autonomy, belonging, connectivity, diversity, and emergence) that can help us to recognize or to realize a System of Systems (SoS). The principal differentiation that we make between a thing being either a ‘system’ or a SoS focuses on the nature of a system’s composition. We will distinctly define this set of distinguishing characteristics, which will include a set of cross-references from our literature research where we believe others are articulating our chosen differentiating characteristics. We conclude by summarizing the difference in these terms in a fundamental sense, one that impacts their structure, behavior and realization; the distinction comes from the manner in which parts and relationships are gathered together and therefore in the nature of the emergent whole. …

Systems thinking for too long has been preoccupied with interior design, with the parts and their relationships. Meanwhile, exterior design, the context for the whole and all that this means in terms of influences, ownership, and adaptation has been sadly neglected or reduced to statements that determine the system’s interior design [2]. Anyone, military commander or CEO, will typically make this remark concerning the system they lead, “I care less about the make-up of the system, as an objective per se, but more about its ability to survive and prosper in uncertain environments perpetually changing in unknowable ways that increasingly appear to be more actively lethal with purposeful intent to secure my system’s demise”. …

For us the difference between system and SoS lies in composition. Both terms conform to the accepted definition of system in that each consists of parts, relationships and a whole that is greater than the sum of the parts, and therefore in that sense they are the same. But these terms differ in a fundamental sense, one that impacts their structure, behavior and realization, and the distinction comes from the manner in which parts and relationships are gathered together and therefore in the nature of the emergent whole. This distinction in gathering together comes about by two opposing forces, present in a SoS but entirely lacking for a system. These are the forces of legacy and mystery. Legacy is a driving force from the parts perspective and mystery acts upon the whole.

Boehm, "Putting Systems to Work: Processes for Expanding System Capabilities Through System of Systems Acquisitions"

From the paper. [T]raditional 20th century acquisition and development processes do not work well on [21st century software-intensive system of systems (SISOS)]. This article summarizes the characteristics of such systems, and indicates the major problem areas in using traditional processes on them. We also present new processes that we and others have been developing, applying, and evolving to address 21st century SISOS. These include extensions to the risk-driven spiral model to cover broad (many systems), deep (many supplier levels), and long (many increments) acquisitions needing rapid fielding, high assurance, adaptability to high change traffic, and complex interactions with evolving, often complex, Commercial Off-the-Shelf (COTS) products, legacy systems, and external systems. …

The appropriate metaphor for addressing rapid change is not a build-to-specification metaphor or a purchasing-agent metaphor but an adaptive “command-control-intelligence-surveillance-reconnaissance” (C2ISR) metaphor, shown in Figure 2. It involves an agile team performing the first three activities of the C2ISR “Observe, Orient, Decide, Act” (OODA) loop for the next increments, while the plan-driven development team is performing the “Act” activity for the current increment.

Bolton, "Some Thoughts on Systems Engineering, Engineering Systems & Complexity"

From the paper. The “hot buttons” which are the “baby steps” to the mechanism alluded to above are:

  1. A clear, concise, consistent (and lucid) definitional statement of requirements for Capability.
  2. The Cost of Capability in Architectural terms.
  3. The (Systems) Integration of Risk & Uncertainty, particularly in the definition and construction of interfaces under the influence of complexity (adaptation and emergence).

Clearly, what I am attempting to describe is what we mean by Capability and how we visualise it (what it looks like now and, equally, what it will look like in 20 years’ time!) and, more importantly, what it is all going to cost.

This now leads me to the position that within the practice of SE we are attempting to structure problem solutions (systems) that are increasingly complex and this is really because the dynamical behaviour (of the problem solution) can be difficult to understand. …

So by introducing Complexity Science, we are looking at the “Management of Diversity and Change”, Change being introduced largely because we are considering a whole life cycle (Through Life Capability Management) and in these circumstances, as always, change is inevitable. …

After serious methodological analysis, the position that I have reached is that the integration process for SE and Complexity Science, Sussman (2000) and Moses (2004), may well be the MIT ESD discipline of Engineering Systems. …

The motivation for this note was to investigate whether it was possible to integrate the certainties of largely freeform Complexity Science into a highly process-driven Systems Engineering environment.

This integration is considered to be possible through a third-party mechanism, the one chosen being the discipline of Engineering Systems (MIT ESD). For the sake of completeness, this author also felt it necessary to examine the motivations shared between Science and Engineering, particularly using metaphors from software engineering to make the case.

Boschetti, "An information-theoretic primer on complexity, self-organization, and emergence"

From the paper. Complex Systems Science aims to understand concepts like complexity, self-organization, emergence and adaptation, among others. The inherent fuzziness in complex systems definitions is complicated by the unclear relation among these central processes: does self-organisation emerge or does it set the preconditions for emergence? Does complexity arise by adaptation or is complexity necessary for adaptation to arise? The inevitable consequence of the current impasse is miscommunication among scientists within and across disciplines. We propose a set of concepts, together with their information-theoretic interpretations, which can be used as a dictionary of Complex Systems Science discourse.

Paraphrased. The paper provides an introduction to information theory and various measures of complexity (algorithmic complexity, statistical complexity, and excess entropy). Using that framework it offers both intuitive and formal definitions of terms commonly associated with complex systems: edge of chaos, self-organization, emergence, adaptation and evolution, and self-referentiality.
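The information-theoretic quantities the paper builds on can be sketched with empirical estimates. This is a minimal illustration, not the paper's formalism: `shannon_entropy` is the standard empirical Shannon entropy, and the difference of block entropies H(2) - H(1) is a crude finite-length estimate of the entropy rate, which distinguishes a predictable process from a random-looking one.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Empirical Shannon entropy H = -sum p * log2(p), in bits."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def block_entropy(seq, L):
    """Entropy of the empirical distribution of length-L sliding-window blocks."""
    blocks = [tuple(seq[i:i + L]) for i in range(len(seq) - L + 1)]
    return shannon_entropy(blocks)

# A perfectly periodic sequence: single symbols look maximally random
# (H(1) = 1 bit), but the entropy-rate estimate h = H(2) - H(1) is near
# zero, revealing that the process is in fact completely predictable.
periodic = [0, 1] * 50
h1 = shannon_entropy(periodic)          # 1.0 bit
rate = block_entropy(periodic, 2) - h1  # approximately 0 bits per symbol
```

The gap between single-symbol entropy and entropy rate is the kind of structure that measures like excess entropy quantify: the periodic sequence carries one bit per symbol of apparent randomness, almost all of which is actually stored structure.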

Braha, "Untangling the Information Web of Complex System Design"

Understanding the structure and function of complex networks has recently become the foundation for explaining many different real-world complex biological, technological and informal social phenomena. The analysis of these networks has uncovered surprising statistical structural properties that have also been shown to have a major effect on their functionality, dynamics, robustness, and fragility. This paper examines, for the first time, the statistical properties of large-scale engineering systems networks, and discusses the significance of these properties in providing insight into ways of improving the strategic and operational decision-making of the organization. The authors have shown that the empirical findings are found more generally in other large-scale Complex Engineered Systems (CES), including large-scale software and electrical circuits. The theory also provides a plausible explanation for the prevalent phenomenon of large-scale engineering failures. The new analysis methodology and empirical results are also relevant to other organizational information-carrying networks.

The following main results are obtained:

1) Complex engineering networks exhibit the “small-world” property, which means that they react rapidly to changes in design status;

2) Complex engineering networks are characterized by inhomogeneous distributions of incoming and outgoing information flows of nodes. Consequently, complex engineering networks are dominated by a few highly central ‘information-consuming’ and ‘information-generating’ nodes;

3) Complex engineering networks exhibit a noticeable asymmetry (related to the cut-offs) between the distributions of incoming and outgoing information flows, suggesting that the incoming capacities of nodes are much more limited than their outgoing capacities. The cut-offs observed in the in-degree and out-degree distributions might reflect Herbert Simon’s notion of bounded rationality, and its extension to group-level information processing;

4) Focusing engineering and management efforts on central ‘information-consuming’ and ‘information-generating’ nodes will likely improve the performance of the overall complex engineering process;

5) ‘Failure’ of central nodes affects the vulnerability of the overall complex engineering process;

6) Positive correlation between neighboring nodes ("coupling") tends to limit the range of parameter values for which the system converges to the uniformly resolved ("error-free") state;

7) The dynamics of complex engineering networks are highly error-tolerant (robust), yet highly responsive (sensitive) to perturbations targeted at specific nodes.
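The in-degree/out-degree analysis underlying results 2) and 3) can be sketched on a toy directed information-flow network. The task names and edges below are invented for illustration; an edge (a, b) means task a sends information to task b, and the "hubs" are the few nodes that generate or consume the most flow.

```python
from collections import defaultdict

# Hypothetical product-development information-flow network.
edges = [
    ("spec", "design"), ("spec", "test"), ("spec", "docs"),
    ("design", "impl"), ("design", "test"),
    ("impl", "test"), ("impl", "docs"),
    ("review", "design"), ("review", "impl"),
]

out_deg = defaultdict(int)  # information generated by each node
in_deg = defaultdict(int)   # information consumed by each node
for src, dst in edges:
    out_deg[src] += 1
    in_deg[dst] += 1

nodes = {n for edge in edges for n in edge}

# The central hubs: a few nodes dominate the flows, so engineering and
# management attention on them has outsized leverage (results 2 and 4).
top_generator = max(nodes, key=lambda n: out_deg[n])
top_consumer = max(nodes, key=lambda n: in_deg[n])
```

On real design networks the same tallies are binned into degree distributions, whose heavy tails and in/out cut-off asymmetry are the paper's empirical findings; this sketch only shows the bookkeeping.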


Main References:

[1] D. Braha and Y. Bar-Yam, “The Statistical Mechanics of Complex Product Development: Empirical and Analytical Results,” Management Science, Vol. 53 (7), July 2007.

[2] Dan Braha, Ali A. Minai, and Yaneer Bar-Yam, Complex Engineered Systems: Science Meets Technology. Springer, New York, June 2006.

[3] D. Braha and Y. Bar-Yam, “The Topology of Large-Scale Engineering Problem-Solving Networks,” Physical Review E, Vol. 69, No. 1, 2004.

[4] D. Braha and Y. Bar-Yam, “Information Flow Structure in Large-Scale Product Development Organizational Networks,” Journal of Information Technology, Vol. 19, No. 4, pp. 234-244, 2004.

Browning, "Program Architecture and Adaptation"

From the paper. Programs are extremely complex entities. They combine the challenges of engineering product systems with those of managing people, organizations, tools, processes, schedules, and budgets. As supply chains, partnering arrangements, outsourcing, globalization, technologies, capabilities, and stakeholder desires have grown in size and sophistication, program planning and control has become even more difficult. As a result, programs are notoriously challenged to deliver desired outcomes—i.e., a result with pre-specified levels of performance by a deadline and within a budget. The working assumption in this paper is that program managers do not have adequate decision-support systems and are faced with information overload. They are often surprised by emergent problems in programs. However, it is interesting to note that such surprises to the program manager are often known much earlier by someone on the program. No matter how good the “problem discovery” capabilities on a program, they are of limited use unless accompanied by effective “problem transmission” capabilities. Addressing these issues would seem to require the recognition and treatment of programs as complex systems—systems that perhaps can be engineered in a better way, or at least modeled and better understood. …

The paper consists of two main parts. The first discusses an approach to modeling five of the systems in a program [the product, the process, the organization, the tools, and the goals] and their interactions. The second discusses adaptation and emergence in the context of programs and the five systems, drawing examples from four case studies. …

Burkhart, "A Swarm Ontology for Complex Systems Modeling"

From the paper. Modeling and simulation will provide crucial capabilities for complex systems engineering, but existing languages, frameworks, and tools fall short of both fundamental and practical needs. This paper describes work-in-progress to fill these needs, based on a foundation of logic-based description and a system of concepts to describe complex dynamic systems. The modeling framework can include mappings to visual forms of model representation such as the OMG Systems Modeling Language (OMG SysML), and to executable forms of multi-agent simulation such as those implemented in the Swarm simulation system. …

Both these modeling frameworks, however, currently lack the formality and abstraction above the level of implementation which is needed for them to scale across multiple communities and development phases as needed for development of real systems. Swarm and related agent-modeling toolkits are implemented at the level of programming language libraries (the Objective C language in the case of Swarm, Java for many of the more recent toolkits), as driven by their primary goal to drive executable simulations. SysML is derived from the Unified Modeling Language (UML), also standardized by OMG, and combines visual diagrams for human communication with metamodels that can be exchanged in digital form across modeling tools. UML and SysML, however, provide only a limited set of standardized behavior models (procedural operations, activity flow diagrams, or finite-state machines), and their newly standardized abilities to describe hierarchical system structure and interconnection of system elements are still incomplete and in need of further specification and formalization.

The core of system structure description in UML and SysML, however, can be mapped to formal semantic models under various forms of logic-based languages, such as the description logics of the Resource Description Framework (RDF) and the Web Ontology Language (OWL) being standardized by semantic web initiatives, or full first-order logic in the Common Logic language being standardized by the International Organization for Standardization (ISO). The OMG Ontology Definition Metamodel [9] contains metamodels for these and other languages for logic-based description, along with the beginnings of mappings across them and with UML.

A notable characteristic of logic-based languages is their neutrality with respect to ontology. An ontology is a system of concepts suitable for describing some domain of interest. In languages such as UML and SysML, such concepts are defined in terms of basic types for the elements in some domain (classes in UML, or blocks in SysML), along with properties that relate these elements to each other.

Byrne, "Practicing Enterprise Systems Engineering"

From the paper. There is a growing recognition that traditional systems engineering (TSE) practices need to be augmented by what is often called enterprise systems engineering (ESE). ESE proposes a different set of principles and strategies to adaptively build systems where there is less control, certainty, and understanding of the environment. …

Perhaps the biggest obstacle of applying ESE to DOD systems is the long, successful history of TSE practices. …

Another critical obstacle to ESE is the tradition to avoid risk by driving out uncertainties. ESE embraces uncertainty as part of its strategy. The result is a shift from risk avoidance to risk management. When dealing with life critical systems, the prospect of evolving to weed out mistakes and the expectation of emergent behavior go against conventional certification and testing requirements. Also, proper risk management requires close partnership with the end user, something traditional acquisition processes tend to de-emphasize (sometimes with legal constructs to keep these separate). …

Three initial concepts that are worth exploring are:

  • 80/20 Products. Take 20 percent of the normal effort to get 80 percent of the vision quickly.
  • Convergence Protocols. A classic example of this is the Internet Protocol (IP) that is often shown as the neck of an hourglass. Applications ‘above’ the ‘IP neck’ all converge to IP and then ‘fan out’ below the IP convergence point to any number of transport/communications options.
  • Continuous Competition. If programs were structured with common frameworks using simple convergence protocols, in theory contractors could mix and match their contributions (DOD’s version of mashups).
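The convergence-protocol idea above (the "IP neck" of the hourglass) can be sketched as a narrow-waist pattern: many producers converge to one common format, which fans out to many interchangeable consumers. All names here are hypothetical illustrations, not any real API.

```python
from dataclasses import dataclass

@dataclass
class Datagram:
    """The narrow waist: the one format everything converges to."""
    dest: str
    payload: bytes

# 'Above the neck': applications that all emit the common format.
def email_app(text: str) -> Datagram:
    return Datagram(dest="mail.example", payload=text.encode())

def file_app(blob: bytes) -> Datagram:
    return Datagram(dest="files.example", payload=blob)

# 'Below the neck': interchangeable transports that all accept it.
def radio_link(d: Datagram) -> str:
    return f"radio -> {d.dest} ({len(d.payload)} bytes)"

def fiber_link(d: Datagram) -> str:
    return f"fiber -> {d.dest} ({len(d.payload)} bytes)"

# Any application pairs with any transport: N + M adapters
# instead of N * M point-to-point integrations.
msg = email_app("status report")
sent = fiber_link(msg)
```

The design payoff is the one Byrne gestures at for Continuous Competition: once all contributions converge on the waist format, contributors above and below it can be mixed, matched, or swapped independently.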

Paul Davis

Wayne Davis, "Systems-within-Systems: A Unifying Paradigm"

From the paper. This paper introduces a new paradigm for the design and operation of complex systems. Traditional approaches, including the system-of-systems perspective, attempt to reduce the overall system into its fundamental components. Such approaches inherently seek to differentiate, and often isolate, the components. The proposed paradigm seeks a shared mission for all components to exploit recursive design practices. An underlying feature of the proposed recursive designs is containment, where identified components are characterized as a system within another system.

While developing the proposed paradigm, the core system technologies—mechanics, controls and planning—were also unified. This unification was accompanied by an unanticipated unification of time—the past, present and future. In fact, a second temporal axis has been introduced to facilitate the on-line concurrent implementation of planning and control responsibilities. The paradigm discusses the inherent deficiencies of planning in general, and the specific limitations of applying optimization in real-world planning situations. It establishes an insurmountable need for another agent to implement an entity’s plan while refuting the subordinate stature that traditional hierarchical architectures would assign to an implementing agent. Rather, this interdependency establishes the need for expanded interaction among the planner and implementing agent, including the necessity of collaborative planning among the interacting systems. …

The intent in developing the proposed paradigm was to unify rather than invent. A first unification sought to exploit the similarity among the component subsystems within a system rather than emphasizing their distinguishing characteristics; a second unification sought to support direct interaction among the components rather than providing a central interface through which the interactions must occur; and a third sought to integrate planning, control and identification. The fourth and unanticipated unification sought to integrate the past, present and future.

These unifications established a need for collaborative planning. An individual entity can plan between specified initial and final states. We have been addressing that problem for decades by assuming that the initial and final states are known. The unifying paradigm asserts that these initial and final states represent the shared state variables for the coupling dynamics among interacting systems at a given time. As such, any initial state must be established in collaboration, not isolation.

Each composite controller addresses at least three forms of planning concurrently: collaboratively specifying its initial state using the feedforward projections of its implementing agents for alternative goal assignments, collaboratively specifying its goal state while serving as an implementing agent for one or more other composite controllers, and individually determining its current plan for transitioning between its current specification of an initial and final planning states.

There are actually other optimizations to be addressed. Recall that a composite controller’s state transition function represents an aggregation of its components’ state transition functions. We previously assumed that the processes are time-variant, implying that their state transition functions change in time. For time-variant systems, another optimization, termed system identification, becomes essential. The composite controller must collaborate with its component processes to update its state transition model because the composite controller’s dynamics are derived from the dynamics of its time-variant processes. It must also participate in their collaborative system identification because the composite controller’s dynamics are included in the dynamics of any other controller for which it serves as an implementing agent.

System identification is also inherently dependent upon observing prior responses. This necessarily implies that the identification process is constrained by prior usage of the real-world process. The true capabilities of the process can only be ascertained through experimentation; that is, until something different is tried, one cannot ascertain whether the explored behavior is feasible or not. This experimentation obviously represents a form of learning, and one must choose what performance frontier should be explored next. Again, this determination corresponds to another optimization.

Doyle, "Towards a Theory of Scale-Free Graphs"

From the overview. John Doyle’s research builds on insights about the fundamental nature of complex biological and technological networks that can now be drawn from the convergence of three research themes. 1) Molecular biology has provided a detailed description of many of the components of biological networks, and with the growing attention to systems biology the organizational principles of these networks are becoming increasingly apparent ([2], [3], [4], [5], [8], [9], [12], [13], [16], [17], [19], [20], [21], [22], [28], [30], [36], www.sbml.org). 2) Advanced technology has provided engineering examples of networks with complexity approaching that of biology. While the components differ from biology, we have found striking convergence at the network level of architecture and the role of layering, protocols, and feedback control in structuring complex multiscale modularity ([1], [8], [10], [16], [20], [25], [29], [31], [32], [39]). Our research is leading to new theories of the Internet and related networking technologies, and to new protocols that are being tested and deployed, particularly for high performance scientific computing ([6], [11], [13], [15], [18], [23], [26], www.hot.caltech.edu, netlab.caltech.edu). 3) Most importantly, there is a new mathematical framework for the study of complex networks that suggests that this apparent network-level evolutionary convergence both within biology and between biology and technology is not accidental, but follows necessarily from the requirements that both biology and technology be efficient, adaptive, evolvable, and robust to perturbations in their environment and component parts (www.cds.caltech.edu/sostools, [10], [13], [27], [33], [37], [38]). This theory builds on and integrates decades of research in pure and applied mathematics with engineering, and specifically with robust control theory.

Through evolution and natural selection or by deliberate design, such systems exhibit highly functional and symbiotic interactions of extremely heterogeneous components, the very essence of “complexity.” At the same time this resulting organization allows, and even facilitates, severe fragility to cascading failure triggered by relatively small perturbations. Thus robustness and fragility are deeply intertwined in both biological and technological systems, and in fact the mechanisms that create their extraordinary robustness are also responsible for their greatest fragilities ([8], [16], [17], [21], [22]). Our highly regulated and efficient metabolism evolved when life was physically challenging and food was often scarce. In a modern lifestyle, this robust metabolism can contribute to obesity and diabetes ([21]). More generally, our highly controlled physiology creates an ideal ecosystem for parasites, which hijack our robust cellular machinery for their own purposes. Our immune system prevents most infections but can cause devastating autoimmune diseases, including a type of diabetes. Our complex physiology requires robust development and regenerative capacity in the adult, but this very robustness at the cellular level is turned against us in cancer. We protect ourselves in highly organized and complex societies which facilitate spread of epidemics and destruction of our ecosystems. We rely on ever advancing technologies, but these confer both benefits and horrors previously unimaginable. This universal “robust yet fragile” (RYF) nature of complex systems is well-known to experts such as physicians and systems engineers, but has been systematically studied in any unified way only recently. It is now clear that it must be treated explicitly in any theory that hopes to explain the emergence of biological complexity, and indeed is at the heart of complexity itself.

These RYF features appear on all time and space scales, from the tiniest microbes and cellular subsystems up to global ecosystems, and also, we believe, to human social and technical systems, and from the oldest known history of the evolution of life through human evolution to our latest technological innovations. Typically, our networks protect us, which is a major reason for their existence. But in addition to cancer, epidemics, and chronic auto-immune disease, the rare but catastrophic market crashes, terrorist attacks, large power outages, computer network virus epidemics, and devastating fires, etc., remind us that our complexity always comes at a price. Statistics reveal that most dollars and lives lost in natural and technological disasters happen in just a small subset of the very largest events, while the typical event is so small as to usually go unreported. The emergence of complexity can be largely seen as a spiral of new challenges and opportunities which organisms exploit but which lead to new fragilities, often to novel perturbations. These are met with increasing complexity and robustness, which in turn creates new opportunities but also new fragilities, and so on. This is not an inexorable trend to greater complexity, however, as there are numerous examples of lineages evolving increasing simplicity in response to less uncertain environments. This is particularly true of parasites that rely on their hosts to control fluctuations in their microenvironment, thus shielding them from the larger perturbations that their hosts experience.

Grisogono, "The Success or Failure of Adaptation"

From the paper. Adaptation is a powerful mechanism displayed in many forms by living systems. It underpins evolution of species, animal learning, the development of culture, the response of the immune system to infections, and human creativity and problem-solving, to name but a few. It can also fail in many ways, as in species extinction, the development of phobias, or premature convergence on poor solutions.

Selection is an essential aspect of adaptation, and together with the variation which it acts on, is responsible for producing the rich diversity of designs, strategies, tactics and concepts which we observe in the natural world.

What we are interested in exploring in this paper is the question of what determines how well adaptation can work – what are the factors that limit it, and to what can we attribute the extent to which adaptation does succeed in increasing the success of living systems? …

We have seen that for both individual systems and populations of them, adaptation is about protecting and increasing their fitness, and that since fitness is a function of both system properties and context properties, it can do this either by acting on the system to change some of its properties (i.e., move it in design space) or by acting on the context to change the height of the fitness surface in the region of design space that the system already occupies.

The effectiveness of adaptation then depends on the factors that determine the rate of exploration of design space: the rate at which variations are produced; the scope, depth and concurrency of variations; the rate at which the variations can be evaluated for their impact on fitness; the accuracy of that evaluation; the strength of the selection pressure; and the accuracy and tolerance of the selection process. It also depends on the effectiveness of the exploration: what can be varied; whether the variations are random or targeted; and what the mapping is between the things that can be varied and the resulting changes in system properties (the so-called genotype-phenotype map [12], which depends on such properties as the modularity of the map). Effectiveness further depends on topological properties of the fitness landscape being explored [see e.g. 13], which determine what parts of the fitness space are accessible, and on dynamical and other properties of the context, which determine the rate and types of challenges to be met.
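The "knobs" Grisogono enumerates (variation rate, evaluation budget, selection pressure) show up concretely even in a toy evolutionary search. The sketch below is ours, not the paper's; all parameter names and values are illustrative:

```python
import random

def evolve(fitness, length=20, pop_size=30, mutation_rate=0.05,
           tournament=3, generations=100, seed=0):
    """Toy evolutionary search. The arguments are the 'knobs' discussed in
    the paper: mutation_rate sets the variation rate, pop_size * generations
    sets the evaluation budget, and tournament size sets selection pressure."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Selection: larger tournaments mean stronger selection pressure.
            return max(rng.sample(pop, tournament), key=fitness)
        # Variation: per-bit mutation of a tournament winner at the given rate.
        pop = [[b ^ (rng.random() < mutation_rate) for b in pick()]
               for _ in range(pop_size)]
    return max(pop, key=fitness)

# Example: maximize the number of 1s (the "one-max" landscape).
best = evolve(sum)
```

Raising `mutation_rate` widens exploration but lowers the chance any single variant is an improvement; shrinking `tournament` weakens selection pressure; both trade off exactly as the paper describes.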

Hornby, "Toward the Computer-Automated Design of Sophisticated Systems by Enabling Structural Organization"

From the paper. [F]or computer-automated design systems to scale to the design of more sophisticated systems they must be able to produce designs with greater structural organization. By structural organization we mean the characteristics of modularity, regularity and hierarchy (MR&H), characteristics that are found both in man-made and natural designs. We claim that these characteristics are enabled by implementing the [fundamental properties of programming languages, which are] combination, control-flow and abstraction in the representation. To defend this claim we define metrics for measuring the three components of structural organization (modularity, regularity and hierarchy) and then use an evolutionary algorithm to evolve solutions to different sizes of a table design problem using five different representations, each with a different combination of modularity, regularity and hierarchy enabled. We find that the best designs are achieved when all three of these attributes are present, thereby supporting our claim. Finally, we demonstrate the value of our metrics by comparing them against existing complexity measures and show that our metrics correlate better with good designs than do the other metrics. …

For example, in creating a design for a dining-room table the length of each table leg depends on the lengths of the other legs, and it is only useful to change the lengths of all legs together. By having a single description of a table leg, with references to this description at each place where it is used, all table legs are changed by changing this one description. Without reusable modules the CAD system must find and change all occurrences of a leg together, but this is feasible only when the dependencies are known beforehand and not when they are created during the search process. In the second case, as the number of parts in a design increases there is an exponential increase in the size of the design space. Since search consists of iteratively making changes to designs that have already been discovered, this increase in the design space reduces the relative effect of changing a single part in a design and increases the number of changes needed to navigate the design space. Increasing the amount of change made before re-evaluating a design is not a viable solution because this increase produces a corresponding decrease in the probability that the resulting design will be an improvement. With a generative representation the ability to combine and reuse previously discovered assemblies of parts, by either adding or removing copies, enables large, meaningful movements about the design space. Here the ability to hierarchically create and reuse organizational units acts as a scaling of knowledge through the scaling of the unit of variation.
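The table-leg example can be made concrete with a toy generative representation (our sketch, not Hornby's actual encoding): one module definition is referenced wherever a leg appears, so a single edit propagates to every leg at once:

```python
# Hypothetical generative encoding: "modules" holds one definition per
# reusable part; "body" is a list of references into those modules.

def make_table(design):
    """Expand a generative design into a flat list of concrete parts."""
    modules = design["modules"]
    return [dict(modules[ref]) for ref in design["body"]]

design = {
    "modules": {"leg": {"part": "leg", "length": 70},
                "top": {"part": "top", "width": 120}},
    "body": ["top", "leg", "leg", "leg", "leg"],  # the leg module reused 4x
}

design["modules"]["leg"]["length"] = 75   # one change to the module...
parts = make_table(design)
# ...changes every leg in the expanded design, keeping them consistent.
```

A non-generative (flat) representation would instead store four independent leg descriptions, and search would have to discover and coordinate four separate mutations to achieve the same move.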

Horowitz, "Self-Evaluating Agile Large-Scale Systems: SEALS"

From the paper. The basic concept for SEALS is to regularly perform opportunity and risk analyses in order to:

a) Determine the opportunities and risks for which more careful assessments are desired,

b) Support decisions on creation of a measurement and analysis sub-system, built-in as part of the overall system under consideration, to gather and assess information pertaining to the opportunities and risks of concern, including estimates of time to successfully seize opportunities or reduce risks, and

c) Create a system architecture that can be responsive to possibly needed changes in technical capabilities or human organization in a timely manner, and based on results from the measurement and analysis sub-system, initiate actual implementation efforts for “just in time” availability.

Hubler, "Guiding an adaptive system through chaos"

From the paper. We study the parametric controls of self-adjusting systems with numerical models. We investigate the situation where the target dynamics changes slowly and passes through a chaotic region. We find that feedback destabilizes controls if the target is chaotic. If the control is unstable the system migrates to the closest non-chaotic target, i.e. it adapts to the edge of chaos. For weak controls the deviation between system dynamics and target is larger, but the system dynamics is less chaotic and therefore more predictable. …

Most social organizations are modular structures with numerous feedback loops. A supervisor in a company controls the daily activities of a group of people, but his overall behavior is influenced by feedback from the supervised people. Activated genes control the cell dynamics, but the activation of genes occurs through feedback from the cell dynamics. The citizens of a state have to obey the laws but can change the laws through legislature. In many self-adjusting systems the feedback occurs on a much larger time scale than the dynamics.

One of the most striking features of self-adjusting parameters is their tendency to avoid chaos. This phenomenon is called “adaptation to the edge of chaos”. The concept “adaptation to the edge of chaos” refers to the idea that many complex adaptive systems, including those found in biology, seem to naturally evolve toward a narrow regime near the boundary between order and chaos [10]. …

An issue which has received little attention is the control of simple self-adjusting systems. … In the following we discuss the management of some very simple deterministic chaotic agents which adjust their dynamics through feedback. …

Figure 1 shows that feedback destabilizes controls if the target is chaotic. Even if the control is unstable, both the dynamics of the state of the self-adjusting system and the evolution of its parameter are predictable. If the control is unstable the system parameter migrates to the closest non-chaotic target, i.e. it adapts to the edge of chaos. Figure 4 shows that for soft controls the deviation between system dynamics and target is larger, but the system dynamics is less chaotic and therefore more predictable.
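Hubler's model is more elaborate, but the underlying point (chaotic targets are hard to track because nearby trajectories diverge) can be illustrated with the logistic map. This sketch is our own construction, not the paper's:

```python
def logistic_orbit(a, x0, n):
    """Iterate the logistic map x -> a*x*(1-x) for n steps."""
    xs = [x0]
    for _ in range(n):
        xs.append(a * xs[-1] * (1 - xs[-1]))
    return xs

def divergence(a, eps=1e-8, n=60):
    """How far two initially eps-close orbits drift apart (late in the run)."""
    xa = logistic_orbit(a, 0.4, n)
    xb = logistic_orbit(a, 0.4 + eps, n)
    return max(abs(p - q) for p, q in zip(xa[-20:], xb[-20:]))

# In the periodic regime (a = 3.5) nearby orbits converge onto the same
# cycle; in the chaotic regime (a = 3.9) they separate by many orders of
# magnitude, which is why feedback control of a chaotic target destabilizes.
```

A controller using feedback from such a system effectively sees noise once the target parameter enters the chaotic band, consistent with the parameter drifting back to the closest non-chaotic target.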

Humphreys, "Excerpts from Extending Ourselves [on modeling and simulation]"

From the paper. [T]he equations for most field problems of interest to engineers take no more than five or six characteristic forms. It therefore appears logical to classify engineering field problems according to the form of the characteristic equations and to discuss the method of solution of each category as a whole.

Clearly, the practical advantages of this versatility of equation forms are enormous: science would be vastly more difficult if each distinct phenomenon had a different mathematical representation. As Feynman put it in his characteristically pithy fashion, 'The same equations have the same solutions.' As a master computationalist, Feynman knew that command of a repertoire of mathematical skills paid off many times over in areas that were often quite remote from the original applications. From the philosophical perspective, the ability to use and reuse known equation forms across disciplinary boundaries is crucial because the emphasis on sameness of mathematical form has significant consequences for how we conceive of scientific domains. …

Percolation theory (of which Ising models are a particular example – see section 5.3) can be applied to phenomena as varied as the spread of fungal infections in orchards, the spread of forest fires, the synchronization of firefly flashing, and ferromagnetism. Agent-based models are being applied to systems as varied as financial markets and biological systems developing under evolutionary pressures. Very general models using directed acyclic graphs, g-computational methods, or structural equation models are applied to economic, epidemiological, sociological, and educational phenomena. All of these models transcend the traditional boundaries between the sciences, often quite radically. The contemporary set of methods that goes under the name of 'complexity theory' is predicated on the methodological view that a common set of computationally based models is applicable to complex systems in a largely subject-independent manner. …
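As a concrete instance of one model serving many domains, here is a minimal site-percolation check (our illustration, not Humphreys'); the same routine could equally stand in for fire spread through a forest or infection spread through an orchard:

```python
import random

def percolates(p, size=20, seed=1):
    """Does a cluster of open sites connect the top row to the bottom row
    of a size x size grid where each site is open with probability p?"""
    rng = random.Random(seed)
    open_ = [[rng.random() < p for _ in range(size)] for _ in range(size)]
    frontier = [(0, c) for c in range(size) if open_[0][c]]
    seen = set(frontier)
    while frontier:
        r, c = frontier.pop()
        if r == size - 1:
            return True          # reached the bottom: the grid percolates
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < size and 0 <= nc < size \
                    and open_[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append((nr, nc))
    return False
```

Whether `p` is tree density, orchard infection rate, or magnetic site occupancy, the mathematics (and the code) is the same, which is exactly the reuse-across-domains point being made.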

My question is … why do the same mathematical models apply to parts of the world that seem in many respects to be completely different from one another? … [My note. I'm not sure I know whether an answer to this question was offered.]

[Definition:] System S provides a core simulation of behavior B just in case S dynamically produces solutions to a computational model which correctly represents, either dynamically or statically, B. If in addition the computational model used by S correctly represents the mechanisms by means of which the real system R produces B, then S provides a core simulation of system R with respect to B. …

Because one of the goals of science is human understanding, how the simulation output is represented is of great epistemological import: visual outputs can provide considerably greater levels of understanding than other kinds. The output of the instrument must serve one of the primary goals of science, which is to produce increased understanding of the phenomena being investigated. Increases in human understanding are obviously not always facilitated by propositional representations and in some cases are precluded by them altogether. The form of the representation can profoundly affect our understanding of a problem, and because understanding is an epistemic concept, this is not at root a practical matter but an epistemological one.


Johnson, "An Example of Modeling and Simulation of Large-Scale Complex Systems, Processes, and Behaviors"

From the paper. Experiences in modeling and simulating large-scale complex systems, processes, and behaviors are presented. Specific emphasis is on the bone-fracture healing process and the digestive system in humans. Also, an approach to autonomic organism behavior modeling is presented. Methods and tools used are applicable in any arena where large-scale complex environments are being addressed. Results presented are representative of the information used to develop the models.

Kreitman, "From The Magic Gig to Reliable Organizations: A New Paradigm for the Control of Complex Systems"

From the paper. In cybernetic terms, control is distinct from regulation. Control effectiveness lies in setting and communicating parameters for results, and coordinating the environment so that the regulators will automatically behave in ways which achieve the system's overall goals. Thus, hierarchy which limits variety in the regulator is misplaced. Hierarchy which looks at a higher level of concern (longer timeframes, strategic evolution, and the like) needs to be in the business of control, but not in the business of regulation. The key to creating reliable environments is a designed environment consisting of policies and practices which are consistent with these principles and support behaviors favorable to accomplishing the whole system's goals, together with maximum flexibility within this structure to preserve coordinative and regulatory variety. The overriding systems principle is: fixed meta-structure, variable structure.

Kuras, "Complex-System Engineering"

From the paper. This paper proposes recognizing complex-system engineering as the second branch of general system engineering – alongside traditional system engineering.

There are problems for which the problem solving template of traditional system engineering is not appropriate – problems that continually change as they are being addressed, or that must be conceptualized at multiple scales in order to be fully comprehended, for example. Such problems are, however, amenable to solution using another problem solving template that is also faithful to the fundamental predicates of general system engineering. This is the template of complex-system engineering. Its developmental methods are summarized as the regimen of complex-system engineering. …

The overall [General System Engineering] problem solving template rests on three propositional predicates:

  • Any problem can be understood in terms of a system that is a solution to the problem. In this sense a system is the realized equivalence of a problem and its solution. Knowing what a system is is the sine qua non of system engineering.
  • The solutions to non-trivial problems do not instantly appear and disappear. The realizations of solutions to problems (as systems) exhibit life cycles.
  • The realizations of solutions to non-trivial problems involve a melding of multiplicities.

The differences between [Traditional System Engineering] and [Complex System Engineering] can be understood as differences in the way that a system is (or should be) conceptualized; as differences in the life cycles that account for their realization; and as differences in the multiplicities that must be melded, and how that melding is accomplished. …

For a system engineer, a system has to be a part of reality – so that a real world problem can be solved. But a system also has to be conceptualized. A system also has to be “in the head” of an engineer, as it were, so that an engineer can think about the system. An engineer generally thinks about a system in terms of its structure and substance, and its dynamics or behavior (or what the system does). The reality of a system and the conceptualization of a system are different manifestations of the same system; but they are not independent. And one is not simply the reflection of the other. In particular, the system “in the head” of an engineer is not simply a reflection of the system as it might exist in the real world. The purpose of a system, for example, (the reason or motivation for what a system is, or why a system does what it does) exists only in the head of the engineer. …


In many actual cases, not all of the patterns that constitute a system are available at a single scale of conceptualization. And it is not generally possible to conceptualize combinations of conceptualizations at different scales together. So multiple scales of conceptualization are required. This results in a substantially revised multi scale definition of a system. …

[Traditional System Engineering] proceeds on the basis that a single scale conceptualization of a system is adequate to realize a solution to a problem. … [Complex Systems Engineering] uses the more general (multi scale) definition of a system. …

[Traditional System Engineering] employs a life cycle that is a sequential or iterated series of disjoint phases. In the ideal, a phase is not begun until exact conditions are satisfied that conclude the immediately preceding phase.

In order to accomplish a realization of a non-trivial (and non-social) system, multiple disciplines must be brought to bear in a deliberate way. This deliberate integration of multiple disciplines is one of the two aspects of the melding of multiplicities in [Traditional Systems Engineering]. …

The [Traditional System Engineering] template is used to address problems that can be stabilized and for which their solutions (as systems) can be adequately isolated from incidental issues in their environments. …

[Complex Systems Engineering] employs a life cycle that involves a series of overlapping (not disjoint) phases. The early phases of this cycle are captured in the S-curve. The knees in this curve mark the transition from one phase to the next.

In [Complex Systems Engineering] the melding of multiplicities is centered on modulating the interactions of multiple autonomous agents. Techniques for modulating cooperation and competition as well as for coordination are employed.

[Complex Systems Engineering] is used to address problems that constantly change (i.e., problems that can’t be stabilized). This means that their solutions must constantly change as well. And there are problems (and their solutions) that can’t be stabilized. … Moreover, such problems and their solutions (as systems) can never be isolated from their environments. …

[What] is important to grasp is that there is a class of problems and solutions for which development cannot be isolated from operation; in which a system (as the equivalence of a problem and solution) cannot be isolated from its environment and that exhibits relevant functionality at multiple scales; that is social in nature (involves people and their behavior); and whose development involves either or both maturation and evolution. [Traditional Systems Engineering] cannot successfully deal with such problems, only with parts of such problems; [Complex Systems Engineering] addresses such problems and their solutions. …

The development and operation of a complex-system is self-directed. Central to this self-direction are the processes of evolution. [Complex Systems Engineering] is the deliberate and sustained intervention in these processes. Methods for doing this (intervention) are summarized as the regimen of complex-system engineering.

  1. Analyze and temporarily modify the environment in order to influence the self-directed development of the system;
  2. Tailor efforts by explicitly recognizing specific regimes and scales of the complex-system and its environment;
  3. Formulate targeted Outcome Spaces rather than exactly aligned outcomes at all scales and in all regimes;
  4. Establish rewards and penalties for autonomous agents;
  5. Judge actual results, and allocate prizes;
  6. Formulate and apply developmental stimulants (catalysts);
  7. Continuously characterize the operation, substance and structure of the system;
  8. Formulate and enforce safety regulation (policing).

Lane, "COSOSIMO Parameter Definitions"

From the paper. The Constructive System-of-Systems (SoS) Integration Cost Model (COSOSIMO) is designed to estimate the effort associated with the Lead System Integrator (LSI) activities to define the SoS architecture, identify sources to either supply or develop the required SoS component systems, and eventually integrate and test these high level component systems. For the purposes of this cost model, an SoS is defined as an evolutionary net-centric architecture that allows geographically distributed component systems to exchange information and perform tasks within the framework that they are not capable of performing on their own outside of the framework. The component systems may operate within the SoS framework as well as outside of the framework, and may dynamically come and go as needed or available. In addition, the component systems are typically independently developed and managed by organizations/vendors other than the SoS sponsors or the LSI.

Recent COSOSIMO workshops have resulted in the definition of three COSOSIMO sub-models: a planning/requirements management/architecture (PRA) sub-model, a source selection and supplier oversight (SS) sub-model, and an SoS integration and testing (I&T) sub-model. …

This technical report is an update to the COSOSIMO parameter definitions dated March 2006 and describes the parameters for each of the COSOSIMO sub-models. The parameters include a set of size drivers that are used to calculate a nominal effort for the sets of activities associated with the sub-model and a set of cost drivers that are used to adjust the nominal effort based on related SoS architecture, process, and personnel characteristics. Each size driver description includes a definition of the parameter as well as associated counting rules and guidance for assigning complexity ratings. Each cost driver description includes a definition of the parameter as well as guidance for assigning the appropriate rating factor.
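The report is about parameter definitions rather than the equation itself, but COCOMO-family models conventionally compute a nominal effort from weighted size drivers and scale it by multiplicative cost drivers. A generic sketch of that form (all constants, names, and values here are hypothetical illustrations, not COSOSIMO's actual calibration):

```python
def cososimo_like_effort(size_drivers, cost_drivers, a=2.5, b=1.1):
    """Generic COCOMO-family form: effort = A * (weighted size)^B * EM,
    where EM is the product of cost-driver multipliers. A nominal effort
    comes from the size drivers; cost drivers adjust it up or down."""
    size = sum(count * weight for count, weight in size_drivers)
    effort_multiplier = 1.0
    for rating in cost_drivers:         # e.g. architecture maturity, team
        effort_multiplier *= rating     # experience, process capability...
    return a * size ** b * effort_multiplier

# Two size drivers as (count, complexity weight) and two cost-driver ratings,
# where a rating above 1.0 inflates effort and below 1.0 reduces it.
effort = cososimo_like_effort([(10, 1.0), (4, 2.5)], [1.2, 0.9])
```

This also makes the report's split visible in code: the counting rules govern what enters `size_drivers`, while the rating guidance governs the `cost_drivers` multipliers.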

Finally, COSOSIMO workshop findings indicate that some of the SoS LSI activities are similar to systems engineering activities addressed by the Constructive Systems Engineering Cost Model (COSYSMO) and have similar size and cost drivers. Therefore, some of the COSOSIMO parameter definitions are adapted from the COSYSMO definitions in [Valerdi 2005] and are indicated by a footnote.


Maier, "Dimensions of Complexity other than “Complexity”"

From the paper. This paper offers a variety of attributes, with ranges, that are potentially equally valid development discriminators. …

Paraphrased. Dimensions along which complexity should be measured include: Sponsors, Users, Technology, Feasibility, Control, Situation-Objectives, Quality, Program Scope, Organizational Maturity, Technical Scope, and Operational Adaptation. … Using these dimensions, a number of footprints of various degrees of complexity are suggested.

Marcus, "Complex Systems Engineering Strategies"

From the paper. Complex systems are systems of elements that exhibit emergent, multiscale, metastable, non-equilibrium and evolutionary dynamics.

There are several possible ways for components of complex systems to interact.

  1. Collaboration - Components work together driven mainly by individual goals and constraints.
  2. Coordination - Component behavior is strongly influenced by group goals and constraints.
  3. Control - Component behavior is based mainly on centrally set goals and constraints.

The engineering of complex systems should be based on a combination of strategies:

  1. Top Down - Classical systems engineering driven by user requirements (Control-based).
  2. Bottom Up - Behavior emerges from self-organization of components (Collaboration-based).
  3. Matchmaking - Requirements are assigned to existing components (Discovery + Orchestration).
  4. Middle Out - Requirement and capability mediation extends matchmaking (Coordination-based).

In the Middle Out approach, requirements and capabilities can be modified during the development and operation of the complex system. Most complex systems engineering programs will require a combination of strategies. The Network Centric Operations Industry Consortium (NCOIC.org) is exploring a Bottom Up "Foundation Information Grid" demonstration approach as a step towards the DoD's Global Information Grid. Follow-on steps will incorporate user requirements, matching requirements to capabilities in the Foundation Information Grid, and extension of the NCOIC demonstrations as necessary.

McConnell, "10 Barriers to Complexity Science"

From the paper. In my opinion there is much that Systems Engineering ought to be learning from the nascent science of complex systems. However, the reality within industry is that, for the most part, systems engineers remain ignorant of the subject and the benefits that could be accrued.

Ten reasons why: Not invented here; Requires a change of mindset; Unable to accept increased uncertainty; Many Systems Engineers don't get 'systems'; Complex Science doesn't "get" engineering; Too focused on "doing the job"; No thinking outside the box; Not on the radar; It's just a fad; Often requires early investment; It is complex!

Norman, "Let’s focus on the manner and means of evolution"

From the paper. One of the key ideas coming from complexity scientists is the central role for evolution as the prime mechanism for results found at large scales and for complex systems. …

What I’d like to suggest are discussions which focus on the general characteristics of evolution as the means and mechanism by which extremely large-scale systems and enterprises are envisioned, initiated, designed, grown, and managed.

North, "Containing Agents: Contexts, Projections, and Agents" and "Agent-based Meta-Models"

"Containing Agents: Contexts, Projections, and Agents"

From the paper. Agent-based models have historically maintained a tight coupling between individuals, their behaviors, and the space in which they interact. As a result, many models have been designed in a way that limits their ability to express behaviors and interactions. In this paper, we propose a new approach toward designing simulations that builds upon the experiences of developing and working with several agent-based toolkits. This approach encourages flexibility and reusability of components and models. A preliminary implementation is the core structure of the upcoming Repast Simphony agent-based modeling and simulation toolkit. By creating a “proto-space” called a Context, we provide model designers with a container that can maintain a localized state for agents. A Context’s state can maintain multiple interaction spaces called Projections, as well as more typical state information. Projections are designed such that they can be used to represent a wide range of abstract spaces, from graphs to grids to realistic geographic spaces. Importantly, projections and agents or individuals are independent of one another. Agents can be agnostic toward the type of projection in which they are interacting, and projections can be agnostic toward the type of agents whose relationships they maintain. Finally, the context provides a logical location to maintain agent behaviors that are dependent on localized agent interactions and environment.
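The Context/Projection separation described above can be sketched in a few lines. This is a hedged illustration of the design idea only; Repast Simphony's actual API is in Java and differs in detail:

```python
class Projection:
    """Maintains relationships among agents while staying agnostic about
    what kind of agents they are (here, a simple undirected network)."""
    def __init__(self, name):
        self.name = name
        self.edges = set()

    def connect(self, a, b):
        self.edges.add(frozenset((a, b)))

class Context:
    """A 'proto-space': a container holding agents, localized state, and
    any number of interaction spaces (Projections)."""
    def __init__(self):
        self.agents = set()
        self.projections = {}

    def add_agent(self, agent):
        self.agents.add(agent)

    def add_projection(self, projection):
        self.projections[projection.name] = projection

# Agents need not know which projection mediates their interactions,
# and the projection never inspects the agents it relates.
ctx = Context()
ctx.add_agent("alice")
ctx.add_agent("bob")
net = Projection("friendship")
ctx.add_projection(net)
net.connect("alice", "bob")
```

Because the two classes know nothing about each other's internals, a grid, a graph, or a geographic space could be swapped in as a `Projection` without touching the agents, which is the reusability the paper argues for.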


"Agent-based Meta-Models"

From the Paper. The agent-based modeling and simulation (ABMS) community has long recognized a need for concise, complete, and implementation-neutral representations of agent-models, and for modeling tools that do not require significant computer programming experience. We discuss earlier efforts to address these needs, arguing that proposed representations were typically too high-level and did not cover behavior. It may be that these weaknesses were insurmountable at the time—and that it is only now, with the availability of relatively mature Domain Specific Languages (DSLs) and Model Driven Software Development (MDSD) tools that these needs may finally be met. We justify this claim by identifying significant issues modelers face in using General Purpose Languages (GPLs) for agent-based models and how these issues might be overcome by using DSLs. We describe the specific tools we intend to employ in that effort and how we plan to use those tools, and we propose a general meta-model for ABMS.

Perrow, "Normal Accident Theory"

From the paper. Normal Accident Theory (NAT) applies to complex and tightly coupled systems such as nuclear power plants, aircraft, the air transport system with weather information, traffic control and airfields, chemical plants, weapon systems, marine transport, banking and financial systems, hospitals, and medical equipment (Perrow 1984, 1999). It asserts that in systems that humans design, build and run, nothing can be perfect. … But occasionally two or more failures, perhaps quite small ones, can interact in ways that could not be anticipated by designers, procedures, or training. These unexpected interactions of failures can defeat the safeguards and mystify operators, and if the system is also 'tightly coupled', allowing failures to cascade, they can bring down a part or all of the system. The vulnerability to unexpected interactions that defeat safety systems is an inherent part of highly complex systems; they cannot avoid this. The accident, then, is in a sense 'normal' for the system, even though it may be quite rare, because it is an inescapable part of the system. …

The quintessential system accident occurs in the absence of production pressures; no one did anything seriously wrong, including designers, managers, and operators. The accident is rooted in system characteristics. …

A large reinsurance company found that it was making more money out of arbitraging the insurance premiums it was collecting from many nations: making money by transferring the funds in the currency the premium was paid in to other currencies that were slightly more valuable. They enlarged the size of the financial staff doing the trading and cut the size of their property inspectors. The inspectors, lacking time to investigate and make adequate ratings of risk on a particular property, were encouraged to sign up overly risky properties in order to increase the volume of premiums available for arbitraging. More losses with risky properties occurred, but the losses were more than covered by the gains made in cross-national funds transfers. The public at large had to bear the cost of more fires and explosions ('socializing' the risk). Insurance companies have in the past promoted safe practices because of their own interest in not paying out claims; now some appear to make more on investing and arbitraging premiums than they do by promoting safety. Open financial markets, and the speed and ease of converting funds, appear to interact unexpectedly with plant safety. …

Scott Snook examines a friendly fire accident wherein two helicopters full of UN peacekeeping officials were shot down by two US fighters over northern Iraq in 1994 (Snook 2000). The weather was clear, the helicopters were flying an announced flight plan, there had been no enemy action in the area for a year, and the fighters challenged the helicopters over radio and flew by them once for a preliminary inspection. A great many small mistakes and faulty cognitive models, combined with substantial organizational mismatches and larger system dynamics, caused the accident, and the hundreds of remedial steps taken afterwards were largely irrelevant. In over 1,000 sorties, one had gone amiss. The beauty of Snook's analysis is that he links the individual, group, and system levels systematically, using cognitive, garbage can, and NAT tools, showing how each contributes to an understanding of the others, and how all three are needed. It is hard to get the micro and the macro to be friends, but he has done it. …

One lesson is that NAT is appropriate for single systems (a nuclear plant, an airplane, a chemical plant, a part of world-wide financial transactions, or feedlots and livestock feeding practices) that are hardwired and thus tightly coupled. But these single systems may be loosely coupled to other systems. It is even possible that instead of hard-wired grids we may have a more 'organic' form of dense webs of relationships that overlap, parallel, and are redundant with each other, that dissolve and reform continuously, and present many alternative pathways to any goal. We may find, then, undesigned and even in some cases unanticipated alternatives to systems that failed, or pathways between and within systems that can be used. The grid view, closest to NAT, is an engineering view; the web is a sociological view. While the sociological view has been used by NAT theorists to challenge the optimism of engineers and elites about the safety of the risky systems they promulgate, a sociological view can also challenge NAT pessimists about the resiliency of large systems (Perrow 1999). Nevertheless, the policy implications of NAT are not likely to be challenged significantly by the 'web' view. While we have wrung a good bit of the accident potential out of a number of systems, such as air transport, the expansion of air travel guarantees catastrophic accidents on a monthly basis, most of them preventable but some inherent in the system. Chemical and nuclear plant accidents seem bound to increase, since we neither try hard enough to prevent them nor reduce the complexity and coupling that make some accidents 'normal' or inevitable. New threats from genetic engineering and computer crashes in an increasingly interactive world can be anticipated. …

We have yet to look at the other side of systems: their resiliency, not in the engineering sense of backups or redundancies, but in the sociological sense of a 'web-like' interdependency with multiple paths [to accident prevention or containment] discovered by operators (even customers) but not planned by engineers. NAT, by conceptualizing a system and emphasizing systems terms such as interdependency and coupling and incomprehensibility, and above all, the role of uncertainty, should help us see this other, more positive side.

Prokopenko, "Evolving Spatiotemporal Coordination in a Modular Robotic System"

Author's informal summary. The main idea of the paper, and the underlying approach ("information-driven evolutionary design"), is to capture information transfer in a multi-agent system with varying degrees of coupling/interaction, and to evolve the system by maximizing the transfer within certain "channels".

From the paper. In this paper we present a novel information-theoretic measure of spatiotemporal coordination in a modular robotic system, and use it as a fitness function in evolving the system. This approach exemplifies a new methodology formalizing co-evolution in multi-agent adaptive systems: information-driven evolutionary design. The methodology attempts to link together different aspects of information transfer involved in adaptive systems, and suggests approximating direct task-specific fitness functions with intrinsic selection pressures. In particular, the information-theoretic measure of coordination employed in this work estimates the generalized correlation entropy K2 and the generalized excess entropy E2 computed over a multivariate time series of actuators’ states. The simulated modular robotic system evolved according to the new measure exhibits regular locomotion and performs well in challenging terrains.
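Estimating K2 and E2 faithfully is beyond a short sketch, but the overall loop of information-driven evolutionary design can be illustrated with a stand-in coordination measure. In the sketch below, the fitness proxy (mean absolute pairwise correlation across actuator channels), the toy dynamics, and all names are illustrative, not the paper's estimators:

```python
import numpy as np

def coordination_fitness(actuator_series):
    """Stand-in coordination measure: mean absolute pairwise
    correlation across actuator channels (the paper itself uses
    generalized entropies K2 and E2, not this proxy)."""
    c = np.corrcoef(actuator_series)              # channels x channels
    n = c.shape[0]
    off_diag = c[~np.eye(n, dtype=bool)]
    return float(np.mean(np.abs(off_diag)))

rng = np.random.default_rng(0)

def simulate(weights, steps=200):
    """Toy coupled-actuator dynamics; returns channels x time array."""
    x = rng.standard_normal(len(weights))
    out = []
    for _ in range(steps):
        x = np.tanh(weights * x.mean()
                    + 0.1 * rng.standard_normal(len(weights)))
        out.append(x.copy())
    return np.array(out).T

# Toy "evolution": perturb coupling weights, keep changes that
# increase the coordination fitness of the resulting time series.
weights = rng.uniform(0.1, 1.0, size=4)
best = coordination_fitness(simulate(weights))
for _ in range(50):
    cand = weights + 0.05 * rng.standard_normal(weights.shape)
    f = coordination_fitness(simulate(cand))
    if f > best:
        weights, best = cand, f
```

The point is the shape of the loop: the fitness is an intrinsic, task-free statistic of the actuator time series, and selection pressure acts on it directly.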

Ryan, "Emergence is coupled to scope, not level"

From the paper. Since its application to systems, emergence has been explained in terms of levels of observation. This approach has led to confusion, contradiction, incoherence and at times mysticism. When the idea of level is replaced by a framework of scope, resolution and state, this confusion is dissolved. We find that emergent properties are determined by the relationship between the scope of macrostate and microstate descriptions. This establishes a normative definition of emergent properties and emergence that makes sense of previous descriptive definitions of emergence. In particular, this framework sheds light on which classes of emergent properties are epistemic and which are ontological, and identifies fundamental limits to our ability to capture emergence in formal systems.


Sheard, "Principles of Complex Systems for Systems Engineering"

From the paper. This paper shows how three systems of types well-known to systems engineers can be understood as complex systems, and what principles can and should apply to developing and improving them. … The three examples are INCOSE, the systems engineering process (such as a company’s standard process), and air traffic control. …

This paper presents a number of Complex Systems principles, selected for their applicability to the development and use of man-made engineering-based systems, i.e., systems engineering. …

[These are characteristic of complex systems, and the three systems listed above have these qualities:] Autonomous interacting parts (agents); Fuzzy Boundaries; Structure; Self-organization (emergent order); Can’t design or run top-down because...; Nonlinearity; Structure not deducible from structure of component parts; Energy in and out (examples); Adaptation to surroundings (environment); Become more complex with time; increasingly specialized; Elements change in response to imposed pressures from neighboring elements; Developer-Artifact-User (DAU) system components;

Many tables, lists, and comparisons between complex systems and systems engineering.

Quotes Bar-Yam's recommendation for how to transition from doing SE to doing CSE (based on [Bar-Yam 2006])

  1. Continually build on what already exists [It’s a complex system after all; it must evolve] Evolution from scratch is slow; start from something close to what you want.
  2. Focus on creating an environment and process rather than a product
  3. Individual components must be modifiable in situ
  4. Operational systems include multiple versions of functional components
  5. Utilize multiple parallel development processes
  6. Evaluate experimentally in situ
  7. Gradually increase utilization of more effective components. Note: Effective solutions to specific problems cannot be anticipated

Scheffran, "From Complex Conflicts to Stable Cooperation Cases in Environment and Security"

From the paper. To study the dynamics of conflict and the evolution of cooperation, we introduce an integrated framework for modeling the interaction of multiple actors who pursue objectives by allocating their resources to various action paths. In repeated learning cycles actors can adjust their targets and resources to those of other actors, thus shifting from conflict to cooperation. Within the general framework it is possible to study the complexity and instability of multi-actor constellations and the transition to cooperation for specific examples in a wide range of fields, including military security and environmental conflicts in fishery management, energy and climate change. …

Conflict potential is described as a continued difference of various factors: [values and goals, resources and means, and system states, options and actions.] For each of these dimensions, actors are able to draw a line between what they are willing to tolerate and what they are not. As long as a given state of the environment is wanted by some actors and unwanted by other actors, they tend to use their resources to change the difference to their benefit, at the cost of other actors. … An interaction process that increases rather than decreases the conflict potential contributes to conflict escalation, which can result in unstable dynamics. In many cases an escalation ends only when one or several of the involved actors reach their resource limits or disappear. If actors succeed in reducing the conflict potential, they are moving towards conflict resolution and a more stable interaction. Through learning and cooperation actors adapt their goals, resources and actions to those of other actors, working together rather than against each other.
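The adjustment dynamics can be caricatured in a few lines: two actors push a shared state toward opposed targets, while a learning cycle drifts their targets toward each other, shrinking the conflict potential. The update rules and coefficients below are invented for illustration and are not the paper's model:

```python
# Two actors with initially opposed goals act on a shared state.
state = 0.0
targets = [1.0, -1.0]
resources = [0.5, 0.5]
adapt_rate = 0.2          # willingness to adjust one's own target

history = []
for _ in range(40):
    # Each actor applies resources to pull the state toward its target.
    for i, t in enumerate(targets):
        state += resources[i] * 0.1 * (t - state)
    # Learning cycle: each target drifts toward the other actor's target.
    targets = [t + adapt_rate * (targets[1 - i] - t)
               for i, t in enumerate(targets)]
    # Conflict potential as the remaining difference in goals.
    history.append(abs(targets[0] - targets[1]))
```

With a positive adaptation rate the goal difference contracts geometrically each cycle, which is the "shift from conflict to cooperation" in miniature; setting `adapt_rate = 0` leaves the conflict potential constant.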

Shell, "Principled Synthesis for large-scale multi-robot systems: task sequencing"

From the author. We describe a novel approach for the synthesis of non-trivial coordinated behavior in large-scale swarms of robots. The idea is that coordination may be achieved through local interactions which are chosen to satisfy the ergodic property. Such local interactions may be described macroscopically (and in a time-invariant manner) through standard statistical mechanics techniques. Our research hopes to build a toolbox of such processes and their associated macroscopic characterizations. Construction of robot controllers can then be achieved by combining distributed processes while, simultaneously, coupling the processes' associated macroscopic characterizations. The objective is to allow the system designer to think about controller synthesis as the problem of combining macroscopic templates rather than as manipulation of low-level controllers, which are often sensitive to changes.

Silver, "Why we need systems biology"

From the paper. [T]here are … a number of observations that are not explained by existing [biological] disciplines. Why are biological systems designed the way they are, and not some other way? Why do oscillations sometimes arise spontaneously in perturbed biological systems, and what controls their characteristics? How do the conflicts between genes play out within an organism? Can a quantitative understanding of system failure inform therapy for diseases and disorders?

[Systems biology approaches] organisms as ‘poorly oiled machines’ and [has as a goal] to understand and predict the behavior of organisms that have not been rationally constructed.

Smith, "Establishing a Network Centric Capability: Implications for Acquisition and Engineering"

From the paper. This paper focuses on specific challenges of migration to network centric operations. Network centric operations refers to the moving and sharing of information in an agile manner among personnel and systems with the network as a central enabling mechanism for the sharing of information. …

[We] recommend the following strategies:

  • Identify Engineering Implications of Network Centric Missions
  • Adopt an Inclusive View of the Network Centric Community
  • Characterize the Existing Technology Base
  • Characterize the Gaps between Doctrine/Mission and the Technology Base
  • Collaborate with Others to Develop Governance Rules
  • Establish a Network Centric Integration Environment
  • Establish a Reward Structure
  • Train Network Centric Command and Engineering Staff
  • Prepare for Novel Forms of Acquisition

Because network centric operations assume an SOA orientation, the implications of SOA need to be understood and addressed. …

[Successful] SOA-based systems development requires attention to four pillars: [

  • Strategic Alignment with mission and business goals,
  • Governance,
  • Technology Evaluation,
  • Awareness of a Different Mindset (loose coupling, network-wide semantics, incompletely known service set, and multiple sources for services)].

Stepney, "Neutral Emergence: a proposal"

From the paper. An emergent property exhibits neutral emergence when a change in the microstate L does not change the macrostate S, or vice versa. In particular, it can be robust to many changes in its implementation, including, possibly, the effect of errors. It is often stated that emergent systems (often modelled on natural processes) exhibit robustness: here we see why (and where) this may be the case. The excess information in L (a large H(L|S)) is necessary for emergent systems to be robust in this manner.

As argued earlier, an engineering development process can be seen as implementing specification S by finding an L with a high mutual information I(S : L). Here we see that, at the same time, the process can also seek to maximise robustness, by searching for a system that is insensitive to (uncorrelated with) certain failure modes or other possible changes in L. If a system were stressed during development (exposed to a range of stresses and implementation errors), its implementation could be encouraged towards regions that are insensitive (robust) to such events. (Compare this to the development of formally proven systems: they do not guarantee any level of performance with even the smallest change.) By analogy to evolutionary fitness landscapes, we want to find systems that lie in gently sloping plains and plateaux, rather than on narrow peaks or steep cliffs. …

[A] system exhibits minimal emergence when everything is a surprise (zero mutual information). Clearly a model like this that knows nothing about what it is modelling is useless, but equally (as argued above) some degree of surprise (some conditional information, or less than maximal mutual information) in the system may prove advantageous. Thus there should be a level of emergence with the maximum utility, a position at which the model has freedom to explore but is held within a constrained region of the search space.
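The quantities involved can be made concrete with a toy discrete example. In the sketch below the joint distribution is invented for illustration: two micro-configurations of L implement each macrostate S, so L carries excess information H(L|S) beyond the mutual information I(S : L):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Hypothetical joint distribution over macrostate S (rows) and
# microstate L (columns). Each row of S is realised by two equally
# likely micro-configurations, so L has one bit "left over".
joint = np.array([[0.25, 0.25, 0.0, 0.0],
                  [0.0, 0.0, 0.25, 0.25]])

p_s = joint.sum(axis=1)
p_l = joint.sum(axis=0)

H_S = entropy(p_s)                # 1 bit
H_L = entropy(p_l)                # 2 bits
H_SL = entropy(joint.flatten())
I_SL = H_S + H_L - H_SL           # mutual information I(S : L)
H_L_given_S = H_L - I_SL          # excess information H(L|S)
```

Here I(S : L) = 1 bit (L fully determines S), and the remaining H(L|S) = 1 bit is exactly the slack the paper argues an implementation can spend on robustness: the microstate can change within a row without changing the macrostate.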

Terrile, "Evolutionary Computation Technologies for Space Systems"

From the paper. The Evolvable Computation Group at NASA’s Jet Propulsion Laboratory is tasked with demonstrating the utility of computational engineering and computer optimized design for complex space systems. The group is comprised of researchers over a broad range of disciplines including biology, genetics, robotics, physics, computer science and system design, and employs biologically inspired evolutionary computational techniques to design and optimize complex systems. Over the past two years we have developed tools using genetic algorithms, simulated annealing and other optimizers to improve on human design of space systems. We have further demonstrated that the same tools used for computer-aided design and design evaluation can be used for automated innovation and design. …

We have demonstrated that evolutionary computational techniques can be applied to the design and optimization of space systems. Generally, these applications offer better performance (in the range of at least 10%) than traditional techniques and show faster design times. Additionally, changing fitness requirements and redesign, which inevitably occurs in real systems and generally causes great fiscal and schedule disruption, can be accommodated at relatively low cost.

Vandergriff, "Systems Engineering in the 21st Century: Implications from Complexity Theory": slides & text

Paraphrased from the paper. This white paper provides initial insights on how to accommodate the demands that are not being met by the classical 20th Century System Engineering model. Complicated isolated systems and physical capital dominated systems of systems lent themselves to traditional “grand, all-at-once” 20th Century system engineering. The complexity concepts explored in “complex systems” literature also focus on a limited set of complicated asset and connectivity only solution sets. As observed in Complexity Theory, it is not necessary for a solution to be large in scope, complicated, or have a high number of interfaces to exhibit complex behaviors. Rather, 21st Century solutions deal with behaviors arising from the interdependence of users, technology, and context, often referred to as “wicked” problems. It is important to clearly define a model to inform the Architecting and Systems Engineering Acquisition best practices. To do this one must first explore the fundamental differences in defining, developing, and implementing complicated systems and complex ventures. The proposed Complex Venture model builds upon the insights derived from chaos and complexity theories; observations of several acquisitions successes and failures; and my doctoral research on decision support for Agile Enterprises.

Complicated and complex models of reality have inherently different characteristics and descriptions. Lissack and Roos (2000) have described the differences between a model of the world that has discrete, yet complicated, structure and one that has interdependent complex structure. The insight, they explain, lies in the roots of the two words: “complicated” uses the Latin ending “plic,” meaning “to fold,” while “complex” uses “plex,” meaning “to weave.” Thus, a complicated structure is one that is folded with hidden facets and stuffed into a smaller space. On the other hand, a complex structure uses interwoven components and context that introduce mutual dependencies and produce more than a sum of the parts. In today’s solutions, this is the difference between a myriad of connecting complicated “stovepipes” with varying description lengths over scales and effective complex “integrated” solutions composed of both simple and complex systems varying over time to increase fitness.

Although Complicated Systems and Complex Ventures Architecting and Systems Engineering approaches have some overlap, research is also beginning to identify several differences. Key observations of approaches that seem to improve Complex Venture Acquisition and System Engineering are summarized here as a starting point for further Architecting and Systems Engineering community discussion and experimentation. (Paper explains each)

  • Leadership, not control, with clear and consistent venture-wide vision
  • Address rapidly changing context and the co-evolutionary ventures
  • Institute tiered situationally aware decision-making in both time and place
  • Address all factors contributing to success

Webb, "UML Modeling of Finite State Machines and Molecular Machines"

From the paper. In this paper I introduce a simple biological control system in which an enzyme continuously transforms glycogen into a more readily usable sugar. The activity of this enzyme is closely regulated by two other enzymes. I then introduce aspects of the Unified Modeling Language (UML), a popular standard used by software developers to graphically design executable models. I also present Xholon, a research tool and runtime environment that can execute UML models.

The bulk of the paper works through a progression of models designed and executed with UML and Xholon. Each is a model of the same biological control system. Model 1 is a symbolic finite state machine (FSM). Model 2 is a more physical simulation using UML FSM objects. Model 3 uses more biologically plausible objects. Model 4 attempts to remove any remaining hidden symbols to produce a system in which each object's behavior is only dependent on what other objects it is composed of or attached to. Model 5 is a version that can be integrated into an existing much larger more complex simulation, to explore how scalable the ideas presented here are. Model 6 is a fully physical realization of the system using Lego blocks, to reinforce the importance of taking a physical perspective.
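The spirit of Model 1 can be sketched as a two-state machine whose transitions are driven by the two regulating enzymes. State and event names below are illustrative, not taken from the paper:

```python
# Minimal finite-state-machine sketch of the biological control
# system: the enzyme has two states, toggled by two regulating
# enzymes; only the active form converts glycogen to usable sugar.

class Enzyme:
    def __init__(self):
        self.state = "inactive"

    def signal(self, event):
        # One regulating enzyme activates, the other deactivates.
        transitions = {
            ("inactive", "kinase"): "active",
            ("active", "phosphatase"): "inactive",
        }
        self.state = transitions.get((self.state, event), self.state)

    def step(self, glycogen, sugar):
        # Only the active form transforms one unit per step.
        if self.state == "active" and glycogen > 0:
            return glycogen - 1, sugar + 1
        return glycogen, sugar

e = Enzyme()
glycogen, sugar = 5, 0
e.signal("kinase")                 # activating enzyme fires
for _ in range(3):
    glycogen, sugar = e.step(glycogen, sugar)
e.signal("phosphatase")            # deactivating enzyme fires
glycogen, sugar = e.step(glycogen, sugar)   # no further conversion
```

This is the "symbolic FSM" end of the paper's progression; the later models replace such hidden symbolic transitions with behavior that depends only on what each object is composed of or attached to.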

Weber, "What are highest priorities in building intellectual infrastructure to understand & design Complex Human Systems (CHS)? The case for Multi-Agent Based Simulation"

From the paper. The major bottleneck in using multi-agent based simulation for exploratory analysis and understanding emergent behavior either as unintended consequences or desired synergy is the time spent constructing scenarios of sufficient complexity—enough for surprising and useful results but not so complex that the scenario becomes confusing or generates “noise in the analysis.” Most current scenarios have one and possibly two scales of interaction, but most interaction is within scale rather than the more interesting interactions from larger to smaller scale or an integration of smaller scale effects to cause an effect at the larger scale.

Three is a useful number of scales to aggregate and play in a simulation. One can think of them as global, national or strategic; regional, community, or theater; and local, personal or tactical. Each scale exists within a unique domain of time and space, say 1 to 30 min and 0.01 to 100 km for tactical interactions, 0.5 to 12 hrs and 100 km to 1000 km for regional, and 0.5 to 5 days and 1000 km to GEO for strategic. …
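The three scales and their time/space domains can be restated as a configuration sketch (the ranges are copied from the text; the dictionary layout and key names are ours):

```python
# Scale configuration for a three-level multi-agent simulation.
SCALES = {
    "tactical": {"aliases": ("local", "personal"),
                 "time": ("1 min", "30 min"),
                 "space": ("0.01 km", "100 km")},
    "regional": {"aliases": ("community", "theater"),
                 "time": ("0.5 hr", "12 hrs"),
                 "space": ("100 km", "1000 km")},
    "strategic": {"aliases": ("global", "national"),
                  "time": ("0.5 day", "5 days"),
                  "space": ("1000 km", "GEO")},
}
```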

White, "Multi-Scale Modeling of the Air and Space Operations Center"

From the paper. The goal of this effort is to use multi-scale modeling to understand the effect of operator-environment interaction and the global environment on Air and Space Operations Center (AOC) processes. Models were developed at 3 scales, including: 1) Operator interaction with computer interface (Agent-based model); 2) Processing of Time Sensitive Targets (TST, Petri net model); and 3) Mission-scale objectives, strategy, and processes, including adversary response and global and US public perception (System Dynamics model). An existing Petri net model of the operational architecture of the AOC was updated for this study; all models were developed with subject matter experts.

Petri nets are well-suited for modeling systems that consist of a number of processes that communicate and need synchronization. The focus of the Petri net process model is critical event response time and manpower utilization. These measures of operation are computed and passed to the System Dynamics model. Information overflow indicators for operators (e.g., operator stress) and the effect of operator reduction on critical event processing both have high degrees of interdependence with the overall time critical event processing, and were found to be critical future research areas during the Joint Expeditionary Forces Experiment (JEFX) in 2006. AOC process models often focus on making operations as efficient as possible. However, without factoring in the global environment in which the AOC operates, locally optimum procedures may result in solutions that are not optimal for the global scale.

The System Dynamics model runs over a time horizon appropriate for long term strategic planning. The focus is on the mission-scale (i.e., Joint Force Objectives, political decision making, social support level), which might require a time horizon of 10, 30, or 365 days, for example. Events and behaviors that unfold in the System Dynamics model are then passed back to the Petri net model. The AOC process model is linked to the System Dynamics model through the variables average response time and maximum personnel utilization. The variable average response time drives the effectiveness of destroying adversary resources in the System Dynamics model, while the variable maximum personnel utilization is used to drive the probability of major errors in prosecution. The System Dynamics model is linked to the AOC Petri net process model through the variable event data stream, which generates event streams for five different event types to be processed in the AOC model.
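The variable linkage described above can be sketched as a co-simulation loop in which the two models exchange named quantities each cycle. Every function body and coefficient below is an illustrative placeholder, not part of the AOC models:

```python
# Co-simulation sketch: the process model exports response time and
# personnel utilization; the System Dynamics model consumes them and
# returns the next cycle's event stream plus an error probability.

def process_model(event_stream):
    """Placeholder for the Petri net AOC process model."""
    n = len(event_stream)
    avg_response_time = 5.0 + 0.1 * n        # minutes, toy formula
    max_personnel_utilization = min(1.0, n / 50.0)
    return avg_response_time, max_personnel_utilization

def system_dynamics_model(avg_response_time, max_utilization):
    """Placeholder for the mission-scale System Dynamics model."""
    # Faster response -> adversary resources destroyed more effectively;
    # higher utilization -> higher probability of major errors.
    effectiveness = 1.0 / avg_response_time
    p_major_error = 0.5 * max_utilization
    n_events = int(20 * (1.0 - effectiveness))   # toy feedback
    return list(range(n_events)), p_major_error

events = list(range(30))
for _ in range(3):                               # three exchange cycles
    rt, util = process_model(events)
    events, p_err = system_dynamics_model(rt, util)
```

The design point it illustrates is the one the paper makes: because the event stream fed back to the process model depends on mission-scale outcomes, locally optimizing the process model alone can miss the global optimum.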

Wolpert, "Using Self-dissimilarity to Quantify Complexity"

From the paper. For many systems characterized as "complex" the patterns exhibited on different scales differ markedly from one another. For example the biomass distribution in a human body "looks very different" depending on the scale at which one examines it. Conversely, the patterns at different scales in "simple" systems (e.g., gases, mountains, crystals) vary little from one scale to another.

Accordingly, the degrees of self-dissimilarity between the patterns of a system at various scales constitute a complexity "signature" of that system. Here we present a novel quantification of self-dissimilarity.

This signature can, if desired, incorporate a novel information-theoretic measure of the distance between probability distributions that we derive here. Whatever distance measure is chosen, our quantification of self-dissimilarity can be measured for many kinds of real-world data. This allows comparisons of the complexity signatures of wholly different kinds of systems (e.g., systems involving information density in a digital computer vs. species densities in a rain-forest vs. capital density in an economy, etc.). Moreover, in contrast to many other suggested complexity measures, evaluating the self-dissimilarity of a system does not require one to already have a model of the system. These facts may allow self-dissimilarity signatures to be used as the underlying observational variables of an eventual overarching theory relating all complex systems. To illustrate self-dissimilarity we present several numerical experiments. In particular, we show that the underlying structure of the logistic map is picked out by the self-dissimilarity signature of time series produced by that map.
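A crude version of such a signature can be computed by comparing the distribution of a time series at adjacent coarse-graining scales. The sketch below uses Jensen-Shannon divergence between histograms of window-averaged logistic-map values; the paper derives its own, more principled distance measure, so treat this only as an illustration of the idea:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (bits) between two histograms."""
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Logistic map time series in the chaotic regime (r = 4).
x, series = 0.4, []
for _ in range(5000):
    x = 4.0 * x * (1.0 - x)
    series.append(x)
series = np.array(series)

def scale_histogram(s, window, bins=20):
    """Pattern at a given scale: histogram of window-averaged values."""
    coarse = s[: len(s) // window * window].reshape(-1, window).mean(axis=1)
    hist, _ = np.histogram(coarse, bins=bins, range=(0.0, 1.0))
    return hist.astype(float)

# Crude self-dissimilarity signature: distance between the patterns
# at each pair of adjacent scales.
signature = [js_divergence(scale_histogram(series, w),
                           scale_histogram(series, 2 * w))
             for w in (1, 2, 4, 8)]
```

For a "simple" system in the paper's sense (e.g., an i.i.d. noise series), the entries of such a signature would stay small across scales; structure in the map shows up as scale-to-scale differences.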

Zaerpoor, "Issues in the Structure and Information Flow in the Pyramid of Combat Models"

From the paper. Experience with the SEAS multi-mission combat model has raised issues regarding interfaces between it and military models of different resolution and functionality. We have observed that certain layers in the traditional hierarchy of military models often have little if any added value. Too often, the range and scope of the different models employed in a study are extended to the point of obscuring or impeding analysis. Most significantly, the models at the highest or the lowest level do not seem to contribute to the analyst’s insight. The output of aggregate campaign simulations usually contains little information not implicit in the inputs and their direct interpretation within the model. In the case of detailed, high resolution models, the input is often unavailable and must be arbitrarily specified to satisfy model requirements. Faced with these obstacles, we propose a general framework for simulation-based acquisition.