On definitions and free lunches
There are some interesting differences in definition and focus. One that Alex Ryan brought to my attention is the definition of adaptation. I tend to use "adaptation" to describe the process of both learning and evolution. In the biological sciences, it tends to be used to describe part of an end result of that process (e.g. the mind of humans is a useful adaptation). In some studies of complex adaptive systems, the focus is more on evolution, and learning is mostly ignored. However, for some complex adaptive systems (e.g. on the battlefield, or more generally in organisations) you may not have time scales over which evolution can occur, and the only way to get adaptation is through learning. In other systems (e.g. my paper, Prokopenko's paper) you run up against the No Free Lunch theorem, in which case the choice between an evolutionary algorithm and learning may boil down to context (or to practicalities such as ease of programming, computational power, and memory limitations). I was going to write a good definition here of the distinction between a complex system and a complex adaptive system, but Grisogono writes: "This last property, of learning from experience, is a defining characteristic of CAS, and distinguishes complex adaptive systems from those (complex systems) which are simply reactive". For more definitions, I strongly recommend reading through this and also Cosma Shalizi's notebooks.
Thoughts on the Prokopenko paper: It would be nice to see the snakes evolve under a multi-objective evolutionary algorithm with both excess entropy (co-ordination of movement) and speed as objectives, and to see whether this results in the set of fast robots being contained in the set of well-co-ordinated robots. Edit: my point is somewhat redundant; as Prokopenko pointed out in his talk, their measure already tries to evolve fast and well-co-ordinated (robust) robots. A similar point could be made about my own paper, in using both pleiotropy and redundancy measures rather than combining them into a single fitness measure. It would also be nice to see the spread of fitness, rather than just the best performer in each generation as plotted in Figs. 7 and 8.
If I had the chance to rewrite my paper, I would build a multi-objective evolutionary algorithm, as discussed above, evolving on both cost and reliability as separate functions. I would also write up the alternative approach to the problem / fitness function discussed at the end of my slides. It would also be nice to include at least some results from my previous paper for easier comparison, or better yet to repeat them with the current fitness functions but with crossover removed. I'm very open to different ways that crossover could be implemented, since I'm unhappy with the method I used.
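To make the multi-objective idea concrete, here is a minimal sketch (with made-up numbers; the names `dominates` and `pareto_front` are mine, purely illustrative) of the Pareto-dominance comparison such an algorithm would be built around, treating cost and reliability as separate objectives rather than collapsing them into one fitness measure:

```python
import random

def dominates(a, b):
    """True if a Pareto-dominates b: no worse in every objective, strictly
    better in at least one. Objectives: minimise cost, maximise reliability."""
    cost_a, rel_a = a
    cost_b, rel_b = b
    return (cost_a <= cost_b and rel_a >= rel_b
            and (cost_a < cost_b or rel_a > rel_b))

def pareto_front(population):
    """Return the non-dominated subset of (cost, reliability) individuals."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

random.seed(0)
# Hypothetical population: cost in [0, 10] (lower is better),
# reliability in [0, 1] (higher is better).
pop = [(random.uniform(0, 10), random.uniform(0, 1)) for _ in range(50)]
front = pareto_front(pop)
# No member of the front is dominated by anyone in the population.
assert all(not dominates(q, p) for p in front for q in pop)
```

An NSGA-style algorithm would then rank the population by successive Pareto fronts instead of by a single scalar fitness, which would also expose the spread of solutions rather than just the best performer.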
--George McC 04:11, 10 January 2007 (PST)
Not got any particular comments on your paper (except to congratulate you on it), but I do have a point that I think is worth making with regard to the use of GAs etc.
I found that when I used a GA to tackle a scheduling problem, one of the most beneficial things was getting the engineers to sit down and define what constituted a "good solution" in order to arrive at the fitness function. Prior to that they had been messing about with candidate schedules armed with little more than a "gut feel" for what constituted a good one.
It need not be a complicated fitness function, although it is often necessary to build a multi-objective algorithm. Even with simple "goodness" measures, the intuitive solutions can still be very far from the optimum.
Matthew again: The extra fitness function in my PowerPoint is a result of talking to an ex-AT&T engineer. It was most illuminating. Obviously I could extend this much further... this process is a higher level of adaptation, in which we evolve the fitness function and parameters themselves.
I don't have a background in cost models, so these are just general thoughts on the Lane paper: I couldn't find the previous paper on COSOSIMO parameter definitions (a detailed reference isn't given), which at times made it difficult to understand this paper, which mainly reads as a supplement to it. As a result there is no real discussion of previous / related work on cost models, which I would have liked. In qualitative work of this nature, I feel it is important to give examples of, or references that distinguish between, levels (e.g. between a high level of trust and a medium level of trust). In Table 5, what is meant by "multiple similarities in language and expertise"? It seems to suggest that a shared language is good (I agree this is useful for SoS development), but also that expertise should be more homogeneous than I think is necessary or useful.
Thoughts on the Kuras paper: I liked this paper and think it's worth highlighting some of the key points, but I have a few bones to pick / thoughts to add first:
- The authors state that "Explicitly noting overlaps in certain conceptualizations at multiple scales has been given the label of emergence". However, I would argue that the key to emergence is not overlap (which could merely mean A is contained in B) but rather all three conditions together (A ∩ B ≠ ∅, A ⊄ B, B ⊄ A), and the emergent part is the part of B not in A (that is, B \ A), where B is at a higher scale than A.
- Does the term Holon have any distinction beyond a selection function? It is worth pointing out that a Holon is itself a pattern (or at least is actualised and/or represented as patterns).
- Some thoughts on Holons: is it possible to decompose H^\nu as a rule giving the minimal set of patterns describing a system at the lowest level, plus a rule for the system at each other level? E.g. for a bicycle, a common rule is "a device which has two wheels and is used for transport", and at higher levels you can add "with a horn" or "with paint for rust protection", but there is always (?) a common, orthogonal component of the rule set.
- The definition of complex system includes "employs the processes of natural evolution and maturation", but I would argue this is part of the definition of a complex adaptive system. Examples of systems that I would call complex, but not adaptive: a static network of computers, a red blood cell (as it operates in its final state), Conway's Life. See Grisogono's paper on co-adaptation for more examples and a better discussion of this point. Of course, in order to do complex systems engineering, we must work with complex adaptive systems.
- "When TSE is used, development is not supposed to happen during the actual operation of a system." -- How much of this is due to the nature of the systems? It is hard to tear apart a car, or to normally replace software modules, but there are software systems that fit in a level of conceptualisation (and thus meet the author's definition of TSE)and where the components can be individually designed, but can be replace while the system is running (reference).
- On a related note, to do with the author's definition of TSE: I would call circuit design a form of TSE. However, before proceeding with design at one level of scale (or, more properly, scope) one can often calculate from the parameters which level of scope (lumped circuit theory, distributed circuit theory) to proceed in.
- In the figure where realization is plotted versus participation (why aren't the figures numbered?), I would argue that variation and selection are two components of one form of adaptation, and that the adaptation mentioned on the graph is really a state of being adapted plus ongoing adaptation (as the author later mentions). See my notes on definitions above. I'm also not sure what variation has to do with emergence in the complex systems set: when I evolve networks the variation is random, and nothing new emerges in the sense of being able to describe things at a different scale.
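To pin down the set-theoretic picture of emergence I argued for in the first bullet above, here is a toy sketch (the pattern names are arbitrary placeholders):

```python
# Patterns A (lower scale) and B (higher scale) overlap, neither contains
# the other, and the "emergent" part is what B adds beyond A.
A = {"p1", "p2", "p3"}          # patterns visible at the lower scale
B = {"p2", "p3", "p4", "p5"}    # patterns visible at the higher scale

assert A & B                     # A intersection B is non-empty
assert not A <= B                # A is not a subset of B
assert not B <= A                # B is not a subset of A

emergent = B - A                 # the part of B not in A
assert emergent == {"p4", "p5"}
```

Mere overlap (A contained in B) would make the first assertion pass but the second fail, which is exactly why I think all three conditions are needed.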
Some good things in the paper I wish to highlight:
- It is important to revisit Lamarckian theory, as I've seen many people (my former self included) who are under the impression that Darwinian theory is the only way of evolving. I'm ignoring other mechanisms here, like Hebbian learning, as I group them under adaptation more generally.
- I think the gardener analogy is very useful (though I do wonder whether all gardeners think of the environment as something separate, or as part of the system... is this distinction in the mind of the author due to a belief that gardeners think of it as separate, a confusion between the everyday and complex-systems uses of the term "environment", or a wish to use the distinction for the purposes of the analogy?).
- The section on rewards being used at specific scales makes me think of examples where rewards are given across multiple social scales (for example, the Nobel Peace Prize). I like a lot of the comments on reward criteria.
In general I agree with the Norman position paper, but I still think there is a place for describing and cataloguing standard tools; this is important in feeding knowledge back into our discussion of the manner and means of evolution. I would also replace the word "evolution" with "adaptation" throughout, to broaden it (see my point on definitions above, or read the Kuras paper).
Thoughts on the Byrne position paper: From an Australian perspective, there has been a lot of fuss over purchasing blowouts and system integration, and I would say the public in general perceives a crisis. I think the way to deal with the problem of public / government non-acceptance of adaptive mechanisms is publicity of the efforts (e.g. the DARPA challenges) where this approach does work. It is probably worth writing out "publishing / subscription" and giving a more detailed description of RSS for those not familiar with it, with some examples of uses (e.g. news, comics, audio, video) to illustrate "disparate information systems".
Comments on Success or Failure of Adaptation paper:
- In section 4.1, on the success of a population of systems, it's interesting to try to tackle this as ecosystems evolve and species diverge... how does one define success over long periods of time?
- There is overlap as well as distinction between the MoS(P) (Measure of Success of Populations) measures.
- In the measures for the success of an individual system, you mention integrity and level of functioning, for which there are some interesting (albeit somewhat context-specific) measures in the Prokopenko paper.
- Overall, the paper is a good example of what the Norman position paper is talking about: how a discussion of processes of adaptation is needed first.
Thoughts on the Co-adaptation paper:
- The main difference I see between Grisogono's requirements for CAS and Kuras's predicates for biological evolution is Kuras's point on superfecundity. I don't think this is a strict requirement, but it may speed up evolution once selection occurs.
- Her discussion of fitness and selection for learning reminds me of this talk I attended. I don't necessarily agree with everything in it, but it raises some interesting points, and I do think emotions play a big role: emotions are an evolutionary adaptation that promotes individual adaptation.
- On the discussion of agility, and adapting to adapt, I am reminded of this work on selectable "hypermutation" (an increased mutation rate) when pathogens face challenging environments. It's worth pointing out that the immune system performs a similar trick.
- One of the key points for complex systems engineering is: "the Defining-success level of adaptation is going to be a much slower cycle than the first three levels, and over longer periods of time can steer the direction of operation of the first three levels towards regions of outcomes which align better with what will actually be judged as success in the longer term." I think the trick in complex systems engineering will be to steer this operation.
- Another key point is: "symptoms of the problem may reside at one level, contributing mechanism at another, while the most effective levers of intervention for resolving the problem may in fact be at different levels again."
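The selectable hypermutation mentioned a couple of bullets above can be sketched as a simple rule in a GA setting. This is a hypothetical sketch, not taken from the work cited: the rule, names, and numbers are mine. The idea is just that the mutation rate is boosted while fitness stagnates:

```python
import random

def mutate(genome, rate):
    """Flip each bit of a 0/1 genome independently with probability `rate`."""
    return [bit ^ (random.random() < rate) for bit in genome]

def adaptive_rate(base_rate, stagnant_generations, boost=8.0, threshold=5):
    """Hypothetical stress response: boost the mutation rate once fitness has
    stagnated for `threshold` generations, mimicking selectable hypermutation."""
    return base_rate * boost if stagnant_generations >= threshold else base_rate

assert adaptive_rate(0.125, 2) == 0.125   # normal conditions: low mutation
assert adaptive_rate(0.125, 7) == 1.0     # under stress: hypermutation
```

The interesting question raised by the paper is that the boost itself (and its trigger) can be subject to selection, i.e. adapting the capacity to adapt.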
The Maier paper purports to extend the definition of complexity, but seems to be more about the ways in which complex systems are engineered. So in a way I agree with George McConnell, who asks to what extent these are causes of complex systems. The strict definitions (there are many) of complex systems are necessary to distinguish complex systems from complicated systems. It is not clear how the definitions are changed by this paper, which focuses more on the way complex systems are built. It does give a useful comparison of the ways complex systems and traditional systems are typically engineered.
To pick up on a fault I perceive in the tables: I can't think of a definition of complexity I've seen that wouldn't include the Internet (I'm talking here purely of the hardware and communications software involved). Yet the Internet was built with millions of sponsors, like me, who pay for / subsidise the cost of links and nodes. So this is an example of a system with many sponsors who do have money, in contrast to all of the tables. Maier states that "This is because engineering is associated with the development effort, not the system itself", which seems at odds with parts of my engineering degree, which covered everything from the way economic systems establish the price of engineered systems, and the legal implications if your system doesn't work, through to maintaining systems once built.
I liked the Kreitman paper a lot. The key idea seems to me to be about changing the normal perception of control in a business to be more like the beautiful boat analogy presented in the paper. Alignment of measurement systems in the Kreitman paper could be framed in terms of Grisogono's co-adaptation paper, which indicates the role of levels and time scales in doing this. Hierarchy could be discussed more in the "Creating Reliable Environments" section, as it influences and is influenced by some of the points raised (hierarchies influence communication, and vice versa); hierarchy changes should be adaptations to new organisational challenges. One small fault to find with the paper: what statistics are out there on management / organisational techniques? For example, how often has TQM worked / not worked / made no difference? Somewhat unrelated to the paper: isn't the "Kreitman conjecture" just a restatement of Heisenberg's uncertainty principle?
A lot of nice ideas here. I spent a bit of time after reading this paper trying to envisage what a Swarm VM would look like. A VM for adaptation has been done before: Tierra. I guess a VM for Swarm would involve some interesting mechanisms for evolving. Which brings me back to the topic of patterns. It would be nice in this paper to see some examples of UML, SysML et al., both to see what's on offer for CS/CAS/CSE and to see what the new structural abilities are, and how they are limited.
- Horowitz writes: "successfully manage potential instability of system design requirements during development" for large-scale systems, which I think is a key point for complex systems engineering (related to, but slightly distinct from, Grisogono's points on managing variation).
- In Section 4, I'm not sure that points (i)-(v) relate to complex systems per se; I'm sure there are many complicated systems with these properties. They are, however, typical of the way complex systems are developed, so from that perspective they are useful.
- "Another interesting aspect of the complex system is that new functions and capabilities can emerge in a manner that depends on the integrated contributions of individual developing organizations, possibly resulting in outcomes that were never anticipated (referred to as emergent)." -- I think it's worth clarifying here that the complex system is the organisation. The emergence that occurs may not necessarily be in the systems produced, eg. I wouldn't consider a lumped electrical circuit to normally have any emergent behaviours, but it may be a solution that could not have been easily predicted until it emerges from group interactions (in several cases I can think of, software implementations were tried first).
- The SEALS architecture (Fig. 1) could be extended by showing influences from another system on each of the various parts, e.g. the self-evaluation is in the context of how the system interacts with other systems.
- "An integrated economic analysis can help define the selection of measurement and analysis capabilities..." -- I think it's worth pointing out here that economic systems are complex adaptive systems, with a feedback loop between the system and its economic environment, and thus the economic analysis needs to be aligned to match (or track within some margin) the evolution of the economic system.
- The hierarchy in Figs. 2 and 3 needs clarifying (perhaps in extended captions): as we move down the links, in places the figures appear to show classes (e.g. surface water is a class that contains lakes, rivers, etc., but not groundwater), and in other places levels of risk (e.g. chemical contamination is arguably less of a risk than biological contamination). If it is in terms of risk, then surely climate and seasonal variation is more of a risk than mountains and plains? Maybe my perspective is biased somewhat from living in Adelaide (and also from having seen similar issues in New Mexico in the US).
The Braha paper is an interesting look at the network structures associated with product development and product design. It attempts to answer (among other questions) "Which patterns of information connectivity lead to better performance?" and "What are the patterns of information connectivity observed in real-world large-scale PD organizations?" To me, the paper somewhat answers the latter question (with a very limited number of sample organisations), and to the first question puts forward some conjectures that seem convincing but really need further investigation. I'm not sure that the cutoff differences observed reflect bounded rationality (in decision making); I agree more with the comments that they may reflect bounded information processing. Whether this is a good or bad thing may depend on context. Similarly with the author's comment that "PD complex networks exhibit the `small-world' property, which means that they react rapidly to changes in design status", which relates to issues raised in the Horowitz paper. This paper reminds me that I should really apply some of these network measures to my own evolved networks.
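As a reminder to myself of what those measures involve, here is a minimal sketch (pure Python; the graph sizes and shortcut count are arbitrary) of the small-world effect the authors rely on: adding a handful of random shortcuts to a ring lattice sharply shortens the characteristic path length, which is the property they connect to rapid reaction to changes in design status:

```python
import random
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path length over all ordered pairs (BFS from each node)."""
    total = pairs = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring(n, k=2):
    """Ring lattice: each node linked to its k nearest neighbours on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i] |= {(i + j) % n, (i - j) % n}
    return adj

random.seed(1)
lattice = ring(100)
rewired = {u: set(vs) for u, vs in lattice.items()}
for _ in range(20):  # add a few random shortcuts
    u, v = random.sample(range(100), 2)
    rewired[u].add(v)
    rewired[v].add(u)

# The shortcuts sharply reduce the characteristic path length ("small world").
assert avg_path_length(rewired) < avg_path_length(lattice)
```

The clustering coefficient, degree distribution cutoffs and so on could be computed in the same spirit, which is what I have in mind for my evolved networks.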
P(paper by Prokopenko Boschetti and Ryan is brilliant | background in information theory) = 1. I'm not sure how useful this is to those without a background in information theory, although I think it makes a good attempt at explaining the information theory in plain, yet precise language. Which brings me to the first of a couple of points about the version that I got from the Symposium wiki:
- Equation 5 is missing a minus sign. If you use the identity log(b/a) = log b - log a inside <...>, then to match up with Equation 6 (as it should) it needs a minus sign. Thinking about Equation 5 alone also makes this clear: if your probability distribution for the future given the past approaches the probability distribution for the future alone, then you have a lot of uncertainty about the future, and taking logarithms gives you a number approaching 0 from below. As you gain more information (predictability) about the future, the ratio falls towards 0, so taking logs gives you a number approaching negative infinity. So as the equation stands, more predictability means a more negative number, and you need a minus sign to correct this. The minus sign is often left off in this field, but here I think it makes sense, since you are trying to define predictability in terms of information, rather than strictly information.
- In Equation 7, the limit given just below it for h_\nu should have L approaching infinity.
Speaking of L going to infinity: if it does, then the CSSR algorithm (referenced in 62 and 63 of the paper, and discussed by me here) gives a reconstruction which approaches the system. So subsections 3.2.2 and 3.2.3 are (I believe) really the same thing, with the advantage of the method in 3.2.3 being that it can cope with n-dimensional series, and the advantage of 3.2.2 being that it gives you a nice diagram of the states in the system and allows you to reconstruct statistically identical sequences (for the statistics measured). On entropy, it is worth mentioning that base 2 gives units of bits, and is not always the most useful base to use, although of course different bases give entropy measures that differ only by a constant ratio. I'm not sure if there are any real-world uses that would give an infinite complexity.
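Both points (the sign convention and the choice of base) can be checked numerically. A small sketch with a toy joint distribution; the distribution and the function name are mine, not from the paper:

```python
import math

def cond_entropy(joint, base=2):
    """H(Future | Past) = -sum_{p,f} P(p, f) * log P(f | p).
    `joint` maps (past, future) symbol pairs to probabilities."""
    marg_past = {}
    for (p, _f), pr in joint.items():
        marg_past[p] = marg_past.get(p, 0.0) + pr
    return -sum(pr * math.log(pr / marg_past[p], base)
                for (p, _f), pr in joint.items() if pr > 0)

# Toy binary process in which the past is somewhat informative about the future.
joint = {("a", "a"): 0.4, ("a", "b"): 0.1,
         ("b", "a"): 0.1, ("b", "b"): 0.4}

h_bits = cond_entropy(joint, base=2)
h_nats = cond_entropy(joint, base=math.e)

assert h_bits >= 0                                 # minus sign keeps entropy non-negative
assert abs(h_nats - h_bits * math.log(2)) < 1e-12  # bases differ by a constant ratio
```

Dropping the leading minus sign would make the same quantity non-positive, decreasing as predictability grows, which is the sign problem I'm pointing at in Equation 5.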
Some other points:
- There are some interesting links between Kauffman's NK networks (and other classes of switching networks) and Wolfram's classes of CA behaviour, varying with N and K, that you could mention or refer to.
- Some of the formulae in Section 5.2 come from Shalizi & Shalizi and Haslinger, and this needs to be referenced (from reading that section, it sounds like they all come from Correia).
- Use "Chaitin-Kolmogorov" (preferred) or "Kolmogorov" entropy consistently. Where you first mention this, you should also mention Universal Turing Machines.
- You state that "Adaptation is a process where the behaviour of the system changes such that there is an increase in the mutual information between the system and a potentially complex and non-stationary environment". However, in endosymbiosis there appears to be a decrease in mutual information: once the genes in the symbiote can rely on the genes in the host, some genes in the symbiote are lost. I'm not sure if the changes elsewhere in the symbiote are enough to counter this.
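For anyone wanting to play with the NK-network connection mentioned above, a minimal random Boolean network in Kauffman's style is easy to set up (the sizes and seeds here are arbitrary). Every trajectory in a finite deterministic network must eventually fall onto an attractor cycle, and varying K moves the typical behaviour between ordered and chaotic regimes, which is where the comparison with Wolfram's CA classes comes in:

```python
import random

def random_boolean_network(n, k, seed=0):
    """Kauffman-style random Boolean network: each node reads k randomly
    chosen inputs through a random Boolean function (a 2**k lookup table)."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    def step(state):
        return tuple(
            tables[i][sum(state[inp] << j for j, inp in enumerate(inputs[i]))]
            for i in range(n))
    return step

def attractor_cycle_length(step, state, max_steps=10000):
    """Iterate until a state repeats; the gap between visits is the cycle length."""
    seen = {}
    for t in range(max_steps):
        if state in seen:
            return t - seen[state]
        seen[state] = t
        state = step(state)
    return None

step = random_boolean_network(n=10, k=2, seed=42)
rng = random.Random(1)
start = tuple(rng.randint(0, 1) for _ in range(10))
cycle = attractor_cycle_length(step, start)
# With only 2**10 states, a deterministic trajectory must hit a cycle.
assert cycle is not None and cycle >= 1
```

Sweeping k from 1 upwards and watching the cycle-length statistics is a quick way to see the ordered-to-chaotic transition Kauffman describes.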
I want to abstract away from this paper and the model a little, and talk a bit about modelling in general, with this paper as an example, and including some ideas from complex systems engineering:
Firstly, and simplest to add: it's useful to have a table listing all of the symbols used, in one place. Secondly, it's not just important to define terms; it's important to define terms and equations in language that people from a general background, and in particular specialists in the field, can understand. This paper would benefit from discussions with social scientists, but I know many who would balk at many of the equations. Why is a logistic used for sympathy? Why not another function mapping to [-1,1]? Include a discussion, and probably a graph, of what it looks like, why the 0.5 is there, etc.
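On the logistic question: one reason to demand a justification is that a logistic rescaled to (-1, 1) is exactly tanh at half the slope, so the choice among sigmoid-shaped maps deserves discussion rather than assertion. A quick check (the function name is mine; I am not claiming this is the paper's exact parameterisation):

```python
import math

def logistic_pm1(x):
    """A logistic curve rescaled from (0, 1) to (-1, 1)."""
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

# The rescaled logistic is exactly tanh at half the slope:
# 2/(1 + e^(-x)) - 1 == (e^x - 1)/(e^x + 1) == tanh(x/2).
for x in [-3.0, -0.5, 0.0, 1.0, 4.0]:
    assert abs(logistic_pm1(x) - math.tanh(x / 2.0)) < 1e-12
```

So "logistic versus tanh" is not a real modelling choice at all; the substantive choices are the slope and offset (e.g. the 0.5), and those are what need justifying.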
It is nice to see a diagram of the structure of the model. The points raised in Burkhart's paper and Webb's paper are valid for papers like this one: equations and word descriptions make it too hard to see and reproduce the structure. With software tools, it is possible to take models written (drawn) in these modelling languages and reconstruct the model in software, so that the results from modelling can be reproduced and verified.
This gets back to the point that George McC made (including my addition) in discussing my paper above: how did you select the parameters? I don't think a parameter sweep (as identified in the paper) is appropriate; rather, you should evolve these (or detail how you learned which parameters to use). With a model, how does one best constrain the parameter space and the level of detail of the equations appropriately? I believe the answer is to consult as widely as possible, which makes explaining the model the most important thing.
The best quote from the paper is: "...we do not advocate the use of simulation as a substitute for careful empirical analysis. Rather, our objective is to develop an exploratory model...", which I strongly agree with.
More interesting discussion on modelling and science can be found here.
The paper by Browning is a good partner to the Braha paper, and a lot of the Browning paper could be studied in the framework of the Braha paper. The "design structure matrix" is also used in network analysis, where it is called an "adjacency matrix". A few other comments:
- Windows is a family of products geared at different fields (environments), and different competitors challenge it in different fields (BSD in web hosting, Linux in the server room, Mac OS X on the desktop). More of this could be discussed, particularly the changes to the architecture needed to add and configure different functionality.
- I think more research needs to be done on what type of hierarchy adapts best in different contexts. I'm not convinced by a few case studies that all structures should be as flat as people think (or should sit at some particular point between flat and strongly hierarchical). The structure in all the cases discussed (Microsoft, Army, Google) is clearly changing.
- Agility is not necessarily the same as adaptability.
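To make the DSM / adjacency-matrix correspondence I mentioned above concrete, a small sketch (the task names and matrix entries are invented for illustration): the matrix can be read with standard network tools, and below-diagonal entries in a time-ordered DSM mark the feedback loops that sequencing methods try to minimise:

```python
# A design structure matrix read as an adjacency matrix: dsm[i][j] = 1 means
# task i passes information to task j. Tasks are listed in intended execution
# order; all names and entries are illustrative.
tasks = ["spec", "design", "build", "test"]
dsm = [
    [0, 1, 0, 0],  # spec feeds design
    [0, 0, 1, 1],  # design feeds build and test
    [0, 0, 0, 1],  # build feeds test
    [0, 1, 0, 0],  # test feeds back into design (rework loop)
]

# Standard network measures apply directly.
out_degree = [sum(row) for row in dsm]
in_degree = [sum(dsm[i][j] for i in range(len(dsm))) for j in range(len(dsm))]
assert out_degree == [1, 2, 1, 1]
assert in_degree == [0, 2, 1, 2]

# In a time-ordered DSM, a below-diagonal entry (a later task feeding an
# earlier one) marks a feedback loop, which is what sequencing tries to reduce.
feedback = [(i, j) for i in range(len(dsm)) for j in range(len(dsm))
            if dsm[i][j] and i > j]
assert feedback == [(3, 1)]
```

Reading the DSM this way is exactly what would let the Braha-style network measures be applied to the Browning material.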