From CSULA CS Wiki

This paper (presented at ICCS 2006) discusses the factors that influence the degree of success or failure of an adaptive mechanism (for both populations of systems and for single systems). The relevance to complex systems engineering lies in the conjecture that for sufficiently complex systems, exploiting adaptation is the only viable way (or, less assertively, *is* a viable way) of developing a design that works in its context. If we wish to do this we need to know how to make adaptation work successfully. Hence my interest in understanding what the factors are that determine its success, and how those factors need to 'cohere' - i.e. they can't be set or tuned independently of each other - again, this is a conjecture requiring further exploration.

In the Notes box I have also offered a second paper (presented as an invited paper at SPIE 6093 Complex systems in Dec 05 in Brisbane) - this is an earlier one which lays the groundwork and context for the main paper, and describes the conceptual framework for adaptation in more detail. It is a little dated now (about a year old) since I've had valuable critical feedback from a number of people and my ideas have developed somewhat - so there will be another paper to write - but not in time for this teleconference!

Together these two papers lay out my agenda for how to move forward with CSE.


--George McC 15:11, 8 January 2007 (PST)

Hello Anne-Marie,

I enjoyed reading your paper - I have done some work on GAs and also work in the defence domain, so it rang plenty of bells. Just a couple of comments that might spark some interest.

  1. Your paper is aimed at the use of the deployed system rather than at the engineering of the deployed system (I hope the distinction is clear) - do you see any difference between the two as far as adaptation is concerned? I would have expected (and this is an untested idea) a greater degree of 'control' within the engineering environment than in the "real world", and therefore an increased ability to experiment (or at least to control the experiments).
  2. My other point concerns your measures of success and failure. Are these always just "two sides of the same coin", or is there something more subtle here? I am thinking in terms of MCDA-type modelling, where failure would be a lack of success (and vice versa). Is that reasonable, or are there parameters which are indicative only of failure or only of success (i.e. the absence of one does not imply the other)?
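To make the MCDA question concrete, here is a toy weighted-sum sketch (the indicator names and weights are my own illustrative assumptions, not from the paper). It scores success and failure from separate indicator sets, so one score is not automatically the negation of the other:

```python
# Toy MCDA-style scoring: success and failure measured from *different*
# indicator sets. All names and numbers are illustrative assumptions.
success_indicators = {"objective_secured": 1.0, "schedule_met": 0.0}
failure_indicators = {"casualties": 0.0, "collateral_damage": 0.0}

success_weights = {"objective_secured": 0.7, "schedule_met": 0.3}
failure_weights = {"casualties": 0.6, "collateral_damage": 0.4}

def score(indicators, weights):
    # Simple weighted sum over whichever indicators this measure uses.
    return sum(weights[k] * indicators[k] for k in weights)

s = score(success_indicators, success_weights)
f = score(failure_indicators, failure_weights)
# Here success is only partial (s = 0.7) yet no failure indicator fired
# (f = 0.0): a shortfall in success did not register as failure, because
# the two measures draw on disjoint parameters.
```

In a model like this, success and failure are "two sides of the same coin" only if the two indicator sets coincide (or are forced to sum to a constant); with disjoint sets, an outcome can score low on both.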



Hi George - Anne-Marie here. Thanks for your comments. I'm glad you enjoyed the paper.

Re your first point: although my overall goal is to apply what I learn about adaptation to the 'engineering' of complex systems (I put engineering in quotes because I anticipate it may be more akin to continuously 'growing' the system than to a traditional 'design, build, test, deliver' process), in fact my focus in this paper is on first learning what makes adaptation succeed or fail in the natural world, so I guess I wouldn't use the term 'deployed' at all.

However, I agree with your distinction between the 'development/engineering' phase and the 'use' phase, and I believe adaptation is relevant to both, but in significantly different ways. In the natural world we observe three distinct phases, each with its own adaptation processes. Specifically, looking at a particular living organism, it is the product of:

[1] the evolutionary history which produced its design - the main process here is obviously evolution,

[2] the development of the individual from a single cell - here we see self-organisation and self-assembly, as well as complex adaptive interactions with the context in which the organism is growing, and

[3] the lifelong adaptation and learning of the individual in its environment, whereby it copes with the stresses of its environment and learns to make the best of what [1] and [2] have equipped it with.

These ‘evo-devo’-learning phases correspond to the design-build-use phases in engineered systems, and I think we can learn from natural systems about what kinds of choices are best made in which phase, and why, and how to improve how we perform each phase.

You expect that an engineering environment permits ‘more’ experimentation – and imply, therefore, ‘better’ adaptation. This is precisely the kind of question I hope to illuminate with this work. There are many parameters that characterise any particular instantiation of adaptation, and the number, range and scope of variations tried (= experiments) are just a subset of them. We need to understand where we get the greatest quantum of improvement in the success of the system – and it may not be by increasing the number of experiments, but perhaps by changing the nature of the experiments, or how we evaluate the impact on success proxies, or by a better choice of those proxies, or how we select which variations to retain, etc. I agree that a controlled environment allows the taking of risks that one could not afford in the real world – however, at the price of reduced confidence in the transferability of the results to the real world. I could go on – this is a deep and complex subject – but that's probably enough for here, and I thank you for raising it.
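The parameters I have in mind can be made concrete with a minimal adaptation sketch (a toy of my own, not from either paper; all names and numbers are illustrative assumptions). Each knob below is one of the tunable factors: how many variations are tried per step, the scope of each variation, the success proxy used to evaluate them, and the rule for retaining a variation:

```python
import random

# Toy adaptive search. The distinct, independently tunable knobs are:
#   n_trials  - how many variations (experiments) are tried per step
#   step_size - the scope/nature of each variation
#   proxy     - the success proxy used to evaluate outcomes
#   (retention rule - here: keep the best trial only if it improves)

def proxy(x):
    """Success proxy: closeness to a target the adapter cannot see."""
    target = 3.7  # an illustrative assumption
    return -abs(x - target)

def adapt(x0, n_steps=50, n_trials=5, step_size=0.5, seed=0):
    rng = random.Random(seed)
    x = x0
    for _ in range(n_steps):
        # Generate variations of the incumbent (the "experiments").
        trials = [x + rng.uniform(-step_size, step_size)
                  for _ in range(n_trials)]
        # Retention rule: keep the best-scoring trial, but only if it
        # actually improves on the incumbent (a greedy choice).
        best = max(trials, key=proxy)
        if proxy(best) > proxy(x):
            x = best
    return x

result = adapt(0.0)  # ends up near the target under these settings
```

The point of the sketch is that increasing `n_trials` is only one way to improve the outcome; changing `step_size`, swapping in a better `proxy`, or relaxing the greedy retention rule are separate levers, and they interact rather than tune independently.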

Your second point is also an excellent one, and you have picked up on the fact that mentioning both success and failure implies that in general one is not simply the negation of the other. There are many factors here. One is the asymmetry of success and failure – usually there are many more ways to fail than to succeed. Another is the incomplete specification of success: with military missions, success is usually defined in terms of measurable objectives, explicitly but only incompletely – there are many implicit aspects which are understood by virtue of shared doctrine and knowledge. Naïve attempts to achieve the explicit success measures at the expense of all else are almost bound to produce unacceptable outcomes on the many implicit measures. Moreover, while I suppose it is often true that non-achievement of a desired success measure will be equated with failure, this is not always the case – e.g. one might instead achieve a better outcome through some other innovative strategy. This raises another factor: the ‘level’ at which success and failure are defined, measured and assessed. In the above example one could argue that if success had been articulated at a higher level, then the issue would have reduced to an adaptive choice of strategy with no change to the overall success measure. Again – room for much more discussion – but better to leave it for now.