
Ken Webb notes: These are some first thoughts while reading papers from the other participants. I'm thinking in terms of (1) ideas from these papers that I can use in my own research, and (2) apparent similarities in concepts. These comments may or may not relate to each author's main theme. I'll try writing this in a blog-like way, and may write things before I have finished reading or understanding a given paper, as a type of active engagement.

January 1, 2007

North papers (Parker et al. - Agent-Based Meta-Models; Howe et al. - Containing Agents: Contexts, Projections, and Agents)

There is much relevant material for me in both papers. The systems described in my paper use a tool chain consisting of a UML tool with UML as the modeling language, XSLT for model transformation, and Xholon for model execution. Parker et al. use a parallel tool chain, consisting of a domain-specific language (DSL) modeled using the Eclipse Modeling Framework (EMF), openArchitectureWare (oAW) for model transformation, and Repast S for model execution.

December 14, 2006

Boschetti paper (Prokopenko, Boschetti and Ryan - An information-theoretic primer on complexity, self-organization and emergence)

I'm not very familiar with the ideas of information theory, but I do like the concept of using an established formal discipline to define terms in a less vague way. The subject of my own paper, UML modeling, could also be used to help define, or at least gain insights on, the same concepts. UML is not as formally defined as information theory, but modeling is a good way to test out ideas because it makes them more concrete and therefore hopefully less vague.

They reference a paper by Bialek et al. - "it only takes a fixed number of bits to code either a call to a random number generator or to a constant function". (p.7)

How do the ideas of the Boschetti paper apply to UML finite state machines (FSM)? The simplest possible FSM can generate both random and constant sequences. It would have a single state with a UML do activity, or a regular timeout with a self-transition. During each time-step, it would execute something like either of the following two statements:

System.out.println(0.1234567); // constant output

System.out.println(random.nextDouble()); // random output
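
As a runnable sketch of this trivial machine in Java (the class and method names here are my own illustration, not actual Xholon or UML API):

import java.util.Random;

// A single-state machine whose "do activity" fires once per time step,
// emitting either a constant or a random value.
public class TrivialFsm {
    private final Random random = new Random();
    private final boolean constantOutput;

    public TrivialFsm(boolean constantOutput) {
        this.constantOutput = constantOutput;
    }

    // Executed on each time step; the self-transition returns
    // the machine to its single state.
    public void act() {
        if (constantOutput) {
            System.out.println(0.1234567); // constant output
        } else {
            System.out.println(random.nextDouble()); // random output
        }
    }

    public static void main(String[] args) {
        TrivialFsm fsm = new TrivialFsm(false); // the random variant
        for (int step = 0; step < 10; step++) {
            fsm.act();
        }
    }
}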

To generate sequences between these two extremes would require a less trivial FSM.

My paper also discusses molecular machines (MM) and compares them with FSM. It would be interesting to explore the concepts presented in the Boschetti paper by systematically building FSM and MM executable models, instead of using Universal Turing Machines.

(Matt Berryman here: they reference the CSSR algorithm by Shalizi and Klinkner, which builds recursive hidden Markov models from time series in a systematic way using statistics.)

Real molecular machines, as found in biological organisms, might not find it as trivial to output regular or random sequences.

I'm thinking about the use of "size of program" as a metric. In the case of my own Xholon modeling framework, a lot of code has been pushed into the framework. The metric for Xholon might be whether the functionality is sufficiently general or well understood that it can be pushed into a framework or common library. "Random output" and "constant output" are both relatively easy to incorporate in a common framework, while outputs that fall between these two extremes are more difficult to generalize. Boschetti et al. use the term "predictive information". For a tool like Xholon, things that are predictable are easier to incorporate in a framework and will therefore allow for shorter end-user programs.

I haven't finished reading this paper yet. I hope to spend more time studying the interesting Figure 1, and thinking about how it relates to my own work.


Braha paper (Braha and Bar-Yam - Untangling the information web of complex system design)

This paper describes various types of complex networks, including small-world and scale-free networks. The authors are interested in finding "a bridge between structural properties of product development networks and their resulting performance measures".

In terms of my own interests, networks can be superimposed on trees in both UML 2 and the Xholon tool. Xholon can already use genetic programming to explore automatic generation of portions of the tree structures. Given the extent of current research going into complex network theory, it may also be possible to get automated assistance when generating the superimposed network.

The UML 2 concept of composite structure is used to produce a composite structure hierarchy, which is a tree of object (or role) nodes. The UML 2 concept of port is used to specify potential lateral connections between nodes in the tree, through which nodes can interact with each other. A set of connectors between ports creates a network.
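
A minimal Java sketch of this tree-plus-ports structure (the names are my own illustration, not the actual UML 2 or Xholon API):

import java.util.ArrayList;
import java.util.List;

// A node in a composite-structure tree. The children form the
// containment hierarchy (tree); the ports form lateral links
// that superimpose a network on that tree.
public class TreeNode {
    final String name;
    final List<TreeNode> children = new ArrayList<>(); // tree structure
    final List<TreeNode> ports = new ArrayList<>();    // superimposed network

    TreeNode(String name) { this.name = name; }

    void addChild(TreeNode child) { children.add(child); }

    // A connector binds a port on this node to some other node in the tree.
    void connectPort(TreeNode target) { ports.add(target); }
}

Connecting ports between otherwise unrelated nodes in the tree is what creates the superimposed network.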

The download of the Xholon open-source software includes several grid-based examples - Game of Life, Ant Foraging, Ant Trail. Each grid is specified first as a simple tree structure, with a regular network superimposed on it. This is very easy to specify because of the regularity of grid (lattice) structures. Random networks could also be specified rather easily. The authors state that "many real networks are neither completely ordered nor completely random", which is the situation with most of the Xholon examples.

Metrics

Complex network theory could be used to derive metrics for networks (and perhaps trees) produced using UML 2 and Xholon. The metric could simply be how well a network statistically conforms to some known network type.

Network design

Networks could be automatically superimposed on a tree by specifying some set of network parameters. Xholon currently uses two parameters, grid vs. torus (wraps from left to right and from top to bottom), and Von Neumann (4 neighbors) vs. Moore (8 neighbors), when producing grid networks.
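
The neighbor-wiring logic behind those two parameters could be sketched in Java along these lines (an illustration of the idea, not actual Xholon code):

// Wiring a grid network over cells, parameterized by torus wrap
// and by neighborhood type (Von Neumann vs. Moore).
public class GridWiring {
    // Offsets for Von Neumann (first 4) and Moore (all 8) neighborhoods.
    static final int[][] OFFSETS = {
        {-1, 0}, {1, 0}, {0, -1}, {0, 1},  // N, S, W, E
        {-1, -1}, {-1, 1}, {1, -1}, {1, 1} // diagonals (Moore only)
    };

    // Returns the neighbor coordinates of cell (row, col) in a rows x cols grid.
    static java.util.List<int[]> neighbors(int row, int col, int rows, int cols,
                                           boolean torus, boolean moore) {
        java.util.List<int[]> result = new java.util.ArrayList<>();
        int count = moore ? 8 : 4;
        for (int i = 0; i < count; i++) {
            int r = row + OFFSETS[i][0];
            int c = col + OFFSETS[i][1];
            if (torus) {
                r = (r + rows) % rows; // wrap from top to bottom
                c = (c + cols) % cols; // wrap from left to right
            } else if (r < 0 || r >= rows || c < 0 || c >= cols) {
                continue; // plain grid: skip off-edge neighbors
            }
            result.add(new int[]{r, c});
        }
        return result;
    }
}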

The authors describe various structural properties of complex networks: (1) "average distance between two nodes", (2) "tendency of vertices to be locally interconnected or to cluster in dense modules", and (3) "the distribution of degrees of vertices". Any or all of these could possibly be used as metrics or as inputs to an automated design process. In the conclusion, the authors mention "an evolutionary algorithm involving minimization of link density and average distance" which "might suggest that an evolutionary process that incorporates similar generic optimization mechanisms ... might lead to the formation of a PD network structure with the small-world and truncated scale-free properties".
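
As a concrete illustration, properties (2) and (3) are straightforward to compute for a network represented as an adjacency list; this is my own sketch, not code from either paper:

import java.util.*;

// Simple metrics over an undirected graph: degree distribution
// (property 3) and local clustering coefficient (property 2).
public class NetworkMetrics {
    // adjacency: node -> set of neighboring nodes
    static Map<Integer, Integer> degreeDistribution(Map<Integer, Set<Integer>> adj) {
        Map<Integer, Integer> hist = new HashMap<>();
        for (Set<Integer> nbrs : adj.values()) {
            hist.merge(nbrs.size(), 1, Integer::sum); // count nodes per degree
        }
        return hist;
    }

    // Fraction of a node's neighbor pairs that are themselves connected.
    static double clustering(Map<Integer, Set<Integer>> adj, int node) {
        List<Integer> nbrs = new ArrayList<>(adj.get(node));
        int k = nbrs.size();
        if (k < 2) return 0.0;
        int links = 0;
        for (int i = 0; i < k; i++) {
            for (int j = i + 1; j < k; j++) {
                if (adj.get(nbrs.get(i)).contains(nbrs.get(j))) links++;
            }
        }
        return 2.0 * links / (k * (k - 1));
    }
}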

A research question: What statistically distinguishes the network connectivity patterns of the finite state machine (FSM) and molecular machine (MM) designs described in my symposium paper?

Both the Braha paper and the Boschetti paper agree that the interesting area lies between the realms of randomness and regularity.