
Here are some of the features that make the Ants Challenge an Artificial Intelligence (AI) problem rather than just a problem of designing software to do something.

AI in this context refers to traditional AI. The term is sometimes taken to include areas of application, such as data science or applications of artificial neural networks, that don't fit well into this framework.

This is not a definition of artificial intelligence. Most definitions of artificial intelligence say something about computers acting intelligently or in a human-like manner.

  • Norvig and Thrun define AI as the science of making computer software that reasons about the world around it.
  • MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) studies the most challenging problems of our lives, our work, and our world in an effort to unlock the secrets of human intelligence, extend the functional capabilities of machines, and explore human/machine interactions, applying that knowledge with a long-term lens to engineer innovative solutions with global impact.
  • Research in AI at Edinburgh spans knowledge representation and reasoning, the study of brain processes and artificial learning systems, computer vision, mobile and assembly robotics, music perception and visualization.

What follows are a number of somewhat more concrete characteristics that qualify problems as AI problems.

Incomplete information

Not all the information about the state of the world is known to an ants team. The game controller knows where all the food, hills, water, and ants are, but a team knows only what it sees or has already seen. Information gathered incrementally must be integrated to create a complete picture of the world. In this case combining information is not very difficult; even so, there is an information-integration task that must be performed. Another aspect of the challenge in the Ants problem is that the state of the world is constantly changing. Enemy ants move (and sometimes kill other enemy ants or your ants), food appears and is eaten, and much of this may happen out of sight of a team. It is therefore important for a team to revisit areas of the board even if they have already been seen.
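
To make the integration task concrete, here is a minimal sketch of one way a bot might fold each turn's observations into a persistent picture of the board and track which squares have gone stale. The grid representation and the names (WorldMap, integrate, terrain_at) are illustrative assumptions, not part of the contest starter kit.

    class WorldMap:
        """Persistent picture of the board, built up from partial observations."""

        def __init__(self, rows, cols):
            self.rows, self.cols = rows, cols
            self.terrain = [['?'] * cols for _ in range(rows)]    # '?' unknown, '.' land, '%' water
            self.last_seen = [[-1] * cols for _ in range(rows)]   # turn each square was last visible

        def integrate(self, turn, visible_squares, terrain_at):
            """Merge this turn's observations into the accumulated picture."""
            for (r, c) in visible_squares:
                self.terrain[r][c] = terrain_at(r, c)
                self.last_seen[r][c] = turn

        def stalest_squares(self, turn, limit=10):
            """Squares unseen for the longest time: candidates for a re-visit."""
            ranked = sorted(((turn - self.last_seen[r][c], (r, c))
                             for r in range(self.rows)
                             for c in range(self.cols)), reverse=True)
            return [loc for _, loc in ranked[:limit]]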

Multiple agents

Each team consists of multiple agents. Each agent moves separately, and its individual actions often matter to the other agents.

This is not an agent-based problem. In agent-based problems, each agent is controlled by an independent program. It's as if each agent had its own program running inside it. Agents are often able to communicate, but each agent makes its own decisions about what to do.

In this problem each team has a single central controller, which decides for each ant what it will do. This is typically easier than agent-based modeling: there is a central planner, which can make a plan for the entire collection of agents simultaneously and does not have to worry about coordinating agents that make their own independent decisions.

Even so, with multiple ants, each of which moves separately, this is still a difficult problem. The problem has a very large number of degrees of freedom in that each ant can typically move in one of four directions or not move at all. In addition, there is not a fixed number of ants; at each turn one must first determine which ants exist and hence what the options for taking action are.
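
The turn loop of such a central controller might look roughly like the sketch below. The helpers on the state object (my_ants, passable, value, issue_order) and the wrap-around arithmetic are stand-ins for the bot's own bookkeeping; they are assumptions for illustration, not the official starter-kit API.

    DIRECTIONS = {'N': (-1, 0), 'E': (0, 1), 'S': (1, 0), 'W': (0, -1)}

    def do_turn(state):
        """One turn of a central controller: plan for every ant that exists right now."""
        claimed = set()                       # destinations already promised to some ant
        for ant in state.my_ants():           # the set of ants is recomputed each turn
            best = None                       # staying put is the fifth option (no order issued)
            for d, (dr, dc) in DIRECTIONS.items():
                dest = ((ant[0] + dr) % state.rows, (ant[1] + dc) % state.cols)
                if state.passable(dest) and dest not in claimed:
                    if best is None or state.value(ant, dest) > state.value(ant, best[1]):
                        best = (d, dest)
            if best is not None:
                claimed.add(best[1])
                state.issue_order(ant, best[0])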

Low-level actions are required to achieve higher-level results

There are a relatively small number of operations that one can perform on the world. The most important is moving an ant; in fact, that's the only operation one can perform directly. But it's the other operations that matter:

  • Eat food to generate more ants.
  • Fight with and kill ants from other teams.
  • Immobilize another team by stepping on its ant hill.

These operations cannot be performed directly. If one thinks of the game engine as having an API, one would like to have access to operations of these sorts, but they are not exposed to the ants. All the ants can do is move in such a fashion that these higher-level operations are triggered.

The simplest case is eating food pellets. One doesn't eat a food pellet; one moves next to it, at which point the game engine credits that team with eating it. The same is true for the other operations: all one can do is move in such a way that the game will cause one of the higher-level operations to be performed.
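
A sketch of what that looks like from the bot's side: "eat food" never appears in the code, only a move order that puts an ant beside a pellet. The helpers (distance, first_step_toward, issue_order) are hypothetical conveniences for illustration, not engine operations.

    def move_toward_food(state, ant, food_locations):
        """'Eat food' expressed the only way it can be: as a move order."""
        if not food_locations:
            return
        target = min(food_locations, key=lambda f: state.distance(ant, f))
        step = state.first_step_toward(ant, target)   # e.g. the first square on a BFS path
        if step is not None:
            state.issue_order(ant, step)              # the engine performs the harvest once
                                                      # the ant ends the turn next to the pellet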

This indirection, although not very deep in this game, illustrates another feature typical of AI problems: one has access to the micro-states of the world and, by manipulating them, one attempts to achieve macro-level results.

It is necessary to recognize higher-level patterns based on lower-level information

Related to the micro/macro contrast is the fact that many AI problems require that one stand back from the trees and see the forest. That's often very difficult in that the program has access only to the trees and must construct whatever higher-level patterns it needs on its own. In the Ants problem a simple example is that we can look at a board and simply see which ants are closest to which food pieces and how to get from an ant's current location to a food target, even if that means following a path that turns a corner. It takes work (in most cases a breadth-first search, BFS) for a bot to do that.
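
The sketch below shows the kind of BFS involved, written against a bare grid of land ('.') and water ('%') rather than any particular starter kit. Given an ant's square and a set of target squares, it returns the first square to step onto along a shortest path to the nearest target, following the path around corners just as a human reader does at a glance.

    from collections import deque

    def bfs_first_step(grid, start, targets):
        """First square to step onto along a shortest path from start to the
        nearest square in targets (a set), or None if nothing is reachable."""
        rows, cols = len(grid), len(grid[0])
        if start in targets:
            return start
        frontier = deque([start])
        came_from = {start: None}
        while frontier:
            cur = frontier.popleft()
            if cur in targets:
                while came_from[cur] != start:        # walk back to the square just after start
                    cur = came_from[cur]
                return cur
            r, c = cur
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nxt = ((r + dr) % rows, (c + dc) % cols)   # the Ants map wraps around
                if grid[nxt[0]][nxt[1]] != '%' and nxt not in came_from:
                    came_from[nxt] = cur
                    frontier.append(nxt)
        return None

Because every move has the same cost, plain breadth-first search already yields shortest paths; the deque gives constant-time removal from the frontier, and the came_from map doubles as the visited set.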

Search of the information space is required

Virtually every AI problem involves search. A fundamental difference between traditional algorithms and AI programs is that with traditional algorithms one knows how to achieve a result and the software just goes ahead and does it. With AI one often doesn't know which result one wants to achieve or how to achieve it. All one can do is search the space for information about what to do or how to do it.

With most other software one pretty much knows what the software is going to do for any given input situation. Because AI involves search so fundamentally, one typically can't predict in advance what the result of executing the program will be. The program searches for a solution, and when it finds one, it performs that solution. This isn't to say that the program produces random results (except to the extent that it involves probabilistic reasoning). To find out what the program will do, one just executes it; but that is often the only way to find out.

In the case of the Ants Challenge the what-to-do possibilities are deciding among gathering food, exploring the map, fighting ants from other teams, attacking other teams' hills, and defending one's own hill. The primary how-to-do-it element requires breadth-first search.
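
One simple way to organize that decision is a priority scheme layered on top of the path search: each ant is given a what-to-do goal, and BFS then supplies the how. The ordering of the priorities and every helper name below are illustrative assumptions, not a prescribed strategy.

    def assign_tasks(state):
        """Pick a goal for each ant; a breadth-first search then realizes each goal."""
        assignments = {}
        for ant in state.my_ants():
            if state.own_hill_threatened():
                assignments[ant] = ('defend', state.my_hill())
            elif state.nearby_food(ant):
                assignments[ant] = ('gather', state.nearest_food(ant))
            elif state.enemy_hill_known():
                assignments[ant] = ('attack', state.nearest_enemy_hill(ant))
            else:
                assignments[ant] = ('explore', state.least_recently_seen_square())
        return assignments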