A wide-ranging discussion takes off from this point, touching on the links among analogical reasoning, arrows and functors, cybernetic images, iconic versus symbolic representations, mental models, systems simulations, and so on, and on just how categorically or contingently those functions are necessary to intelligent agency. All of these questions have enjoyed large and partly overlapping literatures for a long time now.
It is one question whether a regulator has “knowledge” of the object system and another whether that knowledge is embodied in the more specific form of a “model”. At this point we encounter a variety of meanings for the word “model”. In my experience the meanings divide into two broad classes, “logical models” and “analogical models”.
- Logical modeling involves a relation between a theory and anything that satisfies the theory, in practice either the original domain of phenomena the theory is created to describe or a formal object we construct to satisfy the theory.
- Analogical modeling involves a relation between any two things that have similar properties or structures or that satisfy the same theory.
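The two senses of “model” can be made concrete with a small sketch. The names and the toy “theory” below are illustrative assumptions, not anything from the original text: a theory is represented as a predicate on finite structures, a logical model is any structure satisfying it, and two structures satisfying the same theory stand to each other as analogical models.

```python
# Illustrative sketch (all names are hypothetical) of the two senses of "model".
# A "theory" here is just a predicate a finite structure must satisfy.

def is_group_like(elements, op, identity):
    """A tiny theory: closure, two-sided identity, and inverses on a finite set."""
    closed = all(op(a, b) in elements for a in elements for b in elements)
    has_id = all(op(identity, a) == a and op(a, identity) == a for a in elements)
    has_inv = all(any(op(a, b) == identity for b in elements) for a in elements)
    return closed and has_id and has_inv

# Logical modeling: a relation between the theory and a structure satisfying it.
Z2 = ({0, 1}, lambda a, b: (a + b) % 2, 0)       # integers mod 2 under addition
signs = ({1, -1}, lambda a, b: a * b, 1)         # {+1, -1} under multiplication

assert is_group_like(*Z2)      # Z2 is a logical model of the theory
assert is_group_like(*signs)   # so is {+1, -1}

# Analogical modeling: a relation between the two structures themselves.
# Since Z2 and signs satisfy the same theory, either can serve as an
# analogical model of the other.
```

The same pair of structures thus exhibits both relations at once: each is a logical model of the theory, and each is an analogical model of the other.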
It is possible for a regulator to have knowledge, competence, or a capacity for performance that exists in the form of a theory or other data structures without necessarily having either type of model on hand.
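A familiar example of model-free competence is a bang-bang thermostat. The sketch below is a toy illustration under assumed names and parameters: the regulator is a bare rule mapping sensed readings to actions, and nowhere does it represent or simulate the dynamics of the room it regulates.

```python
# Hedged sketch (names and numbers are illustrative): a regulator whose
# competence is a bare stimulus-response rule, with no internal model.

def thermostat(reading, setpoint=20.0, band=0.5):
    """Bang-bang rule: act on the sensed error alone."""
    if reading < setpoint - band:
        return "heat_on"
    if reading > setpoint + band:
        return "heat_off"
    return "hold"

# The room's dynamics live outside the regulator; we simulate them here
# only to show that the rule still holds the temperature near the setpoint.
temp, heating = 15.0, False
for _ in range(200):
    action = thermostat(temp)
    if action == "heat_on":
        heating = True
    elif action == "heat_off":
        heating = False
    temp += 0.3 if heating else -0.1   # toy plant, unknown to the thermostat

assert abs(temp - 20.0) < 2.0  # regulated without being modeled
```

The point is not that such rules are as powerful as model-based control, only that regulation can be achieved by structures far simpler than a mirror of the regulated system.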
There is little doubt that models of either sort are extremely useful when we can get them, but there are reasons to think the mirror of nature does not reach all the way down to the most primitive structures of adaptive functioning.