## Frankl, My Dear : 5

Putting all thought of the Frankl Conjecture out of our minds for the moment, let’s return to the proposition in Example 1 and work through its differential analysis from scratch.

### Example 1


Consider the proposition $p \land q \land r,$ in boolean terms the function $f : \mathbb{B}^3 \to \mathbb{B}$ such that $f(p, q, r) = pqr,$ as illustrated by the Venn diagram in Figure 1.

The enlargement $\mathrm{E}f$ of $f$ is the boolean function $\mathrm{E}f : \mathbb{B}^3 \times \mathbb{D}^3 \to \mathbb{B}$ defined by the following equation:

$\mathrm{E}f(p, q, r, \mathrm{d}p, \mathrm{d}q, \mathrm{d}r) ~=~ f(p + \mathrm{d}p, q + \mathrm{d}q, r + \mathrm{d}r).$

Given that $f$ is the boolean product of its three arguments, $\mathrm{E}f$ may be written as follows:

$\mathrm{E}f ~=~ (p + \mathrm{d}p)(q + \mathrm{d}q)(r + \mathrm{d}r).$
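A quick computational check of this formula, as a minimal sketch in Python, taking boolean addition over $\mathbb{B} = \{ 0, 1 \}$ to be exclusive disjunction, i.e. XOR; the names `f` and `Ef` are merely illustrative:

```python
def f(p, q, r):
    """f(p, q, r) = pqr, the boolean product of the three arguments."""
    return p & q & r

def Ef(p, q, r, dp, dq, dr):
    """Enlargement of f: evaluate f at the XOR-shifted arguments."""
    return f(p ^ dp, q ^ dq, r ^ dr)

# From the cell pqr, the null change (0, 0, 0) stays where f is true,
# while the change (0, 0, 1) moves to the cell pq(r), where f is false.
print(Ef(1, 1, 1, 0, 0, 0), Ef(1, 1, 1, 0, 0, 1))  # prints: 1 0
```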

Difficulties of notation in differential logic are greatly eased by introducing the family of minimal negation operators on finite numbers of boolean variables. For our immediate purposes we need only the minimal negation operators on one and two variables.

• The minimal negation operator on one variable $x$ is notated with monospace parentheses as $\texttt{(} x \texttt{)}$ and is simply another notation for the logical negation $\lnot x.$
• The minimal negation operator on two variables is notated with monospace parentheses as $\texttt{(} x \texttt{,} y \texttt{)}$ and is simply another notation for the exclusive disjunction $x + y.$
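Although only the one- and two-variable cases are needed here, the general minimal negation operator evaluates to true exactly when exactly one of its arguments is false. A small sketch under that reading, with the function name `minneg` chosen only for illustration:

```python
def minneg(*xs):
    """Minimal negation operator: true iff exactly one argument is false.

    On one variable this gives logical negation (x); on two variables
    it gives exclusive disjunction (x, y) = x + y.
    """
    return int(sum(1 for x in xs if not x) == 1)

# One variable: (x) = not x.
assert minneg(0) == 1 and minneg(1) == 0

# Two variables: (x, y) = x XOR y on all four assignments.
assert all(minneg(x, y) == (x ^ y) for x in (0, 1) for y in (0, 1))
```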

In this notation the previous expression for $\mathrm{E}f$ takes the following form:

$\mathrm{E}f ~=~ \texttt{(} p \texttt{,} \mathrm{d}p \texttt{)(} q \texttt{,} \mathrm{d}q \texttt{)(} r \texttt{,} \mathrm{d}r \texttt{)}.$

A canonical form for $\mathrm{E}f$ may be derived by means of a boolean expansion, in effect, a case analysis that evaluates $\mathrm{E}f$ at each triple of values for the base variables $p, q, r$ and forms the disjunction of the partial evaluations. Each term of the boolean expansion corresponds to a cell of the Venn diagram and is formed by multiplying the conjunction describing that cell by a coefficient equal to the value of $\mathrm{E}f$ on that cell.

For example, in the case $pqr$ where all three base variables are true, the corresponding coefficient is computed as follows:

$\begin{array}{lll} \mathrm{E}f(1, 1, 1, \mathrm{d}p, \mathrm{d}q, \mathrm{d}r) & = & (1 + \mathrm{d}p)(1 + \mathrm{d}q)(1 + \mathrm{d}r) \\[4pt] & = & \lnot\mathrm{d}p \land \lnot\mathrm{d}q \land \lnot\mathrm{d}r \\[4pt] & = & \texttt{(} \mathrm{d}p \texttt{)(} \mathrm{d}q \texttt{)(} \mathrm{d}r \texttt{)} \end{array}$
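This coefficient computation can be checked mechanically: evaluated on the cell $pqr,$ the enlargement should be true only for the null change $(\mathrm{d}p, \mathrm{d}q, \mathrm{d}r) = (0, 0, 0).$ A brief sketch, again treating boolean addition as XOR:

```python
from itertools import product

def f(p, q, r):
    return p & q & r

def Ef(p, q, r, dp, dq, dr):
    # Boolean addition over B = {0, 1} is XOR.
    return f(p ^ dp, q ^ dq, r ^ dr)

# Coefficient on the cell pqr: Ef(1, 1, 1, dp, dq, dr) should agree
# with (dp)(dq)(dr), i.e. be true only when dp = dq = dr = 0.
for dp, dq, dr in product((0, 1), repeat=3):
    expected = int(dp == 0 and dq == 0 and dr == 0)
    assert Ef(1, 1, 1, dp, dq, dr) == expected
```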

Collecting the cases yields the boolean expansion of $\mathrm{E}f$ via the following computation:

#### Step 1

$\mathrm{E}f(p, q, r, \mathrm{d}p, \mathrm{d}q, \mathrm{d}r) =$

$\begin{smallmatrix} & & p q r \cdot f(1 + \mathrm{d}p, 1 + \mathrm{d}q, 1 + \mathrm{d}r) & + & p q \texttt{(} r \texttt{)} \cdot f(1 + \mathrm{d}p, 1 + \mathrm{d}q, 0 + \mathrm{d}r) & + & p \texttt{(} q \texttt{)} r \cdot f(1 + \mathrm{d}p, 0 + \mathrm{d}q, 1 + \mathrm{d}r) & + & p \texttt{(} q \texttt{)(} r \texttt{)} \cdot f(1 + \mathrm{d}p, 0 + \mathrm{d}q, 0 + \mathrm{d}r) \\[4pt] & + & \texttt{(} p \texttt{)} q r \cdot f(0 + \mathrm{d}p, 1 + \mathrm{d}q, 1 + \mathrm{d}r) & + & \texttt{(} p \texttt{)} q \texttt{(} r \texttt{)} \cdot f(0 + \mathrm{d}p, 1 + \mathrm{d}q, 0 + \mathrm{d}r) & + & \texttt{(} p \texttt{)(} q \texttt{)} r \cdot f(0 + \mathrm{d}p, 0 + \mathrm{d}q, 1 + \mathrm{d}r) & + & \texttt{(} p \texttt{)(} q \texttt{)(} r \texttt{)} \cdot f(0 + \mathrm{d}p, 0 + \mathrm{d}q, 0 + \mathrm{d}r) \end{smallmatrix}$

#### Step 2

$\mathrm{E}f(p, q, r, \mathrm{d}p, \mathrm{d}q, \mathrm{d}r) =$

$\begin{smallmatrix} & & p q r \,\cdot\, \texttt{(} \mathrm{d}p \texttt{)(} \mathrm{d}q \texttt{)(} \mathrm{d}r \texttt{)} & + & p q \texttt{(} r \texttt{)} \,\cdot\, \texttt{(} \mathrm{d}p \texttt{)(} \mathrm{d}q \texttt{)} \mathrm{d}r & + & p \texttt{(} q \texttt{)} r \,\cdot\, \texttt{(} \mathrm{d}p \texttt{)} \mathrm{d}q \texttt{(} \mathrm{d}r \texttt{)} & + & p \texttt{(} q \texttt{)(} r \texttt{)} \,\cdot\, \texttt{(} \mathrm{d}p \texttt{)} \mathrm{d}q \, \mathrm{d}r \\[4pt] & + & \texttt{(} p \texttt{)} q r \,\cdot\, \mathrm{d}p \texttt{(} \mathrm{d}q \texttt{)(} \mathrm{d}r \texttt{)} & + & \texttt{(} p \texttt{)} q \texttt{(} r \texttt{)} \,\cdot\, \mathrm{d}p \texttt{(} \mathrm{d}q \texttt{)} \mathrm{d}r & + & \texttt{(} p \texttt{)} \texttt{(} q \texttt{)} r \,\cdot\, \mathrm{d}p \; \mathrm{d}q \texttt{(} \mathrm{d}r \texttt{)} & + & \texttt{(} p \texttt{)} \texttt{(} q \texttt{)(} r \texttt{)} \,\cdot\, \mathrm{d}p \; \mathrm{d}q \, \mathrm{d}r \end{smallmatrix}$
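The expansion in Step 2 can be verified exhaustively: it should agree with $\mathrm{E}f$ on all $2^6 = 64$ assignments of the base and differential variables. A sketch, writing `n(x)` for the minimal negation $\texttt{(} x \texttt{)}$ and taking boolean addition as XOR:

```python
from itertools import product

def f(p, q, r):
    return p & q & r

def Ef(p, q, r, dp, dq, dr):
    return f(p ^ dp, q ^ dq, r ^ dr)

def expansion(p, q, r, dp, dq, dr):
    # Each cell of the Venn diagram times its differential coefficient,
    # transcribing the eight terms of Step 2.
    n = lambda x: 1 - x
    return (  p * q * r          * n(dp) * n(dq) * n(dr)
            | p * q * n(r)       * n(dp) * n(dq) * dr
            | p * n(q) * r       * n(dp) * dq * n(dr)
            | p * n(q) * n(r)    * n(dp) * dq * dr
            | n(p) * q * r       * dp * n(dq) * n(dr)
            | n(p) * q * n(r)    * dp * n(dq) * dr
            | n(p) * n(q) * r    * dp * dq * n(dr)
            | n(p) * n(q) * n(r) * dp * dq * dr)

# The expansion agrees with Ef on every one of the 64 assignments.
assert all(Ef(*v) == expansion(*v) for v in product((0, 1), repeat=6))
```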

To be continued …

## Semiotic Theory Of Information : 6

2014 Oct 14

Through the 1970s I gradually recovered from my early traumas with Fortran and, with the aid of more symbol-friendly programming languages like Lisp and Pascal, began to play around again with implementing simple forms of graphical calculi inspired by Peirce and Spencer-Brown, in the form of programs that carried out the requisite transformations in automatic or guided semi-automatic fashion.

Experiments like these, moving from scribblings on paper to algorithms and data structures in electronic media, brought about a transformation in my perspective on symbolic logic and other semiotic processes. The shift was very gradual over the decade that followed, but I think it began with thinking of computer memory as very like a sheet of paper, a tabula rasa, or a Sheet of Assertion as Peirce dubbed the unmarked state in his systems of logical graphs.

Thinking this way naturally brings out the system-theoretic aspects of semiosis in general and logic in particular. One begins with sign relations as subsets of cartesian products ${O \times S \times I},$ where ${O, S, I}$ are sets of objects, signs, and interpretant signs, respectively, and over time one begins to see dynamic systems in place of those sets. Then one day in the mid 1980s I distinctly remember flashing on the fact that the graph-theoretic data structures I and my programs were manipulating in memory were actually diagrams in Peirce’s sense.

With that pre-ramble, here is a bit of background from (Awbrey & Awbrey, 1990) that describes our system-theoretic approach to agents with a capacity for learning and reasoning.

### The State Space Approach to Intelligent Systems

The common definition of a system as a list of variables is useful but not absolute. It characterizes the system only as measured from a particular frame of observation. The property of a system that we really care about is its state space, or a representation showing the possible states of a system. When considering systems that exhibit complex sets of properties, such as being able to transform information and to act with intelligent purpose, it becomes more difficult to specify in advance the exact nature of the state space, or even whether a space exists that satisfies a given set of specifications. Therefore we do not consider the state space of intelligent systems to be given absolutely, as the subset of some predefined space (like ${\mathbb{R}^{n}}$), but defined provisionally, subject to a list of constraints and subject to revision.

When considering intelligent behavior, we are most interested in the state of information that an interpreter system has about an object system, and this information has its expression in a system of signs. The characterizations sign, object, interpreter are not so much exclusive categories of entities as roles that systems may play within the so-called sign relation. Although interested in the nature and relations of these systems in themselves, whatever we can say about them takes place within the domain of signs. As we use the words, sign is the general category, while symbol is a particular type of sign. From the pragmatic point of view, almost all the actual work of computation is involved with transformations between expressions in the symbolic domain.

To be continued …

## Semiotic Theory Of Information : 5

2014 Oct 09

I will continue assembling an assortment of background materials and links to other resources that I think are useful in understanding Peirce’s notion of information and how it has the potential to extend and generalize both our intuitive notions of information and the more or less formalized theories of information that have become standard in contemporary scientific practice.

The next bit of background material I wanted to add to the account is the perspective on signs and information that Susan Awbrey and I outlined in the paper we gave at a Society for Applied Learning Technology conference in 1990.

### Information as a Matter of Form, Not a Form of Matter

Information is the property of a message or sign by virtue of which it can reduce the uncertainty of an interpreter about the state of an object. This property has the alternate aspect of acting to increase the control of an interpreter with respect to achieving a goal.

In Aristotle’s Psychology [1], two important distinctions were drawn which we would like to adapt to our discussion of information.

First, he distinguished form and matter, saying that matter is the potentiality, while form is the actuality of the mind. Although it combines both, the essential nature of the mind is found in its form. Applied to the mind in its aspect as an information-processing system, this proposition foreshadows a point often emphasized at the beginning of the information revolution: that information is a formal entity, not a material one.

Next, Aristotle drew a distinction between the possession and the exercise of knowledge. A corresponding distinction may be drawn between the information that a system possesses by virtue of being in a certain state and the information that a self-informed or intelligent system may exercise with respect to its own states. It is not a distinction in the kind or essence of information, but a pragmatic difference in the role a system plays within the relation of sign, object, and interpreter. In the first case, the state of a system serves as a sign to others, reducing the uncertainty of these interpreters about the state of an object system. In the second case, the state of a system serves as information to itself in its role as interpreter. This is one of the marks of an intelligent system.

### Reference

1. Aristotle, “On the Soul” (De Anima), W.S. Hett (trans.), in Aristotle, Volume 8, Loeb Classical Library, William Heinemann, London, 1986.

I continued to make use of the frame introduced in Aristotle’s sketch of the soul in my work on Inquiry Driven Systems, for instance, here:

To be continued …

## Semiotic Theory Of Information : 4

2014 Oct 08

“Let us now return to the information.” To coin a phrase. This time around we come to Peirce’s notion of information in a critical and recurring passage that Frederik Stjernfelt takes as the next stepping stone from propositions through dicisigns to the information they convey:

This maybe surprising definition of the Dicisign is closely connected, however, to the basic function of the Dicisign, namely to convey information — to relay claims, assert statements, true or false. Only by separately indicating an object does it become possible for a sign to convey information about that object, correctly or not:

“… the essential nature of the Dicisign, in general, that is, the kind of sign that conveys information, in contradistinction to a sign from which information may be derived. The readiest characteristic test showing whether a sign is a Dicisign or not, is that a Dicisign is either true or false, but does not directly furnish reasons for its being so.” (Syllabus, 1903, EP2, 276).

(Frederik Stjernfelt, Natural Propositions, 54)

In working through the argument of this series of texts I found it worth my trouble to copy out a longer excerpt from the 1903 Syllabus to my blog:

To be continued …

## Semiotic Theory Of Information : 3

2014 Oct 08

Re: Resources On Peircean Information Theory

In trying to remember why I started this thread, I traced it back to the point when various notions of information came up in Chapter 3.3 of Frederik Stjernfelt’s book, Natural Propositions.

So let us review …

First we have the eureka moment in Kaina Stoicheia where Peirce declares a “true definition of a proposition”, namely, “A proposition is a sign which separately, or independently, indicates its object.” And we know that Peirce attaches the label of a “Dicisign” to the definiens of that definition.

### 3.3. Dicisigns : Signs Separately Indicating Their Object

True to Peirce’s general way of investigating sign types, he describes Dicisigns compositionally, functionally, and systematically. As Hilpinen (1992) says, Peirce’s recurrent and “standard” definition of Dicisigns is given in the following italicized passage from “Kaina stoicheia”:

“It is remarkable that while neither a pure icon or a pure index can assert anything, an index which forces something to be an icon, as a weathercock does, or which forces us to regard it as an icon, as the legend under the portrait does, does make an assertion, and forms a proposition. This suggests a true definition of a proposition, which is a question in much dispute at the moment. A proposition is a sign which separately, or independently, indicates its object.” (EP2, 307, emphasis Hilpinen’s)

(Frederik Stjernfelt, Natural Propositions, 53–54)

To be continued …

## Semiotic Theory Of Information : 2

2014 Oct 06

Some portions of a paper Susan Awbrey and I presented at a Society for Applied Learning Technology conference in 1990 may be relevant at this juncture.

How do we, and how should we, integrate the empirical and rational sources of information that make up our putative knowledge of the actual world we observe and the possible worlds we contemplate? That is the question we sought to address in this line of research.

Those are hardly new questions, of course, but it’s my firm opinion to this day that Peirce set out new ideas, and intrinsically integral ideas, if you will, when it comes to answering them.

One way to explore the problem domain is to write computer programs that tackle the task of integrating learning and reasoning faculties, starting with simple but non-trivial functions of those types, and to see what one can see from the trials of doing that.

Here is our overture:

### Abstract

If computer programs were smarter, they would, like people, recognize sequences of events, form models of their environment, and formulate rules based on experience. This paper describes the development of a program designed to address the difficult computational problems involved in integrating the inductive and deductive reasoning necessary to perform such tasks. Theme One is a prototype program composed of Index, a learning algorithm for sequential data, and Study, an algorithm for building logical models. The project goal is an interactive research tool that assists students and investigators in the exploration of qualitative data using artificial intelligence.

To be continued …

## Semiotic Theory Of Information : 1

2014 Oct 06

Peircers & Others,

On the subject of Peirce’s laws of information — or the semiotic theory of information — here are just a few links that come to mind for possible future reference: