## Pragmatic Traction • 1

C.S. Peirce’s pragmatic maxim marks the place where the tire of theory meets the test track of experience: it tells us how general ideas answer to practical consequences.  If our concept of an object is the sum of its conceivable practical effects, then the truth of that concept can be defeated by a single outcome falling outside the sum.

## Minimal Negation Operators • 4

Defining minimal negation operators over a more conventional basis is next in order of logic, if not necessarily in order of every reader’s reading.  For what it’s worth and against the day when it may be needed, here is a definition of minimal negations in terms of $\land,$ $\lor,$ and $\lnot.$

### Formal Definition

To express the general case of $\nu_k$ in terms of familiar operations, it helps to introduce an intermediary concept:

Definition.  Let the function $\lnot_j : \mathbb{B}^k \to \mathbb{B}$ be defined for each integer $j$ in the interval $[1, k]$ by the following equation:

$\lnot_j (x_1, \ldots, x_j, \ldots, x_k) ~~ = ~~ x_1 \land \ldots \land x_{j-1} \land \lnot x_j \land x_{j+1} \land \ldots \land x_k.$

Then ${\nu_k : \mathbb{B}^k \to \mathbb{B}}$ is defined by the following equation:

$\nu_k (x_1, \ldots, x_k) ~~ = ~~ \lnot_1 (x_1, \ldots, x_k) \lor \ldots \lor \lnot_j (x_1, \ldots, x_k) \lor \ldots \lor \lnot_k (x_1, \ldots, x_k).$

If we take the boolean product $x_1 \cdot \ldots \cdot x_k$ or the logical conjunction $x_1 \land \ldots \land x_k$ to indicate the point $x = (x_1, \ldots, x_k)$ in the space $\mathbb{B}^k$ then the minimal negation $\texttt{(} x_1 \texttt{,} \ldots \texttt{,} x_k \texttt{)}$ indicates the set of points in $\mathbb{B}^k$ that differ from $x$ in exactly one coordinate.  This makes $\texttt{(} x_1 \texttt{,} \ldots \texttt{,} x_k \texttt{)}$ a discrete functional analogue of a point-omitted neighborhood in ordinary real analysis, more exactly, a point-omitted distance-one neighborhood.  In this light, the minimal negation operator can be recognized as a differential construction, an observation that opens a very wide field.
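The formal definition above lends itself to a brute-force check.  The following Python sketch (the function names `neg_j` and `nu` are my own labels, not notation from the text) implements $\lnot_j$ and $\nu_k$ exactly as defined and verifies that $\nu_k(x) = 1$ precisely when exactly one argument is $0$:

```python
from itertools import product

def neg_j(j, xs):
    """The function ¬_j : conjoin all arguments, with the j-th (1-indexed) negated."""
    return int(all((not x) if i == j - 1 else x for i, x in enumerate(xs)))

def nu(*xs):
    """The minimal negation ν_k : the disjunction of ¬_1, ..., ¬_k."""
    return int(any(neg_j(j, xs) for j in range(1, len(xs) + 1)))

# Check against the defining rule: ν_k(x) = 1 iff exactly one argument is 0,
# for every point of B^k with k = 0, ..., 4.
for k in range(5):
    for xs in product((0, 1), repeat=k):
        assert nu(*xs) == int(xs.count(0) == 1)
```

Note that the empty case falls out automatically: with no arguments there are no disjuncts $\lnot_j,$ so $\nu_0 = 0.$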

The remainder of this discussion proceeds on the algebraic convention that the plus sign $(+)$ and the summation symbol $(\textstyle\sum)$ both refer to addition mod 2.  Unless otherwise noted, the boolean domain $\mathbb{B} = \{ 0, 1 \}$ is interpreted for logic in such a way that $0 = \mathrm{false}$ and $1 = \mathrm{true}.$  This has the following consequences:

• The operation $x + y$ is a function equivalent to the exclusive disjunction of $x$ and $y,$ while its fiber of $1$ is the relation of inequality between $x$ and $y.$
• The operation $\textstyle\sum_{j=1}^k x_j$ maps the bit sequence $(x_1, \ldots, x_k)$ to its parity.

The following properties of the minimal negation operators ${\nu_k : \mathbb{B}^k \to \mathbb{B}}$ may be noted:

• The function $\texttt{(} x \texttt{,} y \texttt{)}$ is the same as that associated with the operation $x + y$ and the relation $x \ne y.$
• In contrast, $\texttt{(} x \texttt{,} y \texttt{,} z \texttt{)}$ is not identical to $x + y + z.$
• More generally, the function $\nu_k (x_1, \dots, x_k)$ for $k > 2$ is not identical to the boolean sum $\textstyle\sum_{j=1}^k x_j.$
• The inclusive disjunctions indicated for the $\nu_k$ of more than one argument may be replaced with exclusive disjunctions without affecting the meaning since the terms in disjunction are already disjoint.
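The first three bullet points can be confirmed by exhaustion.  The following Python sketch (function names `nu` and `parity` are mine) checks that $\nu_2$ coincides with addition mod 2 while $\nu_3$ does not:

```python
from itertools import product

def nu(*xs):
    # minimal negation: 1 iff exactly one argument is 0
    return int(xs.count(0) == 1)

def parity(*xs):
    # addition mod 2 of the arguments
    return sum(xs) % 2

# k = 2: ν and the parity sum are the same function (both express x ≠ y).
assert all(nu(x, y) == parity(x, y) for x, y in product((0, 1), repeat=2))

# k = 3: the two functions come apart.
diffs = [xs for xs in product((0, 1), repeat=3) if nu(*xs) != parity(*xs)]
assert diffs  # they disagree on at least one point of B^3
```

In fact running the check shows the two functions disagree on seven of the eight points of $\mathbb{B}^3,$ agreeing only at $(0, 0, 0).$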

## Minimal Negation Operators • 3

It will take a few more rounds of stage-setting before I can get to concrete examples of applications, but the following should indicate the direction of generalization embodied in minimal negation operators.

To begin, let’s observe two different ways of generalizing the operation of exclusive disjunction (XOR) or symmetric difference.

Let $\mathbb{B}$ = the boolean domain $\{ 0, 1 \}.$

1. XOR or symmetric difference, sometimes indicated by a delta or small triangle $(\vartriangle),$ is a boolean function $\vartriangle : \mathbb{B} \times \mathbb{B} \to \mathbb{B}$ identical to the field addition $+ : \mathbb{B} \times \mathbb{B} \to \mathbb{B}.$  This is also known as addition mod 2 or GF(2) addition.

Generalizing $p + q$ in that sense would continue the sequence as $p\!+\!q\!+\!r,$  $p\!+\!q\!+\!r\!+\!s,$  $p\!+\!q\!+\!r\!+\!s\!+\!t,$  and so on.  These are known as parity sums, returning $0$ if there are an even number of $1$’s in the sum, returning $1$ if there are an odd number of $1$’s in the sum.

2. The equivalent expressions $\texttt{(} p \texttt{,} q \texttt{)} = \nu(p, q) = p + q = p \vartriangle q$ can also be read with a different connotation, indicating the “next-door-neighbors” or venn diagram cells adjacent to the conjunction $p \land q.$  Generalizing $\texttt{(} p \texttt{,} q \texttt{)}$ in that direction would continue the sequence as $\nu(p, q, r),$  $\nu(p, q, r, s),$  $\nu(p, q, r, s, t),$  and so on.  That sequence of operators differs from the sequence of parity sums once it passes the 2-variable case.

The triple sum can be written in terms of 2-place minimal negations as follows:

$p + q + r ~=~ \texttt{((} p \texttt{,} q \texttt{),} r \texttt{)} ~=~ \texttt{(} p \texttt{,} \texttt{(} q \texttt{,} r \texttt{))}$

It is important to note that these expressions are not equivalent to the 3-place minimal negation $\texttt{(} p \texttt{,} q \texttt{,} r \texttt{)}.$
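Both claims can be verified over all eight assignments.  Here is a Python sketch (the helper name `nu` is mine) checking that either nesting of 2-place minimal negations yields the triple sum, while the 3-place minimal negation is a different function:

```python
from itertools import product

def nu(*xs):
    # minimal negation: 1 iff exactly one argument is 0
    return int(xs.count(0) == 1)

for p, q, r in product((0, 1), repeat=3):
    triple = (p + q + r) % 2
    # both nestings of 2-place minimal negations give the parity sum
    assert nu(nu(p, q), r) == triple
    assert nu(p, nu(q, r)) == triple

# the 3-place minimal negation is not the same function as the triple sum
assert any(nu(p, q, r) != (p + q + r) % 2
           for p, q, r in product((0, 1), repeat=3))
```

The nesting works because $\nu_2$ is exactly XOR, and XOR is associative, so any bracketing of $p + q + r$ in terms of $\nu_2$ gives the same result.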

## Minimal Negation Operators • 2

The brief description of minimal negation operators given in the previous post is enough to convey the rule of their construction.  For future reference, a slightly more formal definition is given below.

### Initial Definition

The minimal negation operator $\nu$ is a multigrade operator $(\nu_k)_{k \in \mathbb{N}}$ where each $\nu_k$ is a $k$-ary boolean function defined by the rule that $\nu_k (x_1, \ldots , x_k) = 1$ if and only if exactly one of the arguments $x_j$ is $0.$

In contexts where the initial letter $\nu$ is understood, the minimal negation operators can be indicated by argument lists in parentheses.  In the discussion that follows a distinctive typeface will be used for logical expressions based on minimal negation operators, for example, $\texttt{(x, y, z)} = \nu (x, y, z).$

The first four members of this family of operators are shown below.  The third and fourth columns give paraphrases in two other notations, where tildes and primes, respectively, indicate logical negation.

$\begin{matrix} \texttt{()} & = & \nu_0 & = & 0 & = & \mathrm{false} \\[6pt] \texttt{(x)} & = & \nu_1 (x) & = & \tilde{x} & = & x^\prime \\[6pt] \texttt{(x, y)} & = & \nu_2 (x, y) & = & \tilde{x}y \lor x\tilde{y} & = & x^\prime y \lor x y^\prime \\[6pt] \texttt{(x, y, z)} & = & \nu_3 (x, y, z) & = & \tilde{x}yz \lor x\tilde{y}z \lor xy\tilde{z} & = & x^\prime y z \lor x y^\prime z \lor x y z^\prime \end{matrix}$

## Minimal Negation Operators • 1

To accommodate moderate levels of complexity in the application of logical graphs our organon needs a class of organules called “minimal negation operators”.

### Brief Introduction

A minimal negation operator $(\nu)$ is a logical connective that says “just one false” of its logical arguments.  The first four cases are described below.

1. If the list of arguments is empty, as expressed in the form $\nu(),$ then it cannot be true that exactly one of the arguments is false, so $\nu() = \mathrm{false}.$
2. If $p$ is the only argument then $\nu(p)$ says that $p$ is false, so $\nu(p)$ expresses the logical negation of the proposition $p.$  Written in several different notations, we have the following equivalent expressions.

$\nu(p) ~=~ \mathrm{not}(p) ~=~ \lnot p ~=~ \tilde{p} ~=~ p^{\prime}$

3. If $p$ and $q$ are the only two arguments then $\nu(p, q)$ says that exactly one of $p, q$ is false, so $\nu(p, q)$ says the same thing as $p \neq q.$  Expressing $\nu(p, q)$ in terms of ands $(\cdot),$ ors $(\lor),$ and nots $(\tilde{~})$ gives the following form.

$\nu(p, q) ~=~ \tilde{p} \cdot q ~\lor~ p \cdot \tilde{q}$

It is permissible to omit the dot $(\cdot)$ in contexts where it is understood, giving the following form.

$\nu(p, q) ~=~ \tilde{p}q \lor p\tilde{q}$

The venn diagram for $\nu(p, q)$ is shown in Figure 1.

$\text{Figure 1.}~~\nu(p, q)$

4. The venn diagram for $\nu(p, q, r)$ is shown in Figure 2.

$\text{Figure 2.}~~\nu(p, q, r)$

The center cell is the region where all three arguments $p, q, r$ hold true, so $\nu(p, q, r)$ holds true in just the three neighboring cells.  In other words:

$\nu(p, q, r) ~=~ \tilde{p}qr \lor p\tilde{q}r \lor pq\tilde{r}$
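The identity can be confirmed over all eight cells of the diagram.  A small Python check (the names `nu` and `dnf` are my own, chosen for illustration):

```python
from itertools import product

def nu(p, q, r):
    # ν(p, q, r): exactly one of the three arguments is false
    return int((p, q, r).count(0) == 1)

def dnf(p, q, r):
    # the three-term disjunctive normal form: ~p q r ∨ p ~q r ∨ p q ~r
    return int(((1 - p) and q and r) or
               (p and (1 - q) and r) or
               (p and q and (1 - r)))

# the two expressions agree on every cell of the venn diagram
assert all(nu(*xs) == dnf(*xs) for xs in product((0, 1), repeat=3))
```

Each disjunct picks out one of the three cells adjacent to the center cell $p \land q \land r,$ so the disjunction covers exactly the neighborhood described above.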

## Charles Sanders Peirce, George Spencer Brown, and Me • 10

With any formal system it is easy to spend a long time roughing out primitives and reviewing first principles before getting on to practical applications, and logical graphs are no different in that respect.  But the promise of clearer and more efficient methods for solving realistic problems is what led me to the visual calculi of Peirce and Spencer Brown in the first place, so my aim through all our rehearsal of rudiments is to make a bridge to applications a few steps closer to what the real world throws our way.

I’ve been thinking how to make the transition from basic ingredients of logical graphs and laws of form to slightly more interesting examples, still “toy worlds” as AI folk call them but suggestive to some degree of what might be possible in the long run.  I’ll spend a few days gathering assorted examples I’ve worked up before and try presenting those.

## Charles Sanders Peirce, George Spencer Brown, and Me • 9

A wider field of investigation opens up at this point, having to do with the diversity of interactions linking the languages we use, and systems of signs in general, to the thoughts that stream through our heads and to the universes we talk and think upon, from Plato’s Heaven to Gaia’s Green Earth to the Tumbling Galaxies Beyond.

The complexities that come into play when we consider a domain of Signs, a domain of Ideas, and a domain of Objects all wound up in relationship to one another are what Peirce’s “semiotics” or theory of sign relations is all about.  Viewing the enterprise of logic within the broader frame of semiotics not only gives us more insight into its means and ends but affords us more “elbow room” for carrying out its operations.

To make a long story short, we don’t have to “escape language” because we don’t live inside any language or system of signs, even if we get so confused sometimes as to think we do.  We live in that wider world of reality and only use languages and other systems of signs to describe what little we can of it.