C.S. Peirce : Information = Comprehension × Extension

Re: R.J. Lipton and K.W. Regan • A Most Perplexing Mystery

The inverse relationship between symmetry and diversity — which we see, for example, in the lattice-inverting map of a Galois correspondence — is a variation on an old theme in logic called the “inverse proportionality of comprehension and extension”.

C.S. Peirce, in his probings of the “laws of information”, found this principle to be a special case of a more general formula, saying that the reciprocal relation holds only when the quantity of information is constant.
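The reciprocity can be illustrated with a toy model — my own sketch, not Peirce's formalism: take a term's comprehension to be a set of attributes and its extension to be the objects bearing all of them. The miniature universe below is invented for illustration; adding attributes (greater comprehension) shrinks the extension, except when an added attribute carries no new information about the universe.

```python
# Toy illustration of comprehension vs. extension (not Peirce's own
# formalism). Each object in a small universe carries a set of attributes.
universe = {
    "sparrow": {"animal", "bird", "flies"},
    "penguin": {"animal", "bird"},
    "bat":     {"animal", "flies"},
    "trout":   {"animal", "swims"},
}

def extension(comprehension):
    """Objects possessing every attribute in the given comprehension."""
    return {obj for obj, attrs in universe.items() if comprehension <= attrs}

print(len(extension({"animal"})))                    # 4
print(len(extension({"animal", "bird"})))            # 2 — more comprehension, less extension
print(len(extension({"animal", "bird", "flies"})))   # 1
```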

Readings

This entry was posted in Comprehension, Extension, Information, Information = Comprehension × Extension, Inquiry, Intension, Logic, Logic of Science, Peirce, Semiotics, Sign Relations.

13 Responses to C.S. Peirce : Information = Comprehension × Extension

  1. porton says:

What is a lattice-inverting map?

  2. Poor Richard says:

    With apologies, this is probably entirely off-topic (I have no information theory chops whatever). I’m interested in information quality in the context of “civic intelligence”. I just posted this on my facebook page:

    I’m trying to visioneer an automated bullshit detector as part of a “cognitive immune system” that works like a virus detection app, identifying malicious memes and other toxic language by its patterns or “signatures”. This would become part of a civic intelligence decision support system. Any thoughts, questions?

    • Jon Awbrey says:

      There are a couple of literature streams, as far as I can remember, that explore immune system models of cognitive functioning. Try looking up Gerald Edelman, for one.

      In the order of time, evolution evolved immune systems because evolution itself is too slow. And then evolution evolved neural systems because immune systems are too slow.

      There is one problem with your target application. Immune System Models (ISMs), as studied for example in the belief revision literature, are more adapted to protecting the BS a critter already believes than detecting the BS its real environment is dead set on flushing from the Augean stables of its species mind.

      • Poor Richard says:

        Both Reality and Culture (with its cast of characters) constantly attack or exploit our cognitive biases (many endowed by evolution) and our beliefs, be they bullshit or be they justified. I guess what I’m looking for are new ways to use info tech not so much as a prophylactic but as a cognitive prosthetic to help us distinguish justified beliefs from unjustified ones, at least to the extent that they reach us in digital form. I know this is asking too much, but not so long ago a digital spell checker would have seemed preposterous (and to some, insidious).

      • Poor Richard says:

        FYI I followed your reference to Gerald Edelman to the Neurosciences Institute and got this email in reply to my request for references to relevant resources:

        Sorry, but your question is in areas far from our areas of expertise, so I don’t have any relevant references. However, such a [bullshit] detector would be useful! And perhaps even more than a spell checker! Best wishes for success.

    • Jon Awbrey says:

      Reality is all-inclusive, so culture is made on those fields of reality that we have an extra hand in cultivating. That makes the relation between reality and culture not so much a dissection as a factorization within the manifold of experience, as I touch on somewhat glancingly here —

      Prospects for Inquiry Driven Systems • Reality and Representation

  3. Wu Li says:

    I’m just an armchair philosopher interested in the same subject, but I suggest reading the work of Alastair Clarke, which is available online for free. He’s an evolutionary biologist who was studying the function of humor only to discover it works using simple pattern recognition and renormalization. He never comes right out and says it, but if you ask me he is basically saying that humor works as an automatic B.S. detector based on a simple networking strategy. He tries super hard to keep his work “socially acceptable” and sober for his peers so they can continue to explore the subject objectively, but his basic formula can be interpreted as H = G × BS, or humor equals gullibility times the amount of B.S. perceived.

    I’m taking a more straightforward functionalist approach myself using language and the assumption that words only have demonstrable meaning according to their function in specific contexts. Thus, words are treated much like mathematical variables that have no intrinsic meaning and it is the context that always provides their meaning.
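[A minimal sketch of the “words as variables” idea above, added in editing — the words, contexts, and senses are invented for illustration: the same token yields a value only when paired with a context.]

```python
# Sketch: a word has no intrinsic meaning; the (word, context) pair does.
# The lexicon below is a made-up example, not a real semantic resource.
senses = {
    ("bank", "finance"): "institution holding deposits",
    ("bank", "river"):   "sloping land beside water",
}

def meaning(word, context):
    """Look up a word's sense in a given context, like binding a variable."""
    return senses.get((word, context), "undefined in this context")

print(meaning("bank", "river"))    # sloping land beside water
print(meaning("bank", "poetry"))   # undefined in this context
```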

    • Poor Richard says:

      “words only have demonstrable meaning according to their function in specific context”

      I agree with the need for contextual definitions, but words are not unconstrained variables. Various rules, including a set of common definitions (associations with other specific words), constitute some class of “proper” values. In effect, they are “regular” in some sense. On the other hand, language often seems to be the worst enemy of precise thought and communication, as it allows for such great ambiguity of intention and interpretation in ordinary use.

      Can we engineer a kind of language that consists of something like semantic regular expressions (like the pattern language used in some computer text editors and scripting languages)? Is that what a semantic ontology or semantic network (taxonomy) is or could be?

      The visual thesaurus is a cool thing: http://www.visualthesaurus.com/browse/en/language
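[A hedged sketch of the “semantic regular expressions” question above, added in editing: ordinary regular expressions can at least flag surface-level rhetorical signatures, though true semantic patterns would need far more. The patterns and labels below are invented for illustration, not a vetted taxonomy.]

```python
import re

# Hypothetical "signatures" pairing a rhetorical label with a surface pattern.
SIGNATURES = {
    "appeal to crowd": re.compile(r"\beverybody knows\b", re.IGNORECASE),
    "vague authority": re.compile(r"\b(?:experts|studies) (?:say|show)\b", re.IGNORECASE),
    "false dichotomy": re.compile(r"\beither\b.*\bor else\b", re.IGNORECASE),
}

def flag(text):
    """Return the sorted labels of every signature matched in the text."""
    return sorted(label for label, pat in SIGNATURES.items() if pat.search(text))

print(flag("Everybody knows studies show this works."))
# ['appeal to crowd', 'vague authority']
```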

  4. Wu Li says:

    Semantics have been derived from relationships between words and other words in nearby contexts using fuzzy logic. In 2008 the evolutionary biologist Alastair Clarke additionally showed how simple pattern recognition based on the juxtaposition of 8 rudimentary patterns could provide a fuzzy logic account for all of arithmetic and syntax and could be used to describe every joke ever written. A contextual systems approach could automatically scan for whatever is false, misleading, distorted, exaggerated, worthless, counterproductive, pointless, or meaningless, inadvertently renormalizing much of the data in the process while producing the foundations of mathematics and language as well.
