Re: Michael Harris • The Inevitable Questions About Automated Theorem Proving
MH: Even if computers understand, they don’t understand in a human way.
I like the simple-mindedness of that.
(A simple mind is one with no proper normal submind.)
Fifty-plus years of roundhouse discussions about AI + ATP leave me with nothing new to say on the matter — so maybe I’ll revisit my earliest thoughts on the subject. I’ve always liked Ashby’s pre-AI notion of IA = Intelligence Amplification, and I often used the catchword Intelliscope to sum up my sense of the project worth pursuing. We invented the telescope on analogy with the human eye, studying the anatomy and function of our naturally evolved organ of vision and, gradually at first, then astronomically in time, extending its power to augment and correct our natural faculty and frailty. I think everyone gets the drift of that. It doesn’t mean we have to become cyborgs in any dystopian way — if we do, it will be reckoned to some other factor in our erroneous essence or to the accidents of history.
MH: Would you call Google an accident of history?
I see the warp driving Googly Eyes towards the Panopticon … if it goes that way, it will be more like the angry ape in our glassy essence than anything else external.
cc: Cybernetics • Ontolog Forum • Peirce List • Structural Modeling • Systems Science