- Personally, I’d give neither of them [Bohr or Einstein] perfect marks, in part because they not only both missed Bell’s Theorem, but failed even to ask the requisite question (namely: what empirically verifiable tasks can Alice and Bob use entanglement to do, that they couldn’t have done without entanglement?). But I’d give both of them very high marks for, y’know, still being Albert Einstein and Niels Bohr.
This brings me to the question I wanted to ask about AI sentience, but was afraid to ask.
- Does GPT-3 ever ask an original question on its own?
Simply asking for clarification of an interlocutor’s prompt is not insignificant, but I’m really interested in something more spontaneous and “self‑starting” than that. Does it ever wake up one morning, as it were, and find itself in a “state of question” — a state of doubt or uncertainty so compelling that it asks, on its own initiative, what we might recognize as a novel question?