This discussion inspired me to go back and look at some of the work I did in the late 80s when I was trying to understand Cook’s Theorem. One of the programs I wrote to explore the integration of sequential learning and propositional reasoning had a propositional calculus module based on an extension of C.S. Peirce’s logical graphs, so I used that syntax to write out the clauses for finite approximations to Turing machines, taking the 4-state parity machine from Herbert S. Wilf’s Algorithms and Complexity as an object example. It was 1989 and all I had was a 286 PC with a 600K heap, but I did manage to emulate a parity machine capable of 1 bit of computation. Here’s a link to an exposition of that.
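For readers who want a concrete picture of what a "parity machine" does before diving into the clause encoding, here is a minimal sketch in Python. The transition table is hypothetical (not Wilf's exact 4-state table): it uses two states that track whether an even or odd number of 1s has been read so far. In the Cook-style encoding, each entry of such a table becomes a handful of propositional clauses over variables like state(t, q), head(t, i), and cell(t, i, s) for a time-bounded run.

```python
# Hypothetical 2-state parity machine (an illustration, not Wilf's table).
# States: 0 = even number of 1s seen, 1 = odd. Symbols: '0', '1', blank 'B'.
# Each entry maps (state, symbol) -> (new state, written symbol, head move).
DELTA = {
    (0, '0'): (0, '0', +1),
    (0, '1'): (1, '1', +1),  # a 1 toggles the parity state
    (1, '0'): (1, '0', +1),
    (1, '1'): (0, '1', +1),
}

def parity(bits):
    """Emulate the machine on a bit string; the halting state is the parity."""
    state, pos = 0, 0
    tape = list(bits) + ['B']          # finite tape with a blank end-marker
    while tape[pos] != 'B':            # halt when the head reads the blank
        state, tape[pos], move = DELTA[(state, tape[pos])]
        pos += move
    return state

print(parity('1101'))  # → 1 (three 1s: odd)
print(parity('11'))    # → 0 (two 1s: even)
```

Bounding the run to T steps and N tape cells is what makes the "finite approximation" propositional: the machine's behavior over that window is just a fixed set of clauses, which is the heart of Cook's construction.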
It may be quicker to skip to Part 2 and refer to Part 1 only as needed.
I’ll work up the case of a 2-state Busy Beaver when I get a chance.
I always learned a lot just from looking at the propositional form.