Some books excel at presenting information, and some excel at telling a story, but it is rare to find a book that does both. Douglas Hofstadter’s book is one of those rare cases, and the fact that he managed it with a topic as important as intelligence makes this a book well worth the time. Here are some of my notes:

- He explains the Epimenides Liar Paradox: Epimenides, himself a Cretan, said, “All Cretans are liars,” which makes the statement paradoxical. This is similar to saying, “this statement is false.”
- Early views of AI were skeptical. Lady Lovelace said, “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”
- Bottom-up decision procedures work their way up from the basics, while top-down procedures work their way back down to them.
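Hofstadter’s distinction is about how derivations are built in formal systems, but the same contrast shows up in ordinary programming. A minimal sketch of my own (not from the book), using Fibonacci numbers:

```python
# Bottom-up: start from the base cases and build upward to the goal.
def fib_bottom_up(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Top-down: start from the goal and recurse back down to the base cases,
# caching results along the way.
def fib_top_down(n, memo=None):
    if memo is None:
        memo = {}
    if n < 2:
        return n
    if n not in memo:
        memo[n] = fib_top_down(n - 1, memo) + fib_top_down(n - 2, memo)
    return memo[n]
```

Both compute the same values; they differ only in the direction the work proceeds.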
- He discusses a little bit of the history of mathematics, including the shake-up over Euclid’s fifth postulate that led to non-Euclidean geometry.
- Consistency means that everything produced by the system is true; completeness means that every true statement is produced by the system.
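In symbols (my paraphrase; what Hofstadter calls “consistency” here is closer to what logicians call soundness), writing ⊢ for “produced by the system” and ⊨ for “true”:

```latex
\text{Consistent (sound):}\quad \vdash \varphi \;\Rightarrow\; \models \varphi
\qquad
\text{Complete:}\quad \models \varphi \;\Rightarrow\; \vdash \varphi
```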
- A bounded loop occurs when the maximum number of steps in a loop is known in advance, while with a free loop you just begin and wait until it completes. As Hofstadter notes, “the second type is dangerous, because the criterion for abortion may never occur, leaving the computer in a so-called ‘infinite loop.’” p 110
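The distinction maps directly onto `for` versus `while` loops. A sketch of my own (the Collatz example is my illustration, not Hofstadter’s):

```python
# Bounded loop: the maximum number of iterations is known before it starts,
# so it is guaranteed to halt.
def sum_first(n):
    total = 0
    for i in range(n):  # runs exactly n times
        total += i
    return total

# Free loop: you begin and wait for a termination criterion that may never
# occur. Whether this halts for every n is the (unproven) Collatz conjecture.
def collatz_steps(n):
    steps = 0
    while n != 1:  # no a-priori bound on iterations
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps
```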
- Compilers, interpreters, and assemblers are the key translation layers of a computer system, bridging the levels between human-readable programs and machine instructions.
- “Now sophisticated operating systems carry out similar traffic-handling and level-switching operations with respect to users and their programs. It is virtually certain that there are somewhat parallel things which take place in the brain: handling of many stimuli at the same time; decisions of what should have priority over what and for how long; instantaneous “interrupts” caused by emergencies or other unexpected occurrences; and so on.” p 296
- How are symbols stored in the brain: is the correct analogy software or hardware?
- Self-symbolism: “In fact, upon reflection, it seems that the only way one could make sense of the world surrounding a localized animate object is to understand the role of that object in relation to the other objects around it. This necessitates the existence of a self-symbol, and the step from symbol to subsystem is merely a reflection of the importance of the self-symbol, and is not a qualitative change.” p 388
- An algorithm is a “specific delineation of how to carry out a task, that includes a mixture of (1) specific operations to be performed, and (2) control statements.” p 410
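Euclid’s gcd algorithm is a compact illustration of that definition, annotated with Hofstadter’s two ingredients (my example, not the book’s):

```python
# Euclid's algorithm for the greatest common divisor.
def gcd(a, b):
    while b != 0:        # (2) control statement: decide whether to repeat
        a, b = b, a % b  # (1) specific operation: replace (a, b) with (b, a mod b)
    return a
```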
- “You fit your mathematics to the world, and not the other way around.” p 457
- He debunks the argument that since machines cannot decide their own Gödel sentences, they aren’t conscious (see Shadows of the Mind). There is no reason to suspect that machines cannot be “meta” too, and no reason to suspect that humans can keep jumping out of the system forever; at a certain point it becomes too complicated even for us. This is not a valid argument against AI.
- Chemistry → Molecular genetics → Non-molecular genetics. One step up at each level.
- Reductionist’s dilemma: “In order to explain everything in terms of context-free sums, one has to go down to the level of physics; but then the number of particles is so huge as to make it only a theoretical ‘in principle’ kind of thing. So, one has to settle for a context-dependent sum.” p 522 As he describes, this dilemma leads to some disadvantages.
- Godel’s theorem is a catch-22. “It all hinges on what G says: ‘G is not a theorem of TNT.’ Assume that G were a theorem. Then, since theoremhood is supposedly represented, the TNT-formula which asserts “G is a theorem” would be a theorem of TNT. But this formula is ~G, the negative of G, so that TNT is inconsistent. On the other hand, assume G were not a theorem. Then once again by the supposed representability of theoremhood, the formula which asserts “G is not a theorem” would be a theorem of TNT. But this formula is G, and once again we get into paradox.” p 580
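The two horns of the quoted dilemma, schematically (my paraphrase, writing ⊢ for “is a theorem of TNT”):

```latex
G \;\equiv\; \text{``$G$ is not a theorem of TNT''}
```
```latex
\textbf{Case 1:}\ \vdash G
  \;\Rightarrow\; \vdash \text{``$G$ is a theorem''} \equiv \mathord{\sim} G
  \;\Rightarrow\; \text{TNT is inconsistent}
```
```latex
\textbf{Case 2:}\ \nvdash G
  \;\Rightarrow\; \vdash \text{``$G$ is not a theorem''} \equiv G
  \;\Rightarrow\; \text{contradicts the assumption}
```

Either way, a system that fully represented its own theoremhood would collapse; that is the catch-22.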
- Turing played “round-the-house chess”: You make a move and then run around the house. If they haven’t moved by the time you return you get to move again.
- Most memory deficiencies in both humans and machines are due to difficulty in retrieval, not storage.
- His definition of boredom: “You get bored with something not when you have exhausted its repertoire of behavior, but when you have mapped out the limits of the space that contains its behavior.” p 621
- On the use of counterfactuals: “I believe that ‘almost’ situations and unconsciously manufactured subjunctives represent some of the richest potential sources of insight into how human beings organize and categorize their perceptions of the world.” p 664
- The things humans are good at: recognition of faces, recognition of hiking trails in forests and mountains, and reading text without hesitation in hundreds if not thousands of different typefaces. He was unusually prescient with that third one; it is the basis of today’s reCAPTCHA program.

This book was hugely influential on the way I think about AI in particular and intelligence in general. Although it is almost 30 years old now, it remains relevant because it goes through, step by step, how each of these processes works. I recommend it as a solid base.
