Brains of Sand    $0    Introduction to Analytic AI

This is a summary of my discovery of the function of the brain and the structure of thought, including consciousness and emotion. It is intended to be read by professional cognitive neuroscientists who are frustrated by the lack of progress and closure offered by current models. It does not really belong in any one neurocognitive sub-discipline, but has a foot firmly planted in several. It is 'system science' - the construction of theories/models that are, as Fodor puts it, multiply realisable.


According to the functionalist metaphysics that has come to dominate contemporary philosophy, cognitive science belongs to a class of 'special sciences'^ whose ability to make true statements, and to explain its view of reality in causal terms, does not rely on transitive reductionism, unlike more traditional disciplines such as chemistry, astronomy, or geology. These 'traditional' or 'hard' sciences are all reducible in some way to quantum physics and/or general relativity. In other words, the essential nature of cognition cannot be determined by analysing the material properties of the brain. Rather, cognition is an emergent property of the brain's structural organisation - ie its functional interconnectivity. Cognition is, above all else, a mechanism with both purpose and patent: a unified, functionally integrated machine (its design) encapsulating one or more uniquely inventive 'tricks' (its device), upon which its utility depends. The purpose of cognition is to enable the self to understand its world; its patent is the inventive method of achieving that goal. This method consists of using observations (facts) to modify expectations (beliefs).
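
To make that last sentence concrete: one standard way of letting an observation modify a belief is Bayesian updating. The sketch below is offered purely as an illustration of the general idea - it is not the specific formalism developed on this website, and the probabilities in it are invented for the example.

```python
# Minimal illustration: an observation (fact) modifies an expectation (belief).
# Bayes' rule: P(belief | fact) = P(fact | belief) * P(belief) / P(fact)
# The numbers below are invented purely for demonstration.

def update_belief(prior, likelihood, likelihood_if_false):
    """Return the posterior probability of a belief after one observation."""
    evidence = likelihood * prior + likelihood_if_false * (1.0 - prior)
    return (likelihood * prior) / evidence

# Belief: "it rained overnight".  Observation: "the grass is wet".
prior = 0.2                 # expectation before looking outside
likelihood = 0.9            # P(wet grass | rain)
likelihood_if_false = 0.1   # P(wet grass | no rain), e.g. sprinklers

posterior = update_belief(prior, likelihood, likelihood_if_false)
print(f"belief in rain rises from {prior:.2f} to {posterior:.2f}")
# prints: belief in rain rises from 0.20 to 0.69
```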

The canonical example of a 'pure' mechanism is a spring- or pendulum-powered clock. Its design (craftsmanship, technology) permits the telling of time, a function which relies on its device (the specific 'trick' or key transformation): the escapement, an energy 'slow-release' system able to mark out, automatically and accurately, known intervals of time from the moment you set the clock to 'the right time'. The key properties of automaticity and accuracy are both necessary for its instrumentality (ie its ability to 'do its job').

Unlike physical systems, cognitive (phenomenal, psychophysical) systems have an intentionality with multiply realisable ontologies - in plain language, a cognitive system of any particular specification is not limited to a single (eg biological) implementation** - it can be made from different kinds of substrate frameworks (ie 'construction kits'). Therefore, minds of the kind humans possess can, in theory, be implemented using computer (or any other 'sufficiently complex') technology.

Not every kind of substrate is suitable, of course. Computer technology seems a viable candidate because it shares many features in common with biology, eg its complexity. For example, a typical computer contains many millions of information-combining units (NAND gates). Each one combines two or more binary input values into a single output value^^. The NAND gate, as the atomic building block of the computer, has a function which closely resembles that of the neuron. Neurons, like NAND gates, combine multiple inputs into one output. This fact alone suggests that computers may possess roughly comparable information-processing capacity to brains, which contain many billions of neurons.
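
The resemblance can be made concrete with a toy example. The sketch below is illustrative only - the weights and threshold are hand-picked, and real neurons are vastly richer than this caricature - but it shows a two-input NAND gate alongside a McCulloch-Pitts style threshold unit wired to compute the same function.

```python
# A two-input NAND gate, and a simple threshold 'neuron' that computes
# the same function.  The weights/threshold are hand-picked for illustration.

def nand_gate(a: int, b: int) -> int:
    """Logical NAND of two binary inputs."""
    return 0 if (a == 1 and b == 1) else 1

def threshold_neuron(a: int, b: int) -> int:
    """McCulloch-Pitts style unit: fire (1) if the weighted sum clears the threshold."""
    weights = (-1.0, -1.0)          # inhibitory weights on both inputs
    bias = 1.5                      # resting excitation
    activation = bias + weights[0] * a + weights[1] * b
    return 1 if activation > 0 else 0

# Both units combine two binary inputs into one binary output identically.
for a in (0, 1):
    for b in (0, 1):
        assert nand_gate(a, b) == threshold_neuron(a, b)
        print(a, b, "->", nand_gate(a, b))
```

Because NAND is functionally complete (any Boolean circuit can be built from NAND gates alone), the fact that a threshold unit can emulate it is the usual textbook route from 'neuron-like elements' to 'general-purpose computation'.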

^O'Brien, G & Opie, J. (2012) The Structure of Phenomenal Consciousness. Adelaide University Press.

**there is no denying, however, that all such artefacts depend on human agency for their original provenance. This is the essence of the objection to AI first raised by Ada Lovelace (Augusta Ada King, Countess of Lovelace, daughter of Lord Byron). No matter how clever the artefact, it derives most if not all of its intelligent agency from its human creator, who made it so.

^^A computer made from household plumbing, or cables and pulleys, or even dominoes, might conceivably work, but it would probably need to be as big as a city to have comparable information processing capacity to either living brains or modern digital computers.  The earliest general purpose computing machines were enormous, consisting of banks of electromechanical relays which literally covered the walls of a large room from floor to ceiling. 


The Structure of this Website

This website consists of this introductory section ($0) plus nine numbered webpages ($1..$9), each one a self-contained section, together with a visual summary ($A). Several appendices ($B, $C, $X and $N) follow, each a worked example or supporting material.

$0. Introduction to Analytic AI (emulating biological intelligence)

$A. Visual summary

$1. Linguistic Computation

$2. Thoughts as Programs 

$3. Fractal Anatomy

$4. Evolutionary Cybernetics

$5. A Model of the Cerebellum

$6. From Movement to Consciousness

$7. Emotions as Functional Integration

$8. Automated Reasoning

$9. The Subjective Stance
-----------------------------------------------------------------

$B. Thoughts as Programs - Example

$C. Consciousness

$X. Emotions & Consciousness

$N. Neural Schematics

Scientists are for the most part unwilling to believe that brains are computers. This is not due to a lack of similarity between the two classes of system - BOTH brains and computers are machines based upon linguistic mechanisms. Instead, their unwillingness arises from the following major shortcomings of all known artificial computing machines:
(0.1) they don't exhibit consciousness and emotions, two of the most prominent and useful features of human minds, and probably of animal minds as well.
(0.2) they can't solve some problems that human minds solve rather easily, especially the effortless use of language. 

By raising objections that venture beyond issues of mere technicality, some influential investigators such as Thomas Nagel* and David Chalmers** have tapped into a deeper level of intellectual prejudice. In doing so, they appeal to our collective reluctance to accept that something as central to our identity as consciousness and emotionality could also arise in non-living machines. They do this in full denial of the fact that psychopharmaceutical and some medical interventions rely on the concept of the mind as machine, being designed to deliberately target specific brain locations in order to produce predictable effects on mental functionality.

In this discussion, these objections are countered by several techniques:
(0.3) the axiomatic use of 'the subjective stance' at all levels
(0.4) the creation of the first truly subjective subspace (the convolution^ of volition and perception), which is shown to resolve Libet's Paradox
(0.5) the use of higher order cybernetic principles, such as variable setpoints and drive-state reduction (a rough sketch of this idea follows the list).
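
As a rough indication of what 'variable setpoints and drive-state reduction' mean in control terms, here is a generic negative-feedback sketch in which the setpoint is itself adjustable and the 'drive' is the gap between setpoint and current state. It is not the specific circuitry proposed by TDE theory; the names and constants are invented for the illustration.

```python
# Toy negative-feedback loop with a variable setpoint.
# 'Drive' is the discrepancy between where the system wants to be (setpoint)
# and where it currently is (state); the controller acts to reduce that drive.
# Generic illustration only -- not the specific circuitry of TDE theory.

def run_loop(state: float, setpoint: float, gain: float = 0.3, steps: int = 20) -> float:
    for _ in range(steps):
        drive = setpoint - state        # drive = error signal
        state += gain * drive           # act so as to reduce the drive
    return state

state = 36.0
state = run_loop(state, setpoint=37.0)   # regulate towards one goal...
print(round(state, 2))                   # ~37.0
state = run_loop(state, setpoint=38.5)   # ...then a higher-order process moves the goalposts
print(round(state, 2))                   # ~38.5
```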

Unfortunately, the problems cognitive neuroscientists face aren't limited to abstract, higher order issues such as phenomenology. 
(0.6) No one can currently explain beyond reasonable doubt how neuronal interconnection forms functional circuits in the brain, and consequently there is no definitive consensus on the brain's learning mechanisms either.

(0.7) Only Anatol Feldman and Miroslav Dyer (the inventor of TDE theory and the author of this website) can currently explain beyond reasonable doubt how brains cause muscles to move. This is surely the most basic of all biomechanical mechanisms, yet it has (until recently) remained unresolved.

TDE theory offers a route past this impasse. It consists of 
(i) a fractal theory of neural architectonics, and
(ii) a theory of neural plasticity which does not rely on synaptic change theory (regarded here as inconclusive and ultimately implausible), but instead explains adaptation in terms of meta-inhibitory circuits, a more plausible alternative.

Scientific specialisation has encouraged an approach to AI which is sometimes called 'the synthetic route', in which we attempt to simulate intelligence. In synthetic AI, sometimes called AGI, we try to build intelligent software from existing algorithms, data structures and mathematical methods. Since IBM's Watson program won the TV quiz show 'Jeopardy!', beating its human champion opponents, no one can argue that synthetic AI doesn't work.

The alternative approach to AI, and the one followed here, is called 'the analytic route', in which the goal is to emulate intelligence. Under this paradigm, called ABI***, rather than use known building blocks to achieve the unknown, we investigate known intelligent systems, ie animal and human brains, and look for familiar computational processes- techniques that we recognise.  

In this website, a causal account (an explanation) of brain, mind and self is presented. This discovery is called the TDE theory, where TDE is a recursive acronym meaning 'TDE Differential Engine'. The TDE is unique in that, as well as demonstrating external veracity (ie empirical truth, proof by experimental data alone), it also possesses internal validity (ie all the bits fit together, and conform to common sense notions). It is, in the words of AI pioneer Allen Newell, a 'unified' solution.

*Nagel, T. (1974) What Is It Like to Be a Bat? The Philosophical Review, 83(4), 435-450

**Chalmers, D.J. (1995) Facing Up to the Problem of Consciousness. Journal of Consciousness Studies, 2(3), 200-219

***Artificial Biological Intelligence

^ a mathematical method of combining two signals to form a third signal. It is one of the most important techniques in signal processing.
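
For readers unfamiliar with the operation, a minimal sketch of discrete convolution follows; the signals and kernel are arbitrary examples chosen only to show the mechanics.

```python
# Discrete convolution: each output sample is a weighted, shifted mixture of
# one signal with the other -- (f * g)[n] = sum over k of f[k] * g[n - k].
# The two input signals below are arbitrary examples.

def convolve(f, g):
    out = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

signal = [1.0, 2.0, 3.0]
kernel = [0.5, 0.5]               # a simple two-point smoothing filter
print(convolve(signal, kernel))   # [0.5, 1.5, 2.5, 1.5]
```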

This is the most recent (2018) website in which I have tried to bring my discoveries to a wider audience. In it I present reasoned evidence for the positive case: that the brain IS in fact a computer, but one which augments known stored-program (Turing/von Neumann) architectures with neural cybernetics.

My previous websites on the same topic are, in order of publication:

www.tde-r.webs.com ...W[1]

https://chuckdemus.wixsite.com/chi-cog ...W[2]

https://cybercognition.webstarts.com/ ...W[3]

https://golemma.webs.com/ ...W[4]

https://mirodyer.simplesite.com/ ...W[5]

www.ai-fu.yolasite.com ...W[6]

www.biointelligence2.webnode.com ...W[7]


Synthetic vs analytic AI

Up until the 1980s, anyone planning to investigate Artificial Intelligence could proceed in one of two ways. Both of these methods could be characterised as being linguistically inspired*.

The first option is to start with computers, then manipulate them until they display sufficiently intelligent (brain-like) behaviours. Let's call this the synthetic route, because you are making (ie synthesising) the thing you want. In the early days of computers, there was much talk of creating 'electronic brains', using circuits made from the top technology of the time - thermionic valves and magnetic-core memories.

The second option is to adopt the opposite** approach - this time, start with brains, investigating their anatomy and physiology for features and functions that we can clearly recognise as 'computational' - ie similar to those we use to build computers. Let's call this the analytic route, because you are analysing (ie examining, investigating) the domain of interest for the presence of the thing you want. In contemporary medicine, researchers tend to use computational metaphors to explain various aspects of cognition, even when such use is unsound.

The scientific sub-discipline known as Artificial Intelligence has invested heavily in the former method. Not surprisingly, this strategy has been uniquely unsuccessful. The reasons why this is so, and why we shouldn't be surprised, are to a large degree a matter of common sense***. In the first case, you are making something new, something which does not yet exist, and is therefore hard to identify. In the second case, you are looking for something that you (a) know already exists and (b) can recognise.

To better understand the importance of this distinction, imagine for the sake of argument that your house has been burgled. To make an insurance claim for the items stolen, you must describe them. It is much harder to describe your valuables in words than to provide a photograph of them. In the former case, you must construct the correct imagery in the reader's mind; this depends on your choice of words and your ability to arrange them, as well as the imagination and vocabulary of the recipient. In the latter case, you appeal to the subconscious, automatic, all-or-nothing nature of visual recognition****.

The post-WWII period was one characterised by strong technological growth, a supersanguine era which, in the West at least, fostered an academic and industrial atmosphere of intellectual adventurousness. This post-war optimism is one potential explanation as to why science opted for the first, synthetic option and not the second, analytic one. This website represents a belated attempt to put right that collective wrong turn.

The analytic sub-type of AI as outlined in this website has been named ABI - Artificial Biological Intelligence - because it involves a deliberate attempt to imitate (or emulate) nature.

* AI pioneers John von Neumann (U.S.) and Alan Turing (U.K.) were both mathematicians engaged in WWII military research (Turing, famously, as a code breaker) whose work extended the field of linguistics to include mathematics and formal logic.

**the proper word is 'dual', which means 'complementary' - ie the one completes (is the complement of) the other. They are conceptually symmetrical.

***In the past, others (notably Thomas Kuhn and Karl Popper) also found it useful to 'look down the wrong end of the telescope at' (ie reverse engineer)  the scientific process itself.

****We often hear someone say, "I can't describe it in words, but I'll know it when I see it". 


© 2019 mirodyer@icloud.com