Also, instead of trying to program specific features, I'd feel more comfortable getting a low-level system in place with flexible learning abilities, and then teaching it appropriate things. It's a pain that AI has been dragged into speech processing with stuff like Eliza, when human speech is as natural to electronics as communicating in binary is to us, or riding a bicycle is to a fish.
This path leads to the Turing test and similar procedures. Of course there's a very easy way of passing the Turing test, but it may be regarded as cheating. But then what might evolution be on a biological and genetic level, but better ways to cheat?
If you want a model for distributed processing on a robot, consider each functional part as an object, as in Object Oriented Programming. Excellent simulation can then be achieved in a language as simple as VisualBASIC.
This paper describes the operation of a digital non-biological lifeform, with artificial intelligence, autonomy from human intervention after switch-on (assuming power maintenance) and the ability to develop unique behaviour patterns over time, pass hereditary data on to another generation, and resolve issues through dreams.
I have chosen a canine host as this best permits the description of the procedures required in the functioning of a digital non-biological lifeform.
When you survey the range of natural lifeforms that exist, nowhere can be seen an intelligence separate from its host. It is consequently rather bizarre that scientists have attempted to create artificial intelligence in isolation from a host in the belief that the host and the intelligence can be separated either in a biological or a non-biological lifeform. Intelligence is a subjectively applied description applicable only to a whole lifeform.
The Neural Network approach, an attempt to directly match the physical hardware of the brain using digital hardware, fails to acknowledge the fundamental differences between the analogue/biological and the digital/non-biological.
Subsumption suggests linking low-powered processors together, distributing the processing power across different levels of a unified system. I feel this lacks the elegance of a bioframe and at worst may exhibit little more than artificial behaviour. Both systems move from the most basic operations through to excessive complexity, learning in an accumulative manner by making mistakes (something impossible in evolutionary terms). I would prefer a system that maintained a similar degree of complexity throughout, with a capacity for development and a persistent elegance and level of operation from initiation to exhaustion.
AB may be exhibited merely with a single Light Dependent Resistor in a circuit. It reacts to light levels by altering resistance. Place three such devices on a small mobile unit and, with the addition of a few logic gates and relays, you can get your device to follow or evade light across a 2D surface. This is AB, and not AI.
The key to AI lies in the small conceptual leap that is required when the unity of the intelligence and the host is understood in both biological and non-biological lifeforms. In biological lifeforms, the program is the physical structure and the physical structure is the program. I am not speaking metaphorically and will describe what I mean through a description of how Spot operates.
The trick in the development of AI systems is to recognise the nature of the relationship between an analogue/biological lifeform and a digital/non-biological lifeform. If the mechanisms that operate out of necessity due to the analogue/biological nature of a natural lifeform can be translated into systems that operate to the same purpose within a digital/non-biological system, then viability may be established and a lifeform maintained. Rather than use a 'brand' of cybernetic technology, Spot is goal-orientated towards achieving a viable lifeform, and uses whatever means are necessary to get there.
Mechanisms may be translated, simulated, or a suitable alternative found (particularly with constructional materials). When the mechanisms have been equated, the basic methodology of AI Routing that underlies both non-biological and biological lifeforms will stitch together the technology and a viable autonomous lifeform may begin operation. Spot's AI Routing attempts to operate an alternative to the mechanisms of a biological lifeform using digital technology for the same end: the maintenance of a viable lifeform.
The main diagram gives a depiction of the main parts of a basic digital non-biological lifeform.
The device at the centre is not a central processor, but might more accurately be described as an Event Handler, acting rather as a clock crystal does within a computer, but in a more proactive manner, and functioning also as a Data Router. Just like us, Spot has one thought in his conscious mind at a time. Sometimes he is thinking of nothing. Sometimes a thought will enter his consciousness describing something that he has seen, or a reaction may be initiated and pass into his consciousness as a thought of an action. Only one event may be processed at any one time. This device is the Stream of Consciousness and in many ways has less processing power than almost any other part of Spot's system. It acts as the main router for packets of cognitive data, or thoughts, within Spot's system, and like us, it can only receive and despatch one thought at a time.
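As a minimal sketch (in Python rather than the VisualBASIC suggested earlier, and with all names hypothetical), the Stream of Consciousness can be modelled as a router that holds at most one thought at a time and despatches each packet to its destination without waiting for a reply:

```python
from collections import deque

class StreamOfConsciousness:
    """Illustrative sketch of the Event Handler / Data Router described above.

    Packets queue up, and each tick pulls exactly one packet and forwards it
    to the named destination, fire-and-forget: only one thought at a time.
    """

    def __init__(self):
        self.queue = deque()      # pending ThoughtScript packets
        self.handlers = {}        # destination name -> callable

    def register(self, name, handler):
        self.handlers[name] = handler

    def submit(self, packet):
        # packet is a dict with a 'to' field naming its destination
        self.queue.append(packet)

    def tick(self):
        """Process exactly one thought; None means 'thinking of nothing'."""
        if not self.queue:
            return None
        packet = self.queue.popleft()
        self.handlers[packet["to"]](packet)
        return packet
```

The router keeps no busy state: once a packet is despatched, it moves on to the next, which is what later allows the Motion Balance Processor to request vision data through it without blocking anything else.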
I shall begin to describe how Spot works by looking at how a thought comes in from Spot's vision system when he sees a cat.
Spot is no data-hoover. Whilst we, as analogue biological lifeforms, relate comfortably to the delights of an analogue colour spectrum and receive data in an analogue format through our eyes, Spot is a digital non-biological lifeform, and (just as the vision of a goldfish is related to that creature's viability) Spot requires a system of vision associated with his own viability.
The initial image Spot receives (colour can be regarded as of secondary importance for now) would be moderated by data from IR and radar sensors acting alongside a CCD camera. This would allow Spot to determine objects by matching distance information with visual information, and allow the choosing of a primary object (or objects) within the field of view, and usually, in the foreground.
This object would then be Vectorised, the outline only retained as a Vector-Map.
This would be processed using Morphing software against Vector-Maps held in a look-up table, in real-time.
Whichever morph required the least processing to achieve a match would be the closest fit. In this case, a cat.
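The least-processing-wins matching step might be sketched as follows, assuming outlines reduced to short vertex lists and a morph cost measured as total vertex displacement. The shapes, names, and cost measure are illustrative only; a real Vector-Map would be far denser:

```python
import math

# Hypothetical outlines as small lists of (x, y) vertices standing in for
# the Vector-Maps held in Spot's look-up table.
LOOKUP_TABLE = {
    "cat":  [(0, 0), (2, 4), (4, 0), (3, -1), (1, -1)],
    "ball": [(0, 2), (2, 4), (4, 2), (2, 0), (2, 0)],
}

def morph_cost(a, b):
    """Total vertex displacement needed to morph outline a into outline b."""
    return sum(math.dist(p, q) for p, q in zip(a, b))

def classify(outline):
    """Return the look-up entry whose morph requires the least processing."""
    return min(LOOKUP_TABLE,
               key=lambda name: morph_cost(outline, LOOKUP_TABLE[name]))
```

A slightly perturbed cat outline still classifies as a cat, which matches the point made below: viability, not perfection, is what matters.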
The cat in Spot's look-up table would be an object with a high Importance (Spot being a dog). The following data would then be despatched to the Stream of Consciousness to be routed from the Vision Processor to the Task Handler:
V2TH; cat; 10m; 5o; immobile; Imp5.
The immobility would be deduced by comparing the current data with previous data held in Vision Memory, which might consequently allow for the determination of movement. The position of the cat is given by the IR and radar sensors, in association with the degree by which the head is turned.
The look-up table could be added to, and objects on it could be forgotten (see later) to make space for others. Of course there is the possibility of an error, but most people, at some time, are confused by something they see. A 'cat' in the distance might turn out to be a cat-shaped piece of wood, or a ball may turn out to be a circle drawn on a wall. This is no problem. Viability is the key to the development of a lifeform, not perfection. The message is very short considering the amount of data that has been processed to achieve it, and is despatched to the Stream of Consciousness in a language that I have termed ThoughtScript. Much as Adobe's PostScript is a language for page description, so ThoughtScript is a language for the description of cognitive data, sent in packets within a digital non-biological lifeform for the handling of thoughts and actions.
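A ThoughtScript packet of the kind shown above could be parsed very cheaply. The field layout assumed here (route; object; range; bearing; motion; importance) is my reading of that single sample message, not a defined grammar:

```python
def parse_thoughtscript(packet):
    """Parse a ThoughtScript packet such as the sample Vision-to-Task-Handler
    message. Field order is an assumption read off that one example."""
    route, obj, distance, bearing, motion, imp = [
        field.strip().rstrip(".") for field in packet.split(";")
    ]
    return {
        "route": route,            # e.g. V2TH: Vision to Task Handler
        "object": obj,
        "distance": distance,
        "bearing": bearing,
        "motion": motion,
        "importance": int(imp.lstrip("Imp")),
    }
```

The brevity of the packet is the point: all the heavy vision processing has already happened, and only this small routable summary ever enters the Stream of Consciousness.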
This object has a high Importance of 5, taken from the look-up table, and given the relationship between a dog and a cat.
The Stream of Consciousness receives this Event and due to the Importance, if nothing more important is happening, passes it to the Task Handler.
The Task Handler will connect the appropriate Task to this Event and pass the Parts for that Task back to the Stream of Consciousness.
At initiation Spot has a basic collection of Tasks and Parts, the former allowing him to do basic canine things like walk, run, stop, bark, turn, look, pick something up in his teeth etc. The Parts are related to the ability of the constructed bodyshell to act in certain ways. A real dog may be able to lick its genitals, but Spot is unlikely to be quite that dextrous in initial models and may not have a mechanical tongue. Consequently the Part for 'Licking' and the Task of 'Licking Genitals' would not be present.
A Task would exist that would create new Tasks based upon the successful and unsuccessful results of other tasks. The Success is here comparable to the Importance used in the ThoughtScript of the Vision Processor (Tasks may also have an Importance rating). It may even be possible for Spot to have a Task to create a new Part by pushing his physical construction to its limits.
As the physical construction improved it may be possible to upgrade the Parts available to Spot. This, however, would be difficult if Spot had already developed a complex pattern of Task and Part use. It would be a shock to the system and may cause a system collapse, rather like the rejection of an artificial organ in a human. It may be necessary to include a Task that would specifically integrate new Tasks and Parts into the Task and Part memories. It may even be that Spot would reject some new data if integration proved too complicated.
Just as Spot remembers the importance of a cat or the success of moving towards a human to have his ears tickled, so it is important that unsuccessful Tasks are forgotten: simply wiped from memory when they prove to fail regularly. Spot learns and develops his behaviour as much by what he forgets as by what he remembers, and by what he doesn't do as by what he does. Core Tasks and Parts would be protected from deletion.
In the case of spotting a cat, the Task will be to run-10m-5o-barking. Such an action may even have a fixed success rate, regardless of how successful it really is, the success being not in the catching of the cat, but in the canine pleasure of chasing one. Chasing a cat makes Spot happy and this happiness is the Success value of the task. Consequently, whilst other activities (chasing parked cars) may rapidly be removed from Task memory as being consummate failures, cats will always be chased.
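The Success mechanics described over the last few paragraphs, decaying failures wiped from memory, core Tasks protected, and pleasure-fixed Tasks like cat-chasing immune to decay, might be sketched like this. All names and thresholds are illustrative:

```python
class TaskMemory:
    """Sketch of Task storage with Success ratings, as described above."""

    FORGET_BELOW = 1    # illustrative threshold for wiping a failed Task

    def __init__(self):
        self.tasks = {}  # name -> {"success": int, "fixed": bool, "core": bool}

    def add(self, name, success=3, fixed=False, core=False):
        self.tasks[name] = {"success": success, "fixed": fixed, "core": core}

    def report(self, name, succeeded):
        """Update a Task's Success after an attempt, forgetting failures."""
        task = self.tasks[name]
        if task["fixed"]:
            return                    # pleasure-fixed: chasing cats never decays
        task["success"] += 1 if succeeded else -1
        if task["success"] < self.FORGET_BELOW and not task["core"]:
            del self.tasks[name]      # consummate failure: simply wiped
```

Chasing parked cars decays away after repeated failure, while cat-chasing survives any number of uncaught cats, which is exactly the asymmetry the text describes.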
Such a preference may be built-in to Spot at initialisation, but repeated success might 'fix' the Success value of some behaviours to the point where the behaviour becomes repeated out of pleasure, even if it ceases to yield a result. This may be the first digital psychosis.
The Stream of Consciousness receives the ThoughtScript from the Task Handler and passes it to the Motion Balance Processor. And then it forgets about it. Our legs know how to walk and run. We don't consciously think about doing it after we initialise the activity, unless we get a major failure, walking into something and stubbing our toe, but our legs 'know' how to walk and run.
Consider the operation of the thought processes of Analogue Biological Lifeforms. Here is the conceptual leap. The program is not in the brain. The data is in the DNA and in-coming through life, but the program is the physical structure (the Bioform). This is no metaphor. Our structure is our program, physically, and perhaps within the structure of the brain, the biological Task Handler, itself. Hence the key to AI lies in AI Routing, just as the key to biological intelligence lies in the Routing inherent in the physical structure of the Bioform.
The physical layout of our bodily parts is in itself our program. We can walk because of the design of our bodies and the design of the muscles and tendons inside them. Our legs can no more act like our ears than a spreadsheet can act as a word processor. All we need do is use them, requiring the equivalent of turning a computer on and loading the program. When we learn to use our legs, we can walk (we do not 'learn to walk') and we don't forget it unless something goes badly wrong.
The Motion Balance Processor knows how to balance, walk, and run from initialisation as we have programmed it for basic operation the way a child is helped to take those first few steps. It can stop Spot from falling over in most cases. The data for this is fed in initially from simulations and held as core data, although it may be amended by Spot for specific terrains, or unusual individual behaviour patterns. This data is retained rather like the Tasks and Parts utilised by the Task Handler, although here it is specific to the Tasks of motion and balance.
It is possible that Spot may fall and be unable to get up again, but this is acceptable. Sometimes we fall over and cannot get up. There is no development (as there would be with subsumption) from a barely viable crawl through to a 'discovered' advance on this, which may be an advanced and very fast crawl. We teach Spot how to use his legs as we would urge a baby to walk when crawling might seem so much less painful.
On Spot, the legs would be driven by small motors and/or compressed air controlled by the Motion Balance Processor. They would be jointed in the usual places, but in place of haunches would be to some extent detached from the main bodyshell, in that they could slide up against the body and angle the top ends of the legs over the back of the bodyshell. This action would offer the considerable shock absorbance required during fast motion.
Yes, Spot runs as well as walks in a natural manner. Here it is very important to use inertia, gravity and momentum, rather than to fight them, and not to inch forward like a constipated duck. The main power cell, encased in an oil-filled cavity or on a moving sledge, would move backwards and forwards (and perhaps sideways) to adjust the centre of balance, and would be controlled in association with the legs. This aids balance and improves the potential for movement. Starting requires inertia to be overcome, whilst stopping would bear more than a passing resemblance to a Tom and Jerry cartoon as the legs dig in for shock absorption to soak up the momentum and retain balance, lifting back to stick up above the level of the top of the bodyshell. Once immobility and balance had been achieved, the legs would return to their standing positions in concert with the Centre of Balance.
The entire procedure would be based upon models originally simulated on a computer and downloaded into Spot's Motion Balance Processor's specialised Tasks and Parts memory. This is not 'cheating', but is required to match the complex programming inherent in the construction of the legs and sense of balance of biological lifeforms.
Where required, the Motion Balance Processor would call data from the Vision Processor to check the terrain. This would go through the Stream of Consciousness in the normal manner, as it is never tied-up with any individual piece of ThoughtScript. It is, after all, a router rather than a processor, and doesn't maintain a busy state waiting for any response from a despatched piece of ThoughtScript. Sensors on the feet could also supply useful data in this area.
Spot might start running forward, only to be knocked over if someone jumped out from behind a bush and collided with him. Depending upon the importance of such a collision relating to the Importance of the current Task, Spot may recover his balance and continue after the cat, or move the initial Task of cat chasing onto the memory stack, and substitute a new Task to deal with the collision Event.
Spot would not look for a collision all the time as with the data-hoover model, but would deal with it if, when, and as it happened.
As an example, a push to Spot's side might be dealt with in this way. The push threatens to destabilise Spot.
Spot quickly raises the legs on the other side, shortening them to absorb the shock of the push, and slides them back over the main bodyshell, returning them to the ground at an angle, at speed, to stabilise.
He then tips back, absorbing the shock again by lifting the opposite legs up the bodyshell, and finally returns both sets of legs to the immobile position in concert.
All such actions would be modelled on computer before initialisation, although Spot could learn new techniques as he developed, and forget any that didn't work in the field. It is possible that extendible and jointed bars may be fixed in Spot's side to extend and raise him at an angle that would allow him to regain his feet, to compensate for the loss of bodyshell dexterity in the transition from a 'soft' and flexible skin-and-bone bodyshell, to the 'hard' and inflexible bodyshell of a non-biological lifeform. At least until a more flexible bodyshell might be developed.
When any Task is chosen, the propensities of the individual will play a part in the choice that the Task Handler makes when faced with specific criteria. There may be many choices for dealing with a specific event, but recent experience, the success of some Tasks, and individual propensity will play a part in Routing the event to the most appropriate Task as a response.
New Tasks can be created, assembled from available Parts and previously successful experience. It should also be possible to allow for experimentation in the development of new Tasks, and even in the creation of new Parts. Tasks would be accomplished with a degree of success according to initial data relating to such typical canine behaviour as chasing balls, peeing up lamp-posts, running up to humans, being scratched behind the ears, chasing cats, and basic acts of curiosity. Tasks that repeatedly fail would be wiped from memory. Spot would live and learn. The programmed data, comparable to behaviour traits, instinct, and cognitive abilities underpinning creativity, would be at a low enough level for it to be impossible to gauge, after a run of unique interactions, just what Spot would do next.
The importance of propensity and its relation to Routing might be illustrated by the following. Suppose there is an area in the brain where an electrical signal enters at point A. It might exit at points B or C. If it exits at point B then the individual will have a greater propensity towards violence, but if it exits at point C, then there will be a greater propensity towards non-violence. This is an analogue biological system and so not a binary incident: there may be many such areas with an accumulative effect, or individual areas relating to the propensity for violence or non-violence in different types of situations.
If such a propensity is genetically derived, then there may be a channel from A to B, developed from data held in the DNA. Formative experiences may operate on this area to forge another channel from A to C, or to block up the channel from A to B. There may be a mixture of areas, some controlled from genetically inherited data, some controlled by formative experiences, particularly in youth, with the average or accumulative result determining the propensity for types of reaction in any given circumstances.
Whether this is an accurate depiction or not, it offers some indication of the importance of propensity in determining all reaction, and indicates that simply passing on propensity data massively affects the initialisation and individual development of the next generation.
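The accumulative effect of many such areas might be sketched numerically. Here each channel is reduced to a signed weight (positive favouring exit B, negative favouring exit C), which is purely an illustration of the accumulation, not a claim about how such channels would actually be encoded:

```python
def propensity(areas):
    """Accumulate the A->B / A->C routing of many areas into one propensity.

    Each entry is a signed weight: positive favours exit B (violence),
    negative favours exit C (non-violence). The weights stand in for
    channels forged by DNA or by formative experience.
    """
    return "B" if sum(areas) > 0 else "C"

# Genetically derived channels favour B, but formative experience has
# forged stronger channels towards C:
inherited = [2, 1]
formative = [-1, -3]
```

With only the inherited channels the accumulated propensity is towards B; once the experience-forged channels are added, the balance tips to C, which is the kind of average-or-accumulative result the text describes.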
Similarly, although the legs have their own processor and memory, capable of updating the simulated walking and running routines throughout Spot's life, only the basic routines should be included at initialisation, unless it was felt that specific amendments offered a better model than was originally supplied through computer simulation; idiosyncratic behaviour suitable only for the previous host should not be passed on. A new-generation Spot should learn to walk with greater viability partly through the core routines in the Motion Balance Memory present on initialisation, and partly through individual experience, rather than inheriting a parent's unique walking and running data.
When Spot sleeps, the Stream of Consciousness would turn off, and may hand over to a Stream of Subconsciousness, a shadow of the main system but with no input or output. This separately powered system would take Events from Spot's conscious Task memory and run through Tasks in attempts to simulate successful reactions. Many routines might be run through, with no interaction with the outside world, and new Tasks and Parts experimented upon.
Purposeful Dreaming might then be established. Just as when we cannot solve a problem or deal with an issue in our conscious minds, so difficult Events that were not successfully dealt with by Spot in real-time might be ported to the Stream of Subconsciousness. Here many different and experimental attempts might be made to find a successful resolution. Should a simulation achieve success, it might be ported back to Spot's main Task memory to be used when faced with such a difficult issue again.
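Purposeful Dreaming might be sketched as an offline loop: replay each unresolved Event against a simulator with no real-world input or output, trying variant Tasks, and port any success back to waking Task memory. The `simulate` function and all names here are assumptions for illustration:

```python
import random

def dream(unresolved_events, tasks, simulate, attempts=200, seed=0):
    """Sketch of Purposeful Dreaming.

    For each Event the waking mind failed to resolve, try candidate Tasks
    against `simulate` (an assumed predicate returning True when a
    task/event pairing succeeds). Resolutions found while asleep are
    returned for porting back into the main Task memory.
    """
    rng = random.Random(seed)     # dreams are experimental, hence randomised
    learned = {}
    for event in unresolved_events:
        for _ in range(attempts):
            trial = rng.choice(tasks)
            if simulate(trial, event):
                learned[event] = trial    # resolution found while dreaming
                break
    return learned
```

Events with no simulated resolution simply remain unresolved, just as some problems survive a night's sleep.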
In humans, our dreams, the windows of our subconscious, operate through symbolic and metaphorical language, often requiring the services of a trained psychotherapist to unravel. Here, Spot's dreams run within the 'metaphorical' language of simulation.
Uniquely, it should be possible to tap the datastreams of Spot's subconscious and to watch him dream by visualising the simulation on a screen.
Much as we might go to bed with a problem and wake up with the solution, or consult a therapist to help us interpret the issues we had buried in our subconscious, so Spot might find resolution to events and issues that proved insurmountable in real-time through his dreams.
When Spot has recharged, control would switch back to the Stream of Consciousness.
The ability to tap Spot's datastreams allows us to watch how he thinks, and communicate directly with him perhaps with an IR link and a Newton. He could be trained to recognise certain objects and to do things with them, just as a dog is trained to fetch a stick. This allows for direct interaction between pet and owner, although it would be counterproductive if more control was exercised than was entirely necessary. Spot is no radio-controlled device. Once turned-on, aside from the recharging (eating) and repair (medical treatment), Spot could and should exist autonomously, without the need for any human intervention.
However, should you choose a humanoid shell, or wish to make Spot talk, it would be important to consider that speech is essentially the vocalisation of a cognitive concept, and the Routing of such a concept to a look-up table of speech bears some relation to the Routing of Events to Tasks. Similarly, speech recognition requires the parsing of speech to establish the 'Event' in terms of geophysical objects and cognitive concepts. The 'noise' needs to be removed and the primary concept of the statement turned into an Event. Such recognition should be built from a basic core, but would require a fair amount of processing power to operate in real-time given that speech is associated with analogue systems and we are dealing with a digital system.
Simple speech relating to position and action offers a good core model, with an establishment of such data Routed to parcels of speech within a look-up table, and strung together.
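That core model, Routing the fields of an Event to parcels of speech in a look-up table and stringing them together, might be sketched as follows. The parcels and phrasing are invented for illustration:

```python
# Hypothetical parcels of speech keyed by Event fields; a vocalised
# statement is built by Routing each recognised field to its parcel
# and stringing the parcels together.
SPEECH_PARCELS = {
    "cat": "a cat",
    "10m": "ten metres away",
    "immobile": "sitting still",
}

def vocalise(event_fields):
    """String together parcels for recognised fields, dropping the noise."""
    parcels = [SPEECH_PARCELS[f] for f in event_fields if f in SPEECH_PARCELS]
    return "I see " + ", ".join(parcels) + "."
```

Fields with no parcel (such as an internal Importance rating) are treated as noise and simply dropped, mirroring the noise-removal step described above for recognition in the other direction.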
The neural-net model has been dispensed with as inadequate, and I would also suggest that the subsumption method is unintuitive and has its faults. There is no inherent virtue in low-power processing. We should use whatever processing power we need to. The Vision Processor and Motion Balance Processor will require quite a bit of fast computing power to operate in real-time, and this is fair enough given that we are simulating the programming inherent within the complex physical structure of the eye, and the tendons, muscles, and bones of a well-designed leg.
Our battery technology is poor, certainly in relation to the human bioframe. The key to human development has been our ability to do so much on such small amounts of energy, through our design, leaving us time to do other stuff. Spot could not be expected to match such a proficient use of energy-through-design, which is partly why dogs don't tapdance. They only have time to evolve to the point of chasing sticks when we provide their food and a place for them to sleep. Battery technology is improving though. Let it, and until it does, there's nothing wrong with having a Spot prototype on a power lead.
How far can you go? With the small conceptual leaps taken with Spot, further development is largely a matter of upgrading for greater cognitive capacity. Different artificial lifeforms with increasingly complex Task-options can be developed.
You can go as far as you want. Or as far as you dare.
Dr. David Harrison.