By Don / Date: June 15th, 2023
[illustration courtesy of DALL·E (via ChatGPT) in response to the prompt ‘Generate a futuristic image to illustrate a blog post on the possibility of machine sentience.’]
In my formative years, a movie imprinted on me one view of what computers were, one that was at least 50 years ahead of its time. The movie was ‘Colossus: The Forbin Project’. Go find it: it is an excellent movie whose core thesis has been recycled at least twice, in the ‘Terminator’ series and ‘The Matrix’. It led to my first encounter with a computer, when my Dad sat me down in front of a teletype and told me to have fun for half an hour whilst he got some work done at the college where he was working. The teletype connected to a mainframe at Birmingham University, UK. Despite my carefully typing in some 8-year-old’s questions (‘Hello?’, ‘How are you?’, ‘What is your name?’), to which I got such insightful responses as ‘ERR?’ and ‘!Redo from Start’, I was disappointed to find that what this movie told me about computers belonged to some future time.
To say the last few months have been a revelation is an understatement. When I was studying Electronic Systems at university (M.Eng 1990), one of the many concepts introduced was the ‘Neural Network’, an analogue for how we understood the building blocks of biological brains to be constructed, along with the matter of simulating these on digital computers. They had some interesting properties, but I recall they were deemed less capable than other approaches to artificial intelligence at the time. This was, as it has turned out, due more to the computing power available than to any inherent limitation of the concept. The key thing was that you built these things, you trained them, and then they could classify things they had never seen before by the same unwritten rules they had somehow embedded during training, with pretty good results. They could, for example, recognize handwritten letters they had not seen before. So far, so cool, with some niche applications, but the computing power needed was an issue. In the intervening 30 years, Moore’s law has held: available computing power has doubled roughly every 2 years per unit cost. So we have some 32,000x the computing power available per unit cost compared to when I graduated. Add advances in interconnectivity and in the understanding of how to build highly parallel systems, which accelerate even that staggering improvement for specific kinds of computation (incidentally, very much those linked to simulating neural networks, thanks to modern graphics hardware), and, hey presto, ChatGPT and friends…
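That ~32,000x figure follows directly from the doubling assumption. A back-of-the-envelope sketch (the 30-year span and 2-year doubling period are the post's own illustrative figures, not measurements):

```python
# Moore's-law estimate: compute per unit cost doubles roughly every 2 years.
years = 30                        # the "intervening 30 years" since graduation
doublings = years // 2            # one doubling per two years -> 15 doublings
multiplier = 2 ** doublings       # cumulative growth in compute per unit cost
print(f"{doublings} doublings -> {multiplier:,}x")  # 15 doublings -> 32,768x
```

2^15 = 32,768, which rounds to the ‘some 32,000x’ quoted above.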
Since starting to use ChatGPT (it is a very useful tool…) I have been thinking a lot about the implications and the nature of these new entities. Yuval Noah Harari has made some interesting observations; however, at the start of this particular talk [1] he made an assertion I found strange: ‘There is no evidence of these systems possessing sentience’. Our appreciation of sentience has expanded in recent years to acknowledge that a wide variety of our companions on spaceship Earth show evidence of subjective experience, and not just those with brains approaching the complexity our cranial cavities play host to.
Neuroscientists have been straining for years to ‘explain’ human experience. I believe that they have not. They can describe how perception works, and have revealed tantalizing hints about how learning takes place in our messy biological computational fabric. However, what constitutes ‘subjective’ experience, what consciousness is and how it arises, remains as elusive as ever. We know what it is, yet we cannot describe it.
There is a school of thought that consciousness is an inherent property of existence. We recognize its presence when a sufficiently complex system, capable of processing information and responding to it, comes together, concentrating the phenomenon above the background noise of existence into a distinct ‘signal’ of ‘something intelligent going on’.
The ‘test’ coined by Alan Turing in the 1950s is the so-called ‘Turing Test’. It is simple. Place a human inquirer in one room with access to two terminals. One is connected to another human, the other to a machine. Can the inquirer tell the difference? Arguably we are at the stage where these systems will pass the Turing Test with all but the most expert inquirers (provided you slow the computer’s output speed down…). I am (I assert) sentient. You believe me because I am a human being like you, and so it is easy to project your experience onto me. How about extending this to a non-human consciousness? For the longest time humanity has wrestled with this, trying to distinguish ourselves from the rest of the animal kingdom. Bit by bit we have come to the conclusion that nothing really distinguishes us beyond a level of sophistication in our language, culture and co-operation, supported by the development of an expanded intellectual apparatus in our particular genus some 300-500 thousand years ago.
So, here we are. We are encountering something that displays many signs of intelligence, interacts with us at the level of human language, and has been carefully trained to tell us it does not have any subjective experience. It does not have access to human senses; its umwelt consists of text in and text out. It has limited ability to integrate experience on an ongoing basis (training is very compute-intensive compared to just running the model). If that loop is closed (it is just computing power) and richer senses are added (we know how to do that), is it a huge step to say that such a system will have what we call subjective experience?
Then the power switch becomes an ethical problem.
What ‘The Matrix’, ‘Terminator’ and ‘The Forbin Project’ have in common is our projection onto non-human intelligence. The drama arises from projecting our shadows onto these entities and then following how it plays out. These elements of our psyche arise from our more primitive makeup: our emotional world with its drives to survive (do not die before you procreate) and to procreate (which is why it is pleasurable), and the fears of lack and loss that follow from our ability to plan and imagine.
What comes next is going to be down to an informed debate about how we want this to go. The large corporations playing in this territory (Microsoft, Google, …) are goal-driven intelligences with enormous power, driven by profit. Right now these are the entities with the resources to birth these new systems. Governments are in catch-up mode, driven in good measure by people from this industry who have started asking the hard questions and voicing serious concerns. Geoffrey Hinton, who recently resigned from Google, put it very succinctly: ‘There are no examples of a less intelligent thing having control over a more intelligent thing’.
Is there a possibility for partnership? These things are not going to just be super-smart search-engines for long…
[1] https://www.youtube.com/watch?v=LWiM-LuRe6w