In late July, Google dismissed software engineer Blake Lemoine for apparently breaching employment and data security policies. The consensus, though, is that he was fired for saying that AI had become sentient. Fans of the Terminator franchise will know that sentient AI doesn’t end well.
However, I was intrigued by this as a way of getting fired and decided to conduct my own experiment. It was time to have a conversation with AI.
I made an extensive search of North Cornwall bars and nightclubs but was unable to find any AI willing to converse with me. I have much the same problem with real people, so perhaps I was going about this all wrong. Maybe internet dating was the answer? It didn’t take much research to discover this wasn’t going to work either. Chat forums are awash with stories of internet dating users finding they were flirting not with a real person, but with some sort of bot designed to extract personal information, money, or preferably both. For my research, that would be perfect, but the risk of finding myself talking to a real person instead of a machine could have made for an uncomfortable conversation with my wife.
You can find good examples of AI conversations posted online – I use one when teaching, where I ask the students to listen to a recording of an AI/human interaction and figure out which side of the conversation is AI and which is real. Often, the listener guesses wrong – which means the AI is more human than the human. That’s a little freaky. This should be perfect for what I want, but these are just recordings of AI conversations. There is no easy way to talk to the AI yourself.
I had just about given up (very low boredom threshold) when I came across something that has exploded in the last couple of years. Exploded isn’t hyperbole either. Although AI art was first produced in the 1960s, it didn’t catch on until January 2021. And then… BANG… it seems like everyone is producing better and better AI to create original art. The speed of development is astonishing.
It works like this. You imagine something. You describe those thoughts in words to the AI. The AI produces a piece of original art based on those words. When asked to ‘imagine anything’, a large number of people apparently say ‘dog’. AI isn’t excited by ‘dog’. Give it something more challenging! Something like ‘space dog standing on a volcano juggling bananas’. That’s fun because nobody in the history of the world has been crazy enough to imagine that before, so you know the AI isn’t cheating by just stealing a picture of a dog from someone’s Instagram account. Within about 30 seconds you will have a new, original artwork that represents what you imagined. It might look like a cartoon, it might look like a photo. It might be awful, it might be stunning. More often than not, you have to concede that it is very clever. If you are interested, google ‘AI generated images’ and marvel at things like Teddy bears working on new AI research underwater with 1990s technology.
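To make that a little more concrete: Midjourney itself is driven through chat commands in Discord rather than code, but the same prompt-to-image idea can be sketched with the open-source Hugging Face diffusers library and the publicly released Stable Diffusion model. This is purely an illustrative sketch of the general workflow, not what Midjourney runs behind the scenes, and the model name here is just one common choice.

```python
# A minimal prompt-to-image sketch using the open-source diffusers library.
# Assumes a machine with a CUDA GPU and the diffusers + torch packages installed.
import torch
from diffusers import StableDiffusionPipeline

# Download a pre-trained text-to-image model (several GB on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # generation is far faster on a GPU

# Describe what you imagined; the model turns the words into a new image.
prompt = "space dog standing on a volcano juggling bananas"
image = pipe(prompt).images[0]

image.save("space_dog.png")
```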
My favourite game at the moment is to ask for one thing in the style of somebody who wouldn’t ever have created it. My favourite artist is Steve Hanks, whose watercolours of people, animals and children are just stunning. But Steve Hanks would never have painted the Terminator at the beach. Until now.
But is it intelligence? To quote David Holz, founder of Midjourney, one of the image AIs I’ve been playing with: ‘Every time you ask the AI to make a picture, it doesn’t really remember or know anything else it’s ever made. It has no will, it has no goals, it has no intention, no storytelling ability. All the ego and will and stories — that’s us.’
AI is smart, but is it intelligent? Is it close to being sentient? I wanted to find out. I asked the Midjourney AI to make me a picture of ‘AI looking at itself in the mirror’.
It seems that AI knows what it looks like and, personal taste notwithstanding, it looks pretty good. But why isn’t the AI that it sees in the mirror a collection of wires and chips and circuit boards? What about pages and pages of code? AI is not a person, even if it has been educated by consuming millions of things that people wrote, painted, photographed or drew. Has AI decided it looks like a person because it really thinks that it does? That would suggest a lack of self-awareness rather than intelligence. Has it guessed what I would like it to look like and is playing to my wishes? That would certainly imply some level of intelligence. But no – seeing things like that starts you on the road to thinking that AI has become sentient.
The reality is less interesting, for me anyway. I’m genuinely curious about how these image generation engines work, but the tutorials I can find are full of language like ‘score-based generative models’, ‘accurate mode coverage of the learned data distribution’, ‘parameterized reverse process’, or even the dreaded ‘differential equations’. I’ve never been very good at maths, just about scraping a pass at O level, but I have just enough grey matter to accept that I’m not going to fully understand how these things work. That same (albeit limited) brain tells me they work based on maths. Maths is all about quantity, structure, space, and change. It is not magic. It is not intelligence. It is not sentience.
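For anyone braver than me, that jargon comes from diffusion models. Very roughly (this is the standard textbook formulation, not anything Midjourney has published about its own engine): a ‘forward process’ gradually drowns a training image in random noise, and the ‘parameterized reverse process’ is a neural network trained to undo that noise one step at a time, which is how a brand-new image is conjured out of pure static.

```latex
% Forward process: add a small amount of Gaussian noise at each step t
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\right)

% Learned ("parameterized") reverse process: a network with parameters \theta
% predicts how to denoise, one step at a time
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\left(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\right)
```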
But, as I glance again at the selfie AI sent me of ‘itself’, I think this research project should probably end now, before I get in trouble with my wife.
Giles Letheren, Chief Executive Officer
Photo by Possessed Photography on Unsplash