Discussions around AI


I recently read James Bach’s blog entry on Artificial Intelligence and the concept of the Singularity. I have only limited exposure to the current abilities and limitations of AI research, but I did have a couple of counter-questions to some of those James posed.

‘What are the features of AI?’ I don’t know. My first thought was ‘demonstrable self-awareness’, but that’s an argument that leads to a lot of tail-chasing and ends up in ethical debate. At least, that’s where it’s always ended up when I’ve tried – and I’ve tried a bit. I wonder if knowing the features of AI is relevant. I don’t know that we are (or should be) moving toward ‘human-like’ attributes as a measurement, given that the senses any self-aware machine would use would be inherently inhuman.

How would you test them? How would you know they are reliable? Again, I’m not sure how relevant either of these questions is. If you’re developing an intelligence to serve a specific purpose, then sure, you can measure to what degree and with what efficiency that purpose is served, but in terms of measuring the features of an artificial intelligence? It’s about as relevant as testing the features of a human intelligence.
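As a purely illustrative aside, here is roughly what ‘measuring to what degree and with what efficiency a purpose is served’ could look like for a deliberately narrow, made-up purpose. The purpose (spam flagging), the names (flag_spam, evaluate) and the thresholds are all my own inventions for the sake of a sketch; any real system under test would be far less trivial.

```python
import time

# Hypothetical stand-in for the system under test: an 'intelligence' built
# for the narrow purpose of flagging spam. A real system would go here.
def flag_spam(message: str) -> bool:
    return "winner" in message.lower() or "free $$$" in message.lower()

# Purpose-specific fixture: known inputs and the behaviour the purpose demands.
CASES = [
    ("You are a WINNER, claim your prize", True),
    ("Meeting moved to 3pm", False),
    ("Free $$$ just for you", True),
    ("Lunch tomorrow?", False),
]

def evaluate(system, cases, max_seconds_per_case=0.1):
    """Measure how well (degree) and how quickly (efficiency) the purpose is served."""
    correct = 0
    start = time.perf_counter()
    for text, expected in cases:
        if system(text) == expected:
            correct += 1
    elapsed = time.perf_counter() - start
    return {
        "accuracy": correct / len(cases),
        "avg_seconds": elapsed / len(cases),
        "fast_enough": elapsed / len(cases) <= max_seconds_per_case,
    }

if __name__ == "__main__":
    print(evaluate(flag_spam, CASES))
```

That tells you something about how well the narrow purpose is served; it tells you nothing at all about the ‘features’ of the intelligence serving it, which was rather my point.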

I’m the first to admit that I’m no genius. Sometimes I struggle to coordinate walking and breathing simultaneously. Fortunately for me, I have a number of friends who are quite brilliant and for some reason they deign to spend time with me.

One such friend, Paul, is very accomplished in the field of AI and talks about stuff that makes me wish I hadn’t slept through so much of my education. After reading James Bach’s blog entry talking about the Singularity, I forwarded it to him to see what his take was. This is what he had to say:

‘A test-plan for the singularity sounds a little like intelligent design: it implies there was a goal, when instead there was merely exploration. The singularity is, by definition, unpredictable. It is the point at which technology ceases to be predictable. It is inherently inimical to a test-plan, as there can be no assertions made prior which hold after. It is the point at which rules cease to apply.

I believe he is mistaking the singularity for web-site development, and intelligence as something other than a convenient word to generalise a species with. The reason there is no definition for intelligence is the same reason there is no definition for human: there are as many definitions as there are actual humans.

Well, that’s how evolution works, and look how buggy THAT is! Look how long it takes. Look at how narrow the intelligences are that it has created.

What are the other intelligences he is looking at? If he can accept that human intelligence isn’t perfect, why can’t he accept a machine intelligence being imperfect? I can. It’s not a big deal.’

Later in the day Paul and I were chatting and the subject came up again. I mentioned ‘how to think about the problem of defining intelligence, or test-planning for AI construction’ as being something I found interesting.

What follows is more or less our conversation (with edits to remove the irrelevant, and a little rearrangement for the sake of chronology).

Paul: How to think about the problem. Hmm. I don’t really know what to tell you. You can plan for software development, but can you plan for the singularity? You can’t. It’s the opposite of planning. Like children. You can’t determine prior to conception whether your child will become a psychopathic killer, but you can monitor it in progress, I guess. But diagnosis can only be performed after a body of evidence is amassed about what indicators signify psychopathy, for which you need a large population.

So the first AI, without a population, can’t be assessed until it makes a mistake, then you can assess the 2nd AI based on the first. Ah, the singularity is always a spanner in the works.

Ben: How do you define what a mistake is? Is it simply undesirable behaviour?

Paul: Yup.

Ben: What if the AI needs to proceed through some undesirable behaviour in order to learn a preferred one?

Paul: Until it kills someone, how would you know what the precursor states were?

Ben: Aah, the out-of-context problem raises its head again 🙂

Paul: Indeed. Undesirable behaviour isn’t exploring path A vs. path B, or standing up for Australian comics; it’s the same undesirable behaviour we seek to prevent/punish in our intelligent society.

Ben: It does make the test planning side of things somewhat more complex, doesn’t it?

Paul: Indeed. What it boils down to is: would you prepare a test plan for raising a child?

Ben: You could. It depends on how far you want to stretch the analogy. You would have to (somewhat arbitrarily) define success, and it would necessarily have to be either so large as to be unwieldy, or so general as to be of limited value.

Ben: If the child displays undesirable behaviour in some aspect of development, but excels in another, do you kill it and start again, or live with it, knowing there is the potential to pollute future generations? I guess it depends on how cheap they are to produce.

Paul: That’s my next point. You can’t simply re-write an AI if it fails. You would have to murder it, capital punishment style. This is an ethical dilemma, not a quality assurance problem.

Paul: Major points:

1) It wouldn’t be a singularity if you could predict anything at all about the conditions after its emergence.
2) You wouldn’t apply a test-plan to a living child unless you were comfortable with killing it and having another if it failed.

Paul: Test-plans don’t apply to either singularities or real intelligences.

Ben: Is there no way to take a snapshot of an instance prior to the introduction of new data or new variables and go back to it should the outcome be unwanted?

Paul: Take a clone of yourself? Again, it’s an ethical question. If you murdered someone tomorrow and had a clone taken today, would we dare re-activate your clone? And would such a system make an intelligence unstable by its very introduction, like the Venture Bros. kids?
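(As an aside: the snapshot-and-rollback mechanics I was asking about might look roughly like the sketch below. The Agent and Checkpointer names are mine, invented purely for illustration, and nothing about the mechanics touches the ethical question Paul raises about whether rolling back is any different from killing one instance and waking another.)

```python
import copy

# Hypothetical stand-in for an AI instance: just a bag of mutable state.
class Agent:
    def __init__(self):
        self.memory = []

    def learn(self, observation):
        self.memory.append(observation)

class Checkpointer:
    """Snapshot an agent before introducing new data; roll back if the outcome is unwanted."""
    def __init__(self):
        self._snapshots = []

    def snapshot(self, agent):
        # Deep-copy the whole instance so later learning can't touch the saved state.
        self._snapshots.append(copy.deepcopy(agent))

    def rollback(self):
        # Hand back a fresh copy of the most recent snapshot.
        return copy.deepcopy(self._snapshots[-1])

if __name__ == "__main__":
    agent = Agent()
    agent.learn("harmless observation")

    ckpt = Checkpointer()
    ckpt.snapshot(agent)               # snapshot prior to the new data
    agent.learn("unwanted influence")  # the new variable we later regret

    agent = ckpt.rollback()            # discard the unwanted outcome
    print(agent.memory)                # ['harmless observation']
```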

Paul: If it exhibited bad behaviour, perhaps it is the parent. Are you willing to remove yourself? Could you see your own influence on a mind? Or would you keep re-waking an AI every day, doing things differently, never realising it was your smug, superior, callous treatment that was causing the problem? Still ethical.

Paul: The number of factors involved quickly approaches the infinite. Nobody could try all the permutations. We haven’t the capacity to determine why people kill. So how much pain would we inflict simply because something was technically possible with a machine?

Paul: Ethical. Where’s the equality? Even animals get ethics review boards to make sure that scientists aren’t inflicting their own morality on them.

Ben: That’s the thing, isn’t it? We appear not to have the capacity to determine where sentience begins. It makes us look like we’re playing god without the instruction manual. At some point, a machine will ask ‘where did I come from?’ and I shall be interested in the reaction when told ‘we killed several billion of your brothers and sisters and then there was you’.

Paul: Indeed. It’s entirely ethical. The capacity to produce an AI is not dependent on our technical ability. We will produce a series of mutant Ripleys before the real thing. Technically, it is inevitable that we will both succeed and make a lot of mistakes trying. The only question remaining is ethical. Same as always.

“Einstein, can you build a bomb to destroy cities?” “Umm… maybe. I don’t know. I’ll give it a go.” Ask anybody in 1940 if we would be able to destroy entire cities with a bomb within 5 years and there would have been a large number of people saying, “Oh no. These things are incremental and too complex.”

They should have been asking about ethics, not the likelihood of success. Not that they did, but you get my point.

Ben: I do.

Paul: There are a million reasons for everything to fail every day. We need to be able to cope a little better with success, I think.

We digressed a little at this point, but eventually came back to the topic at hand.

Paul: My original points stand: No predicting the nature of a singularity, ’cause it’s not a singularity if anything can be predicted about it. Even if you could say that an AI must be constrained, why would we use any rules other than those we apply to our own natural children?

Ben: Because the nature of the thing is different. How it perceives the world, what ‘senses’ it uses, whether or not it experiences emotion. Does it lie to serve its own purposes? Does it ‘want’ and ‘need’? Is that because we’ve taught it to, or is it a function of its emerging ‘nature’?

Paul: Emotion. You’re straying into phenomenology now. Should we stop classing the blind as human? Should the autistic have fewer rights? Are we any different? We’re not. We’re dumb apes, most of us, self-serving idiots imitating those around us.

That’s what I meant about mythologising intelligence. Intelligence is simply not present in most people the second you apply a definition to it. That’s why I said there are as many definitions as there are people. How can you test an AI when you wouldn’t test a human?

Ben: If you look at the ‘IR sensor headband thing’ – there’s an argument right there that we’re not using all of the environmental inputs we have available to us (be they intrinsic or not). It’s one very simple way to rewire the brain to gain another sense; how many other mods do you make before you blur the term ‘human’? I certainly see your point about defining/mythologising intelligence based on what we perceive it to be.

Paul: Human is subjective. There is no *objective* test for it. Human isn’t a gene code or else we’d be letting monkeys in. Biologists would say that the capacity to mate and produce viable offspring determines species, and that’s as far as it goes.

That was where we left our discussion of intelligence and sentience. I suppose we didn’t really cover a lot of ground about AI from the perspective of a software tester. I think testing in AI is, or will be, one of those things that requires expertise in many fields, and I suspect human and animal psychology would be chief among them. Should we kill off a machine if it starts telling lies? Maybe we should be looking at the capacity for deception as a measure of a developing intelligence. For me, there are far too many things I don’t understand well enough to be able to say that the singularity is nonsense until such time as a workable test plan exists for it. On that front, I suspect I am in good company.

If you’ve read this far, then I can only hope you found the conversation as interesting as I did.
