“If you want to get there, I wouldn’t start from here.”
There are, I suppose, two perspectives on AI: as an attempt to coax technology into working the way people do, or as a way of getting technology to do the things that people do.
As AI progresses we find that we can do more of the second thing whilst getting further from the first. That is: with more processing power we can get technology to do things like play chess or drive cars - but how it does them resembles less and less how people do them.
The reason for this is that we didn’t start in the right place:
Conventional AI works by pushing information into a system and figuring out how to access it quickly, then gets stumped when it has to identify the right information for the current context. Attempts to solve this problem are largely either brute-force or crowd-sourced (e.g. language translation, where the AI relies on human translations of similar-looking passages).
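The crowd-sourced approach can be sketched in a few lines: store human translations, then translate a new phrase by retrieving the translation of the most similar stored one. The phrase memory and the word-overlap similarity measure here are invented for illustration - real systems are vastly larger, but the principle is the same.

```python
# A toy sketch of crowd-sourced translation: reuse the human
# translation of the most similar-looking stored passage.

def jaccard(a, b):
    """Word-overlap similarity between two phrases."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

# Tiny "crowd-sourced" memory of human translations (invented examples).
MEMORY = {
    "where is the station": "où est la gare",
    "where is the hotel": "où est l'hôtel",
    "the train is late": "le train est en retard",
}

def translate(phrase):
    """Return the stored translation of the most similar source phrase."""
    best = max(MEMORY, key=lambda src: jaccard(src, phrase))
    return MEMORY[best]
```

Note what is missing: at no point does the system understand either language - it only matches surface similarity against what people have already done.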
Human processing works very differently: we start life with a fairly limited set of reactions to external stimuli (surprise, delight, fear and so on), then over the course of many years we differentiate and diversify these affective responses until everything we encounter can be encoded and compared according to a myriad of subtle affective responses (see the Affective Context Model).
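To make the contrast concrete, here is an illustrative sketch (not a working model of the Affective Context Model) of what encoding by affective response might look like: each experience is represented by the strength of the reactions it provokes, and experiences are compared by the similarity of those reactions. The dimensions and values are invented for illustration.

```python
# Illustrative only: experiences encoded as affective response
# strengths, compared by cosine similarity of those responses.
import math

AFFECT_DIMS = ("surprise", "delight", "fear")

def affect_vector(responses):
    """Encode an experience as its affective response strengths."""
    return tuple(responses.get(dim, 0.0) for dim in AFFECT_DIMS)

def similarity(a, b):
    """Cosine similarity between two affective encodings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Invented example values:
puppy = affect_vector({"delight": 0.9, "surprise": 0.3})
kitten = affect_vector({"delight": 0.8, "surprise": 0.4})
spider = affect_vector({"fear": 0.9, "surprise": 0.6})
```

On this encoding, experiences that provoke similar feelings come out as similar (`similarity(puppy, kitten)` is high, `similarity(puppy, spider)` is low) - but the hard part, which the sketch simply assumes away, is where the affective responses themselves come from.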
So here’s the problem: we don’t know how to design disgust or delight into an artificial system. As far as I can tell, nobody has even understood the problem. And since we cannot design even the simplest building blocks of human intelligence, we stand no chance of building a machine that works like we do.
That said, I’m not sure this is a problem for Technology (as distinct from technology - the everyday stuff we use). Technology’s relation to us is somewhat like our genes’ relation to us: we quite like the idea of a machine in our image, but Technology is not terribly concerned with reproducing the idiosyncratic processing abilities of a monkey designed to feed, fight, flee and reproduce.
Frankly, I’m just happy that my toys keep getting better.