Friday, January 10, 2020

What Does Technology Want?


My ‘one word’ is ‘understanding’. What is yours?

What I mean is that my life has revolved around this – it is what I most value in my interactions with people. I guess I come across as a perpetual theorist: constantly challenging convention, constantly looking for better models of the world. I have no doubt that this can be annoying.

My writing often reflects the things I have come to understand, less so the things I don’t.

Affective context has enabled me to answer many questions, but it has exposed one central mystery, even as it has cleared away the confusion around it.

Our old models were plagued by dualism: questions of soul vs body, of mind vs matter. These were never fully resolved; instead they merely evolved into questions about machines and consciousness, and at what point an inanimate object might magically transition into a thinking thing.

I now know such conversations to be a symptom of flawed thinking: we are not thinking things but feeling things. All cognition is sentiment. There is only a quantitative, not qualitative, distinction between ourselves and other creatures that feel.

But do you see the problem? In vaporising questions of rationality and consciousness (much as we vaporised questions of ‘soul’), we have exposed a deeper question: at what point does something become a feeling thing?

People are animists from birth: we attribute intention – feeling – to inanimate objects such as trees and cars quite naturally. This is nowhere near as superficial as you might imagine – in his studies of memory in Aplysia (the sea slug), Eric Kandel talks constantly of the organism's ‘fear’ and of it ‘feeling afraid’ – indicating that, whether we like it or not, a model of affective states is a prerequisite even for our most fundamental models of memory and cognition (see https://www.youtube.com/watch?v=rPtxuQnpB9A&t=861s for an example).

But where does this start? If Kandel thinks a sea slug feels, does he think a carrot feels? Where does matter transition from being a non-feeling thing to being a feeling thing?

I know the answer. I just don’t understand it. I know it because Nietzsche answers my question. In Beyond Good and Evil he writes:

“Will can of course operate only on will – and not on matter (not on ‘nerves’ for example): enough, one must venture the hypothesis that wherever ‘effects’ are recognised, will is operating on will – and that all mechanical occurrences, in so far as a force is active in them, are forces of will, effects of will.” (p.67)

And this is the most radical idea I know: ‘everything feels’; the universe is bound together by ‘will’. To summarise the route to this answer: now that we know that all creatures are feeling things, we have a new problem – a new dualism to contend with. How do we resolve it? Nietzsche’s response: feeling (‘will’) exists in all matter*.

Once more: all things are feeling things. The pebble lying on the beach has ‘concerns’ of a kind we cannot grasp – it desires, it ‘wills’.

Like you, I am used to thinking of concern as a product of nervous systems – an attribute of people and cats, at a pinch sea slugs – as an ‘emergent property’. So Nietzsche calls us on this – on our magical thinking, on our tendency to posit a ‘soul’ that springs up from nowhere. No, he says: it’s concern all the way down. Even atoms have sentiment.

And this is the thing I struggle to grasp – the most radical idea. I am sure you do too. Look how far removed it is from our normal understanding of feeling – i.e. as some kind of distributed cognitive state (though lest you be too quick to dismiss it out of hand, it’s worth remembering that the world described by quantum mechanics is stranger than we can imagine too).

Why does this matter? Well, it’s now a very practical problem: the difference between AI and you and me is that our cognition is sentiment-based while a machine’s is not. We cannot have a machine that is remotely like us until we have a machine that feels.

But how to create a machine that feels? Where would you start? How to create a machine that fears, desires or feels pain? Nietzsche’s resolution is elegant: he says, ‘Nick, you’re wrong – those “wills” already exist in matter’. So perhaps we just need to organise matter in the right way?

Or perhaps, more bleakly, we are busy organising it in another way – the way that technology wants. Perhaps what technology feels, what technology wants, is right in front of us – we just can’t relate to it: it already feels and wills, but not in a way remotely like us. I find this a terrifying thought: not that technology doesn't feel, but that it feels in a way that is incomprehensible to us. Strange as this may sound, wouldn't we be tempted to say the same of even creatures quite close to us? A bat, for example?

‘What does technology want?’ looks a lot like the question ‘What does God want?’ – we can only answer it if we assume He’s a lot like us.

I am sure this all sounds quite mystical – but I suppose that is how the limit of our understanding has always felt.


* This equivalence is fair, I think – earlier in Beyond Good and Evil, Nietzsche defines will in terms of its affective components.

Image: Brett Sayles




