Tuesday, September 15, 2015

The stopping-off point

What’s your stopping-off point?



I'm reading Lost at Sea by Jon Ronson. One of the things I like about his writing is how quickly he exposes people's 'stopping-off point' - what some might call 'core beliefs', but which I like to think of as the point where they decided to stop going any further and said 'this is as far as I'm going'.



It varies a lot. It can be religion, or football, or something your parents said. Or a belief in UFOs. It's something that makes people feel good about themselves; and what makes people feel good about themselves differs. It may be being right, or capable, or smart. And it tends to leak out when people talk about themselves - and Jon is good at spotting it.



I don't really get a sense of Jon's stopping-off point. Like a great actor, he's hidden himself so that we can see someone else's story.



For many people in learning, their stopping-off point is... learning. And this would be fine were it not for the fact that many of the problems we are trying to tackle (like performance, for example) are less and less affected by learning. More and more behavioural problems are becoming SatNav-like - that is, they are best solved by systematically eliminating learning. Learning is becoming a leisure pursuit. Like gardening.

In conversations about the 'Future of Work' the stopping-off point is often 'the organisation' (how the organisation of the future will look, work etc.) even though it seems clear that 'organisations' will cease to be the way that we organise activity.



I also find myself in conversations with people about technology and the future. Here, the stopping-off point is usually 'people'; futurologists stop off at the view that technology is somehow all about people, rather than the other way round.



If you can picture a cluster of bacteria sitting around saying 'so, we've invented humans - the question is what are we going to do with them?' - this is how these conversations sound to me.



I imagine that you might react against that analogy. I'm guessing because bacteria aren't intentional in the way that people are, and we tend to think that this kind of intentionality puts us at the centre of things. But I don't think so.

When I was at University I wrote a paper about ants. It rested on the observation that whilst the behaviour of a colony of ants is complex and seemingly intentional, this behaviour is not hard-wired into the ants - it's not in their genes - it's just a set of 'emergent behaviours' which happen when you put ants together.

The weird thing is, you're pretty much forced to talk about an ant colony in intentional terms if you want to understand it - even though you know the individual components are unlikely to be intentional at all. A similar thing happens in 'The Selfish Gene' - where Richard Dawkins acknowledges that genes don't literally have intentions, but it turns out thinking about them in intentional terms is the only way that we can understand what's going on.
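The ant-colony observation can be sketched in a few lines of code. This is a toy invented for illustration, not anything from the original paper: each agent follows one dumb local rule ('step halfway toward your nearest neighbour'), and no agent intends to form a group, yet tight clusters emerge - which is the sense in which collective behaviour can look intentional without any individual intention.

```python
import random

# Toy emergence sketch (illustrative, not a real ant model):
# twenty agents scattered on a line, each obeying a single local rule.
random.seed(42)
positions = [random.uniform(0, 100) for _ in range(20)]

def step(positions):
    """Each agent moves halfway toward its nearest neighbour."""
    new = []
    for i, p in enumerate(positions):
        others = [q for j, q in enumerate(positions) if j != i]
        nearest = min(others, key=lambda q: abs(q - p))
        new.append(p + 0.5 * (nearest - p))
    return new

for _ in range(50):
    positions = step(positions)

# The agents have collapsed into a handful of tight clusters -
# a pattern no individual rule mentions or 'wants'.
clusters = len(set(round(p, 3) for p in positions))
```

Nothing in the rule says 'form groups', yet counting the distinct positions afterwards shows far fewer clusters than agents - the 'intention' lives in the system, not its parts.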

My point is that systems - not just people - can be intentional.

Capital is one such system. We can think of capital as a human creation, but it's only really possible to understand what it does to culture if we think of what capital 'wants'.

Technology is the arch-system, arguably the intentional system to rule them all. Some of you may be familiar with The Hitch-Hiker's Guide to the Galaxy, in which Douglas Adams suggests that the world is just a big computer (run by mice) built to figure out a question (to which the answer is 42).

Technology is an intentional system. It really doesn't make much sense to talk about what people want to do with technology - any more than it would make sense to talk about what ants want to do with the colony. Or what bacteria want to do with humans. It only makes sense to talk about what technology wants to do with people. 

And who am I, a mere ant, to guess?
