Artificial Intelligence versus the Human Mind | What's Left

The fear of robots taking all the jobs (and the silliness of those arguments) has been much discussed, including in What's Left This Week. Are robots replacing humans on production lines? Of course. But it is important to understand the logic that leads to this happening, because the problem isn't that robots are taking people's jobs.

Capitalism incentivizes companies to create jobs that require so little skill that virtually any individual can take on the work, and with a huge supply of potential workers, wages can be pushed down. Production line jobs don’t require any specific education or skill that one cannot learn on the job, but they do require a great deal of repetition and precision.

Robots are good at doing a single, simple job with a great deal of precision. They also do not strike, need breaks, or want parental leave. So it becomes a logical next step, where the investment (or the debt to finance it) is cheaper, to eliminate people who require wages and benefits and replace them with robots. (Although, where labour laws are weak, workers are often still much less costly than machines.)

What is happening is not so much that robots are being created to take human jobs, but that human jobs are being created that can easily be done by robots. There are important limitations to this paradigm. Robots are not being asked to do extremely complicated tasks, but simple ones … and a lot of jobs still require a great deal of complexity.

Thinking tasks are the definition of complex tasks, and this context is important when looking at artificial intelligence (AI). Kevin Kelly's latest article for Wired magazine (a magazine he co-founded) highlights just how limited artificial intelligence is and, indeed, how limited our ideas about artificial intelligence are. In fact, in many ways he puts into focus our limited understanding of the human mind.

First, he puts to rest the argument that AIs will take over. He discusses how computers are already more intelligent than us in some respects (calculating numbers, storing vast amounts of information), but that this is just a type of expertise or special skill. We will develop other computers that do other things really well (moving around in zero gravity, for instance), but that does not equate to developing a computer that will suddenly attain a level of intelligence that exceeds ours in every way.

The more interesting part is when he dispels the notion that the human mind is some well-rounded intelligence, that it is superior, and that one can create some sort of hierarchy of intelligence. Human minds, like dolphin minds, gorilla minds, and crow minds, have been evolving for thousands upon thousands of years, and they have been evolving to do specific things very well. Who is to say that the human mind is more advanced than the mind of a dolphin? After all, the minds of dolphins have been evolving for a longer period of time and are capable of mental feats we do not yet fully understand.

Kelly argues that the human mind is not a well-rounded intelligent mind, but a specialized one like that of any other living creature. As AIs are developed, we can't necessarily compare an AI's expertise to human expertise, because minds cannot be arranged in a hierarchy. That is not how intelligence works. Every type of mind is unique and should be appreciated for what it is good at.

Reframed, Kelly's treatise makes a strong case for how we should think about and treat each other as humans. Just because someone cannot complete the same mental tasks as quickly as you can does not mean they should be valued less.

Capitalism moves forward through alienation and the undervaluing of certain types of labour. Production processes segment labour so that we become alienated from that work and from the profit that is extracted from it. Robots are simply the next step in this process of alienation, one in which we cannot clearly see the labour that went into the production of a product. All robots and AI systems are made somewhere by actual human labour. The question should always come back to this: have we valued that labour correctly? Or did that machine replace a worker because the machine is produced (or coded) by low-wage labour?

The Myth of the Superhuman AI