Domino’s now delivers pizza using driverless cars. Cool, right? But would you let a driverless car drop your child off at school? We look at how much you should trust machine intelligence.

Posted by Kayla Toh on 10 Oct, 2018

Our rapid approach towards a machine age is reflected in the surge of high-profile news reports on data-driven scandals. One case study that has been making headlines lately is the Cambridge Analytica scandal, which brought to light how algorithms can be used to exploit personal data. The story shocked many, but algorithms and artificial intelligence like this sit behind almost every aspect of our lives.

Algorithms are used in our supermarkets, mobile devices, courtrooms, hospitals, banks and more. This provokes the questions: how much are we relying on algorithms, and can we really trust them? Critical questions like these need to be answered if we’re to determine what kind of world we ultimately live in as we advance into the not-so-distant age of social media, algorithms and automation.


Last month, BrightLemon attended a talk by Hannah Fry at the Royal Society of Arts. Dr Fry is an Associate Professor in the Mathematics of Cities at University College London, where she works alongside a unique mix of physicists, mathematicians, computer scientists, architects and geographers to study patterns in human behaviour. She has used mathematical models to explore everything from love to flu epidemics. And now, with endearing enthusiasm, refreshing simplicity and admirable expertise, she helps us uncover the answers to ‘How to be Human in the Age of the Machine’.

 


Image: Dr Hannah Fry (image source: Hannahfry.co.uk)

 

Throughout the talk, Hannah referenced her new book, ‘Hello World: How to be Human in the Age of the Machine’, as she examined how algorithms are taking over society. People are becoming more reliant on them to determine finance, security, healthcare, transport and more. Hannah gave us a fair-minded account of what the software that now governs our lives can and cannot do. By looking at “the good, the bad and the downright ugly sides of algorithms”, we can judge how much confidence we should place in algorithms and whether or not they’re actually an improvement on human intelligence.

 

First and foremost, what are algorithms? The term sounds fancier than it really is. According to the Cambridge Dictionary, an algorithm is essentially just a set of mathematical instructions or rules that help to calculate an answer to a problem. Algorithms power machine intelligence, otherwise known as artificial intelligence (AI), which the Encyclopaedia Britannica defines as the ability of a digital computer or robot to perform tasks typically associated with intelligent beings.
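
To make the definition concrete, here is one of the oldest algorithms there is: Euclid’s method for finding the greatest common divisor of two numbers. It isn’t from Hannah’s talk - just a minimal illustration of “a set of rules that calculate an answer to a problem”:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder is zero."""
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # -> 12
```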

 

In a more unnerving translation, it means machines replacing humans. This may spark a scary vision of a post-apocalyptic robot world in your mind but, as Dr Fry emphasises, machine intelligence is not all doom and gloom. It can be used to create some mind-blowing inventions.

 

Algorithms and algo-rhythms

 

An astounding feat in AI innovation is David Cope’s ‘Experiments in Musical Intelligence’. A professor of music theory and composition, David used machine intelligence to replicate the music of famous composers, such as Johann Sebastian Bach. He created an AI-generated version of Bach’s music, and it’s practically impossible for the average person to tell the difference between the machine’s track and an original Bach track. If you don’t believe me, you can try it for yourself by listening to the Bach-style chorale here.

 

The algorithm David used in his experiment is actually similar to the one our phones use for predictive texting: your smartphone predicts what you want to type next by scanning the patterns in your previous messages.
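
Neither Hannah nor David published this exact snippet, but the core idea behind both predictive text and Cope-style generation can be sketched as a simple frequency model: look at which word has most often followed the current one, and suggest it. A minimal Python sketch, with a made-up message history standing in for your texts:

```python
from collections import Counter, defaultdict

# A toy message history (invented for illustration).
history = "see you soon . see you at the station . meet you at the pub"

# Count which word tends to follow each word.
following = defaultdict(Counter)
words = history.split()
for current_word, next_word in zip(words, words[1:]):
    following[current_word][next_word] += 1

def predict(word):
    """Suggest the word most often seen after `word`."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("you"))  # -> 'at' (seen twice, versus 'soon' once)
```

Cope’s system is far more sophisticated, of course, but the principle - predicting the next element from the statistics of past sequences - is the same.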

 


Image: Johann Sebastian Bach by E.G. Haussmann (image source: Wikipedia)

 

How much should we rely on algorithms?

 

Humans place blind faith in machines. For example, algorithms are regularly used in court cases to predict whether a defendant will commit a crime again in the future and whether they’re a risk to society. But would we prefer a human or a machine determining our fate under the law?

 

Hannah argues that there are pros to using a machine, because a machine can be unbiased, consistent and balanced. However, the judge needs to know when to overrule it: machines don’t understand context the way humans do, and they can’t see the world in the same way that we do.

 

To illustrate the point, Hannah gave a striking example of a court case in which a machine rated a 19-year-old who had consensual sex with a 14-year-old as a high risk to society. He received a jail sentence of 18 months because, in the logic of the machine, a young defendant is more likely to commit a crime again in the future. Yet the machine rated a 36-year-old man who had sexual intercourse with a 14-year-old (a 22-year age gap) as a low risk to society who should avoid a jail sentence altogether. It’s clear that there’s still a very big gap between human judgement and AI.

 

Machines have great potential to do a lot of social good, but it’s inevitable that they will make mistakes, and we can’t always rely on ourselves to know when to prioritise AI and when not to. For example, AI is being used by the police for facial recognition, yet it has had a 98% failure rate - and there is still a scheme to roll it out onto the streets. This is why Hannah insists that we can’t be too trusting of AI. We can’t treat it as a lone, infallible authority; we also have to consider the trust issues and failings of the people creating and using the machines.
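
A quick back-of-the-envelope calculation shows why a failure rate like that is less surprising than it sounds. The numbers below are hypothetical, not from the actual police trials, but they illustrate the base-rate problem: when genuine matches are rare, even a reasonably accurate system produces mostly false alarms.

```python
# Hypothetical figures, purely for illustration - not real trial data.
faces_scanned = 100_000
genuine_suspects = 10
false_positive_rate = 0.001   # wrongly flags 0.1% of innocent faces
true_positive_rate = 0.9      # correctly flags 90% of genuine suspects

false_alarms = (faces_scanned - genuine_suspects) * false_positive_rate
true_alerts = genuine_suspects * true_positive_rate

share_wrong = false_alarms / (false_alarms + true_alerts)
print(f"{share_wrong:.0%} of alerts are false")  # ~92% of alerts are false
```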

 

Image: Police use of facial recognition AI

Learning to trust machines

 

Machine learning is similar to training a dog: you give it an objective and a reward, and it has to figure out all the in-between steps for itself. With AI, how far you trust the outcome depends on the circumstance. In some cases, it’s very important to be clear on how the machine reached its answer - the court case, for example. In other situations, you just need to know that the machine’s answer is extremely accurate, and it doesn’t matter how it got there - predicting whether or not someone will survive cancer, for example.
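
The dog analogy maps loosely onto what computer scientists call reinforcement learning. Below is a purely illustrative toy example (ours, not from the talk): the agent lives in a five-cell corridor, is rewarded only for reaching the end, and has to discover the in-between steps by trial and error.

```python
import random

GOAL = 4                      # the rightmost cell of a 5-cell corridor
ACTIONS = (-1, +1)            # step left or step right
q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}

for _ in range(500):          # 500 episodes of trial and error
    state = 0
    while state != GOAL:
        # Mostly exploit what has been learned, occasionally explore.
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), GOAL)
        reward = 1.0 if nxt == GOAL else 0.0   # the "treat"
        best_next = 0.0 if nxt == GOAL else max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += 0.5 * (reward + 0.9 * best_next - q[(state, action)])
        state = nxt

# The learned policy: always step right (+1) towards the reward.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)])
```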

 

To improve our trust in AI, we must improve how it’s regulated. Hannah believes that people are beginning to use AI the way medicine was practised before rules and safeguards came into place - in the sense that it is becoming morally bankrupt. Machine intelligence is currently untested yet still administered, and no one is stopping this from happening. If there were stricter regulations on how people use machine intelligence and the causes it’s used for, we would have fewer ethical concerns about it.

 

That said, not all the blame for these errors can be pinned on machines. We must remember that human failings are inevitable, and the people responsible for designing AI need to account for this.

 

Hannah noted that over the last 18 months there has been a push towards writing algorithms that explain other algorithms, so that we can trust AI more. But it’s still very important to know where to draw the line with our faith in machines.

 

Machines give us an easy sense of authority. Humans like cognitive shortcuts, and we like not having the responsibility of making an important decision, which is why some people defer to the decisions of a machine. Yet at the same time, we are quick to dismiss machines if they show any kind of flaw or make even a minor error.

 

Should we fear machine intelligence?

 

It’s easy to fear developments in machine intelligence and the rate at which it seems to be improving. Stephen Hawking and Elon Musk have recently been very open about their concerns that AI and machines could edge out humans. But Hannah argues that “worrying about AI in the future is like worrying about overcrowding on the moon.”

 

In other words, a real-life Terminator-type occurrence is highly unlikely, because we’re a long way from ever replicating the neurons of the human brain. Scientists currently can’t even replicate the neurons of a sea slug, which has only 280 neurons compared to our billions. Machines and robots are nowhere near a match for the human mind. Instead of worrying about a distant apocalypse, we need to focus on the AI problems we have here and now.

 

Image: The sci-fi film Terminator (image source: BBC)

 

But if we were to apply Moore’s Law to the pace at which AI develops, it’s possible we will see very sudden and fast-paced growth in AI technology in the future. Moore’s Law is the computing observation that the number of transistors on a chip - and, roughly speaking, overall processing power - doubles about every two years.
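
To see what that kind of compounding means, here is a quick illustrative calculation (the units are deliberately abstract - it only shows the shape of the curve): anything that doubles every two years is roughly 32 times greater after a decade and over 1,000 times greater after two.

```python
# Moore's Law-style doubling over two decades (abstract units).
capability = 1.0
for year in range(0, 21, 2):   # check in every two years
    print(f"year {year:2d}: {capability:6.0f}x")
    capability *= 2
```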

 

Image: Moore's Law applied to High Level Machine Intelligence (image source: Mind and Machine)

 

An example of excellent, if somewhat threatening-seeming, AI innovation is the chess-playing program AlphaZero. Yes, it’s an amazing accomplishment for AI, but Hannah states that we can’t begin to worry about an AI apocalypse based solely on this: a game of chess has a very clear objective, which is drastically different from the objective of a driverless car or of trying to solve crime in a city.

 

 

Maybe “artificial intelligence” is the wrong terminology and it should be changed to something like “intelligent assistance”. Perhaps AI sounds too threatening, and the name is what scares a lot of people off. But Hannah argues that without a title like this, AI wouldn’t get the funding it needs: “This isn’t really a revolution in AI, it’s a revolution in computational statistics. But that doesn’t sound as sexy.”

 

There are worries that AI will destroy democracy, and Hannah agrees that this should be a concern. She argues that we’re now communicating in a much more splintered way than ever before. We no longer have just one channel for the news, or just a handful of newspapers to choose from; people’s ideas of the world are much more personalised. Facebook has become the main outlet through which many people read the news, and because every profile is personalised, people see articles from different outlets with less regulation over the information they engage with. So when we converse with one another, we all arrive with mismatched stories - and that will affect democracy.

 

Knowledge is power, as Francis Bacon famously put it, and the concern is who will be wielding that power. One of the dangers of AI is the possibility of it being used as a military technology. This is a risk on many people’s minds - and rightly so.

 

Image: An unmanned aerial vehicle, otherwise known as a drone (image source: Wikipedia)

 

How does this affect digital?

 

Web development is becoming more complex as more digital platforms seek to incorporate AI through chatbots, bots, blockchains and screenless devices. It’s therefore becoming increasingly important for developers to possess at least some basic knowledge of algorithms.

 

Compared to other authors, Hannah gives a very balanced view of the effects of AI, celebrating the benefits it can bring to society as well as weighing its complications. AI is transforming our society, but the rate and extent to which it does so is something we need to determine early on, so that we can decide what kind of world we live in.

 

Do we want a world where secret decisions with ambiguous goals decide our individual and collective fates? No, we don’t. That would create a machine-determined future working against the best interests of the people it is meant to serve. To avoid this, we need to regulate AI.

 

AI is a strong force for good, but it should only exist alongside human control and oversight. We can trust machines to make excellent calculated decisions that speed up processes for people, but there is always room for error. That is why a machine’s output must not be taken as a final, omniscient answer; people must know when to step in and apply human judgement.

 

Fry has outlined the ethical issues that beset AI, but these are being worked on. The ethics of AI are being researched more deeply, and we’re figuring out ways to work ethically. For example, DeepMind, a leading AI company in the United Kingdom, has set up its own ethics board, and the Government has now established the Office for Artificial Intelligence.

Tags: Artificial Intelligence, Innovation, Technology
