e the world we live in. It will either pave the way for optimization and efficiency across many fields, or it will all go terribly wrong and we will use AI for malicious purposes.
At SXSW, I heard both scenarios described. Here, I will summarise examples of AI used for good and for less good, and offer some considerations on both sides of the argument.
The goal of artificial intelligence is to make machines solve problems that are now reserved for humans, as SXSW speaker Eric Horvitz from Microsoft put it. There are many professions that could benefit from this, and there are numerous successful intelligent machines that assist, support or even replace human beings.
Healthcare is a good example. Intelligent computers have been shown to detect illness in patients and suggest the correct treatment as well as, or more precisely than, human doctors. Horvitz suggested AI as a safety net beneath the doctor, ensuring correct decision-making and supporting doctors in their work, which often is a question of life or death.
On the more playful side, examples of artificial intelligence can be found in sports. SXSW speaker Diane Bryant from Intel demonstrated how an intelligent computer analyses a baseball batter’s swing using movement recognition. From this, the athlete can learn how to improve his or her performance.
AI for a good cause was also brought up by Bryant. She described how the National Center for Missing and Exploited Children (NCMEC) uses artificial intelligence in its search for missing kids. AI can identify and locate missing children faster and more accurately than manual recognition of images.
Within the field of agriculture, an interesting example of AI is FarmLogs (https://farmlogs.com) – a service that analyses data and makes predictions and recommendations on a field-by-field basis. The aim is to aid management and decision-making in the farming industry to optimize harvests.
Finally, driving and traffic deserve a mention as an area where AI can make a huge difference. Various sources estimate that around 94% of all traffic accidents are caused by human error or bad decisions (statistics from the US). Eric Horvitz argued in his SXSW session that “we can’t trust humans to drive cars”. The self-driving car to the rescue! Diane Bryant described the self-driving car as a ‘continuously learning system’ that learns from what it experiences in traffic and predicts occurrences with increasing accuracy. Whilst humans are prone to distractions and naturally make mistakes, the self-driving car can be programmed to take us from A to B. As the human error factor is eliminated, the roads become safer.
Whilst there is clearly huge potential for doing good through artificial intelligence, this technology comes with great responsibility. We’ve heard suggestions of how AI will work against humankind – think H.A.L. 9000, Terminator and Ex Machina. Such doomsday predictions are a bit far-fetched, according to SXSW speakers Porter-Price, Kinnucan and Dulny from Booz Allen Hamilton. But looking at the current AI landscape, it’s easy to spot uses of artificial intelligence that cross the line into an ethical grey area.
An example of ethically dubious use of AI can be found in the US justice system, where court rulings have been assisted by artificial intelligence. SXSW speaker Eric Horvitz spoke of intelligent machines that can make recommendations for judges based on predictions of whether the person on trial is likely to commit another crime. The ethical concern is whether ‘potential future criminal activity’ can be calculated from a person’s facial features and their background. Further, Horvitz highlighted cases where the computer performing the prediction was biased against, for example, people of colour.
Attacks on AI were also pointed out by Horvitz as a potential risk. Whilst computers may be programmed to support or protect human beings, the programming of intelligent machines can be altered maliciously. For instance, altering the AI in a self-driving car to read ‘stop’ signs as ‘yield’ signs, or to overlook them altogether, would turn a trusted machine into a danger to its owner and others. Other risk scenarios include AI manipulation of the stock market, or cyber-weapons used to attack military systems.
So here we are. Artificial intelligence has great potential to enhance our lives, but for every use of intelligent machines to improve the world, someone, somewhere will come up with a way to abuse AI to do damage. On the governing level, “concrete steps, like agreements on rules of engagement for cyber war, automated weapons and robot troops,” can be the solution to safe AI and intelligent machines that are made to do good, according to researchers within the field.
But although governing organizations have a significant role to play, I believe that it’s right here in the digital studios that we can make a true difference. Design studios like Jayway have a responsibility to stay informed about all the fantastic opportunities that come with artificial intelligence. And we are excited to use AI to its full potential with our client projects! But we also have a responsibility to be aware of the ethics surrounding intelligent machines – and to inform our clients about these so that together we can navigate the new opportunities safely and continue to create technology that improves the world.
It’s every digital designer and developer’s dream to build solutions that solve a problem, ease the workload on someone, or create a safer environment. AI is certainly a tool that will enable us to do that. The key is in choosing the right way to go in each client case and in making informed decisions towards the goal of improving lives.