AI and robotics are going to shape our future. Below are 10 issues that professionals and researchers need to address in order to design intelligent systems that benefit humanity.
Misinformation and Fake News
The flow of misinformation, together with our natural tendency to favor information that confirms what we already believe (a phenomenon called confirmation bias), is a threat to an informed democracy. Russian hackers influencing the US elections, the Brexit campaign and the Catalonia crisis are examples of how social media can massively spread misinformation and fake news. Recent advances in computer vision make it possible to fabricate a convincing video of, for example, President Obama. An open question is how institutions are going to address this threat.
Job Displacement
The scientific revolution in the 18th century and the Industrial Revolution in the 19th marked a complete change in society. For thousands of years before them, economic growth was practically negligible. During the 19th and 20th centuries, social development accelerated remarkably.
In the 19th century, a group in the UK called the Luddites protested against the mechanization of the textile industry by destroying machinery. Since then, a recurrent fear has been that automation and technological advances will produce mass unemployment. So far that prediction has proven incorrect, but job displacement has nonetheless been painful. PwC estimates that by 2030 around 30% of jobs will be automated. Under these circumstances, governments and companies should give workers the tools to adapt to these changes, by supporting education, retraining and job relocation.
Privacy
The importance of privacy has been all over the news lately due to the Cambridge Analytica scandal, in which data from 87 million Facebook profiles was harvested without consent and used to influence the US election and the Brexit campaign. Privacy is a human right and should be protected against misuse.
Cybersecurity
Cybersecurity is one of the biggest concerns of governments and companies, especially banks. In 2015, a theft of $1 billion from banks in Russia, Europe and China was reported, and around half a billion dollars was stolen from the cryptocurrency exchange Coincheck. AI can help protect against these vulnerabilities, but it can also be used by hackers to find new, sophisticated ways of attacking institutions.
Mistakes of AI
Last month, a woman in the US was struck and killed by an Uber self-driving car while crossing the street at night. Like any other technological system, AI systems can make mistakes. It is a common misconception that robots are infallible and infinitely precise. A common way for some professors in my old lab to greet their robotics PhD students was: "What have you broken?"
Military Robots
There is an ongoing debate about controlling the development of military robots and banning autonomous weapons. An open letter signed by 25,000 AI researchers and professionals calls for a ban on autonomous weapons that operate without human supervision, to avoid an international military AI arms race.
Algorithmic Bias
We have to work hard to avoid bias and discrimination when developing AI algorithms. A specific example is face detection using Haar Cascades, which has a lower detection rate for dark-skinned people than for light-skinned people. This happens because the algorithm is designed to find a double-T pattern in a grayscale image of the person's face, corresponding to the eyebrows, nose and mouth. This pattern is harder to find on a face with dark skin, where the contrast between those features and the surrounding skin is lower.
Haar Cascades are not racist (how could an algorithm be?), but the outcome can still feel insulting to many people. When programming these algorithms, we need to be mindful of their limitations, be transparent with users by explaining how the algorithm works, or use a technique that performs better on dark-skinned faces.
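The mechanism behind this bias can be sketched with a toy Haar-like feature: a two-rectangle intensity difference, which is the basic building block that Haar Cascades threshold on. This is an illustrative sketch, not OpenCV's actual detector, and the synthetic pixel values are assumptions chosen to mimic an "eyebrow over skin" region:

```python
import numpy as np

def haar_feature(img, top_rows, bottom_rows):
    # Two-rectangle Haar-like feature: mean intensity of the lower
    # rectangle minus the upper one. Real detectors evaluate thousands
    # of these efficiently via integral images; this is a toy version.
    return img[bottom_rows].mean() - img[top_rows].mean()

# Synthetic 4x4 grayscale patches (0 = black, 255 = white):
# a dark eyebrow region above skin, at two contrast levels.
high_contrast = np.array([[30] * 4] * 2 + [[220] * 4] * 2, dtype=float)
low_contrast = np.array([[30] * 4] * 2 + [[90] * 4] * 2, dtype=float)

top_rows, bottom_rows = slice(0, 2), slice(2, 4)
f_high = haar_feature(high_contrast, top_rows, bottom_rows)  # 190.0
f_low = haar_feature(low_contrast, top_rows, bottom_rows)    # 60.0
```

The lower-contrast patch yields a much weaker feature response (60 vs. 190), so a classifier that thresholds on such features is more likely to miss faces where the eyebrow/nose/mouth pattern stands out less against the surrounding skin.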
Regulation
Existing laws were not developed with AI in mind; however, that does not mean that AI-based products and services are unregulated. As Brad Smith, Chief Legal Officer at Microsoft, puts it: "Governments must balance support for innovation with the need to ensure consumer safety by holding the makers of AI systems responsible for harm caused by unreasonable practices". Policymakers, researchers, and professionals should work together to make sure that AI benefits humanity.
Superintelligence
Some tech leaders have expressed concern about the possible threats of AI. One example is Elon Musk, who claimed that AI is riskier than North Korea, words that drew strong criticism from the scientific community.
Superintelligence is generally described as a hypothetical state in which a machine recursively improves itself, reaching a point where it surpasses the most intelligent human by orders of magnitude. Some enthusiasts, like Ray Kurzweil, believe we will reach that state by 2045. Others, like François Chollet, believe it is impossible.
Robot Rights
Should robots have rights? If we think of a robot as an advanced washing machine, then no. However, if robots were able to have emotions or feelings, the answer is not so clear. One of the pioneers of AI, Marvin Minsky, believed that there is no fundamental difference between humans and machines, and that general AI won't be possible unless robots have self-conscious emotions.
One suggestion in the debate is that robots should be granted the right to exist and perform their mission, but that this right should be linked to a duty to serve humans. There is a lot of controversy in this area. Meanwhile, in 2017, the robot Sophia was granted Saudi Arabian citizenship, and even Will Smith flirted with her.