Artificial intelligence (AI) has raised concerns since its inception. The possibility of its misuse for unethical ends, or the dystopia of a world dominated by computers beyond human control, has been the subject of dozens of films and science fiction novels.
There is no clear consensus on what an ethical artificial intelligence consists of or which principles it must adhere to
AI is already a reality, with applications in many areas: some of undisputed social benefit (healthcare or simultaneous translation) and others far more troubling (autonomous weapons or facial-recognition surveillance systems). Assuming that the presence of artificial intelligence in our societies will only grow, the debate about its ethical design is one of the big questions of our time. And the truth is, there are signs that, at least in the short term, do not encourage optimism.
A recent report from researchers at the Pew Research Center and Elon University concludes that ethical AI design is unlikely to be adopted in the next decade. This conclusion is based on a survey of 602 people involved in the development and regulation of this technology: engineers, programmers, business people, politicians, and activists.
The vast majority of respondents assume that, through 2030, AI will remain focused on maximizing corporate profit and on developing tools for social control.
The study also shows that there is no clear consensus among experts about what an ethical design of AI implies. For some, it is important that this technology be responsible, transparent, and accessible to everyone. For others, the ethics of AI need go no further than complying with the laws of each country.
Looking for a consensus
Political institutions and expert committees have been trying for years to agree on which principles an ethical artificial intelligence must comply with. In 2017, a group of scientists and thinkers met for months in Asilomar, California, to draw up a kind of decalogue for this technology.
After much deliberation, 90% of the participants approved a list of 23 principles that artificial intelligence should adhere to. These include security, human control, judicial transparency, respect for privacy, and the pursuit of the common good.
The European Union has also been dealing with this issue for some time. In 2017, the European Parliament approved a report calling for an ethical code of conduct for these technologies: an “ethical framework for the design, production and use of robots”. The eight principles that structure this code cover aspects such as the protection of human freedom, privacy, and the guarantee of equal access to AI.
Last April, the European Commission presented a proposal for a regulation on artificial intelligence. Its purpose is to protect the fundamental rights and values of the European Union against the risks of this technology. The document proposes banning facial recognition for surveillance purposes and algorithms that “manipulate human behavior”. The list of artificial intelligence systems to be banned also includes social scoring systems such as those already in use in China.