As the speed and scope of Artificial Intelligence (AI) development continue to increase, the risks of its misuse or of uncontrollable outcomes become more prominent. AI has the potential to achieve remarkable feats and provide enormous benefit to humanity; however, if it is not deployed and monitored carefully, the consequences could be severe, whether intended or not.
Artificial Intelligence systems are not a new technology; they have been developing for years. However, it is only in recent months that their usefulness has come into question and, above all, the dangers they may pose.
Is it necessary to regulate the development of Artificial Intelligence, and does it pose a danger to humanity?
The media boom around ChatGPT, the conversational AI system developed by OpenAI, has put this technology in the eye of the storm. It has many more practical applications than conversing with a computer program as if it were a human, or generating content elaborated from other published texts.
The advance of Artificial Intelligence – and the scant regulatory framework governing its activity at present – is leading many users to wonder whether we are really ready for it. Even some of the most important names on the international technology scene have shown their concern.
The ever-eccentric Tesla founder and Twitter owner Elon Musk has even backed an open letter signed by a thousand researchers. Among them are Apple co-founder Steve Wozniak and Skype co-founder Jaan Tallinn, who advocate that the development of Artificial Intelligence be paused for a few months. According to the letter, we are in “a dangerous race that is leading to the development of more unpredictable models with ever greater capabilities”.
Risks of Artificial Intelligence for Humanity
Musk considers AI one of the main risks to humanity, despite being one of the founders of OpenAI.
According to the signatories of that document, Artificial Intelligence systems may pose a profound risk to society and humanity, since they are not being developed in a careful and planned way.
Most of the major technology companies are working to develop their own technology, and there is a kind of uncontrolled race to be the first to impose their developments on the market. In many cases, according to the signatories, not even the creators of these systems understand their implications and scope. They even doubt that they can reliably control them.
Pausing development – though not research – would give governments time to legislate regulatory frameworks that ensure the safe development of these systems. Governments are also asked to anticipate the situations that the practical applications of these systems could entail.
Artificial Intelligence Regulatory Framework
At the moment, it is known that the European Union is preparing a regulation on Artificial Intelligence, the so-called Artificial Intelligence Act, which would establish a system of risk tiers. Systems deemed low-risk would need little regulation to operate, while others identified as too dangerous would be deemed unacceptable and banned.
At the moment, the legislation is under discussion by the European Parliament, so it may be years before a regulatory framework begins to be implemented. Some nations, such as Italy, have taken some measures, such as the recent ban on the use of ChatGPT until the use of personal data by this conversational AI technology can be regulated. It is still a one-off and isolated example.
That the safety of Artificial Intelligence should be regulated seems obvious. Some experts are calling for the creation of a supranational body capable of overseeing the development of technologies with dangerous potential, as already exists for certain high-risk industries such as aviation or pharmaceuticals.
In short, the development of AI need not be negative, but it can become so if it proceeds without regulation.
What are the main problems with AI?
Taking into account the situation explained above, it is possible to point out a series of risks that the development of Artificial Intelligence in an uncontrolled way may entail:
-More control. The emergence of “super-intelligences” able to control machines and make decisions autonomously could mean, according to some experts, that humans end up subject to their control.
What if human errors were to occur in the programming of Artificial Intelligences? What if they were to make risky decisions autonomously? The consequences could be unpredictable. What if, at some point, artificial intelligences – already superior to the human brain – decided to connect with each other, pooling their attributes and power? This would create super-intelligences that humans would be unable to control.
Similarly, if institutions or governments had access to this type of technology and decided to use it to exert greater control over their citizens, that would be another risk that could materialize in practice.
-Disinformation and loss of reality.
How can we know that the content generated by a conversational Artificial Intelligence has not been manipulated? If there were already problems in identifying fake news, the problem is now taken to another level. Depending on its level of development, an AI could be drawing on source content that is already false. It could even reach a point where it arbitrarily manipulates the veracity of content for a particular purpose. How could this be controlled?
One step would be to require published content to be labeled as AI-generated – as recently happened with the publication of the first AI-generated photograph in the newspaper El Mundo – but how many users would skip the accompanying text indicating that the image was generated by an AI, and believe it to be real? And what if it is shared without noting that it was created by an AI? This would give rise to situations of misinformation whose risk would be difficult to identify today.
-Reality and Data Protection. Related to the previous point, in addition to spreading false information, AIs could even create near-perfect fictional situations, where humans do not know if what they are seeing is real or not.
Situations of bias or racism could also arise that would be difficult to control. Moreover, in such complex scenarios, it would be virtually impossible to protect user privacy and prevent machines from accessing personal information.
Will machines take jobs away from humans?
Will Artificial Intelligences mean the elimination of jobs? Many experts – including Bill Gates – point out that the practical application of AIs should make it easier for workers to do their jobs. They might even allow them to work faster or shorter hours, but they would not replace their work. But how will this be controlled?
It is true that AIs will not be able to replace – at least for the time being – the added value of human labor, but they could take over mechanical tasks. They could also pose a risk to those who do not perform their work well.
For example, if an editor merely copies content from other web pages, without contributing anything or enriching the texts with other elements, that work could of course be carried out mechanically, and faster, by an Artificial Intelligence.
On the other hand, we must also consider that the arrival of these new technologies will mean the disappearance of some jobs, but also the creation of many others. This view is defended by many experts, who point out that it is part of the evolutionary process: throughout history, mankind has seen on many occasions how the arrival of machines eliminated some professions but generated others.
-Risks for certain industries.
The application of Artificial Intelligence in industries such as pharmaceuticals may carry additional risks, for example in experimentation on living beings or in drug development. The same would be true in the arms industry, where control of sensitive or nuclear weapons could be left to AIs that, at some point, could make a decision autonomously.
It is worth reflecting on all this, not to slow down the advance of technology, but to be aware of the risks and try to avoid them now, while its implementation is still in an initial phase and not yet massive. Technological development must bring benefits to humankind, but for this it may be necessary to establish a clear regulatory framework within which the technology can evolve safely for all.