Surveillance and Ethics in the Use of Artificial Intelligence  

The implementation of artificial intelligence (AI) raises several ethical issues, including questions of bias, transparency, and accountability, as well as potential negative impacts on employment and society. Experts from Serbia and the EU discussed this topic, sharing examples of good and bad practice along with recommendations, at the annual “Privacy Week” organized by the NGO Partners Serbia.



Illustration: Milica Mijajlovic

“It is harder to recognize when an algorithm breaches human rights than when a physical or legal entity does” 

There is still no unified global regulatory framework for artificial intelligence (AI), and approaches to regulation differ between countries. Some have adopted dedicated AI laws and regulations, others have introduced AI-related provisions into existing legislation, and still others have adopted guidelines or principles for the use of AI. 

Serbia’s Commissioner for the Protection of Equality, Brankica Janković, shared her views on attempts to regulate AI, as well as on overseeing and promoting its ethical implementation amid growing political polarization and economic inequality.

“I believe that the effects of automation and AI on society are much more significant than we could conclude from public discourse. We are aware that its implementation carries great risks, as well as great potential. A breach of human rights by an algorithm is much more difficult to recognize than a breach by a legal or physical entity, and we are often not even aware that some kind of unequal treatment or breach of human rights has occurred,” the Commissioner says, adding that whatever one’s opinion of AI, no one can stop the rapid development of science and technology, and the only thing left to discuss is how AI can contribute to the greater good.  

She explains that one of the most visible and significant effects of AI can be seen in the labor market.  

“After the automation process had already affected low-skilled workers, AI is now affecting middle-skilled workers as well. Estimates show that around 70% of jobs in Europe will be replaced by AI or some other innovative technology. According to the World Economic Forum, automation and the implementation of AI should create 133 million new jobs by 2022, while 75 million are to disappear. If that is accurate, it means AI created 58 million more jobs than it replaced,” Janković said at the panel titled “Surveillance and Ethics in the Use of Artificial Intelligence”. 

“Authorities claim that the algorithm does not exist” 

In December 2022, the Serbian government issued the Ethical Guidelines for the Development, Implementation and Use of Reliable and Responsible Artificial Intelligence. Danilo Ćurčić of the A11 Initiative for Economic and Social Rights explained the extent to which the government obeys its own guidelines, using the implementation of the Law on the Social Card as an example.  

“In adopting the Law on the Social Card, the Serbian government did not obey the standards listed in the Ethical Guidelines, or those in the Law on Personal Data Protection; that is, it did not meet any of the conditions or basic principles listed in the Ethical Guidelines. Decision-making within the system is not transparent: we do not know the key data behind the notifications issued with instructions to social workers, or behind the decisions we have had the opportunity to see so far,” Ćurčić underlines. 


Source: YouTube/Partners Serbia

He mentioned an absurd situation concerning transparency that his team faced in communication with the Ministry of Labor, Employment, Veteran and Social Affairs.  

“We have been trying for a year now to find out what kind of algorithm decides on the exercise of rights and social protection services. Now the Ministry says that the algorithm does not exist. The social card, then, functions on its own, without any logic in its decision-making. However, a coalition has emerged between social workers and organizations offering legal advice. Social workers are forced to act in accordance with the notifications, but they know the circumstances of the users they work with and know that those users fulfill the conditions for social welfare. So when users are refused, social workers direct them to civil society organizations offering legal advice,” the legal expert explains.  

“Are you interested in safety or privacy, freedom of speech or freedom of assembly?” 

According to OSCE National Legal Officer Sanja Stanković, countries across the globe should learn from their own mistakes, test their systems in detail, and check the data before launching AI algorithms in fields related to human rights that directly affect people’s lives. 

“How to protect citizens and their rights from the misuse of data, mostly collected via systems based on machine learning and artificial intelligence, is the subject of a major debate worldwide, since the risks are huge. We are continuously faced with this partially false dilemma: are you interested in safety, or in privacy, freedom of speech, freedom of assembly and of thought, as if the two cannot go together. Reaching this balance is challenging, but a machine cannot do everything, and in the end you always end up with a person who can actually assess the context of the situation,” the panelist believes.    


Source: YouTube/Partners Serbia

In her opinion, everyone with access to citizens’ data needs to be educated and sensitized in matters of digitalization, as well as the protection of human rights.  

“The business model of platforms is based on monitoring users and generating huge amounts of user data, with processing possibilities unthinkable until now — from the platforms themselves to the so-called data-mining companies that make great money from it. Unlike state administration of data, which is regulated and controlled, the private sector is almost entirely unregulated and uncontrolled. And states are not banned from buying data from broker companies,” she underlines. 

“It is unbelievable that you launch a system you haven’t tested” 

In addition, Sanja Stanković presented concrete suggestions for the role that civil society, independent media, and the academic and international community could play in monitoring and overseeing the ethical use of artificial intelligence by governments. 

  • States should refrain from arbitrary access to mass data held by private entities, regardless of how simple and sometimes attractive it may be.  
  • All rights protected offline must also apply when the state wants to access citizens’ data. 
  • A ban on the general transfer of data from a platform or an operator to state bodies is recommended. 
  • Law enforcement bodies should gather data only in relation to a concrete situation and concrete suspicions. 
  • One of the key recommendations is that states refrain from introducing systems for mass surveillance and mass biometric data processing, because these could have serious consequences for a range of human rights.  
  • Metadata — namely, who communicated, when, and with whom — is in the hands of platforms and electronic communications operators, and these systems must have the highest level of protection.  
  • Another recommendation is that every instance of illegal surveillance be treated as a crime in the legal systems of member countries, and that evidence gathered in this way be inadmissible in further proceedings. 

Source: YouTube/Partners Serbia

Panelists warn that the consequences of mistakes and of a lack of attention in introducing new technologies are first felt by the most affected and most vulnerable parts of the population. 

“We are talking about people who cannot voice their opposition loudly enough. If some right were revoked from the general population, it would be a major issue. Here, however, we have a process running basically under the radar, where all these breaches of human rights we are trying to systematize occur on a daily basis, and we know nothing about them. Someone must be held accountable for everything that has happened, because it is completely unbelievable that the system was launched without testing, and that you then even brag about very dubious results,” Danilo Ćurčić of A11 explains. 

A journalist by day and a podcaster by night. She's not writing to impress but to be understood.

