What ethical obligations come with using AI?
Ethics and artificial intelligence must go hand in hand. The ethics of robotics and of tools deemed artificially intelligent has become a research topic in its own right. "Roboethics", or "machine ethics", raises a number of questions, and the utmost vigilance is required when you consider using artificial intelligence.
Most observers in the field of AI and ethics agree that AI should not replace humans, but serve them. They are generally in favour of artificial and human intelligence working together, with human intelligence remaining in a position to set limits on the potential and power offered by machines.
This presupposes an appropriate legal framework, both to guard against the deviations that any form of human-made intelligence can lead to, and to protect designers, users, and all those who are passively subject to the collection of personal data for machine processing. Machines are indeed liable to reproduce human decision-making patterns, including prejudice tied to the environmental, geographical, ethnic, religious and social conditions in which people live. Widely publicized scandals in recent years have highlighted the risk of biased artificial intelligence, with a significant impact on the economic and political lives of users.
Supranational institutions and supervisors are already working on the topic. The industry is also trying to self-regulate, sometimes applying an "ethics by design" approach: programming AI tools to respect appropriate ethical criteria from the design stage, before they are deployed.
The GDPR provides rules on personal data, including data used by AI. This is an undeniable first step. Issues remain, however, regarding how the algorithms used in AI interpret that data.
In France, the Villani report, commissioned by the government and submitted in March 2018, highlighted various precautions to be taken and offered ideas on, in particular, the ethical aspects of artificial intelligence. Notably, it recommends creating an institutional framework: a "Comité consultatif national d'éthique [National ethics consultative committee]" for digital technologies and artificial intelligence, modelled on the existing "Comité consultatif national d'éthique" (CCNE) for life sciences and healthcare. The French President, teaching institutions or the Committee itself could then refer questions of ethics in the use of AI to an ad hoc commission for review.
Measures at the national level are not enough, however; further measures should also be taken at the European level. The European Commission has stated that "new technologies should not mean new values". Brussels has thus called for guidelines to be drafted by the end of the year that are in line with the freedoms and values of the European Union, as well as with European principles on transparency and data protection. The aim is for the development of AI to take place in the most appropriate ethical and legal context. Work carried out by the European Group on Ethics in Science and New Technologies will also inform the drafting of these non-binding measures.
Companies will thus be able to rely on a more secure framework for developing AI, and will need to give their AI-based activities an appropriate legal structure.