Many AI systems are "socio-technical": they involve an interplay between technical and human factors, which in turn entails distinct challenges for the safety and fairness of these systems. While existing research and legislative regulatory proposals often suggest strengthening the "socio" component, especially at the design stage, it remains unclear how this could concretely yield benefits in terms of "trustworthy AI" systems: ultimately, such systems remain market-oriented optimizing mechanisms, with associated incentives that differ from those valued in a "human-centric society".
In this context, this research proposal argues that progress towards "Trustworthy AI" will require the development and application of technical standards that help ensure the compliance of AI systems with human rights. These standards shall form the backbone of an approach stressing "compliance by design", i.e., making sure that humans, and their rights, are respected at every stage of an AI system's lifecycle, as well as in these systems' inputs, outputs, and workings. Coupling a legal concept, human rights, with a technical framework such as standards is a challenge. Human rights law, which often entails a nuanced balancing of competing interests, is not readily transposable to a technical context. Integrating these rights into the development of AI systems will therefore require a complex act of translation that may alter their materiality or content, offering an opportunity to repurpose human rights for the AI age. For this purpose, the research will employ a multidisciplinary approach, integrating normative legal analysis with empirical data collection and computational modeling.