
This research area includes all the projects under the umbrella of Rules as Code (RAC), an approach to creating and publishing rules, legislation and policies in machine- and human-readable form.
Among them, SimpLex aims to provide public administrations with digital tools for administrative simplification and to build an easy-to-use database of regional legislation.
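To make the idea of machine- and human-readable rules concrete, here is a minimal illustrative sketch; the rule, thresholds, and names below are hypothetical and are not taken from SimpLex or any real regulation:

```python
# Hypothetical Rules-as-Code sketch: a fictional housing-allowance rule
# expressed once, in a form both machines and humans can read.
from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    annual_income: int  # in euros
    resident: bool

def eligible_for_allowance(a: Applicant) -> bool:
    """An applicant is eligible if they are a resident, aged 18 or over,
    and earn less than EUR 30,000 per year. (Fictional rule, for illustration.)"""
    return a.resident and a.age >= 18 and a.annual_income < 30_000

# The same encoded rule can back a chatbot, a benefits calculator, or an
# auditing tool, keeping all of them consistent with the published text.
print(eligible_for_allowance(Applicant(age=25, annual_income=20_000, resident=True)))  # True
print(eligible_for_allowance(Applicant(age=17, annual_income=20_000, resident=True)))  # False
```

Publishing rules in this dual form is what lets administrations and citizens' tools apply the same authoritative source, rather than each re-interpreting the legal text.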
Partners
FARI, Easy.brussels, CIRB_CIBG

Many AI systems are “socio-technical”: they involve an interplay between technical and human factors, which in turn entails distinct challenges for the safety and fairness of these systems. While existing research and legislative regulatory proposals often suggest boosting the “socio” component, especially at the design stage, it remains unclear how this could concretely yield benefits in terms of “trustworthy AI” systems: ultimately, such systems remain market-oriented optimizing mechanisms, with associated incentives different from those valued in a “human-centric society”.
In this context, this research proposal suggests that progress towards “Trustworthy AI” will require the development and application of technical standards that will help ensure the compliance of AI systems with human rights. These standards shall be the backbone of an approach stressing “compliance by design”, i.e., making sure that humans – and their rights – are respected at every stage of an AI system’s lifecycle, as well as in these systems’ inputs, outputs, and workings. Coupling a legal concept, human rights, with a technical framework such as standards is a challenge. Human rights law, which often entails a nuanced balancing of competing interests, is indeed not readily transposable to a technical context. These rights’ integration in the development of AI systems will therefore require a complex act of translation that may alter their materiality or content, offering an opportunity to repurpose human rights for the AI age. For this purpose, the research will employ a multidisciplinary approach, integrating normative legal analysis with empirical data collection and computational modeling.

The use of AI in the legal domain is a key modern challenge, as evidenced by the fact that the European Union’s mooted AI Act explicitly considers such use as “high risk”. Yet the development of AI for adjudication remains under-developed, partly because research on and development of AI methods and tools in this domain is expensive, data-intensive, and politically sensitive. In addition, language barriers limit the transfer of AI models trained on a single jurisdiction to other legal contexts.
This project seeks to fill the gap in current AI and legal research by focusing on the development of models fit for AI implementation in civil law jurisdictions. While civil law jurisdictions represent a majority of the world’s legal systems (Graph 1) and share significant characteristics, the majority of AI tools available in civil law jurisdictions are based on methods and models developed in common law jurisdictions (including common law datasets) that are retrained and transposed to civil law jurisdictions.
One such jurisdiction is France. The French Cour de cassation, in partnership with the Ordre des avocats aux Conseils, has decided to harness the potential of artificial intelligence by making the court decisions contained in the case law databases it administers available free of charge for research into artificial intelligence.
The aim of the project is to study the workflow of cases heard by the Court, using the potential of new technologies. With an initial duration of 18 months, it will support ongoing reflection on the role of the Cour de cassation.
The Court will provide researchers with pseudonymised pleadings and rulings in order to identify arguments, legal issues, and the connections between them, and to attempt to objectify the notion of case complexity.
Partners
Cour de Cassation, Ordre des avocats au conseil d’état et à la cour de cassation, HEC Paris, Polytechnique Paris, Hi!Paris

A Secure Data Platform to Share Your Data for Research and Policy in Accordance with European Law.
The European regulation on data governance (Data Governance Act – DGA) came into force on 24 September 2023. This regulation encourages the voluntary sharing of data for altruistic purposes, such as scientific research or improving public services, through data altruistic organisations.
In this context, the Belgian Brain Council (BBC), which brings together 26 patient associations and 28 scientific societies, is launching the creation of the first altruistic organisation for brain and mental health data. This organisation will be responsible for managing the Brain Data Hub, a secure IT platform for collecting and sharing data. The BBC aims to be an ethical and trusted intermediary for patients who wish to make their data available for scientific research and for the creation of a Belgian brain plan.
Partners:

The LASO project aims to identify and develop a methodology to stimulate and support innovation with AI for the common good in hospitals, which have been acknowledged as key elements within health innovation systems. AI-driven solutions have great potential to improve the common good within healthcare. Current AI solutions can already work with semantically complex data, analyze large volumes of information in limited time, and deliver consistent and precise outcomes that improve over time. However, as with any technology, these innovations carry risks, most importantly risks that run against the four principles of biomedical ethics: autonomy, beneficence, non-maleficence, and justice.
The main objective of the LASO project is therefore to de-risk innovation when implementing a new AI-driven solution within hospitals and other complex care organizations, by offering a systematic blueprinting process before an AI solution is acquired, developed, and implemented. This methodology will enable both AI-developing companies and healthcare organizations searching for AI-based solutions for the common good to select demand-driven use cases and anticipate various implementation challenges.
Partners
The members of the SmartLaw Hub are involved in training, providing seminars and advising the following partners:
- Banking - Lawyers - Notaries - Doctors - Edenred - Deloitte - Baines - ATOS - Alpha Laval - KPMG - GAINDE - FARI - Getting Ready
The SmartLaw Hub aims to foster interdisciplinary research and collaboration between legal scholars, computer scientists, and practitioners on the topics of law and digital technologies. The SmartLaw Hub also organizes events and workshops to disseminate its findings and promote public awareness and engagement.
Let's keep in touch!