RESEARCH

Generative Pre-trained Transformers (GPTs) are highly effective at generating content and increasing productivity. However, firms have reservations about their use in professional settings because of concerns about misinformation, confidentiality, and liability.

GPTs are among the fastest-adopted technologies in history, yet in the workplace, workers sometimes conceal their use of them. This lack of transparency is a major obstacle to regulating the use of GPTs at work and prevents firms and regulators from identifying and mitigating risks.

This project investigates the effects of using GPTs at work on workers’ perception of the risks and benefits associated with GPTs and on workers’ willingness to disclose their use of GPTs. The findings could assist firms and regulators in regulating GPTs in the workplace by promoting transparency.

Partners

HEC Paris Fondation

This project seeks to increase transparency about the way large digital platforms operate and govern their platforms, and how they influence their users. Emerging EU platform law imposes new obligations on large digital platforms such as Google, Amazon, Facebook and X. The Digital Services Act (DSA), which is part of this body of law, mandates that platforms demonstrate transparency in their content moderation policies and systems, and take measures to limit the spread of misinformation.

The Content Moderation Project (CoMo project) aims to access the content moderation data of large digital platforms to analyze their content moderation policies and systems in the context of the DSA. The project started with an audit of X’s crowdsourced content moderation system (Community Notes) and revealed the strengths and weaknesses of X’s content moderation tool. The project’s objective is to develop methodologies for auditing digital platforms’ content moderation policies and systems, covering the entire process from data access to the presentation of audit results.

Partners

HEC Paris Fondation

The ACT partnership aims to put artificial intelligence (AI) at the service of justice stakeholders to improve conflict prevention and resolution. The project brings together a multidisciplinary and international team of 52 researchers and 45 partners representing a range of stakeholders, including the world’s leading research centres dedicated to the implementation and use of technologies in the field of justice (cyberjustice), litigants and legal professionals (justice stakeholders), as well as the main users and developers of AI for justice in Canada.

David Restrepo Amariles is the lead researcher of sub-project 4, which investigates smart contracts and regulation technologies. Contract technologies that substitute computer code for trusted third parties are making significant progress, requiring changes to state legislation. The future of contractual obligations therefore calls for analysis by the legal community in order to facilitate their legal implementation.

The project “Citizen Involvement and User-Centricity in Rules-as-Code and Digital-Ready Legislation” aims to investigate methodologies for ensuring citizen and stakeholder engagement in both the retrofitting of existing legislation and the development of new digital-ready policies and legislation. To achieve this objective, the project concentrates on two policy areas within the Brussels-Capital Region: urban planning regulations, including the COBAT, and housing policies, with a particular focus on rules pertaining to student housing.

For each of these policy domains, the project will design and prepare the documentation and digital tools for two workshops aimed at testing citizen involvement and user-centric approaches in the digital transformation of current and future policies. The workshops will be prepared by an interdisciplinary team comprising advanced students in both law and computer science. The research will culminate in a presentation of the key insights derived from these workshops to a panel of policymakers in Brussels, including a representative of the Danish administration. Additionally, the project will result in the publication of its findings and lessons learned, contributing to the enhancement of the SimpLex project.

Partners

Agency for Digital Government

Privatech is an academic legal research project that aims to streamline privacy compliance and consumer protection by focusing on firms (data controllers and processors) rather than on consumers (data subjects). The project aims to ensure companies are able to translate the privacy policies disclosed to consumers into effective corporate compliance mechanisms. Privatech’s objective is to detect breaches of the General Data Protection Regulation (GDPR) in privacy documents.

The application could serve consumers, lawyers, data protection officers, legal departments, and managers in auditing the privacy documents of a company. Privatech will eventually encourage companies to design and monitor their data processing activities so that they are legal, comprehensive, and easy to understand. More importantly, it aims to embed privacy compliance in the back end of data flows: in other words, to ensure companies are informed of their data practices so they can take privacy-preserving decisions. Privatech allows managers who are not specialized in privacy protection to conduct a preliminary compliance assessment and detect potential issues requiring specialized advice.
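To give a flavour of what a preliminary compliance assessment can look like, the sketch below scans a privacy policy for a few of the disclosures that Article 13 GDPR requires. The checklist and keyword patterns are deliberately simplistic illustrations, not Privatech’s actual method, which would rely on far richer natural-language analysis.

```python
import re

# Hypothetical checklist: a handful of disclosures that Art. 13 GDPR
# requires in a privacy policy, each matched by a crude keyword pattern.
# A real auditing tool would use NLP models rather than regexes.
CHECKLIST = {
    "purposes of processing": r"purpose",
    "legal basis": r"legal basis|legitimate interest|consent",
    "retention period": r"retention|how long",
    "data subject rights": r"right to (access|erasure|rectification)",
}

def preliminary_audit(policy_text: str) -> dict:
    """Return, for each checklist item, whether a matching passage was found."""
    text = policy_text.lower()
    return {item: bool(re.search(pat, text)) for item, pat in CHECKLIST.items()}

sample = ("We process your data for marketing purposes on the legal basis "
          "of consent. You have the right to access your data.")
print(preliminary_audit(sample))
# Missing items (here: retention period) would be flagged for specialized review.
```

A manager could run such a check as a first pass, then escalate any flagged gaps to a data protection officer.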

Partners

Atos

The use of artificial intelligence in the legal domain is growing rapidly, to the extent that the European Union’s proposed AI Act explicitly considers its use in the judicial sector high risk. At the same time, AI for adjudication remains underdeveloped, despite significant efforts by the academic community to advance research in this domain.

The development of AI methods and tools remains expensive, data-intensive, and politically sensitive.
This project takes the perspective of civil law jurisdictions. Although civil law jurisdictions represent 60% of the world’s legal systems, it remains hard to capitalize on this mass of data, mainly because of language barriers, which reduce the potential benefits of belonging to the same legal family for the development of AI tools. In this project we examine the possibility of developing AI methods for exploiting court decisions that can be transferred across civil law jurisdictions. We explore four main tasks: anonymization, argument mining, predictive models, and legal explainability (i.e., justification of a decision by legal reasoning). We conduct an exploratory study on the case law of Belgian courts, whose decisions are multilingual, to examine the possibility of transferring multilingual models developed in one jurisdiction to other jurisdictions with the same language and civil law tradition (e.g., Germany, France, and the Netherlands).
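To give a flavour of the anonymization task, here is a minimal, hypothetical pseudonymization pass: party names are replaced with neutral placeholders before decisions are shared or pooled across jurisdictions. The supplied name list is an assumption for illustration; a real pipeline would identify names with a multilingual named-entity-recognition model rather than receive them as input.

```python
import re

def pseudonymize(decision: str, parties: list[str]) -> str:
    """Replace each known party name with a neutral placeholder.

    Toy stand-in for NER-based anonymization: names are given explicitly
    here, whereas a production system would detect them automatically.
    """
    out = decision
    for i, name in enumerate(parties, start=1):
        out = re.sub(re.escape(name), f"[PARTY_{i}]", out)
    return out

text = "Mr Janssens appealed the ruling against Ms Dupont."
print(pseudonymize(text, ["Janssens", "Dupont"]))
# → Mr [PARTY_1] appealed the ruling against Ms [PARTY_2].
```

Because the placeholder scheme is language-independent, the same pseudonymized corpora could in principle feed models trained across French-, Dutch-, or German-language decisions.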

Partners

University of Amsterdam 

This research area includes all the projects under the umbrella of Rules as Code (RAC), an approach to creating and publishing rules, legislation and policies in machine- and human-readable form.

Among them, SimpLex aims to provide public administrations with digital tools for administrative simplification and to build an easy-to-use database of regional legislation.
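As a minimal illustration of the rules-as-code idea, the snippet below encodes a made-up eligibility rule as both human-readable text (the docstring and comments) and executable logic. The rule and its thresholds are invented for illustration and are not drawn from any Brussels regulation or from SimpLex itself.

```python
from dataclasses import dataclass

@dataclass
class Dwelling:
    """Machine-readable facts about a dwelling."""
    floor_area_m2: float
    has_smoke_detector: bool

def eligible_as_student_housing(d: Dwelling) -> bool:
    """Hypothetical rule: a dwelling may be let as student housing only if
    it has at least 12 m2 of floor area and a smoke detector.
    (Illustrative thresholds, not an actual regulation.)"""
    return d.floor_area_m2 >= 12.0 and d.has_smoke_detector

print(eligible_as_student_housing(Dwelling(15.0, True)))   # True
print(eligible_as_student_housing(Dwelling(10.0, True)))   # False: too small
```

Publishing rules in this dual form lets administrations test legislation for gaps and contradictions, and lets citizens query their own situation directly.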

Partners

FARI, Easy.brussels, CIRB_CIBG

Many AI systems are “socio-technical”: they involve an interplay between technical and human factors, which in turn entails distinct challenges for the safety and fairness of these systems. While existing research and legislative proposals often suggest boosting the “socio” component, especially at the design stage, it remains unclear how this could concretely yield more “trustworthy AI” systems: ultimately, such systems remain market-oriented optimizing mechanisms, with incentives different from those valued in a “human-centric society”.

In this context, this research proposal suggests that progress towards “Trustworthy AI” will require the development and application of technical standards that help ensure the compliance of AI systems with human rights. These standards shall be the backbone of an approach stressing “compliance by design”, i.e., making sure that humans, and their rights, are respected at every stage of an AI system’s lifecycle, as well as in these systems’ inputs, outputs, and workings. Coupling a legal concept, human rights, with a technical framework such as standards is a challenge. Human rights law, which often entails a nuanced balancing of competing interests, is not readily transposable to a technical context. Integrating these rights into the development of AI systems will therefore require a complex act of translation that may alter their materiality or content, offering an opportunity to repurpose human rights for the AI age. For this purpose, the research will employ a multidisciplinary approach, integrating normative legal analysis with empirical data collection and computational modeling.

The use of AI in the legal domain is a key modern challenge, as evidenced by the fact that the European Union’s mooted AI Act explicitly considers such use “high risk”. Yet the development of AI for adjudication remains, in fact, under-developed, partly because research on and development of AI methods and tools in this domain remain expensive, data-intensive, and politically sensitive. In addition, language barriers reduce the potential benefits of applying AI models trained on a single jurisdiction to other legal contexts.

This project seeks to fill the gap in current AI and legal research by focusing on the development of models fit for AI implementation in civil law jurisdictions. While civil law jurisdictions represent a majority of the world’s legal systems (Graph 1) and share significant characteristics, the majority of AI tools available in civil law jurisdictions are based on methods and models developed in common law jurisdictions (including on common law datasets) that are retrained and transposed to civil law contexts.

One such jurisdiction is France. The French Cour de cassation, in partnership with the Ordre des avocats aux Conseils, has decided to harness the potential of artificial intelligence by making the court decisions contained in the case law databases it administers available free of charge for research into artificial intelligence.

The aim of the project is to study the workflow of cases heard by the Court using the potential of new technologies. With an initial duration of 18 months, it will support ongoing reflection on the role of the Cour de cassation.

The Court will provide researchers with pseudonymised pleadings and rulings in order to identify arguments and legal issues, as well as connections, and to attempt to objectify the notion of the complexity of a case.

Partners

Cour de Cassation, Ordre des avocats au conseil d’état et à la cour de cassation, HEC Paris, Polytechnique Paris, Hi!Paris 

A Secure Data Platform to Share your Data for Research and Policy in Accordance with European Law.

The European regulation on data governance (Data Governance Act, DGA) has applied since 24 September 2023. This regulation encourages the voluntary sharing of data for altruistic purposes, such as scientific research or improving public services, through data altruism organisations.

In this context, the Belgian Brain Council (BBC), which brings together 26 patient associations and 28 scientific societies, is launching the creation of the first data altruism organisation for brain and mental health data. This organisation will be responsible for managing the Brain Data Hub, a secure IT platform for collecting and sharing data. The BBC aims to be an ethical and trusted intermediary for patients who wish to make their data available for scientific research and for the creation of a Belgian brain plan.

Partners:

Belgian Brain Council

The LASO project aims to identify and develop a methodology to stimulate and support innovation with AI for the common good in hospitals, which are acknowledged as key elements within health innovation systems. AI-driven solutions have great potential to improve the common good within healthcare. Current AI solutions can already work with semantically complex data, analyze large volumes of information in limited time, and deliver consistent and precise outcomes that improve over time. However, as with any technology, these innovations carry risks, most importantly risks that run counter to the requirements of biomedical ethics: autonomy, beneficence, non-maleficence, and justice.

The main objective of the LASO project is therefore to de-risk innovation when implementing a new AI-driven solution within hospitals and other complex care organizations, by offering a systematic blueprinting process before an AI solution is acquired, developed, and implemented. This methodology will enable both AI-developing companies and healthcare organizations searching for AI-based solutions for the common good to select demand-driven use cases and anticipate various implementation challenges.

Partners

ULB, iCITE, VUB, SMIT, Sagacify, UZBrussel

BUILD

EXPERTISE

The members of the SmartLaw Hub are involved in training, seminars, and advisory work for the following partners:

  • Banking
  • Lawyers
  • Notaries
  • Doctors
  • Edenred
  • Deloitte
  • Baines
  • ATOS
  • Alpha Laval
  • KPMG - GAINDE
  • FARI - Getting Ready

Let's keep in touch!