AEQUITAS


Assessment and Engineering of eQuitable, Unbiased, Impartial and Trustworthy Ai Systems

(objective)

AI-based decision support systems are increasingly deployed in industry, in the public and private sectors, and in policy-making. As our society faces a dramatic increase in inequalities and intersectional discrimination, we need to prevent AI systems from amplifying this phenomenon and instead use them to mitigate it. For these systems to be trusted, domain experts and stakeholders need to trust the decisions they produce.

Fairness stands as one of the main principles of Trustworthy AI promoted at the EU level. How these principles, and fairness in particular, translate into technical, functional, social, and lawful requirements in AI system design is still an open question. Similarly, we do not yet know how to test whether a system complies with these principles, nor how to repair it when it does not.

AEQUITAS proposes the design of a controlled experimentation environment where developers and users can create experiments for:

  • assessing bias in AI systems, e.g. identifying potential causes of bias in data, algorithms, and the interpretation of results (see the sketch after this list),
  • providing, where possible, effective methods and engineering guidelines to repair, remove, and mitigate bias,
  • providing fairness-by-design guidelines, methodologies, and software engineering techniques to design new bias-free systems.

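As a concrete illustration of the first point above, here is a minimal, hypothetical sketch of a bias-assessment check based on demographic parity; the data, column names, and choice of measure are illustrative assumptions, not components of the AEQUITAS platform.

```python
# Minimal sketch of one bias-assessment step: measuring demographic parity.
# Column names and data are illustrative, not part of the AEQUITAS platform.
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  prediction_col: str,
                                  protected_col: str) -> float:
    """Absolute gap in positive-decision rates between protected groups."""
    rates = df.groupby(protected_col)[prediction_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical predictions of a hiring model, with 'group' as the protected attribute.
decisions = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],
    "predicted": [1,   1,   0,   1,   0,   0],
})
gap = demographic_parity_difference(decisions, "predicted", "group")
print(f"demographic parity difference: {gap:.2f}")  # 0.33 for this toy data
```

A gap close to 0 indicates similar positive-decision rates across groups; larger values flag a potential source of unfairness worth investigating.
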
The experimentation environment generates synthetic data sets with different features influencing fairness, enabling controlled tests in laboratory settings. Real use cases in health care, human resources, and challenges faced by socially disadvantaged groups further test the experimentation platform, showcasing the effectiveness of the proposed solution. The experimentation playground will be integrated into the AI-on-demand platform to boost its uptake, while a stand-alone release will enable on-premises, privacy-preserving testing of AI system fairness.
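
To make the idea of a tunable synthetic data set concrete, the sketch below generates labels whose dependence on a protected group is controlled by a single bias parameter; the generator, feature names, and bias mechanism are hypothetical assumptions, not the platform's actual generator.

```python
# Sketch of a synthetic data generator with a tunable fairness "knob".
# The bias parameter, features, and labeling rule are hypothetical.
import numpy as np
import pandas as pd

def make_biased_dataset(n: int, bias: float, seed: int = 0) -> pd.DataFrame:
    """Generate n samples where bias in [0, 1] lowers the positive-label
    rate of group 'B'; bias=0 yields group-independent labels."""
    rng = np.random.default_rng(seed)
    group = rng.choice(["A", "B"], size=n)
    skill = rng.normal(0.0, 1.0, size=n)           # a legitimate feature
    base_rate = 1.0 / (1.0 + np.exp(-skill))       # labels depend on skill...
    penalty = np.where(group == "B", bias, 0.0)    # ...minus a group penalty
    label = rng.random(n) < np.clip(base_rate - penalty, 0.0, 1.0)
    return pd.DataFrame({"group": group, "skill": skill, "label": label.astype(int)})

df = make_biased_dataset(n=10_000, bias=0.3)
print(df.groupby("group")["label"].mean())  # group B's rate drops by roughly 0.3
```

Varying the bias parameter yields a family of data sets on which assessment and repair methods can be benchmarked against a known ground truth.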

AEQUITAS relies on a strong consortium featuring AI experts and domain experts in the use-case sectors, as well as social scientists and associations defending the rights of minorities and discriminated groups.

(keywords) fairness measure  •  assessment  •  repair in AI systems  •  software methodologies for fair-by-design AI systems
(fields of science) artificial intelligence  •  software  •  social inequalities  •  ethics

Project

— acronym: AEQUITAS

project coordination

— coordinating unit: UNIBO
— project coordinator: Roberta Calegari

when

— start date: 01/11/2022
— end date: 31/10/2025
— duration: 36 months

URL & ID

— EC: CORDIS

status & type

— status: ongoing
— sort: competitive
— context: international

funding

— funding body: European Community
— funding program: Horizon Europe
— grant agreement: 101070363
— funding: EUR 3,493,990

presentation

— logo  •  video
