About the TAI Repository

This online repository is designed to support the AI community in achieving Trustworthy AI (TAI).



Trustworthy AI is AI that respects fundamental rights and is lawful, ethical, and robust.



This effort was supported by the ZonMw-funded project ‘DECIDE-VerA’ (grant no. 08540122120004).

We offer a curated collection of practical assessment tools that can help you determine whether your AI system is trustworthy. Each tool in this repository addresses at least one of the seven requirements for TAI established by the EU High-Level Expert Group on AI (AI HLEG):

1. Human agency and oversight
2. Technical robustness and safety
3. Privacy and data governance
4. Transparency
5. Diversity, non-discrimination, and fairness
6. Societal and environmental well-being
7. Accountability


The Team

Maria José Villalobos

Senior Researcher

Hine van Os

Staff Advisor

Prof. Dr. Niels Chavannes

Professor of General Practice (LUMC)

Who is this for?

Whether you are an AI developer, ethicist, researcher, implementation expert, or healthcare provider working with or on AI, if your goal is to create or apply responsible, trustworthy AI, this repository was created with you in mind.

What kind of tools will you find here?

The TAI Repository focuses on procedural assessment tools: practical tools that assess TAI from an operational and procedural perspective. A well-known example is the Assessment List for Trustworthy AI (ALTAI), developed by the AI HLEG.

What's not included

This repository does not include theoretical or descriptive frameworks, technical testing tools, or exploratory methods such as focus groups and participatory design.

