AI Ethics Institute

DO YOU TRUST
GENERATIVE AI
TOOLS?

Introducing the AI Safety, Security, Ethics, & Society (AISSES) Trust Index.
(Pronounced "Assess")

Why We Need a Trust Index

Mainstream media headlines are dominated by the perspectives of those who build and sell AI technologies, positioning AI tools as "magical" rather than as products with serious limitations. Public engagement in the AI discourse is often limited to driving adoption, with little serious consideration of harms, or of accountability when things go wrong.

To address these urgent concerns, WAIE+ has launched the AISSES Trust Index: a unique multi-year project to examine public perception of popular Generative AI tools and the companies that build them.

This pioneering project aims to shift power to the public by giving you the opportunity to rate and rank these tools on their trustworthiness. Your collective feedback will form the foundation for greater transparency and accountability among AI developers.

"Are tech vendors prioritizing our safety, or their profits?"

— The Core Question

What Does AISSES Measure?

The methodology is grounded in internationally accepted guidelines and norms, such as the UNESCO Recommendation on the Ethics of Artificial Intelligence and the WAIE+ AI Safety & Security framework, ensuring it is centered on user experience and perspectives rather than technology vendor hype.

Safety

Protecting users and communities from direct physical, psychological, and digital harm.

Security

Safeguarding personal data, ensuring system integrity, and preventing malicious exploitation.

Ethics

Ensuring fairness, mitigating biases, and promoting transparency and accountability in AI tools.

Society

Evaluating the broader impact of AI on workforces, marginalized groups, and global communities.

Make Your Voice Heard

We invite you to share your experience with 16 popular Generative AI tools and to assess 15 AI vendors on 30+ dimensions spanning AI safety, security, ethics, and societal impact.

(This survey takes approximately 10 minutes)