Applying Generative AI with Responsibility

Generative AI has the potential to democratise knowledge, but risks such as misinformation and over-reliance by users need to be carefully considered, especially in education and healthcare.

This research explores how tuning the parameter settings of Generative AI models can optimise responses to enhance comprehension, foster innovation, and reduce misinformation. Specifically, it examines how the communication style of AI tools influences individuals’ understanding and decision-making in education and healthcare contexts. By conducting both online and field experiments, the research assesses the impact of AI-generated responses on diverse demographic groups, with a focus on responsible AI deployment in underserved communities. This project is supported by the Sui Foundation.

WORLDWIDE

The Challenge

Generative AI’s potential to democratise knowledge is accompanied by risks. Over-reliance on AI-generated information and the spread of misinformation are concerns, especially for individuals with limited critical thinking skills or less exposure to technology. These issues could exacerbate existing disparities, particularly in education and healthcare, where accurate, personalised information is crucial for effective outcomes.

The Intervention

This research focuses on tuning the parameter settings of Generative AI models. In one condition, the AI tool will provide more deterministic, precise outputs, while in the other condition, the tool will introduce creativity and diversity at the expense of coherence and accuracy. The study will examine how different parameter settings affect comprehension, innovation, and decision-making across various groups, particularly those from disadvantaged backgrounds.
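One common parameter behind this kind of deterministic-versus-creative contrast is the sampling temperature, which rescales a model's raw token scores before sampling. The sketch below is purely illustrative and assumes temperature is the tuned setting; the function name and example logits are hypothetical, not taken from the study itself.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into sampling probabilities.

    Lower temperatures sharpen the distribution (more deterministic,
    precise outputs); higher temperatures flatten it (more diverse,
    but less predictable outputs).
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for four candidate next tokens.
logits = [2.0, 1.0, 0.5, 0.1]

deterministic = softmax_with_temperature(logits, temperature=0.2)
creative = softmax_with_temperature(logits, temperature=1.5)

# At low temperature the top token dominates; at high temperature
# the probability mass spreads across alternatives.
print(max(deterministic), max(creative))
```

Under this reading, the two experimental conditions would correspond to sampling from the sharper versus the flatter distribution, trading coherence for diversity as the temperature rises.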

The Potential Impact

By understanding the effects of communication style in AI responses, this study aims to improve the way Generative AI is deployed in sensitive sectors. The findings will provide valuable insights for developers, educators, and policymakers, promoting responsible AI use while ensuring that its applications are equitable. Ultimately, the research seeks to contribute to advancing both the theoretical and practical understanding of human-AI interaction, particularly in how AI can support innovation while minimising risks such as misinformation.