IIT-M study advocates participatory approach to AI governance
The researchers’ model highlights the importance of involving relevant stakeholders in shaping the design, implementation, and oversight of AI systems
by The Hindu Bureau

A study by the Indian Institute of Technology-Madras (IIT-M) suggests a participatory approach to artificial intelligence (AI) governance. Such an approach, the researchers say, would build transparency, foster public trust, and make it easier for AI technologies to gain widespread acceptance.
Researchers from the institute’s Centre for Responsible AI, under the Wadhwani School of Data Science and AI (WSAI), and the Vidhi Centre for Legal Policy, a think-tank on legal and technology policy, published a study identifying how a participatory approach to AI development could improve the outcomes of AI algorithms and enhance the fairness of the process.
The research was conducted in two parts, and the findings were published on arXiv, an open-access archive.
B. Ravindran, Head of the WSAI, said the study found that the people most affected by the deployment of AI systems had little to no say in how those systems were developed. A participatory approach, he said, would therefore help build and use more responsible, safe, and human-centric AI systems.
“Increasing transparency and accountability in AI systems fosters public trust, making it easier for these technologies to gain widespread acceptance. By involving a wide range of stakeholders, we can reduce risks like bias, privacy violations, and lack of explainability, making AI systems safer and more reliable,” he said.
Shehnaz Ahmed, who leads law and technology work at the Vidhi Centre, said the report offered a sector-agnostic framework that answers questions such as how to identify stakeholders, how to involve them throughout the AI lifecycle, and how to integrate their feedback.
The researchers used two case studies, in law enforcement and healthcare. Facial recognition technology (FRT) used by the police could lead to bias, they noted. By engaging stakeholders such as civil society groups, undertrials, and legal experts, a fairer and more transparent system that respects individuals could be deployed, they said.
In healthcare, large language models can sometimes generate inaccurate or fabricated information, and models trained on biased data could exacerbate healthcare disparities. If doctors, patients, legal teams, and developers were involved in the development and deployment of such models, the systems would be more accurate, equitable, and transparent, the researchers explained.
Published - November 07, 2024 09:57 pm IST