Stakeholder engagement is paramount to the responsible and effective implementation of AI across industries. Using artificial intelligence safely involves navigating the intricacies of machine learning, identifying and mitigating the risks of AI bias, and safeguarding the privacy and integrity of the data used to train AI models. Stakeholders must also take cognisance of the broader social implications of AI technologies.
This is because AI systems can cause serious harm, which raises questions about accountability and responsibility. For this reason, involving a diverse group of stakeholders throughout the process is important. Transparent collaboration between stakeholders is crucial to accountable and responsible AI, ensuring that AI strategies align with ethical guidelines and regulatory standards.
Responsibility and accountability
Firstly, the end goal is not just to create state-of-the-art AI models; it is also to ensure these systems are fair and less susceptible to risk. AI accountability is therefore important, as it directly influences a company's reputation, customer trust, ethical standards, and legal liability. It is essential to address the question of accountability and responsibility, and of who is ultimately responsible for the deployment of AI models.
The responsibility lies with various stakeholders—from project and product managers to system engineers and end users. Researchers Dan Harborne and Alun Preece outlined the roles of the following stakeholders and their importance in their research paper titled ‘Stakeholders in Explainable AI’. These stakeholders include:
- Developers
- Theorists
- Ethicists
- AI users
Developers
As key stakeholders, developers, who are primarily tasked with building AI applications, must ensure robust system debugging, testing, and evaluation. These professionals work across large corporations, MSMEs, academia, and the public sector. While terms like 'interpretability' and 'explainability' are often used to describe their work, developers focus on improving the quality and reliability of AI systems.
By improving application performance, developers help AI models run efficiently and reliably. They also often use open-source libraries to generate explanations; popular tools include deep Taylor decomposition and LIME.
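To make this concrete, the core idea behind a perturbation-based explainer such as LIME can be sketched in a few lines of NumPy: sample points around the instance being explained, weight them by proximity, and fit a weighted linear surrogate whose coefficients serve as the explanation. This is an illustrative sketch, not the LIME library's actual API; `black_box_predict` is a made-up stand-in for any model the developer cannot inspect.

```python
import numpy as np

# Hypothetical black-box model: the explainer only calls predict(),
# never inspects the model's internals.
def black_box_predict(X):
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 3.0 * X[:, 1] ** 2)))

def lime_style_weights(predict, x, n_samples=5000, width=0.5, seed=0):
    """Fit a proximity-weighted linear surrogate around instance x,
    in the spirit of LIME (perturb, weight, fit an interpretable model)."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with Gaussian noise and query the model.
    Z = x + rng.normal(scale=width, size=(n_samples, x.size))
    y = predict(Z)
    # 2. Weight each sample by its proximity to x (RBF kernel).
    w = np.exp(-((Z - x) ** 2).sum(axis=1) / (2 * width ** 2))
    # 3. Weighted least squares: solve for the surrogate's coefficients.
    A = np.hstack([Z, np.ones((n_samples, 1))])  # add an intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # per-feature weights, dropping the intercept

weights = lime_style_weights(black_box_predict, np.array([0.5, 1.0]))
print(weights)  # feature 0 pushes the score up, feature 1 pushes it down
```

In practice, developers would reach for a maintained package such as `lime` rather than hand-rolling this, but the sketch shows the kind of local explanation such tools compute under the hood.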
Theorists
Theorists are also key stakeholders since they are tasked with developing AI theory, particularly in deep neural networks. Usually found in academia or industrial research, they are more concerned with expanding the frontiers of AI knowledge than with practical applications. Such stakeholders use the term 'interpretability' rather than 'explainability', reflecting their interest in the fundamental properties of AI models.
Unlike developers, these individuals need not be system creators. For instance, a theorist can undertake research on deep neural network technology without being involved in actually building a system.
Ethicists
In AI systems, ethicists are primarily tasked with ensuring accountability, fairness, and transparency. Such stakeholders include critics, commentators, and policymakers from various disciplines, such as journalism, law, social science, and economics. Ethicists are not primarily computer scientists or engineers; rather, they offer an interdisciplinary approach to AI ethics.
Ethicists require explanations that go beyond technical quality to include fairness, unbiased behaviour, and transparency. These are essential to ensure AI accountability, auditability, and legal compliance.
AI users
AI users are individuals who interact with, or are affected by, AI systems. They do not usually contribute to the academic literature on AI interpretability or explainability. That said, they need concise explanations to make informed decisions based on AI outputs and to justify their actions. The AI user community encompasses direct end users and those involved in processes affected by artificial intelligence.
Role of stakeholders in AI ethics
- Identifying ethical concerns: Through their experience and concerns, stakeholders can assist in identifying possible biases, invasions of privacy, or social harms that could arise from AI systems.
- Feedback on design and development: Inviting stakeholders to participate early in development provides the opportunity for feedback on designing AI systems around ethical standards such as fairness, transparency, and accountability.
- Feedback on impact assessment: Stakeholders can analyse the possible effect of AI systems on various groups and communities and ensure the benefits are equitably distributed.
- Raising awareness of ethical concerns: By speaking out, stakeholders can raise awareness of possible ethical issues associated with AI within organisations and among the general public.
- Ongoing monitoring: Stakeholders can help monitor AI systems in real time to detect and rectify ethical issues as they arise.
When it comes to AI ethics, stakeholders must collaborate by sharing knowledge, setting standards, engaging users in the design process, and addressing concerns. AI ethics is a team effort, with every stakeholder doing their part effectively.
This matters in practice: NBFCs, for example, can ethically seek alternative sources of data to determine the creditworthiness of potential customers, and online marketplaces can adopt AI tools to enhance customer experiences. By incorporating stakeholder input, AI systems can be built and deployed in ways that respect human values, promote social good, and prevent harm wherever possible.