Report: Applying Ethical Principles for Artificial Intelligence in Regulatory Reform

Article by Ronald JJ Wong.

Artificial Intelligence ("AI") systems, automated decision-making, and autonomous vehicles are becoming increasingly ubiquitous. While these technologies can benefit society, their use can also pose significant risks of harm and unfair outcomes.

For example, an Israeli organisation used AI to flag potential high-risk COVID-19 patients and suggest responses, based on past medical data. However, if the data or algorithms are biased, or if the medical data was not properly collected, there can be serious repercussions. Imagine the unfairness of being denied medical insurance based on an erroneous AI system. How would such AI-assisted or AI-generated decisions be reviewed?

A controversial example is Northpointe's COMPAS system, which predicts criminal reoffending and was used in bail and sentencing decisions. ProPublica claimed that the system was biased against African Americans; Northpointe claimed that ProPublica had misinterpreted the data.

However, it is possible that the two sides were measuring fairness by different metrics. In any event, the complexity of an algorithm does not mean that it, or its designers or deployers, should be immune from review or recourse.
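To illustrate how two reasonable fairness metrics can point in opposite directions, here is a minimal sketch in Python using hypothetical confusion-matrix counts (none of these figures come from the COMPAS data): a predictor can have equal precision across two groups while producing very different false positive rates, which is broadly the shape of the disagreement between ProPublica and Northpointe.

```python
# Minimal sketch with hypothetical counts showing how two common fairness
# metrics can disagree on the same predictions. Each group has a confusion
# matrix (true positives, false positives, true negatives, false negatives)
# for a "high risk" label.

def rates(tp, fp, tn, fn):
    ppv = tp / (tp + fp)   # precision: of those flagged high risk, how many reoffended
    fpr = fp / (fp + tn)   # false positive rate: non-reoffenders wrongly flagged
    return ppv, fpr

# Hypothetical counts; the two groups have different underlying base rates.
group_a = dict(tp=50, fp=20, tn=80, fn=10)
group_b = dict(tp=25, fp=10, tn=130, fn=5)

for name, g in [("A", group_a), ("B", group_b)]:
    ppv, fpr = rates(**g)
    print(f"group {name}: precision={ppv:.2f}, false positive rate={fpr:.2f}")

# Output: both groups have precision 0.71 (equal by one fairness metric),
# yet group A's false positive rate is 0.20 versus group B's 0.07
# (unequal by the other metric).
```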

That is why the Monetary Authority of Singapore ("MAS") is working with the Veritas consortium of organisations to develop fairness metrics for financial services, so that AI systems do not systematically disadvantage any particular individuals or groups.

As social justice issues of race, class, inequality, and the treatment of vulnerable communities threaten to fracture societies, AI systems could either deepen these fault lines or help mend them.

However, policy makers and societies often find themselves catching up with technological developments. A balance should be struck between encouraging innovation and managing risks.

This report by the Subcommittee identifies issues that law and policy makers may face in applying ethical principles when developing or reforming policies and laws regarding AI. Specifically, the report discusses the following ethical principles:

  • Law and Fundamental Interests: ensuring AI systems comply with existing laws and do not violate fundamental human interests;

  • Considering AI Systems’ Effects: planning for reasonably foreseeable effects of AI systems throughout their lifecycle;

  • Respect for Values and Culture: taking into consideration cultural diversity and values in different societies in AI deployment;

  • Risk Management: assessing and eliminating or controlling risks;

  • Wellbeing and Safety: assessing AI systems’ intended and unintended effects against holistic wellbeing and safety metrics;

  • Accountability: holding appropriate persons accountable for AI systems based on their roles, the context, and consistency with the state of the art;

  • Transparency: designing AI systems to enable discovery of how and why they behaved the way they did; striving towards traceability, explainability, verifiability, and interpretability of AI systems and their outcomes insofar as possible (a simple traceability sketch follows this list);

  • Ethical Data Use: good privacy and personal data management practices.
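On the transparency point above, one concrete form traceability can take in practice is an audit trail that records each automated decision for later review. The sketch below is illustrative only, not something prescribed by the report; the model version string, input fields, and log file name are all hypothetical placeholders.

```python
import json
import time
import uuid

# Illustrative sketch: append-only audit log so that an AI system's
# decisions can later be traced, reviewed, and challenged.

def log_decision(model_version, inputs, output, logfile="decision_log.jsonl"):
    """Record one automated decision as a reviewable audit entry."""
    record = {
        "id": str(uuid.uuid4()),         # unique reference for review or recourse
        "timestamp": time.time(),        # when the decision was made
        "model_version": model_version,  # which model produced the decision
        "inputs": inputs,                # the data the decision was based on
        "output": output,                # the decision itself
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

# Usage: record a hypothetical automated credit decision for later review.
ref = log_decision("risk-model-v1.2", {"income": 45000, "age": 37}, "declined")
print("decision reference:", ref)
```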

AI system designers or deployers should consider these ethical principles and related issues when developing, training, testing, or deploying AI systems. The report could thus serve as a framework for risk assessment and management. This is especially pertinent for AI systems that carry significant risks of causing harm or attracting legal liability.

While most countries do not have specific legislation regulating AI, leaving room for jurisdictional arbitrage, most legal systems have existing laws that could hold AI system designers, manufacturers, or deployers responsible for civil or even criminal liability. It probably would not, and should not, be a good defence to say that one had no idea what the AI system was doing.

The framework could thus also serve as a roadmap for developing best practices within one's specific field of application. For instance, accounting for cultural nuances and slang in natural language processing systems, managing data privacy and security well, and giving end users clear explanations of the role of automated decision-making ("ADM") and the possibility of reviewing such decisions could bolster the credibility, adoption, and success of one's AI-enabled product or service.

READ THE REPORT: Applying Ethical Principles for Artificial Intelligence in Regulatory Reform (pdf document)

Ronald JJ Wong (Director) is a member of the Subcommittee on Robotics and Artificial Intelligence under the Singapore Academy of Law’s Law Reform Committee and contributed to the writing of the report.

Ronald JJ Wong

Ronald believes that lawyering is about serving people to bring about justice, well-being, and peace.

https://www.covenantchambers.com/ronald-wong