
DistilINFO GovHealth Advisory

Europe’s Quest For Ethics In Artificial Intelligence

April 17, 2019

This week a group of 52 experts appointed by the European Commission published extensive Ethics Guidelines for Artificial Intelligence (AI), which seek to promote the development of “Trustworthy AI” (full disclosure: I am one of the 52 experts). This is an extremely ambitious document. For the first time, ethical principles will not simply be listed, but will be put to the test in a large-scale piloting exercise. The pilot is fully supported by the EC, which endorsed the Guidelines and called on the private sector to start using them, in the hope of making them a global standard.

Europe is not alone in the quest for ethics in AI. Over the past few years, countries like Canada and Japan have published AI strategies that contain ethical principles, and the OECD is adopting a recommendation in this domain. Private initiatives such as the Partnership on AI, which groups more than 80 corporations and civil society organizations, have developed ethical principles. AI developers agreed on the Asilomar Principles, and the Institute of Electrical and Electronics Engineers (IEEE) worked hard on an ethics framework. Most high-tech giants already have their own principles, and civil society has worked on documents, including the Toronto Declaration focused on human rights. A study led by Oxford Professor Luciano Floridi found significant alignment between many of the existing declarations, despite varying terminologies. They also share a distinctive feature: they are not binding, and not meant to be enforced.

The European Guidelines are also not directly enforceable, but go further than these previous attempts in many respects. They focus on four ethical principles (respect for human autonomy, prevention of harm, fairness, and explainability) and go beyond them, specifying that Trustworthy AI also implies compliance with EU law and fundamental rights (including privacy), as well as a high level of socio-technical robustness. Anyone who wishes to design, train, and market a Trustworthy AI system will be asked to carefully consider the risks that the system will generate, and to be accountable for the measures taken to mitigate them. The Guidelines offer a detailed framework to be used as guidance for such an assessment.

For those looking for strong statements, the Guidelines may not be a great read. You will find no mention of Frankenstein, no fear of singularity, no resounding provisions such as “AI should always be explainable”, “AI should never interfere with humans”, “there should always be a human in the loop”, or “AI should never discriminate”. These statements are intuitively attractive, but are very far from the reality of AI deployment and likely to prove disproportionate when converted into a policy framework.


Users do not need a detailed explanation and understanding of how an AI-enabled refrigerator works, or even how an autonomous vehicle takes ordinary decisions. They need to trust the process that brought them to the market, and to be able to rely on experts who may intervene whenever things go wrong. But users should be entitled to know why they were refused access to a government file, or why someone cut the line as a recipient of a subsidy, or a kidney. Likewise, a human in the loop will make no sense in some cases (think about humans sitting at the steering wheel of autonomous cars); yet a human “on the loop”, or a “human in command”, may be required. And while discrimination will often be inevitable because our society is already biased, excessive, unjustified, and unlawful discrimination should be outlawed, and prompt redress should be given to the individuals harmed. Importantly, the Guidelines also include examples of “areas of critical concern”, which are most likely to fall short of meeting the requirements of Trustworthy AI: identifying and tracking individuals with AI, deploying covert AI systems, developing AI-enabled citizen scoring in violation of fundamental rights, and using AI to develop Lethal Autonomous Weapons (LAWs).

The concept of Trustworthy AI is still only an “aspirational goal”, in the wording of the High-Level Expert Group. It will be up to the EU institutions to decide in the coming months whether to make it a binding framework, and for which use cases. This may entail the use of both hard law (such as amended rules on torts, sector-specific legislation making Trustworthy AI binding in some contexts, and ad hoc competition rules) and softer instruments. Among other initiatives, the EU could decide that all public procurement be limited to Trustworthy AI, or mandate that AI applications in healthcare be trustworthy. There may be a need for some form of certification to ensure that the new system is correctly implemented, and that information is correctly presented to users.

A different issue is whether this system will help Europe set global AI standards and thereby relaunch its competitiveness. IBM has declared that it will apply the framework across the globe. But given that (1) the United States is considered to provide inadequate privacy protection for end users, and (2) U.S.-based platforms are regularly accused of excessive interference with users’ autonomy and self-determination, Trustworthy AI could also be used to shut the door to non-compliant (or non-European) players in the near future. The expert group that drafted the Guidelines did not discuss any such industrial and trade policy scenarios. But the EC hinted at this possibility by advocating, in a recent official document, the development of ethical, secure, and cutting-edge AI “made in Europe”.


Source: Forbes

