By John W. Davis
Picture this: a photographer on assignment covering a building fire walks to the rear alley for a better shot. His attention is drawn to the screams of two young children on a fire escape several stories up. A small explosion violently shakes the building as he captures what becomes a Pulitzer Prize-winning photo of the two children falling to their deaths.
The photographer's colleagues, however, challenge his ethics and his motives for capturing the image. Whose interest, they ask, was the photographer serving?
I use this true story to illuminate a basic truth about ethics: the answer is less important than the reasoning behind it. Ethical questions are tough, and today those of us building and selling AI to the federal government need to answer directly the ethical questions raised by this emergent technology.
Take, for example, the CMS AI Health Outcomes Challenge. AI, which encompasses machine learning, natural language processing, and robotics, can be applied to almost any field in medicine, and the potential seems limitless. The challenge will engage innovators from all sectors, not just health care, to harness AI solutions that predict health outcomes for potential use in the payment and service delivery models of CMS' Innovation Center (CMMI).
This nascent field pits a powerful technology against a novel set of ethical challenges that must be identified and mitigated. Yet policy and ethical guidelines lag behind the progress AI is making in health care, and even the definition of ethical, trustworthy AI is in dispute. The good news, as I see it, is that the conversations are at least starting.
Examining the Standards
Recently I led a lively discussion with my friend Alban DeBergevin, Microsoft's Federal Sales Director for Data and AI, and his federal sales team. Our two companies work well together on a host of engagements. I kicked things off by remarking on Google's decision to cancel its "Advanced Technology External Advisory Council" after an uproar from within the ranks, largely over the appointment of Kay Coles James, president of the conservative Heritage Foundation, to the Council.
The Council's scope was to consider some of Google's most complex challenges and to help define AI principles. I posed the question: should one's political or social views be a limiting factor for inclusion in the forthcoming public debate on ethical standards in AI?
Similarly, the European Union announced its own set of guidelines and key requirements, including maintaining human oversight, traceable procedures, and system accountability. But with the slow rate of AI adoption in the federal space and virtually no guidelines coming from the federal government, what standards should federal contractors follow?
If Amazon's Alexa is always listening on devices "to improve the software," and one of the thousands of Amazon employees worldwide who review recordings from homes across the globe hears something incriminating, should Amazon be required to report that information to the authorities? Or consider health insurance fraud: will Amazon be required to disclose evidence contradicting a health insurance claim?
AI raises complex legal questions about the liability of health care professionals and technology manufacturers, particularly when they cannot explain recommendations generated by AI. With the advent and expansion of Generative Adversarial Networks (GANs) and other technologies designed to erase the distance between truth and fiction, how can public trust be guarded? How can procedures remain traceable when "black boxes" emerge from irreversible neural networks? Without a national discussion, we have only an opaque, riskier path ahead.
So let's not leave it solely to the policymakers, because all of us must answer the ultimate question of whose interest we are serving. Tech companies should accept their social and ethical responsibility when their technology is used for good and for ill alike. Owning the outcome should be an ingredient of transparency.
Date: April 17, 2019