Where AI Ethics Stands
Ethics has always been a concern in artificial intelligence. The average person probably knows the topic best from films about humanoid AI that develops independence from its human creators, like Ex Machina and Transcendence. These movies, and much older ones, raise questions of labor ethics, slavery, and what makes something “human” in the first place. Although these fictional dilemmas have not yet surfaced in the real world, questions of ethics in AI regularly make the news. Most recently, Google faced accusations of racial bias and discrimination within its own AI ethics team, leading observers to question how the team could bring justice and equality to the world of artificial intelligence when it was so affected by these issues internally. Could a team guilty of racism effectively conduct critical AI bias research, or effectively evaluate discriminatory and hateful content on a site like YouTube?
There are many more ethical concerns about AI beyond this, however. Here are some of the most common.
- Labor concerns. This may be the most common criticism of artificial intelligence, although it is not always framed as an ethical problem. Analysts now widely predict that AI will replace entire categories of jobs, from certain sectors of customer service to aspects of transportation, retail, and manufacturing. Long-term projections suggest that AI will probably create new jobs to replace those it eliminates, but that does not help the workers displaced and left unemployed as digital transformations proceed apace.
- Healthcare concerns. There are many legitimate and appropriate applications of AI in healthcare, from insurance management to billing systems. Although issues of patient privacy and security often arise even in those contexts, the deeper concerns in healthcare surround the use of AI for tasks like patient triage, analysis of medical imaging and tests, and diagnosis itself. There have already been cases of medical AI displaying biases that undermine its reliability. Most of these issues stem from a lack of diversity in the data available for training AI, a problem that is unlikely to be solved quickly or easily.
- Law enforcement concerns. Just as in healthcare, AI applications in law enforcement are heavily affected by bias in the training data used to create automated solutions. Instances of bias in facial recognition software used by police departments have made the news multiple times, giving rise to pushes for reform and, in some cases, new policies governing the use of AI by law enforcement agencies.
- Human resources and admissions concerns. AI is already being used for talent management and school admissions, in most cases without incident. However, structural biases have emerged in many cases where prospective students and employees were rejected based on characteristics that agencies and schools purport not to consider, like race, gender, income level, and country of origin. In many cases, these prospective employees and students were not even aware that they would be evaluated by artificial means, and did not consent to their information being used this way.
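The bias concerns in the last three bullets all come down to the same measurable symptom: a model's decisions favoring one group over another at markedly different rates. A minimal sketch of how an auditor might quantify this is below, using a simple "demographic parity" gap; the data and group names are entirely hypothetical, and real fairness audits use far richer methods and metrics.

```python
# Illustrative sketch of a basic fairness audit: compare the rate at which a
# model's decisions favor each demographic group. All data is hypothetical.

def selection_rate(decisions):
    """Fraction of positive (1) decisions, e.g. applicants advanced."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Return (gap, per-group rates); the gap is the spread between the
    highest and lowest selection rates across groups."""
    rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical screening outcomes (1 = advanced, 0 = rejected)
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],
}

gap, rates = demographic_parity_gap(outcomes)
print(rates)                     # per-group selection rates
print(f"parity gap: {gap:.3f}")  # a large gap flags possible structural bias
```

A large gap does not prove discrimination on its own, but it is the kind of signal that would prompt a closer look at the training data and features behind the model.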
All of these concerns are legitimate, but fortunately they can all be addressed, though not quickly or inexpensively. Several solutions for identifying and correcting these ethical problems are being discussed worldwide, and some are already in effect.
How AI Ethics Could Proceed
One of the most common suggestions for enforcing ethical standards in the AI community is to establish an external AI ethics board, either national or international, to evaluate both new and existing AI systems and potentially sanction organizations and individuals producing or using unethical forms of artificial intelligence. An example of this has been in the news very recently: the European Union’s effort to regulate AI and its use through its Artificial Intelligence Act. The GDPR is cited as precedent for this far-reaching effort, but it remains to be seen whether the act will come into effect and whether it would prove useful. There is no counterpart to this type of regulation in the United States, although the Department of Defense is attempting to develop its own ethical principles for AI.
Another solution that may be easier to establish, and one already in practice at some companies, is the creation of internal AI ethics boards that enforce company-wide policies. IBM and Microsoft have already done this to some extent, so far more successfully than the example of Google mentioned at the beginning of this article. These internal boards strive to prioritize data privacy for customers as well as organizational transparency, something akin to “explainable AI” that would allow researchers and data scientists both inside and outside these companies to evaluate the ethical standards of different AI solutions throughout their development.
Regardless of how individual countries, regions, and businesses choose to proceed, it seems that the “wild west” era of AI development and application is drawing to a close as the need for ethical standards is recognized globally. Long-term, this can only be good news for the field of AI as it grows and works to establish trust among potential adopters of the technology, especially in high-stakes fields like healthcare and law enforcement.