How to design ethical AI models that work, not just optimistic forecasts

AI’s unquestioning trust in human judgment is already a critical challenge, one that poses particular risks in situations involving cruelty or discrimination. A related problem for ethics researchers is the difficulty of designing software that performs well in real-world situations. That challenge can be overcome, however, with newly developed approaches that emphasize software’s transparency and its behavioral design.

Algorithms are typically designed behind closed doors. As a result, policymakers, technologists, developers, and everyone else with a stake in AI’s development cannot scrutinize an algorithm until it is already deployed, even when it significantly affects people. Researchers face similar constraints in AI ethics. If, for example, researchers want to design software to be more transparent about its behavior, they must navigate a potentially labyrinthine set of assumptions about the ethics and values of future software, assumptions that may contradict one another and that may harm people.

This problem was discussed in depth in a panel at the Humanities and Sciences Ethics Conference (HS) in New York City. Monica Djang, a doctoral student at New York University, analyzed an approach being developed within TensorFlow, Google’s open-source machine-learning project. The effort aims to solve the secrecy problem for complex decisions and policies that involve many decisions on a single file.

Though under significant development, the application is still in its testing phase. TensorFlow’s technology will eventually allow software to characterize its decisions explicitly and transparently, but it is not there yet. What do researchers need to know about their AI as they work through this developmental process? What can be known now to make these decisions better for the future?

In short, studies on modeling morality can help inform future decisions. Djang began by reviewing studies of moral-judgment models that use a depth-of-knowledge approach, which teases out the underlying issues at play when an algorithm must make decisions in a fluid, changing situation. By following a sequence of arguments as individual decisions, this approach helps researchers understand which choice to make for each specific decision at a particular time.
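A minimal sketch of what this sequence-of-decisions framing could look like in code. Every class, function, and field name here is a hypothetical illustration, not part of any real TensorFlow API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a fluid situation modeled as a sequence of
# individual decision points, each resolved with the context available
# at that moment and recorded so the whole sequence can be audited.

@dataclass
class Decision:
    question: str
    options: list
    context: dict            # what is known at this point in the sequence

@dataclass
class DecisionTrace:
    steps: list = field(default_factory=list)

    def decide(self, decision: Decision, policy) -> str:
        # Resolve one decision in isolation, given the current context,
        # and log the choice alongside the context that produced it.
        choice = policy(decision)
        self.steps.append((decision.question, choice, dict(decision.context)))
        return choice

# A trivial example policy: prefer whichever option the context
# flags as least harmful.
def least_harm_policy(decision: Decision) -> str:
    harms = decision.context.get("estimated_harm", {})
    return min(decision.options, key=lambda o: harms.get(o, 0))

trace = DecisionTrace()
d = Decision(
    question="share user record with auditor?",
    options=["share", "withhold"],
    context={"estimated_harm": {"share": 2, "withhold": 5}},
)
print(trace.decide(d, least_harm_policy))  # prints "share"
```

The point of the trace is that each decision can be examined individually, at the moment it was made, rather than only judging the end state of the whole system.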

Djang then compared that approach with a more case- and context-specific one, in which researchers build a richer knowledge base from people who have had a particular experience. She showed that it is possible to view a model, or model-like system, with knowledge of the whole system but without understanding it. Consider, for example, a system of sociological studies that scores poorly on general knowledge or on an analysis of the economics of war: that model does not reflect the whole context of the data the study uses, nor the full set of moral problems and choices it acts on. The context-specific approach instead builds a knowledge base for a particular model and a specific set of problems, and it captures more detail about how an individual’s experience, or their specific human identity, shapes their moral judgment.
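The case- and context-specific approach might be sketched like this; the `CaseBase` class and its crude similarity rule are illustrative assumptions, not anything presented at the panel:

```python
# Hypothetical sketch of a case- and context-specific knowledge base:
# prior experiences are stored for one problem domain only, and new
# questions are answered by the closest recorded case.

class CaseBase:
    def __init__(self, domain: str):
        self.domain = domain
        self.cases = []   # (features, judgment) pairs from real experiences

    def add_case(self, features: dict, judgment: str):
        self.cases.append((features, judgment))

    def closest_judgment(self, query: dict) -> str:
        # Nearest neighbour by count of matching features; a real system
        # would use a far richer similarity measure.
        def overlap(case):
            features, _ = case
            return sum(1 for k, v in query.items() if features.get(k) == v)
        _, judgment = max(self.cases, key=overlap)
        return judgment

kb = CaseBase(domain="loan-approval")
kb.add_case({"income": "low", "guarantor": True}, "approve")
kb.add_case({"income": "low", "guarantor": False}, "decline")
print(kb.closest_judgment({"income": "low", "guarantor": True}))  # prints approve
```

Because the base is scoped to one domain, it never pretends to general knowledge it does not have, which is the contrast Djang drew with whole-system models.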

The point is that humans are formed in different ways, so it is important that models of decision making account for those differences when predicting human behavior.

As Jemit Ekosti, a research consultant at J.P. Morgan, pointed out, guessing about a future event by projecting past behavior may not produce the kind of data needed to explain those events. From an economic perspective, such hypothetical projections are often markedly optimistic or pessimistic. In an ethical sense, the moral situation a model is built on may be impossible to simulate at all. A person’s experience and history, by contrast, can be more representative of the moral guidance the model is trying to supply. In such cases, the reasoning behind a moral judgment might simply be wrong: the factual basis of an analysis, not the reasoning supporting it, is often the foundation of a moral decision.
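Ekosti’s contrast between projection and experience can be shown with a toy calculation (an assumed formalization, not anything he presented): blindly extrapolating a trend in a rate drifts past 1.0, an impossible value, while an estimate grounded in recorded experience stays inside the range of what was actually observed.

```python
# Toy illustration (assumed, not from the panel): projecting past
# behavior forward versus grounding an estimate in recorded experience.

past = [0.70, 0.74, 0.78, 0.82]   # observed rate of fair outcomes over time

def linear_projection(series, steps=5):
    # Extend the most recent trend blindly into the future.
    slope = series[-1] - series[-2]
    return [series[-1] + slope * (i + 1) for i in range(steps)]

def experience_bound(series):
    # An experience-grounded estimate stays inside the range of
    # what was actually observed.
    return sum(series) / len(series)

print(linear_projection(past))   # final value exceeds 1.0, an impossible rate
print(experience_bound(past))    # stays within the observed range
```

The projection is not wrong because its arithmetic fails; it is wrong because its factual basis, the assumption that the trend continues, does not hold. That is the distinction the paragraph above draws.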

With that in mind, Djang and others working with TensorFlow are trying to overcome the challenge of bringing this knowledge to the testing of many different ethical decisions using predictive AI technology. Djang described how public decisions will be made as AI software is deployed in the real world. As more model-like software is developed, policymakers will be able to determine which models predict moral judgments well, and as more data becomes available about how people typically make moral judgments, those models can be refined over time.
