The rush to deploy powerful new generative AI technologies such as ChatGPT has raised alarms about potential harm and misuse. The law’s glacial response to such threats has prompted demands that the companies developing these technologies implement AI ‘ethically’.
But what exactly does that mean?
The simple answer would be to align a company’s operations with one or more of the dozens of sets of AI ethics principles that governments, multi-stakeholder groups and academics have produced. But that’s easier said than done.
We and our colleagues spent two years interviewing and surveying AI ethics professionals across a range of sectors to understand how they were trying to achieve ethical AI – and what they might be missing. We’ve learned that pursuing AI ethics in practice is less about mapping ethical principles onto business operations than about implementing management structures and processes that enable an organization to detect and mitigate threats.
This will likely be disappointing news for organizations looking for unambiguous guidelines that avoid gray areas, and for consumers hoping for clear and protective standards. But it points to a better understanding of how companies can pursue ethical AI.
Struggling with ethical uncertainties
Our study, which forms the basis for an upcoming book, focused on those responsible for managing AI ethics issues at large companies that use AI. From late 2017 to early 2019, we interviewed 23 such managers. Their titles ranged from privacy officer and privacy advisor to one that was new at the time but is becoming increasingly common today: data ethics officer. Our conversations with these AI ethics managers yielded four key insights.
First, along with its many benefits, the business use of AI poses significant risks, and companies know it. AI ethics managers expressed concerns about privacy, manipulation, bias, opacity, inequality and labor displacement. In one well-known example, Amazon developed an AI tool to sort résumés and trained it to find candidates similar to those it had hired in the past. Male dominance in the tech industry meant that most of Amazon’s employees were men, so the tool learned to reject female candidates. Unable to fix the problem, Amazon ultimately scrapped the project.
Second, companies pursuing ethical AI do so largely for strategic reasons. They want to maintain trust among customers, business partners and employees. And they want to get ahead of, or prepare for, emerging regulations. The Facebook-Cambridge Analytica scandal – in which Cambridge Analytica used Facebook user data, shared without consent, to infer users’ psychological types and target them with manipulative political ads – showed that the unethical use of advanced analytics can destroy a company’s reputation or even, as happened to Cambridge Analytica itself, the company. The companies we spoke with wanted instead to be seen as responsible stewards of people’s data.
The challenge AI ethics managers faced was figuring out how best to achieve “ethical AI.” They looked first to AI ethics principles, particularly those rooted in bioethics or human rights, but found them insufficient. It wasn’t just that there were many competing sets of principles. It was that justice, fairness, beneficence, autonomy and other such principles are contested, subject to interpretation and liable to conflict with one another.
This led to our third insight: Managers needed more than high-level AI principles to decide what to do in specific situations. One AI ethics manager described trying to translate human rights principles into a set of questions that developers could ask themselves to produce more ethical AI software systems. “After 34 pages of questions, we stopped,” the manager said.
Fourth, professionals struggling with ethical uncertainties turned to organizational structures and procedures to make judgments about what to do. Some of these were clearly inadequate. But others, while still largely in development, were more useful, such as:
- Hiring an AI ethics officer to build and oversee the program.
- Establishing an internal AI ethics committee to weigh and decide difficult issues.
- Creating data ethics checklists and requiring frontline data scientists to complete them.
- Consulting academics, former regulators and advocates for alternative perspectives.
- Conducting algorithmic impact assessments of the type already used in environmental and privacy management.
Ethics as responsible decision making
The key idea that emerged from our study is this: Companies that want to use AI ethically should not expect to discover a simple set of principles that delivers correct answers from an omniscient, God’s-eye perspective. Instead, they should focus on the very human task of trying to make responsible decisions in a world of finite understanding and changing circumstances, even if some decisions ultimately prove imperfect.
In the absence of explicit legal requirements, companies, like individuals, can only do their best to stay aware of how AI affects people and the environment, and to keep abreast of public concerns and the latest research and expert ideas. They can also seek input from a large and diverse group of stakeholders and engage seriously with high-level ethical principles.
This simple idea changes the conversation in important ways. It encourages AI ethics professionals to focus their energies less on identifying and applying AI principles – though these remain part of the story – and more on adopting decision-making structures and processes that ensure they weigh the impacts, viewpoints and public expectations that should inform their business decisions.
Ultimately, we believe that legislation and regulations will have to provide substantive benchmarks that organizations can strive for. But the structures and processes of responsible decision-making are a starting point and should, over time, help build the knowledge necessary to develop protective and workable substantive legal standards.
Indeed, emerging AI law and policy focuses on process. New York City passed a law requiring companies to audit their AI systems for harmful bias before using those systems to make hiring decisions. Members of Congress have introduced bills that would require companies to conduct algorithmic impact assessments before using AI for lending, employment, insurance and other such consequential decisions. These laws emphasize processes that address AI’s many threats in advance.
Some generative AI developers have taken a very different approach. OpenAI CEO Sam Altman initially explained that, in releasing ChatGPT to the public, the company wanted to give the chatbot “sufficient exposure to the real world that you come across some abuse cases that you hadn’t thought of, so you can build better tools.” To us, that is not responsible AI. It treats human beings like guinea pigs in a risky experiment.
Altman’s call, at a May 2023 Senate hearing, for government regulation of AI shows greater awareness of the problem. But we believe he goes too far in shifting to government the responsibilities that the developers of generative AI must also bear. To maintain public trust and avoid harming society, companies need to face up to their responsibilities more fully.