Who Is Accountable When AI Fails?

Accountability is a uniquely human ethical priority—one we should embed in the tools we use and the systems that surround them.

As the Chief Procurement Officer of BAM, Inc., Akmal considered himself more progressive than CPOs at other companies, and he had the suite of predictive AI tools to prove it.

“There’s more to procurement than managing constraints and winning in the margins,” he had said to the CFO when making the case for the system. “We need more visibility into our supplier tiers, and we have got to be more nimble.”

The CFO was somewhat less than enthusiastic at first. What was a CPO doing thinking about AI anyway? After some convincing, however, the CFO signed off on the investment and the Chief Technology Officer joined the effort to bring AI to procurement at BAM, Inc.

Several months later, Akmal sat down at his desk and turned on his computer. There were no paper documents to push around, not even a spreadsheet on his desktop. Instead, Akmal opened a procurement program that took in bills of materials, autonomously scoured supplier networks for availability and cost, and recommended what should be purchased, from whom, and where it should be sent. All in a day’s work for an AI-fueled procurement office.

Then the phone rang. It was the CFO, and he was not happy.

“Akmal, I’m looking at invoices for raw materials on the Anderson project, and we are paying nearly 15% more per ton than we were before your fancy new AI tools.”

“No, that can’t be right,” said Akmal. “We have optimized for efficiency and timeliness, sure, but that couldn’t possibly lead to such a large increase.”

“Who is responsible for this?”

“I don’t know, sir. I can’t believe we’ve lost so much in margin.”

“Fix it, or I’ll put you in the margin.”

Akmal’s morning had taken an unexpected and unwelcome turn. Deflated and concerned, he set about trying to figure out who was accountable for the error and who was positioned to correct it.
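
How might that happen? The minimal sketch below, with invented supplier data and weights (hypothetical, not BAM, Inc.’s actual system), shows how an objective tuned for “efficiency and timeliness” can quietly select a supplier that costs nearly 15% more per ton:

```python
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    price_per_ton: float   # USD per ton
    lead_time_days: float

# Invented data for illustration only.
suppliers = [
    Supplier("Incumbent", price_per_ton=1000.0, lead_time_days=21.0),
    Supplier("Expedited", price_per_ton=1150.0, lead_time_days=7.0),
]

def score(s: Supplier, w_price: float, w_speed: float) -> float:
    # Lower is better: a weighted blend of normalized cost and lead time.
    return w_price * (s.price_per_ton / 1000.0) + w_speed * (s.lead_time_days / 21.0)

# Tuned for "efficiency and timeliness": speed dominates the objective.
best = min(suppliers, key=lambda s: score(s, w_price=0.3, w_speed=0.7))
print(best.name)  # Expedited -- nearly 15% more per ton than the incumbent
```

No one told the system to pay more. The outcome falls out of a weight someone chose, a normalization someone coded, and a recommendation someone accepted, and that diffusion is exactly what makes the accountability question hard.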

Accountability is an intuitive aspect of human morality, so much so that we expect it in every context. It underpins the rule of law and guides how restitution is calculated. It is a component of social trust between citizens and a necessary condition of professional activity in business and government. In each case, because people and organizations are held accountable for their actions, we can predict how others will act and how their actions might affect us.

What happens when an AI model makes a decision with a negative impact on an individual or organization? Who is accountable for that? The model itself cannot face any real consequence. It cannot make an apology; it has no anima. Instead, accountability is a uniquely human ethical priority—one we should embed in the tools we use and the systems that surround them.

The starting point for unraveling the challenges and potential solutions is a closer definition of this vital ethical concept.

Accountability is essential for trust in people, organizations, and systems, and given its importance, it has received extensive study and debate across disciplines. The judicial system is the most obvious field concerned with accountability, but there is also managerial accountability in financial management and political accountability in the faithful representation of the electorate. Ultimately, there are numerous types of accountability, and AI touches all of them.

To break open the concept, take AI ethicist Virginia Dignum’s view that accountability means the AI system is able to explain its decisions, and that those decisions are “derivable from, and explained by, the decision-making mechanisms used.” According to Dignum, accountability is not a discrete characteristic of a single AI tool but a property of a larger sociotechnical system, one that provides for accountability and is grounded in moral values and governance frameworks.

Other scholars posit that accountability is the product of acknowledging one’s “answerability” for decisions and actions. On this view, accountability can be seen as a feature of the AI system, a determination of individual or group responsibility (also called algorithmic accountability), and a quality of the sociotechnical system.

For our purposes, accountability means that not only can the AI system explain its decisions, but the stakeholders who develop and use the system can also explain its decisions and their own, and they understand that they are answerable for those decisions. This is the necessary basis for human culpability in AI decisions.
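
What might embedding that accountability in a tool look like? One hypothetical approach, sketched below, attaches an auditable record to every recommendation, naming the model version, the objective it was given, and the human who acted on its advice (the field names are illustrative, not a standard):

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """An auditable trail for one AI recommendation."""
    model_version: str     # which model made the call
    inputs_summary: dict   # what it saw (e.g., BOM line, candidate suppliers)
    recommendation: str    # what it advised
    objective: str         # what it was told to optimize
    approved_by: str       # the human accountable for acting on it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical example record for the procurement scenario.
record = DecisionRecord(
    model_version="procurement-ranker-2.3",
    inputs_summary={"material": "steel", "candidates": 14},
    recommendation="Award to Expedited at $1,150/ton",
    objective="0.3*price + 0.7*lead_time",
    approved_by="akmal@bam.example",
)
print(json.dumps(asdict(record), indent=2))
```

With records like these, a stakeholder asked “who is responsible for this?” can at least answer how the decision was made and who signed off on it.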

Because machine decisions take place in the context of other social and technical systems, any AI application involves a range of stakeholders governed by a variety of laws, regulations, and social expectations. This creates an enormously complex landscape in which numerous individuals and entities may bear some accountability for AI outcomes.

For Akmal, the CPO, the sudden scramble to identify the responsible parties in his AI error raises pressing questions. Who is accountable for the incident? The vendor who supplied the system? The data scientists who tuned it to BAM, Inc.’s requirements? Akmal and his procurement team, for optimizing the wrong objectives? The CFO, for approving the investment? And are we trying to decide who is most accountable, or to name everyone who bears some accountability?

This question of causality is incredibly dense and does not lend itself to a neat parsing of accountable parties. The challenge becomes that much greater when the AI tool’s complexity increases, the consequence of its decisions magnifies, and it is deployed at scale alongside dozens or hundreds of other systems.

Part of the challenge is that while other large systems (e.g., financial, logistics) have received decades of study, investigation, legislation, and debate, AI has not yet received the same full treatment. The pace of innovation in AI is so rapid that we have built powerful tools that raise profound ethical questions far in advance of the rules and expectations for accountability. The broader sociotechnical system in which AI exists is still in its formative stages. Leading industry practices, governance frameworks, laws and regulations, declarations in terms of use: these and other features of a sociotechnical system that prizes accountability are in flux.

The result is a fuzzy consensus about who is accountable for what and to whom. Enterprises deploying AI are left to define what accountability means in the context of their own policies and stakeholders. And for better or worse, they cannot wait for these norms to be debated in academia and codified in law before acting.

Myriad individuals are involved in developing and using AI tools, and when something unexpected occurs, a neat distribution of responsibility is unlikely. In an enterprise, AI responsibility may fall across the data science team, business unit leaders, frontline sales professionals, and many others. Identifying responsibility becomes that much more problematic with AI models that change over time or that generate new algorithms.

No matter the challenge, determining who is responsible is necessary to instill a real sense of accountability throughout the workforce. An advisory board is potentially well positioned to promote accountability through oversight, defined processes, and clear penalties when AI applications produce bad outcomes. When employees (at any level of the enterprise) understand and embrace their accountability in the AI lifecycle, it creates a chain of people who collectively move the organization toward trustworthy AI. It makes individual responsibility visible. At the end of the day, it is the human (not the AI system) that must answer for the outcomes, good or bad.

Yet, there is a tension between bold innovation that leads to more powerful solutions and an individual’s accountability and responsibility. If data scientists, AI engineers, and others are overly concerned about professional ramifications from outcomes that they may be unable to predict, they may limit the scope of their efforts. However, this tension between innovation and accountability is not an either/or consideration. Accountability cannot be an afterthought in the pursuit of powerful, game-changing AI.

Thus, enterprises may be best served by focusing not on who is to blame when things go wrong but on whom to call to make things right. With a clear articulation of who is accountable for what and to whom, the business is prepared to respond to AI outcomes and to take real accountability for correcting errors.
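
That articulation can be written down before anything breaks. A simple registry mapping each stage of the AI lifecycle to a named owner and a corrective action, sketched here with hypothetical roles and stages, is one way to know whom to call:

```python
# A hypothetical accountability registry: each lifecycle stage has a
# named owner to call when an outcome needs correcting.
ACCOUNTABILITY = {
    "objective_design": {"owner": "CPO (Akmal)",       "fix": "re-weight cost vs. speed"},
    "model_tuning":     {"owner": "Data science lead", "fix": "retrain and recalibrate"},
    "vendor_platform":  {"owner": "CTO + vendor rep",  "fix": "escalate under contract SLA"},
    "spend_monitoring": {"owner": "CFO controller team", "fix": "flag per-ton variance over 5%"},
}

def who_to_call(stage: str) -> str:
    entry = ACCOUNTABILITY.get(stage)
    if entry is None:
        return "unassigned: escalate to the AI advisory board"
    return f"{entry['owner']} -> {entry['fix']}"

print(who_to_call("objective_design"))  # CPO (Akmal) -> re-weight cost vs. speed
```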

When AI stakeholders have confidence that humans understand their accountability, it engenders trust in the AI tool and the broader AI ecosystem. Longer term, when AI accountability is embedded throughout the organization with established and tested expectations, it promotes trust in AI generally. The standards for accountability that are established today will have a real and important impact on reaching the full potential of AI going forward.

