
Developing a Practical Model for Ethical AI in the Business World: Stage 3 – Operational Deployment

In this blog post series, Amalgam Insights is providing a practical model for businesses to plan the ethical governance of their AI projects.

To read the introduction, click here.

To read about Stage 1: Executive Design, click here.

To read about Stage 2: Technical Development, click here.

This blog focuses on Operational Deployment, the third of the Three Keys to Ethical AI described in the introduction.

Figure 1: The Three Keys to Ethical AI

Stage 3: Operational Deployment

Once an AI model is developed, organizations have to translate it into actual value, whether by providing direct outputs to relevant users or by embedding those outputs into relevant applications and process automation. But this stage of AI also requires its own set of ethical considerations if companies are to truly maintain an ethical perspective:

  • Who has access to the outputs?
  • How can users trace the lineage of the data and analysis?
  • How will the outputs be used to support decisions and actions?

Figure 2: Deployment Strategy

Who has access to the outputs?

Just as with data and analytics, the value of AI scales as it reaches additional relevant users. The power of Amazon, Apple, Facebook, Google, and Microsoft in today’s global economy shows the power of opening up AI to billions of users. But as organizations open up AI to additional users, they have to provide those users with appropriate context. Otherwise, these new users are effectively consuming AI blindly rather than as informed consumers. At this point, AI ethics expands beyond a technical problem into an operational business problem that touches every end user affected by AI.

Understanding the context and impact of AI at scale is especially important for AI initiatives focused on continuous improvement of user value. Amalgam Insights recommends directly engaging users for feedback on their experience and preferences rather than simply depending on A/B testing. It takes a combination of quantitative and qualitative evidence to optimize AI at a time when we are still far from truly understanding how the brain works and how people interact with relevant data and algorithms. Human feedback is vital both for training AI and for understanding how AI is perceived and what impact it has.
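
Where a team wants to operationalize this kind of feedback loop, one lightweight pattern is to log qualitative user reactions next to the quantitative experiment signal for each AI output. The sketch below is a minimal illustration of that idea in Python; the record structure, field names, and in-memory storage are hypothetical assumptions rather than a reference to any particular product.

```python
# A minimal sketch of pairing quantitative A/B metrics with direct user feedback.
# All names (FeedbackRecord, record_feedback, FEEDBACK_LOG) are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class FeedbackRecord:
    """One user's reaction to a single AI output."""
    user_id: str
    model_version: str
    ab_variant: str               # which experiment arm served the output
    output_id: str                # identifier of the AI output shown
    clicked: bool                 # quantitative signal (A/B style)
    rating: Optional[int] = None  # qualitative signal: 1-5 user rating
    comment: Optional[str] = None
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

FEEDBACK_LOG: list[FeedbackRecord] = []

def record_feedback(record: FeedbackRecord) -> None:
    """Store both signals together so reviewers see experience, not just clicks."""
    FEEDBACK_LOG.append(record)

# Example usage
record_feedback(FeedbackRecord(
    user_id="u-123", model_version="recsys-2.3", ab_variant="B",
    output_id="rec-789", clicked=True, rating=2,
    comment="Relevant item, but I didn't understand why it was recommended."))
```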

How can users trace the lineage of the data and analysis?

Users accessing AI in an ethical manner should have basic access to the data and assumptions used to support the AI. This means providing both the quantitative logic and the qualitative assumptions that communicate the sources, assumptions, and intended results of the AI to relevant users. This context is important in supporting an ethical AI project because AI is fundamentally based not just on a transformation of data, but on a set of logical assumptions that may not be obvious to the user.

From a practical perspective, most users will not fully understand the mathematical logic associated with AI, but they will understand the data and the basic conceptual assumptions being made to provide AI-based outputs. Although Amalgam Insights believes that the rise of AI will lead to a broader grasp of statistics, modeling, and transformations over time, it is more important that both executive and technical stakeholders are able to explain why AI technologies in production are productive, relevant, and ethical on both a business and a technical basis.
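
One way to make lineage and assumptions visible is to attach a small provenance record to every output the AI serves. The Python sketch below is a hypothetical illustration of that approach; the field names, data sources, and serving function are assumptions, not a standard or an existing API.

```python
# A minimal sketch of exposing data lineage and assumptions alongside an AI output.
# The structure and field names here are illustrative assumptions, not a standard.
LINEAGE = {
    "model": "churn-predictor-1.4",
    "data_sources": [
        {"name": "crm_accounts", "extracted": "2019-10-01", "owner": "sales-ops"},
        {"name": "support_tickets", "extracted": "2019-10-01", "owner": "support"},
    ],
    "assumptions": [
        "Accounts inactive for 90 days are treated as churned.",
        "Support ticket sentiment is scored by a separate upstream model.",
    ],
    "intended_use": "Advisory ranking of at-risk accounts for account managers.",
}

def serve_prediction(score: float) -> dict:
    """Return the prediction together with the lineage users need for context."""
    return {"score": score, "lineage": LINEAGE}

# Example usage: a user (or reviewer) can inspect the context behind any score.
print(serve_prediction(0.82)["lineage"]["intended_use"])
```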

How will the outputs be used to support decisions and actions?

Although this topic should already have been explored at the executive level, operational users will have deeper knowledge of how the technology will be used on a day-to-day basis and should revisit this topic based on their understanding of processes, internal operations, and customer-facing outcomes.

There are a variety of ways that AI can be used to support the decisions we make. In some cases, such as search engines and basic prioritization exercises, AI is typically used as the primary source of output. In more complex scenarios, such as sales and marketing use cases or weighty business and organizational decisions, AI may serve as a secondary source that offers an additional or exploratory perspective, simply providing context for how an AI-driven view differs from a human-oriented one.

But it is important for ethical AI outputs to be matched with appropriate decisions and outcomes. An example currently creating headlines is the launch of the Apple credit card and the decisions made about disparate credit limits for a married man and woman based on “the algorithm.” In this example, the man was initially given a much larger credit limit than the woman despite the fact that the couple filed taxes jointly and effectively shared joint income.

In this case, the challenge of giving “the algorithm” an automated and primary (and likely exclusive) role in determining a credit limit has created issues that are now in the public eye. Although this is a current and prominent example, it is less a statement about Apple in particular and more a statement about the financial services industry’s increasing dependence on non-transparent algorithms to accelerate decisions and provide an initial experience to new customers.

A more ethical and human approach would have been to determine whether there were inherent biases in the algorithm before deployment. If the algorithm had not been sufficiently tested, it should have served as a secondary input to a credit limit decision that would ultimately be made by a human.
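
For teams that want to make this check concrete, a simple pre-deployment test can compare outcomes across groups and route decisions to a human reviewer when they diverge. The Python sketch below is illustrative only; the grouping, threshold, and fallback policy are assumptions and far simpler than a full fairness audit.

```python
# A minimal sketch of a pre-deployment bias check, assuming a log of historical
# model decisions grouped by a protected attribute. The 1.2 ratio threshold and
# the fallback to human review are illustrative policy choices, not prescriptions.
from statistics import mean

def mean_limit_by_group(decisions: list[dict]) -> dict:
    """Average credit limit per group from historical model decisions."""
    groups: dict[str, list[float]] = {}
    for d in decisions:
        groups.setdefault(d["group"], []).append(d["limit"])
    return {g: mean(vals) for g, vals in groups.items()}

def requires_human_review(decisions: list[dict], max_ratio: float = 1.2) -> bool:
    """Flag the model for human-in-the-loop use if group outcomes diverge too much."""
    means = mean_limit_by_group(decisions)
    return max(means.values()) / min(means.values()) > max_ratio

# Example usage with made-up historical decisions
history = [
    {"group": "A", "limit": 20000}, {"group": "A", "limit": 18000},
    {"group": "B", "limit": 9000},  {"group": "B", "limit": 11000},
]
if requires_human_review(history):
    print("Disparity exceeds threshold: route decisions to a human underwriter.")
```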

Based on these explorations, we arrive at a starting point for practical business AI ethics.

Figure 3: A Practical Framework

Recommendations

Maintain a set of basic ethical precepts for each AI project across design, development, and deployment. As mentioned in Part 1, these ethical statements should focus on a few key goals that are consistently explored from executive design through technical development to operational deployment. They should be short enough to fit onto every major project update memo and the key documentation associated with the project. A consistent starting point for what is considered ethical and must be governed allows AI to be managed more coherently.

Conduct due diligence across bias, funding, champions, development, and users to improve ethical AI usage. Due diligence on AI currently focuses too heavily on the construction of models rather than the full business context of AI. Companies continue to hurt their brands and reputations by putting out models and AI logic that would not pass a basic business or operational review.

Align AI to responsibilities that reflect the maturity, transparency, and fit of models. For instance, experimental models should not be used to run core business processes. For AI to take over significant operational responsibilities from an automation, analytical, or prescriptive perspective, the algorithms and production of AI need to be enterprise-ready just as traditional IT is. Just because AI is new does not mean that it should bypass key business and technical deployment rules.
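
A simple way to enforce this alignment is a deployment gate that maps each model’s maturity level to the roles it is allowed to play. The Python sketch below illustrates the idea; the maturity levels and allowed roles are hypothetical policy choices, not a prescribed framework.

```python
# A minimal sketch of aligning a model's responsibilities with its maturity.
# The maturity levels and the allowed roles per level are hypothetical policy,
# meant to illustrate the gating idea rather than define it.
ALLOWED_ROLES = {
    "experimental": {"exploratory"},                            # analysts only
    "validated":    {"exploratory", "advisory"},                # suggests, human decides
    "production":   {"exploratory", "advisory", "automated"},   # may act directly
}

def deployment_allowed(maturity: str, requested_role: str) -> bool:
    """Return True only if the requested role fits the model's maturity level."""
    return requested_role in ALLOWED_ROLES.get(maturity, set())

# Example usage: experimental models cannot run core automated processes.
assert deployment_allowed("validated", "advisory")
assert not deployment_allowed("experimental", "automated")
```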

Review and update AI on a regular basis. Once an AI project has been successfully released into the wild and is providing results, it must be managed and reviewed regularly. Over time, the models will need to be tweaked to reflect real-life changes in business processes, customer preferences, macroeconomic conditions, or strategic goals. AI that is abandoned or ignored becomes technical debt just as any outdated technology does. Without a dedicated review and update process, the models and algorithms used will eventually become outdated and potentially less ethical and accurate from a business perspective.
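
In practice, part of this review cadence can be automated by flagging models that are overdue for review or whose recent performance has drifted from its baseline. The Python sketch below is a minimal illustration under those assumptions; the interval, thresholds, and accuracy metric are placeholders for whatever the business actually tracks.

```python
# A minimal sketch of a scheduled review check, assuming the team logs recent
# prediction accuracy. The 90-day window and 5-point drop are illustrative.
from datetime import date

def needs_review(last_review: date, recent_accuracy: float,
                 baseline_accuracy: float, max_days: int = 90,
                 max_drop: float = 0.05) -> bool:
    """Flag the model if the review window has lapsed or performance has drifted."""
    overdue = (date.today() - last_review).days > max_days
    drifted = (baseline_accuracy - recent_accuracy) > max_drop
    return overdue or drifted

# Example usage
if needs_review(date(2019, 8, 1), recent_accuracy=0.81, baseline_accuracy=0.88):
    print("Schedule a model review: retrain or retire before it becomes technical debt.")
```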

We hope this guide and framework are helpful in supporting more ethical and practical AI projects. If you are seeking additional information on ethical AI, the ROI of AI, or guidance across data management, analytics, machine learning, and application development, please feel free to contact us at research@amalgaminsights.com and send us your questions. We would love to work with you.
