Abstract
Executives in business and government seek to leverage artificial intelligence (AI), a major driver of technological change, to inform decision-making. The intelligence behind AI comes from machine learning (ML) algorithms applied to large datasets. The goal of this research is to examine the adage that, while humans are fallible, computers are impartial and free of implicit bias. Toward this purpose, the author used the Delphi research technique to achieve three objectives: (1) identify and categorize the sources of flaws in algorithm design; (2) validate a framework for auditing AI-powered systems; and (3) propose strategies for resolving those sources and mitigating the flaws in algorithm design. The paper begins with a concise theoretical framework for algorithm design to familiarize readers with the science of ML algorithms. Next, the author describes the research methodology, including the population and sample of the study. The findings of the Delphi study are presented in three sections corresponding to the three objectives of this investigation. The discussion of the findings is supported by evidence from the existing literature on the unintended, undesirable societal impacts of flaws in algorithm design in fields such as education, healthcare, criminal justice, human resource management, and financial services. The expected outcome is the cautious use of AI by decision-makers: employing AI while remaining aware of its limitations and shortcomings.