According to McKinsey’s State of AI in 2021 report, 56 percent of all respondents said they had adopted AI in at least one function in their organizations, a six-percentage-point increase from 2020. The adoption rate was highest among Indian companies, closely followed by those in the Asia-Pacific region.
AI is changing the game for many companies, but like any other technology, it has its downsides. In recent years, concern about the risks associated with AI has grown. The biggest challenges include:
- Unexplainable models
- Racial profiling and discrimination
- Gender bias
- Model drift
In 2021, Meta apologized after its recommendation system asked users who had watched a video featuring Black men whether they wanted to keep seeing “videos about primates.” In 2020, a group of Black content creators sued YouTube, alleging that its AI restricted their videos based on race.
According to the report Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus, the filtering applied to the C4 dataset disproportionately removed text by and about Black and Hispanic authors. It also filtered out material related to gay and lesbian identities.
In 2019, the U.S. Department of Housing and Urban Development (HUD) said it was considering new rules that would “effectively protect landlords, banks and insurance companies that use algorithmic models from lawsuits that claim their practices have an unwarranted discriminatory effect.”
So what do we do about it?
Does this mean we are abandoning AI? The answer is obviously no. Instead, we need to take the right actions when using AI in business processes. Our focus should be on ensuring best practices when building artificial intelligence applications. Since data is the building block for algorithms, it is crucial to have a set of guidelines and principles for using AI technology responsibly.
Explainability and Transparency
With AI used extensively in business decision-making, it is important to understand why an algorithm made a particular decision. The black-box nature of many AI models is a recipe for trouble. Transparency and explainability help stakeholders understand AI processes and decisions and adjust models accordingly.
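One common post hoc explainability technique is permutation importance: shuffle a single input feature and measure how much the model’s accuracy drops. A minimal sketch, using a hypothetical toy model and made-up data purely for illustration:

```python
# Sketch of permutation importance: shuffle one feature's values across
# rows and measure the resulting drop in accuracy. The "model" and data
# below are hypothetical, for illustration only.
import random

def accuracy(model, rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(model, rows, labels, feature_idx, seed=0):
    """Accuracy drop when one feature's column is shuffled across rows."""
    rng = random.Random(seed)
    shuffled_col = [r[feature_idx] for r in rows]
    rng.shuffle(shuffled_col)
    shuffled_rows = [list(r) for r in rows]
    for r, v in zip(shuffled_rows, shuffled_col):
        r[feature_idx] = v
    return accuracy(model, rows, labels) - accuracy(model, shuffled_rows, labels)

# Toy model: predicts 1 when feature 0 exceeds a threshold; feature 1 is ignored.
model = lambda row: int(row[0] > 0.5)
rows = [[0.9, 5], [0.1, 5], [0.8, 2], [0.2, 2]]
labels = [1, 0, 1, 0]

print(permutation_importance(model, rows, labels, 0))  # >= 0, typically positive
print(permutation_importance(model, rows, labels, 1))  # 0.0: feature is ignored
```

A feature whose shuffling leaves accuracy unchanged contributes nothing to the model’s decisions; a large drop flags a feature the model leans on, which is exactly the kind of signal a reviewer needs when a protected attribute turns out to be influential.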
Accountability and Governance
The right governance processes are critical to building robust AI frameworks. Accountability in AI means clearly defining the roles of the people involved in creating and deploying the algorithms. AI governance requires model and application reviews, including documentation of data sources and lineage, model facts, application audience information, etc.
Security and Privacy

Security and privacy are major concerns. AI models are built on huge amounts of data, which often contains sensitive information such as race, gender and sexual orientation. Organizations must safeguard user data, and ideally should disclose to end users how their sensitive data is used.
Robustness

The robustness of an AI model covers both the effectiveness of its algorithms in extracting insights and its ability to withstand adversarial attacks.
Fairness and Impartiality
As humans, we all have some kind of bias—both intentional and unintentional. These biases can creep into the data that is essentially the foundation of AI systems. Organizations should intervene to minimize algorithmic bias.
How can you overcome such challenges?
The components of ethical AI should already be integrated into the product development phase. Any AI/ML product developed should be viewed from an ethical perspective. Questions about data collection, data protection, model transparency, etc. should be addressed.
For example, a simple post hoc analysis of an AI model’s decision-making can reveal biases. Accountability is another factor that needs to be considered from the start. If something goes wrong when deploying an AI system, who should be held accountable (the company, the data scientists or engineers who built it, or other stakeholders) is a decision that needs to be made upfront.
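Such a post hoc analysis can be as simple as comparing a model’s positive-prediction rates across demographic groups. The sketch below computes a demographic-parity gap; the group labels and loan-approval predictions are hypothetical, invented for illustration:

```python
# Sketch of a post hoc bias check: compare a model's positive-prediction
# rate per demographic group. All data below is hypothetical.

def selection_rates(predictions, groups):
    """Return the positive-prediction rate for each group."""
    totals, positives = {}, {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest group selection rates."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval predictions (1 = approved) for two groups.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(selection_rates(preds, groups))        # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(preds, groups))  # 0.5
```

A gap this large between groups would be a red flag worth investigating before the model ever reaches production; what threshold counts as acceptable is a policy decision, not a purely technical one.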
Correct processes can only be built using the right methods and technologies. This starts with hiring the right people in the company. It is crucial to attract the best talent who are not only extremely competent in what they do, but also understand the ethical and long-term implications of using AI algorithms.
Improving the quality of the data used and data preparation methods should also be a focus for the company. Data scientists need to evaluate whether the data they use to build a solution is actually representative of the group they serve.
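One concrete way to run that evaluation is to compare group proportions in the training data against a reference population. A minimal sketch, with hypothetical groups and an assumed 50/50 reference split for illustration:

```python
# Sketch of a representativeness check: dataset share minus population
# share per group. Groups and figures are hypothetical.
from collections import Counter

def group_proportions(samples):
    counts = Counter(samples)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def representation_gaps(dataset_groups, population_props):
    """Per-group gap; negative values mean the group is under-represented."""
    data_props = group_proportions(dataset_groups)
    return {g: round(data_props.get(g, 0.0) - p, 3)
            for g, p in population_props.items()}

# Hypothetical dataset of 8 samples, heavily skewed toward group X.
dataset = ["X"] * 6 + ["Y"] * 2
population = {"X": 0.5, "Y": 0.5}  # assumed reference split

print(representation_gaps(dataset, population))  # {'X': 0.25, 'Y': -0.25}
```

Running a check like this before training makes the skew visible early, when it can still be fixed by collecting more data or reweighting, rather than after the model has already learned from an unrepresentative sample.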
AI is a multidisciplinary field, not just the domain of data scientists and ML engineers. It is important to have advocates, AI ethicists, policymakers, and chief data officers (CDOs) to get a 360-degree view of the use and deployment of any given algorithm. Organizations are beginning to realize the importance of having a diverse group of people from different backgrounds in the AI workflow.
According to the NewVantage Partners 2021 survey, 65% of the companies surveyed have a CDO. The CDO is a leadership role responsible for the governance and management of data in an organization. The CDO’s role combines “accountability and responsibility for information protection and privacy, information governance, data quality and data lifecycle management, and leveraging data assets to create business value,” the report added.
An AI ethicist works with both the technical and legal teams to understand the challenges of deploying an AI model. After weighing the pros and cons of building a particular model and its potential implications, they can decide how and where such AI systems should be deployed and what precautions the engineering teams should take.
This article was written by a member of the AIM Leaders Council, an invitation-only forum for senior leaders in the data science and analytics industry.