Accountability and oversight must be continuous because AI models can change over time; indeed, the hype around deep learning, in contrast to conventional data tools, is predicated on its flexibility to adjust and modify in response to shifting data. But that flexibility can lead to problems like model drift, in which a model's performance (in predictive accuracy, for example) deteriorates over time, or the model begins to exhibit flaws and biases, the longer it lives in the wild. Explainability techniques and human-in-the-loop oversight systems can not only help data scientists and product owners build higher-quality AI models from the beginning, but can also be used in post-deployment monitoring systems to ensure models do not degrade in quality over time.
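In practice, one simple form of the post-deployment monitoring described above is to compare a model's recent accuracy against its accuracy at deployment and flag it for human review when the gap grows too large. The sketch below is illustrative only; the function name and tolerance are assumptions, not a description of any firm's actual tooling:

```python
# Minimal sketch of post-deployment drift monitoring (illustrative):
# compare a model's recent accuracy against its accuracy at deployment
# and flag the model for review when the drop exceeds a tolerance.

def drift_alert(baseline_accuracy: float,
                recent_correct: list[bool],
                tolerance: float = 0.05) -> bool:
    """Return True if recent accuracy fell more than `tolerance`
    below the accuracy measured at deployment time."""
    if not recent_correct:
        return False  # no recent labeled outcomes to judge against
    recent_accuracy = sum(recent_correct) / len(recent_correct)
    return (baseline_accuracy - recent_accuracy) > tolerance

# Example: model deployed at 92% accuracy; recent labeled outcomes
# show 80% accuracy, a 12-point drop that exceeds the 5-point tolerance.
outcomes = [True] * 80 + [False] * 20
print(drift_alert(0.92, outcomes))  # True -> route to human review
```

Real monitoring systems track many more signals (input-distribution shift, fairness metrics across subgroups, calibration), but the pattern is the same: a continuous, automated comparison against a baseline, with humans in the loop when a threshold is crossed.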
“We don’t just focus on model training or making sure our training models are not biased; we also focus on all the dimensions involved in the machine learning development lifecycle,” says Cukor. “It is a challenge, but this is the future of AI. Everyone wants to see that level of discipline.”
Prioritizing responsible AI
There is clear business consensus that RAI is important and not just a nice-to-have. In PwC’s 2022 AI Business Survey, 98% of respondents said they have at least some plans to make AI responsible through measures including improving AI governance, monitoring and reporting on AI model performance, and making sure decisions are interpretable and easily explainable.
Notwithstanding these aspirations, some companies have struggled to implement RAI. The PwC poll found that fewer than half of respondents have planned concrete RAI actions. Another survey, by MIT Sloan Management Review and Boston Consulting Group, found that while most firms view RAI as instrumental to mitigating the technology’s risks—including risks related to safety, bias, fairness, and privacy—they acknowledge a failure to prioritize it: 56% say it is a top priority, but only 25% have a fully mature program in place. Challenges can come from organizational complexity and culture, lack of consensus on ethical practices or tools, insufficient capacity or employee training, regulatory uncertainty, and integration with existing risk and data practices.
For Cukor, RAI is not optional despite these significant operational challenges. “For many, investing in the guardrails and practices that enable responsible innovation at speed feels like a trade-off. JPMorgan Chase has a duty to our customers to innovate responsibly, which means carefully balancing the challenges between issues like resourcing, robustness, privacy, power, explainability, and business impact.” Investing in the proper controls and risk management practices early on, across all stages of the data-AI lifecycle, will allow the firm to accelerate innovation and ultimately serve as a competitive advantage, he argues.
For RAI initiatives to be successful, RAI needs to be embedded into the culture of the organization, rather than merely added on as a technical checkmark. Implementing these cultural changes requires the right skills and mindset. An MIT Sloan Management Review and Boston Consulting Group poll found that 54% of respondents struggled to find RAI expertise and talent, with 53% indicating a lack of training or knowledge among current staff members.
Finding talent is easier said than done. RAI is a nascent field, and its practitioners have noted the clearly multidisciplinary nature of the work, with contributions coming from sociologists, data scientists, philosophers, designers, policy experts, and lawyers, to name just a few.
“Given this unique context and the newness of our field, it is rare to find individuals with a trifecta: technical skills in AI/ML, expertise in ethics, and domain expertise in finance,” says Cukor. “This is why RAI in finance must be a multidisciplinary practice with collaboration at its core. To get the right mix of talents and perspectives you need to hire experts across different domains so they can have the hard conversations and surface issues that others might overlook.”
This article is for informational purposes only and is not intended as legal, tax, financial, investment, accounting, or regulatory advice. Opinions expressed herein are the personal views of the individual(s) and do not represent the views of JPMorgan Chase & Co. The accuracy of any statements, linked resources, reported findings, or quotations is not the responsibility of JPMorgan Chase & Co.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.