Connor Heaton, SRM’s AI lead, on the transformative power of AI in financial services, from compliance to efficiency gains.
Hello Connor, could you share your journey into AI and what led you to your current role at SRM?
When considering my path for education and career, AI was one of the big domains (alongside biotechnology/genetics) which looked like they had the potential to transform society in my lifetime. I studied cognitive science and decision theory in university in part to stay close to developments in AI, and I steered my consulting career towards AI and automation during my time at Deloitte. I helped to build and scale Deloitte’s federal automation practice, and SRM brought me on initially to do much the same thing for them, which put me in a perfect position to expand SRM’s AI practice to focus on LLMs (large language models) and other transformer-based AI technologies when ChatGPT’s launch kicked off the current wave of interest and investment in the space.
Generative AI is rapidly changing the technological landscape. How do you see its impact specifically within the financial services sector?
While AI itself isn’t new in banking, generative AI models like ChatGPT represent a paradigm shift akin to the transition from mainframes to PCs, or flip phones to smartphones. LLMs are democratizing advanced AI capabilities – making them more affordable, accessible and user-friendly. This is unlocking a vast range of newly automatable tasks and forming the foundation for an evolving ecosystem of AI-enabled solutions. For financial institutions, LLMs are already demonstrating value across domains like development, marketing, customer service, internal support, operations, and knowledge management. In the longer term, I expect AI to radically change nearly every role at a financial institution – responsibilities will be refactored to take best advantage of AI tools, with employees overseeing or augmented by AI.
Financial institutions have long used AI for various tasks. What new challenges and opportunities do you see arising with the integration of generative AI, such as LLMs, into banking products?
Most of the opportunities center on access to advanced AI capabilities that enable faster and more efficient work, like Klarna’s use of AI in its contact centers to do the work of 700 agents. There are also a number of opportunities which are effectively new to the FI space, especially the community FI space: scaled personalization and marketing, intranet and enterprise-wide knowledge search with a truly flexible conversational interface, broadly applicable image identification and extraction, automated workflow documentation, and others.
LLMs do pose new challenges. Hallucinations, or confident outputs that are factually incorrect, can create compliance and reputational risks without proper human oversight. Data privacy vulnerabilities, AI bias, and a fluid regulatory landscape demand robust governance. There are also open questions around AI’s impact on jobs and how extensively banking processes and products will need to be re-engineered to fully capitalize on the technology.
With AI tools like ChatGPT being incorporated into financial services, how do you recommend institutions balance innovation with the need for compliance and risk management?
As with any new technology, this comes down to the institution’s individualized risk tolerance. FIs should be educating their leadership thoroughly on generative AI and its implications before developing an articulation of risk appetite which steers how aggressive the organization will be in its adoption of AI tools.
Data privacy is a major concern with AI adoption. What steps should financial institutions take to ensure that their AI implementations do not compromise sensitive information?
The nature of large language models, which are trained on vast datasets and can potentially be probed to expose sensitive information, introduces novel vulnerabilities that demand rigorous safeguards.
The bedrock of any financial institution’s AI privacy strategy should be a comprehensive data governance framework. This encompasses policies and procedures for data collection, storage, access, usage, and disposal across the AI lifecycle.
Risk from employee use of third-party AI tools is best mitigated through robust policy and education, and through launching a safe internal AI tool which protects data for employees to use instead of external solutions.
In an age of AI disruption, a wait-and-see approach is no longer viable. Financial institutions that take a proactive, structured approach to AI adoption will be best positioned to thrive in an increasingly competitive and complex world. It’s a challenging undertaking, but one that will define the industry’s future leaders.
The use of unauthorized AI tools by employees and vendors poses significant risks. How can financial institutions effectively monitor and control AI usage to safeguard their data?
All staff interfacing with AI systems should receive training on data privacy best practices, incident reporting protocols, and the consequences of violations. Building a culture of privacy awareness can help mitigate risks stemming from human error or negligence.
When engaging with third-party AI providers, thorough due diligence is essential. FIs must carefully vet vendors’ data handling practices, security measures, and privacy track records. Contracts should clearly delineate data ownership, usage limitations, and audit rights. It’s important not just to apply this due diligence to new vendors, but to existing vendors which may be adding AI capabilities into their offerings and using third-party AI tools internally with client data.
Could you elaborate on what a proactive and structured AI adoption strategy looks like, and why it is crucial for financial institutions today?
At its core, a robust AI adoption strategy is a roadmap for harnessing the technology’s potential in a way that aligns with an organization’s goals, values, and risk appetite. It goes beyond piecemeal experimentation to provide a cohesive framework for identifying, prioritizing, and scaling high-value use cases.
The first step is establishing a clear vision and governance structure. This involves defining AI’s role in the FI’s overall business strategy, setting measurable objectives, and designating leadership accountable for results. The AI strategy should be aligned with and guided by the organization’s existing strategic goals, which help determine which areas and use cases should be adopted fastest.
The foundation of adoption is a robust yet nuanced AI policy. This policy should delineate approved use cases, data handling protocols, oversight mechanisms, and educational components to ensure employees grasp the strengths and limitations of generative AI. The policy can act as a blueprint for adoption, laying the groundwork of what employees and business units need to know to use AI compliantly and effectively. Operationalizing the policy requires deliberate change management. Access controls, targeted training, and mechanisms to monitor usage are critical to steer adoption.
In your experience, how can financial institutions best leverage AI to enhance efficiency and customer experience without increasing their risk exposure?
Some level of additional risk is inevitable; the most conservative implementations will focus on more time-tested traditional AI solutions, using fully transparent and explainable ML algorithms. Highly risk-averse adoption of generative AI involves pilot groups of employees using generative AI to augment their work on specific low-risk internal use cases under close monitoring, with human review of all outputs.
Of course, these tools are part of the environment now, and nearly 70% of knowledge workers are already using them, many through external tools of their own choosing. Organizations unaware of this will likely find that they have already adopted generative AI in a way that increases risk exposure and must be addressed to maintain compliance.
What are some common pitfalls that financial institutions should avoid when integrating AI technologies into their operations?
Early in the wake of ChatGPT’s release, many FIs attempted to ban and block access to third-party tools like ChatGPT. Research swiftly showed that such flat prohibitions were likely to be circumvented and to create shadow IT, with employees simply finding other free online options or using tools from their phones. FIs should instead craft guidelines that enable constrained, responsible usage, and launch their own safe internal tools for use with sensitive data.
“Shiny object syndrome” is alive and well – it’s necessary for FIs to get a handle on the fundamentals of generative AI for compliance at a minimum, but selection of AI vendors and solutions should still be judicious. There’s still a lot of churn in the AI industry, and the risk of a given vendor being acquired or going out of business is higher than ever. Additionally, purchasing a generative AI solution without doing the foundational work of AI policy and strategy will frequently backfire.
Internally, a common stumbling block is failing to secure broad organizational buy-in for AI initiatives. Resistance can come from many quarters—front-line employees fearful of being replaced, middle managers skeptical of ceding control, or executives wary of reputational risks. Overcoming these barriers requires a concerted change management effort, including clear communication about AI’s benefits, comprehensive training programs, and visible leadership support.
Looking ahead, how do you envision the role of AI evolving in the financial industry, and what steps is SRM taking to stay ahead in this rapidly changing environment?
It’s clear that AI is not just a passing fad, but a transformative force that will reshape the financial industry in profound ways. While the specifics may be hard to predict, we can extrapolate from past cycles of innovation to paint a picture of what lies ahead.
In the near term, we can expect to see AI becoming more deeply embedded into the fabric of financial services. Generative AI and LLMs will likely be integrated into almost every financial institution, whether through deliberate adoption or indirectly via employees and vendors. Solutions will also become more specialized and turnkey for the financial space. The pressure to harness these tools for efficiency, personalization, and innovation will only intensify as the technology matures and customer expectations evolve.
In the medium term, we expect more comprehensive transformation of knowledge work with the introduction of increasingly capable AI agents, and a variety of knock-on effects to processes like RFPs, audits, offshoring, helpdesk, fraud, identity proofing, content creation, and many others.
Ultimately, the future of AI in finance will be shaped by those who approach it with a spirit of responsible experimentation, continuous learning, and stakeholder collaboration. SRM has launched its own internal generative AI policy and strategy, along with education and coaching, tooling to augment employees, and KPI and KRI monitoring; we have also audited our vendors and modernized our data environment. We additionally conduct continuous horizon scanning around AI advancements and regulation, which feeds into our top-level strategic planning.
SRM remains committed to being a trusted partner to our clients on this journey—helping them harness the power of AI to drive efficiency, enhance customer experiences, and unlock new frontiers of value creation.