A Regulator’s Assessment of the Impact of Artificial Intelligence on Financial Services

Written by Eamonn K. Moran

In a speech delivered at “Fintech and the New Financial Landscape” in Philadelphia on November 13, 2018, Federal Reserve Board Governor Lael Brainard discussed how technology is changing the financial landscape and the lessons being learned about artificial intelligence (AI) in financial services. According to Governor Brainard, “[a]lthough it is still in the early days, it is already evident that the application of artificial intelligence (AI) in financial services is potentially quite important and merits our attention.” She noted that the Fintech working group is working across the Federal Reserve System “to take a deliberate approach to understanding the potential implications of AI for financial services, particularly as they relate to our responsibilities.”

The Growing Use of Artificial Intelligence in Financial Services

The focus of Governor Brainard’s speech was on the branch of AI known as machine learning – which applies and refines a series of algorithms on a large data set in order to identify patterns and make predictions for new data. Brainard highlighted how recent technological advances have made the three key components of AI – algorithms, processing power, and big data – all increasingly accessible. As a result, she observed that many financial services firms are devoting increasing money, attention, and time to developing and using AI approaches. At a high level, she noted particular interest in at least the following five capabilities:

  • First, financial services firms view AI approaches “as potentially having superior ability for pattern recognition, such as identifying relationships among variables that are not intuitive or not revealed by more traditional modeling.”
  • Second, financial services firms see potential cost efficiencies “where AI approaches may be able to arrive at outcomes more cheaply with no reduction in performance.”
  • Third, AI approaches “might have greater accuracy in processing because of their greater automation compared to approaches that have more human input and higher ‘operator error.’”
  • Fourth, financial services firms “may see better predictive power with AI compared to more traditional approaches – for instance, in improving investment performance or expanding credit access.”
  • Finally, AI approaches “are better than conventional approaches at accommodating very large and less-structured data sets and processing those data more efficiently and effectively. Some machine learning approaches can be ‘let loose’ on data sets to identify patterns or develop predictions without the need to specify a functional form ex ante.”

With respect to the impact of the above capabilities on the banking sector, Governor Brainard pointed to the four areas identified by the Financial Stability Board:

  • (i) Customer-facing uses “could combine expanded consumer data sets with new algorithms to assess credit quality or price insurance policies” (and she highlighted how chatbots could provide assistance and even financial advice to consumers, without having to wait to speak with a live operator);
  • (ii) The potential for strengthening back-office operations, such as advanced models for capital optimization, model risk management, stress testing, and market impact analysis;
  • (iii) AI approaches could be applied to trading and investment strategies, from identifying new signals on price movements to using past trading behavior to anticipate a client’s next order; and
  • (iv) There are likely to be AI advancements in compliance and risk mitigation by banks.

Current Regulatory and Supervisory Approaches

Governor Brainard then pivoted to discuss how we should approach regulation and supervision, given how the “potential breadth and power of these new AI applications inevitably raise questions about potential risks to bank safety and soundness, consumer protection, or the financial system.” In her view, “[i]t is incumbent on regulators to review the potential consequences of AI, including the possible risks, and take a balanced view about its use by supervised firms.” She emphasized the importance of crafting regulation and supervision so as to ensure that risks are appropriately mitigated while not creating obstacles to responsible innovations that might expand “access and convenience for consumers and small businesses or bring greater efficiency, risk detection, and accuracy.” At the same time, she highlighted the importance of not driving responsible innovation away from supervised institutions and toward “less regulated and more opaque spaces in the financial system.”

In this vein, she pointed to the existing regulatory and supervisory guardrails as a good place to start in assessing the appropriate approach for AI processes. In terms of banking services, she highlighted a few generally applicable laws, regulations, guidance, and supervisory approaches that appear “particularly relevant to the use of AI tools,” including:

  • The Federal Reserve’s “Guidance on Model Risk Management” (SR Letter 11-7), which highlights the importance to safety and soundness of embedding critical analysis throughout the development, implementation, and use of models – a category that includes complex algorithms such as those used in AI.
  • The Federal Reserve’s guidance on vendor risk management (SR 13-19/CA 13-21), along with the prudential regulators’ guidance on technology service providers, highlights considerations financial services firms should weigh when outsourcing business functions or activities – “and could be expected to apply as well to AI-based tools or services that are externally sourced.”
  • The Federal Reserve’s risk-focused supervisory approach – under which the level of scrutiny should be commensurate with the potential risk posed by the approach, tool, model, or process used – which financial services firms should mirror in evaluating the different approaches they themselves deploy. In short, “firms should apply more care and caution to a tool they use for major decisions or that could have a material impact on consumers, compliance, or safety and soundness.”
  • The Federal Reserve’s expectation that financial services firms would apply “robust analysis and prudent risk management and controls to AI tools, as they do in other areas, as well as to monitor potential changes and ongoing developments.”

Brainard then turned to the consumer credit space, which is ripe for the many new consumer benefits that AI may offer but, as she pointed out, “is not immune from fair lending and other consumer protection risks.” In particular, Brainard pointed to the Equal Credit Opportunity Act (ECOA) and Fair Credit Reporting Act (FCRA) requirements that creditors provide notice of the factors involved in taking actions that are adverse or unfavorable for the consumer, requirements that imply a need to explain AI-based decisions even though “the opacity of some AI tools may make it challenging to explain credit decisions to consumers.” She highlighted how the AI community is responding with important advances in developing “explainable” AI tools, with a focus on expanding consumer access to credit.

Looking Ahead

Brainard closed her remarks by pointing to what she considers perhaps one of the most important early lessons so far: that not all potential consequences are knowable now and, accordingly, “firms should be continually vigilant for new issues in the rapidly evolving area of AI.” Relatedly, because things can go wrong, Brainard underscored the need for firms to recognize AI’s possible pitfalls and employ sound controls now to prevent and mitigate possible future problems, rather than assume that AI approaches are less susceptible to problems because they are purported to be able to “learn” or to be less prone to human error.

“When considering financial innovation of any type, our task is to facilitate an environment in which socially beneficial, responsible innovation can progress with appropriate mitigation of risk and consistent with applicable statutes and regulations. As with other technological advances, AI presents regulators with a responsibility to act with thoughtfulness and perspective in carrying out their mandates, learning from the experience in other areas,” stated Brainard.
