News

The FSC released the "Guidelines for Artificial Intelligence (AI) Applications in the Financial Industry"

2024-06-20
The Financial Supervisory Commission (FSC) today released the "Guidelines for Artificial Intelligence (AI) Applications in the Financial Industry" to encourage financial institutions to harness technology effectively, with responsible innovation at its core, and to employ trustworthy AI in developing financial services that meet the public's needs.

The FSC developed the Guidelines based on the "Core Principles and Policies for Artificial Intelligence (AI) Applications in the Financial Industry" released on October 17, 2023. It has since referenced relevant guidance documents issued by other governments and international organizations, and consulted the opinions of different sectors, experts, and academics via the public policy participation platform "JOIN", related FinTech meetings, and two AI guidance consultation meetings. After several rounds of adjustments, the FSC published the "Guidelines for AI Applications in the Financial Industry" today (provided in the attachment) as a reference for the adoption, use, and management of AI by financial institutions.

The "Guidelines for AI Applications in the Financial Industry" (hereinafter referred to as the Guidelines) mainly include the General Provisions and six major chapters. The General Provisions mainly explain common issues such as AI-related definitions, AI life cycle, risk assessment factors, risk-based implementation of core principles, and supervision and management of third-party operators. The six chapters explain the key points of concern and measures that can be adopted by financial institutions based on the AI life cycle and assessed risks when they implement the six major principles. They include the purpose, main ideas, the corresponding matters of note for each principle, implementation methods, and measures to be adopted.

The FSC has received a wide range of industry feedback during the external consultation period for the draft, and has made adjustments to increase the flexibility of the Guidelines and meet the industry’s actual requirements. The important contents of the Guidelines are outlined below to address matters of concern to financial institutions:
I.  Preface: To provide financial institutions with flexible choices among suitable risk management measures, the Guidelines specify that "depending on the risks associated with the use of AI systems, financial institutions may choose risk mitigation mechanisms and implementation methods in accordance with the core principles, including adopting more cost-effective methods to achieve their objectives."
II. General Provisions:
(I) Financial institutions attach great importance to the division of responsibilities for supervising and managing third-party operators. The Guidelines therefore specify that "when using AI systems, financial institutions are advised to identify the extent to which they can monitor and control the risks, and to define the division of responsibilities for risk monitoring and control with vendors through contracts or other means for parts or matters over which they have less control." The Guidelines also specify the supervisory and management measures to take when financial institutions appoint a third-party operator to introduce AI system-related operations, including conditions for suspending outsourced operations and setting mechanisms for the transfer of data or systems.
(II) When financial institutions use AI systems, they should implement the core principles in a risk-based manner. To help financial institutions assess the factors that should be included in risk assessment, the Guidelines provide examples for reference. The Guidelines therefore specify that "the following are examples that help illustrate the risk assessment scenarios and are not intended to regulate the risk level of the relevant usage scenarios. The level of risk involved in using AI systems must still be determined by the financial institutions after considering all factors in the risk assessment."
III. Chapter 1: Because the risk management of financial institutions may involve external reviews, the review mechanism gives financial institutions the flexibility to make decisions based on their own professionalism and resources. The Guidelines specify that "after evaluating the risk, internal resources, and requirements for professional services of AI systems, financial institutions may, if necessary, establish mechanisms for an independent third party with AI expertise to carry out the review and assessment."
IV. Chapter 2: Considering that financial institutions may not have full control over the outcome of generative AI, and to ensure the fairness and human control of AI systems, the Guidelines specify that "when a financial institution uses generative AI developed by a third party and is unable to control the training process and ensure that its data or the results of its calculations meet fairness requirements, risks associated with information produced by generative AI must be objectively and professionally managed by its personnel."
V. Chapter 3: To protect customer privacy, the Guidelines reference the General Data Protection Regulation (GDPR) of the European Union and specify that financial institutions should adopt the principle of "data minimization" and avoid the collection of excessive or unnecessary data. Data minimization means that the collection of personal data must be adequate, relevant, and limited to what is necessary in relation to the purposes for which it is processed.
VI. Chapter 5:
(I) Financial institutions should ensure that the operation of AI systems is explainable. However, if a financial institution commissions another operator to develop or purchase an AI system, it may not fully understand the detailed operation of the AI system, as it contains trade secrets belonging to the operator. The Guidelines thus limit the explainability to "the ability to clearly explain the operation of the AI system developed or commissioned by the financial institution for use, as well as the logic behind its prediction or decision-making process."
(II) In terms of transparency, to enhance market trust in the AI systems of financial institutions, the Guidelines specify that financial institutions may, if necessary, proactively inform stakeholders about their AI practices through the publication of reports, technical documents, or the disclosure of relevant information on their websites.

The FSC states that the Guidelines primarily provide considerations for financial institutions when adopting or utilizing AI and fall under the category of administrative guidance. If financial industry associations establish self-regulatory standards for AI usage, they can incorporate relevant key considerations and measures from the Guidelines. In cases where no specific self-regulatory standards are established, financial institutions can refer to the Guidelines for the adoption, utilization, and management of AI systems. When financial institutions use AI systems for financial innovation businesses, the FSC encourages those financial institutions to conduct tests through mechanisms such as FinTech Innovation Experimentation or financial business trials, if deemed necessary.

The FSC will continue to monitor the challenges and opportunities in the introduction of AI technologies by financial institutions, and encourage financial institutions to enhance risk management, consumer protection, data security, and the digital rights of disadvantaged groups. The FSC will also help the industry invest in technological innovation and improve the efficiency, quality, and competitiveness of financial services while supporting consumer rights and interests and maintaining order in the financial market.
