Given the current and proposed regulatory framework, broker-dealers and investment advisers must have a firm understanding of the AI tools they use and implement appropriate policies and procedures governing those tools. Firms should not wait to assess their use of AI, including anticipated future use, and should put guardrails in place now to ensure that customers are protected and that the firms satisfy all regulatory expectations.
Firms should begin by assessing what AI technology they actually use or plan to use. Once that assessment is complete, firms should evaluate whether such use presents any conflicts of interest, potential customer harm, or violations of applicable rules and regulations. Firms should also consider maintaining an inventory of every AI application they use, the risks posed by each application, and the mitigating controls that address each AI-related risk.
Next, firms should implement, and periodically review, written policies and procedures that address AI governance and the regulatory risks AI poses. Existing policies and procedures may likewise be enhanced to address AI-related conflicts of interest, potential customer harm, and potential regulatory violations. For example, firms may choose to be deliberate about adopting any new AI system, explicitly requiring that it be reviewed and assessed before personnel are permitted to use it. Supervision by cross-functional teams and periodic testing also help firms understand how their AI systems are performing.