Compliance Professionals: The Ethical Risks of Using Generative AI in Regulatory Compliance


October 20, 2023
Business

The use of generative AI in regulatory compliance for financial service providers has gained significant attention in recent years. While there are many potential benefits associated with its use, there are also several ethical risks that need to be considered.

Generative AI is a type of artificial intelligence that can create new content, such as text, images, code, and more. It works by using machine learning models to learn from large amounts of data and then generate new content that resembles that data. Some examples of generative AI are ChatGPT, DALL-E, Google Bard, and Duet AI.

There are many potential applications and associated benefits that may accrue from utilising generative AI solutions, such as improving customer interactions, exploring unstructured data, and automating repetitive tasks. However, these solutions also come with dangers and limitations that should be considered, such as ethical issues, quality control, and data privacy. Generative AI differs from general AI and machine learning in that it can produce novel and diverse outputs without being explicitly programmed or supervised, which makes it one of the most exciting and innovative fields of AI research and development.

Let’s consider some of the potential applications of this technology in the area of financial services compliance. These tools can scan contracts, policies, and other documents for errors, inconsistencies, and specific terms. They can also generate summaries, reports, and recommendations based on the analysis, which can help compliance professionals save time and resources while improving accuracy and efficiency.
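To make the document-scanning idea concrete, here is a minimal illustrative sketch: a scanner that counts case-insensitive occurrences of compliance-sensitive phrases in a document. The term list, the `scan_document` function, and the sample policy text are all hypothetical examples, not any vendor's actual product.

```python
import re

# Hypothetical watch list of phrases a compliance team might flag.
WATCH_TERMS = ["guaranteed return", "no risk", "off the record"]

def scan_document(text: str) -> dict:
    """Return each watch term mapped to its count of case-insensitive matches."""
    findings = {}
    for term in WATCH_TERMS:
        matches = re.findall(re.escape(term), text, flags=re.IGNORECASE)
        if matches:
            findings[term] = len(matches)
    return findings

# Hypothetical policy excerpt to scan.
policy = (
    "Our fund offers a guaranteed return to all investors. "
    "There is no risk because the return is Guaranteed Return."
)
report = scan_document(policy)
```

In practice, a generative model would layer summarisation and recommendation on top of this kind of extraction; the sketch only shows the flagging step.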

AI also has the potential to monitor and analyse large amounts of data from various sources, such as social media, news, customer feedback, and internal records, and can then identify patterns, anomalies, and emerging issues that may pose compliance risks or opportunities. This can help compliance professionals proactively address potential problems while leveraging best practices.
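The monitoring idea above can be sketched with a deliberately simple anomaly rule: flag any record whose value deviates sharply from the mean (a z-score check). Real compliance-monitoring pipelines use far richer features and models; the data and the 2.0 threshold here are hypothetical.

```python
import statistics

def flag_anomalies(values, threshold=3.0):
    """Return indices of values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Hypothetical daily transfer totals; the final entry is an obvious outlier.
daily_transfers = [100, 102, 98, 101, 99, 103, 97, 100, 5000]
suspicious = flag_anomalies(daily_transfers, threshold=2.0)
```

A flagged index would then be routed to a human reviewer rather than acted on automatically, keeping the compliance professional in the loop.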

As outlined above, as AI tools gain more widespread appeal, some ethical concerns have arisen. The main one associated with the use of generative AI in regulatory compliance is the potential for bias. AI models are only as good as the data they are trained on, and if that data is biased, the model will be biased too, which can lead to unfair treatment of certain groups of people and result in regulatory non-compliance.

To mitigate this risk, financial service providers need to ensure that they are using high-quality data to train their generative AI models, while also ensuring that their models are regularly audited to prevent the generation of biased results. Additionally, financial service providers should consider using diverse teams to develop and test their generative AI models, again to ensure that they are not inadvertently introducing bias into the process.
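One simple form the suggested audit can take is an outcome-rate comparison across groups (a demographic-parity style check). This is an illustrative sketch only: the group labels, sample decisions, and any tolerance applied to the gap are hypothetical, and real audits would follow the applicable legal and regulatory criteria.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical model decisions for two customer groups.
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]
rates = approval_rates(sample)
gap = parity_gap(rates)  # a large gap would be escalated for review
```

Running such a check on every retraining cycle, and recording the results, is one way to operationalise the "regularly audited" requirement.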

The use of generative AI in regulatory compliance does also create the potential for privacy violations as these models can be used to analyse large amounts of data, including personal data. Firms will need to ensure that they are complying with all relevant privacy laws and regulations when using generative AI and to minimise the potential risk, should consider implementing privacy-by-design principles when developing their generative AI models. This includes ensuring that personal data is only collected and used for specific purposes and that it is protected by appropriate security measures.
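Privacy-by-design often begins with data minimisation: stripping personal data out of text before it ever reaches a generative model. The sketch below redacts simple email and phone-number patterns; the regexes cover only basic shapes and the customer note is a hypothetical example, so a production system would need far more robust detection.

```python
import re

# Simple, deliberately narrow patterns for common personal-data shapes.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact(text: str) -> str:
    """Replace emails and phone-like numbers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

# Hypothetical customer-service note, minimised before model processing.
note = "Customer j.smith@example.com called from +44 20 7946 0958 about fees."
clean = redact(note)
```

Redacting at ingestion, rather than relying on the model to ignore personal data, keeps the collected data limited to its specific purpose, which is the principle the paragraph above describes.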

These tools can be used to automate many of the processes involved in regulatory compliance, which can lead to increased efficiency and reduced costs, although automation also has the potential to create unintended consequences that may themselves result in regulatory non-compliance. Financial service providers will need to monitor their AI models regularly to identify potential issues and to ensure accuracy and reliability. Firms should also have contingency plans in place to prevent issues where possible, and to identify and resolve them if and when they do arise.

One concern often raised in relation to the use of generative AI in regulatory compliance is the potential for job displacement, since AI can automate many of the processes involved in compliance work. According to some estimates, generative AI could affect up to 40% of all working hours and 62% of the total time employees work, meaning that many jobs and tasks could be automated or augmented.

However, this does not necessarily mean that machines will simply replace humans, but rather, it means that humans will need to adapt and learn new skills to work effectively with generative AI. Some jobs and tasks may decline or disappear, while others may grow or emerge, including roles for AI and machine learning specialists, data analysts and scientists, and digital transformation specialists which are expected to grow rapidly.

The potential for generative AI to reshape the world of work means that employers and workers need to start preparing now to avoid being left behind. Employers need to invest in reskilling and upskilling their workforce to develop the technical and soft skills that are needed in this age of generative AI. Workers also need to embrace lifelong learning and be open to new opportunities and challenges. Generative AI can be a powerful tool that can transform how organisations approach compliance, innovation, and productivity.

Financial service providers should also consider retraining their risk and compliance teams to work alongside generative AI systems. Initially this will help to verify the accuracy of the model while allowing team members to become familiar with the tool, and it will also help to ensure that compliance professionals remain relevant and valuable members of the workforce.

In conclusion, while there are many potential benefits associated with the use of generative AI in regulatory compliance for financial service providers, there are also several ethical risks that need to be considered. Financial service providers need to ensure that they are using high-quality data to train their generative AI models and that their models are regularly audited to ensure that they are not producing biased results. They also need to ensure that they are complying with all relevant privacy laws and regulations when using generative AI and that they have contingency plans in place in case any unintended consequences arise from the use of generative AI systems.