As the use of artificial intelligence (AI) continues to grow in the workplace, it is important to understand how it affects risk and compliance professionals. ChatGPT, a language model developed by OpenAI, has gained popularity as a tool for communication and decision-making. However, concerns around data privacy, security, and ethics have been raised.
Here at Coopman, we carried out a survey that encouraged participants to share their opinions on the use of ChatGPT in their organisation and to contribute to the understanding of its impact on risk and compliance functions.
We surveyed those at executive and senior management level who work within compliance and risk functions or who oversee entire business operations. Of the respondents who answered the questions, 62.5% reported moderate to very high knowledge of ChatGPT. Below are some of the findings:
Regarding ChatGPT’s contributions in an organisation, respondents reported the following observations:
• Data entry and processing
A significant portion of respondents (28.57%) indicated that ChatGPT has proven useful in streamlining data entry and processing tasks. Its automation capabilities can reduce manual effort, minimise errors, and improve overall data management efficiency.
• Improved speed and accuracy of risk identification along with enhanced compliance
According to 28.57% of respondents, ChatGPT has the potential to increase the efficiency and precision of identifying risks. This capability can significantly contribute to proactive risk mitigation and decision-making processes.
• Drafting of policy, process and other memos
57.14% of respondents suggested that ChatGPT has been effective in drafting first versions of required documentation and communication; assistance with programming was another reported use within firms.
DATA PRIVACY & SECURITY:
Our survey findings shed light on the perceptions of how the use of ChatGPT affects data privacy and security within an organisation:
• 100% of respondents believe ChatGPT should be regulated under GDPR.
• ChatGPT poses risks that can be mitigated. The majority of respondents (75%) acknowledged that the use of ChatGPT poses risks to data privacy and security, but emphasised that these risks can be mitigated effectively through appropriate security measures. This reflects a balanced perspective that weighs the potential risks against the value of proactive controls.
• 50% believe that ChatGPT conversations should be treated as personal data under GDPR, while the remaining 50% either disagreed or were unsure.
REGULATION AND RISK:
Respondents highlighted important considerations regarding the regulation and risk management of ChatGPT within their organisations:
• Operational risks
62.5% of all respondents consider technology failures or malfunctions, compliance and regulatory issues, human errors or mistakes, and data breaches or security incidents to be the most relevant operational risks when considering the use of ChatGPT.
• Regulation of ChatGPT usage
Half of respondents believe that strict guidelines and policies should be implemented to ensure compliance when utilising ChatGPT. This underscores the importance of establishing clear frameworks and standards to govern its usage effectively.
To ensure compliance with regulations when using ChatGPT, the following measures were suggested by survey participants:
• Employee training and education
It is essential to provide comprehensive training and education to employees regarding the proper use of ChatGPT. This enables them to understand the regulations, potential risks, and best practices associated with its usage.
• Automated monitoring and auditing
Implementing automated monitoring systems and regular auditing processes can help detect any non-compliant usage of ChatGPT. By monitoring its activities, organisations can promptly address any potential violations and ensure adherence to regulations.
• Peer reviews
Incorporating peer review mechanisms can serve as an additional layer of compliance assurance. Peer reviews enable colleagues to assess and validate the appropriateness and compliance of ChatGPT outputs, reducing the risk of unintended errors or biased outcomes.
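The automated monitoring measure above could, in practice, take the form of a pre-submission screen that logs each prompt for audit and flags content that should not leave the organisation. The following is a minimal illustrative sketch only; the pattern set, logger name, and function names are assumptions, and a real deployment would rely on an organisation-approved data-loss-prevention rule set rather than ad hoc regular expressions.

```python
import re
import logging

# Illustrative patterns for data that should not be sent to an external tool;
# a real deployment would use an organisation-approved DLP rule set.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

audit_log = logging.getLogger("chatgpt_audit")

def screen_prompt(user: str, prompt: str) -> list:
    """Record the prompt for audit purposes and return any policy flags."""
    flags = [name for name, pattern in SENSITIVE_PATTERNS.items()
             if pattern.search(prompt)]
    # Log metadata only, not the prompt text itself, to avoid duplicating
    # sensitive content in the audit trail.
    audit_log.info("user=%s flags=%s length=%d", user, flags, len(prompt))
    return flags
```

A non-empty return value would typically block the prompt pending review, giving the auditing process described above a concrete enforcement point.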
Finally, in managing and mitigating the identified risks associated with ChatGPT usage, all respondents recommended a combination of the following approaches:
• Implementing controls and procedures
Organisations should establish robust controls and procedures to reduce the likelihood of risks occurring. This may involve implementing access controls, data security measures, and regular risk assessments to identify and address potential vulnerabilities.
• Preparing contingency plans
It is crucial to develop contingency plans to effectively manage risks if they do occur. These plans should outline steps to be taken, roles and responsibilities, and communication channels to address any adverse situations arising from ChatGPT usage.
• Transferring risks
Organisations may consider transferring certain risks associated with ChatGPT to third-party vendors or insurance policies. This can provide an additional layer of protection and mitigate potential financial and operational impacts.
Given that 42.85% of respondents currently use several AI tools, such as machine learning (ML), natural language processing (NLP), and robotic process automation (RPA), a comprehensive approach encompassing regulation, compliance measures, and risk mitigation strategies can help organisations manage AI tools such as ChatGPT effectively while ensuring compliance with regulations and minimising potential risks.
If you would like to discuss the above or how we can support your hiring processes in the current market, please get in touch with Director and Co-Founder Andrew Murphy at firstname.lastname@example.org