
ChatGPT is a new route for cyberthieves

Kayla Casillo & Alexander Powell, ENSafrica

OpenAI has rolled out a ChatGPT application programming interface (API) and made it available to the public, enabling companies to incorporate customised ChatGPT functionality (such as content generation or summarisation) into their platforms or systems.

However, using the ChatGPT API, or any API for that matter, comes with certain risks concerning cybersecurity, availability and functionality, and data protection, which will need to be carefully monitored and, where necessary, mitigated.

Cybersecurity

The ChatGPT API can serve as an additional attack vector within the organisation, where cybercriminals can seek to gain access to the company's API key. According to OpenAI, a compromised API key may allow a person to gain access to the API, which could not only result in the company's API credit account being consumed and depleted, leading to unexpected charges, but also result in potential data losses and disruption to ChatGPT access. Furthermore, if a cybercriminal gains access to the ChatGPT API key, they may be able to unlawfully harvest or exfiltrate company data by accessing the company's databases. Cybersecurity therefore remains one of the key risks that a company will need to mitigate when using the ChatGPT API.

Data Protection And Confidential Information

To the extent that users disclose confidential information or personal information on the platform, there is a risk that the API incorporated into the platform may expose the company to data privacy risks, including unauthorised access to or disclosure of sensitive information which may be stored in third-party databases.
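One practical mitigation is to sanitise user input before it leaves the organisation. The sketch below is purely illustrative: the regular expressions are simplistic placeholders, and a real deployment would rely on a vetted data-loss-prevention tool rather than ad hoc pattern matching.

```python
import re

def redact(text: str) -> str:
    """Mask common personal identifiers before text is sent to a third-party API.

    Illustrative only; the patterns below are not exhaustive and a
    production system would use a dedicated DLP solution.
    """
    # Mask email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    # Mask long digit runs (phone, ID and card numbers)
    text = re.sub(r"\b\d{7,}\b", "[NUMBER]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 0824567890."
safe_prompt = redact(prompt)  # this, not the raw input, is sent onward
```

The key design point is that redaction happens on the company's side, before any data reaches third-party storage, so the exposure described above is reduced even if the external service retains inputs.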

Availability And Functionality

The availability and functionality of the ChatGPT API are dependent on a third party (in this case, OpenAI). If ChatGPT's API experiences downtime, it is likely that the company will experience disruption to the functionality of the ChatGPT component incorporated in its platform. Companies should evaluate how critical ChatGPT's functionality will be to the business and assess this against the extent to which they wish to rely on a third-party tool.
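Where the business does depend on the integration, the disruption risk can be softened in code. A minimal sketch, assuming nothing about OpenAI's own client libraries: wrap the outbound call (represented here by a generic callable) with retries, exponential backoff and a graceful fallback, so the platform degrades rather than fails outright during downtime.

```python
import time
from typing import Callable

FALLBACK_MESSAGE = "The assistant is temporarily unavailable."

def call_with_fallback(call: Callable[[], str],
                       retries: int = 3,
                       backoff: float = 1.0) -> str:
    """Retry a flaky external call, then degrade gracefully.

    `call` stands in for whatever function actually contacts the
    third-party API; it is a placeholder, not a real OpenAI client.
    """
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            # Wait progressively longer between attempts
            time.sleep(backoff * 2 ** attempt)
    # After exhausting retries, return a canned response so the
    # surrounding platform keeps working while the API is down
    return FALLBACK_MESSAGE
```

A fallback message is a crude floor; depending on how critical the feature is, a company might instead queue requests or route them to an in-house model.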

Although OpenAI recently updated its data usage policy to reduce the retention period for personal data to 30 days before deletion and to exclude the use of personal data for model improvement purposes, these changes may not entirely mitigate the risks, and companies may still be susceptible to disclosed company data being used as an input to further train ChatGPT.

Illegal And Malicious Conduct

It may be possible for cybercriminals to bypass the ChatGPT API's anti-abuse restrictions by using the ChatGPT API as a means to execute cyberattacks, for example by generating malware code, phishing emails and the like. Cybercriminals have become sophisticated in their approach to cyberattacks, and there is a risk that they will set up Telegram bots which are linked to, and capable of, prompting ChatGPT to generate such illegal and malicious content.

Security is one of the biggest concerns regarding the use of the ChatGPT API, and a company should ensure it applies best-practice measures for safeguarding API keys, which include:

● Always using a unique API key for each team member on the company's account;

● Not sharing API keys (which is prohibited by OpenAI's terms of use);

● Never committing API keys to repositories;

● Using a key management service; and

● Monitoring token usage and rotating API keys when needed.
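The first points above can be enforced mechanically. A minimal sketch, in which the variable name `OPENAI_API_KEY` follows OpenAI's documented convention and the error message is illustrative: the key is loaded from the environment at runtime, so it never appears in source code or committed repositories.

```python
import os

def load_api_key() -> str:
    """Load the OpenAI API key from the environment, never from source code.

    Keeping the key in an environment variable (or a dedicated key
    management service) keeps it out of repositories and logs.
    """
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; provision a unique key per team "
            "member via a key management service, not in code."
        )
    return key
```

Failing loudly when the key is absent is deliberate: it prevents developers from falling back to hard-coding a shared key to "make it work".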

As an alternative to using ChatGPT's API, a company can develop its own artificial intelligence language model. Large language models (LLMs) can be easily replicated and deployed within an insulated enterprise environment. Recent examples, such as Stanford University's Alpaca, show that LLMs may be less costly to develop while offering functionality and advantages similar to ChatGPT's. This approach may mitigate the company's exposure to risks surrounding its intellectual property, data privacy and the disclosure of confidential information.

As a secondary alternative, OpenAI has recently started releasing ChatGPT plugins: tools that allow ChatGPT to connect to a company's API in order to retrieve real-time information or to assist users with certain actions.

Since ChatGPT plugins would be connected to a company's system and granted access to certain company information in real time, the risks are akin to those identified for the ChatGPT API, and could expose the company to security vulnerabilities, performance impacts and delays, and compatibility issues causing reduced functionality of the system.

The risks associated with the use of ChatGPT plugins can be mitigated by implementing the following measures:

● Evaluation and curation of plugins;

● Conducting security assessments;

● Updating plugins regularly;

● Setting up user access control;

● Establishing a contingency plan; and

● Training users on the acceptable use of the ChatGPT plugin.

Therefore, if a company plans to integrate the ChatGPT API or plugins into its systems, it should ensure it has implemented all necessary safeguards to mitigate the potential risks identified in this article, most importantly storing its API keys securely, maintaining their confidentiality and deploying measures to counteract security vulnerabilities caused by the use of the ChatGPT API or plugins.

Regardless of whether a company is looking to use the free or premium version of ChatGPT within its organisation, use the ChatGPT API or use ChatGPT plugins, the company should have a formal policy in place to ensure and promote the responsible use of artificial intelligence and associated tools within the organisation, and should provide adequate training to users on the associated risks.
