Generative AI tools including ChatGPT are changing the way we do business. However, the question of whether ChatGPT is a security risk continues to be a topic of discussion.
One of the most notable recent developments in artificial intelligence is OpenAI’s ChatGPT, an AI-powered tool that generates human-like responses in text form. The generative AI tool reached 100 million users just two months after launching.
But does ChatGPT pose cybersecurity risks? And how can you mitigate the risks from cybercriminals who attempt to use ChatGPT to devise sophisticated attack strategies that often outpace existing cybersecurity defenses?
In this article, we spoke to industry representatives to delve into potential ChatGPT cybersecurity risks and the best practices for preventing them from becoming threats to your business.
How Does ChatGPT Work?
Developed by the artificial intelligence research laboratory OpenAI, ChatGPT (Chat Generative Pre-Trained Transformer) is an AI-powered tool that uses very large, sophisticated generative language models to produce human-like responses in text form.
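Beyond the browser interface, many businesses reach ChatGPT’s underlying models programmatically. The sketch below builds a request payload in the shape of OpenAI’s public chat-completions API; the endpoint URL, model name and message roles reflect OpenAI’s documented API, but treat the specifics as assumptions and check the current documentation before relying on them.

```python
# Illustrative sketch of a request to a chat-completions-style API.
# The payload shape (model + list of role/content messages) follows
# OpenAI's public API; nothing here is sent over the network.
import json

API_URL = "https://api.openai.com/v1/chat/completions"  # assumed endpoint

def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Build the JSON payload for a single-turn chat request."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

payload = build_chat_request("Summarise our password policy in two sentences.")
print(json.dumps(payload, indent=2))
```

In a real integration this payload would be POSTed to the API with an authenticated HTTP client; it is shown here only to make the text-in, text-out interaction concrete.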
Potential Security Risks of ChatGPT
While ChatGPT and similar Generative AI tools have a very positive impact on most industries and on daily life, they come with inherent risks, like any other technology, says Amit Roy, General Manager & Head-Cybersecurity-META, Eviden (An Atos Business).
AI has long been inseparable from cybersecurity. When it comes to generative AI and ChatGPT, security professionals are unanimous about the risks and threats such applications can pose, owing to the way they have been trained, their massive training data drawn from the Internet, and intended or unintended inputs to the application.
Cybercriminals will always use the latest trends and technology to line their pockets, and ChatGPT is no exception. Though ChatGPT as a technology is not dangerous, in the wrong hands it has the potential to pose security risks.
“As an AI language model, ChatGPT has the potential to pose certain security risks, although it’s important to note that these threats are contingent on misuse, not the technology itself,” says Karl Lankford, Regional Vice President, Solutions Engineering, BeyondTrust.
“Let me say one thing up front: ChatGPT itself is not dangerous. However, in the wrong hands, it potentially facilitates the creation of malicious code and, at first glance, well-worded phishing emails,” cautions Martin Holste, CTO Cloud at Trellix.
“While ChatGPT offers valuable benefits to businesses, its widespread use presents increased cybersecurity risks, mostly due to lack of sufficient awareness about the risks,” says Sergey Shykevich, Threat Intelligence Group Manager at Check Point Research.
Generative AI tools have gained popularity in various industries, including education, healthcare and customer service. However, they also pose cybersecurity risks.
Tarek KUZBARI, Regional Director of the Middle East & Turkey, HUMAN, says, “ChatGPT, while transformative in its language capabilities, presents notable risks. Misinformation dissemination, privacy violations, cybersecurity, and job displacement are key concerns.”
“ChatGPT is one of the fastest-growing consumer applications in history. While it has become one of the most popular applications this year, it is also attracting the attention of scammers seeking to benefit from using wording and domain names that appear related to the site,” says Tarek Abbas, Senior Director, Systems Engineering at Palo Alto Networks, MENA, CIS and Turkey.
As organizations start to deploy generative AI, they are likely to encounter a host of trust, risk, security, privacy and ethical questions.
Advanced Phishing and Fraud Attacks
ChatGPT is the subject of discussion in the cybersecurity sector due to its potential to create phishing emails.
“Due to the ease of its use, cybercriminals are also exploring how it can be used to improve their operations in malware and phishing creation,” says Sergey Shykevich of Check Point Research.
Phishing is by far the most common cyber threat globally. However, most phishing scams can be easily detected (unless well crafted), as they are often cluttered with poor grammar, awkward phrasing and misspellings. “With ChatGPT, hackers across the globe can have fluency in English, and we can expect to see more sophisticated AI-generated phishing scams,” says Amit Roy of Eviden.
One of the major risks is the growing number of copycat AI chatbot applications, which can increase cybersecurity risks. And because ChatGPT is an online service accessed through web browsers, data breaches are also a concern.
“Data breaches are a potential risk when using any online service, including ChatGPT. You can’t download Chat GPT, so you must access it through web browsers. In that context, a data breach could occur if an unauthorized party gains access to your conversation logs, user information, or other sensitive data,” says Tarek Abbas of Palo Alto Networks.
Best Practices for Using ChatGPT Securely
OpenAI has implemented several security controls to protect users’ data and privacy and to ensure the overall security. However, users and enterprises should also adopt security best practices to minimize any security risks while using ChatGPT.
“While there are currently no specific regulations that directly govern ChatGPT or similar generative AI tools, enterprises should adopt existing data protection and privacy regulations such as GDPR or other local government regulations for safe and secure use of AI,” says Amit Roy of Eviden.
Establish a comprehensive AI usage policy
Arun Chandrasekaran, Distinguished VP, Analyst, Gartner, says, “User education and training is important so that ChatGPT is used for what it is good at and in a safe manner.”
“This should set out clear guidelines on how AI tools like ChatGPT are to be used within the organization to prevent misuse,” says Tarek KUZBARI of HUMAN.
“It’s vital to educate users about the potential risks posed by AI-generated phishing or social engineering attacks. This includes training to recognise such attacks and understand the importance of verifying suspicious emails or requests,” says Karl Lankford of BeyondTrust.
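The training Lankford describes boils down to teaching users to spot indicators of a suspicious message. As a purely illustrative aid, the sketch below flags two classic indicators: urgency language and links whose domain does not match the sender’s. The phrase list and rules are assumptions for demonstration; as the article notes, fluent AI-generated phishing can evade exactly these heuristics, which is why human verification habits still matter.

```python
# Illustrative heuristic check for common phishing indicators in an email.
# Deliberately simplistic: fluent AI-generated phishing can pass these
# checks, so this supplements (not replaces) user verification training.
import re

URGENCY_PHRASES = ("act now", "verify your account", "password expires", "urgent")

def phishing_indicators(subject: str, body: str, sender_domain: str) -> list:
    """Return a list of human-readable findings for a single email."""
    findings = []
    text = (subject + " " + body).lower()
    for phrase in URGENCY_PHRASES:
        if phrase in text:
            findings.append(f"urgency phrase: '{phrase}'")
    # Flag links whose host does not belong to the sender's domain
    for host in re.findall(r"https?://([^/\s]+)", body):
        if not host.endswith(sender_domain):
            findings.append(f"link domain mismatch: {host}")
    return findings

report = phishing_indicators(
    subject="Urgent: verify your account",
    body="Click https://login.example-support.net/reset within 24 hours.",
    sender_domain="example.com",
)
print(report)
```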
Ways to Use ChatGPT Securely
ChatGPT has groundbreaking potential across applications in various industries in today’s digital landscape. However, keeping abreast of the increased risks associated with such advanced technology is of the utmost importance.
“This powerful technology poses new risks but also oﬀers extraordinary opportunities. Balancing the two means treading carefully. A measured approach today can provide the foundations on which further rules can be added in the future. But the time to start building those foundations is now,” says Tarek KUZBARI of HUMAN.
Anything you enter into ChatGPT may be retained and used to train future models. This means confidential information can be exposed, making you prone to confidentiality risks.
Besides confidentiality risks, data privacy and security are at risk if an employee enters sensitive data into ChatGPT. “One concern is the privacy and data exposure risk, where employees upload sensitive corporate information to ChatGPT without proper awareness of the risks involved,” says Sergey Shykevich of Check Point Research.
Using ChatGPT securely comes down to exercising caution with suspicious emails or links related to the Generative AI tool.
“To stay safe, ChatGPT users should exercise caution with suspicious emails or links related to ChatGPT. The usage of copycat chatbots will bring extra security risks, and it is recommended that users always access ChatGPT through the official OpenAI website,” says Tarek Abbas of Palo Alto Networks.
To minimize ChatGPT risks, Karl Lankford of BeyondTrust advises companies to consider Privileged Access Management (PAM), user education, regular patching and updates, and an incident response plan and policy. According to Lankford, organisations need a generative AI policy that makes clear to employees what should and should not be done with generative AI services such as ChatGPT.
Organizations should implement robust AI usage policies, educate employees on AI-generated threats, employ AI monitoring tools to detect anomalies, ensure data privacy, and actively participate in AI ethics and regulation discussions. These strategies foster a proactive cybersecurity culture.
According to Martin Holste, CTO Cloud at Trellix, there are a number of things that can be done immediately to confidently use ChatGPT to boost productivity and prevent attacks that leverage AI from achieving their goal:
- Deploy tech on endpoints that can control what data is put into ChatGPT.
- Get visibility into all important business applications and infrastructure. This can be done by collecting activity and audit logs from devices and SaaS applications and making it useful to security staff and AI.
- Give your security staff tools that can leverage AI for generating remediation actions.
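The first of Holste’s recommendations, controlling what data is put into ChatGPT, is essentially a data-loss-prevention check at the endpoint. A minimal sketch of the idea is below: prompts are screened and sensitive matches redacted before anything leaves the machine. The patterns and redaction policy are illustrative assumptions; production DLP tooling is far more sophisticated.

```python
# Minimal sketch of endpoint-side prompt screening: replace sensitive
# matches with placeholders before a prompt is submitted to ChatGPT.
# The patterns below are illustrative, not exhaustive.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace sensitive matches with labelled placeholders."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

clean = redact_prompt("Contact alice@corp.com, key sk-abcdef1234567890ABCD")
print(clean)
```

A stricter policy could block the submission entirely rather than redact, or log the event to the activity and audit trail that the second recommendation calls for.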