Although many AI services claim not to store the information sent to them, several incidents have been reported, especially involving ChatGPT, in which sensitive data was exposed through vulnerabilities in the way these systems are built. Compounding the issue, Check Point Research has reported, and offered a solution to, a new vulnerability in the Large Language Models (LLMs) used in ChatGPT, Google Bard, Microsoft Bing Chat and other generative AI services. “If companies do not have effective security for these types of AI applications and services, their private data could be compromised and become part of the responses,” explains Eusebio Nieva, technical director of Check Point Software for Spain and Portugal.
The use of AI in software development is increasingly common because of the ease and speed these tools offer. However, they can also be an inadvertent source of data breaches.
The vulnerability is present in leading generative AI applications from OpenAI (ChatGPT), Google and Microsoft; Check Point Research has helped prevent sensitive data leaks for the tens of millions of users who rely on these tools.
The combination of careless users and the vast amount of information shared creates an opportunity for cybercriminals looking to obtain sensitive data such as credit card numbers and logs of queries made in these chats. As part of its solution, Check Point Software is bringing businesses a URL filtering tool to identify these generative AI websites, with a new category added to its suite of traffic management controls. In addition, Check Point Software’s firewall and firewall-as-a-service solutions also include data loss prevention (DLP). This allows network security administrators to block specific types of data (software code, personally identifiable information, sensitive information, etc.) from being uploaded when generative AI applications are in use. Security measures can thus be enabled to protect against misuse of ChatGPT, Bard or Bing Chat, and configuration takes just a few clicks through Check Point Software’s unified security policy.
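To make the mechanism concrete, the sketch below illustrates the general idea of combining a URL category with DLP-style content checks before an upload is allowed. This is a hypothetical, simplified illustration of the technique described, not Check Point Software’s actual implementation; the domain list, patterns and function names are all assumptions.

```python
import re

# Hypothetical URL-filtering category: domains treated as generative AI services.
GENAI_DOMAINS = {"chat.openai.com", "bard.google.com", "www.bing.com"}

# Simple patterns for data types an administrator might block (illustrative only).
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")       # card-like digit runs
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")  # basic PII: email addresses

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to cut false positives on card-like digit runs."""
    digits = [int(d) for d in re.sub(r"[ -]", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def allow_upload(domain: str, payload: str) -> bool:
    """Block payloads containing sensitive data bound for generative AI sites."""
    if domain not in GENAI_DOMAINS:
        return True  # the policy only applies to the generative AI category
    if any(luhn_valid(m) for m in CARD_RE.findall(payload)):
        return False  # probable credit card number in the prompt
    if EMAIL_RE.search(payload):
        return False  # personally identifiable information in the prompt
    return True
```

In a real DLP product these checks run inline on network traffic and cover many more data types (source code, documents, custom patterns); the point here is only the two-step logic of categorizing the destination, then inspecting the content.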