Wed. Dec 6th, 2023

Although ChatGPT is not designed for criminal purposes, cybercriminals are using the AI for illegal activities, such as creating fraudulent emails or text messages to deceive unsuspecting people. In light of this issue, Hillstone Networks has sought to clarify some aspects of ChatGPT and its relation to cybercrime. "It is important to note that any tool, including ChatGPT, can be used for both legitimate and illegitimate purposes. The responsibility lies with the users of the tool to use it in an ethical and legal manner," comments Marcelo Palazzo, Commercial Director for the European region at Hillstone Networks.

ChatGPT: a behavior-based tool

OpenAI, creator of ChatGPT, presented GPT-4, an update that improves reasoning ability and reduces the chances of responding to inappropriate requests by 82%. Even so, the chatbot's potential use to create malicious attacks has caused concern in the cybersecurity market, raising questions about whether cybercriminals could use this AI to carry out malware attacks.

Hillstone iSource XDR can detect behavior-based threats and therefore identify whether a file is doing things it was not designed to do, even when tools such as ChatGPT were involved.

To curb this problem, current cybersecurity tools developed by Hillstone Networks, such as iSource XDR, protect enterprise infrastructure and data through threat intelligence and visibility services. Hillstone iSource XDR can detect threats based on behavior; that is, the technology makes it possible to identify whether a file is performing tasks for which it was not designed. With this type of technology, companies can be confident that they have the right tools to deal with malicious attacks, even when malicious users have made use of tools such as ChatGPT or the new GPT-4.
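To illustrate the general idea behind behavior-based detection (this is a toy sketch, not Hillstone's actual iSource XDR implementation; all names and the behavior policy are hypothetical): instead of matching known malware signatures, the detector compares what a process actually does against a profile of what that kind of process is expected to do.

```python
# Toy illustration of behavior-based detection. The policy, function
# names, and action labels are invented for this example and do not
# reflect any vendor's real product or API.

# Hypothetical policy: actions a document-viewer process is expected
# to perform under normal operation.
EXPECTED_BEHAVIORS = {
    "document_viewer": {"open_file", "read_file", "render_ui"},
}

def flag_unexpected_behaviors(process_type, observed_actions):
    """Return the set of observed actions outside the process's profile."""
    allowed = EXPECTED_BEHAVIORS.get(process_type, set())
    return set(observed_actions) - allowed

# A document viewer that suddenly spawns a shell and opens network
# connections is acting outside its profile -- a behavioral red flag,
# regardless of whether the payload was written by a human or an AI.
suspicious = flag_unexpected_behaviors(
    "document_viewer",
    ["open_file", "read_file", "spawn_shell", "network_connect"],
)
print(sorted(suspicious))
```

The point of the approach is that it does not matter who (or what) authored the malicious code: a file behaving outside its expected profile is flagged either way, which is why behavior-based detection remains effective against AI-assisted malware.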

By Alvaro Rivers
