ChatGPT is an artificial-intelligence tool that has seen enormous adoption lately. It can produce clear, fresh information on almost any topic you ask about, which helps lay the groundwork for projects and sparks creativity when you are short on ideas.
But because artificial intelligence is such a new field, the rules around how it may be used, the ethical and legal limits on that use, and even the risks it poses to society are not yet settled, and problems have already begun to surface. Not long ago, for example, fake images of a well-known artist circulated that appeared to show her naked but were in fact AI-generated, a clear violation of her consent and her right to her own image. A number of other cases have raised similar questions about the boundaries of this tool: copyright, its relationship to reality, the danger of fake news, and so on.
And now a new risk has emerged that has less to do with ethics or the law: dangerous malware created with ChatGPT itself, exploiting one of the most popular and widely used AIs.
Malware that takes advantage of artificial intelligence
Tom’s Hardware, a site that examines new technology and weighs its pros and cons, reports a risk in the well-known ChatGPT. Its researchers found that the tool has an impressive ability to create polymorphic malware: code that mutates so it cannot be detected early enough to mount the right defenses from the start, making it effectively undetectable and genuinely dangerous.
Even endpoint detection and response (EDR) tools struggle to catch the malware ChatGPT produces, because the code can change its shape with each iteration, making it very hard to find. How is it made? With a handful of prompts that attackers include in their code-generation requests. Several cybersecurity firms, including CyberArk, have shown in recent studies how easy it is to produce these scripts with ChatGPT.
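To see why mutating code defeats signature-based detection, here is a deliberately harmless sketch (not taken from any of the studies above; the payload is just a text string and the keys are arbitrary values chosen for illustration). It shows the core idea behind polymorphism: re-encoding the same payload with a different key produces files whose bytes differ, even though the decoded behavior is identical.

```python
def xor_encode(payload: bytes, key: int) -> bytes:
    """XOR every byte with a one-byte key; XOR is its own inverse,
    so applying the same key twice restores the original bytes."""
    return bytes(b ^ key for b in payload)

# Harmless stand-in for a payload: just a text string.
payload = b"print('hello')"

# Two "variants" of the same payload, each encoded with a different key.
variant_a = xor_encode(payload, 0x5A)
variant_b = xor_encode(payload, 0xC3)

# The variants have different bytes on disk, so a scanner looking for
# a fixed signature sees two unrelated files...
print(variant_a != variant_b)  # True

# ...yet both decode back to the identical payload and thus behave
# identically when run.
print(xor_encode(variant_a, 0x5A) == payload)  # True
print(xor_encode(variant_b, 0xC3) == payload)  # True
```

Real polymorphic malware uses far more elaborate mutation than a single XOR, but the defensive lesson is the same: detection has to focus on behavior, not on byte patterns.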
Since its first release, however, ChatGPT’s language models have included filters designed to block malware generation. In fact, GPT-4 filters up to 80% more harmful or negative content than GPT-3 did, a sign that its makers want to encourage responsible use and curb malicious damage. Still, human ingenuity thrives with open access to the Internet and its tools, and some people manage to get around these guardrails and produce software that harms users. In response, it is important to protect our computers and cell phones so the damage stays limited. Any new tool has flaws while it is still being tested in everyone’s hands, so it pays to stay alert while the protection methods are still being improved.