After ChatGPT’s First Data Breach, Companies Are Sceptical About Relying on It

ChatGPT, the latest trend to take the world by storm, suffered its first data breach shortly after its release. OpenAI, the company behind the AI-powered chatbot, confirmed the breach in May this year, and a security firm subsequently reported an actively exploited vulnerability affecting certain components of the platform.

The cause, a bug in an open-source library, is one of the most common openings for cyber-attackers, since vulnerability management is a frequently underrated item among a company’s priorities. Although a disciplined patch-management strategy keeps systems updated as new vulnerabilities emerge and reduces the chance of an exploit spreading to other systems, only 17% of UK businesses perform cyber vulnerability audits.
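To make the idea of a vulnerability audit concrete, here is a minimal sketch of what an automated dependency check can look like in Python. The ADVISORIES table and its patched versions are purely illustrative assumptions, not real advisories; in practice a team would rely on an established tool such as pip-audit or a commercial scanner.

```python
from importlib import metadata

# Hypothetical advisory feed: package name -> first patched version.
# These entries are illustrative assumptions, not real advisories.
ADVISORIES = {
    "redis": (4, 5, 4),
    "requests": (2, 31, 0),
}

def parse(version: str) -> tuple:
    """Turn '4.5.3' into (4, 5, 3) for a simple ordered comparison."""
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def audit() -> None:
    """Flag installed packages that sit below their patched version."""
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        patched = ADVISORIES.get(name)
        if patched and parse(dist.version) < patched:
            print(f"{name} {dist.version} is below patched release {patched}")

if __name__ == "__main__":
    audit()
```

Running a check like this on a schedule is what turns patching from a one-off project into an ongoing process.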

But why did ChatGPT fall victim to these well-known cyber security pitfalls, despite being one of the most innovative systems around, with impressive data-handling capacity? Let’s find out.

How did ChatGPT get hacked?

The hack occurred on the 20th of March, during a nine-hour window in which 1.2% of ChatGPT Plus subscribers had their data exposed. Given that ChatGPT had one of the fastest-growing user bases ever, with around 100 million active users, even that small percentage represented a concerning number of people.

As for the data exposed, it was confirmed that users could see other subscribers’ names, email addresses, billing addresses and even the last four digits of their credit card numbers. The breach resulted from a bug in the platform’s open-source code that confused the system, so that information from a cancelled request was delivered to the next user making a similar one.
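A simplified illustration of that failure mode is sketched below. This is not OpenAI’s actual code, only a toy model of how a cancelled request can leave its reply sitting on a shared connection for the next caller to read.

```python
from collections import deque

class SharedConnection:
    """Toy stand-in for one pooled connection to a cache server."""

    def __init__(self) -> None:
        self._replies = deque()

    def send(self, user: str, command: str) -> None:
        # The "server" answers every request; the reply waits in a queue
        # until someone reads it off this connection.
        self._replies.append(f"billing data for {user}")

    def read_reply(self) -> str:
        return self._replies.popleft()

conn = SharedConnection()

# User A sends a request, then cancels before reading the reply,
# leaving the response sitting on the shared connection.
conn.send("alice", "GET subscription")

# User B reuses the same connection for a similar request and reads
# the stale reply that was meant for user A.
conn.send("bob", "GET subscription")
print(conn.read_reply())  # -> "billing data for alice"
```

The fix for this class of bug is to discard a connection whenever a request on it is cancelled mid-flight, rather than returning it to the pool with an unread reply.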

OpenAI’s response to the breach

The bug appears to have stemmed from the Redis client library, which caches data so that not every request has to hit the main database; after a request was cancelled, a cached reply could be served to the wrong user. OpenAI stated that it would not abandon Redis, as the service had contributed significantly to the system’s development.
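For context, the sketch below shows the cache-aside pattern such a client library typically supports: check Redis first, and only fall back to the main database on a miss. The connection details and the load_from_database helper are assumptions for illustration, not OpenAI’s setup.

```python
import redis  # third-party redis-py package

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_from_database(user_id: str) -> str:
    # Hypothetical stand-in for the real database query.
    return f"profile of {user_id}"

def get_user_profile(user_id: str) -> str:
    """Check the Redis cache first; fall back to the database on a miss."""
    cached = r.get(f"profile:{user_id}")
    if cached is not None:
        return cached                             # cache hit
    profile = load_from_database(user_id)         # cache miss
    r.set(f"profile:{user_id}", profile, ex=300)  # keep for 5 minutes
    return profile
```

This is why dropping Redis was never really on the table: the cache absorbs most read traffic, and the speed benefit outweighs the cost of hardening the client library.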

To improve its services, OpenAI ran additional tests to fix the bug and refined its scaling approach to reduce the chance of further errors. The company also created a bug bounty program, encouraging people to discover and report bugs in the system in exchange for rewards of up to $20,000 for exceptional discoveries.

Despite the efforts, ChatGPT was banned in Italy

After this incident, more countries and businesses grew sceptical of the system’s cybersecurity posture. ChatGPT had already stirred debate online over its lack of regulation and ethical safeguards. For example, the system was recently accused of making false claims, exposing it to a potential defamation lawsuit. According to https://www.how-to-sue.co.uk/, ChatGPT could also be sued for financial losses caused by the company’s negligence or mistakes, so the platform is far from perfect.

After these events, Italy decided to ban the AI-based technology: its privacy watchdog was already suspicious of OpenAI’s use of personal data and of the absence of age filters, which exposes minors to certain risks.

ChatGPT is used to spread malware

Given its popularity and broad range of use cases, it was inevitable that ChatGPT would become a vehicle for illicit activity. One growing trend, for example, is hackers using the ChatGPT brand to spread malware on Facebook, Instagram and WhatsApp. Meta found that attackers disguise malware as ChatGPT tools in order to deliver it to users’ devices.

This strategy makes it easier for malicious browser extensions to get into official web stores, since they pose as ChatGPT-based tools. They are then promoted on social media, tricking people into installing the malware. These campaigns also target Windows-based browsers and can compromise Gmail and Microsoft Outlook accounts. In response, Meta released new security policies and tools to help users and businesses improve their cyber resilience.

ChatGPT used to mimic media

A less discussed danger of AI is its ability to produce text, audio or video that passes as authentic, spreading fake news and agitating social media users. In one recent case, fraudsters used AI systems to mimic the voice of a company’s CEO and request an urgent release of funds.

There was also the fake Pentagon explosion photo that spread panic among internet users. Although the AI-generated photograph was quickly identified as a fake, given the disparities between the picture and the actual building, many officials and verified news accounts had already shared it without checking.

In the wrong hands, ChatGPT can be a destructive tool. Although the technology is new and users are still getting the hang of it, the risks grow daily as hackers become more knowledgeable and news organisations find it harder to separate genuine stories from fabricated ones.

How can you avoid falling into AI-based traps? What about businesses?

ChatGPT and OpenAI are well-developed systems whose digital security is continuously improving. Even so, every user must take precautions to avoid exposing personal information or downloading malware. One crucial habit is staying up to date with the latest news about ChatGPT’s potential risks; after the Pentagon incident, news organisations are likely to take a second look at a story before publishing it.

Companies must also understand the risks they expose themselves to, because hackers look for weak systems and gaps in digital safety, which many businesses still struggle with. As always, small businesses are the main target in such cases.

Protecting yourself from AI guessing your passwords, or from malware reaching your systems, follows the same rules as regular data protection. Every user should create complex passwords, change them regularly, store them securely and always opt for multi-factor authentication on their devices. At the same time, learning about new ways to protect your systems is vital for facing the challenges of modern technology.
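As a small, concrete example, the sketch below uses Python’s standard library to generate a strong random password; a password manager achieves the same goal with less effort, and generate_password is simply an illustrative name.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

The secrets module draws from a cryptographically secure source, which is what makes such passwords resistant to guessing.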

Bottom line

After ChatGPT’s first data breach, and revelations that its brand is being used to spread malware on social media platforms, fewer companies are comfortable relying on the product. Following its ban in Italy, ChatGPT received bug fixes, while OpenAI devised a security plan to avoid similar incidents in the future. Even so, users must take every precaution when working with the AI-based tool.
