The security risks of ChatGPT: how to stay safe with AI models
Posted on May 18, 2023 by Rob May
Artificial intelligence (AI) is a game-changing technology that has captured the attention of the public and policymakers alike. With AI-powered Large Language Models (LLMs) such as ChatGPT now available to everyday users, there is no doubt that AI has the potential to revolutionise the way we work, learn, and communicate online. However, like any technology, there are risks associated with the use of AI that users should be aware of in order to stay safe online.
Before diving headfirst into the world of AI, it is essential to evaluate the security and privacy risks involved. Here are a few key factors to consider when utilising AI-powered language models or other AI services, such as OpenAI’s ChatGPT, within the workplace, at home, or for your children and their schoolwork.
Staying safe with ChatGPT
Do you already have a problem?
Many business leaders are unaware that some of these services are already being used within their own business. Staff have begun to use the growing number of free-to-access services like ChatGPT without first obtaining consent or properly understanding the risks to the business. Rather than adopting a head-in-the-sand approach, leaders should broach this subject and agree on both a sensible stance and clear usage policies.
Don’t overshare!
AI models learn, at least in part, from the information users input into the system. Consequently, it is advisable not to entrust an AI model with any information you wish to keep private. Whether it’s your company’s proprietary code or sensitive details about your family, exercising caution is paramount. Treat AI models as powerful tools, but be mindful of the data you share with them.
Prompting isn’t creating!
When using AI models, it’s important to understand that prompting is not the same as creating. While AI can certainly help with tasks like homework or research, keep in mind that copying and pasting results from an AI model is not the same as doing the work yourself!
Users should also be careful when relying on AI models such as ChatGPT for fact-based questions. These models have been known to give incorrect or even bizarre, unsettling responses, known as “hallucinations”, with absolute conviction. Always fact-check any information obtained from AI models before using it!
Privacy
Privacy is a major concern when it comes to AI models. Many experts are worried about how these models scrape the web and what kind of personal information they may be collecting. Remember, your conversations with an AI are not necessarily private, as the company behind the AI can often view your inputs, even if they are anonymised. It is important that you and your staff carefully read the privacy notices of any AI service you use and ensure that you are comfortable with the data it collects.
AI and cybercrime
With the rise of AI, cybercriminals have also started leveraging its capabilities to enhance their illicit activities. They are using AI to craft more deceptive phishing emails and develop more advanced malware. In light of this, it is a good idea to review your cybersecurity basics, such as using strong passwords and enabling multi-factor authentication for all accounts that allow it.
Remember, AI should be seen as a tool that complements your skills, not as a replacement for your own capabilities. Whilst it has the potential to revolutionise the way we work and communicate online, users should be aware of the risks associated with its use.
By following the tips outlined in this article and being vigilant about cybersecurity, users can enjoy the benefits of AI while staying safe online.
Worried about your IT security?
If you’re increasingly worried about your IT security as the world of AI continues to evolve, speak to an expert at ramsac about your cybersecurity concerns. If you are serious about cybersecurity, ramsac are the secure choice™