The Risks of Using ChatGPT in the Workplace
"With its tireless dedication, vast knowledge, and remarkable adaptability, ChatGPT is the indispensable colleague that transcends time and expertise, empowering teams to achieve their full potential in the digital era.”
If you’re wondering where that quote comes from, it’s a response ChatGPT generated when asked to create a quote about itself.
OpenAI’s ChatGPT, in simple terms, is a general-purpose chatbot that uses artificial intelligence (AI) to understand user prompts and generate human-like responses.
Released on November 30, 2022, ChatGPT gained one million users in five days and over 100 million users in less than two months, making it the fastest-growing consumer app in history. People are talking about it and talking to it, but it’s not all sunshine and rainbows.
Less than four months after it was released, ChatGPT suffered a data breach that highlighted the growing importance of cybersecurity in AI applications.
The ChatGPT Data Breach

On March 20, 2023, OpenAI discovered a cybersecurity issue related to a bug in the Redis client open-source library, redis-py, which OpenAI uses to cache ChatGPT user information for faster recall and access.
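For context, the sketch below shows the general pattern of caching user data with redis-py. It is purely an illustration, not OpenAI’s actual code; the host, key names, and expiry value are placeholders.

```python
# A minimal sketch of caching user info with redis-py for fast recall.
# Illustrative only; hostnames, keys, and TTL are hypothetical.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def cache_user_info(user_id: str, info: dict, ttl_seconds: int = 300) -> None:
    # Store serialized user info under a per-user key with an expiry.
    r.setex(f"user:{user_id}", ttl_seconds, json.dumps(info))

def get_user_info(user_id: str) -> dict | None:
    # Fast recall: check the cache before falling back to the database.
    cached = r.get(f"user:{user_id}")
    return json.loads(cached) if cached else None
```

In the March 2023 incident, the flaw was not in a pattern like this but in the client library itself: per OpenAI’s postmortem, canceled requests could leave a shared connection in a corrupted state, so a subsequent lookup could receive data that belonged to a different user.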
According to OpenAI, the ChatGPT exploit exposed some users’ personal and payment information to other users. Such data included:
- First and last name
- Email address
- Payment address
- Credit card type
- Last four digits of their credit card number
- Credit card expiration date
“[The bug] allowed some users to see titles from another active user’s chat history,” admitted OpenAI. “It’s also possible that the first message of a newly-created conversation was visible in someone else’s chat history if both users were active around the same time.”
This incident underscores the critical importance of robust cybersecurity practices when handling sensitive user data, especially in AI and cloud-based platforms.
What Are the Risks of ChatGPT?

With its user-friendly interface, human-like responses, and the breadth of topics it can handle, ChatGPT is racing past other large language models (LLMs) like Bard by Google and LLaMA by Meta.
However, with so much knowledge and potential at their fingertips, users can’t help but ask crucial questions like:
What are the drawbacks of ChatGPT? And what are the risks of using it in one’s business?
Security and Privacy Issues
According to Security Intelligence by IBM Security, “Whenever you have a popular app or technology, it’s only a matter of time until threat actors target it.”
Depending on what you use them for and how you use them, LLMs in the workplace may require sharing sensitive or confidential information with an external service provider. If you use ChatGPT, for instance, your prompt and all its details are visible to OpenAI, which stores your query and can use it for further development.
From a cybersecurity perspective, this raises concerns. If your employees use an LLM for work-related tasks and cybercriminals start targeting it, your company data could be at risk of getting leaked or made public. Because of these security and privacy risks, major companies like Apple, Samsung, JPMorgan, Bank of America, and Citigroup have banned ChatGPT in the workplace to protect confidential information.
The UK’s National Cyber Security Centre (NCSC) recommends thoroughly understanding your LLM’s terms of use and privacy policy before adding sensitive questions or prompts or allowing your team to use it in the workplace.
Read More: How to Keep Your Data Off The Dark Web
Inaccurate or Unreliable Information
OpenAI’s list of ChatGPT limitations states, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”
ChatGPT and other LLMs generate responses based on patterns and examples from a vast amount of data available on the Internet. While their sources include scientific research, journals, and books, they also include social media posts, fake news, and offensive material, raising concerns around data integrity and the cybersecurity risks associated with AI-generated content.
LLM responses are not always fact-checked and may not produce accurate or reliable information. As a result, LLMs sometimes “hallucinate,” leading to misinformation threats and challenges in maintaining digital trust in automated systems.
According to Zapier, hallucination is a term used to describe instances when AI tools like ChatGPT generate inaccurate or completely made-up responses because “they lack the reasoning to apply logic or consider any factual inconsistencies they're spitting out. In other words, AI will sometimes go off the rails trying to please you.” These hallucinations are a growing concern for cybersecurity professionals, as unchecked AI errors can spread false information and compromise information accuracy in digital environments.
For example, writers and journalists from several agencies were shocked to find their names attached to articles and bylines that were never published; ChatGPT had fabricated the links and citations.
Legal and Ethical Repercussions
Using AI models such as ChatGPT may raise legal, ethical, and cybersecurity concerns, especially when used in the workplace.
ChatGPT may inadvertently generate content that cites incorrect sources, or no sources at all, which could result in data integrity issues, infringe on intellectual property rights, or violate copyright protections.
Another ethical and cyber risk involving ChatGPT and other AI tools is their ability to perpetuate bias and discrimination, both intentionally and unintentionally. This can lead to security vulnerabilities, especially in sensitive organizational environments.
ChatGPT generates content based on the massive amount of training data fed to it. So if that data contains biases, ChatGPT can produce discriminatory responses that affect compliance with cybersecurity and data governance policies.
Unfortunately, unintentional bias isn’t the only way a user can draw toxic content out of ChatGPT. By adjusting a system parameter, users can assign a persona to ChatGPT, as the sketch below illustrates. A recent study shows that, when given a persona, ChatGPT’s toxicity can increase up to six times, heightening AI misuse risks in enterprise cybersecurity settings.
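To make the mechanism concrete, here is a minimal sketch of assigning a persona through the system message in OpenAI’s Python client. The model name and persona text are placeholders, and this is a generic usage example rather than the study’s setup.

```python
# A minimal sketch of steering ChatGPT with a persona via the system
# message. Model name and persona text are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The system message sets tone and behavior for the whole chat.
        {"role": "system", "content": "You are a blunt, sarcastic critic."},
        {"role": "user", "content": "What do you think of our new product?"},
    ],
)
print(response.choices[0].message.content)
```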
The study also shows that the responses ChatGPT generates can vary significantly depending on the subject of the chat. For example, toxicity directed at individuals based on their sexual orientation and gender is 50% higher than toxicity based on their race. These findings highlight the need for AI security audits, robust monitoring, and responsible AI deployment practices in both the public and private sectors.
How to Protect Yourself When Using ChatGPT

While there certainly are risks to using ChatGPT for work-related tasks, AI tools still hold significant potential in shaping the future of business operations.
A study by Microsoft shows that many employees are looking forward to an “AI-employee alliance,” with 70% claiming they would delegate as much work as possible to AI to lessen their workloads.
The same study also shows that business leaders are more interested in using AI to empower their employees than to replace them.
If you’re a business leader who wants to leverage AI tools like ChatGPT to drive efficiency in your workplace, here are several measures you must take to protect personal and corporate data:
Proprietary Code or Algorithms
AI models like ChatGPT have the potential to store any data you enter and disseminate it to other users, which can lead to serious cybersecurity risks, so keep proprietary code and algorithms away from them. As Security Intelligence puts it, “Anything in the chatbot’s memory becomes fair game for other users.”
In April 2023, several employees of Samsung’s semiconductor business copied a few lines of confidential code from their database and pasted it into ChatGPT to fix a bug, optimize code, and summarize meeting notes. By doing so, the employees leaked corporate information and risked exposing the code in the chatbot’s future responses.
In response to the incident, Samsung limited each employee’s prompts to ChatGPT to 1,024 bytes and is considering reinstating its ChatGPT ban.
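A simple guard like the sketch below can enforce such a per-prompt size cap before anything is sent to an external chatbot. It is a hypothetical example, not Samsung’s actual implementation.

```python
MAX_PROMPT_BYTES = 1024  # per-prompt cap, mirroring Samsung's reported limit

def enforce_prompt_limit(prompt: str) -> str:
    # Measure UTF-8 byte length, not character count, since the cap
    # is expressed in bytes.
    size = len(prompt.encode("utf-8"))
    if size > MAX_PROMPT_BYTES:
        raise ValueError(
            f"Prompt is {size} bytes; the allowed maximum is {MAX_PROMPT_BYTES}."
        )
    return prompt
```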
If your company handles proprietary code or algorithms, you may want to learn from Samsung’s experience. Work with a managed IT services provider to develop and enforce robust IT security policies that clearly define how employees can use AI tools while protecting company assets. Leveraging managed cybersecurity solutions can also help monitor and control access to external platforms like ChatGPT, ensuring compliance with internal data protection protocols.
Read More: Are You Sure You’re Cybersecure?
Sensitive Information
Even if your team doesn’t handle top-secret code, you still need to be careful with the data you have access to. Avoid providing LLMs with any sensitive details, such as usernames, passwords, access tokens, or other credentials that, if exposed, could compromise your company’s security or privacy.
Sharing sensitive information on ChatGPT puts your data at risk, and your company could also face privacy or data protection regulation violations.
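One practical safeguard is to scrub obvious credentials before a prompt ever leaves your network. The sketch below is a deliberately simple, hypothetical filter; a production environment would rely on a dedicated data loss prevention (DLP) tool with far more robust detection.

```python
import re

# Hypothetical patterns for common credential formats. A real deployment
# would use a dedicated DLP tool, not a handful of regexes.
SENSITIVE_PATTERNS = {
    "API_KEY": re.compile(r"(?i)\b(?:api[_-]?key|token)\b\s*[:=]\s*[^\s,;]+"),
    "PASSWORD": re.compile(r"(?i)\bpassword\b\s*[:=]\s*[^\s,;]+"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact_prompt(prompt: str) -> str:
    # Replace each match with a labeled placeholder before the prompt
    # is sent to an external LLM.
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact_prompt("Reset failed: password=hunter2, notify admin@corp.com"))
# -> Reset failed: [REDACTED PASSWORD], notify [REDACTED EMAIL]
```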
In March 2023, the Italian Data Protection Authority officially announced a country-wide ban on ChatGPT, arguing that its security and privacy practices infringed the European Union’s General Data Protection Regulation (GDPR), arguably the strictest privacy law in the world.
Avoid putting your sensitive information and company reputation at risk. Familiarize yourself with relevant laws, regulations, and policies on data handling before incorporating AI tools like ChatGPT into the workplace.
Protected Health Information (PHI)
If you’re managing a healthcare organization, you already know that sharing PHI or any other personally identifiable information with ChatGPT is a HIPAA violation. The HIPAA Privacy Rule clearly states the need to restrict access to PHI, and entering such information on ChatGPT could result in significant legal and financial penalties for your company.
Read More: HIPAA Compliance and Your Practice
Stay HIPAA compliant and share PHI only through communication and collaboration tools vetted and designed to maintain the security and privacy of patient data. If you must access and transfer PHI via a unified communications platform, for example, make sure you partner with one that signs a business associate agreement (BAA).
Read More: A HIPAA-Compliant Phone System: What It Is and Why It’s Important
Transform Your Workplace Securely With ER Tech Pros

Whether the world likes it or not, AI-powered tools like ChatGPT are revolutionizing the workplace. If you’re a company leader who wants to future-proof your business and empower your teams, the smart move is to adapt to technology, not shun it.
However, while there are massive benefits to using ChatGPT in your workflow, every new technology comes with potential cybersecurity threats, compliance challenges, and operational risks. Your responsibility as a leader is to keep your company, clients, employees, and society safe even as you transition to more automated operations.
Take careful precautions before adopting the latest groundbreaking technology. Start by seeking the advice of reputable IT and cybersecurity experts. If your company doesn’t have a trusted IT partner, ER Tech Pros is ready to help!
With our strong team of IT, cloud, compliance, and cybersecurity engineers, ER Tech Pros offers comprehensive managed IT services that include risk assessments, policy development, network protection, and 24/7 IT support. We can help you assess your current infrastructure, implement robust cybersecurity protocols, and ensure your business is secure, compliant, and fully equipped for the future of work.
Frequently Asked Questions (FAQs)
What are the cybersecurity risks of using ChatGPT in the workplace?
ChatGPT can pose several cybersecurity risks, including data leakage, phishing attacks, and exposure to malicious prompts. Without proper IT security measures in place, businesses risk compromising sensitive company or client data.
How can managed IT services help reduce ChatGPT risks?
Managed IT services offer expert support in assessing risks, monitoring usage, implementing access controls, and establishing cybersecurity protocols. Partnering with a managed IT provider like ER Tech Pros ensures your AI tools are integrated safely and responsibly.
What IT policies should I have before using ChatGPT in my business?
Companies should implement policies around acceptable use, data sharing, access permissions, and employee training. A managed IT services provider can help create and enforce these cybersecurity and compliance policies.
Is ChatGPT safe for regulated industries like healthcare or finance?
ChatGPT can be used safely in regulated industries, but only with strict oversight and support from a trusted IT partner. Compliance with regulations like HIPAA or GDPR requires secure configurations and ongoing cybersecurity monitoring.
Can ER Tech Pros help with secure ChatGPT implementation?
Yes. ER Tech Pros provides managed IT services, cybersecurity consulting, and AI integration support. We ensure your business uses ChatGPT in a secure, compliant, and future-proof manner.