
The Risks of Using ChatGPT in the Workplace

June 1, 2023

“With its tireless dedication, vast knowledge, and remarkable adaptability, ChatGPT is the indispensable colleague that transcends time and expertise, empowering teams to achieve their full potential in the digital era.”


If you’re wondering where that quote comes from, it’s a response ChatGPT generated when asked to create a quote about itself.


OpenAI’s ChatGPT, in simple terms, is a general-purpose chatbot that uses artificial intelligence (AI) to understand user prompts and generate human-like responses.


Released on November 30, 2022, ChatGPT gained one million users in five days and over 100 million users in less than two months, making it the fastest-growing consumer app in history.


People are talking about it and talking to it, but it’s not all sunshine and rainbows.


Less than four months after it was released, ChatGPT suffered a data breach.

The ChatGPT Data Breach


On March 20, 2023, OpenAI discovered a bug in the Redis client open-source library, redis-py, which OpenAI uses to cache ChatGPT user information for faster recall and access.


According to OpenAI, the bug exposed some users’ personal and payment information to other users. The exposed data included:



  • First and last name
  • Email address
  • Payment address
  • Credit card type
  • Last four digits of their credit card number
  • Credit card expiration date


“[The bug] allowed some users to see titles from another active user’s chat history,” admitted OpenAI. “It’s also possible that the first message of a newly-created conversation was visible in someone else’s chat history if both users were active around the same time.”
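The failure mode behind the incident is easier to grasp with a simplified model. The redis-py bug meant that a request canceled mid-flight could leave its unread response on a shared connection, so the next user on that connection received someone else’s data. The sketch below is a hypothetical toy model of that class of bug, not OpenAI’s actual code:

```python
from collections import deque

class Connection:
    """A toy pooled connection that queues responses. A canceled request
    leaves its response behind for the next caller to read."""
    def __init__(self):
        self._responses = deque()

    def send(self, query: str, canceled: bool = False):
        # The server always produces a response for the query...
        self._responses.append(f"data-for:{query}")
        if canceled:
            # ...but a canceled client never reads it, leaving the
            # stale response at the front of the queue.
            return None
        return self._responses.popleft()

conn = Connection()  # one pooled connection shared across users
conn.send("alice-profile", canceled=True)   # Alice cancels her request
leaked = conn.send("bob-profile")           # Bob reuses the connection
print(leaked)  # prints 'data-for:alice-profile' -- Bob sees Alice's data
```

Once the queue is off by one, every later caller keeps receiving the previous user’s response, which is why a single canceled request could cascade into many users seeing each other’s chat titles.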

What Are the Risks of ChatGPT?


With its user-friendly interface, human-like responses, and the breadth of topics it can handle, ChatGPT is racing past other large language models (LLMs) like Bard by Google and LLaMA by Meta.


However, with so much knowledge and potential at their fingertips, users can’t help but ask crucial questions like: What are the drawbacks of ChatGPT? And what are the risks of using it in one’s business?

Security and Privacy Issues

According to Security Intelligence by IBM Security, “Whenever you have a popular app or technology, it’s only a matter of time until threat actors target it.”


Depending on what you use it for, an LLM in the workplace may involve sharing sensitive or confidential information with an external service provider. If you use ChatGPT, your prompt and all its details are visible to OpenAI, which may store your query and use it for further development.


If your employees use an LLM for work-related tasks and cybercriminals start targeting it, your company data could be at risk of getting leaked or made public. Because of these security and privacy risks, major companies like Apple, Samsung, JPMorgan, Bank of America, and Citigroup have banned ChatGPT in the workplace to protect confidential information.


The UK’s National Cyber Security Centre (NCSC) recommends thoroughly understanding your LLM’s terms of use and privacy policy before entering sensitive questions or prompts into it or allowing your team to use it in the workplace.


Read More: How to Keep Your Data Off The Dark Web

Inaccurate or Unreliable Information

OpenAI’s list of ChatGPT limitations states, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”


ChatGPT and other LLMs generate responses based on patterns and examples from a vast amount of data available on the Internet. While their sources include scientific research, journals, and books, they also include social media posts, fake news, and offensive material.


LLM responses are not always fact-checked and may not always produce accurate or reliable information. As a result, LLMs sometimes “hallucinate.”


According to Zapier, hallucination is a term used to describe instances when AI tools like ChatGPT generate inaccurate or completely made-up responses because “they lack the reasoning to apply logic or consider any factual inconsistencies they're spitting out. In other words, AI will sometimes go off the rails trying to please you.”


For example, writers and journalists from several agencies were shocked to find their names attached to articles and bylines that were never published—ChatGPT had fabricated the links and citations.

Legal and Ethical Repercussions

Using AI models such as ChatGPT may raise legal and ethical concerns, especially when used in the workplace.


ChatGPT may inadvertently generate content and cite incorrect sources—or not cite any sources at all—and infringe on intellectual property rights or copyrights.


Another ethical risk involving ChatGPT and other AI tools is their ability to perpetuate bias and discrimination, both intentionally and unintentionally.


ChatGPT generates content based on the massive amount of training data fed to it. So if that data contains biases, ChatGPT can produce discriminatory responses.


Unfortunately, unintentional bias isn’t the only way a user can bring out toxic content from ChatGPT. By adjusting a system parameter, users can assign a persona to ChatGPT. A recent study shows that, when given a persona, ChatGPT’s toxicity can increase up to six times.


The study also shows that the responses ChatGPT generates can vary significantly depending on the subject of the chat. For example, toxicity directed at individuals based on their sexual orientation or gender was found to be about 50% higher than toxicity based on race.

How to Protect Yourself When Using ChatGPT


While there certainly are risks to using ChatGPT for work-related tasks, AI tools still hold significant potential in shaping the future of business operations.


A study by Microsoft shows that many employees are looking forward to an “AI-employee alliance,” with 70% claiming they would delegate as much work as possible to AI to lessen their workloads.


The same study also shows that business leaders are more interested in using AI to empower their employees than to replace them.


If you’re a business leader who wants to leverage AI tools like ChatGPT to drive efficiency in your workplace, here are several measures you must take to protect personal and corporate data:

Proprietary Code or Algorithms

AI models like ChatGPT have the potential to store any data you enter and disseminate it to other users, so keep proprietary code and algorithms away from it. As Security Intelligence puts it, “Anything in the chatbot’s memory becomes fair game for other users.”


In April 2023, several employees of Samsung’s semiconductor business copied a few lines of confidential code from their database and pasted it into ChatGPT to fix a bug, optimize code, and summarize meeting notes. By doing so, the employees leaked corporate information, risking exposure of the code in the chatbot’s future responses.


In response to the incident, Samsung has limited each employee’s prompt to ChatGPT to 1,024 bytes and is considering reinstating its ChatGPT ban.


If your company handles proprietary code or algorithms, you may want to learn from Samsung’s experience and establish corporate IT policies that clearly state how your team should (and should not) use AI models like ChatGPT.
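A technical control like Samsung’s byte cap can be enforced in code before a prompt is ever submitted. Here is a minimal sketch of such a guard; the 1,024-byte figure matches Samsung’s reported limit, while the function name and check are illustrative:

```python
MAX_PROMPT_BYTES = 1024  # Samsung's reported per-prompt limit

def check_prompt(prompt: str, limit: int = MAX_PROMPT_BYTES) -> bool:
    """Return True if the prompt fits within the byte budget.

    Measured in UTF-8 bytes, not characters, since non-ASCII text
    (accents, CJK, emoji) takes more than one byte per character.
    """
    return len(prompt.encode("utf-8")) <= limit

assert check_prompt("Summarize these meeting notes.")  # well under budget
assert not check_prompt("x" * 2000)                    # 2,000 bytes: rejected
```

A wrapper like this would sit in front of whatever internal tooling forwards prompts to the API, rejecting or truncating oversized submissions according to policy.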


Read More: Are You Sure You’re Cybersecure?

Sensitive Information

Even if your team doesn’t handle proprietary code, you still need to be careful with the data you have access to. Avoid providing LLMs with sensitive details, such as usernames, passwords, access tokens, or other credentials that, if exposed, could compromise your company’s security or privacy.


Sharing sensitive information on ChatGPT puts your data at risk, and your company could also face privacy or data protection regulation violations.
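One practical safeguard is to scrub obvious credentials from a prompt before it leaves your network. Below is a rough, hypothetical sketch using simple pattern matching; the patterns are illustrative only, and a real deployment would rely on a dedicated data-loss-prevention tool with far broader coverage:

```python
import re

# Illustrative patterns only; a real DLP tool covers many more cases.
PATTERNS = [
    (re.compile(r"(password\s*[:=]\s*)[^\s,]+", re.IGNORECASE), r"\1[REDACTED]"),
    (re.compile(r"(api[_-]?key\s*[:=]\s*)[^\s,]+", re.IGNORECASE), r"\1[REDACTED]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED_EMAIL]"),
]

def scrub(prompt: str) -> str:
    """Replace likely credentials and email addresses before a prompt is sent."""
    for pattern, replacement in PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(scrub("Fix this config: password=hunter2, contact admin@corp.com"))
# prints: Fix this config: password=[REDACTED], contact [REDACTED_EMAIL]
```

Even a crude filter like this catches careless copy-paste mistakes, though it is no substitute for the policies and training discussed above.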


In March 2023, the Italian Data Protection Authority officially announced a country-wide ban on ChatGPT, citing security and privacy concerns that infringed on the European Union’s General Data Protection Regulation (GDPR), arguably the strictest privacy law in the world.


Avoid putting your sensitive information and company reputation at risk. Familiarize yourself with relevant laws, regulations, and policies on data handling before incorporating AI tools like ChatGPT into the workplace.

Protected Health Information (PHI)

If you’re managing a healthcare organization, you already know that sharing PHI or any other personally identifiable information with ChatGPT is a HIPAA violation. The HIPAA Privacy Rule clearly states the need to restrict access to PHI, and entering such information on ChatGPT could result in significant legal and financial penalties for your company.


Read More: HIPAA Compliance and Your Practice


Stay HIPAA compliant and share PHI only through communication and collaboration tools vetted and designed to maintain the security and privacy of patient data. If you must access and transfer PHI via a unified communications platform, for example, make sure you partner with one that signs a business associate agreement (BAA).


Read More: A HIPAA-Compliant Phone System: What It is and Why It’s Important

Transform Your Workplace Securely With ER Tech Pros


Whether the world likes it or not, AI-powered tools like ChatGPT are revolutionizing the workplace. If you’re a company leader who wants to future-proof your business and empower your teams, the smart move is to adapt to technology, not shun it.


However, while there are massive benefits to using ChatGPT in your workflow, every new technology comes with risks and issues. Your responsibility as a leader is to ensure your company, clients, employees, and society are safe even as you transition into more automated operations.


Take careful precautions before adopting the latest groundbreaking technology. Start by seeking the IT and cybersecurity advice of reputable IT experts. If your company doesn’t have a trusted IT partner, ER Tech Pros is ready to help!


With our strong team of IT, cloud, compliance, and cybersecurity engineers, ER Tech Pros can help you assess your current technology, develop and implement necessary IT policies, protect your devices, and ensure your business is fully equipped for the future.


