"With its tireless dedication, vast knowledge, and remarkable adaptability, ChatGPT is the indispensable colleague that transcends time and expertise, empowering teams to achieve their full potential in the digital era.”
If you’re wondering where that quote comes from, it’s a response ChatGPT generated when asked to create a quote about itself.
OpenAI’s ChatGPT, in simple terms, is a general-purpose chatbot that uses artificial intelligence (AI) to understand user prompts and generate human-like responses.
Released on November 30, 2022, ChatGPT gained one million users in five days and over 100 million users in less than two months, making it the fastest-growing consumer app in history.
People are talking about it and talking to it, but it’s not all sunshine and rainbows.
Less than four months after it was released, ChatGPT suffered a data breach.
On March 20, 2023, OpenAI discovered a bug in the Redis client open-source library, redis-py, which OpenAI uses to cache ChatGPT user information for faster recall and access.
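The bug class behind the incident is worth understanding: a canceled request can leave an unconsumed reply on a pooled connection, so the next user who reuses that connection receives someone else’s data. Below is a deliberately simplified, hypothetical model of that failure mode (not OpenAI’s or redis-py’s actual code), sketched in Python:

```python
from collections import deque

class PooledConnection:
    """A toy connection: the server's replies queue up in send order."""
    def __init__(self):
        self._replies = deque()

    def send(self, user, query):
        # The server eventually answers with this user's data.
        self._replies.append(f"data for {user}")

    def recv(self):
        return self._replies.popleft()

conn = PooledConnection()

# Normal flow: each send is paired with a recv.
conn.send("alice", "GET profile")
print(conn.recv())  # data for alice

# Bug class: alice's request is canceled after send() but before recv(),
# and the connection is returned to the pool with her reply still queued.
conn.send("alice", "GET profile")  # alice cancels; reply never consumed
conn.send("bob", "GET profile")    # bob reuses the pooled connection
print(conn.recv())                 # data for alice -- bob sees her data
```

The real fix in redis-py involved discarding connections left in an inconsistent state after an interrupted request, which is why validating connection state on checkout matters in any pooling layer.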
According to OpenAI, the ChatGPT exploit exposed some users’ personal and payment information to other users, including first and last names, email addresses, payment addresses, and the last four digits and expiration dates of credit cards.
“[The bug] allowed some users to see titles from another active user’s chat history,” admitted OpenAI. “It’s also possible that the first message of a newly-created conversation was visible in someone else’s chat history if both users were active around the same time.”
With its user-friendly interface, human-like responses, and the breadth of topics it can handle, ChatGPT is racing past other large language models (LLMs) like Bard by Google and LLaMA by Meta.
However, with so much knowledge and potential at their fingertips, users can’t help but ask crucial questions like:
What are the drawbacks of ChatGPT? And what are the risks of using it in one’s business?
According to Security Intelligence by IBM Security, “Whenever you have a popular app or technology, it’s only a matter of time until threat actors target it.”
Depending on what you use it for and how you use it, using LLMs in the workplace may involve sharing sensitive or confidential information with an external service provider. If you use ChatGPT, your prompt and all its details are visible to OpenAI, and the provider may store your query and use it for further model development.
If your employees use an LLM for work-related tasks and cybercriminals start targeting it, your company data could be at risk of getting leaked or made public. Because of these security and privacy risks, major companies like Apple, Samsung, JPMorgan, Bank of America, and Citigroup have
banned ChatGPT in the workplace to protect confidential information.
The UK’s National Cyber Security Centre (NCSC) recommends thoroughly understanding your LLM’s terms of use and privacy policy before entering sensitive questions or prompts or allowing your team to use it in the workplace.
Read More:
How to Keep Your Data Off The Dark Web
OpenAI’s list of ChatGPT limitations states, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”
ChatGPT and other LLMs generate responses based on patterns and examples from a vast amount of data available on the Internet. While their sources include scientific research, journals, and books, they also include social media posts, fake news, and offensive material.
LLM responses are not always fact-checked and may not always produce accurate or reliable information. As a result, LLMs sometimes “hallucinate.”
According to Zapier, hallucination is a term used to describe instances when AI tools like ChatGPT generate inaccurate or completely made-up responses because “they lack the reasoning to apply logic or consider any factual inconsistencies they're spitting out. In other words, AI will sometimes go off the rails trying to please you.”
For example, writers and journalists from several agencies were shocked to find their names attached to articles and bylines that were never published: ChatGPT had fabricated the links and citations.
Using AI models such as ChatGPT may raise legal and ethical concerns, especially when used in the workplace.
ChatGPT may inadvertently generate content and cite incorrect sources—or not cite any sources at all—and infringe on intellectual property rights or copyrights.
Another ethical risk involving ChatGPT and other AI tools is their ability to perpetuate bias and discrimination, both intentionally and unintentionally.
ChatGPT generates content based on the massive amount of training data fed to it. So if that data contains biases, ChatGPT can produce discriminatory responses.
Unfortunately, unintentional bias isn’t the only way users can elicit toxic content from ChatGPT. By adjusting a system parameter, users can assign a persona to ChatGPT. A recent study shows that, when given a persona, ChatGPT’s toxicity can increase up to six times.
The study also shows that the responses ChatGPT generates can vary significantly depending on the subject of the chat. For example, toxicity directed at an individual based on their sexual orientation or gender is 50% higher than toxicity based on their race.
While there certainly are risks to using ChatGPT for work-related tasks, AI tools still hold significant potential in shaping the future of business operations.
A study by Microsoft shows that many employees are looking forward to an “AI-employee alliance,” with 70% claiming they would delegate as much work as possible to AI to lessen their workloads.
The same study also shows that business leaders are more interested in using AI to empower their employees than to replace them.
If you’re a business leader who wants to leverage AI tools like ChatGPT to drive efficiency in your workplace, here are several measures you must take to protect personal and corporate data:
AI models like ChatGPT have the potential to store any data you enter and disseminate it to other users, so keep proprietary code and algorithms away from it. As Security Intelligence puts it, “Anything in the chatbot’s memory becomes fair game for other users.”
In April 2023, several employees of Samsung’s semiconductor business copied a few lines of confidential code from their database and pasted it into ChatGPT to fix a bug, optimize code, and summarize meeting notes. By doing so, the employees leaked corporate information and risked exposing the code in the chatbot’s future responses.
In response to the incident, Samsung limited each employee’s ChatGPT prompts to 1,024 bytes and is considering reinstating its ChatGPT ban.
If your company handles proprietary code or algorithms, you may want to learn from Samsung’s experience and establish corporate IT policies that clearly state how your team should (and should not) use AI models like ChatGPT.
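A policy like Samsung’s byte cap is straightforward to enforce in code before a prompt ever leaves the network. The sketch below is our own illustration (the function name and error handling are assumptions, not any vendor’s API), using the 1,024-byte figure reported above:

```python
MAX_PROMPT_BYTES = 1024  # the per-prompt cap Samsung reportedly imposed

def check_prompt(prompt: str) -> str:
    """Reject prompts whose UTF-8 encoding exceeds the policy cap.

    Byte length, not character count, is what matters: non-ASCII
    characters encode to multiple bytes in UTF-8.
    """
    size = len(prompt.encode("utf-8"))
    if size > MAX_PROMPT_BYTES:
        raise ValueError(
            f"Prompt is {size} bytes; policy allows {MAX_PROMPT_BYTES}."
        )
    return prompt

check_prompt("Summarize this meeting note.")  # passes the gate
# check_prompt("x" * 2000)                    # would raise ValueError
```

A gate like this would typically sit in a corporate proxy or browser extension, so the check cannot be skipped by individual employees.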
Read More:
Are You Sure You’re Cybersecure?
Even if your team doesn’t handle top-secret code, you still need to be careful about the data you have access to. Avoid providing LLMs with any sensitive details, such as usernames, passwords, access tokens, or other credentials that, if exposed, could compromise your company’s security or privacy.
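One practical safeguard is to scrub obvious credential patterns from text before it is sent to any external service. The following is a minimal sketch, not a production filter: the patterns and function name are our own assumptions, and a real deployment would need a far broader pattern list and dedicated secret-scanning tooling:

```python
import re

# Hypothetical patterns; real secret scanners cover many more formats.
SECRET_PATTERNS = [
    (re.compile(r"(?i)(password|passwd|pwd)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
    (re.compile(r"(?i)(api[_-]?key|access[_-]?token)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
]

def redact(text: str) -> str:
    """Replace credential-like patterns before text leaves the company."""
    for pattern, replacement in SECRET_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(redact("My api_key=sk-test-123 is failing"))
# My api_key=[REDACTED] is failing
```

Regex-based redaction only catches secrets that follow predictable formats, which is why it complements, rather than replaces, the usage policies discussed above.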
Sharing sensitive information on ChatGPT puts your data at risk, and your company could also face privacy or data protection regulation violations.
In March 2023, the Italian Data Protection Authority officially announced a country-wide ban on ChatGPT over concerns that it infringed the European Union’s General Data Protection Regulation (GDPR), arguably the strictest privacy law in the world.
Avoid putting your sensitive information and company reputation at risk. Familiarize yourself with relevant laws, regulations, and policies on data handling before incorporating AI tools like ChatGPT into the workplace.
If you’re managing a healthcare organization, you already know that sharing protected health information (PHI) or any other personally identifiable information with ChatGPT is a HIPAA violation. The HIPAA Privacy Rule clearly states the need to restrict access to PHI, and entering such information on ChatGPT could result in significant legal and financial penalties for your company.
Read More:
HIPAA Compliance and Your Practice
Stay HIPAA compliant and share PHI only through communication and collaboration tools vetted and designed to maintain the security and privacy of patient data. If you must access and transfer PHI via a unified communications platform, for example, make sure you partner with one that signs a business associate agreement (BAA).
Read More:
A HIPAA-Compliant Phone System: What It is and Why It’s Important
Whether the world likes it or not, AI-powered tools like ChatGPT are revolutionizing the workplace. If you’re a company leader who wants to future-proof their business and empower their teams, the smart move is to adapt to technology, not shun it.
However, while there are massive benefits to using ChatGPT in your workflow, every new technology comes with risks and issues. Your responsibility as a leader is to ensure your company, clients, employees, and society are safe even as you transition into more automated operations.
Take careful precautions before adopting the latest groundbreaking technology. Start by seeking the IT and cybersecurity advice of reputable IT experts. If your company doesn’t have a trusted IT partner, ER Tech Pros is ready to help!
With our strong team of IT, cloud, compliance, and cybersecurity engineers, ER Tech Pros can help you assess your current technology, develop and implement necessary IT policies, protect your devices, and ensure your business is fully equipped for the future.
ER Tech Pros is a managed service provider (MSP) that specializes in catering to the IT needs of businesses across the globe. We have offices in Sacramento and the Greater Fresno area.
We use our cutting-edge technology, extensive experience, and global team of technology experts to ensure your IT network is in its most secure and optimal state.
We focus on your IT so you can focus on growing your company.