OpenAI’s recent release of ChatGPT has generated significant interest and media buzz, as the application set records as the fastest to reach 1 million users (five days) and then 100 million users (two months).
It’s a powerful tool that can quickly summarize information, answer questions, and provide helpful suggestions in an easily digestible format. We recognize that our employees are interacting with ChatGPT and other AIs to become more productive and efficient, but it’s important to understand the risks associated with their use.
We recommend that all our employees exercise caution when using ChatGPT and avoid sharing sensitive company or personal information through the platform. Additionally, we encourage all our employees to report any concerns related to its use.
More About ChatGPT
ChatGPT is a large language model that was trained with a massive dataset of human language, including web pages, books, and other sources. This allows it to generate coherent and contextually appropriate responses to a wide range of inputs. It is a model that gets better over time based on feedback from a team of reviewers at OpenAI and users of the tool.
However, it is essential to understand that the model is not perfect and can make mistakes. In some cases, these mistakes can lead to incorrect or harmful responses. For example:
- The model may inadvertently generate offensive or biased language, or it may provide inaccurate information that could lead to negative consequences.
- It can generate text for events that never occurred or create links to websites and pages that do not exist.
- It is important to understand that the model was trained on legacy data and therefore has limited knowledge of events that occurred after September 2021. Responses to questions about current events may therefore be misleading or incorrect.
There are also privacy concerns. OpenAI gathers user information, including names and contact details, which could be shared with third parties, including vendors and service providers. Any information sent to the model as input, along with the output it generates, could potentially be used to retrain the model. That data could be at risk if there is a data breach.
Tips for Using ChatGPT
Here are some helpful tips to keep in mind when using ChatGPT and other AI tools:
- Use ChatGPT as a tool to supplement, not replace, human expertise.
  - ChatGPT can be a valuable resource for finding information and answering questions, but it is essential to recognize its limitations.
  - When in doubt, seek advice and guidance from human experts within our organization.
- Be cautious about the information you share with ChatGPT.
  - Avoid sharing sensitive or confidential information such as financial data, personally identifiable information, or intellectual property, as that information may become part of the AI model going forward.
  - If you are unsure whether a piece of information is appropriate to share, consult your supervisor or the IT department before proceeding.
- Verify the information provided by ChatGPT.
  - While it is a powerful tool, it may not always provide accurate or complete information. Remember that an AI model is only as accurate as the material used to train it.
- Stay up to date on security best practices.
  - Keep an eye out for training and awareness programs offered by IT and Risk.
  - Stay informed about the latest threats and vulnerabilities and follow recommended security measures.
- Be prepared to escalate any issues or concerns to the IT department or your supervisor.
  - If you encounter any suspicious activity, unusual behavior, or other potential security risks while using ChatGPT, report it immediately.
- Consider opting out of sharing your data by filling out this Privacy Request provided by OpenAI.
  - The best way to secure your data is to not allow OpenAI to use it.
Links for Reference
OpenAI’s Terms of Use
OpenAI’s Privacy Policy
Introducing ChatGPT
ChatGPT General FAQ
ChatGPT wiki
Building trust in AI is a Shared Responsibility, an article by KPMG (Feb 2023)