Understanding the risks of AI tools: Protecting your data and devices at Ohio University
Artificial intelligence (AI) tools continue to evolve quickly, offering new ways to draft content, analyze information and streamline daily tasks.
While these tools can be incredibly helpful, it’s important to understand the risks involved when sharing information with them – especially as we work to safeguard Ohio University data and maintain compliance with established security standards.
Why it matters
Publicly available AI tools, such as ChatGPT, Google Gemini, Perplexity and others, are operated by third-party vendors. When you paste text, upload a file or allow these tools access to your device, your information may be stored on external servers, reused to train future models, or processed outside Ohio’s legal jurisdiction.
Ohio University’s Secure Use of Artificial Intelligence (AI) Tools Standard states that unapproved AI tools cannot be held accountable for institutional data governance or security requirements, creating potential organizational, legal and regulatory risks.
Key risks to be aware of
1. Data exposure
Anything entered into a public AI system may be:
- Stored on external servers
- Used to improve the vendor’s model
- Shared with subcontractors
- Exposed during a data breach
This is especially important for regulated or sensitive data. The OHIO standard prohibits entering information such as FERPA-protected student records, HIPAA data, export-controlled research or proprietary university information into public AI tools.
2. Loss of institutional control
Once data is entered into an AI tool that hasn’t been vetted through Ohio University processes, the University has no authority over how it is retained, processed or shared. Public AI tools may be hosted outside Ohio or the U.S., which further complicates compliance and oversight.
3. Inaccurate or misleading output
AI systems can generate incorrect or fabricated information. OHIO’s AI cybersecurity guidance stresses that data produced by unapproved tools should not be assumed to be factual and must always be verified before use in university work.
4. Device and privacy concerns
Some AI browser extensions or apps may request permissions to view your browsing history, read data on your device or access files. Unvetted tools increase the risk of data leakage or malware.
Safer ways to use AI at OHIO
Ohio University encourages the thoughtful and secure use of AI tools while ensuring the protection of institutional and personal data. Here are the recommended best practices:
- Use the protected version of Microsoft Copilot when working with sensitive or internal University information. OHIO’s enterprise version provides commercial data protection and does not use your inputs to train AI models. Access it at https://copilot.microsoft.com, sign in with your OHIO credentials, and confirm you see the green shield indicating you are using the protected version.
- Feel free to use public AI tools for public or low-impact information, such as summarizing publicly available web content or generating general ideas.
- Avoid sharing sensitive data with any AI tool other than the protected version of Microsoft Copilot.
- Request a technology review for any new AI tool you want to use as part of University business.
- Verify all AI-generated content, no matter the source.
- Ask for help any time you’re unsure whether data is appropriate for AI use.
If something goes wrong
If you think you may have unintentionally shared sensitive information with an AI tool, contact the Office of Information Security as soon as possible at security@ohio.edu or 740-566-7233. The team can help assess the situation and recommend next steps.
A simple rule of thumb
If you wouldn’t send the content to an unknown vendor, don’t paste it into a public AI tool.
To learn more, visit: