For those of us in higher education, artificial Intelligence (AI) has become a key focus, largely resulting from the unveiling of ChatGPT in late 2022 and other similar large language models (LLM). These tools have the potential to optimize processes across departments and provide better student learning experiences. While its potential is huge, AI technology is still somewhat raw, and there has not been sufficient time to fully understand the security, reliability, and privacy implications of these new tools. However, there are some known considerations, and colleges and universities should review them when considering the integration of AI, and specifically LLMs, into their business processes.

Cybersecurity Risks Associated with AI and Large Language Models

Some inherent safety issues have been outlined by governmental agencies, think tanks, and cybersecurity companies. The UK’s National Cyber Security Centre outlines serious flaws of LLMs, including:

  • They can get things wrong and “hallucinate” incorrect facts;
  • They can be biased and are often gullible (in responding to leading questions, for example);
  • They require huge compute resources and vast data to train from scratch; and
  • They can be coaxed into creating toxic content and are prone to injection attacks.

Some of the most significant cybersecurity risks in AI include:

  • Use of generative AI to build complex and evolving cyberattacks;
  • Use of AI for automated delivery of malware;
  • Leaking of private user chat history (this already happened with ChatGPT);
  • Data manipulation and data poisoning;
  • Impersonation;
  • Reputational damage through data loss, technology malfunctions, wrong information shared, etc.; and
  • Unintended risks from using AI for things like note-taking and transcribing meetings or phone calls.

Take Note

AI note-taking tools like those provided in leading conference call software are an example of how easy it can be to incorporate AI into our daily routines without thinking about the consequences, including where the information is stored and who can access it. That’s why it’s so important to communicate with your staff about the potential risks and best practices.

How to Reduce the Risks of AI Chatbots and Generative AI

While this technology is new and experimental, there are methods that can reduce the risks associated with its use. Here are my recommendations:

  1. Avoid sharing sensitive institutional data and personal information with AI;
  2. Focus on immutable backups, strong access control technologies, and encryption to ensure that data is not leaked or poisoned;
  3. Train staff to recognize the risks of AI, LLMs, and generative AI, and watch for problems;
  4. Engage with third parties to monitor/audit your use of AI;
  5. Build a strong AI incident response procedure to manage incidents and exercise these procedures;
  6. Leverage AI tools to help manage security and privacy, including threat detection, user monitoring, etc.;
  7. Implement added access control for key AI users, especially those that are interacting with corporate data and autonomous systems;
  8. Publish policies and guidelines on the schoolwide use of AI tools;
  9. Build systems to enforce usage guidelines at your institution; and,
  10. Document and share publicly the use of AI tools and any incidents that arise from AI harm.

Proceed with Caution … and Excitement

AI has a lot of potential for improving processes and student experiences, but it’s important to have policies in place to govern the use of AI using the recommendations mentioned above. Although there are risks, the Collegis Education team is excited about the potential for AI to drive innovation and effectiveness for our partner schools. Let’s start a conversation about using AI to transform your school.

Author: Jason Nairn

Dr. Jason Nairn serves as Collegis Education’s vice president of Information Technology and Security providing IT leadership for several college and university. Prior to joining Collegis, Nairn was vice president of Information Technology and chief information officer for Concordia University in Portland, OR.