Staff Guidelines

At the University of Richmond, we recognize the transformative potential of generative AI technologies in enhancing administrative efficiencies, fostering creative approaches to problem solving, and facilitating the impactful work that our staff engage in daily. As we navigate the evolving landscape of AI-assisted tools, our administrative staff will need to evaluate the benefits and limitations of these technologies.

Utilizing generative AI (including AI-based chatbots, image and audio generators, video transcription services, and other use cases) can streamline operational tasks and aid in content creation. However, it is imperative to approach these tools with a critical eye, ensuring alignment with our institutional values regarding responsible use of technology and data, academic integrity, and the preservation of the personalized experience that defines a University of Richmond education. As staff members, we must ensure these tools complement rather than compromise the quality and integrity of our work at the University.

Evaluating Generative AI Tools

When sourcing AI tools for administrative work, we should consider the technical, security and ethical ramifications of using Generative AI.

Data Privacy and Security

When using generative AI in our work at the university, it's essential to adhere strictly to data security and privacy standards outlined by the University to protect sensitive and confidential information. Publicly available generative AI tools, such as those provided by OpenAI, Google and Microsoft, use the data that we provide them to train further iterations of their models. As such, never send information classified as confidential, restricted, or official-use to one of these services. This type of information includes but is not limited to:

  • Personally Identifiable Information (PII)
  • Health records
  • Financial data
  • Student records
  • Employment records

Always ensure that only information classified as public is utilized with generative AI tools. Public information, as defined by the University's classification, is information that has been made available for general public use and does not infringe on privacy rights or data protection regulations.
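As one illustration, a simple pre-submission screen can catch obvious identifiers before text is pasted into a public generative AI tool. The patterns and the `redact` helper below are a hypothetical sketch, not a University-provided utility, and pattern matching alone is no substitute for following the data classification policy and reviewing text yourself:

```python
import re

# Illustrative patterns for a few common identifiers; these are examples only
# and will not catch every form of PII (names, addresses, IDs, etc.).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely identifiers with placeholder tags before sending
    text to a public generative AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

draft = "Contact Jane at jane.doe@richmond.edu or 804-555-1234 about her request."
print(redact(draft))
# Note that the name "Jane" still appears in the output: automated
# screening must always be paired with human review.
```

A check like this is a last line of defense, not a classification decision; when in doubt about whether data is public, consult the data security policy before using any external tool.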

Data Security Policy
For comprehensive guidelines regarding data security at the University of Richmond, refer to the official data security policy and the external data transfer policy. Always ensure your use of generative AI falls within the boundaries defined within these policies.

Use Effectively

Learning to use AI tools effectively requires an understanding of the strengths of the technology, as well as how to interact with it in order to get the best possible output for the intended use case. The University provides a LinkedIn Learning license for all staff members; this service offers a number of courses that can increase AI literacy.

There are many resources on this platform that deal with AI use in specific areas, such as project management, HR, research and writing, finance and accounting, and more. These resources can help our staff learn how to use generative AI effectively and responsibly before incorporating these tools into our work.

Understand Limitations

Just as understanding the strengths of this technology is crucial to effective use, understanding the limitations of generative AI is equally important when evaluating tools. While Large Language Models (LLMs), like ChatGPT, may appear to be sentient, they are simply probability-based models that return the most likely output based on the user's prompt within the context of their training.

  • These tools have no self-awareness or inherent understanding of the "truth" of their responses.
  • Generative AI always carries a risk of "hallucinating" (responding with false information), and different tools may be more likely to hallucinate on certain tasks. Always verify the information generated by AI tools.
  • Remember that we are all ultimately responsible for the accuracy, quality, and content of our work even if it is generated in part by an AI tool.

Use Reputable Resources

When evaluating tools for use in your workflow, ensure that the tools and the companies behind them are reputable, legitimate, and responsible. Remember that only publicly available data may be submitted to these tools if the University does not have a contract and the product has not been vetted by IT Governance. If you have questions or concerns about a particular tool, contact Information Services for consultation and assistance.

Boatwright Memorial Library LibGuide

For more information on sourcing and using Generative AI tools, visit the Boatwright Memorial Library’s guide on Generative Artificial Intelligence.

Ethical Considerations

There are a number of ethical issues to consider when incorporating generative AI tools into your official work for the University. Below is a list of just some of the issues to consider before using a GenAI tool.

Respecting Intellectual Property

Generative AI's ability to produce content that mirrors human creativity poses significant questions about intellectual property rights. When using AI to generate text, images, or any form of creative content, it is crucial to:

  • Acknowledge the source of AI-generated content where appropriate.
  • Ensure that the use of AI-generated content does not infringe on existing copyrights or plagiarize the work of others.

Ensuring Fair Use and Non-Discrimination

AI systems reflect the historical biases that are present in their training data. As such, they can perpetuate or even exacerbate these biases in their outputs. In order to uphold our values of diversity, equity, inclusion, and belonging:

  • Regularly review and audit AI tools to identify and mitigate biases.
  • Ensure that AI implementations do not discriminate against any individual or group, particularly in sensitive areas such as recruitment, evaluation, and student support services.
  • Be transparent about the use of AI, allowing individuals to understand when/how AI affects decisions that impact them.

Privacy and Consent

The use of AI technologies must be balanced with the need to protect personal privacy and secure consent where necessary. This includes:

  • Being transparent with students, staff, and faculty about the use of AI technologies and the data they collect and process.
  • Ensuring that data used in AI applications is collected ethically, with proper consent, and used in ways that respect the privacy and rights of individuals.

Continuous Learning and Adaptation

The field of AI is rapidly evolving, necessitating ongoing education and adaptation. If you are going to use Generative AI in your work, commit to ongoing learning and professional development to keep pace with advancements in AI and its ethical implications.

By considering these ethical issues around the use of generative AI, the University remains committed to using this technology in a manner that is respectful, responsible, and reflective of our commitment to excellence and integrity.

Boatwright Memorial Library LibGuide

For more information on the ethical considerations of using Generative AI, including topics not covered here such as environmental and human labor costs, visit the Boatwright Memorial Library’s guide on Generative Artificial Intelligence.

While Generative AI brings a set of new considerations to our work, it also has the potential to reduce certain stressors and improve our ability to work creatively, effectively, and collaboratively. We invite you to be a part of these conversations on campus by exploring this website, collaborating with IS and others in projects involving generative AI or attending one of the events or workshops that the university is hosting in this area.

Important Questions to Ask

  • Data Privacy and Security
    • "Have I ensured that no sensitive information is inadvertently included in the data I provide to generative AI tools?"
    • "Am I taking the necessary steps to anonymize or de-identify any data that could potentially be linked back to an individual before using it with generative AI tools?"
    • "Have I verified that the data I am using or considering for use with AI tools is classified as public and not confidential or restricted?"
  • Use Effectively
    • "Have I identified specific tasks or processes within my role where AI tools could enhance efficiency or quality?"
    • "Do I regularly evaluate the outcomes provided by AI tools to ensure they meet the expected standards of quality and accuracy for my projects?"
    • "Have I practiced prompt engineering to effectively communicate with AI tools, and do I understand how different prompts can yield different outcomes?"
  • Understanding Limitations
    • "What processes do I have in place to ensure the content generated by AI tools is reviewed and validated before it is finalized or disseminated?"
    • "Do I understand the importance of maintaining a critical eye towards the limitations of AI, especially in contexts that require nuanced understanding or ethical considerations?"
    • "Am I aware of the types of tasks or questions that are more prone to generating inaccurate or 'hallucinated' responses from AI tools?"
  • Evaluating Reputable Resources
    • "Have I sought out reviews, case studies, or testimonials from other reputable institutions or users regarding their experience with the AI tools I’m interested in?"
    • "Do I understand the data privacy policies of the AI tools I’m considering, and how they align with the University’s standards for data protection?"
    • "Am I prepared to switch tools or seek alternatives if my current AI tool no longer meets the necessary standards of reputability and responsibility?"