Guiding Principles for Responsible AI
UAB's approach to generative AI is grounded in shared values and institutional responsibility. These principles guide how AI is explored, adopted, and used across the enterprise, serving as guideposts that help the community move forward thoughtfully as AI technologies continue to evolve.
Safety
AI tools must prioritize the safety of students, patients, employees, researchers, and the community.
Ethics
AI use must reflect UAB's commitment to ethical conduct, integrity, and responsible decision-making.
Transparency
Users should communicate how and when AI is being used, consistent with UAB guidance.
Privacy & Security
AI applications must protect sensitive data and comply with UAB's data protection and security policies.
Accessibility
AI should support accessibility for all members of the UAB community.
Accountability
Human judgment remains essential; employees are responsible for decisions informed by AI.
Collaboration
AI activity should be approached through shared leadership and partnerships across academic, administrative, clinical, research, and operational units. The initiative supports coordination, not replacement, of existing responsibilities.
Innovation
AI should foster creative, forward-thinking solutions that advance UAB's academic, clinical, research, and operational excellence.
Stewardship
UAB will approach AI investments with fiscal discipline and institutional coordination. AI initiatives should demonstrate measurable value, align with enterprise priorities, avoid unnecessary duplication, and leverage shared capabilities when appropriate.
Workforce Development & Adaptation
Artificial intelligence will change how work is performed across the enterprise. UAB should prepare its workforce through education, reskilling opportunities, and practical support to ensure faculty, staff, clinicians, and researchers can thrive in an evolving environment. Responsible AI adoption requires parallel investment in human capability.