AI systems deployed at UAB must align with the following core principles. These principles guide the responsible acquisition, development, and use of AI across our academic, research, and administrative missions.
Appropriateness and Benefits
AI systems at UAB must serve a distinct purpose, advance our mission and vision, and enable us to innovate and strive for excellence across our academic, research, and administrative activities.
Human Values
AI systems used at UAB should place humans at the center of deployment; stakeholders and users must learn how to critically evaluate and use the tools, exercise their own decision-making and judgment, and retain control over the use of AI.
Fairness and Non-Discrimination
AI systems approved for use at UAB should promote equity by avoiding algorithmic discrimination, which occurs when automated systems contribute to unjustified different treatment of, or impacts on, individuals or groups of people.
Risk and Safety
Prior to deployment, AI systems at UAB should undergo risk assessment and mitigation to protect our community and intellectual property.
Transparency
The UAB community and those we serve should be informed about the use of AI, the nature of that use, and the outcomes expected.
Privacy and Security
UAB users of AI systems must not share restricted, proprietary, sensitive, or private data with unauthorized systems. UAB users must protect the confidentiality and integrity of UAB data when building or using AI systems.
Liability
Individual users employed by UAB assume personal liability if they agree to the terms of use for any system that is not a UAB system of record.
Accountability
UAB users are accountable for their actions when using AI.