AI with Integrity:
ASU’s Design Principles for Beneficial and Responsible AI
Read our Design Principles for Beneficial and Responsible AI
ASU is a place of innovation, but also of inclusion. With AI advancing rapidly, we can’t afford to leave ethics behind. These principles provide a compass for daily decision-making, whether you’re building AI tools, integrating them into curricula, or evaluating their impact on students, research, and operations.
They aren’t static rules; they’re living guidelines meant to evolve alongside the technology and our collective understanding of it.
We have a responsibility to create AI experiences that open and amplify possibilities, rather than limiting or closing down pathways and options, in service of respecting human autonomy and empowering individuals and communities. We must always put humans first. We recognize that all data and models are incomplete and flawed, tending to create bias as they replicate formal legacy systems and ways of thinking. By amplifying possibilities, we can mitigate the harm that comes from limiting options or biased pathways, which can reinforce inequities, enable coercion, undermine human dignity, and restrict autonomy and choice. The goal is to create AI that respects the diversity of human experiences and values, reflects the ASU charter and design aspirations, and strives to ensure that AI enhances the human experience rather than diminishes it.
We have a responsibility to bring the best of what technology has to offer to the ASU community while remaining aware of potential risks and keeping pace with the rapid progression of generative AI. This requires us to embrace experimentation and agility in the learning process: determining what works, adopting a mindset of learning fast and learning forward, and sharing knowledge.
Before release, and on an ongoing basis, we must rigorously evaluate AI tools, platforms, models, and experiences for possible impacts and potential harm. We must continually seek to improve transparency and increase observability. Our commitment extends to continuous improvement: actively working to mitigate harm and decisively removing technologies or procedures that fall short of our ethical standards.
We design for equity. That means making sure AI doesn’t unintentionally widen existing disparities or other demographic gaps. We commit to measuring impact, protecting privacy, and prioritizing access for all.
We have a responsibility to develop and deploy AI models and applications with attention to individuals’ rights to privacy and agency in the use of their data, both individually and in aggregate. We should prioritize transparency about the scope, purpose, and risks inherent in disclosing data to ASU, and use the disclosure of privacy terms as an opportunity to educate our stakeholders to be informed and empowered data citizens.
Developing and using generative AI responsibly and beneficially is a shared responsibility between the enterprise and individuals, and that responsibility should be iterative and reciprocal.
The Faculty Ethics Committee on AI Technology supports Enterprise Technology in guiding the ethical development and responsible design of AI in an environment of constant change. This transdisciplinary group of faculty experts advises on the creation of ethical guidelines and guardrails that shape how AI is used across ASU’s technology ecosystem.
The committee’s primary mission is to review, advise, and influence policies and practices related to AI-enabled technologies. Its work focuses on three key goals:
The committee brings together deep expertise in education, law, global futures, film, and business. Their collective insights allow them to assess complex issues such as bias, transparency, data privacy, accountability, and social impact.
Through close collaboration with researchers, administrators, technologists, and external experts, the committee fosters a comprehensive, university-wide approach to AI ethics.
They also play an active role in:
With their guidance, ASU continues to lead with integrity—ensuring our use of AI aligns with our charter, reflects our values, and serves the public good.