Responsible AI: Principles for Advancing a More Equitable Innovation Future
Responsible AI Principles
Utilize a sociotechnical framework to identify challenges and meaningful solutions.
We must clearly define the types of societal problems we aim to solve, evaluate whether AI can and should be deployed as a tool to address these challenges, and consider the broader societal dynamics in which the AI tool is situated.
Incorporate prosocial design principles and continually assess broader societal impacts.
We must apply prosocial and design justice principles: design solutions with societal benefit at the forefront, center the communities most impacted by AI in the design process, conduct regular audits of impact, and ensure the entire lifecycle of AI development achieves its intended social good.
Support AI initiatives that shift power.
We must support new and more inclusive business models, compensation and incentive structures, and investment strategies, while building power across researchers, academic institutions, nonprofits and grassroots organizations, and policy advocates to raise concerns and propose solutions that address the harms of AI.
Promote critical AI literacy and education across society.
We must expand access to computing education for all students, while advancing critical AI literacies among innovators, workers, consumers, and advocates to ensure they are empowered to make decisions about AI’s development, adoption and use, and impact.
Build collective mechanisms for governance and accountability.
We must build the capacity of a broad coalition of journalists, research scholars, whistleblowers, policymakers, and advocacy groups to shape the future of technological innovation through responsible regulation and accountability mechanisms.