The Kapor Foundation’s Grantmaking and Investment Principles for Advancing Responsible AI

Key Issues

Artificial intelligence (AI) has been at the forefront of innovation and embedded in the backdrop of technology for decades. More recently, advancements in AI have revolutionized how we tackle some of the most complex issues facing societies, such as enabling early medical diagnoses, forecasting wildfire paths to inform communities and first responders, and tracking carbon emissions for environmental monitoring. As AI capabilities accelerate and demand grows across sectors and industries, public and private funding for research and development has surged.

In the most recent federal budget, proposed funding for AI initiatives runs into the billions of dollars to support AI integration across several federal agencies, including information technology, defense, commerce, and health and human services. In the private sector, the US continues to outpace other countries in AI investment, at a rate nearly 12 times greater than China, the second-highest investor. In 2025 alone, four major US-based tech companies (Meta, Microsoft, Amazon, and Alphabet) committed $325B toward AI infrastructure. Moreover, the venture capital (VC) sector has fueled the surge in AI optimism, with 42% of US venture funding directed toward AI companies in 2024.

What the Kapor Foundation Commits To

Our current innovation ecosystem concentrates power in a select few. Workers—from civil servants to software engineers to freelance artists—are already being affected or replaced by AI, while startups are incentivized to limit employee headcount. With AI’s potential to automate rote tasks, improve efficiency, and increase productivity, the drive for corporate profit will fundamentally reshape the workforce and will likely exacerbate economic inequality. To combat this, we must shift power away from traditional business models that focus on maximizing profit (with little regard for anything else) and toward models that strengthen and uplift people and communities. This means supporting researchers, academic institutions, startups, smaller tech companies, nonprofits, policy advocates, and communities themselves—especially those with proximity to the challenges they are tackling.

In particular, we need to encourage participation from those who are most impacted by AI’s harms but often have little involvement in the creation of AI-powered technologies. This means enabling more Black, Latine, and Native people, disabled people, low-income people, LGBTQ+ people—anyone traditionally pushed to the margins—to meaningfully participate in all aspects of AI design, development, and deployment. Without the full, valued participation of these communities, we will continue to create AI models that are extractive and harmful. In contrast, technology that recognizes, respects, and reflects the diversity of lived experiences will lead to more innovative, narrowly scoped solutions that honor people’s autonomy, respect our Earth’s finite resources, and uplift communities.

Numerous frameworks and resources recognize the sociotechnical nature of AI: acknowledging the impact society and technology have on each other, rather than viewing them separately. Alarmingly, AI innovations are often built on the intellectual property of human artists and creators without their consent. The uncredited and uncompensated roles that humans play in AI model development require an understanding that goes beyond technical solutions alone. Common responsible AI principles, such as privacy, safety and security, transparency and explainability, accountability, fairness, and non-discrimination, can be strengthened by integrating knowledge of social norms, human behavior, context, ethics, legal rights, and more into the AI development process. Expanding our lens invites people beyond those who identify as technologists to participate in the tech ecosystem and to take ownership of responsible AI design, development, and deployment. This interdisciplinary collaboration between technical and non-technical experts, organizations, and community members can leverage domain-specific knowledge to create more responsible AI solutions.

It doesn’t matter if we use AI to advance just causes if the process of creating that AI in the first place was unjust. These unjust processes include issues such as the environmental cost of training and using AI, the exploitation of international data workers, the use of copyrighted material in training AI models, and the lack of diversity on AI teams.

At the same time, AI’s purposes, outcomes, and impacts must also be socially just. AI used for harm (e.g., military purposes, predictive policing) is not responsible, regardless of how closely it aligns with a company’s technical definition of “responsible,” such as being safe, private, and algorithmically unbiased. To ensure just outcomes, there must also be processes for appealing decisions and redressing harms that result from the use of AI, as well as a commitment to continual improvement on environmental, civil, and worker rights.

While we have long supported expanding access to computer science education, which extends to AI literacy and education, we have done so through a justice-oriented approach that encourages students and educators to think critically about technology and its impact on society. We want students to become critical consumers and creators of technology—including having the option not to engage. Similarly, promoting AI literacy and education in the workforce will require individuals not only to understand how the technology works but also to make equitable decisions about AI adoption. As industries rapidly evolve in the face of AI integration, we must invest in upskilling and reskilling efforts that encourage individuals to critically assess the AI technologies that shape our futures. Finally, AI literacy and education must also extend to consumers at large. With expanded use of AI across industries, personal data has been commodified across a variety of data systems to increase corporate earnings. Data privacy and ownership are becoming increasingly challenging, which will require building individuals’ agency and understanding of their data rights in this new digital era.

These varied AI literacy and education efforts will require organizations, advocates, and funders to share a critical lens in how they promote AI literacy.

We recognize that this work cannot be done in isolation and that collective action amplifies our voice—especially as the government moves to block AI regulation and tech companies roll back guidelines and commitments to safe and responsible AI. Tech innovations affect people far beyond the industry, which means a broader swath of people must demand accountability. We must act now before AI monopolies further consolidate their power.

We hold a unique privilege: our work across the tech ecosystem has given us connections throughout the country, including with tech companies, foundations, investors, policy advocates, researchers, educators, startup founders, nonprofits, and more. We will leverage these connections to mobilize collective action to create and sustain AI that supports a just and equitable future.

Conclusion

The trajectory of AI deployment that Big Tech arrogantly keeps pushing is often overhyped and is not inevitable. On the contrary, as more people join those who have been sounding the alarm for years and leverage our collective voice to prioritize people over profits, Big Tech’s influence grows ever more precarious. This is why we can’t take our collective foot off the pedal. It is imperative for everyone across the tech ecosystem, especially those holding power, to join the rising chorus while it is still possible to avoid the consequences of AI of our own making. In the coming months, we’ll share examples of how these principles can be applied across the tech ecosystem.

Access Full Report