The Kapor Foundation’s Grantmaking and Investment Principles for Advancing Responsible AI
Key Issues
Artificial intelligence (AI) has driven innovation and run in the background of everyday technology for decades. More recently, advancements in AI have revolutionized how we tackle some of the most complex issues facing society, such as enabling earlier medical diagnoses, forecasting wildfire paths to inform communities and first responders, and tracking carbon emissions for environmental monitoring. As AI capabilities accelerate and demand grows across sectors and industries, public and private funding for research and development has surged.
In the most recent federal budget, proposed funding for AI initiatives runs into the billions of dollars to support AI integration across several federal agencies, including those responsible for information technology, defense, commerce, and health and human services. In the private sector, the US continues to outpace other countries in AI investment, at a rate nearly 12 times greater than China, the second-highest investor. In 2025 alone, four major US-based tech companies – Meta, Microsoft, Amazon, and Alphabet – committed $325B towards AI infrastructure. Moreover, the venture capital (VC) sector has fueled the surge in AI optimism, with 42% of US venture funding directed towards AI companies in 2024.

Simultaneously, the unfettered growth of AI adoption has raised alarm about the risks inherent in these systems, including their contributions to unemployment, mental health harms, environmental injustice, and the erosion of digital civil rights, as well as the disproportionate harm they inflict on marginalized communities. From the increasing use of surveillance technology in policing and protest monitoring to its use across the federal government to eliminate billions in scientific and social safety net funding, AI has already caused irreparable damage to the most marginalized communities. Furthermore, these harms are likely to be exacerbated by proposed federal restrictions on state AI regulation. Large tech companies have repeatedly prioritized capitalist interests, scale, and speed at all costs, demonstrating that responsible AI development is unattainable without systemic changes in incentives and consequences. This is especially evident as many companies roll back their responsible AI commitments, underscoring how easily voluntary efforts can be sidelined.
Concerningly, we are also collectively witnessing a consolidation of power in the AI race. Investments are coming from a shrinking number of large VC firms. Last year, nine firms accounted for over half of all capital raised by VCs, and just four firms accounted for over one-third of it. This makes it harder for smaller VCs with less capital to take risks on more context-specific solutions or to explore new use cases for integrating AI. As a result, a small number of high-profile AI companies (e.g., OpenAI) receive the vast majority of funding, while smaller, more agile ventures are excluded. A funding environment that favors only established companies will entrench power imbalances, stifle innovation, and raise the risk of harm.
For AI to have a positive impact on society, it needs to be created by diverse groups of people committed to that impact. This is why, now more than ever, we need to leverage the influence the entire tech ecosystem can have on ensuring the responsible design, development, and deployment of AI. As an organization that works across this entire ecosystem, we are sharing the commitments that shape the AI landscape we are willing to support and invest in, in the hope that they help others, especially investors, funders, and founders, join us in creating a future in which AI benefits everyone.
The Kapor Foundation Commits to
- Supporting AI Initiatives That Shift Power
- Promoting Sociotechnical Approaches And Solutions
- Elevating The Process As Well As The Outcomes
- Encouraging AI Literacy And Education That Adopts A Critical Lens
- Leveraging Our Social Capital To Collectively Demand Accountability
Supporting AI Initiatives That Shift Power
Our current innovation ecosystem concentrates power in a select few. Workers, from civil servants to software engineers to freelance artists, are already being impacted or replaced by AI, while startups are being incentivized to limit employee headcount. Given AI’s potential to automate rote tasks, improve efficiencies, and increase productivity, the drive for corporate profit will fundamentally reshape the workforce and will likely exacerbate economic inequality. To combat this, we must shift power away from traditional business models that focus on maximizing profit (with little regard for anything else) and towards models that strengthen and uplift people and communities. This means supporting researchers, academic institutions, startups, smaller tech companies, nonprofits, policy advocates, and communities themselves, especially those with proximity to the challenges they are tackling.
In particular, we need to encourage participation from those who are most impacted by AI’s harms but often have little involvement in the creation of AI-powered technologies. This means enabling more Black, Latine, and Native people, disabled people, low-income people, LGBTQ+ people—anyone traditionally pushed to the margins—to meaningfully participate in all aspects of AI design, development, and deployment. Without the full, valued participation of these communities, we will continue to create AI models that are extractive and harmful. In contrast, technology that recognizes, respects, and reflects the diversity of lived experiences will lead to more innovative, narrowly-scoped solutions that honor people’s autonomy, respect our Earth’s finite resources, and uplift communities.
Promoting Sociotechnical Approaches And Solutions
Researchers and practitioners have increasingly recognized the sociotechnical nature of AI: society and technology shape each other and cannot be understood in isolation. Alarmingly, AI innovations are often built on the intellectual property of human artists and creators without their consent. The uncredited and uncompensated roles that humans play in AI model development demand responses that go beyond technical solutions alone. Common responsible AI principles, such as privacy, safety and security, transparency and explainability, accountability, fairness, and non-discrimination, can be strengthened by integrating knowledge of social norms, human behavior, context, ethics, legal rights, and more into the AI development process. Expanding our lens invites people beyond those who identify as technologists to participate in the tech ecosystem and take ownership of responsible AI design, development, and deployment. This interdisciplinary collaboration among technical and non-technical experts, organizations, and community members can leverage domain-specific knowledge to create more responsible AI solutions.
Elevating The Process As Well As The Outcomes
It doesn’t matter if we use AI to advance just causes if the process of creating that AI was unjust in the first place. Unjust processes include the environmental cost of training and using AI, the exploitation of international data workers, the use of copyrighted material in training AI models, and the lack of diversity on AI teams.
That said, AI’s purposes, outcomes, and impacts must also be socially just. AI used for harm (e.g., military purposes, predictive policing) is not responsible, regardless of how closely it aligns with a company’s technical definition of “responsible,” such as being safe, private, and algorithmically unbiased. To ensure just outcomes, there must also be processes for appealing decisions and redressing harms that result from the use of AI, as well as a commitment to continual improvement on environmental, civil, and worker rights.
Encouraging AI Literacy And Education That Adopts A Critical Lens
While we have long supported expanding access to computer science education, which now extends to AI literacy and education, we have done so through a justice-oriented approach that encourages students and educators to think critically about technology and its impact on society. We want students to become critical consumers and creators of technology, including having the option not to engage. Similarly, promoting AI literacy and education in the workforce will require individuals not only to understand how the technology works but also to have the knowledge to make equitable decisions about AI adoption. As industries rapidly evolve in the face of AI integration, we must invest in upskilling and reskilling efforts that encourage individuals to critically assess the AI technologies that shape our futures. Finally, AI literacy and education must also extend to consumers at large. With the expanded use of AI across industries, personal data has been commodified across a variety of data systems to increase corporate earnings. Protecting data privacy and ownership is becoming increasingly challenging, which will require building individuals’ agency and understanding of their data rights in this new digital era.
These varied efforts will require organizations, advocates, and funders to share a critical lens in how they promote AI literacy and education.
Conclusion
The trajectory of AI deployment that Big Tech arrogantly keeps pushing is often overhyped, and it is not inevitable. On the contrary, as more people join those who have been sounding the alarm for years and leverage our collective voice to prioritize people over profits, Big Tech’s influence grows ever more precarious. This is why we can’t take our collective foot off the pedal. It is imperative for everyone across the tech ecosystem, especially those holding power, to join the rising chorus while it is still possible to avoid the consequences of our own AI making. In the coming months, we’ll share examples of how these principles can be applied across the tech ecosystem.