We need collective action to document the harms of AI, advance responsible AI policies and accountability mechanisms, and invest in responsible AI solutions. Read our full statement below.
Allison Scott, Ph.D.
CEO
Kapor Foundation

Patrick Armstrong
VP of Technology Policy and Government Affairs
Kapor Center Advocacy

Lili Gangas
Chief Technology Community Officer
Kapor Foundation
A New Era of Tech Accountability: Why AI Guardrails are Urgently Needed to Protect Youth and Foster Innovation
When tech companies are allowed to operate without guardrails or accountability, communities are harmed, public trust erodes, and we miss the opportunity to accelerate innovations that could be harnessed to improve people’s lives.
Despite years of advocates, researchers, parents, and whistleblowers raising alarm bells about harms and demanding action, companies have largely refused to change their policies, practices, and algorithms. Most recently, they have disbanded teams focused on ethics, trust, and safety, and spent millions of dollars lobbying against any regulation of their technologies.

Meanwhile, public trust in Big Tech companies and products has eroded significantly. The overwhelming majority of Americans disapprove of Big Tech CEOs, confidence in Big Tech firms has declined, and Americans are increasingly supportive of government intervention. Countries around the world, like Australia and France, have already moved to ban social media for children outright, with many more countries considering bans. Parents in the United States have played a significant role in advocating for child online safety bills in Congress, but to no avail. Several states have taken action by passing their own restrictions, while fighting efforts by Big Tech to win federal preemption that would strip away state-level protections.

The social media companies at the heart of these legal battles are now key players in AI development. They have provided a model for the AI industry to follow, including adopting anti-regulation stances and refusing to implement practices to protect children. AI companies appear content to pursue the same approach social media companies did, opting to “move fast and break things” by accelerating AI deployment to young people at all costs and despite credible risks.

Some important data points to highlight:
- Two-thirds of teens have used AI chatbots, and Black and Latino youth are more likely to use chatbots than their peers. U.S. adults are far more concerned about AI technologies than hopeful about their promise.
- Red flags were raised when two teens died by suicide and their parents filed lawsuits against OpenAI and Character.AI. A wave of additional lawsuits has since been filed against AI companies to hold them accountable for their chatbots’ role in teen suicide and addiction.
- Character.AI and Google agreed in January 2026 to settle lawsuits alleging their AI chatbots contributed to mental health crises and suicides among young people. Snap and TikTok also settled ahead of trial at the beginning of the year, with thousands of cases from teens, parents, and attorneys general still unresolved.
- In August 2025, a bipartisan coalition of 44 state attorneys general sent a formal letter to Google, Meta, and OpenAI expressing grave concerns about the safety of children using AI chatbot technologies.
- There are over 95 chatbot-specific bills under consideration across 34 states and at the federal level.
This trend is noteworthy. The same wave of public outrage, litigation, and regulatory action that eventually came for social media is already impacting AI, and we have the opportunity to get it right this time by keeping up the pressure for policies to protect young people. Getting it right will require action on three fronts:
- Organizing and Advocacy: We need robust, well-resourced advocacy efforts for identifying harms of AI, raising awareness, and advancing policies or other accountability mechanisms. Grassroots organizations, civil rights groups, coalitions of parents, and whistleblowers have mobilized and fought for vulnerable populations; we need to equip them with the resources to continue working with and advocating on behalf of communities.
- Policy Change: We need government officials who will be champions at the local, state, and federal levels to enact policies that establish real guardrails, ensure safety, and give people the confidence to adopt and benefit from AI for good. We cannot allow tech lobbyists to limit progress, and we must invest more heavily in pursuing policy priorities that benefit people and protect kids.
- Investing in Responsible AI Development: We need to drive investments in technology solutions that adopt principles for responsible, ethical, and equitable innovation, understanding that responsible AI investments can be both profitable and beneficial to society.
We all have a role to play. The time to act is now.
The Kapor Foundation and Kapor Center Advocacy would like to specifically thank the youth and parent advocates, grassroots organizers, scholars, journalists, and legal experts who have worked tirelessly for many years to push for greater tech safety and accountability. Your efforts have been central to these wins and to building a more equitable tech sector.

Allison Scott, Ph.D., CEO, Kapor Foundation
Patrick Armstrong, VP of Tech Policy and Government Affairs, Kapor Center Advocacy
Lili Gangas, Chief Technology Community Officer, Kapor Foundation
Learn more about Kapor Foundation’s Responsible AI Principles and Responsible AI and Tech Justice Guide for K-12.