Alliance Magazine — GPF 2026 Reflection · April 2026
We Have More Agency Than We Think
Roshan Ghadamian
There was a moment in the AI governance session at GPF 2026 that shifted the register of the whole conversation.
Sheila Warren named something most people in philanthropy have been reluctant to say directly: the trust we extended to social platforms was not rewarded. We believed, collectively, that good actors building powerful technology would exercise good judgment about how it shaped public life. We got evidence to the contrary at extraordinary scale. The harms were not bugs. They were, in many cases, the product.
That history matters for how philanthropy approaches AI — not as a reason for pessimism but as a reason to stop waiting. The instinct to look upward for solutions — federal regulation, platform self-governance, international frameworks — made sense when the assumption was that powerful institutions would eventually correct themselves if given enough time and pressure. That assumption has been tested. The lesson is not that powerful institutions are irredeemably bad. It is that goodwill is not infrastructure.
What replaced the pessimism in that session was something more useful: a reorientation toward agency. The argument — and it landed in the room — was that change in AI governance is not coming from the top down. It is already happening at the local and state level, in the institutions closest to the people affected, driven by practitioners who stopped waiting and started building. Communities deciding what data practices they will accept from vendors. States creating accountability standards that predate federal action. Organisations building the institutional memory to hold technology partners to previous commitments.
This is not a consolation prize for people who couldn't get federal policy passed. It is historically how durable governance gets built. Local and state experimentation produces the proof cases that determine what federal frameworks eventually look like. Philanthropy has disproportionate leverage at this level — the relationships are closer, the institutional memory is deeper, the competition for influence is lower. A foundation funding AI governance infrastructure in three mid-sized cities is not making a small bet. It is potentially writing the template.
The shift Warren was describing — and that the session collectively arrived at — is from governance as heroism to governance as architecture. The heroism model requires exceptional individuals to make exceptional choices under pressure and hope the decisions stick. It doesn't scale and it isn't auditable. The architecture model builds the infrastructure that makes good decisions the default — visible, contestable, updatable as conditions change. It doesn't require trusting that the right people will always be in the room.
That reorientation felt like the most important thing said in three days. Not because it resolved the AI governance problem — it didn't — but because it located agency in the right place. Not in platforms deciding to behave better. Not in legislation that may or may not pass. In the institutions that decide, before the question becomes urgent, what kind of infrastructure they want to be running on.
Philanthropy can fund that. It is exactly the kind of work that markets won't build and governments won't prioritise until someone has already proved it works.
Roshan Ghadamian presented at the 2026 GPF Leaders Summit. He is the founder of Elevate and the principal researcher at the Institute for Regenerative Systems Architecture.
Read the explainer: Pre-Governing
The governance-as-architecture approach described here is the design philosophy behind Constellation, IRSA's institutional governance platform.