For more than two decades, CU Boulder’s Silicon Flatirons has convened rigorous debates on technology and policy. This year’s Flagship Conference, including Monday afternoon’s Spectrum Security and Resilience Summit, revealed a widening gap between AI governance and the infrastructure risks it increasingly shapes.
Day One: Power, agency, and the limits of choice
The 2026 Flagship Conference opened with a familiar but still unresolved set of questions: Who benefits from AI adoption, who bears its risks, and who retains meaningful agency as AI systems become ambient rather than optional? Discussions focused on governance, accountability, and the widening asymmetry between those who can shape or disengage from AI-driven systems and those who cannot.
A recurring theme was that access to AI is no longer the core issue. Instead, the more consequential divide lies in the ability to refuse, pause, or meaningfully influence how AI is integrated into institutions, products, and daily life. That idea, along with the power dynamics behind it, is explored more fully in a companion piece by Crys Black ("Who holds the levers? Power, trust, and the new AI governance mess"), which examines the growing importance of what some speakers described as the “freedom to be off.”
The Flagship Conference also grounded these concerns in policy reality. Colorado Representative Brianna Titone discussed the intent behind Senate Bill 24-205, the Colorado AI Act, which seeks to introduce transparency, disclosure, and accountability requirements for high-risk AI systems. Regardless of how the law ultimately evolves, the discussion reflected a broader recognition that today’s AI incentives reward speed and scale while shifting risk downward to deployers, workers, and consumers, unless governance intervenes.

Two conversations, one risk surface
As the first day of the Flagship Conference concluded, the conversation flowed directly into Day Two, which included the four-hour Spectrum Security and Resilience Summit, an event with its own history and purpose that has long focused on emerging threats to communications infrastructure. The shift in tone was immediate and deliberate.
Where Day One of the conference emphasized governance, power, and agency, Day Two focused on systems that cannot fail without real-world consequences: public safety communications linking emergency responders, secured communications used across government agencies, and spectrum-dependent systems that support law enforcement and public-sector operations. These are not abstract networks. They are the connective tissue of emergency response, public safety coordination, and government continuity, and they remain largely reliant on ex post enforcement models that respond after harm occurs rather than preventing it.
Panels examined interference, jamming, spoofing, congestion, and the growing difficulty of attribution in an increasingly crowded and contested spectrum (“wireless”) environment. The Summit opened with a keynote address by FCC Commissioner Anna Gomez and featured several current and former members of the FCC (responsible for governing non-federal/commercial use of the radio frequency spectrum) and NTIA (responsible for federal use). Also present was David Goldman, VP of Satellite Policy at SpaceX.
While Gomez focused her comments on the proliferation of consumer-grade, illegal jamming devices and their ilk, Goldman opened the audience’s eyes to the implications of satellite communications for our largely terrestrial-bound spectrum. SpaceX (which owns Starlink) acquired EchoStar’s unpaired AWS-3 and 2 GHz/AWS-4 licenses in 2025 for a combined $19.6 billion in cash and stock, opening the way to developing a direct-to-cell network and expanding on its existing T-Satellite service for T-Mobile users.
Goldman predicted that 2026 will bring significant discussion around potential international responses to U.S. space dominance, and he advocated for a global conversation on spectrum licensing and governance.

The quiet absence: AI in infrastructure threat models
Despite the Summit’s forward-looking focus, artificial intelligence was rarely framed as a central driver of spectrum risk, nor was there any mention of quantum technologies, which are expected to upend the cryptographic foundations on which secure communications depend. These absences were notable, not as a failure of the discussion, but as a reflection of how governance conversations remain segmented even as systems converge.
AI is already shaping how spectrum is monitored, optimized, and contested. It lowers the barrier to interference by automating tasks that once required specialized expertise. It accelerates both defensive and malicious capabilities, compresses response timelines, and complicates attribution. As AI becomes embedded in the management of shared physical resources, failures will no longer surface simply as flawed recommendations or biased outputs; they will surface in the infrastructure itself.
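To make the automation point concrete, consider a minimal, purely illustrative sketch (not anything presented at the Summit) of the kind of commodity monitoring logic now within easy reach: a rolling statistical check that flags a sudden rise in received power on a channel, a task that once required a trained spectrum analyst and dedicated equipment. All names and values below are hypothetical.

```python
# Illustrative sketch only: flags anomalous power levels in simulated spectrum
# readings using a rolling z-score. The data and thresholds are hypothetical
# and are not drawn from any system discussed at the Summit.
import numpy as np

rng = np.random.default_rng(seed=42)

# Simulate one hour of per-minute received-power measurements (dBm) on a channel:
# a roughly -100 dBm noise floor, with a 10-minute interference event injected.
power_dbm = rng.normal(loc=-100.0, scale=1.5, size=60)
power_dbm[30:40] += 25.0  # hypothetical jammer raising the floor by ~25 dB

def flag_interference(samples, window=15, threshold=4.0):
    """Return indices whose power deviates sharply from the trailing window."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        z = (samples[i] - baseline.mean()) / (baseline.std() + 1e-9)
        if z > threshold:
            flagged.append(i)
    return flagged

print(f"Possible interference at minutes: {flag_interference(power_dbm)}")
```

The point is not the sophistication of this particular check, but that detection, evasion, and interference logic of this kind can now be assembled, tuned, and scaled with off-the-shelf tools, on both sides of the threat.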
In public-sector contexts, the implications are especially acute. Disruption or degradation in spectrum-dependent systems can impair emergency dispatch, fracture coordination between agencies, or compromise secured government communications. In law-enforcement environments, the spectrum threat landscape already includes drones used to deliver contraband and mobile devices later leveraged to coordinate criminal activity. These are problems that scale faster as detection, evasion, and interference techniques become more automated.
The Summit correctly treated spectrum as essential infrastructure. It was described as “the lifeblood of wireless” by Monisha Ghosh, a professor of electrical engineering at Notre Dame and contributing member of VCAT (the Visiting Committee on Advanced Technology), an advisory committee within NIST (the National Institute of Standards and Technology). What remains unresolved is how AI governance adapts once AI itself becomes part of that infrastructure.
Why this convergence matters now
The urgency of this gap was underscored by developments unfolding alongside the Summit. On the same day, SpaceX announced its acquisition of xAI, further integrating large-scale AI development with space-based infrastructure. SpaceX has also filed an application with the FCC to deploy satellite constellations that could function as orbital data centers.
The elephant in the room here is Grok, which was not mentioned once during the entire two-day conference. Created by xAI, Grok is intentionally permissive in its implementation of guardrails and has been met with controversy over its lack of consumer protection mechanisms. Though notably excluded from one panelist’s list of “dominant large language model systems,” it is the LLM that today is embedded in Tesla vehicles and will soon be in SpaceX rockets, satellites, and communications systems.
These developments are not speculative. They illustrate a clear direction of travel: AI, compute, and spectrum are converging faster than existing governance models are evolving. Decisions about AI risk can no longer be confined to software systems or enterprise deployments when AI-enabled platforms increasingly operate at national or global scale and rely on shared public resources.
Colorado’s AI policy efforts, including SB 24-205, represent early attempts to grapple with accountability in this environment. But they also raise difficult questions: How do disclosure, transparency, and liability function when AI systems influence shared infrastructure? What does accountability look like when harms are systemic, cross-jurisdictional, or emergent rather than intentional?
A signal, not a critique
None of this diminishes the seriousness or success of either event. The Flagship Conference surfaced the human and institutional stakes of AI deployment. The Spectrum Security and Resilience Summit highlighted the fragility of the infrastructure that modern society depends on. Together, they revealed a governance challenge that sits between those domains.
If AI governance continues to focus primarily on software, models, and individual decisions, it risks falling behind the systems that AI is already reshaping. The next AI-related failures are unlikely to look like chatbots gone wrong. They are more likely to surface as disrupted emergency communications, degraded coordination between public-safety agencies, or compromised secured government systems.
Silicon Flatirons has long played a role in identifying governance challenges before they fully materialize. This year’s events suggest that the next phase of AI governance must extend beyond code and into the public infrastructure that AI increasingly inhabits.