
Human Input: Who holds the levers? Power, trust, and the new AI governance mess

The Silicon Flatirons Flagship Conference focused on the emerging “have and have not” divide in artificial intelligence. Crys Black shares her thoughts.

Crys Black

Longmont, Colorado

Last updated on Feb 9, 2026

Posted on Feb 9, 2026

When people argue about “AI regulation,” they often act like it’s a technical dispute: model safety, accuracy, bias. After spending a full day at the Silicon Flatirons Flagship Conference at Colorado Law, I think that framing misses the point. The real question is who holds the levers: who gets to set defaults, shape markets, define what policymakers “know,” and decide whether opting out is even possible.

The phrase I kept coming back to all day was “the freedom to be off.” In the keynote, Brett Frischmann of Villanova University described a growing asymmetry in which people with resources can choose when and under what conditions they are subject to surveillance, nudging, and manipulation. Everyone else gets whatever defaults show up in their tools, schools, and workplaces.

That idea echoed later in sharper human-rights language from Calli Schroeder of the Electronic Privacy Information Center: the rich and powerful enjoy the luxury of deciding when, and on what terms, they are watched, nudged, and manipulated.

Once you see the power problem, you start seeing it everywhere. You see it in debates about competition, in fights over enforcement, and in subtler moves that shape the information environment regulators rely on. By the end of the day, my takeaway was simple: AI is changing power structures, and trust is the contested currency.

Trust does not come from slogans or “responsible AI” landing pages. It also does not come from a single, silver-bullet “right,” no matter how emotionally satisfying that right may sound.

Several speakers returned to the “opt-out” option and “the freedom to be off,” and I understand why. The problem is that opt-out is rarely neutral in practice. People who opt out can lose access to jobs, services, and basic participation. In other words, the right exists on paper, and the penalty shows up in real life.

A more workable definition of trust is narrower and more operational. Trust looks like clear disclosure when AI is in the loop, honest boundaries on where it is used, and meaningful ways to challenge harmful outcomes. It also requires institutions that can apply rules consistently, not to mention technologists who can translate good intent into enforceable mechanisms for non-deterministic systems.

None of this stays in the policy room. It shows up in product roadmaps, vendor contracts, and procurement checklists, as well as in the uncomfortable questions employees and customers are already asking.

Lever 1: Defaults are power

Frischmann’s keynote hit what operators already feel in their inboxes and product update logs. GenAI is being integrated into business and consumer software without meaningful user input. He argued the rush is driven by fear of falling behind, getting ahead of regulators, harvesting more data, and normalizing AI as inevitable. Then he said the quiet part out loud: none of these answers genuinely reflect user preferences. He, for one, does not want it, and he wants to be able to opt out.

As a fractional operator, I translate that into a simpler statement: Defaults are strategy. If AI is on by default, most people will use it by default. Then the burden shifts to the individual to figure out what is happening, what it is doing, and how to say no.

Frischmann also offered a concrete, implementable idea: a standardized opt-out signal similar to Global Privacy Control (GPC), paired with state laws that make it enforceable. Colorado’s Attorney General has a dedicated page on approved universal opt-out mechanisms under the Colorado Privacy Act, including GPC (Colorado AG: Universal Opt-Out Mechanism). 

For a plain-English overview of how Colorado approached this, the IAPP’s write-up is also helpful (IAPP on Colorado UOOM). That kind of policy idea matters because it is legible. It forces a real question for vendors and deployers. If your AI rollout cannot survive genuine opt-out, what does that say about the value exchange?
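
To make that opt-out signal concrete: GPC is a browser-level flag, sent as a Sec-GPC: 1 request header and exposed to page scripts as navigator.globalPrivacyControl. Under the Colorado Privacy Act it currently governs things like targeted advertising and data sales; extending the same pattern to AI defaults, as Frischmann suggests, might look something like the minimal sketch below. The handler and the aiPersonalization flag are hypothetical, included only to show where a universal signal could flip a default off.

```typescript
// Minimal sketch: honoring a Global Privacy Control (GPC) signal server-side.
// GPC arrives as the "Sec-GPC: 1" request header; browsers that support it
// also expose navigator.globalPrivacyControl to page scripts. The
// "aiPersonalization" and "dataSharing" defaults are hypothetical flags.
import * as http from "http";

interface SessionDefaults {
  aiPersonalization: boolean; // hypothetical: AI-driven features on by default
  dataSharing: boolean;       // hypothetical: third-party sharing default
}

function defaultsFor(req: http.IncomingMessage): SessionDefaults {
  // Node lowercases incoming header names.
  const gpcOn = req.headers["sec-gpc"] === "1";
  return {
    // If the universal opt-out signal is present, flip opt-in defaults off
    // instead of making the user hunt for a settings page.
    aiPersonalization: !gpcOn,
    dataSharing: !gpcOn,
  };
}

http
  .createServer((req, res) => {
    const defaults = defaultsFor(req);
    res.setHeader("Content-Type", "application/json");
    res.end(JSON.stringify(defaults));
  })
  .listen(8080);
```

The point of the sketch is not the code; it is that honoring one standardized signal is cheap for a deployer, which makes “we could not support opt-out” a hard claim to sustain.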

Lever 2: The fight over reality (epistemic capture)

One of the most useful concepts I picked up that day was “epistemic capture.” It sounds academic, but it is painfully practical: companies do not just try to influence the rules, they influence what policymakers can know while the rules are being written.

In the morning, the discussion framed it as control over information flows and knowledge creation, including the ability to withhold information, selectively disseminate it, and “flood the regulatory zone.” If you have ever watched a small team get buried under a pile of vendor PDFs, you already understand the dynamic.

This is where knowledge ownership becomes power. When one side controls the briefs, the benchmarks, the access, and the expert time, it also gets to steer the story. Hype fits into that story as a convenient fog. “We have to move fast or we lose” narrows the conversation to speed and scale, while the harder questions, about rights, limits, and accountability, get pushed aside as if they are optional.

Trust relies on shared reality. If the information environment is tilted, if regulators cannot see inputs or interrogate incentives, then oversight becomes symbolic. Rules start to sound strong while enforcement stays weak.

The AI Market Forces panel: Harry Surden, Asad Ramzanali, Elettra Bietti, Christopher Yoo, and Richard Whitt. (Photo credit: Korey Mercier)

Lever 3: Competition is not about how many players. It is about whether entry is still possible.

As a startup advisor, I’m always listening for the difference between a market that is evolving and a market that is freezing.

In the competition panel moderated by Harry Surden, Professor of Law, University of Colorado, the framing was refreshingly direct. The question is not “what is the desirable number of competitors,” but what the market needs so that a new entrant can actually get in, especially if there is genuine innovation in how models are developed. Then came the barriers: the cost of and limited access to compute, exclusionary contracts, and vertical integration.

This is where Asad Ramzanali, Director of Artificial Intelligence and Technology Policy, Vanderbilt Policy Accelerator, offered the most concrete example of how messy this gets when model providers also compete with their customers.

He described Windsurf as an AI coding tool with a model picker and a default that called Anthropic’s Claude via API. TechCrunch reported that Windsurf said Anthropic was limiting its direct access to Claude models after acquisition rumors. A follow-up story captured Anthropic leadership’s stated rationale in competitive terms.

When providers who control critical infrastructure can also build competing applications, competition stops being a market feature and becomes a permission structure. Ramzanali’s bottom line was the part that I circled too: What we do not want is infrastructure providers who can pick winners and losers.

TechCrunch’s reporting on Windsurf’s access limitations gives us a clean external peg for this story. For context on how the broader Windsurf acquisition story evolved, see TechCrunch’s later recap when the OpenAI acquisition fell apart and Windsurf’s leadership team was scooped up by Google. 

The Administrative State & Tech Access panel: Blake Reid, Chris Lewis, Gus Hurwitz, Jennifer Huddleston, and Tejas Narechania. (Photo credit: Korey Mercier)

Lever 4: Unitary executive theory and the slow-motion dissolution of independent agencies

This was the panel that made the day feel bigger than AI.

Chris Lewis, President and CEO, Public Knowledge, described what he called a war on independent agencies, driven by unitary executive theory and led by the presidency.

In plain terms, the unitary executive theory is the idea that the President must be able to directly control the executive branch, including through removal power over officials who run agencies. The legal trendline has been moving toward stronger presidential control, with modern cases narrowing the space for insulating independent agency leadership.

Then things got real in the Q&A. An audience member asked what it means for citizens and industry if a president can threaten to withhold congressionally appropriated broadband funding while also being able to fire agency heads. Lewis answered with a single word: “lawlessness.”

That response matters because communications policy is the older sibling of AI policy. The FCC is one of the places where the country has historically wrestled with infrastructure power, speech, and market concentration. If independent agencies are weakened, reinterpreted, or brought under tighter presidential control, then enforcement becomes more volatile.

Volatility favors incumbents. Volatility punishes smaller entrants and any long-term investment in compliance or trust-building.

And that connects directly back to AI. AI governance is not only rules about model safety. It is also whether the institutions capable of enforcing rules retain the independence to do it consistently across administrations and against powerful actors. If they do not, enforcement becomes unpredictable.

Using AI Tools in Significant Decisions: Margot Kaminski, Calli Schroeder, Rep. Brianna Titone, Phil Gordon, Nicholson Price, and Stevie DeGroff. (Photo credit: Korey Mercier)

What trust looks like when it is real: Section 230, liability, and why “right to cure” is harder than it sounds

In the Colorado policy discussion, Rep. Brianna Titone, State Representative, Colorado General Assembly, raised the fear that AI companies want blanket protections similar to Section 230.

Section 230, part of the Communications Decency Act of 1996, is commonly understood as shielding online platforms from liability for many kinds of third-party user content, alongside a “Good Samaritan” provision that protects good-faith moderation. Cornell’s Legal Information Institute provides the statutory text in a readable format (47 U.S.C. § 230). For a short, neutral summary of what it does and does not do, the Congressional Research Service has a helpful brief (CRS: Section 230 Overview).

Titone’s caution is understandable. A sweeping immunity regime for AI could function as an accountability off-ramp at exactly the moment systems are being deployed into high-stakes decisions.

She then described what she wants instead: disclosure, transparency about inputs and outputs, and a “right to cure,” comparing it to credit score disputes.

The credit-dispute analogy is emotionally appealing, but it can be technically misleading. Credit files are largely legible records. Many AI-driven decisions are produced by systems where the inputs are high dimensional, the logic is learned rather than written, and a single “reason” for an outcome can be probabilistic, indirect, or emergent.

Meaningful contestability is still possible. It just does not look like correcting a wrong address and clicking “fix.”
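
To see why, consider a minimal sketch with hypothetical feature names and weights, not any real system: a credit dispute corrects one legible field, while a learned scorer spreads the “reason” for an outcome across many weighted inputs that were fit from data rather than written as rules.

```typescript
// Illustrative only: hypothetical features and weights, not any real system.

// A credit-file dispute: correct one legible field and the record is fixed.
interface CreditRecord {
  latePayments: number;
  addressOnFile: string;
}

// A learned scorer: the "reason" for a decision is spread across many
// weighted inputs, and the weights were fit from data, not written as rules.
const learnedWeights: Record<string, number> = {
  resumeGapMonths: -0.4,
  keywordOverlap: 1.2,
  referralFlag: 0.8,
  zipCodeCluster: -0.6, // proxy features like this are where contestability gets hard
};

function score(features: Record<string, number>): number {
  // Weighted sum followed by a sigmoid squashing to a 0..1 "score".
  const z = Object.entries(learnedWeights).reduce(
    (sum, [name, w]) => sum + w * (features[name] ?? 0),
    0,
  );
  return 1 / (1 + Math.exp(-z));
}

// No single input fully determines the outcome; changing one feature shifts
// the score only partially, which is why "fix one field" is not a remedy.
console.log(score({ resumeGapMonths: 6, keywordOverlap: 0.3, referralFlag: 0, zipCodeCluster: 1 }));
console.log(score({ resumeGapMonths: 0, keywordOverlap: 0.3, referralFlag: 0, zipCodeCluster: 1 }));
```

Contesting the second kind of decision means getting at the inputs, their effects, and the choices behind them, which is exactly the sort of mechanism a “right to cure” bill would have to spell out.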

This is why we need real technologists helping write these bills. Not as decorative reviewers at the end, but as co-authors who can translate good intent into workable mechanisms. Otherwise we get laws that sound strong but collapse into either unenforceability or performative compliance.

Colorado’s own AI law, SB24-205, is a good reminder of why technical translation matters. It focuses on “high-risk” AI in consequential decisions and sets a compliance timeline that companies will feel in real operational terms. The official bill page includes a readable summary and the June 30, 2026 effective date (Colorado Legislature: SB24-205). The National Association of Attorneys General published a practical overview of the law and enforcement posture, found at NAAG: Deep Dive into Colorado’s AI Act.

A small but telling “what wasn’t said”

One more observation from the day. In the frontier-model conversations, I did not hear anyone mention Grok or xAI, even in passing. That surprised me, especially because the conference’s Day 2 focus leaned hard into communications infrastructure, spectrum, and what comes next.

In the same week as the conference, news broke that SpaceX had acquired xAI, and the public rationale centered on building space-based, solar-powered data centers to support frontier-scale compute, although some experts are skeptical of the concept. That alone should have made Grok and xAI a natural discussion point in a room focused on communications and infrastructure. 

The point is simple: frontier AI is starting to collide with launch economics, orbital risk, and energy systems. The model debate is the loud part. The quieter part is concrete, capital intensive, and hard to unwind once it is poured.

That tension came up explicitly in the “AI Boomer Bust” fireside chat on lessons from the telecom bubble. Phil Weiser, Attorney General for the state of Colorado and the moderator of this session, noted that the AI bubble may look more like the telecom bubble than the dot-com bubble, because of the physical infrastructure being built, including data centers and other real-world buildouts that will likely “end up somewhere.”

Larissa Herda, formerly CEO and President of Time Warner Telecom Inc., described it even more plainly: The infrastructure investment looks “very telecom-like,” with data centers, power requirements, long-term leases, and capital eventually asking when it gets paid back.

The day also gave me a second “missing” signal: even though the Flagship Conference theme explicitly name-checked quantum computing, it barely surfaced in the Monday discussion. Colorado is a serious quantum research hub. If quantum-plus-AI is not on the policy agenda yet, it is not because it will not matter. It is because policy attention follows what is loud, deployed, and monetized right now.

For a strong recap of the second day of the conference, check out Korey Mercier’s article in Colorado AI News, "AI governance can’t stop at software: What Silicon Flatirons revealed about infrastructure risk."

My trust test for Colorado leaders

Here’s the test I will keep using with founders and operators, because it matches the levers I heard discussed all day:

  1. Is there real disclosure? Not legalese, but plain language that tells people when AI is in the loop and what it is doing.
  2. Is there meaningful recourse? Not “submit a ticket,” but a way to challenge outcomes, escalate, and get a human decision that can actually change the result.
  3. Is the market still open? If a handful of infrastructure players can pick winners and losers, trust becomes a branding exercise.
  4. Is enforcement durable? If rules depend on who is in office or which agency gets gutted next, compliance becomes roulette.

And on the “freedom to be off” question, I’ve landed here: It’s a useful moral signal, but it’s not a solution by itself. Opt-out rights that come with social or economic penalties are not freedom. They are paperwork.

The question is not whether AI will show up everywhere. The question is who gets to decide, and what earns trust when they do.
