I'm saying there's something structurally different from autonomous systems generally, and from an LLM corpus, which has all of the information in one place and is at least in theory extractable by one user.
I point that out a little bit when I refer to agencies being discouraged from sharing information. The CIA may be worried about losing HUMINT data to the NSA, for example. You may also be referring to them compartmentalizing information away from the president, which, you're right, happens to some extent now but shouldn't in theory. Maybe it's a don't-ask-don't-tell arrangement. I think Cheney blew the cover of an intel asset, though.
> compartmentalizing the information away from the president as well which you are right happens to some extent now
This is nothing new, and has been happening since at least the 1940s, to multiple administrations from both parties. Roosevelt, Truman, Kennedy, Nixon, Reagan...and that's just some of the instances which were publicly documented.
Thanks for the comment. Interesting to think about but I am also skeptical of who will be doing the "collecting" and "synthesizing". Both tasks are potentially loaded with political bias. Perhaps it's better than our current system though.
Admittedly, there's not a direct tie to what I'm trying to sell. I just thought it was a worthwhile topic of discussion - it doesn't need to be politically divisive, and I might as well post it on my company site.
I don't think there are easy answers to the questions I am posing and any engineering solution would fall short. Thanks for reading.
It’s a call to patriotism. China versus America. “Who will you back?” This has become a common plea from the Silicon Valley elite over the last six months. I heard the pitch up close at the Harvard Kennedy School, where a visiting Eric Schmidt warned that AI may soon cross into autonomous self-improvement, argued that someone will need to “raise their hand” and impose limits, and then pivoted into the geopolitical register, contrasting American and Chinese trajectories and urging policy and funding choices aligned with “American values.” Others have made versions of this argument in different forums. Tarun Chhabra, head of national security policy at Anthropic, has urged an “American stack” and treated model governance as a geopolitical contest. Putting aside the awkwardness of nationalist messaging coming from the Bay Area’s long-time borderless “global citizens,” the incentives are not hard to see. If you can frame the open-versus-closed models debate as a national security referendum, you can cast restrictive rules as patriotism and “responsible control” as synonymous with dominance by a small circle of incumbent providers.
The posture makes sense once you consider two facts. One: industries that may live and die on capricious regulatory rulemaking must make their case to those with their hands on the levers of power. In 2026 America, those hands belong to professed patriotic Republicans. Two: Big Frontier LLM is losing the tech battle, or at least losing the easy assumption that America’s lead is automatic and permanent. They are on their back foot, so they must wrongly frame the open-versus-closed model debate as a fight between America and China. America cannot afford to lose a battle to China, and by extension Anthropic, OpenAI, and Alphabet cannot afford to lose to their competition.
Yet there is nothing inherently Chinese about open models and nothing inherently American about closed models. If anything, it is the opposite. Open models are decentralized, inspectable, forkable, and difficult to monopolize. That aligns with an American instinct to diffuse power, prefer competition over permission, and distrust single points of control. Closed models concentrate capability behind a small number of gatekeepers, wrapped in secrecy, and sustained by privileged access to regulators. That logic is far closer to centralized control than to open competition. The real fault line is not America versus China. It is democratic diffusion versus unnatural scarcity, and good tech versus bad tech.
Agreed with everything you said! Somewhere in the last 10 years, Silicon Valley has gone full swing in support of "say whatever gets you the result you need, even if it's outright misleading, propaganda, or a lie" (I'm looking at you Sam Altman). Corporations always tend toward fascism, their agendas are aligned.
Take a look at that sea of kids taking the Gaokao to get into Beijing University and you know the software stack is lost already; it's now only about the fabs (hence the Greenland?).
Either way, not sure protectionism and siphoning money to frontier model owners will help us.
But by that argument they would have beaten us to frontier model tech as well. Their education system appeared better than ours 20 years ago. We could have a bigger and broader conversation comparing the two systems, and China's has a lot of flaws.
In 2000, President Bill Clinton famously looked at Beijing’s early internet controls and quipped: “Good luck. That’s sort of like trying to nail Jell-O to the wall.”
So far he’s been proven wrong. The CCP didn’t just contain the internet; it has effectively used the internet as a tool to entrench its control by building a system that fuses chokepoints, platform governance, and punitive enforcement into something like a sovereign information utility. That said, the jury is still out, and Clinton may still be vindicated.
LLMs can be understood as a natural outgrowth of Clinton’s (and Gore’s) internet, but they can also be seen as its next evolution. LLMs present significant opportunities for economic growth, but in pursuing that growth they also amplify individual agency and autonomy. The Party therefore faces a quandary: pursue a strategy of economic growth and risk an erosion of Party authority, or crack down and risk being left behind in the technology of the future.
What do they (or you) have to say about Lee Sedol's move 78 against AlphaGo? It seems like that was "new knowledge." Are games just iterable while the real-world idea space is not? I am playing with these ideas a little.
With LLMs the synthesis cycles could happen at a much higher frequency. Decades condensed to weeks or days?
I imagine possible buffers on that conjecture-synthesis cycle being experimentation and acceptance by the scientific community. AIs can come up with new ideas every day, but Nature won't publish those ideas for years.