Thank you for pointing this out. The phrase “AI living among us” was not meant as a literal ontological claim, but rather as a social metaphor: how people might perceive and integrate AI into everyday life.
I agree that governance must avoid anthropomorphizing tools. At the same time, in policy discussions metaphors often serve to highlight social risks and expectations.
Your “AAA” framing (Autonomy, Ambition, Access) is an interesting lens — I see value in exploring how licensing frameworks like AIBL could act as safeguards around exactly those dimensions.
This piece argues for an AI Social Contract as a safeguard, with staged licensing and human oversight in “gray zones.”
It suggests that imperfection itself should be a design principle.
Do we need governance frameworks like this — or is existing liability law enough?
It’s true that much of the debate around AI swings between extremes — utopian promises on one side, dystopian collapse on the other. But institutions don’t operate well in extremes.
What matters is how we design governance that acknowledges uncertainty while still enabling progress. In practice, that means imperfect but adaptive frameworks — guardrails that evolve as technology and society evolve.
Instead of asking “which fallacy is right,” we might ask: how do we build systems that remain trustworthy even when our assumptions about AI turn out to be wrong?
I think the real challenge is not whether AI will “replace” people, but how we preserve the spaces where skills are actually practiced and refined.
Entry-level jobs, internships, and junior projects have always been more about learning curves than efficiency. If AI shortcuts those too aggressively, we risk cutting off the very ladder that produces the next generation of capable engineers and creators.
Maybe the question isn’t “Will AI take jobs?” but “How do we redesign pathways so humans still get the training ground they need—while AI handles the repetitive load?”
A free-for-all was a natural assumption in the early internet, but in the age of AI, alignment with contracts and governance becomes essential. Technical capability alone is not enough — without mechanisms like licensing or audits to ensure legitimacy, such practices may prove socially unsustainable.
People don’t really use ChatGPT as a search engine replacement. It’s more about decision support, writing, and formatting tasks. That matches what I see at work: younger colleagues often use it for drafting text or templates, but not for “just looking things up.”
> People don’t really use ChatGPT as a search engine replacement
Some do, and they think that they are using it as a replacement. I've been doing research on its use among college students and I've heard firsthand that some of them (especially students in non-STEM fields) think ChatGPT can be as useful as, if not better than, search engines at times for _seeking_ information.
You may be talking to a specific subset of the population, but once you branch out and observe/hear from broader demographics, you'd be surprised to learn about people's mental model of the genAI technologies.
Generative AI may automate some entry-level tasks, but young professionals are not just “replaceable labor.” They bring growth potential, adaptation, and social learning. Without frameworks to manage AI’s role, we risk undermining the very training grounds that prepare the next generation of experts.
I agree that high turnover is a real constraint. That’s why the answer isn’t “10 years of apprenticeship” but designing scaffolds that combine learning with contribution in a shorter timeframe. Things like short rotations, micro-credentials, or mentorship stipends let juniors add value while they’re still on the job. Even if they leave after a few years, the investment isn’t wasted — both sides still capture meaningful returns.
Interesting thought — long-term contracts could indeed align incentives for growth and stability. The challenge, as you note, is trust: few employees or companies are willing to bind themselves for 5–10 years in today’s fluid market.
That’s why governance frameworks (whether in labor or in AI) matter: they provide external guarantees of trust where bilateral promises may not hold.
How do they bring more “growth potential” than a mid-level developer with 3-5 years of experience? The average tenure of a developer is 2-3 years. I expect that to increase slightly going forward as the job market continues to suck. But why would I care about the growth of the company when my promotion criteria are based on delivering quarterly or yearly goals? Those goals can be met much more easily by paying slightly more for a mid-level developer who doesn’t do negative work, both directly and by taking time away from the existing team.
You’re absolutely right that mid-level hires buy immediate productivity. But “growth potential” isn’t just romanticism — it’s an investable trajectory. With the right project design, feedback loops, and domain exposure, juniors can grow into “multipliers” — people who combine technical skills with adaptability or domain expertise. That’s a kind of return you rarely get from simply adding another mid-level hire. In practice, resilient organizations balance both: mid-levels for immediate throughput, and juniors for long-term strength.
Of course I meant 3-5 years of experience not 35 years of experience. :) I just edited it.
You’re not “investing” in anyone if their tenure is going to be 2-3 years with the first one doing negative work.
And why should juniors stay? Because of salary compression and inversion, HR determines raises while the free market determines comp for new employees, so it makes sense for them to jump ship to make more money. I’ve seen this at every company I’ve worked for, from startups, to mid-size companies, to boring old enterprise companies, to BigTech.
Even managers can’t fight for employees to get raises at market rates. But they can get an open req to pay at market rates when that employee leaves.
And who is incentivized to care about “the organization” when line-level managers and even directors are incentivized to care about the next quarter to the next year?
I hear you — salary compression and inversion, along with short tenure, are very real structural problems. It’s understandable that managers and even directors end up focused only on the next quarter.
My broader point is that when these short-term incentives dominate, organizations (and societies) lose the capacity to build for the long term. That’s exactly why governance frameworks matter: they help create safeguards against purely short-term dynamics — whether in HR policy or in AI policy.
Nobody is really behaving in a manner tied to long-term thinking anymore.
Everything is short term. Just look at the equity market — it’s all pricing, not intrinsic valuation based on forecasting cash flows out in perpetuity.
Folks need to wake up to this realisation and just accept it as a flaw of the system we operate in. Until the system is revised and redesigned, it’s not gonna change.
True — short-termism is deeply baked into the current system, from equity markets to corporate incentives.
But that’s exactly why we need governance frameworks: markets alone won’t correct for long-term stability. Well-designed institutions can act as the counterweight — whether in finance or in AI policy.
We are exploring the idea of AI Behavior Licensing (AIBL) — a framework to license embodied AI behaviors, similar to how we regulate human drivers or medical devices.
The goal is to create an institutional safeguard before embodied AI becomes mainstream.
Question to the community: Should AI be licensed like human professionals, or is existing liability law enough?