There is indeed a painful dissonance here. I like this new world, but feel sorrow for the loss of something. I try to remember how empowering AI is. It is already allowing millions of people to finally use the devices they've been sitting in front of all these years. No longer do they have to feel constrained by software creators who have made choices for them. Now it is their tool through-and-through, and they can construct software on-the-fly to match their needs precisely. They have been buying computers with both hands tied behind their backs. Now they are in control.
I disagree. There's definitely _some_ who will use these tools to build systems for themselves. But do you think the chef who's been pulling insane hours in the restaurant wants to come home and build his own software? Or the teacher who just had to deal with an annoying classroom all day?
People want software that just works; they'll pay for it. They don't want to use their computers to build their own software. That idea is just software and computer geeks (said affectionately) projecting their own desires onto a larger community.
Does it have to be mutually exclusive? On-the-fly software does not destroy software. Opening up software creation does not mean shoving the existing creators out; it just means creating a larger space that others can occupy, like when 'real' programmers had to slowly permit 'script kiddies' into their spaces. All feels a bit 'old guard' vs 'new guard'.
Not mutually exclusive, but I thought your initial post painted an overly rosy picture with the sentence "[..] allowing millions of people to finally use the devices they've been sitting in front of all these years".
I don't think it's happening at this scale. I'll admit I have no real data to back that up; it's just a hunch really. But I find it hard to believe that those people who previously weren't interested in building software are now suddenly interested in building stuff with an LLM. I'm sure _some_ people are doing this, and then they either hit roadblocks and quit or stick with it and learn actual software engineering.
Looking at my non-tech bubble of friends and family, I don't see anyone actually doing that. I think it's a vocal minority that is doing this. That's just anecdata of course.
I think using GPT et al. to create a bespoke tool to do what you need is giving the average home user too much credit. What I see more of is people just using the prompt in place of software to create an outcome. "Transcribe this recording", "give me a synopsis of the Godfather films", "How can I wow my girlfriend?". The fraction of home users who are using this to create software is likely limited to people with no skills trying to make apps to sell, which is not a tool to help them with something else. Even the software devs I know are using tools made for them, not making their own Claude Code or Cursor.
Right now, the greenfield is in how you use these tools. Making a bespoke specialized tool for yourself, automating onboarding or CI/CD setups with simple commands, or building bridges between "gatekept" existing software and agents: all are ripe for growth.
I get that we should see this as a good thing, but I see it as entering the last act of a play. Thousands of people are doing these things and coming up with uses for the tools around the clock. Novel uses for the technology will all be exhausted in the next couple of years and there will be less room for innovation than there was before LLMs.
We’re not there yet, but that chef or that teacher definitely would want an AI voice assistant as good as the computer in Star Trek. Maybe to achieve that, a language model builds software entirely autonomously and runs it to carry out the user’s command. Or maybe they want the computer to build them software that they can then use to do their own work more efficiently.
> We’re not there yet, but that chef or that teacher definitely would want an AI voice assistant as good as the computer in Star Trek.
Since you brought up Star Trek, a good analogue for AI would be the holodeck. Given the appropriate prompts, it produces amazing scenery and even immersive fantasy narratives.
But occasionally, it goes haywire, the safeties no longer work, and the characters from your fictional adventure try to kill you.
That really is a great thing. I do wonder at the segment of the population that, from the 70s to today, sculpted their brains to think like a von Neumann machine. What will be lost when the last of us passes? It will likely be viewed as an oddity by future generations and people will try to replicate it as a hobby. But many of us began shortly after learning a primary human language, and that degree of specialization isn't something a hobby can reproduce.
I don't even know what this means. Were programming languages and compilers unavailable? I think this overstates the predicament that users were in quite a lot. In fact, I don't think any users even asked for or wanted this, other than maybe very simple musings of wanting to say "computer, enhance" or something like that. I'm not sure how vibe-coding has suddenly untied the average user's hands. Most users will never vibecode anything, and most of the non-technical people who do will try it once or twice as a novelty or an attempt to solve a problem and then give up.
Now, what you might end up doing is removing the need for them to even have a computer to begin with. Someone without a job doesn't need business productivity software that much.
Can anyone familiar with the technology help disillusion naive people like me as to why on earth Palantir needs to exist? It feels like a big pile of nothing. But tbf that's how I feel about Salesforce and Jira too. Big fat database schemas with big fat CRUD atop and layers of snazzy sparklines to make PMs and clients feel nurtured and fuzzy that they've done something material.
Like how Tableau is a great UI for grammar of graphics, Palantir is a great UI for ontological expert systems. Technically you could do everything without it but organizations and especially government typically don’t cultivate that level of expertise in their staff.
In my view expert systems typically failed because the organizations would degrade bureaucratically faster than any expert system could accommodate. With AI there isn’t a pre-requisite need for organizational expertise so the tooling will still work in largely dysfunctional orgs which is a property that did not previously exist. With the help of AI people who don’t understand ontologies can still successfully build one.
Separately, it is my opinion that Palantir is a CIA cut-out for the Peter Thiel faction. So paying Palantir is like paying tribute to that particular faction. Similar to how other large military purchases are less about the military hardware and more of a client-state subscription to ‘align interests’ such that the US is more likely to act in the donor country's interest.
> Similar to how other large military purchases are less about the military hardware and more of a client-state subscription to ‘align interests’ such that the US is more likely to act in the donor country's interest.
I have a feeling this is no longer a viable model. If "subscribers" get threatened every other day, they will be looking for alternatives.
So long as not subscribing is worse than subscribing, countries will still do it. Even if it's not in the interest of the country, the decision makers can and do still get kickbacks / speaking engagements.
It’s interesting to read of the ineffectiveness of influence the gulf states thought they had, though I think that speaks more to the relative cost effectiveness of tributes versus blackmail. These states don’t have the security apparatus to both blackmail US politicians and prevent others from blackmailing those same politicians. This second part is essential as it is what maintains the relative advantage.
I do think they will be less enthusiastic subscribers in the future, and perhaps even shop around for more cost effective approaches. Modi in India is intentionally creating an Indian diaspora as one example and I believe he is bribing politicians to help make this happen.
> read of the ineffectiveness of influence the gulf states thought they had
The primary players in the Gulf - Saudi and the UAE - have been aligned with the ongoing Iran strikes.
KSA's Mohammad Bin Salman has been lobbying Trump to strike Iran [0], just like his predecessor King Abdullah was doing [1]. Similarly, the UAE has an ongoing land dispute with Iran [2].
2. The operationalization of the Iran-Central Asia-China railway in 2025 [0], which allows China to bypass Malacca
3. Iran's relative weakness following the collapse of the Assad regime, the death of much of Hezbollah's leadership, and the Houthis' comparative weakness
4. Continued anger amongst policymakers in the Gulf, Israel, and the US that Iran-backed Hamas launched the 10/7 attack barely 3 weeks after the US+EU launched the IMEC project and were about to loop Saudi Arabia into the Abraham Accords [1]
I was hoping to hear the case made as to why Israel was not the primary reason but instead you seem to have chosen to elide it altogether. It seems to be a conspicuous omission especially when both the US and Israeli admin have repeatedly made the case that Israel was the primary reason.
The primary reason is that Israel and American Zionists (mostly evangelical Christians) lobby for it. The KSA and friends also lobbying for it is just icing on the cake for American politicians.
China also views the EU as a junior partner [0], is running an ongoing disinfo campaign against the industrial exports of an EU member state [1], and has doubled down on its support for Russia [2] in Ukraine in return for Russia backing China's claim on Taiwan [3].
And the EU is uninterested in building domestic capacity for most critical technologies.
Heck, last week [4] the EU excluded AI, Quantum, Semiconductors, and other technologies from the Industrial Accelerator Act (aka the "Made in EU" act) in order to concentrate on automotive and "net-zero" technologies.
Given that Chinese technology imports are already under scrutiny in the EU due to the Ukraine war, this is basically the EU creating a carveout for the US.
Even the major European telecom and space companies like Eutelsat, Deutsche Telekom, and Telefónica bluntly stated that they view the EU's digital sovereignty strategy as dead in the water [5] in its current form.
Edit: can't reply
> They/we will go to domestic producers as much as possible, then China, then US, then rest of the world in that order. At least that would make a rational approach since (for now) unique things like the F-35 can become an expensive paperweight on a whim of a lonely sick man. You can't build any sort of defense strategy on that, can you
But as I clearly showed, the EU is doing otherwise.
And the EU cannot work with China as long as China backs Russia and undermines European industrial exports.
All the rhetoric about digital sovereignty and domestic capacity has been just that - rhetoric.
They/we will go to domestic producers as much as possible, then China, then US, then rest of the world in that order. At least that would make a rational approach since (for now) unique things like the F-35 can become an expensive paperweight on a whim of a lonely sick man. You can't build any sort of defense strategy on that, can you.
> And the EU cannot work with China as long as China backs Russia and undermines European industrial exports.
I mean, that is not that huge a difference compared to the USA (lifting sanctions against Russia, no tariffs there either, but plenty of tariffs for "allies"; threatening NATO members in several ways; taking over Russia's "peace" plans for Ukraine 1:1 and putting the pressure solely on Ukraine; I could go on for pages).
I am not sure Americans really understand how much trust is already gone.
> that is not that huge a difference compared to the USA
It is for the EU.
The EU dislikes the current deprioritization of the Ukraine Conflict by the US, but also recognizes that the PRC is directly providing material support and subsidizing Russia's military industrial complex [0]. That is the red line for much of the EU.
Similarly, for the PRC it's continued support of Russia in their war in Ukraine is also a non-negotiatable [1], and the CCP's foreign mouthpieces continue to reiterate that "the mainstay of EU foreign policy — supporting Ukraine in a conflict to defeat Russia — has turned into a quagmire of sunk costs with little hope of success" [2].
> I am not sure Americans really understand how much trust is already gone
We know. And we don't care.
As long as the EU views Ukraine's territorial integrity as non-negotiable and a large portion of EU states view Russia as the primary national security threat, the US will remain the less bad option than the PRC or Russia.
Both the US and China are aligned in that we view the EU as a junior party that can be pressured [3].
If the EU views Russia as a threat, it will have to accept American vassalage because the PRC will continue to back Russia [1].
If the EU views America as a threat, it will have to accept Chinese vassalage, give up Ukraine, and accept Russia as the primary European military power.
Based on the carveouts within the Industrial Acceleration Act, the EU has chosen American vassalage.
Very bold words. I am not even convinced the USA will stay relevant on the world stage in the long run. Cutting ties hurts, but the process is underway. Also, "vassalage" is a bold word when the US cannot make the EU give up Greenland or get it to come running to help in the Strait of Hormuz (there are other examples too). It is almost as if European politicians are playing it smart.
And my question is - are you fine sacrificing Ukraine in return for a Russian and Chinese military umbrella? This is the hard requirement for China to engage with the EU [0].
The answer in Poland, the Baltics, Czechia, and Finland is NO and that Russia is worse and that Ukraine must be supported, and will back the US no matter how transactional we become.
The answer in Hungary, Slovakia, and Belgium [1] is YES and that sacrificing Ukraine for Russia is acceptable.
> if the Chinese support for Russia can be broken, by economical incentive...
China is not interested in breaking with Russia.
Russia helps China put pressure on Japan [0], helps China put pressure on South Korea [1], allows China to expand its influence in Central Asia [2], acts as a backchannel for China-India diplomatic normalization [3], gives China access to oil and natural gas without dealing with Hormuz or Malacca [4], and allows China to run the Chongqing-Xinjiang-Europe railway [5], which continues to supply Europe with no sanctions despite the ongoing war in Ukraine.
On the other hand, the EU is tariffing Chinese goods [6]; signing FTAs with Chinese rivals like India [7], Japan [8], and South Korea [9]; and signing defense pacts with Japan [10], South Korea [11], and India [12] while allowing them to participate in ReArm Europe 2030.
Additionally, China-EU trade only represents a little over 10% of all Chinese trade [13], and is easily replaceable with expanded trade with ASEAN, Japan, South Korea, and India.
China views Russia the same way America views the EU - a weak junior partner who can be bullied. The US is somewhat trying to pull Russia to our side, and China is somewhat trying to pull the EU to their side, but the reality is both the US and China view the EU and Russia as junior partners.
> the Chinese support for Russia can be broken, by ... threat
What threat can the EU give to China? Chinese foreign policy already views the EU as sanctimonious [14], weak [15], and declining [16].
> over short or long the EU needs to build its own military to a strength it can at least work as a strong deterrence for aggressors
Yep.
But that will take decades, which is why the US and China can both bully the EU with complete impunity today.
Heck, both China [17] and the US under Trump [18] are supporting Viktor Orban because he is a great Trojan horse.
Whenever either the US or China feels the EU is leaning towards one at the expense of the other, they then start breaking EU institutions as a result.
You have a very static view there. In my estimation the US is on the way down, at least economically/financially. Their internal stability is already somewhat broken. It will be hard to continue projecting power without real allies and with the internal issues they have now and will have.
So, if the EU is so inferior, why did it not buckle on the Greenland issue, but Trump was called back by his puppeteers? Why can it say "no" to supporting the US and Israel against Iran? And if they wanted, the EU leaders could go further and match tariffs one by one, and nothing serious would happen. The picture you are painting does not account for the facts. The relationship is not between equals, but lord and vassal is also not a good fit.
I am not sure about the trade figures in your link [13]. It does not open for me. I seem to recall a significantly higher export volume going to Europe. But anyway, China is going to have its own internal issues with an aging populace, an end to strong economic growth, and ever-growing social inequality. They are also too rational (compared to the US) to disrupt good business by mutual bullying (at least overtly and systematically).
> What threat can the EU give to China? Chinese foreign policy already views the EU as sanctimonious [14], weak [15], and declining [16].
15 is an opinion piece written by a failed politician from Kyrgyzstan for China Daily, and 16 is another opinion piece written by a right-wing politician from Slovakia. Neither represents Chinese opinion. 14 doesn't open for me.
The message matters less than the messenger - China Daily is the English language newspaper of the CCP's Propaganda Department and the Global Times is the English language newspaper of the CCP's Central Committee.
The fact that the mouthpieces of two of the CCP's most important committees are constantly publishing content that is dismissive of the EU highlights how China's leadership actually views Europe.
Europeans really need to get it in their head that both the US and China look at the EU dismissively and as a junior partner. Neither the US nor China is interested in a relationship of equals with the EU.
The UK NHS is one of the biggest employers in the world. It absolutely could choose to hire and cultivate that level of expertise but then how would senior management retire into Palantir sinecures?
(It actually has quite a few expert staff who are not delighted with the tools they have been given but they don't have the lobbying power of Palantir and the cluster of consulting firms around it)
Plenty of companies don't "need to exist". A company exists because someone decided to start it (usually to make some money) and lasts until someone decides to end it (usually when it stops making money).
If you're asking why Palantir (and Salesforce, Jira, etc) continue to make money despite not having any novel or complex technologies, my experience has been that these are not prerequisites for solving the vast majority of business problems. Usually network effects, customer relationships, brand identity, user interface, inertia, etc are all more important than the technology.
It is not always easy for a technologist to admit, but companies whose ongoing success is primarily due to some sort of (non-UX) technological superiority are the exception rather than the rule.
> This discounts the value of user experience, which people will pay a premium for.
The people making purchasing decisions at this level aren't the ones using it and don't care one whit about UX.
That isn't to say that it isn't valuable, but it's basically a non-factor. The technology itself is a non-factor. Everything is about connections, buzz words and pretty slide decks.
They literally do, since the people making purchasing decisions are usually the ones that ranked up through a system they used and know the intricacies of, including all the pain points.
As someone who used to teach UX grad courses, I'm happy you feel that way!
But I'm unsure why you feel that my response pointing out that a product's user interface is typically a more important factor in success than the product's underlying technologies was discounting the value of user experience?
> Good design IS technological superiority.
Hmm, I was attempting to respond to someone who wrote "It feels like a big pile of nothing... Big fat database schemas with big fat CRUD atop and layers of snazzy sparklines" which seemed to dramatically undervalue good schemas, CRUD implementations, or sparklines as "nothing". So to contrast those I used "technical superiority" as a catchall for the sort of challenging technical implementations that some developers lionize. Does that make sense? Is there a different term you'd suggest for that? For now I've changed to "(non-UX) technological superiority".
“Palantir is a tech platform that consumes data from their clients in return for providing high level data-driven insights. They assign FDEs (or consultants) to really learn the details of a customers data. Foundry allows them to get single pane view of the data in an org and they actually have both the tech and engineering skills to do the dirty data cleaning jobs.
For an extravagant fee, you give them your data, they clean it for you, and then those same FDEs can tell you interesting things that you should have known, had you actually done proper data architecture in the first place.”
They’re also missing the tidbit that, like any other consultancy, they provide a means for laundering a conclusion that middle management has already come to, confirmation bias be damned. Unsurprising that they’re also useful for parallel construction for LEOs.
The first half is true. They bring in their FDEs to clean and organize your data.
But the difference in what they leave behind is what separates them from classic consultancies and pure tech companies.
They don't leave behind "insights." They leave behind a suite of operational applications (ie with write capabilities, not just dashboards) that are "custom" built to actually act on those insights. I put custom in quotes because while the applications are usually bespoke to your company, they are built in Palantir's app-building product Workshop, which significantly lowers the cost of building these custom apps.
So in the end, your company's processes are improved because your employees are using the apps that the FDEs built.
This is distinct from traditional consultancies because those will only leave behind the insights. Also distinct from most SaaS because those have a one-size-fits-all approach, so you wind up having to change your company to fit the design of the application, whereas Palantir builds its applications to fit your company.
> Contrary to some media reports, we are not a surveillance company. We do not sell personal data of any kind. We don’t provide data-mining as a service.
The commercial product, Foundry, is very well documented: an extensive data platform that lets you build data pipelines (similar to Databricks) and low-code / no-code applications on top. If you master it, it's incredibly powerful, but complex.
It's not rocket science. Those particular database schemas, together with those particular CRUD layers, do something useful, and neither building nor maintaining those applications is part of the core business for most companies, so buying prebuilt from somebody else, and letting them maintain it for you, makes perfect business sense.
I don't know how you think a b2b company could run sales without a CRM like Salesforce.
To give your question a generous interpretation, Salesforce is more valuable than Apptio or your home grown CRM because it already has all the features any sales org needs, and all the fragmented sales and marketing tooling are already integrated with it.
And Sales is a very expensive and also high ROI activity. You don't want your sales team hung up trying to figure out how to get the random CRM to do something. You're not looking to cut costs in this area, you're looking to enhance the overall productivity of the org. Sales tooling overall is very expensive for this reason, any marginal edge is worth a lot.
It's also worth noting that a big value of things like Salesforce is that it lets management check up on what people are doing, because as much as HN doesn't like to admit it, people are often not very careful or diligent, and you need to perform supervision on the vast majority of people to improve their performance.
Jira is similar, in that eng is very expensive, and it's probably better than what these companies were doing beforehand, even if it is suboptimal.
It's true, literally no b2b sales companies existed before Salesforce. We must all continue to pay for Salesforce and support its workflows for now until the endless future, lest b2b sales vanish again.
Because it's hard for the government[1] to build computer systems.
Government salaries are pretty low compared to dev salaries. If the government wants to hire devs and pay them as much as private industry does, they'd have to pay them much more than what their superiors (and their superiors' superiors) make, which would destroy workplace morale. They could raise everyone's salaries, but that's deeply unpopular, as a large part of the population views all high-level government functionaries as crooks by definition.
The way you get around that is by using contractors. Contractors let you hide the cost of software development. Instead of paying $150k to a software developer (which is probably more than the director makes), you pay $10m to a company, not unusual when you also hire companies to build you planes and bridges. How that company allocates that 10m and how much they pay their engineers is no longer your concern, and no longer an embarrassment to your hierarchy and salaries.
However, writing contracts for software is hard, for the same reason waterfall is hard. You just don't really know what the requirements are before the project starts, and in a traditional RFP process, you can't accurately model what requirements are the costliest and should perhaps be reconsidered. This means contracted government projects usually turn into an exercise in checkbox-checking and terrible, unusable UIs which technically fulfill the acceptance criteria, and therefore have to be accepted.
Palantir has somehow managed to actually collaborate with the government, sending forward-deployed engineers to figure out what their actual needs are, and then writing software which fulfills exactly those needs, bringing techniques which modern tech companies have learned along the way. I don't actually know how they managed to circumvent the RFP process well enough to do this.
[1] "The government" here can apply to any government you like, not necessarily the US government.
Palantir’s product is light years ahead of anything any government IT project has ever delivered, or in my opinion ever could. They’re not even in the same league.
Counter to that, I’ve seen a £37m contract for a form on gov.uk with absolutely no change in process, just going from a letter received to an online form.
GDS is amazing. However, unless we double/triple the GDS salary grades, it'll inevitably be hollowed out. From what I heard, that might've already happened.
Look for yourself, GDS is hiring a "Lead Technical Architect" for £67,126–£91,453 https://gds.blog.gov.uk/jobs/ . FAANG (and Palantir) pays up to triple that. How can GDS compete for talent?
But how many people can you attract, and how quickly can they get the stuff done? There are a lot of sacrifices you have to make working for the gov that not everyone will make.
Horseshit, mate ... basically just pumped up database software aided and abetted by "consultants" parachuted into the client org ... like the industry has been doing since the 80's ...
Edit: I found the following on Glassdoor and, while I don't know the poster personally, it pretty much sums it up:
"If you are in Business Development (BD) - i.e. Delta or Echo - this job will be your life. They deliberately underhire - they claim it's to maintain the culture, but really it's to squeeze every ounce of productivity out of you. You are thrown into chaotic situations with no way out but to "chew glass and excrete product". Don't let the flat heirarchy and encouragement of confrontation / open debate deceive you. Karp has majority founder shares and calls the shots. The company is a dictatorship, not a democracy. Resourcing is a black box. If you are a U.S person without a clearance, you will be bait-and-switched into defense even if you thought you could avoid it. With clearance, you'll end up on something much worse. Trust your gut - the company's leadership are not wise, nuanced philosophers - they are spineless, shifty edgelords with no ethical red lines. As a FDE, you will spend half your time working around stupid limitations in the platform you could not foresee when making grand promises to the customer. Foundry is not a cutting edge product, just like Microsoft Suite is not a cutting edge product. Its just too broad for any other company to easily copy it. Palantir just brought middle-of-the-road Silicon valley tech to old-school government, slapped some AI integration onto it and shrouded it in a veil of mystery to make it seem cool and mysterious and appeal to retail investors."
It's 100% laziness on the side of procurement, aided by some good marketing and a complete lack of guardrails. Exactly the same mindset that has led to every European government now being tied to US big tech.
In government you have to deliver, most of the time the mode of delivery is boring, small, conservative, and disjointed from other government groups because large efforts of work attract big budgets, oversight and doubt.
Consultants are magic, because they come with no baggage and promise the world. They take you hostage with sunk cost fallacy and then after years they deliver something.
At the end you're so tired you think that what they did was beyond your government agency and the cycle continues.
Somebody needs to wrap up open source AI/ML and sell it to governments / defense, and do the integration... (e.g. open source Python face recognition libraries, openCV, YOLO object detection, etc. and more recently LLMs.)
Why does Palantir specifically need to exist? To funnel those juicy government budgets into shareholders' pockets.
Why does anyone bother to use them? Because they have convincing marketing (which may or may not include buttering government palms with, um, "incentives" ...)
Occam's razor: It's a big pile of "list of things being handled by an outside entity so I neither have to think about it, nor hire for them."
If Palantir wasn’t highly effective at aggregating data, no one would care about them. They are considered a threat to privacy and freedom because they are a good product.
Is Palantir actually that good? Or did all the governments just have enough brain drain they can't think of an alternative?
Like, if their product was so good, why isn't Amazon using it? Their case studies all seem to be pre-internet companies that probably never developed a computer competency.
If I bring a thermostat back into the past, all the peasants are going to think it's black magic. If I show it off as a college project, I'm not getting a passing grade.
That's part of it, but not the whole story. If Palantir were a book, explaining how to implement data aggregation systems effectively, people wouldn't be so wary of it. (Critics would still criticise that data aggregation was performed in the first place, of course, but there wouldn't be the additional "and it's Palantir".)
Yep :/ There are just no good heuristics left for quality clothing. It's horrible. One thing I do genuinely have good experience with is Japanese denim. But that's about it.
My approach is that, "you may as well" hammer Claude and get it to brute-force-investigate your codebase; worst case, you learn nothing and get a bunch of false-positive nonsense. Best case, you get new visibility into issues. Of _course_ you should be doing your own in-depth audits, but the plain fact is that people do not have time, or do not care sufficiently. But you can set up a battery of agents to do this work for you. So.. why not?
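To make that concrete, here's a minimal sketch of such a battery; the `claude -p` print-mode invocation is an assumption, so substitute whatever agent CLI you actually use:

```python
# Fan a set of audit prompts out to a coding agent and collect the reports.
# Assumes an agent CLI invocable as `claude -p "<prompt>"`; swap in your own.
import subprocess

AUDIT_PROMPTS = [
    "Find functions in this repo with unchecked error returns.",
    "List any SQL queries built via string concatenation.",
    "Flag HTTP endpoints that appear to skip authentication checks.",
]

def run_audits(repo_dir: str) -> dict[str, str]:
    """Run each audit prompt against the repo and return the raw reports."""
    reports = {}
    for prompt in AUDIT_PROMPTS:
        proc = subprocess.run(
            ["claude", "-p", prompt],
            cwd=repo_dir, capture_output=True, text=True,
        )
        reports[prompt] = proc.stdout
    return reports

if __name__ == "__main__":
    for prompt, report in run_audits(".").items():
        print(f"### {prompt}\n{report}")
```

Worst case you skim the false positives over coffee; best case one of these prompts pays for the whole experiment.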
> I work in software and for single line I write I read hundredths of them.
I'm not sure whether this should humble or confuse me. I am definitely WAY heavier on the write side of this equation. I love programming. And writing. I love them both so much that I wrote a book about programming. But I don't like reading other people's code. Nor reading generally. I can't read faster than I can talk. I envy those who can. So, reading code has always been a pain. That said, I love little clever golf-y code, nuggets of perl or bitwise magic. But whole reams of code? Hundreds upon hundreds of lines? Gosh no. But I respect anyone who has that patience. FWIW I find that one can still gain incredibly rich understanding without having to read too heavily by finding the implied contracts/interfaces and then writing up a bunch of assertions to see if you're right, TDD style.
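To illustrate that last bit, a minimal sketch of the assertion-probing approach; `mymodule` and `slugify` are hypothetical stand-ins for whatever unfamiliar code you're exploring:

```python
# Probe an implied contract instead of reading the implementation:
# write down what you *believe* the function guarantees, then let
# failing assertions correct you. The names here are hypothetical.
from mymodule import slugify

def test_assumed_contract():
    assert slugify("Hello World") == "hello-world"  # lowercases + hyphenates?
    assert slugify("  padded  ") == "padded"        # trims whitespace?
    assert slugify("a--b") == "a-b"                 # collapses separators?
    assert slugify("") == ""                        # safe on empty input?
```

Every failing assertion teaches you something real about the contract without wading through the implementation.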
Most software engineers out there do support work, augmenting source-code behemoths in the smallest possible way to achieve the desired outcome. I believe that more than 90% of software development was support roles as early as 2000 or so.
Not that I never had an opportunity to write new code, but most of my work throughout my career was either to fix bugs or to add new functionality to an existing system with as little code as possible. Both goals mean reuse and understanding of the existing code. For both "reuse" and "understanding" you have to thoroughly read the existing code a dozen or so times over.
Tests (in TDD) can show you the presence of bugs, not their absence. To be sure of the absence of bugs, one has to thoroughly know the problem domain and the source code solving its problems.
> Nowhere is it stated that it is a score out of 100.
It says it right on the homepage. Twice. Once for people, once for organisations. It’s right there in green: “BEST (SCORED OUT OF 100)”. And if you go into any of them, you see a score like N/100.
Found the methodology page, and it clarifies it goes from -100 to 100.
What value does this give you? Part of why I deleted my account was that I couldn't think of a single thing of value in my chats from the past couple of years. Maybe some nostalgia, looking at what bugs I was fixing?
For me this is very valuable. The results of personal "research projects" are in there. I use it for reference. Of course I could ask Claude to get me those answers but why waste the energy?
Thanks. I guess I understand the sentiment; I probably should not have said that I couldn't think of "a single thing of value", since that is a bit of a judgement along with my question. Anyways, it is interesting hearing what people ask it. I think I've only ever used it like a search engine / for bug fixing, while it seems some people have much deeper conversations or discussions that are worth remembering.
I'm glad I upvoted. Your perspective and questions are valid, no matter the depth of conversation. You'd be surprised what fresh questions can do for a topic.
I, for one, might use these chats as an input when switching over, to keep the learning process fast. For me it took a while for ChatGPT to get me. I know that other people delete memories because they want a clean-slate experience with every chat. I use ChatGPT mostly for private stuff (I use Claude Code for work, for instance) and I prefer that memories travel across chats.
In my case, I would rather keep it than lose it. It's just text, so a small amount of data. You can trivially get a GPT embedding for it and search it in DuckDB later for things you asked.
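Roughly like this, as a minimal sketch; it assumes the OpenAI embeddings API and DuckDB's `list_cosine_similarity` function (available in recent DuckDB versions), with the chat strings standing in for your real export:

```python
# Embed exported chats and do similarity search over them in DuckDB.
import duckdb
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> list[list[float]]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

chats = ["how do I tune my espresso grind?", "fixing a flaky pytest fixture"]
vectors = embed(chats)

con = duckdb.connect("chats.db")
con.execute("CREATE TABLE IF NOT EXISTS chats (text VARCHAR, vec FLOAT[])")
con.executemany("INSERT INTO chats VALUES (?, ?)", list(zip(chats, vectors)))

# Later: find the stored chats closest in meaning to a query.
query_vec = embed(["coffee brewing"])[0]
hits = con.execute(
    "SELECT text, list_cosine_similarity(vec, ?) AS score "
    "FROM chats ORDER BY score DESC LIMIT 5",
    [query_vec],
).fetchall()
print(hits)
```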
The AI industry, and SV tech generally, has a pattern of recruiting talent by flattering people's self-image as builders and discoverers, which makes it psychologically very difficult for those people to reckon honestly with downstream harm.
I'm surprised to see so little coverage of AI legislation news here tbh. Maybe there's an apathy and exhaustion to it. But if you're developing AI stuff, you need to keep on top of this. This is a pretty pivotal moment. NY has been busy with RAISE (frontier AI safety protocols, audits, incident reporting), S8420A (must disclose AI-generated performers in ads), GBL Article 47 (crisis detection & disclaimers for AI chatbots), S7676B (protects performers from unauthorized AI likenesses), NYC LL144 (bias audits for AI hiring tools), SAFE for Kids Act [pending] (restricts algorithmic feeds for minors). At least three of those are relevant even if your app only _serves_ people in NY. It doesn't matter where you're based. That's just one US state's laws on AI.
It's kinda funny, the oft-held animosity towards the EU's heavy-handed regulations, when navigating US state law is a complete minefield of its own.
> Because no one believes these laws or bills or acts or whatever will be enforced.
Time will tell. Texas sat on its biometric data act quite quietly, then hammered Meta with a $1.4B settlement well over a decade after the bill's enactment. Once these laws are enacted, they lie quietly until someone has a big enough bone to pick with someone else. There are already many traumatic events occurring downstream of slapdash AI development.
That's even worse, because then it's not really a law, it's a license for political persecution of anyone disfavored by whoever happens to be in power.
Every law is like this. Only fools and schoolchildren believe that the rule of law means anything other than selective punishment of those who displease the ruling class.
I agree that is how it currently is in the US, but I don't believe it is universally true or that nothing can be done to change it if enough people resisted.
My statement has nothing to do with contemporary politics and is not unique in the slightest to the US. For an example you are likely sympathetic to, consider the experience of Pavel Durov since late 2024.
"Every law" seems like a huge exaggeration. Assuming for a moment we agree Pavel is a victim of selective prosecution, notice they're not charging him with a clear, straightforward crime like murder, they're charging him with things like[1] failing to prevent illicit activity on Telegram, and "provision of cryptology services [...] without a declaration of conformity". Those laws seem far more prone to abuse as a tool for selective prosecution than most others. (Some of the things he's charged with don't even sound to me like they should be illegal in the first place.)
Every law, in the sense that cumulatively the ‘rule of law’ system has the same property of “Show me the man and I’ll show you the crime” that Beria’s system did.
I see this as roughly equivalent to amortized big O complexity. If I push to a vector repeatedly, sometimes I will incur a significant cost O(n) of reallocation, but most of the time it's still O(1).
Similarly, if Meta violates the law, and is infrequently fined a small fraction of their revenue by a small number of governments, in general it will not be a big deal for them.
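To make the vector analogy concrete, a toy sketch:

```python
# Toy dynamic array: most pushes are O(1); the occasional doubling
# resize is O(n), but amortized over n pushes the cost stays constant.
class Vec:
    def __init__(self):
        self.cap, self.n = 1, 0
        self.buf = [None]

    def push(self, x):
        if self.n == self.cap:                  # rare, expensive path: O(n)
            self.buf.extend([None] * self.cap)  # grow (copy) the buffer
            self.cap *= 2
        self.buf[self.n] = x                    # common, cheap path: O(1)
        self.n += 1

v = Vec()
for i in range(1000):
    v.push(i)  # only ~10 resizes across 1000 pushes
```

The occasional fine is the reallocation: expensive when it lands, but absorbed into business as usual.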
You also have to ask "how much is the specific thing in the lawsuit worth to Meta?"
I don't know how much automatically opting everyone in to automatic photo tagging made Meta, but I assume it's "less than 100% of their revenue".
Barring the point of contention being integral to the business's revenue model, or management of the company being infected with oppositional defiant disorder, a lawsuit is just an opportunity for some middle manager + team to get praised for making a revenue-negative change that reduces the risk of future fines.
Work like that is a gold mine; several people will probably get promoted for it.
I think time is different because it's finite. I admit I'll still opt for store brand to save a few bucks even making an engineering salary. But I'll also do something "illegal" (like parking at a metered spot without paying) to save time or otherwise do what I want and just deal with whatever financial cost incurred if I know it won't break me.
A saying I've heard is that if the punishment for a crime is financial, then it is only a deterrent for those who lack the means to pay. Small business gets caught doing bad stuff, a $30k fine could mean shutting down. Meta gets caught doing bad stuff, a billion dollar fine is almost a rounding error in their operational expenses.
Wow, it's always amazing to me how the law of unintended consequences (with capitalistic incentives acting as the Monkey's Paw) strikes every time some well-intended new law gets passed.
I don't like the opposite any more though, i.e. commercial food being effectively limited to the lowest common denominator of allergens and other dietary as well as religious restrictions. I see that happen a lot more than this one example and it doesn't even need any laws to cause it.
There just can’t be a way to discriminate on the spectrum from “we use AI to tidy up the spelling and grammar” to “we just asked ChatGPT to write a story on x”, so the disclaimer will make it look like everyone just asked ChatGPT.
>There just can’t be a way to discriminate on the spectrum from “we use AI to tidy up the spelling and grammar” to “we just asked ChatGPT to write a story on x”
Why though? Whether the AI played the role of an editor or the role of a reporter seems like a clear distinction to me, and likely to anyone else familiar enough with how journalism works.
People know what it _should_ mean, but if you say that it’s fine to have an AI editor, then there will be a bunch of people saying something like “my reporting is that x is a story, and my editor, ChatGPT, just tidied that idea up into a full story”. There’s all sorts of hoops people can jump through like that. So you end up putting a banner on all AI, or only penalizing the honest people who follow the distinction that’s supposed to exist.
Fair enough, but my main response to that is that people need to support independent journalism. It's entirely possible I'm paying some fraud(s), but as someone who certainly spends more than the average person on online journalism, I trust the people I support at the very least know that putting their byline on an AI written article would be a career destroying scandal in the eyes of their current audience.
I'm fine with that. I want neither AI-hallucinated stories nor AI-expanded fluff. If it's not worth it for a real human editor it's probably not worth reading.
I just came across this for the first time. I ordered a precision screwdriver kit and it came with a cancer warning on it. I was really taken aback, and then learned about this.
Some legislation which sounds good in concept and is well-intended ends up having little to no positive impact in practice. But it still leaves businesses with ongoing compliance costs/risks, taxpayers footing the bill for an enforcement bureaucracy forever, and consumers with either annoying warning interruptions or yet more 'warning message noise'.
It's odd that legislators seem largely incapable of learning from the rich history of past legislative mistakes. Regulation needs to be narrowly targeted and clearly defined, and someone smart needs to actually think through how the real world will comply, as well as identify likely unintended consequences and perverse incentives. Another net improvement would be for any new regs to have an automatic sunset provision, where they need to be renewed a few years later under a process which makes it easy to revise or relax certain provisions.
If you don't notice then it was probably not something you considered essential. Breaking the tracking of you and your personal information is kind of the point.
I do believe this is an unfair comparison. With tobacco the warnings are always true, but with prop 65 the product might not contain any cancer-causing ingredients; the warning is there just in case.
It's much easier to tell yourself prop 65 warnings don't have to be heeded because "it's probably just there to cover their asses", while tobacco products have real warnings that definitely mean danger (though there are people who convince themselves otherwise).
Also even if there's a prop 65 warning because there are cancer-causing ingredients, those ingredients may not be user-accessible or may be in tiny enough quantities that they'd statistically never result in cancer even with lifetime use by every human on the planet. E.g. lead in a circuit board inside an IP-68 rated sealed device would require a prop 65 warning even though it won't pose any cancer risk to the user unless they grind up the device & ingest or inhale the lead.
But that is because the requirement is binary - warning vs. no warning. This problem doesn't happen if the requirement is to disclose what was used although it could still lead to other issues.
I don't know of anyone (seriously not one person) who actually believes those labels. And the reason why is precisely because the government was foolish enough to put them on everything under the sun. Now nobody listens to them because the seriousness got diluted.
The primary obstacle is discussions like this one. It will be enforced if people insist it's enforced - the power comes from the voters. If a large portion of the population - especially the informed population, represented to some extent here on HN - thinks it's hopeless, then it will be. If they believe they will get together to make it succeed, it will. It's that simple: whatever people believe is the number one determinant of the outcome. Why do you think so many invest so much in manipulating public opinion?
Many people here love SV hackers who have done the impossible, like Musk. Could you imagine this conversation at an early SpaceX planning meeting? That was a much harder task, requiring inventing new technology and enormous sums of money.
Lots of regulations are enforced and effective. Your food, drugs, highways, airplane flights, etc. are all pretty safe. Voters compelling their representatives is commonplace.
It's right out of psyops to get people to despair - look at messages used by militaries targeted at opposing troops. If those opposing this bill created propaganda, it would look like the comments in this thread.
> Because no one believes these laws or bills or acts or whatever will be enforced.
That’s because they can’t be.
People assume they’ve already figured out how AI behaves and that they can just mandate specific "proper" ways to use it.
The reality is that AI companies and users are going to keep refining these tools until they're indistinguishable from human work whenever they want them to be.
Even if the models still make mistakes, the idea that you can just ban AI from certain settings is a fantasy because there’s no technical way to actually guarantee enforcement.
You’re essentially passing laws that only apply to people who volunteer to follow them, because once someone decides to hide their AI use, you won't be able to prove it anyway.
> the idea that you can just ban AI from certain settings is a fantasy because there’s no technical way to actually guarantee enforcement.
By that token bans on illegal drugs are fantasy. Whereas in fact, enforcement doesn't need to be guaranteed to be effective.
There may be few technical means to distinguish at the moment. But could that have something to do with lack of motivation? Let's see how many "AI" $$$ suddenly become available to this once this law provides the incentive.
I always wanted to try two specific ones, but the first cannot be had in its safest form because of a specific precursor ban, and all of them suffer from an insane (to me) risk of adulteration.
In twenty minutes I could probably find 10 "reputable" shops/markets, but still with 0 guarantee I won't get the specific thing laced with something for strength.
Even if I wanted pot (I don't; I found it repetitive and extremely boring, except for one experience), I would have to grow it myself (stench!), but then... where do I find sane seeds (a healthy CBD-to-THC ratio)?
Similarly, I wouldn't buy moonshine from someone risking prosecution to make and sell it. You can be sure that risk is priced in.
So... I can't get what I want because there's an extremely high chance of getting hurt. An example being poisoning from pills sold as MDMA: every music festival, multiple people hurt. Not by Molly, by additives.
I don't want a random weed. I can easily get it myself on the street (there are several places with the distinct smell), and I know at least 3-4 people who I know smoke.
But I want it safe (not PCP/fentanyl-sprinkled) and sane (not engineered for a 'kick').
I don't know anyone who's a cultivator themselves :)
Sure they can be enforced. Your comment seems to be based on the idea of detecting AI writing from the output. But you can enforce this law based on the way content is created, the same way you can enforce food safety laws from the conditions of the kitchen, not the taste of the food. Child labor laws can be enforced. And so on.
Unless you're trying to tell me that writers won't report on their business that's trying to replace them with AI.
The idea that you can just ban drinking and driving is a fantasy because there’s no technical way to actually guarantee enforcement.
I know that sounds ridiculous but it kind of illustrates the problem with your logic. We don’t just write laws that are guaranteed to have 100% compliance and/or 100% successful enforcement. If that were the case, we’d have way fewer laws and little need for courts/a broader judicial system.
The goal is getting most AI companies to comply and making sure that most of those that don’t follow the law face sufficient punishment to discourage them (and others). Additionally, you use that opportunity to undo what damage you can, be it restitution or otherwise for those negatively impacted.
> passing laws that only apply to people who volunteer to follow them
That's a concerning lens to view regulations through. Obviously true, but for all laws. Regulations don't apply only to immediately observable offenses.
There are lots of bad actors and instances where the law is ignored because getting caught isn't likely. Those are conspiracies! They get harder to maintain with more people involved and the reason for whistle-blower protections.
VW's Dieselgate[1] comes to mind albeit via measurable discrepancy. Maybe Enron or WorldCom (via Cynthia Cooper) [2] is a better example.
But most regulations are, and can be, enforced because the perpetrator can simply be caught. That’s the difference. This is not enforceable in any meaningful way. The only way it could change anything would be through whistleblowers, for example someone inside a major outlet like the New York Times reporting to authorities that AI was being used. On the contrary, if you systematically create laws that are, by their nature, impossible to enforce, you weaken trust in the law itself by turning it into something that exists more on paper than in reality.
* I suspect many existing and reasonable regulations do not meet that "simply caught" classification. @rconti's comment above[1] gives some examples of regulations on process that are not observed in the output (food, child labor). I'll add accounting, information control (HIPAA, CUI, etc), environmental protections.
* Newsroom staff is incentivized to enforce the regulation. It protects their livelihood. From the article:
> Notably, the bill would cement some labor protections for newsroom workers
* Mandatory AI labeling is not impossible to enforce. At worst, it requires random audits (who was paid to write this story, do they attest to doing so). At best, it encourages preemptive provenance tracking (that could even be accessible to the news consumer! I'd like that).
One reason for the regulation is we fear hallucinations slipping into the public record -- even if most LLM usage is useful/harmless. Legal restrictions ideally prevent this, but also give a mechanism for recourse when it does happen.
Say a news story goes off the rails and reports a police officer turned into a frog [2] or makes up some law[3]. Someone thinks that's odd and alerts whatever authority. The publisher can be investigated, reprimanded, and ideally motivated to provide better labeling or QC on their LLM usage.
No, that doesn't really work so well. A lot of the LLM style hallmarks are still present when you ask them to write in another style, so a good quantitative linguist can find them: https://hdsr.mitpress.mit.edu/pub/pyo0xs3k/release/2
That was with GPT-4, but my own work with other LLMs shows they have very distinctive styles even if you specifically prompt them with a chunk of human text to imitate. I think instruction-tuning with tasks like summarization predisposes them to certain grammatical structures, so their output is always more information-dense and formal than humans'.
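As a toy illustration of the kind of features involved (crude stand-ins, not the linked paper's actual method):

```python
# Crude stylometric features that tend to differ between corpora.
import re
from collections import Counter

def features(text: str) -> dict[str, float]:
    words = re.findall(r"[a-z']+", text.lower())
    sents = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = Counter(words)
    return {
        "avg_sentence_len": len(words) / max(len(sents), 1),
        "type_token_ratio": len(counts) / max(len(words), 1),
        # LLM-flavored discourse markers; substitute any feature set you like.
        "marker_rate": sum(counts[w] for w in
                           ("moreover", "additionally", "overall"))
                       / max(len(words), 1),
    }

sample = "Moreover, officials declined to comment. Overall, the plan stalled."
print(features(sample))
```

Real stylometry uses far richer feature sets and proper statistics, but even crude counts like these separate authors surprisingly often.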
Probably worse than that. I can totally see it being weaponized: a media company critical of a particular group or individual being scrutinized and fined. I haven’t looked at any of these laws, but I bet their language gives plenty of room for interpretation and enforcement, perhaps even if you are not generating any content with AI.
> But I actually believe they'll be. In the worst way possible: honest players will be punished disproportionally.
As with everything else, BigCo with their legal team will explain to the enforcers why their "right up to the line, if not over it" solution is compliant, and MediumCo and SmallCo will be the ones getting fined or forced to waste money staying far from the line or paying a third party to do what BigCo's legal team does at cost.
Highly selective enforcement along partisan lines to suppress dissent. Government officials forcing you to prove that your post is not AI generated if they don't like it. Those same officials claiming that it is AI generated regardless of the facts on the ground to have it removed and you arrested.
If you assume the use of law will be that capricious in general, then any law at all would be considered too dangerous for fear of use as a partisan tool.
Why accuse your enemies of using AI-generated content in posts? Just call them domestic terrorists for violently misleading the public via the content of their posts and send the FBI or DHS after them. A new law or lack thereof changes nothing.
> SAFE for Kids Act [pending] (restricts algorithmic feeds for minors).
i personally would love to see something like this but changed a little:
for every user (not just minors), require a toggle: an upfront, not buried, always-in-your-face toggle to turn off algorithmic feeds, where you’ll only see posts from people you follow, in the order in which they post. again, no dark patterns; once a user toggles to a non-algorithmic feed, it should stick.
this would do a lot to restore trust. i don’t really use the big social medias much any more, but when i did i can not tell you how many posts i missed because the algorithms are kinda dumb af. like i missed friends’ anniversary celebrations, events that were right up my alley, community projects, etc… because the algorithms didn’t think the posts announcing the events would be addictive enough for me.
no need to force it “for the kids” when they can just give everyone the choice.
None of those bills/laws involve legislating publishing though. This bill would require a disclaimer on something published. That’s a freedom-of-speech issue, so it’s going to be tougher to enforce and to keep from getting overturned in the courts. The question here is what limits the government can place on what a company publishes, regardless of how the content is generated.
IMO, it’s a much tougher problem (legally) than protecting actors from AI infringement on their likeness. AI services are easier to regulate; published AI-generated content, much more difficult.
The article also mentions efforts by news unions or guilds. This might be a more effective mechanism. If a person/union/guild required members to add a tagline to their content/articles, this would have a similar effect: showing what is and what is not AI content without restricting speech.
Not thrilled about it, and I personally would rather see them repealed. I will concede that compelled-speech impositions have been interpreted more generously when they are commercial. I don't necessarily agree with that, but even if we concede they can happen, I hope that distinction is made between commercial and non-commercial content. Though I'm not thrilled with it happening for either.
I agree in general, and that should be the position, but it's probably more nuanced than this in practice: who published it when it's a dev who writes a script that just spits junk into the wild or reinforces someone else's troll-speech?
In general, I think LLM content has been found to not be copyrightable, but it would still be speech when it's published. It would be the speech of the company publishing it, not the dev that wrote the script. So, ai-junk-news.com is still publishing some kind of speech, even if it was an LLM that wrote it. At least, that would be my interpretation.
I'll bet AI is going to be simply outlawed for hiring, and possibly algorithmic hiring practices altogether. You can't audit a non-deterministic system unless you train the AI from scratch, which is an expense only the wealthiest companies can take on.
Don't ding the amusingly scoped animosity, it's very convenient: we get to say stuff like "Sure, our laws may keep us at the mercy of big corps unlike these other people, BUT..." and have a ready rationalization for why our side is actually still superior when you look at it. Imagine what would happen if the populace figured it's getting collectively shafted in a way others may not.
I believe it’s because it will be impossible to enforce. It might have some teeth with LLMs that add watermarks to their images but otherwise you could have one human in the loop for 10,000 articles and not call it AI.
I honestly just don't see any point in these laws, because they're all predicated on the people who own the AIs acting in good faith. In a way I actually think they're a net negative, because they seem to give a false impression that these problems have an obvious solution.
One of the most persistent and also the dumbest opinion I keep seeing both among laymen and people who really ought to know better is that we can solve the deepfake problem by mandating digital watermarks on generated content.
~Everything will use AI at some point. This is like requiring a disclaimer for using Javascript back when it was introduced. It's unfortunate but I think ultimately a losing battle.
Plus, if you want to mandate it, hidden markers (steganography emitted directly by the model) to verify which model generated the text, so people can independently check whether articles were written by humans, are probably the only feasible way. But it's not like humans are impartial when writing news anyway, so I don't even see the point of that.
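For what it's worth, here's a toy sketch of how such a statistical text watermark check could work (a simplified "green list" scheme; the hashing and key are illustrative assumptions, not any vendor's actual method):

```python
# Toy "green list" watermark check: a generator that favors a keyed,
# pseudorandom half of the vocabulary leaves a statistical trace that
# only a verifier holding the key can test for.
import hashlib

def is_green(prev_tok: str, tok: str, key: str = "secret") -> bool:
    """Pseudorandomly assign `tok` to the green half, keyed on context."""
    digest = hashlib.sha256(f"{key}:{prev_tok}:{tok}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    pairs = list(zip(tokens, tokens[1:]))
    return sum(is_green(a, b) for a, b in pairs) / max(len(pairs), 1)

# Unwatermarked text hovers near 0.5; a watermarked generator that mostly
# picks green tokens scores well above that.
print(green_fraction("the quick brown fox jumps over the lazy dog".split()))
```

And of course a paraphrase or light edit erodes the signal, which is part of why mandated watermarks are a weak defense.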
It would make sense to have a more general law about accountability for the contents of news. If news is significantly misleading or plagiarizing, it shouldn’t matter if it is due to the use of AI or not, the human editorship should be liable in either case.
This is a concept in at least some EU countries: there always has to be one person responsible, in terms of press law, for what is being published.
That's government censorship, and it's not allowed here, unlike in the EU. As for plagiarism, every single major news outlet is guilty of it in basically every single article. Have you ever seen the NYT cite a source?
You’re still allowed to say virtually anything you want if you make it clear that it’s an opinion and not news reporting.
Not citing sources doesn’t imply plagiarism, as long as you don’t misrepresent someone else’s research as your own (such as in an academic paper). Giving an account of news that you heard elsewhere in your own words isn’t plagiarism. The hurdles for plagiarism are generally relatively high.
If a news person in the USA publishes something that's actually criminal, then the corporate veil can be pierced. If the editor printed CSAM they would be in prison lickety-split. Unless they have close connections to the executive.
Most regulations around disclaimers in the USA are just civil and the corporate veil won't be pierced.
I agree with that the most. That's why I added the bit about humans. In the end, if what you're writing is not sourced properly or is too biased, it shouldn't matter whether AI is involved or not. With news, the truth is what matters most.
> I'm surprised to see so little coverage of AI legislation news here tbh.
I think the reason is that most people don't believe, at least on sufficiently long time scales, that legacy states are likely to be able to shape AI (or, for that matter, the internet). The legitimacy of the US state appears to be in a sort of free-fall, for example.
It takes a long time to fully (or even mostly) understand the various machinations of legislative action (let alone executive discretion, and then judicial interpretation), and in that time, regardless of what happens in various capitol buildings, the tests pass and the code runs - for better and for worse.
And even amidst a diversity of views/assessments of the future of the state, there seems to be near consensus regarding the underlying impetus: obviously humans and AI are distinct, and hearing the news from a human, particularly a human with a strong web-of-trust connection in your local society, is massively more credible. What's not clear is whether states have a role to play in lending clarity to the situation, or whether that will happen of the internet's own accord.