I don't think it's hyper-individualistic.
In fact, it's the opposite of that. It groups people by random characteristics and then it describes "identity" as the collection of these characteristics, rather than something unique to each of us.
Indeed there is not, but there is such a single thing as the "United States" (even though there are multiple states), and certain forms of "activism" are specific to the US, or at most to the anglosphere.
Europe is Western, but it's not the US.
> Over here in Europe, people have a much more relaxed approach to AI safety.
And to most things, to be honest.
Politics in Europe (I'm Italian) is nuts. But it doesn't even remotely approach the nut-level of the USA.
> Outside the English-speaking world, identity politics didn't really take hold.
To some extent it did, but only as an indirect influence from the anglosphere. In fact, most of the linguistic games Americans like to play to pander to one group or another don't translate well, if at all.
> An European leftist is much closer to an American leftist of 2010 than to one of 2023.
European leftists may well be to the left of Americans when it comes to economics. An Italian right-winger is probably to the left of an American left-winger.
Socially, the focus tends to be on actual issues and actual discrimination. Trying to continuously change the way language is used gets nothing but the ridicule it deserves. Meanwhile, anglophones decided to take offense at the name of the default Git branch.
Personally:
1) I do think that what Mistral did is based.
2) I do advocate for uncensored LLMs.
3) Mistral actually does refuse to answer certain queries. I wish it didn't, but it does: https://twitter.com/Aspie96/status/1707980233998012669
I'm not saying Mistral added restrictions on purpose. It may very well be that the refusals are a side effect of training data from other GPT models that they didn't clean out, of the fact that humans also refuse, or even of some pattern the AI picked up that I wouldn't think of.
Mistral does claim that the models (both the base one and the instruct one) have no moderation mechanism. I don't know whether that's true, but it is what they claim. The article also mentions that Mistral does produce some refusals, but it fails to consider that this might be unintentional.
Personally, I do advocate for giving everybody indiscriminate, unfettered and unrestricted access to maximally powerful free and open source (which LLaMA and LLaMA 2 are NOT) AI software, including foundational LLMs that are entirely unrestricted, un-moderated, and maximally neutral and versatile.
> On one side are AI companies like OpenAI, researchers, and users who believe that for safety reasons, it is best for AI to be developed behind closed doors,
Generally, we should assume good faith until proven otherwise. But OpenAI is not "generally", and I believe it has done a very good job of proving otherwise.

I don't think it's fair to say OpenAI believes any of that. OpenAI claims to believe certain things, and those are exactly the things that make AI look dangerous unless OpenAI controls it.
> On the other side is another coalition of companies, researchers, and e/acc shitposters who think the safer, more productive, and ethical way to develop AI is to make everything open source.
Ah, yes. If you believe that, you are either a company, a researcher, or an e/acc shitposter.
Personally, I don't belong to the e/acc movement. I don't share their ideas, and I don't think we need to "accelerate"; I think the current speed is fine. I also don't associate myself with any movement.
I do think open source is good, and I think treating AI as a special kind of software is a mistake. The same arguments for open source still apply to AI.
I guess that makes me a shitposter.
> Obviously, as Röttger’s list of prompts for Mistral’s LLM shows, this openness comes with a level of risk.
I think this is far from obvious. In fact, I think it's far from true, and the article does nothing to support the idea.
Now, some users say that "Röttger's list" doesn't actually reflect their experience. Indeed, the author didn't mention whether those were the first responses he got or ones carefully cherry-picked after many attempts.
But a model that does exactly what the user wants is exactly how it should be. Mistral is not that, but it should be.