Article 50(1) states that AI systems which interact directly with natural persons must inform them that they are interacting with an AI system.
Article 50(2) states that AI-generated synthetic audio, image, video or text content must be marked as such. However, this requirement applies to "providers" of AI systems, and according to Article 3(3) that is:
> ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge;
So it sounds like it would apply to e.g. Anthropic via Claude Code, not to users of Claude Code.
It's also unclear if this would apply to the compiled output or not.
> It would be excellent to see some progress, in expanding & respecting our human rights to privacy.
There are many laws in place in the EU which forbid all kinds of practices that infringe on privacy, but the issue is that governments don't really enforce them proactively. And in cases where governments are the ones breaking them (e.g. by enacting a law that is not compatible with the EU Charter or the ECHR), it can take a long time to get a judgment which forbids the practice.
Often the path is: you complain to the DPA, you appeal to a court, you appeal to a higher court (repeat the last step X times); during the court appeals you may need to wait for a CJEU ruling, and finally you might be able to file an appeal to the ECtHR.
In one "recent" case from Finland the original DPA decision was issued in August 2020. I'm not sure how long this exact case took at the DPA stage, but there are recent decisions which took 5 years to issue. The decision was appealed to the administrative court, and the court made a reference to the CJEU in November 2021. The CJEU gave its ruling in June 2023, and the administrative court ruled in December 2023. That was appealed, and the supreme administrative court gave its ruling in June 2025.
So it could take 10 years to annul an illegal law or practice.
But there was a period of time when using Openclaw via Claude Code (via -p) was not allowed and it even gave an error message in that case. That's why people find the constantly changing messaging confusing.
What new requirements can be set by the board? As far as I understand, the EDPB can only issue guidelines, recommendations and best practices. All of these are just guidance on how to interpret the GDPR. Courts are the ones who ultimately decide whether you are complying with the GDPR. A local DPA likely won't harshly punish you for having followed the EDPB's recommendations even if those recommendations end up getting overturned by a court.
The DPA won't punish you for not following the EDPB's recommendations; it will punish you for breaking the GDPR. You are free to ignore the EDPB if you think your legal position is strong, but you carry the risk if you are wrong.
Google has specifically said that certain API keys, like Firebase's, are not secrets (since people will find them anyway)... though Gemini then ended up changing things. https://news.ycombinator.com/item?id=47156925
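For context on why such keys can't be treated as secrets: a typical Firebase web app ships its configuration, including the apiKey, inside the client-side JavaScript bundle, so anyone loading the page can read it. A sketch of what that shipped config looks like (all values here are made up):

```javascript
// A typical Firebase web config as it appears in a client-side bundle.
// The apiKey identifies the project to Google's APIs; it does not grant
// access by itself. All of these values are hypothetical examples.
const firebaseConfig = {
  apiKey: "AIzaSyExampleExampleExampleExample123",
  authDomain: "example-app.firebaseapp.com",
  projectId: "example-app",
  appId: "1:123456789012:web:abcdef123456",
};

// Access control comes from server-side Firebase Security Rules,
// not from hiding this object: any end user can extract it with
// the browser's developer tools.
console.log(Object.keys(firebaseConfig).join(","));
```

This is why Google's position is that restricting what the key is allowed to do (rules, quotas, domain restrictions) matters, while hiding the key does not.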
If PIs can "legally" do it, then there must be a law which allows them to do it. That law can be revoked (unless the power comes from the Constitution, which would make it effectively impossible to revoke).
Note that PIs are effectively illegal under the GDPR by default. They would generally need to provide an Article 13 notice, i.e. you would become aware of them unless they were just asking around without actually following you. Member states can make them legal, though (via Article 23), and in many cases they likely have done so.
In the US, PI licensing only covers PI work for hire. The actual acts of going through public records, following cars and whatnot do not require a license; you can spy on anyone without a license as long as you don't get paid for it.
The EU is more complicated, but Article 14(5)(b) allows withholding notice if it would impair or defeat the purpose of the processing. The PI must, however, apply "safeguards", whatever that may mean.
Article 14(5)(b) does, but that only applies to the Article 14 notice (personal data not obtained directly from the data subject). Article 13 (personal data obtained directly from the data subject) has no such exception in the GDPR itself.
This becomes extremely relevant when you read it in light of the C-422/24 decision. In that case, personal data collected via body-worn cameras was determined to be "directly obtained". Paragraph 41 of the judgment:
> If it were accepted that Article 14 of the GDPR applies where personal data are collected by means of a body camera, the data subject would not receive any information at the time of collection, even though he or she is the source of those data, which would allow the controller not to provide information to that data subject immediately. Therefore, such an interpretation would carry the risk of the collection of personal data escaping the knowledge of the data subject and giving rise to hidden surveillance practices. Such a consequence would be incompatible with the objective, referred to in the preceding paragraph, of ensuring a high level of protection of the fundamental rights and freedoms of natural persons.
Given this, it's very unlikely that PI observation (especially if they record) could be considered Article 14 rather than Article 13 collection, as it's exactly the "hidden surveillance practice" that the Court warned about.
Member states do have a right to restrict the Article 13 disclosure obligations via an Article 23 restriction, but that requires a specific law in the member state, and the law itself must fulfill the requirements that Article 23 sets out. Article 23(2) essentially forbids leaving everything up to the controller.
And as far as PI work in the US goes, the actions involved in stalking and in "PI for self" tend to be so similar that I wouldn't necessarily recommend anyone try it.
Surely EU members should care if Spain blocks access to government services offered by other EU members. In Finland, various government services (like the Police's website) do use Cloudflare.
And Spain is not blocking access for Spain's citizens; it's blocking access for people in Spain. These could be citizens of other EU members who need to access their own government's website for one reason or another (e.g. renewing a passport) while they visit or reside in Spain.
> Seems obvious at this point there needs to be EU-level regulations against individual countries, such as Spain and Italy, implementing these absurd restrictions.
I don't think there is an EU-level "regulation" on this specific thing. However, there is something somewhat better: the European Convention on Human Rights. It's just that challenging these kinds of bans via that route is very slow (similar to how slow it is to challenge unconstitutional laws in the US via the Supreme Court).
Yeah, if this is stopped, it'll be because of the EU Charter of Fundamental Rights or the ECHR.
The Charter and the European Court of Justice are why we don't have blanket data retention in the EU, but it took eight years to strike down the Data Retention Directive (though it was killed off much faster in some national courts).
You don't need to colocate the solar, but you need to make sure you can get that power when you actually need it.
During a crisis, nations are going to restrict electricity exports and prioritize their own residents. Electricity generated in Germany is not going to warm up the Nordic countries if Germany doesn't let it out.
Wires are also susceptible to sabotage, especially undersea ones (which are the current major connection points to the rest of Europe).
Sure, that is the current situation, but if the Nordic countries started relying on solar from central Europe (especially Finland, since it doesn't have the hydro capacity Norway and Sweden have), things could get ugly during a crisis.
The GP essentially framed overprovisioned solar as a solution for anyone who might otherwise rely on nuclear, without taking into account the realities in many countries.