Aerroon's comments | Hacker News

I'll bite: what's the safety reason for an app to ask for age verification?

What kind of apps do you people use that are so dangerous? Does the computer zap you if you misuse the app or what?


For example, an app may want to disable chat or private messages with other people if the user is a kid.

For anyone wondering:

EAW = European Arrest Warrant

EIO = European Investigative Order (basically lets different jurisdictions demand information from each other)

CJEU = Court of Justice of the EU (think of it as a supreme court)


Also, IANAL = I Am Not A Lawyer. If you really want to cover yourself from a legal standpoint, write out the full sentence; "IANAL" could mean anything.

That being said: I am not a lawyer, I am not a legal professional, and this is not legal advice.


Would people botting in video games get a similar sentence?

I'm jealous. It took me far longer and much more frustration to get it to run.

Had to get the right Python version and make sure it didn't break anything with the previous Python version. A friend suggested using Docker, so I started down that path until I realized I'd probably have to set the whole thing up there myself. Eventually got it to run and I think I didn't break anything else.

I hate Python so much.


Nowadays these frustrations shouldn't be a thing any more. If the author used uv, the script would be able to install its own dependencies and just work.
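To illustrate the uv feature referred to above: a script can declare its own dependencies in an inline metadata header (PEP 723), and `uv run script.py` installs them into a throwaway environment automatically. A minimal sketch (the `rich` dependency is a hypothetical example; the script body is stdlib-only):

```python
# /// script
# requires-python = ">=3.9"
# dependencies = ["rich"]   # hypothetical dep; uv installs it into an isolated env
# ///
# With the header above, `uv run script.py` resolves and installs the listed
# dependencies before executing -- no manual environment setup. The header is
# just comments, so the script still runs under a plain Python too.
import sys
print(sys.executable)       # under `uv run`, this points into uv's managed env
```

Under plain `python script.py` the header is ignored; under `uv run` it drives dependency resolution.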

Yeah, let me add uv and conda support to make it easier.

Thanks! I asked my bot to make me a plugin for it and it one-shotted it, the resulting script was ~20 lines, very nice!

One of the most responsive developers I’ve ever seen, kudos

Why don't you use some kind of environment, Conda or something like that?

I used uv, which should have produced a stable environment. No dice. There's a bug in spaCy.

I suspect success varies a lot between macOS and Linux; the spaCy bug only shows up on newer Pythons (3.14 and later), which Linux is more likely to have.


Thanks for pointing these errors out. We're looking into it and will help get it fixed.

Even the built-in venv would've solved most of his issues. But I agree with him that Python documentation could be better, or at least more unified. It feels like every other how-to doc I read on setting up something in Python uses a different environment-management tool.
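The built-in route mentioned above can even be driven from Python itself: the stdlib `venv` module creates the same isolated environment as `python -m venv .venv` on the command line. A minimal sketch (the `demo-venv` path is just an example name):

```python
import pathlib
import subprocess
import sys
import tempfile
import venv

# Create an isolated environment programmatically (equivalent to
# `python -m venv demo-venv` in a shell). with_pip=False keeps it fast.
target = pathlib.Path(tempfile.mkdtemp()) / "demo-venv"
venv.create(target, with_pip=False)

# The venv has its own interpreter; asking it for sys.prefix shows that
# it resolves to the environment, not the system Python.
py = target / ("Scripts" if sys.platform == "win32" else "bin") / "python"
out = subprocess.run([str(py), "-c", "import sys; print(sys.prefix)"],
                     capture_output=True, text=True)
print(out.stdout.strip())   # path ending in demo-venv
```

Anything `pip install`ed with that interpreter lands inside the venv, leaving the system Python untouched.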

Conda was fantastic up to some point last year and since then I've had quite a few unresolvable version issues with it. It is really annoying, especially when you're tying multiple things together and each requires its own set of mutually exclusive specific versions of libraries. The latest like that was gnu radio and some out-of-tree stuff at the same time as a bluetooth library. High drama. I eventually gave up, rewrote the whole thing in a different language and it took less time than I had spent on trying to get the python solution duct-taped together.

I should learn to give up quicker.


Because I need a new version of Python very rarely (years go by). I don't remember all the arcane incantations to set everything up.

I did eventually do that though, and I'm pretty sure I had to mess about with installing and uninstalling torch.

I dread using anything made in Python because of this. If the Python version is incompatible, it's always annoying and never just works; otherwise it's fine.


I don't know, I'm pretty happy with Conda. I just create a new environment and install into it. It normally works.

Even if you have to install using pip, it only affects the active environment.

Maybe I'm only trying simple things.


Two words: Nix Flakes

Damn, really sorry for the inconvenience; it looks like some folks are hitting bad environment issues. We're working on fixing this.

It's absolutely not your fault. It's a skill issue and compatibility issue on my end and/or python. You guys are doing amazing.

>usage subsidization

Is this actually the case though? Because I can't imagine what kind of hardware they're running to have costs per 1M tokens be above like $3.


The politicians can talk, but they needed to set up an environment that would've let a European company have a decent shot at competing with the best AI models. But they didn't. Should've thought of that before being proud of setting up those strict tech regulations.

That is not how the EU does things. If you want no regulation and access to capital, you should go to the US.

AI will take over a lot, and the biggest AI companies will be in the US and China. But there will also be room for Europe on the top 10 list.

And there will be an environment that creates much more sovereignty from the US than before. We have learned our lesson.


Will there be room for Europe though? Doesn't look like it based on other tech markets.

You're absolutely right to notice that! Let's break it down:

It's not just a trope—it's a mindset. And the name? It's the answer. Delving into the intricate tapestry of language reveals the underlying formulation: “It's not X, it's Y.” That is the name.

Would you like me to draw up a list of other common AI phrases for you?


Felt an instant urge to nuke your comment if I could. Excellent work.

Either some q3 quant or, since it's a MoE, maybe a REAP version of q4 might work (or it could be terrible; I'm not sure about REAP'd models).
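The quant sizes in play can be sanity-checked with a rough back-of-envelope. A minimal sketch, assuming a hypothetical 120B-parameter model and an assumed ~10% overhead factor for embeddings, scales, and metadata (a fudge factor, not an exact formula):

```python
def quantized_size_gb(params_billions, bits_per_weight, overhead=1.1):
    # Rough size estimate: parameter count times bits per weight, divided
    # by 8 bits/byte, plus an assumed overhead factor (not exact).
    return params_billions * bits_per_weight / 8 * overhead

for bits in (3, 4):
    print(f"q{bits}: {quantized_size_gb(120, bits):.1f} GB")  # q3: 49.5, q4: 66.0
```

This is why dropping from q4 to q3 (or pruning experts, as REAP does) can be the difference between fitting in VRAM and not.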

The SD ecosystem in large part was grassroots and focused on nsfw. I think current LLM companies would have a hard time getting that to happen due to their safety stuff.

Fine-tuning does exist on the major model providers, and presumably already uses LoRA. (Not sure though.)
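For context, the LoRA idea referred to above can be sketched in a few lines: instead of updating a frozen weight matrix W, you train a low-rank delta B @ A (rank r much smaller than the matrix dimensions), which is far cheaper to store and serve. A pure-Python toy, not any provider's actual API:

```python
def matmul(M, N):
    # Naive matrix multiply for small lists-of-lists.
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

W = [[1.0, 2.0], [3.0, 4.0]]   # frozen base weight (2x2)
A = [[0.1, 0.2]]               # trainable factor, r x d_in with r = 1
B = [[0.0], [0.0]]             # trainable factor, d_out x r; zero-initialized
delta = matmul(B, A)           # low-rank update, 2x2
W_adapted = [[W[i][j] + delta[i][j] for j in range(2)] for i in range(2)]
print(W_adapted == W)          # True: with B at zero, the delta starts at zero,
                               # so the adapted model matches the base at init
```

Zero-initializing B is the standard trick: fine-tuning starts from exactly the base model's behavior and only gradually diverges.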

We saw last year that it's remarkably easy to bypass safety filters by fine-tuning GPT, even when the fine-tuning seems innocuous; e.g., the paper about security-research fine-tuning (getting the model to add vulnerabilities) producing misaligned outputs in other areas. It seems like it flipped some kind of global evil neuron. (Maybe they can freeze that one during fine-tuning? haha)

Found it: Emergent Misalignment

https://news.ycombinator.com/item?id=43176553

https://news.ycombinator.com/item?id=44554865


How about not making these unenforced laws in the first place so that European companies could actually have a chance at competing? We're going to suffer the externalities of AI either way, but at least there would be a chance that a European company could be relevant.

The AI Act absolutely befuddled me. How could you release relatively strict regulation for a technology that isn't really being used yet and is in the early stages of development? How did they not foresee this kneecapping AI investment and development in Europe? If I were a tinfoil hat wearer I'd probably say that this was intentional sabotage, because this was such an obvious consequence.

Mistral is great, but they haven't kept up with Qwen (at least with Mistral Small 4). Leanstral seems interesting, so we'll have to see how it does.


Because the AI act was mostly written to address issues with ML products and services. It was mostly done before ChatGPT happened, so all the foundation model stuff got shoehorned in.

Speaking as someone who's been doing stats and ML for a while now, the AI act is pretty good. The compliance burden falls mostly on the companies big enough to handle it.

The foundation model parts are stupid though.


>Because the AI act was mostly written to address issues with ML products and services. It was mostly done before ChatGPT happened, so all the foundation model stuff got shoehorned in.

It's not an excuse. Anybody with half a working brain should've been able to tell that this was going to happen. You can't regulate a field in its infancy and expect it to ever function.

>The compliance burden falls mostly on the companies big enough to handle it.

You mean it falls on anyone that tries to compete with a model. There's a random 10^25 FLOPS compute rule in there. The B300 does 2500-3750 TFLOPS at fp16. 200 of these can hit that compute number in 6 months, which means that in a few years time pretty much every model is going to hit that.

And if somebody figures out fp8 training then it would only take 10 of these GPUs to hit it in 6 months.
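The arithmetic behind the 200-GPU figure above checks out as a back-of-envelope, assuming the midpoint of the quoted B300 fp16 range and sustained peak throughput (real training utilization would be lower, so the wall-clock time would stretch accordingly):

```python
SECONDS = 6 * 30 * 24 * 3600   # six months, ~1.56e7 seconds
per_gpu = 3.0e15               # ~3000 TFLOP/s fp16, midpoint of the quoted range
gpus = 200

total = gpus * per_gpu * SECONDS
print(f"{total:.2e}")          # ~9.3e24 FLOPs, right at the 1e25 threshold
```

So a cluster that is modest by frontier-lab standards already brushes the regulatory compute line within a single training run.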

The copyright rule and having to disclose what was trained on also means that it will be impossible to have enough training data for an EU model. And this even applies to people that make the model free and open weights.

I don't see how it is possible for any European AI model to compete. Even if these restrictions were lifted it would still push away investors because of the increased risk of stupid regulation.


> It's not an excuse. Anybody with half a working brain should've been able to tell that this was going to happen. You can't regulate a field in its infancy and expect it to ever function.

As I said, the core of the AI act was written about supervised ML, not generative ML, as generative ML wasn't as big a deal pre-ChatGPT.

> You mean it falls on anyone that tries to compete with a model. There's a random 10^25 FLOPS compute rule in there. The B300 does 2500-3750 TFLOPS at fp16. 200 of these can hit that compute number in 6 months, which means that in a few years time pretty much every model is going to hit that.

As I also said, the foundation model stuff (including this flops thing) is incredibly stupid. I agree with you on this, but my point is that the core of the AI act was supposed to cover the ML systems built since approx 2010.

> The copyright rule and having to disclose what was trained on also means that it will be impossible to have enough training data for an EU model. And this even applies to people that make the model free and open weights.

Again, you're talking about generative stuff (makes sense given the absurdly misleading name now) whereas I'm talking about the original AI act, which I read well before ChatGPT happened.

The training data thing is a tradeoff: copyright is far too invasive (IMO), and it's good to be able to use this information for other purposes. However, I personally would be very worried about an ML team that couldn't tell me what data went into their model. The data is core to all ML/AI approaches, so that lack of understanding would make me very sceptical of any performance claims.

Let's be real, the AI companies don't want to say what's in their models because of the rampant copyright infringement, not because of any technical incapability.

