I wanted to create a game (pre-AI) and thought it would be easy. Turns out it isn't at all.
Sure, you might get lucky and make the next Angry Birds, but there is a whole range of skills required to make a good game and actually get people to play it.
I even read a couple of books, which gave me a better understanding of how out of my depth I was. The books were "Theory of Fun", "Achievement Unlocked" and "The Art of Game Design".
AI here appears to help you see those gaps faster. Without an understanding of those gaps, I think anything created is going to be lacking.
I don't really think AI would help much with the gaps related to the actual game design. It should allow for faster feature development, adding QoL features, etc., but the actual game part needs to come from you.
China is certainly lax, but the US doesn't allow autonomous attack systems. For attack systems, a human is always required to make the judgement call on when to attack.
Or at least it didn't until the current regime.
The US does have autonomous defensive systems.
I could be wrong though; can you post your evidence? The closest I could find is loitering munitions.
Even so, a company shouldn't be forced to go against its ethics if those ethics help humans.
Drone pilots don't get any info about their target, certainly not enough to make a judgement call. If they object (or burn out), someone else is put in the chair.
People are conscripted; they put on the uniform and become legitimate targets? Then it might as well be a robot doing the shooting. Same difference.
The pilot becomes responsible for those outcomes; indiscriminately killing civilians, for example, is a war crime. It's easier to get an AI to commit war crimes than a human.
Perhaps, but I don't know if the difference is significant. Everything changes when we try to stretch rhetoric from stabbing someone with a sword to firing hypersonic missiles. We might hold the pilot responsible if they erase a building, but I'm far less comfortable blaming them. We know the targets are actually picked by computers using metadata. The difference gets increasingly vague.
The safeguards being dropped are the ones governing whether or not they will release a model, based on safety.
The Friday deadline is about allowing their products to be used for mass surveillance and autonomous weapons systems without a human in the loop.
Anthropic hasn't backed down on those yet, but they are in a bad situation either way.
If they don't back down, they lose US government contracts, and the government gets to do what it wants anyway. It also puts them in a dangerous position with non-governmental bodies.
If they give in to the demands, it puts all AI companies at risk of the same thing.
Personally, I think they should move to the EU. The recent EU laws align with Anthropic's thinking.
I would recommend reading up on the EU AI Act. It clearly defines what safety means with regard to the human race. Your questions are actually covered by it.
When I first started learning C at uni many years ago, we were forced to use vi and the command line, despite functional IDEs existing.
The argument then was that IDEs cause cognitive offloading and you don't actually learn to the fullest extent. Being forced to do everything manually helped us understand how the compiler works, how to debug errors, and so on.
This is what current systems are doing. There is a good article that explains it much better.
The EU AI Act comes into force this year. Facial recognition is on the restricted list. You don't want to give auditors ammunition before it goes live, as the top fine would cost FB around $4B, and it wouldn't be a one-time fine.
Even if only law enforcement can use it, having that feature is highly regulated.
[edit] I see this is from years ago. I should read the articles first. :)
I've found Claude works so much better if you build a CLAUDE.md and tell it that you want an interactive design process.
It helps formalise your plan, then creates some code; you review it, discuss what you believe to be wrong, ask why it took a particular approach, or even tell it to take the approach you want.
The end result is a world of difference, and I feel I have a better grasp of what is going on in the whole application.
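For anyone curious what that looks like in practice, here is a minimal sketch of such a CLAUDE.md. The section headings and wording are my own, not a prescribed format; the only real convention is that Claude Code reads a file named CLAUDE.md from the project root:

```markdown
# CLAUDE.md

## Working style
- This is an interactive design process: propose a plan first and
  wait for my review before writing any code.
- After each change, summarise what you did and why, so I can
  question the approach.
- If I push back on an approach, explain the trade-offs, then
  follow my decision.

## Project conventions
- Keep changes small and reviewable; one feature or fix at a time.
- Ask before adding new dependencies.
```

The exact contents matter less than stating up front that you want a dialogue rather than a one-shot code dump.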