America would ban electrics if it could.
The petrodollar is on the ropes.
News like "flash charging" is going to feed the anti-China hype machine, but the real kicker will be an econobox with a 500-mile range and a 5-minute charge for $15,000, or $20,000 with a solar roof for home charging.
Model Welfare?
Are they serious about this? Or is it just more hype?
I really don't trust anything this company says anymore.
"We have a model that is too dangerous to release" is like me saying I have a billion dollars in gold that nobody is allowed to see, but I expect to be able to borrow against it.
Maybe referring to it as welfare is odd, but these points are important. It isn't a good look to have a model that tends to get into self-deprecating loops like one of Google's older models, it's an even worse look and potential legal liability if your model becomes associated with a suicide. An overly negative chat model would also just be unpleasant to use.
With the weights being mostly opaque, these kinds of evaluations are an important piece of reducing the harm an AI model can cause.
I feel that anthropomorphizing the model is also potentially very harmful. We've seen that in LLM interactions that end in tragedy. It's the wording that bothers me.
The darkness is always where the light is again found.
The people who caused this situation are burning all their bridges on this. There will be a turnaround.
Seems plausible. I used to refer to StackOverflow before LLMs, and a good number of the examples there were flawed code presented as working. If the LLM had less junk in its training data, it might benefit even though the volume of training material for that language is lower.
What is a coder? Someone who is handed the full specs and sits down and just types code? I have never met such a person. The most annoying part of SWE is that everyone who isn't an SWE has inane ideas about what we do.