"To be clear when we floated the idea of 'federal loan guarantees' we were only asking the government subsidize building a bunch of cheap supply for us. We definitely believe in competitive markets." https://www.reuters.com/business/openai-does-not-want-govern...
It's funny how the big AI labs can effortlessly say both "if we don't continue the arms race we'll just be choosing to lose to China in 7 months" and "if Chinese labs hadn't copied our models using 'distillation attacks' they would be years behind". The enemy is exactly as weak or as strong as the paragraph demands.
Chatbots are certainly not objective. There are countless articles about this, that, and the other bias with them. The whole sycophancy blowup, or their basic inability to choose a fair random number without assistance, should clearly demonstrate that they have many implicit biases. The distribution of answers chatbots give to questions is being constantly and deliberately tweaked by their developers.
The reveal that the pods building is only 2 stories is pretty funny to me. 12 Mint Plaza, btw, for anyone who wants to poke around the area in street view. It's right next to an 8-story apartment building, across the block from what must be a 20+ story one, so it's not like tall buildings are infeasible there.
My experience is usually the opposite. The code they write is verbose, yes, but the diffs are over-minimal. Whenever I see a comment like "Tool X doesn't support Y or has a bug with Z [insert terrible kludge]" when actually fixing the problem in the other file would have been very easy, I know it's AI-generated. I suspect there is a bias towards local fixes to reduce token usage.
Simultaneous Multi-Threading (hyper-threading, as Intel calls it). I'm not a CPU guy, but I think the ALU used for subtraction would be a more valuable resource to leave available to the other thread than whatever implements an XOR. Hence you prefer to use the XOR for zeroing and conserve the ALU for the other thread to use.
- Normally the ALU implements all "light" operations (i.e. add/sub/and/or/xor) in a single block; separating them would result in far more interconnect overhead. CPUs often have specialized adder-only units for address generation, but never an XOR-specialized block.
- All CPUs that implement hyper-threading also optimize an XOR EAX,EAX into MOV EAX,ZERO/SET FLAGS (where ZERO is an invisible zero register, just like on Itanium and RISCs). This helps register renaming and eliminates a spurious dependency.
- The XOR trick is about as old as the 8086, if not older.
I'm definitely seeing an ongoing train of dashboards, chatbots, and digests. Some of it is definitely self-promotion. I do think, though, that a lot of people don't realize how much continuous advocacy and support you have to provide for a tool to gain mindshare. I've had people message me months after I would have assumed they'd integrated a tool into their workflow, asking about points of confusion getting started. They love it, it's super helpful, and then they come back with new bugs you have to triage, new feature requests to support their work. People have limited time and energy, and they will not invest unless you are out pounding the pavement for your tool.
It seems to have been at least slightly improved, but YouTube video summaries suffered from this to an almost comical degree not long ago. The AI voice is already pretty recognizable and stilted; then you constrain it to avoid saying anything negative or spoilery about the video, and (presumably) don't let it remember past output. No surprise it's extremely repetitive. With humans you're at least getting different people's voices, on different days, who remember that they just wrote about how the last one was a "unique look highlighting the importance of design".