well, your own, unleaked ones, representing your real workloads.
if you can't afford to do that, look at a lot of them, e.g. on artificialanalysis.ai they merge multiple benchmarks across weighted categories and build an Intelligence Score, a Coding Score, and an Agentic Score.
The benchmarks are from the unquantized model they release.
This will only run on server hardware, some workstation GPUs, or some 128GB unified memory systems.
It’s a situation where if you have to ask, you can’t run the exact model they released. You have to wait for quantizations to smaller sizes, which come in a lot of varieties and have quality tradeoffs.
This would likely run fine in just 96 GB of VRAM, by my estimation. Well within the ability of an enthusiastic hobbyist with a few thousand dollars of disposable income.
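For anyone wanting to sanity-check estimates like this themselves, here's a toy back-of-envelope calculator. The headroom factor and the example parameter count are my own assumptions, not numbers from the model card:

```python
def vram_estimate_gb(params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Rough VRAM needed to hold the weights, with ~20% headroom for
    activations and KV cache (a crude assumption, not a real profile)."""
    weight_bytes = params_b * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# hypothetical 70B-parameter model: full 16-bit weights vs a 4-bit quant
full_gb = vram_estimate_gb(70, 16)
q4_gb = vram_estimate_gb(70, 4)
```

The same formula explains the "wait for quantizations" point above: dropping from 16-bit to 4-bit weights cuts the footprint roughly 4x, at some quality cost.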
The benchmarks in the model card are purported to be measurements of model quality (ability to perform tasks with few errors), not speed.
They almost certainly run these benchmarks on their own cloud infrastructure (Alibaba afaik), which is typically not hardware that even the most enthusiastic homelab hobbyist can afford.
i fail to understand how text2sql on quite simple data sources is anything to write home about 3 years after it came onto the market. can someone elaborate?
where it gets interesting is when you have a custom system that your LLM surely never saw (a custom ERP) with 50 sometimes-cryptic tables, unclear lookup tables, and unexplained flags.
something no text2sql solution solved for us.
we built a second MCP that lets the agent look up business logic (generated from source code) and then write better queries. that, i think, is something i never read in a blog post about a text2sql solution.
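roughly the shape of that second tool, as a toy sketch (all table names and meanings here are invented, not our actual schema):

```python
# Hypothetical business-logic lookup the agent calls before writing SQL.
# In practice this dict would be generated from the ERP's source code.
BUSINESS_LOGIC = {
    "T_KDST": "customer master table; KDST_ART=3 means 'prospect', not 'customer'",
    "FLAG_X9": "soft-delete marker; rows with FLAG_X9=1 must be excluded",
}

def lookup_business_logic(identifier: str) -> str:
    """Tool exposed to the agent: cryptic table/flag name -> meaning."""
    return BUSINESS_LOGIC.get(identifier, "no documentation found")
```

the point being: schema introspection alone tells the model a column exists, not what FLAG_X9=1 actually means for the business.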
I maintain a private evaluation set of what many call "misguided attention" questions.
In many of these cases, the issue isn't failed logical reasoning. It's ambiguity, underspecified context, or missing constraints that allow multiple valid interpretations. Models often fail not because they can't reason, but because the prompt leaves semantic gaps that humans silently fill with shared assumptions.
A lot of viral "frontier model fails THIS simple question" examples are essentially carefully constructed token sequences designed to bias the statistical prior toward an intuitively wrong answer. Small wording changes can flip results entirely.
If you systematically expand the prompt space around such questions by adding or removing minor contextual cues, you'll typically find symmetrical variants where the same models both succeed and fail. That suggests sensitivity to framing and distributional priors (adding unnecessary info, removing clear info, adding ambiguity, ...), not necessarily absence of reasoning capability.
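The expansion step is mechanical; here's a minimal sketch of how I generate the variant set (the example cues are illustrative, not from my actual eval set):

```python
import itertools

def expand_prompts(base: str, cues: list[str]) -> list[str]:
    """Every subset of optional contextual cues appended to the base
    question: 2^len(cues) variants to run against the model."""
    variants = []
    for r in range(len(cues) + 1):
        for combo in itertools.combinations(cues, r):
            variants.append(" ".join([base, *combo]))
    return variants

base = "The car wash is 50 meters away. Do I walk or drive?"
cues = ["My car is at home.", "I am at home.", "Walking is healthy."]
prompts = expand_prompts(base, cues)  # 2^3 = 8 variants
```

You then score each variant and look at the flip rate across the set, rather than trusting any single phrasing.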
Actually, no, those are separate things, and you can see for yourself with this simple experiment. Just adding "Check your assumptions" to the prompt -- note, without specifying what assumptions! -- leads them to reason to the right answer.
You can try it with the free version of ChatGPT yourself (remember to ask the original question in a separate session to verify it hasn't been "patched" yet.)
that's what the cultivators of these examples are preying on. but in practice what people care about is "can i get it to do <X>", not "is it a decider on every possible token sequence that humans perceive to be about <X>".
none of what we are using today is even remotely being pitched as AGI. if anything, the foundation model makers go out of their way to pitch the opposite. this is a thing made up entirely in your head, and then you put it on others and then claim it was their doing.
Fair, but that's just what hype is. Overpromise, underdeliver. Most of us recognize its limits and take advantage of its strengths. This post (and many in it) seem to be inferring that AI is useless because it isn't AGI, answered a simple question wrong, was tricked, or didn't answer perfectly. This is cherry-picking at best, disingenuous at worst.
Well, that means the AI is garbage. They'll eventually train it to answer this specific question, and then it will perform worse in some other aspect. Wash, rinse, repeat, and eventually they'll claim the new frontier model is the best yet on carwash tests.
> They'll eventually train it to answer this specific question, and then it will perform worse in some other aspect.
Not necessarily. Simply asking models to "check your assumptions" -- note, without specifying what assumptions! -- overcomes a lot of these gotcha questions. The reason it's not in their system prompts by default is I think just a cost optimization: https://news.ycombinator.com/item?id=47040530
Crazy how five years ago this level of AI would be seen as scifi, and now there are people out there who think it's trash because we can trick it if we ask questions in weird ways.
> there are people out there who think it's trash because we can trick it if we ask questions in weird ways.
Some of this sentiment comes from wanting AI to be predictable, and for me, stumbling into questions that the current models interpret oddly is not uncommon. There are a bunch of rules of thumb that can help when you run into cases like this, but no guarantee that they will work, or that the problem will remain solved after a model update, or across models.
There are a lot of rules of thumb you can follow to avoid getting bitten by a rattlesnake, but the easiest way is to just not pick up random snakes. I don't know where I'm going with this, but I am going for a walk.
An issue with the chat format is that all these models seem bad at recognizing when they have extraneous information from the user that can be ignored, or insufficient information from the user to answer the question fully.
This issue is compounded by the lack of probabilities in the answers, despite the machines ultimately being probabilistic.
Notice a human in a real conversation will politely ignore extra info (the distance to car wash) or ask clarifying questions (where is the car?).
Even non-STEM people answer using probabilistic terms casually (almost certainly / most likely / probably / possibly / unlikely).
I suspect some of this is to minimize token usage in the fixed-monthly-price chat models, because back-and-forth would cost more tokens... but maybe I'm too cynical.
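The hedging-vocabulary point is easy to make concrete. A toy post-processing step could map whatever confidence estimate you have (token logprobs, self-reported probability) onto those casual terms; the thresholds here are arbitrary illustration, not calibrated:

```python
def hedge(p: float) -> str:
    """Map a confidence estimate in [0, 1] to the casual probabilistic
    terms humans use. Cutoffs are invented for illustration."""
    if p >= 0.95:
        return "almost certainly"
    if p >= 0.75:
        return "most likely"
    if p >= 0.50:
        return "probably"
    if p >= 0.25:
        return "possibly"
    return "unlikely"
```

Of course the hard part is getting a trustworthy `p` out of the model in the first place, not the wording.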
The systems recognized the pattern that it looks like a generic article on the internet asking whether someone should walk or drive and answered it exactly as expected based on their training data. None of this should be surprising.
We are the ones fooling ourselves into believing there's more intelligence in these systems than they really have. At the end of the day, it's just an impressive parlor trick.
Absolutely. If you give enough disambiguation, then even the lowest-scoring model in the test gets it right on the first shot:
Minimax:
Question: I Want to Wash My Car at the Car Wash. The Car Wash Is 50 Meters Away from my home. I am at home. My car is at home. Do I walk or drive My Car to the car wash?
Answer:
Drive your car to the car wash.
Since your goal is to wash your car at the car wash facility, you need to bring the car there. Walking would leave your car at home, which defeats the purpose!
That said, 50 meters is such a short distance that:
The drive takes less than a minute
You might consider walking to the car wash first to check if there's space/availability, then drive your car over when ready
But fundamentally, if you want to wash your car at the car wash, you need to bring the car there — so drive.
Bonus, It's thinking output:
Thinking: The user is asking a simple question about whether they should walk or drive their car to a car wash that is 50 meters away from their home.
This is a straightforward practical question. Let me analyze:
1. The car wash is 50 meters away
2. The user is at home
3. The car is at home
4. The goal is to wash the car
The practical considerations are:
Driving 50 meters is a very short distance - likely less than a minute
Starting a car, driving, and parking for such a short distance uses fuel and creates wear and tear
Walking 50 meters would take about 30-40 seconds
The car needs to be driven to the car wash to use the car wash facilities
However, for just 50 meters, it might be more practical to walk if the car wash is within walking distance and you can simply walk there and use the facility
Wait, I need to think about this more carefully. The question is: "Do I walk or drive My Car to the car wash?"
If the car wash is a facility where you wash your car (like a self-service car wash or automatic car wash), then you NEED to drive the car there to use it. You can't wash a car that's at home from 50 meters away - unless you're using a mobile car wash service.
But if there's a car wash facility 50 meters away, and you want to use it, you would typically drive your car there. However, for just 50 meters, walking might seem more efficient, but then you wouldn't have your car at the car wash to wash it!
Actually, re-reading the question: "I Want to Wash My Car at the Car Wash" - this implies they want to use the car wash facility. To wash the car AT the car wash, they need to bring the car there, which means driving.
But wait - maybe they're asking about the mode of transportation to GET to the car wash (not to bring the car). Let me re-read:
"I Want to Wash My Car at the Car Wash" - they want to wash the car at the car wash facility
"The Car Wash Is 50 Meters Away from my home" - it's very close
"I am at home" - they're currently at home
"My car is at home" - the car is also at home
"Do I walk or drive My Car to the car wash?" - should they walk or drive to get there
I think the answer is: You should drive your car to the car wash because:
1. The purpose is to wash the car AT the car wash
2. You need the car to be at the car wash to wash it
3. Walking would leave the car at home, defeating the purpose
However, for such a short distance, I could also mention that walking might be more practical if they just want to use the facility quickly, or if there are parking/maneuvering concerns at the car wash.
The (pun ahead) peak of this method imho is implemented in "PeakFinder", which afaik uses a low-res nationwide (Switzerland) height map; after an initial GPS fix it downloads a local high-res height map, calculates peak contours based on the current location AND height, and overlays that grid, including the peak names, onto the camera feed using the gyro and compass.
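The geometry behind that overlay is simple in principle. Here's a crude flat-earth sketch (my own simplification, fine for nearby peaks, nothing like what the app actually ships) that turns an observer position plus a summit from the height map into a compass bearing and elevation angle, which the app would then match against the gyro/compass pose:

```python
import math

def bearing_and_elevation(obs_lat, obs_lon, obs_alt, pk_lat, pk_lon, pk_alt):
    """Flat-earth approximation for nearby peaks: compass bearing
    (degrees, 0 = north) and elevation angle from observer to summit.
    Illustrative only; real apps use proper geodesy."""
    dy = (pk_lat - obs_lat) * 111_320                              # meters north
    dx = (pk_lon - obs_lon) * 111_320 * math.cos(math.radians(obs_lat))  # meters east
    dist = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) % 360
    elevation = math.degrees(math.atan2(pk_alt - obs_alt, dist))
    return bearing, elevation

# a 2500 m peak due north of an observer at 500 m:
b, e = bearing_and_elevation(46.0, 8.0, 500, 46.02, 8.0, 2500)
```

A label lands on screen when the camera's heading and pitch bracket that bearing/elevation pair, which is also why it drifts when the compass does.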
It's quite easy to build accurate geo-related applications in Switzerland due to the excellent work of the government office "Swiss Topo", which maps every tree, every house, every road in the whole country. Trees in cities have metadata such as year planted, type, etc. :)
Johnny Harris, the map aficionado mentioned Swiss maps and Swiss Topo's dedication multiple times in his videos.
The official Swiss Topo app has this AR feature built-in, by the way. Or at least on iOS. Pretty magical when it works, like most (real) AR applications. (It does lose alignment from time to time.)
I'm also looking for a sophisticated self-hosted, open-source transcoding solution as a web app, but in the meantime, here's the complete opposite: no bells and whistles, no config, no control except size: https://github.com/JMS1717/8mb.local
or do you mean a web based file manager / video gallery with transcoding capabilities?
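The core trick behind "size is the only knob" tools is just solving bitrate from the target file size. A sketch of how I understand it (the 3% headroom and 128 kbps audio are my assumptions, not what 8mb.local actually uses):

```python
def target_bitrates_kbps(size_mb: float, duration_s: float, audio_kbps: int = 128):
    """Video/audio bitrates to land an encode near a target file size.
    Real encoders overshoot, hence the arbitrary ~3% container headroom."""
    total_kbps = size_mb * 8 * 1000 / duration_s   # MB -> kilobits per second
    video_kbps = total_kbps * 0.97 - audio_kbps
    return max(int(video_kbps), 1), audio_kbps

v, a = target_bitrates_kbps(8, 60)
# the result feeds a command along the lines of:
#   ffmpeg -i in.mp4 -b:v {v}k -b:a {a}k out.mp4
```

Everything else (codec, resolution, presets) stays hardcoded, which is exactly the "no config" appeal.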
RAG will be pronounced dead again and again. it has its use cases. we moved to agentic search, keeping RAG as a tool, while other retrieval strategies we added use real-time search in the sources, often skipping ingested and chunked sources. large context windows allow for putting almost whole documents into one request.
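a toy sketch of that routing idea: RAG becomes one tool among several, and the agent (here a trivial heuristic standing in for the LLM's tool choice) picks between chunk retrieval, live search, or stuffing the whole document into context. all functions are illustrative stubs, not our system:

```python
def rag_chunks(query):
    return f"top-k chunks for: {query}"

def live_search(query):
    return f"real-time source search for: {query}"

def whole_document(query, doc):
    return f"full doc ({len(doc)} chars) + question: {query}"

def route(query, doc=None, context_limit=100_000):
    """Hypothetical tool-selection heuristic."""
    if doc is not None and len(doc) <= context_limit:
        return whole_document(query, doc)   # skip ingestion/chunking entirely
    if "latest" in query or "today" in query:
        return live_search(query)           # freshness beats a stale index
    return rag_chunks(query)                # fall back to classic RAG
```

in a real agentic setup the model makes this choice itself from tool descriptions; the point is only that chunked retrieval stops being the default path.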