a) What happens if there is a change that hasn't been encountered yet, so it's not in .agentnotallowed?
b) Is there any guarantee that something described in these files won't be touched? I've seen examples where agents directly violate these rules, apologising profusely after they get caught.
I agree with you somewhat - nonetheless, a FastAPI + Alembic + SQLAlchemy alternative in R would make it possible to use it as a general-purpose language.
data layer > business logic layer > presentation layer
I believe the presentation/analytics layer has become malleable, possibly parts of the business logic layer - you still need a higher % of trustworthiness than LLMs can provide for parts of the business and data layers.
> you still need a higher % of trustworthiness than LLMs can provide for parts of the business and data layers
For many domain-heavy systems, it's not even about trustworthiness: just getting the business logic right requires a lot of work and many iterations with in-house domain experts and clients. There's no way LLMs can do that.
This is the current sentiment. But it is short sighted.
The best recommendation is to _know_ the fundamentals of house prices - to know when buying is cheap and when it is expensive.
E.g., in relative terms: at a price-to-rent (P/R) ratio of 30, renting is more affordable than buying - in such an environment, just rent. If the P/R falls to 15-20, then buy.
Housing can also be unaffordable in absolute terms, such as wanting to live in downtown San Francisco. In that case people should seriously consider whether they want to pay a premium for that location.
We don't have to go back further than 2013 to find a time when it made sense to buy over renting - and that will return at some point.
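The P/R rule of thumb above is just one division; a quick sketch with hypothetical numbers (the $600k price and $20k/year rent are made up for illustration):

```python
# Price-to-rent ratio: how many years of rent equal the purchase price.
def price_to_rent(price: float, annual_rent: float) -> float:
    return price / annual_rent

# Hypothetical: a $600,000 house that would rent for $20,000/year.
ratio = price_to_rent(600_000, 20_000)
print(ratio)  # 30.0 -> in this regime, renting is cheaper; just rent
```

By the comment's rule of thumb, the same house becomes a buy if its price drops toward $300k-$400k (P/R of 15-20) while rents hold steady.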
It's your choice to think about such things and therefore it's your choice to be unhappy about it. You can't change the past so you can either be unhappy about it, or not. It's your choice.
Any reason to upgrade an M2 16GB MacBook to an M4 ..GB (or 2026 M5) for local LLMs? I'm due an upgrade soon, and perhaps it would be educational to run these things more easily locally?
For LLMs, VRAM is the number-one requirement. Since MacBooks have unified RAM, you can use up to 75% of it for the LLM, so a higher-RAM model would open more possibilities - but those are much more expensive, of course.
As an alternative you might consider a Ryzen AI Max+ 395 machine like the Framework Desktop or the HP ZBook Ultra G1a, but the 128GB versions are still extremely expensive. The Asus ROG Flow Z13 is a tablet with the same chip, but it's hardly available with 128GB.
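A rough back-of-the-envelope for whether a model fits, using the 75% unified-memory figure from above (the ~1 byte/param for 8-bit quantization and ~20% runtime/KV-cache overhead are my assumptions, not from the thread):

```python
# Sketch: does a quantized model fit in a Mac's GPU-usable unified RAM?
def fits_in_unified_ram(params_billions: float, ram_gb: float,
                        bytes_per_param: float = 1.0,  # ~8-bit quant (assumed)
                        overhead: float = 1.2,          # KV cache etc. (assumed)
                        gpu_share: float = 0.75) -> bool:
    needed_gb = params_billions * bytes_per_param * overhead
    return needed_gb <= ram_gb * gpu_share

print(fits_in_unified_ram(20, 16))  # 20B model, 16GB MacBook -> False
print(fits_in_unified_ram(8, 16))   # 8B model, 16GB MacBook -> True
```

This matches the experience reported below: 20B-class models choke a 16GB machine, while 8B models are workable.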
I did just that - got the 32GB RAM one so I could run Qwen.
It might still be early days. I'm trying to use the model to sort my local notes, but I don't know, man - it seems only a little faster, yet still unusable, and I downloaded the lighter Qwen model as recommended.
Again, it's early days and maybe I'm being an idiot, but I did manage to get it to parse one note, after about 15 minutes.
gpt-oss-20b eats too much RAM to use for anything other than an overnight task - maybe 3 tok/s.
I've been playing around with the 8B versions of Qwen and DeepSeek. They seem usable so far. YMMV - I'm just messing around in chat at the moment and haven't really had them do any tasks for me.
Force agents not to touch mission-critical things; fail in CI otherwise.
Let them work on frontends and things at the frontier of the dependency tree, where the risk is worth it.