Hacker News | mhogers's comments

.agentignore/.agentnotallowed file

force agents to not touch mission-critical things, fail in CI otherwise

let it work on frontends and things at the frontier of the dependency tree, where it is worth the risk
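A minimal sketch of the CI side of this idea. The file name `.agentnotallowed` and its format (one glob pattern per line, `#` for comments) are assumptions from the comment above, not an established convention:

```python
# Hypothetical CI gate: fail the build if a change touches a path
# listed in .agentnotallowed (denylist of glob patterns, assumed format).
import fnmatch

def blocked_paths(changed_files, patterns):
    """Return the changed files that match any denylist glob."""
    return [f for f in changed_files
            if any(fnmatch.fnmatch(f, p) for p in patterns)]

if __name__ == "__main__":
    import subprocess, sys
    patterns = [ln.strip() for ln in open(".agentnotallowed")
                if ln.strip() and not ln.startswith("#")]
    changed = subprocess.run(
        ["git", "diff", "--name-only", "origin/main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    hits = blocked_paths(changed, patterns)
    if hits:
        print("protected paths were modified:", *hits, sep="\n  ")
        sys.exit(1)
```

Flipping the comparison in `blocked_paths` turns the same check into an allowlist.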


a) what happens if there is a change that hasn't been encountered yet, so it's not in .agentnotallowed? b) is there a guarantee that something described in these files won't be touched? I've seen examples where agents directly violate these rules, apologising profusely after being caught.

allowlist instead of denylist, depending on your risk profile :)

but may they have their own workspace account while working with you? :)

One of my favorite star trek scenes: https://www.youtube.com/watch?v=t4A-Ml8YHyM


Seeing a `pip install -r requirements.txt` in a very recently created python project is almost a red flag now...


requirements.txt allows pip arguments to be included, so it can do much more than just list package names.

For example, installing on an air gapped system, where uv barely has support.
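For instance, a requirements file can carry pip options like `--no-index` and `--find-links` alongside the pinned packages (paths and versions here are illustrative):

```
# requirements.txt for an offline install from a local wheel directory
--no-index
--find-links ./wheelhouse
requests==2.32.3
```

`pip install -r requirements.txt` then resolves everything from `./wheelhouse` without touching the network.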


Was just taking a break from reading Traefik documentation - thank you for this amazing project.


I agree somewhat with you - nonetheless, a FastAPI + Alembic + SQLAlchemy alternative in R would make it possible to use R as a general-purpose language


data layer > business logic layer > presentation layer

I believe the presentation/analytics layer has become malleable, possibly parts of the business logic layer - you still need a higher % of trustworthiness than LLMs can provide for parts of the business and data layers.


> you still need a higher % of trustworthiness than LLMs can provide for parts of the business and data layers

For many domain-heavy systems, it's not even the trustworthiness; just getting the business logic right requires a lot of work and lots of iterations with in-house domain experts and clients. There's no way LLMs can do that.


Thank you for sharing, glad you are doing well now :)


A 20-30 year long monthly reminder that your mortgage payments are significantly higher due to missing the boat. Ouch!

Painful for people who do not expect significant further income increases.


This is the current sentiment. But it is short-sighted.

The best recommendation is to _know_ the fundamentals of house prices - to know when buying is cheap and when it is expensive.

E.g. in relative terms: at a Price/Rent ratio of 30, renting is the more affordable option - in such an environment, just rent. If the P/R falls to 15-20, then buy.

Housing can also be unaffordable in absolute terms, such as wanting to live in downtown San Francisco. In this case people should strongly consider whether they want to pay a premium for that locality.

We don't have to go back further than 2013 to find a time when buying made sense over renting - and that will return at some point.
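The P/R arithmetic above in a tiny sketch (the house price and rent figures are illustrative, not advice):

```python
# Price-to-rent ratio: purchase price divided by a year of rent.
# Its inverse is the gross rental yield on the same money.
def price_to_rent(price, monthly_rent):
    return price / (monthly_rent * 12)

def gross_yield(pr):
    return 1 / pr

# A $540k house renting for $1,500/month sits at P/R 30.
assert round(price_to_rent(540_000, 1_500), 1) == 30.0
print(f"P/R 30 -> gross yield {gross_yield(30):.1%}")  # 3.3%
print(f"P/R 15 -> gross yield {gross_yield(15):.1%}")  # 6.7%
```

At P/R 30 the implied yield is low, so renting is cheap relative to owning; at 15 the picture flips.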


It's your choice to think about such things and therefore it's your choice to be unhappy about it. You can't change the past so you can either be unhappy about it, or not. It's your choice.


Any reason to upgrade an M2 16GB MacBook to an M4 ..GB (or 2026 M5) for local LLMs? Due for an upgrade soon, and perhaps it is educational to run these things more easily locally?


For LLMs, VRAM is the number-one requirement. Since MacBooks have unified RAM you can use up to 75% of it for the LLM, so a higher-RAM model would open more possibilities, but those are much more expensive (of course).

As an alternative you might consider a Ryzen Pro 395+ like in the Framework Desktop or HP ZBook G1a, but the 128GB versions are still extremely expensive. The Asus Flow Z13 is a tablet with the Ryzen 395+, but it's hardly available with 128GB.


I did just that, got the 32GB RAM one so I could run Qwen.

Might still be early days. I'm trying to use the model to sort my local notes, but I don't know, man - it seems only a little faster yet still unusable, and I downloaded the lighter Qwen model as recommended.

Again, it's early days and maybe I'm being an idiot, but I did manage to get it to parse one note after about 15 minutes.


Have a 16GB one, just set up ollama yesterday.

gpt-oss-20b eats too much RAM to use for anything other than an overnight task - maybe 3 tok/s.

Been playing around with the 8B versions of Qwen and DeepSeek. Seems usable so far. YMMV - I'm just messing around in chat at the moment, haven't really had it do any tasks for me.

