Hacker News | efitz's comments

The guy will be solely responsible for half a degree of global warming next year.


AI layoffs are very shortsighted IMO and should be viewed by investors as a sign of weakness in management or the business itself.

If everyone is going to increase productivity by some factor k per employee, then kx is the new norm of overall productivity for x employees.

If you lay off some percentage y of your workforce, then your expected gain is only kx(100-y)/100. In other words, you will not realize the same productivity gains as competitors that chose not to lay off.

Yes I realize it is more complex than that, because of reduced opex, but there are diminishing returns very quickly.


There are productivity gains to be had by reducing the amount of communication and the number of internal layers, IMO.

I believe there are also diminishing returns on new features/products (in the general case). So you won't really need that many people.


Mixing in geothermal and hydro really distorts the story. Although technically correct, the common-usage connotation of “renewable energy” today is “wind and solar”.

> the common usage connotation of “renewable energy” today is “wind and solar”

Hydro, wind and solar. Hydro is often even more important because it runs more steadily than the other two.

Geothermal and nuclear are neither fossil nor renewable, they are their own category.


That was an awesome app, I loved it. Thank you.


When my data structures are messages to be sent over a network, I always start with msgId and msgLen, both fixed-width fields.

This solves the message differentiation problem explicitly, makes security and memory management easier, and reduces routing to:

switch(msg.msgId): …


switch on version, then messageId…


Not every damn thing needs to be “social”.


Perhaps not. However, gamification of fitness is a huge motivation for many people to keep exercising and maintain a rhythm, which in fitness is quite important.

Such social sharing + gamification systems are no different from GitHub contribution streaks or StackOverflow streak awards. Those streak awards only benefited the platform while giving us fake points and badges; fitness streak rewards and social sharing benefit the user's health, so fitness arguably has a stronger case for being gamified.

We can argue all day that people should pursue fitness to be healthy, not for how they look or how other people see them, but the reality is that the social component of fitness is a big part of it for many people, be it at the gym or in an app.


Logging is one thing, syncing it to the cloud is unnecessary and shouldn’t even be the default; making any of the location data available publicly is just terrible. If you want to share an individual workout map so you can say you circumnavigated Manhattan or whatever, fine! Share that one workout with your friends! (And ideally as a freaking screenshot rather than some database) Anything else is far too risky.


Risky for what? It's just a bit of fun. Most of us aren't being pursued by stalkers or assassins.


It doesn't need to be anything nearly as dramatic as assassins, because economies of scale both lower the bar and make most attacks impersonal. Consider how odd it would be for someone in 2025 to say: "Computer security? I haven't done anything to personally offend a genius hacker."

Imagine this data going to a burglar, who has a digital dashboard of nearby one-person properties and when the owner is likely to be out, able to act with confidence they can leave before the victim could return.

Sure, sophisticated international hitmen won't have any interest in catching you in ambush... but that doesn't make you safe from a local rapist of opportunity.


What a weird comment. The type of low-end criminal who commits home burglaries isn't sophisticated enough to do that level of research.


They are. A related example is criminal gangs targeting gun owners in France after the data leak at the sport shooting federation. This one has been well covered. There have been a few hundred targeted robberies (mostly of old people) and one or two deaths (predictably).

In Western Europe there are also foreign burglar gangs that go on sprees for a few weeks. They're well organised but don't have time to do the stalking. They use publicly available data as much as they can.


Do you have any evidence to back your claim? Gangs employing teams of underage burglars, assisted by risk-averse adults with skills for entry and targeting, are a thing. Everyone has a mobile phone.


They'd buy access from someone on the dark web for $5 a day.


I'd recommend reading 'Confessions of a Master Jewel Thief' -- normal dude, just decides to spend a career stealing shit for fun.


Low-end criminals fish based on data leaks all the time. More data, especially cross-referencable data, will make this ever easier.


With the new crop of agentic coding tools, you can whip up such an app in a few hours for all burglar buddies to use.


> Most of us aren't being pursued by stalkers or assassins.

Most of us, but for those that are...

However, in the world we live in today, the various LEOs are using this type of data to find people they do not like. It's getting to the point that I pine for the days of good ol' 1985 where you could just be another anonymous person in public with no tracking of your every move.


No, but every damn thing seems to be that way by default, so we are expecting everybody to opt out rather than opt in most of the time.


Fwiw, from the people I know using Strava, it's less the sharing/reading-others'-efforts aspect that makes them use it, and more the analysis, dashboards and stuff like that.


For me it's both. I compare my runs on routes and segments going back years. The social part is nice to share info about trail conditions and see when my friends hit a big effort or PR.


Yes, all of which can be purely personal and not shared beyond the device.


Sure, but many people want to use Strava for more than one purpose.

a) Analysis and tracking of your own personal goals. (Some of the tools are better than the stuff available on the device itself.)

b) Sharing and socialising some other activities.

You can be careful and only allow certain activities to be public but you'll make mistakes and eventually many people will just think "whatever, I'll just default to public and remember to hide the ones I don't want to be public" and then it's even easier to make mistakes.

Defaulting to "opt-in" is all well and good until a human makes a mistake.


IMHO, with unusually sensitive things like precise location data, it could just not let you opt in to making it all public, and make it much easier to share with specific named friends than to publish to a public directory.


I really don't understand these criticisms of Strava, it has excellent privacy controls so you can share as little or as much as you want. You can already choose to share your activities with only your friends (followers). Or keep your activities private or hide the location data.


It does but my point is that your settings are applied to all activities.

Here are a few examples that might help demonstrate my point:

I used to do parkrun regularly. I had no problem sharing my Strava activities for parkrun because me doing it wasn't a secret, nor was the location secret, nor was my time secret. All of these things could be found from the parkrun website once the results had come up. John Doe was at this location at 9am and ran this route with 400 others in a time of 26 minutes or whatever.

I was also part of a cycling club that did a regular "club run" on a Sunday. 5-15 of us all doing the same route. It was good for club morale for us all to upload our rides to help show how popular it was and encourage other club members to come along. They could see that we weren't going at a silly pace and that we stopped regularly to regroup as we had riders of all abilities and speeds riding with us.

But then I also helped out with my kids' running club at school, taking a bunch of 7-11 year olds on a 20 minute jog/run (depending on how quick they were) around the local area. This absolutely should not appear on Strava (public or not). The running club wasn't a secret (everyone at the school knew since they had the option of letting their kid do it) but that's a whole world of difference from having it public on Strava showing the usual start time, the various routes we used to take, where we stopped, etc. Privacy zones can help hide the start/end but that wouldn't help hide everything.

We just made sure that all of the parents who helped out knew that we shouldn't even record it with their smartwatch. I just used to create a manual entry of "Morning run" with approximate distance and time. That was good enough for my training stats.

There's no one privacy setting that handles all of this. Whatever setting you use relies on me to manually adjust the activities that don't fit that setting. The problem is that humans are fallible, so remembering to make it private or hide the location data isn't entirely reliable. You're also at the mercy of Strava (or whatever) not doing something stupid and accidentally making private data visible due to some bug, glitch or leak.


Right, requiring human intervention to share a run (other than maybe with eg a specific small circle of mutual friends) seems like it solves all those problems, other than perhaps being annoyed that you forgot to manually share a run.

But at least that's a failure you can fix once you notice, as opposed to making something public that shouldn't have been. Letting people opt in to automatically sharing runs to the public just seems like something designed to get people to share stuff without thinking about it.


You can already do that with Strava if you want to. Just make activities private by default, or don't sync it to Garmin and upload the files manually.


I'm saying something a bit different: that even letting people opt in to sharing every run they track publicly is just asking for trouble. It's setting people up for their information to be made public when they forget to turn it off or that they turned it on in the first place.

Maybe "automatically share everything to the globe" should just not be an option for sensitive data like this.


Strava has had a lot of privacy issues over the years, particularly with stuff like flybys.


> and more because of the analysis, dashboards and stuff like that

Which is weird, because if they bought a Garmin device, they already have all that built in.


Which if you've ever had a Garmin device + tried Strava, you'd realize that perhaps Strava provides additional insights on top of what Garmin provides?


Genuinely not sure what insights they provide that you don’t get out of the box from Garmin.

The social stuff is nice though.


> Genuinely not sure what insights they provide that you don’t get out of the box from Garmin.

Genuinely weird to make statements like "they already have all that built in" if you don't even know what Strava provides, don't you think?


I’ve been using both for ~7 years so I’m pretty familiar with them…


I agree with you ... but gotdamned if I don't see another unasked-for shared workout stat.

I have the family exercise group on mute, lol


That's precisely why you want it in a safe.


What about drunk driving laws?


Same argument applies. Driving for 1 km at 0.01 under the speed limit while over the legal blood alcohol limit is safer than driving at the speed limit for 10 km while just under the alcohol limit.

It's very easy to come up with thought experiments to show that technically illegal scenarios are not necessarily more dangerous than some legal scenarios.

The law is often made to be easy to apply, not for precision. Hard to see how anyone could see otherwise.

That's not to say that the laws are necessarily problematic. You have to draw the line somewhere.


To an LLM, answering “no” and changing the mode of the chat window are discrete events that are not necessarily related.

Many coding agents interpret mode changes as expressions of intent; Cline, for example, does not even ask: the only approval workflow is switching from plan mode to execute mode.

So while this is definitely both humorous and annoying, and potentially hazardous based on your workflow, I don’t completely blame the agent because from its point of view, the user gave it mixed signals.


Yeah but why should I care? That’s not how consent works. A million yesses and a single no still evaluates to a hard no.


The point is that if the harness’ workflow gives contradictory and confusing instructions to the model, it’s a harness issue, not necessarily a model issue.


First it was a model issue, then it was a prompting issue, then it was a context issue, then it was an agent issue, now it's a harness issue. AI advocates keep accusing AI skeptics of moving goalposts. But it seems like every 3-6 months another goalpost is added.


Your comment doesn’t make as strong of a point as you think it does; it might make the opposite point.

Because, yes, first, it was a model issue, and then more advanced models started appearing and prompting them correctly became more important. Then models learned through RLHF to deal with vague prompting better, and context management became more important. Then models became better (though not great) at inherent context recollection and attention distribution, so now, you need to be careful what instructions a model receives and at what points because it’s literally better at following them. It’s not so much that the goalposts are being moved, it’s that they’re literally being, like, *cleared*.

This isn’t a tech that’s already fully explored and we just need to make it good now, it’s effectively an entirely new field of computing. When ChatGPT came out years ago no one would have DREAMT of an LLM ever autonomously using CLI tools to write entire projects worth of code off of a single text prompt. We’d only just figured out how to turn them into proper chatbots. The point is that we have no idea where the ceiling is right now, so demanding well-defined goalposts is like saying we need to have a full geological map of Mars before we can set foot on it, when part of the point of going to Mars is to find out about that.

As a side point, the agent is the harness; or, rather, an agent is a model called on a loop, and the harness is where that loop lives (and where it can be influenced/stopped). So what I can say about most - not all, but most, including you, seemingly - AI skeptics is that they tend to not actually be particularly up-to-date and/or engaged with how these systems actually work and how capable they actually are at this point. Which is not supposed to be a dig or shade, because I’m pretty sure we’ve never had any tech move this fast before. But the general public is so woefully underinformed about this. I’ve recently had someone tell me in awe about how ChatGPT was able to read their handwritten note and solve a few math equations.

