The government does NOT let people have choices in many cases. People should NOT be forced to choose between medical privacy and potential prosecution.
That your comment even implied that would be acceptable in this context is appalling.
I don't know where you got "the government" from. All I'm saying is that apps should be allowed to have cute designs or boring designs, based on their own judgement, and that people should be allowed to freely choose between them. No one should be FORCED to choose anything, I agree, and I didn't imply anything like that.
Parallel construction like that is unambiguously fruit from the poison tree. It should never be allowed, and the fact that it is used routinely is one of the many ongoing travesties in the US.
My understanding is that it would be, if admitted to. That's where the parallel construction comes in: establish an evidentiary trail that's plausible enough to withstand defense scrutiny, and count on the court itself (i.e., the judge) not to dig any deeper.
Because they think it might make people give a shit enough to do something to change that outcome?
Fear is a strong motivator, but it is not a good one in this case. To really be effective, there must be the threat of direct, immediate, and severe consequences.
Instead it causes people to treat their messages as hyperbolic and undermines their entire movement.
They probably did not suddenly wake up after six months and realize the Indian developers were not getting the job done. They probably lied about how long it would take. The consultant who said they could do it in a month probably also lied about their estimate.
Now, you might think I should be generous here and give them the benefit of the doubt. However, I once had the chance to talk with the CTO of a major embedded consultancy about how to get those first few jobs where you really can't be confident about any estimate, and that was the explicit and unambiguous advice he offered me: lie. Tell them you can do it.
Once a company hires a consultant, it can take a lot of pain to make them go back to the drawing board. They do not want to admit they made a mistake hiring someone, so they will accept less than they expect… but only up to a point.
Fair, I was hand-waving to make a point. "If it generates more than $1100 + (resale price * WACC) + the opportunity cost of the physical space/etc" would have been more accurate.
But the point is — you don’t decommission profit generators just because a competitor has a lower cost structure. You run things until it is more profitable for you to decommission them.
Wait, you think AI won’t eventually have full control over a bio lab, where it can manipulate an unsuspecting tech to produce and release a bioweapon to accomplish that explicit goal?
Because I think that seems virtually inevitable at this point.
Humans will give a slop machine control of a lab full of CRISPR machines because they think it might make them a dollar? It wouldn’t take Supreme Super Intelligence for that to go badly.
They don’t have to hand over control to lose control to AI. People are easily manipulated, and AI has proven itself able to manipulate people. How long until a tech is tricked or coerced into doing something dumb on a planet scale, based on intentional misinformation given by its apparently benevolent AI assistant?
Because they lack any better signals from within the company. At several places I have worked, hiring was almost fully detached from the groups that needed the workers, and those groups never could find good candidates for their teams. This kind of disconnect is what corporate cancer looks like, and it is endemic in big business.