In most cases I agree with this, but maybe not for potentially dangerous things like cars? What if someone roots into their car and disables some essential safety feature - maybe even a legally mandated safety feature?
More concretely, the expertise required to access root is in a different field from the expertise required to make wise changes: you might know how to hack a car, but that doesn't mean you know how cars operate.
Given that electric cars carry much bigger responsibilities than combustion cars (avoid driving into that bicyclist), there are new concerns here that call for extra consideration.
I actually think we should be asking more of safety regulations here with regards to the design of electric/computerized cars.
Think of it this way: every concern you have about a teenager having root on their electric car applies equally to any sociopath hacker (AI-enabled, for modern nightmare fuel) who finds a root vulnerability and decides not to be a good person with it. If a teenager can mess with the collision avoidance, so can a state: Israel could modify it to murder anyone who talks shit about Israel in the car, or the CIA could turn it into a weapon, or one day some dev could push a bad OTA update. Et cetera. Our safety regulations should mandate design features that prevent a malfunctioning computer from posing any greater safety risk than any other modified part in the car.
Until very recently, cars were not remotely accessible or part of a command-and-control network, which Teslas are (perhaps other modern cars are too; I only know Tesla because I have one).
I know that the car reports practically all user events to Tesla in real time over the cell network (e.g., opening a door), and I know it has root access. I don't know whether that root is available remotely, and I don't know whether foundational commands like steering, acceleration, and braking are accessible via the CLI (they are computer-controlled actions locally).
THUS I would not want to drive a Tesla if there was the possibility of all cars being rooted and remotely controlled by an unauthorized actor.
No one should have nuclear weapons; we ought to have robust policy, institutions, and vigilance to prevent their proliferation and use.
Computerized vehicles ought to be strictly regulated in terms of how computers may affect the physical operation of the car, such that a reasonable standard of safety can be ensured beyond the usual risk one takes when hopping into a motor vehicle. The fact that a hacker can possibly kill people by rooting an infotainment system is a symptom of the general disregard for security in design, and we continue to ignore it for engineering expediency.
Umm, assuming you have the same opinion as grandparent comment, you don't want google tracking your payments but you'll happily trust google's pinky promise about your fingerprint being stored only on the phone?
I’m not commenting on the security/privacy. Only the convenience. And I find tap to pay extraordinarily convenient - a significant upgrade over the plastic card.
Strapping your card to your phone in one of those magnetic card wallets seems to achieve the same level of convenience, or close, and avoids all of the downsides of running a Googled system.
I personally find using a plastic card more convenient than fumbling for my phone and unlocking it (I don't use biometric unlocking as it's not protected by the 5th.) It's also easier to go somewhere without a phone (yes, it's possible) when you have a card on hand.
I always enjoyed the Terminator movie series, but I always struggled to suspend my disbelief that any humans would give an AI such power without having the ability to override or pull the plug at multiple levels. How wrong I was.
N.B. the time travel aspect also required suspension of disbelief, but somehow that was easier :-)
We delegate power already. Is unleashing AI in some place different from unleashing JSOC on an insurgency in a particular place? One is code and other is a bunch of humans.
You expect the humans to follow laws, follow orders, apply ethics, look for opportunities, etc. That said, you very quickly have people circling the wagons and protecting the autonomy of JSOC when there is some problem. In my mind it's similar with AI because the point is serving someone. As soon as that power is undermined, they start to push back. Similarly, they aren't motivated to constrain their power on their own. It needs external forces.
We are currently giving them similar power to the average human idiot because I figure they won't do much worse than those. Letting either launch nukes is different.
I would much prefer to see a ZK system that, by design, CANNOT reveal info to either the website or the authority. E.g., in the new EU system, it is (afaik) conceivable that the ID authority could collude with social network providers, or with government or police etc. That's not great IMO.
How about a system like Google Authenticator, in which Google knows nothing about which websites I'm logging into? Except, obviously, it'd have to be some kind of cryptographically signed response. E.g., the website puts up a QR code (according to some standard) asking "is the user 18+", I scan it with the phone, and the ID app, without accessing the internet (like Google Authenticator), responds.
I suppose that might need a secure computing environment, so no rooted phones etc. But, of course, there's a simple workaround: any adult can give their phone to a child. As long as that vulnerability exists, there's no such thing as a guarantee on the responses, no matter how you build it.
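To make the flow concrete, here is a toy sketch of the offline challenge–response idea using a textbook Schnorr signature. Everything here is an assumption for illustration: the tiny parameters are wildly insecure, the claim string `age>=18` is made up, and a real scheme would need an actual ZK credential so the response also doesn't link back to the user's identity. The point is only that the phone can answer a QR-code challenge with no network access, like an authenticator app, and the website can verify using a public key published by the ID authority:

```python
import hashlib
import secrets

# Toy Schnorr signature over a tiny subgroup (NOT secure; illustration only).
p, q = 2267, 103               # small primes with q dividing p - 1
g = pow(2, (p - 1) // q, p)    # generator of the order-q subgroup

def H(*parts):
    # Hash the transcript down to an exponent mod q.
    h = hashlib.sha256("|".join(str(x) for x in parts).encode()).digest()
    return int.from_bytes(h, "big") % q

# --- ID authority provisions the phone once, then is never contacted again ---
x = secrets.randbelow(q - 1) + 1   # private key, stays on the phone
y = pow(g, x, p)                   # public key, published by the authority

# --- Website displays a fresh challenge in a QR code ---
challenge = secrets.token_hex(8)
claim = "age>=18"

# --- Phone answers offline (no internet, like Google Authenticator) ---
k = secrets.randbelow(q - 1) + 1
r = pow(g, k, p)
e = H(r, claim, challenge)
s = (k + x * e) % q
response = (e, s)                  # sent back to the website, e.g. as a short code

# --- Website verifies using only the authority's public key ---
e2, s2 = response
r2 = (pow(g, s2, p) * pow(y, -e2, p)) % p   # recovers g^k if the signature is valid
assert H(r2, claim, challenge) == e2, "signature check failed"
print("claim verified offline:", claim)
```

The fresh challenge prevents replaying an old response, and verification needs nothing from the authority at query time, so the authority never learns which website asked. It does nothing, of course, about the handing-the-phone-to-a-child workaround above.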
My expectation is that there'll never be a single bust-up moment, no line-in-the-sand beyond which we'll be able to say "it doesn't work anymore."
Instead, agent-written code will get more and more complex, requiring more and more tokens (& NPU/GPU/RAM) to create/review/debug/modify, and will rapidly pass beyond any hope of human understanding, even for relatively simple projects (e.g. a banking app on your phone).
I wonder, however, whether the complexity will grow slower or faster than Moore's law and our collective ability to feed the AIs.
Maybe software systems will become more like biological organisms. Huge complexity with parts bordering on chaos, but still working reasonably well most of the time, until entropy takes its course.
It's already been like that for a long time. Humans are quite capable of creating complex systems that become unwieldy the bigger they get; no one person can understand all of it. I will offer the AT&T billing system as an example I'm all too familiar with as a customer, due to the pain it causes me. So many ridiculous problems with that system; it's been around a long time, and it is just so screwball.
Biological systems are vastly more complex, and less brittle, in the sense that killing a single cell doesn't cause system failure like, for example, removing an object file from a program often would. Just look at the complexity of a single cell, and try to imagine what an analog of similar complexity would be in software.
You're kind of jumping around in scope here, and I think you got it a little wrong.
>Biological systems are vastly more complex, and less brittle, in the sense that killing a single cell doesn't cause system failure like for example removing an object file from a program often would.
Removing a single node from a massively parallel system doesn't kill the system either, it only removes one node, another node will spin up and replace it, just like a failing cell would in a biological system. One liver cell doesn't do anything for the host on its own, it's part of a massively parallel system.
> Just look at the complexity of a single cell, and try to imagine what an analog of similar complexity would be in software.
Removing some "parts" from a cell certainly could kill it, or a million other things that lead to apoptosis, or turn it cancerous. But "parts" isn't analogous to software, DNA is. The same goes for a single node in a system - remove part of the code and you're going to have big problems - for that node. But probably won't kill the other nodes in the system (though a virus could).
There are 3 billion base pairs in human DNA. I could imagine that there are more than 3 billion lines of code running important things throughout the world right now. Maybe even in one system, or likely soon there will be. With "AI" doing the coding that number is going to explode, without anyone able to understand all of it. And so I could imagine that "AI" will probably lead to some kind of "digital cancer", the same way there are viruses and other analogues to biological systems.
> "AI" will probably lead to some kind of "digital cancer"
Gosh I've just imagined someone asking an AI agent to code a computer virus to infect software "X". The virus' code will be wonderfully complex and therefore so will the response of the AI responsible for keeping "X" uninfected and in good working order.
I was imagining code becoming awesomely complex even without the adversarial element in play.
Replying to myself here. Maybe coding will eventually be simply learning how to give an AI the right prompt. e.g. instead of
"Hey AI, create my new banking app with such-and-such functionality, appearance, properties, APIs, network connections etc"
we will instead do:
"Hey AI, you are a banking app on a user's cellphone. Connect to mybank.com, authenticate the user and allow the user to perform these-and-those actions in a sensible interface in accordance with the API spec. Don't let yourself be jailbroken."
Then the virus writer's job changes into jailbreaking the AI. Obviously with an AI's assistance...?
Then it would be logical to have a single AI on the phone managing all the prompts in parallel: e.g. "Hey AI, be android by doing [actions]", "Hey AI, be firefox...", "Hey AI, be snapchat...", "Hey AI, be [insert app name]...".
One of my most fascinating reads of all time was "Brave New World Revisited" (1958), Aldous Huxley's follow-up to "Brave New World" (1932). The point then, similarly, was how mass media and TV would eventually be used to mislead populations and deflect their attention.
Such innocent times when we thought the TV could be evil.
I feel like people forget that so much of what they blame on social media now existed with television. Propaganda, misinformation, addiction, emotional manipulation, mind rot, overstimulation, excessive advertising, even moral panics blaming it for violence and deviant behavior.
Television didn't create self-reinforcing bubbles of hyperreality, because it imposed a single corporate model of reality on an entire culture. It could only do so much, being a one-way means of communication, but bear in mind that all most people do with social media now is consume. The more social media becomes like television, the worse it becomes.
I would go so far as to say that the criticisms of broadcast television were completely correct; and that for all the problems of modern centralized social media and other internet use, one major good thing that it has done is kill off broadcast television. It is much easier now than it was for much of the 20th century for random ordinary people who weren't members of established mass media organizations to broadcast their ideas to the world, and try to build an audience that cares about their message. And even though this results in a lot of bad content being made (or just content that is uninteresting to you personally), it also allows a lot of gems to rise to people's attention that never would have under the old mass culture making system.
One salient example is Grant Sanderson's 3blue1brown math explainer youtube channel and the various other people inspired by him (and often using his open-source software) to make similar math content on youtube. The kinds of math videos he makes are a pretty niche interest when you consider percentage of a regional or national TV market, and so they didn't end up getting made in the 20th century broadcast TV era of mass-culture-making.
There was some math and science content made in that regime, some of it even good - but it mostly got made by publicly-funded television studios with limited airtime, and subject to the inherent constraints of having to make mass-market-friendly content. But when you have internet-based platforms that allow people starting out as hobbyist enthusiasts to broadcast to anyone who can understand English in the entire world, you can do things like actually put real, difficult equations in your videos, and still have that build a sustainable audience.
In general the state of math and science communication on the internet is way better than it was under broadcast television, and this is one of many ways that the world has steadily improved over the past few decades.
Gems and turds. The far-right conspiracy stuff was filtered out, likewise the neonazi/technocracy stuff (and yes, there are clear historical links between technocracy and Nazi ideologies; see the history of Joshua Norman Haldeman (1902–1974), the American-born Canadian maternal grandfather of Elon Musk, and why they moved to South Africa).
You're right, the TV was evil. I suppose I meant to say: such innocent times when we thought the TV was about as evil as it could get. More better? :-)
> I feel like people forget that so much of what they blame on social media now existed with television
TV news/documentary broadcasts have a "fairness doctrine" in most of the democratic world [1], meaning both sides of political discussions must be presented. This is a very good bit of legislation which makes television (and radio) broadcasts much more impartial and open minded than a typical social media bubble.
TV programming might well be "mind rot" to some. But to equate TV news/documentaries with social media is a poor comparison. One is demonstrably worse.
> TV news/documentary broadcasts have a "fairness doctrine" in most of the democratic world [1], meaning both sides of political discussions must be presented.
That is the problem. Most discussions have more than two sides. There are lots of shades of opinion and nuances. Showing just two viewpoints might not be quite as bad as the "memes" and straw man arguments that dominate social media, but it is well down the same road.
What if, in the event of a tie (just Alice & Bob), we always decide to trust Alice. Would that not improve our probability of guessing the tie correctly, i.e. back to 80% success?
No, in fact that's the proof that adding Bob doesn't help. If Alice & Bob disagree, then since both are correct with the same probability, it doesn't matter whether you pick Alice or Bob, so WLOG you choose to trust Alice. But when Alice and Bob agree, you are also trusting Alice's output [you're only right if Alice is right, which is the same as Bob being right, since their answers match]. So in both cases you are right exactly when Alice is right, i.e. you trust Alice, and Bob's output doesn't even matter.
I think the reason it feels odd is that most people intuitively answer a different question. If you had to bet on an outcome, then Alice and Bob agreeing gives you more information. But that isn't the question here: you're either right or wrong, and whether or not Alice & Bob agree, you're effectively "wagering the same" in both cases (where your wager is 0.8, the probability [or expectation] that one is correct).
Although revisiting this, you have to be a bit careful about the argument.
Basically what you're doing is breaking down p(correct) = p(correct & agree) + p(correct & disagree), where the former is 0.8*0.8 and the latter is 0.8*0.2. Explicitly computing the conditional probabilities, however, makes the calculation harder: p(correct | agree)*p(agree) + p(correct | disagree)*p(disagree). That works out to (16/17) * (0.8*0.8 + 0.2*0.2) + 0.5 * (0.8*0.2*2), which is not easy to arrive at intuitively unless you grind through the calculation.
So _conditioned_ on their agreeing you are right ~94%, while conditioned on their disagreeing it's a coin toss (because when they disagree exactly one is right, and it's equally likely to be Alice or Bob). An interesting case where the unconditional probability is actually more intuitive and easier than the conditional.
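All of these numbers are easy to check with a quick simulation. A minimal sketch (assuming binary answers, so "both wrong" means they give the same wrong answer and therefore agree):

```python
import random

random.seed(1)
P, N = 0.8, 200_000   # Alice and Bob are each right with probability 0.8
right = agree = agree_right = disagree = disagree_right = 0

for _ in range(N):
    alice = random.random() < P   # True = Alice's answer is correct
    bob = random.random() < P     # True = Bob's answer is correct
    right += alice                # our rule: always follow Alice
    if alice == bob:              # binary answers: same correctness <=> agreement
        agree += 1
        agree_right += alice
    else:
        disagree += 1
        disagree_right += alice

print(right / N)                  # ~0.80: adding Bob changes nothing
print(agree_right / agree)        # ~0.94 (= 16/17), conditioned on agreement
print(disagree_right / disagree)  # ~0.50, conditioned on disagreement
```

The unconditional success rate stays at 0.8 no matter how you break it into the agree/disagree cases, which is exactly the point above.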