There are far far more dollars available to people that are on the "AI Safety" bandwagon than to those pushing back against it.
The idea that the Upton Sinclair effect is the source of pushback against AI Safety zealotry is getting things largely backwards, AFAICT.
Folks that are stressing the importance of studying the impact of concentrated corporate power, or the risk of profit-driven AI deployment, and so forth are receiving very little financial support.
> There are far far more dollars available to people that are on the "AI Safety" bandwagon than to those pushing back against it.
> The idea that the Upton Sinclair effect is the source of pushback against AI Safety zealotry is getting things largely backwards, AFAICT.
> Folks that are stressing the importance of studying the impact of concentrated corporate power, or the risk of profit-driven AI deployment, and so forth are receiving very little financial support.
IMO your comment doesn't substantively address michael_nielsen's comment, but I might be wrong. The following is how I understand your exchange with michael_nielsen.
The two of you are talking about three sets of people:
Let A be AI notkilleveryoneism people.
Let B be AI capabilities developers/supporters.
Let C be people concerned with regulatory capture and centralization by AI firms.
A and B are disjoint.
A and C have some overlap.
B and C have considerable overlap.
michael_nielsen is suggesting that the people of B are refusing to take AI risk seriously because they are excited about profiting from AI capabilities and its funding. (e.g., a senior research engineer at OpenAI making $350k/year might be inclined to ignore AIXR, and the same goes for a VC whose portfolio is full of AI companies)
And then you are pointing out that people of C are getting less money to investigate AI centralization than people of A are getting to investigate/propagandize AI notkilleveryoneism.
So, your claim is probably true, but it doesn't rebut what michael_nielsen suggested.
And I believe it's also critical to keep in mind that the actual funding is like this:
capabilities development >>>>>>>>>> ai notkilleveryoneism > ai centralization investigation
I'm not really trying to rebut Michael's argument -- I think it's true, to an extent, some of the time. But I think it's more often true in the reverse direction. So I don't think it's a good argument. And more importantly, I think it fails to properly grapple with the ideas, instead using an ad hominem approach to discard them somewhat thoughtlessly.
On your last point, I do think it's important to note, and reflect carefully on, the extremely high overlap between those funding ai notkilleveryoneism and those funding capabilities development.
(this discussion is quite nuanced so I apologize in advance for any uncharitable interpretations that I may make.)
> I'm not really trying to rebut Michael's argument -- I think it's true, to an extent, some of the time. But I think it's more true more of the time in the reverse direction.
I understand you to be saying:
Michael: Pro AI capabilities people are ignoring AIXR ideas because they are very excited about benefiting from (the funding of) future AI systems.
Reverse Direction: AI notkilleveryoneism people are ignoring AIXR ideas because they are very excited about benefiting from the funding of AI safety organizations.
And that (RD) is more frequently true than (M).
IMO both (RD) and (M) are true in many cases. IME it seems like (M) is true more often. But I haven't tried to gather any data and I wouldn't be surprised if it turned out to actually be the other way.
> So I don't think it's a good argument.
I might be misunderstanding you here because I don't see Michael making an argument at all. I just see him making the assertion (M).
> And more importantly, I think it fails to properly grapple with the ideas, instead using an ad hominem approach to discard them somewhat thoughtlessly.
I am ambivalent toward this point. On one hand Michael is just making a straightforward (possibly false) empirical claim about the minds of certain people (specifically, a claim of the form: these people are doing X because of Y). It might really be the case that people are failing to grapple with AIXR ideas because they are so excited about benefiting from future AI tech, and if it were, then it seems like the sort of thing that it would be good to point out.
But OTOH he doesn't produce an argument against the claim "AIXR is just marketing hype," which is unfair to someone who has genuinely come to that conclusion via careful deliberation.
> On your last point, I do think it's important to note, and reflect carefully on, the extremely high overlap between those funding ai notkilleveryoneism and those funding capabilities development.
Thanks for pointing this out. Indeed, why are people who profess that AI has a not insignificant chance of killing everyone also starting companies that do AI capabilities development? Maybe they don't believe what they say and are just trying to get exclusive control of future AI technology. IMO there is a significant chance that some parties are doing just that. But even if that is true, it might still be the case that ASI is an XR.
I mostly agree with this. Certainly the last line!
I've been reflecting on Jeremy's comments, though, and agree with him on many things. It's unfortunately hard to tease apart the hard corporate push for open source AI (most notably from Meta, but also many other companies) from more principled thinking about it, which he is doing. I agree with many of his conclusions, and disagree with some, but appreciate that he's thinking carefully, and that, of course, he may well be right, and I may be wrong.
Thank you Michael. I'm not even sure I disagree with you on many things -- I think things are very complicated and nuanced and am skeptical of people that hold overly strong opinions about such things, so I try not to be such a person myself!
When I see one side of an AI safety argument being (IMO) straw-manned, I tend to push back against it. That doesn't mean, however, that I disagree.
FWIW, on AI/bio, my current view is that it's probably easier to harden the facilities and resources required for bio-weapon development, compared to hardening the compute capability and information availability. (My wife is studying virology at the moment so I'm very aware of how accessible this information is.)