NYPD complained to Waze about warning drivers of police checkpoints [0]. This quote from the article seems as compelling an argument as anything I've seen re: HKMap:
> The Waze website advertises the feature on its website, saying, "Get alerted before you approach police."
It's a different category of action, but the app comes down to the same thing. It marks locations, right? Or I suppose moving points.
Or am I misunderstanding something?
If I think this is unacceptable[1] and I'm skeptical of the nebulous "AI will do this job in the future" claims, are there conclusions to be drawn other than "UGC isn't sustainable on a global, public platform"? That is, are there serious alternative options, or anybody working on ideas in this space?
I think it's readily apparent that "just show everything" doesn't work if you want to attract a mainstream audience, but I'm reluctant to just give up on the global public platform that FB was originally idealized as.
[1] I think I'd still find it unacceptable if the moderators were paid 6 figures, had extensive 1:1 counseling, or got any other perks - selling mental health for money is something I'm happy saying a utopian society wouldn't include.
My only thought is "scale only through federation". It's impossible to moderate the content of a billion users. It's pretty easy to moderate the content of 1000 users. And if you're moderating the content of 1000 users who mostly come from an actual community (physical or subcultural) that shares the same values, you don't have to have your moderation rules enforced worldwide to the lowest common denominator of different value systems, or by outsourced wage slaves from a different culture without any context.
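To put rough numbers on that intuition (the report rate here is a made-up figure, just for scale):

```python
# Back-of-envelope moderation load at different community sizes.
# The report rate is an invented, illustrative assumption.

REPORTS_PER_USER_PER_MONTH = 0.5  # assumed average; real rates vary widely

def daily_reports(users: int) -> float:
    """Expected moderation reports per day for a community of `users` people."""
    return users * REPORTS_PER_USER_PER_MONTH / 30

for users in (1_000, 1_000_000, 1_000_000_000):
    print(f"{users:>13,} users -> ~{daily_reports(users):,.0f} reports/day")

# 1,000 users: ~17/day, comfortably handled by a few volunteers who know
# the community. 1,000,000,000 users: ~17,000,000/day, an industrial
# operation that can only run on outsourcing and automation.
```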
Source: I'm a moderator on a Mastodon instance with about 1000 users (connected to the larger Fediverse of about 2 million users). We've got 5 moderators, and we respond to reports (either by or about our users) promptly and well. We don't have to police the behavior of the whole Fediverse (just our users), and we don't have to protect the whole Fediverse (just our users).
I haven't used Mastodon, but this is kind of the reddit model, right? Obviously some subreddits are >>> 1k subscribers, but they typically scale the number of moderators up accordingly.
Do Mastodon admins share common blocklists or anything? If a bad actor decided to start posting offensive content to random instances, I assume you can ban that {username | IP} from the instance used by you and your users, but would they then be able to just iterate through the other n Mastodon instances? Is there anything in place to prevent them from creating a new account and repeating ad nauseam? (not that there is on Facebook, necessarily)
(I don't know anything about Mastodon, which I'm sure is obvious from some of my questions - if they're incoherent in the context of Mastodon that's totally fine)
It's not quite like that – subreddits all live on one server and a post appears in only one subreddit, whereas Fediverse instances are separate servers and posts propagate between different instances.
Mastodon instance administrators have the ability to block whole instances, and this is usually done because of bad/nonexistent moderation policies. There is some sharing of instance blocklists, more as a matter of convenience than of policy.
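If it helps, here's roughly what that blocking looks like mechanically - a hypothetical sketch, not Mastodon's actual implementation; the names and data structures are invented:

```python
# Hypothetical sketch of instance-level blocking ("defederation"), loosely
# modeled on how Fediverse servers filter incoming activity. Invented names;
# this is not Mastodon's real code.

BLOCKED_INSTANCES = {"spam.example", "harassment.example"}  # admin-maintained

def accept_incoming_post(author_handle: str) -> bool:
    """Accept a federated post only if the author's home instance isn't blocked.

    Handles look like 'user@instance.example'. Blocking an instance rejects
    every account hosted there, so a banned bad actor can't just re-register
    under a new username on the same server - they'd need a whole new host.
    """
    _, _, instance = author_handle.partition("@")
    return instance not in BLOCKED_INSTANCES

print(accept_incoming_post("alice@mastodon.example"))  # True
print(accept_incoming_post("troll42@spam.example"))    # False
```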
This is a really good question. I'm surprised how rarely we even entertain the possibility of these systems growing beyond our capacity for effective, low-suffering review and moderation.
(As for your footnote - people will rightly point out that any media will be used for some awful content, but that doesn't mean every system has an acceptable rate. "Can we get the rate low enough to tolerate?" is still a legitimate question.)
Every time Youtube makes the news for having unsavory content, their responses seem to have the same underlying tone: "we really are trying, but this is impossible." Every day, 24,000 days of video are uploaded to Youtube. Live human review for every new upload would take >100,000 full-time viewers, plus whatever is needed for comments, review appeals, and copyright notices. I can't even find good estimates on how much existing content there is. Some analysts have suggested Youtube might produce $15B/year in revenue. That review team would cost $3B a year at $15/hour, before payroll tax, benefits, counseling, etc.; so we're talking about numbers in the ballpark of determining whether Youtube can be profitable. And yeah, you can do playback at triple speed, automate some basic content-filter removals, prioritize content posted under dubious keywords, and de-prioritize major channels unlikely to upload something awful. That all lowers costs and makes it more plausible that troubling content will actually get caught.
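For anyone who wants to check the math (the upload volume and wage are the assumptions above, not verified figures):

```python
# Re-running the comment's arithmetic. The upload volume (24,000 days of
# video per day, i.e. ~400 hours/minute) and the $15/hour wage are the
# commenter's assumptions, not verified figures.

video_hours_per_day = 24_000 * 24        # 576,000 hours uploaded daily
reviewer_hours_per_year = 2_000          # ~40 hrs/week x 50 weeks
wage = 15                                # USD/hour, before tax/benefits/etc.

reviewers = video_hours_per_day * 365 / reviewer_hours_per_year
annual_wage_cost = video_hours_per_day * 365 * wage

print(f"full-time reviewers needed: ~{reviewers:,.0f}")      # ~105,120
print(f"annual wage cost: ~${annual_wage_cost / 1e9:.1f}B")  # ~$3.2B
```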
But you're right: even if we wanted to run this as a public good, all those cost-savings only serve to concentrate the human toll. Law enforcement has been dealing with this issue since photography became widespread, and hasn't found a way past traumatizing people who have to sort through abhorrent content. Now, the same digitization that makes distributing media nearly free makes it possible to create that experience a thousand times as often. (Shock sites, after all, are just a version of the same pattern consciously centered on the unwanted views.)
So what to do?
Robust tagging systems can help reduce unwanted exposure to content without requiring moderator involvement, the same way they've helped fanfiction communities reach a detente over protecting readers while allowing mature content. But it's telling that the shouting matches over insufficient tagging and robust age controls continue, and of course tagging only works for content that's acceptable on the site - there's no tag that means "take this down immediately".
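As a sketch of why that's cheap - the filtering is reader-side and needs no moderator at all; everything here is invented for illustration:

```python
# Minimal sketch of reader-side tag filtering, in the spirit of fanfiction
# archives: content carries author-supplied tags, and each reader hides what
# they don't want to see, with no moderator in the loop. Tag names invented.

from dataclasses import dataclass, field

@dataclass
class Post:
    title: str
    tags: set = field(default_factory=set)

def visible_posts(posts, hidden_tags):
    """Return only posts whose tags don't intersect the reader's hide-list."""
    return [p for p in posts if not (p.tags & hidden_tags)]

feed = [
    Post("Fluffy epilogue", {"fluff"}),
    Post("Graphic battle scene", {"violence", "mature"}),
]
print([p.title for p in visible_posts(feed, {"mature"})])
# ['Fluffy epilogue']
```

Of course, this whole scheme hinges on authors tagging honestly, which is exactly where the shouting matches start.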
Federated approaches like Mastodon and WhatsApp groups might reduce legal liability, help people find spaces they're comfortable in, and diminish unwelcome surprises. But that sacrifices both oversight and unity - even if we accept that some instances will be used to trade illegal content, plot violence, etc., we still lose the network effects of "searching Youtube" or "being on Facebook".
At the other end, I suppose ever-more-aggressive centralization is possible; the ability to multiply harm depends on being able to act repeatedly. If Google demanded your SSN to leave a Youtube upload or comment and banned bad actors for years, or the government ran Facebook with warrantless access to every message, the frequency of this sort of content would decline and the moderation burden might shrink greatly. (After all, anyone can tack a grotesque picture to a physical bulletin board, but there's not much of an issue with obscene UGC there.) Of course, the issues with that are screamingly obvious. Identity and the risk of consequences have a chilling effect, legitimate behavior can still be embarrassing when such a store is predictably breached, usage creep for this sort of information is not just a risk but an invariant, and governments are not only inclined but often required by law to act on all sorts of not-actually-bad content like "promoting drug legalization".
Beyond that... I don't actually know. Allowing even minimal privacy and UGC while trying to maintain a palatable space seems like a genuinely unsolved problem. There might be a lot of unexplored value in trying to reduce psychological cost without taking humans out of the loop. Most 'digital natives' have seen some horrible shock content without lasting harm, so perhaps the winning approach is to reduce the time/frequency/vividness of exposure to a manageable level. At the very least, it seems like a less exhausted space than the technical side of things.
This is very thoughtful and gets at the heart of the matter.
It's a little discouraging (considering how influential it is in the evolution of tech) how quickly HN reaches for "rah rah decentralization" as a panacea for anything social media.
The public platform needs to be decentralized. We need a social protocol where the data lives in the protocol rather than on some corporate server, subject to that company's whims.
We currently live in a dystopia where your Twitter or Facebook account could be banned at their whim, leaving you a digital outcast.
I’m not sure I follow. Almost any time there is an issue with a large social network folks on HN say decentralization needs to happen to fix it. But how do you “fix it” while still having the same or better user experience?
It’s a very difficult problem to solve and I don’t think saying “make it decentralized! Make it live on a protocol!” is useful without the extensive “how” that everyone seems to ignore.
So utopia is a completely decentralized uncensorable network where basically anything goes? Not wholly achievable probably because of DNS etc. but you could come close. Not an unsupportable position but you probably amp up the underlying issues Facebook is trying to address by 10x.
Are there examples of decentralized/unmoderated platforms with similar popularity (= success for the purpose of this conversation) to Facebook?
I suspect reddit is the best example of blending "something for everybody" with "default users don't see offensive things", while also being able to remove things like the content in the story, but obviously they still have human moderators.
I think the ultra-open / libertarian model is fundamentally incompatible with broad acceptance, personally [1], but I'm happy to be proven wrong. Early IRC/BBSes aren't very convincing to me because even if they were unmoderated the barrier to entry (knowledge, hardware) was high enough to limit adoption.
[1]: I think sites like Gab indicate that even if you clone a successful product and market it as "<x> but with free speech", the free speech part winds up being a negative factor, concentrating elements that will scare off mainstream users.
I didn't read the Google Cloud part as an assertion of authority, just as a disclosure - if they're talking about competitors (and especially how the choice of competitor negatively impacted GitHub) I appreciate it.
(disclosure - I work for a competitor, not on cloud stuff)
I also appreciate it. It's very common for owners/employees to criticize/attack competitors online anonymously. While the GP wasn't attacking, it's just nice to know he works for a competitor.
This is an under-communicated point, and it both makes the admission more palatable and subtly increases the sense of value through the implication that you'll need multiple days to actually take in the breadth of the institution's collections.
I had a great experience, even as someone primarily working in embedded development. They only targeted SF & NY when I went through their process, which wound up unexpectedly being a dealbreaker for me, but if it weren't for that I absolutely would've taken one of their offers.
I have a very similar situation. I started taking pitch calls and then ran the numbers on what it would cost our family to move to San Francisco / Silicon Valley, and it didn't work out. But their process was fantastic, and if my situation is different in the future I'd absolutely use them again.
Edit: By the way, you should email them about the bug with your profile, they'd probably want to know about it / give you another shot.
Same here. The Bay Area has now priced out professionals with families. I was looking at $40-$60k more per year for housing depending on where in the Bay Area I would live, and I'm currently making over $120k...
Thanks for everything you do - your app is a huge part of the reason I can live in the Seattle area without a car.
One note - I just donated, but I almost bailed out when I saw I had to go through the whole UW process (especially when I then paid with PayPal, which has all of that info anyway...). I'm sure you're restricted to using their platform, but even better PayPal integration would go a long way.
---
[0]: https://www.cnn.com/2019/02/07/us/nypd-tells-google-stop-waz...