Hacker News

Using any automation to copy from Fandom is prohibited, and they do not provide backups of images and other media. This means that any attempt to copy a wiki to another host is either a manual process that could take days of downloading, or a violation of the Computer Fraud and Abuse Act and a federal crime.

And then they won’t delete your wiki if the community asks for it. Fandom is hostile to forks.



> or a violation of the Computer Fraud and Abuse Act and a federal crime.

I thought the LinkedIn scraping case (hiQ v. LinkedIn) set the precedent that scraping publicly accessible data was legal?


Maybe it is; I'm old and remember old stuff. Also, the rules may apply differently under Trump, who knows.


ArchiveTeam has an automated process for uploading wikis to archive.org, and it works fine for Fandom-hosted wikis.

https://wiki.archiveteam.org/index.php/Wikibot
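For context, Fandom wikis run MediaWiki, which exposes a standard export endpoint that archival tools like this build on. A minimal sketch of constructing such an export request, assuming a hypothetical wiki URL and page titles (and keeping in mind the parent comment's point that automated copying is against Fandom's terms):

```python
from urllib.parse import urlencode

def export_url(wiki_base: str, titles: list[str]) -> str:
    """Build a MediaWiki api.php URL that exports the given pages as XML.

    `wiki_base` is the wiki root, e.g. "https://example.fandom.com"
    (hypothetical). Uses the standard MediaWiki query API with export=1,
    which returns the same XML dump format as Special:Export.
    """
    params = {
        "action": "query",
        "titles": "|".join(titles),  # MediaWiki separates multiple titles with "|"
        "export": 1,
        "format": "json",
    }
    return f"{wiki_base}/api.php?{urlencode(params)}"

# Example against a hypothetical wiki:
url = export_url("https://example.fandom.com", ["Main_Page", "Characters"])
```

This only covers page text; as noted above, images and other media have no export endpoint, which is part of why a full copy is so painful.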


Using automation to scrape is prohibited by lots of sites, but the AI crawlers seem to get away with it?


Can my browser not be an AI crawler?


Isn't that what a DMCA takedown is for?



