
https://www.fandom.com/licensing

> Except where otherwise permitted, the text on Fandom communities (known as “wikis”) is licensed under the Creative Commons Attribution-Share Alike License 3.0 (Unported) (CC BY-SA).

> To grow the commons of free knowledge and free culture, all users editing or otherwise contributing to wikis that use the CC BY-SA license agree to grant broad permissions to the general public to re-distribute and re-use their contributions freely for any purpose, including commercial use, in accordance with the CC BY-SA license. Such use is allowed where attribution is given and the same freedom to re-use and re-distribute applies to any derivative works of the contributions.



Using any automation to copy from Fandom is prohibited, and they do not provide backups of images and other media. This means that any attempt to copy a wiki to another host is either a manual process that could take days of downloading, or a violation of the Computer Fraud and Abuse Act and a federal crime.

And then they won’t delete your wiki if the community asks for it. Fandom is hostile to forks.


> or a violation of the Computer Fraud and Abuse Act and a federal crime.

I thought the LinkedIn scraping case set the precedent that scraping publicly accessible data was legal?


Maybe it is; I’m old and remember old stuff. Also, the rules may apply differently under Trump, who knows.


ArchiveTeam has an automated process for uploading wikis to archive.org, and it works fine for Fandom-hosted wikis.

https://wiki.archiveteam.org/index.php/Wikibot
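Fandom communities run MediaWiki, so a manual fork can enumerate page titles through the standard api.php endpoint and feed them to an export step. A minimal sketch of that enumeration, assuming the stock MediaWiki `allpages` API is reachable (the wiki URL below is a placeholder, and Fandom's terms and rate limits still apply):

```python
import json
from urllib.parse import urlencode

def allpages_url(api_base, apcontinue=None, limit=500):
    """Build a MediaWiki API URL that lists article titles in batches."""
    params = {"action": "query", "list": "allpages",
              "aplimit": limit, "format": "json"}
    if apcontinue:
        # Resume from the continuation token of the previous batch.
        params["apcontinue"] = apcontinue
    return api_base + "?" + urlencode(params)

def parse_allpages(body):
    """Extract page titles and the continuation token from a response body."""
    data = json.loads(body)
    titles = [p["title"] for p in data["query"]["allpages"]]
    cont = data.get("continue", {}).get("apcontinue")
    return titles, cont

# Hypothetical usage (example.fandom.com is a placeholder):
#   url = allpages_url("https://example.fandom.com/api.php")
#   fetch url, then parse_allpages(response_text); repeat while a
#   continuation token is returned, then pass titles to Special:Export.
```

Looping until `parse_allpages` returns no continuation token yields every title, which is the tedious part Wikibot automates; images and other media still have to be pulled separately.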


Using automation to scrape lots of sites is prohibited, but the AI crawlers seem to get away with it?


Can my browser not be an AI crawler?


Isn't that what a DMCA takedown is for?



