Google does the same business with VirusTotal.com.
We're hosting malware as zipped files and providing passwords to users. It's not possible for a user to accidentally run the malware.
And the problem is not about that: we removed all the files, and Google says we didn't. They're not saying the website isn't trustworthy; they're saying we still have the files, but we don't.
You have to agree that verifying that a website is free of malware is not possible, while verifying that it hosts malware is easy. If I play devil's advocate and say that:
1. A bad actor is publishing malware at example.com/malware-a.zip;
2. The antivirus measure blocks them;
3. They move it to example.com/malware-b.zip but tell Google that the site is free of all malware, because if Google hits the old URL `example.com/malware-a.zip` it returns a 404.
Should Google give the bad actor the benefit of the doubt and unblock them, or keep blocking them?
Again, I'm not downplaying the shittiness of the situation you're in right now, but this is the default behavior we all expect from any antivirus measure.
I guess technically Google flagged the site while the malicious files were still being hosted; it's not that they flagged the site when it wasn't hosting malicious content.
You buried the lede - the important context is that your site hosted malicious content at one point, regardless of intent. Whether the policy is correct or not, withholding that information in the title isn't doing you any favors.
As I mentioned before, I know hosting these files on our own server is not the best practice. There are other businesses doing the same thing, and Google is not marking them as harmful. All files are zipped with a password, and we provide the password to users.
Also, a filename is not a unique indicator to identify a file as malware. I can name a file "Google Chrome.exe", and that doesn't mean it's malware.
The second issue is, we just changed our domain name, and when I try to add it to "Authorized redirect URIs" for OAuth, I get this error: "The request has been classified as abusive and was not allowed to proceed". So I have no idea what is going on with my domain.
Zip password encryption is trivially broken and should not be trusted for anything more than basic obfuscation. I'd recommend:
- Using AES directly yourself with a strong password.
- Hosting behind authentication.
- Using a different password per user.
This should a) make it effectively impossible to detect the malware, b) make it clear that the intention is for research and education only, and c) prevent bad actors from maliciously using your hosted repository of malware.
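As a rough sketch of the per-user password idea (all names here are hypothetical, and the actual encryption step is left to an AES tool of your choice, e.g. `openssl enc` or the `cryptography` package): derive each user's archive password from a server-side secret, so nothing password-like is ever stored and no two users share a password.

```python
import base64
import hashlib
import hmac

def archive_password(server_secret: bytes, user_id: str) -> str:
    """Derive a per-user password for an encrypted sample archive.

    HMAC-SHA256 keyed with a server-side secret means passwords are
    reproducible on demand but never stored, and one leaked password
    reveals nothing about any other user's password.
    """
    digest = hmac.new(server_secret, user_id.encode(), hashlib.sha256).digest()
    return base64.urlsafe_b64encode(digest).decode().rstrip("=")

# The derived password would then be fed to an AES tool, for example:
#   openssl enc -aes-256-cbc -pbkdf2 -in sample.zip -out sample.zip.enc -pass pass:<derived>
pw = archive_password(b"keep-this-secret", "user-42")
```

The point of the HMAC is determinism: you can regenerate any user's password on request without keeping a password table around.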
> when i try to add it to "Authorized redirect URIs" for Oauth getting this error: "The request has been classified as abusive and was not allowed to proceed"
Is this for Google OAuth? I don't know anything about this system, but I wouldn't be surprised if its data lags behind the up-to-date Safe Browsing database. Maybe try again tomorrow?
We just moved our API to another domain. That's why it's working well now. I have the same opinion: maybe their system is cached, but it's affecting our business. And if you get declined once, they don't respond to you quickly again.
Well, I have to say, it wasn't really a good idea to put malware files directly on your business domain, even for research purposes and with encryption.
Send a link to an external host next time!
That's right. It's not the best practice; I totally agree with that. The problem is, I already removed the files, but Google is not accepting that. And they do the same business with VirusTotal, which is working really well.
They're just clicking the decline button, and it's affecting our business.
Any proxy could act as a MITM, so someone using a malicious fork of Stealth could cause problems.
But the net is like this already. One site may send you to another site that tricks you and steals your data. And a relatively recent vulnerability prevented any WebKit-based browser from correctly indicating whether a site's URL matched the actual server, so you'd have no visible way of knowing whether a site using HTTPS was legitimate.
Using a VPN could be better, but it's sometimes worse, because you just change who is trusted most (the VPN provider): they know one of the addresses you're coming from and everything you're doing, and they can record and sell that data.
I mean, technically, Mozilla's ca-certificates tracker is the biggest attack vector in the internet's infrastructure [1],
and TLS transport encryption relies heavily on identification mechanisms that are recorded, verified, and stored in a way that requires a lot of third parties to be trusted, too.
Even ignoring that Salesforce is a private entity with financial motivations, and that the server is hosted on OSes 17 years out of date, I wouldn't trust any single entity with a responsibility like this. Maybe the UN, but nothing below that, and I think legislation would be the "most correct" approach.
I hope that in the future (assuming tlsnotary works in the peer-to-peer case) this can be solved with content-based signatures instead of per-domain-and-IP-based certificates.
I mean, a snakeoil cert has to be assumed to be just as legit as a cross-signed cert these days, given how low the bar for obtaining Let's Encrypt certs is.
Certificate pinning was a nice approach from the statistical perspective, but with Let's Encrypt taking over, a pin is only valid for three months (max) until the pinned cert leads to a required reverification.
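To make the pinning point concrete, here's a minimal stdlib-only sketch (function names are mine) of computing a certificate fingerprint you could pin. This is the simplest full-certificate-fingerprint variant rather than an SPKI pin, and its weakness is exactly the one described above: a 90-day Let's Encrypt renewal changes the fingerprint every time.

```python
import hashlib
import ssl

def cert_fingerprint(pem_cert: str) -> str:
    """SHA-256 fingerprint of a certificate's DER encoding.

    Pinning the full-certificate hash is simple but brittle: any
    reissue of the cert (e.g. a Let's Encrypt renewal) produces a
    new fingerprint, invalidating the pin.
    """
    der = ssl.PEM_cert_to_DER_cert(pem_cert)
    return hashlib.sha256(der).hexdigest()

# Fetching and pinning a live server's cert (network access assumed):
#   pem = ssl.get_server_certificate(("example.com", 443))
#   pin = cert_fingerprint(pem)
```

Pinning the SubjectPublicKeyInfo hash instead would survive renewals that reuse the same key pair, but that requires ASN.1 parsing beyond the stdlib.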
Can you please share which mail service you are using?
And I tested the login page; it looks OK. Can you create a new user? You are probably trying the wrong password.