> I don't know why people thought they could start using random TLDs on their own; there was always the risk they could be delegated officially.
For a long time, the set of non-country TLDs was mostly static, and country TLDs were always exactly two characters. The risk was always there, but it seemed only theoretical until, recently, the floodgates opened and nearly every TLD became fair game.
"Recently" being relative of course. The big gTLD expansion was first announced in 2008 when the application process for new gTLDs started being worked on.
There's been plenty of time to fix things, but as usual, people will delay and only fix things when they break. Hence ICANN's Controlled Interruption process.
Those of us who used augmented DNS roots knew of this problem long before 2008. In the various augmented root systems, there were a lot more non-country top-level domains a lot (i.e. years) earlier, and people had already registered ones that conflicted with private-use top-level domains beloved of wrongheaded system administrators and developers. AlterNIC had corp., for example.
So "recently" is even more relative than that. (-:
People still use .local in this day and age, I’m so freaking tired of editing my avahi-daemon config to not spend 10 seconds resolving internal resources.
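(If it helps anyone else: on a typical Linux box the slow lookups are usually governed by the hosts line in /etc/nsswitch.conf rather than avahi-daemon.conf itself. A rough sketch, assuming the common nss-mdns setup:)

```
# /etc/nsswitch.conf (sketch; assumes the common libnss-mdns setup)
# Default-ish line: mdns4_minimal intercepts *.local and, with
# [NOTFOUND=return], never falls through to unicast DNS.
hosts: files mdns4_minimal [NOTFOUND=return] dns

# If your "internal resources" under .local actually live in unicast DNS,
# the usual (ugly) workaround is to let dns answer first:
#hosts: files dns mdns4_minimal [NOTFOUND=return]
```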
Using .local is just asking for trouble -- avahi/bonjour/mdns have used that pseudo-TLD for ages. Using that locally is a great recipe for things breaking.
In fact, Apple officially registered it for this purpose in order to avoid this kind of conflict. Conflicts with manually assigned .local names had been happening since the beginning of Apple's zeroconf work, and other people were frustrated by them (you can see on Wikipedia that Microsoft had previously advocated manual use of .local in its documentation). But the designation has been official since 2013.
Edit: This was all described in another part of this thread already, sorry for the pure duplication.
I looked through that trying to find why the parent commenter shouldn't use .local as they described, and it sounds like their use case is exactly what the RFC recommends. Can you explain why it's a bad idea?
If they're relying on a DNS server for resolution, they might not get the behavior they expect on an RFC compliant system.
> Any DNS query for a name ending with ".local." MUST be sent to the mDNS IPv4 link-local multicast address 224.0.0.251 (or its IPv6 equivalent FF02::FB).
And an issue most sysadmins would be familiar with unfortunately.
For years, Microsoft's own guidelines suggested using .local for internal AD domains. Then one day Apple started using it for mDNS, and Microsoft's advice suddenly became "never use .local".
But renaming an AD domain is painful and in many scenarios not supported at all, and people with domains more than a few years old are regularly pointed at Microsoft's recommendations and forced to explain the situation.
Ugh, our AD domain at work isn't even that old but was set up by some local consulting company and of course they used .local.
It causes all sorts of grief when shared SMB drives won't resolve on linux, or you can't tell at first blush whether the DNS server is working properly or you just happen to have mDNS correctly resolving something for you on one machine (but not on another).
As far back as I can remember the domain creation wizard has yelled at you for using a non-delegated zone, .local was a stupid default they put into SBS and unfortunately it led them into a mess of saying “yes, this is OK” and “no, don’t do this” at the same time...
And on BSD/Linux (with avahi). And on phones. Actually, I think ONLY Windows does not use this, though the mDNS implementation for Windows (Bonjour) uses it too.
Right up until they can't. Better to use things that are guaranteed to work, not just things that work now by happenstance but are not guaranteed to continue working in the future.
One big usability issue with .test domains is that it doesn't work well with Chrome's address bar. If I type "my.test", it tries to do a Google search for it. "my.dev" on the other hand tries to connect to my local http service. This is a stupid problem to complain about, but typing out the full "http://my.test" every time just becomes annoying.
The annoying part about using .test, .localhost, .invalid, etc. is that Chrome still treats "my-project.localhost" as a search instead of doing a DNS lookup. Need to add a trailing slash or specify the protocol, which honestly is just annoying enough that I'd rather risk collisions and use a TLD Chrome recognizes like .dev.
You're free to do whatever you want on your local network. If you have collisions with resources on the global Internet, however, and your local network is connected to the Internet, then you're setting yourself up for problems. Hence why it's a best practice not to do that.
Except for things like this issue, where you'd be affected when using Chrome or Firefox - even if your local network were completely disconnected from the internet.
Chrome and Firefox are primarily designed for use on the World Wide Web. If you're going to use a locally modified network you may need to use locally modified browsers as well, just as you'll need your locally configured DNS.
> If you're going to use a locally modified network
Except (correct me if I'm wrong), this is not a "modified network", it can be another disconnected network that's just as "correct" in the standard conformance aspect as the Internet. It's more like Chrome is "modified" to support one network better than others - or rather, to possibly break on other networks.
In fact, it seems to me like the very idea of HSTS preload list isn't friendly with UAs working on multiple separate networks.
But yeah, you're right that this is what they are designed for after all. (I'm also probably slightly biased, since some of our test environments use .dev domain in our LAN.)
Chrome is primarily an Internet browser though, not a random network browser. It wouldn't be a good idea to prioritize random network browsing at the expense of useful security features to secure browsing on the Internet.
There is no "DNS police" entity to force them, but just using a random domain will cause issues if that domain ever gets registered.
First, you will begin leaking internal data by means of DNS requests, server connections, maybe even internal emails - e.g. when a client attempts to connect to the intranet outside of VPN.
Second, you obviously won't be able to communicate with the new owners of that domain.
Chrome as a browser doesn't follow internal name assignments. Running it in a sandbox won't fix the issue that Chrome redirects preloaded TLDs to HTTPS.
Rewriting/expanding the statement using my words, this is what they said: "I don't know why people (developers) thought they could start using random TLDs for local/testing purposes".
Yes, Google got the TLD and controls it now. The GP isn't criticizing Google (although I still think "we bought a TLD for internal use" is worth debating), they criticize people like the blog author for using 'random' (i.e. "not meant for that purpose") TLDs in their development process.
You shouldn't be locally rebinding domain names that exist on the global Internet. That is a recipe for disaster, as it makes your test suite and potentially even application logic dependent on your /etc/resolv.conf. It can cause spectacularly difficult-to-debug issues down the road as well as leaks of potentially confidential data (this is what ICANN's Controlled Interruption process was designed to mitigate). And you just can't expect other developers to have to change their DNS config just for tests to pass.
The best practice for at least the past decade has been to either use subdomains of globally resolving domains that you own, or use one of the four testing TLDs specified in RFC 2606 that are guaranteed to never be delegated: https://tools.ietf.org/html/rfc2606
> The best practice for at least the past decade has been to either use subdomains of globally resolving domains that you own, or use one of the four testing TLDs specified in RFC 2606 that are guaranteed to never be delegated: https://tools.ietf.org/html/rfc2606
Unfortunately, not one of those four is designed for the use case of "private domain, for use in production", which is pretty common for companies to use.
.test is for "testing of current or new DNS related code"
.example is for "documentation or examples"
.invalid is for "online construction of domain names that are sure to be invalid and which it is obvious at a glance are invalid"
.localhost is for the loopback (and using it otherwise breaks existing code)
Nothing prevents the first three from being used for this purpose, but at best it's semantically awkward and jarring. What sysadmin wants services running on an internal VPN to use the TLD ".test" or ".example"?
I provided the answer in my previous comment that you quoted: "use subdomains of globally resolving domains", or alternatively, just use global domain names, e.g. company.com for external services and companyprod.com for internal services. Google uses a combination of all of the above.
Domain names that only resolve internally are a security anti-pattern. You should have full authentication on all services, and not rely on simply being able to reach a service in order to grant access. See e.g. DNS rebinding as one attack vector that can really ruin your day if you don't do this.
There was a post a few days ago about a systemd-resolved feature that resulted in (almost) permanently switching to the secondary DNS server when the primary failed. The primary is an internal DNS server resolving private domains. The secondary could be 8.8.8.8. It's easy to see what can go wrong if the private domain is somebody else's domain on the public Internet. A kind of honeypot for any kind of web request.
Indeed. I wrote a Frequently Given Answer about it in 2003, years before systemd and years before the 2008 events mentioned elsewhere in this discussion. Because people were seeing exactly that.
I also wrote a Frequently Given Answer describing some more of the mistakes that people have made over and over in this, that one should learn from and not repeat.
I like the facts presented in your second link, but unfortunately the tone that it is written in means that it is unlikely to be received well by anyone it is linked to. Have you considered revising it?
What's so bad about service.company-internal.com? (or service.internal.company.com, but some people understandably prefer having them totally separated) You don't have to actually resolve the internal names publicly.
> What's so bad about service.company-internal.com? (or service.internal.company.com, but some people understandably prefer having them totally separated) You don't have to actually resolve the internal names publicly.
No, but you do actually have to reserve the public domain, which can be inconvenient or undesirable for all sorts of reasons.
> Not doing so is way worse though. You could be leaking all sorts of internal requests to a malicious attacker who does register that domain name.
It is, but again, it doesn't help that none of the reserved TLDs actually address the primary use case here for a lot of companies. We can both agree that the companies should be registering their private domains, but clearly most aren't, and there's no reason they shouldn't have a free, reserved-for-private use space that does address it.
I'd also say that, while having TLDs for testing usage is acceptable, there should never be any designed for production use as that would seemingly endorse the idea. The best practice (to use domains/subdomains that you own) should always be followed, and just because people do the wrong thing doesn't mean ICANN or the IETF should go out of their way to make the wrong thing easier to do.
DNS was always a system for fetching information from SRI's NIC official master database. It was never supposed to be itself a source of truth for everything related to a domain.
Hey everyone. I'm the Tech Lead of Google Registry and I'm the one behind this (and likely future) additions to the HSTS preload list. I might be able to answer some questions people have.
But to pre-emptively answer the most likely question: We see HTTPS everywhere as being fundamental to improving security of the Web. Ideally all websites everywhere would use HTTPS, but there's decades of inertia of that not being the case. HSTS is one tool to help nudge things towards an HTTPS everywhere future, which has really only become possible in the last few years thanks to the likes of Let's Encrypt.
I bet you can't answer the actual most likely question: What are you (Google) planning on doing with .dev domains anyway?
However besides that...just curious: Will Chrome honor a header to disable HSTS for preloaded domains? (e.g. Strict-Transport-Security: max-age=0)? And what is going to happen if I submit my .dev site for removal from https://hstspreload.org/ ?
Also, reading the rules on hstspreload.org, I see the following statement: "Don't request inclusion unless you're sure that you can support HTTPS for your entire site and all its subdomains the long term"
Since you cannot actually guarantee that every .dev site will support HTTPS considering the fact that many developers use it on their LAN...don't you think you're breaking the rules here and possibly causing more problems than solutions?
I would love to be able to answer that first question for you. Unfortunately I have nothing to share yet.
My understanding is that HSTS carveouts are supported by Chrome but not yet other browsers. There's more standardization work to be done there. That said, you can only request entries on the HSTS list for domains that you own, and since no one (yet?) owns any .dev domain names, no one could request such changes. It goes without saying why you can only change security settings for domain names that you own.
And for the last question: Again, there are no .dev domain names. There never have been. It's never been available for registration. The recommendation for a long time has been to only use either (a) domain names that you actually own, or (b) domain names that are reserved for testing and are guaranteed never to exist a la RFC 2606. Using domain names for testing that don't yet exist but could in the future is a huge security hole that you must fix now. Do it now while the domain names still fail to resolve. Once they resolve, and you don't own them, then your security situation gets a lot worse.
> And for the last question: Again, there are no .dev domain names. There never have been. It's never been available for registration.
It is, however, in widespread practical use.
> The recommendation for a long time has been to only use either (a) domain names that you actually own, or (b) domain names that are reserved for testing and are guaranteed never to exist a la RFC 2606
If taken to its extreme, that recommendation would mean that any kind of internal hostnames - even dotless ones - are subject to potential override unless they are tied to a public DNS entry.
As indeed they actually are. WWW browsers have a search path, which is added on top of a second search-path mechanism that DNS client libraries have. Domain names with dots in them, but that are not fully qualified, suffer from this. Yes, this means that what http://wibble/ is right now is subject to change; and even http://wibble.wobble/ could be, depending on various factors.
Would that I had recorded the case some while back where, when I pointed this out, someone chimed in to note that xe had worked for a company that had actually taken advantage of this in order to usurp external WWW sites.
Yes. That's why it's recommended to have a subdomain of a domain you own under which all internal names live. (no need for it/them being in public DNS though)
> Using domain names for testing that don't yet exist but could in the future is a huge security hole that you must fix now.
I disagree that this is something that Google needs to do on my behalf. If a cybersquatter squats on <my-startup>.dev, I don't consider it broken if I can't access their squat page. If I'm overriding random domains without doing my due diligence, then I get what I deserve (and probably wanted the results anyway). If somebody is spoofing a domain as a method of cyber attack, the domain owner should have had HSTS if they had significant digital assets, and they would already be using a non-.dev domain.
"If I'm overriding random domains without doing my due diligence, then I get what I deserve."
I mean, that is exactly what is happening here, so it sounds like you get it. We aren't doing things on other people's behalf; HSTS is part of our plans.
An interesting consequence of your last question is that if Google added .dev and .foo to the HSTS preload list, it means they don't intend to make these domains available for public registration at all. If they did let people register them, they would have no way to enforce that the sites on them honored the HSTS requirements.
That doesn't follow. Sites that did not abide by those requirements simply would not work. The requirements are enforced by the browser.
The intention when registering such a domain name would be to follow said requirements, otherwise you wouldn't be able to use the domain name for hosting websites (though you could of course still use it for other services).
Not to defend plaintext HTTP, but what you describe is a DNS registry that mandates which services can be used with the registered domain... Would you buy a house where you cannot cook, only microwave?
The house analogy doesn't really work as you could always install a stove. A better example would be "Why would you buy a plot of land that is zoned residential if you can't build an office building on it?" The answer is that you know what you're getting into before you buy it, and so you'd only buy it if you were building a house. If the restrictions are known up front then it's all good. I'd also like to point out that HSTS has very real security benefits, and if the entire TLD is already on the list then you don't have to go through the hassle of adding all your domains individually and waiting months for those updates to roll out widely. The expectation is that the pros vastly outweigh the cons.
Sorry for the lack of knowledge, but what is actually contained in the preload lists? Just a flag "force TLS and activate HSTS" or also a certificate pin?
I.e., could you actually use your own certificate for a .dev site or would the browser only accept Google's?
We only use certificate pinning for .google. The only requirement on any of the other TLDs is that you must have an SSL certificate and serve over HTTPS.
One of greatest things about the Web is how easy it is to write HTTP clients and servers. I see why HTTPS everywhere would be helpful, but I also think it would be a shame to lose this simplicity. Has there been any thought put toward simpler alternatives to HTTP+TLS?
It's easy to write a simple HTTP client or server that works for the happy path of the exact software you tested it with. Writing one that follows the RFCs and doesn't allow various security holes when the RFCs are violated is much harder. HTTP is not a simple protocol.
> Has there been any thought put toward simpler alternatives to HTTP+TLS?
Well, many people think that current TLS is too complicated and in part that discussion led to TLS 1.3. While I'm still not entirely happy with the complexity, it is far less complex than TLS 1.2.
TLS 1.3 removes a lot of the options that 1.2 had.
That's way outside my area of expertise. I don't think HTTPS is that bad anymore, not now that certificates are easy to obtain. What are your biggest pain points?
More broadly, I think it's worth incurring some inconvenience for the sake of security. Go too far in the other direction and you end up like Equifax.
Yeah, that's awesome, but many of us do test on our local machines where https would just be an extra layer of annoyance and having to use .localhost TLD would be icing on that annoying cake.
Since it appears that .dev is a Google internal TLD, would you consider not forcing that rule outside of Google? Effectively, we're getting all the stick and none of the carrot.
Alternately, announce your plans for .dev so as to bring some transparency to the process OR purchase maybe the .vm TLD and mark it for developer local testing in place of/addition to .localhost.
.dev is not an internal Google TLD. It is delegated but not yet launched (i.e. made available for registration). We don't use it for anything internally.
You should not be using globally delegated TLDs for local testing. See RFC 2606. I don't see how using .localhost or .test is any more of an inconvenience than using .dev. Indeed it is less of an inconvenience because it will work without DNS shenanigans.
The last round of gTLD expansion occurred in 2012 and there's still many years left until a potential future round. Additionally, .vm is not a valid gTLD because all two letter combinations are reserved for ccTLDs (countries).
And I would love to have some plans to announce. Maybe soon.
What about providing a config switch in Chrome etc. to disable the new behaviour, so those of us that use .dev in our workflow can turn off the redirection?
You never should have been using it, though? It wasn't your domain to begin with or your TLD.
It's like complaining that you're using appname.ycombinatordev.com as a development suffix, then someone registers it and all your stuff breaks, and you want them to fix your problem.
It isn't good practice to use random domains or TLDs that you don't own for testing. At work we have a separate publicly-registered domain, and an internal subdomain with NS records on the primary publicly-registered domain that resolves separately. It's never been a problem.
So, kill off LAN-level caching HTTP proxy servers? (Or force a choice between trusting the cache with everything, including email etc., or no caching of signed software updates and common media resources?)
I'd be less unhappy about this if web browsers (and application updaters) understood X.509 CA certificate name constraints, so you could tell a device to trust the LAN cache to impersonate Debian.org, windowsupdate.com and BBC.com - but not Gmail.com...
Just commented over at Matthias' blog, I'll just copy-paste it here:
First of all I think this is generally a good move. If people use random TLDs for testing then that’s just bad practice and should be considered broken anyway.
But second I think using local host names should be considered a bad practice anyway, whether it’s reserved names like .test or arbitrary ones like .dev. The reason is that you can’t get valid certificates for such domains. This has caused countless instances where people disable certificate validation for test code and then ship that code in production. Alternatively you can have a local CA and ship their root on your test systems, but that’s messy and complicated.
Instead best practice IMHO is to have domains like bla.testing.example.com (where example.com is one of your normal domains) and get valid certificates for it. (In the case where you don’t want to expose your local hostnames you can also use bla-testing.example.com and get a wildcard cert for *.example.com. Although ultimately I’d say you just shouldn’t consider hostnames to be a secret.)
I've been wanting to experiment with something like this for my own dev environments for a while, and have been eagerly awaiting wildcard cert support from Let's Encrypt.
One thing I'm not quite sure about is if this means we need to be using the same wildcard cert for both dev and prod? I don't suppose the cert can be considered valid by the browser if otherwise?
If that's the case, I'm wondering if there are any best practices around securely distributing valid production certificates to dev machines across a team and keeping them up-to-date with Let's Encrypt's auto renewing mechanism? Ideally in a way that's transparent to each individual developer? I'm guessing committing them directly into a repo is probably a bad idea, especially for open source projects.
You can have more than one certificate for the same names but both would need to be valid. If the testing one leaks, someone can impersonate your production service.
To go further on this though, wildcards usually only allow one "level" of wildcarding. So if you had a wildcard cert for *.internal.domain.com, no one could use it to impersonate www.domain.com (which is good; you should consider a cert that every developer has on their machine not trustworthy).
Thanks for the clarifications. This is all starting to make a lot more sense now.
Though I'm still curious how people usually distribute a cert like that internally and update it to keep it in sync with Let's Encrypt automatic renewal mechanism?
As far as I understand, Let's Encrypt requires a public facing web server on the matching domain to renew certificates, so we'd have to actually set up a server solely for the purpose of certificate renewal on a 2-levels deep subdomain, expose it to the public internet, and then propagate the updated certs from that server into every dev machine every time a renewal is triggered?
It sounds like there's little security risk with this approach as long as we use a wildcard cert at least 2-levels deep as you've described, as we don't have to trust this cert for real production traffic at the root domain. But I'm still wondering if there's some tooling I could adopt to streamline this process a bit? Or should I just bite the bullet and script it all myself?
You can issue/renew via DNS. I have a bunch of valid certs for domains that only resolve internally using this method. I believe the plan for wildcard certs is to only support DNS-01 challenges.
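(A hedged sketch of the manual DNS-01 flow with certbot; the hostname is a placeholder, and in practice a DNS-provider plugin is what makes renewals automatable:)

```sh
# certbot prints a TXT record to publish at
# _acme-challenge.service.internal.example.com; the name itself never has
# to resolve publicly, only that TXT record does.
certbot certonly --manual --preferred-challenges dns \
  -d service.internal.example.com
```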
Some companies do, many don't. I don't recommend running your own CA unless you have a very good reason to do so. It's complicated and it introduces additional security risks.
If one absolutely must regardless, please use name/IP constraints!
Client support isn't super awesome from what I understand, but it's much better than having an unconstrained root ('keys to the internet') on more than one client - especially if you can't go to great lengths to protect said key.
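(For reference, a hedged sketch of what that looks like as an OpenSSL extension section when creating the CA certificate; the namespace is a placeholder:)

```
# openssl.cnf extensions for a constrained internal CA (sketch)
[ v3_constrained_ca ]
basicConstraints = critical, CA:true
keyUsage         = critical, keyCertSign, cRLSign
# Only names under .internal.example.com and this RFC 1918 range may be signed.
nameConstraints  = critical, permitted;DNS:.internal.example.com, permitted;IP:10.0.0.0/255.0.0.0
```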
Wow, you guys have very different use cases from me. We do all internal authentication through server and client certificates. I'm still astounded that people seem to want a third party to participate in internal trust relationships, which seems just like a really bad idea to me.
They don't want a third party in internal trust relationships, but they want to avoid their internal trust being used to attack their communications with the outside even more. Locally added CA certs override even features such as key pinning, so if your local CA gets compromised it can be used to MITM everything. Not everyone trusts themselves to run a CA safely enough, or to make it properly available to developers.
What? No; that's ridiculous. Running an internal CA is absolutely the right thing to do in almost any situation. Why on earth would you let a third party insert itself into your internal trust relationships?
All the more reason you wouldn't let a third party CA sign certificates for your bank. Seriously, this is and has always been the weakest point of PKI. We have known instances of bad google.com certificates in the wild, as well as certificates being backdated to meet hash requirements.
An internal CA is simply not the problem here, particularly since you can audit your admins' actions.
>> [..] Yes, every company should have an internal trusted CA. I have no idea why there are people arguing against that.
> An internal CA is simply not the problem here, particularly since you can audit your admins' actions.
If you can properly audit your admins' actions, then that's indeed not a problem.
It's easier said than done; I just tried to mention a few reasons why a lot of people suggest (IMHO rightfully so) that perhaps it's not something just about every company should do.
In particular, you don't just have to ensure that the CA is properly defended from outside threats.
Before letsencrypt we didn't have any choice, but now the landscape is really different and in many cases I believe a company could easily defer dealing with an internal CA, especially after https://letsencrypt.org/2017/07/06/wildcard-certificates-com...
FWIW, certificate pinning (HPKP) and efforts like Certificate Transparency are attempts to address or at least mitigate weaknesses of the PKI system.
OK but if I can't audit my own sysadmin's actions how in the WORLD am I going to audit the actions of the sysadmins of the 150+ organizations that my distro has decided can be trusted to sign certificates for any domain? I still fail to see where a third-party CA brings any value whatsoever here.
Why on earth would you want a third party CA to verify your own identity to you? What's the point? Just run an internal CA. For that matter a third party CA is worse, since it's an additional element in the trust chain.
Exactly. You can run your own ACME compatible server internally and auto issue internal certs just like LetsEncrypt does. Then just put the company CA on your base Windows/mac images or policy, and create debs/rpms for your Linux devs/admins.
Honestly, I'd only go to the effort of mimicking Letsencrypt internally if you are already using their service for prod and wanted to keep your dev environment very close to this.
Here's a gist you can follow if you'd like to do things the old fashioned way (i.e. create your own Root Certificate Authority, create a Sub-Root Authority and use that to sign internal certificates): https://gist.github.com/corford/9a206664bb8278c8243821d23666...
It's 99% copy & pasteable (it doesn't assume any existing openssl.conf) and should give you decent compatibility and future-proofed security. Just remember to keep the root CA safe and only use the sub-CA for signing certs.
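(For a rough feel of the shape of it, a minimal sketch only; the gist is much more complete and every name here is a placeholder:)

```sh
# 1. Root CA: key plus self-signed certificate (keep this key offline)
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout root-ca.key -out root-ca.crt -subj "/CN=Example Root CA"

# 2. Sub-CA: key plus CSR, then sign it with the root, marked as a CA
printf 'basicConstraints=critical,CA:true,pathlen:0\nkeyUsage=critical,keyCertSign,cRLSign\n' > sub-ca.ext
openssl req -newkey rsa:4096 -sha256 -nodes \
  -keyout sub-ca.key -out sub-ca.csr -subj "/CN=Example Issuing CA"
openssl x509 -req -in sub-ca.csr -CA root-ca.crt -CAkey root-ca.key \
  -CAcreateserial -days 1825 -sha256 -extfile sub-ca.ext -out sub-ca.crt

# Day to day: only sub-ca.key signs leaf certs; root-ca.crt is what gets
# distributed to clients as the trust anchor.
```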
I mean, yes? LibreSSL is pretty easy to use. An ACME server is even easier to use. And avoiding an external certificate authority means you can avoid most of the problems that exist in the worldwide PKI to begin with. If you have an ACME-compatible server you just set up an ansible script to deploy the certs for you (I think they even provide one); it's a one-time cost of a day or so, and if your sysadmin can't set it up you've got bigger problems.
They should do the opposite. There should be a .insecure domain where browsers accept HTTP or HTTPS with wrong or no certs, and pretend it is HTTPS with all consequences (e.g. loading of HTTPS third party resources). I wouldn't put it on the open net, but rather let people set it up internally for testing.
Just as a note: .dev is not yet an official TLD; it's in the status "proposed", which means that Google is basically the highest priority on the waiting list.
.foo is delegated and thusly a full TLD, yes.
On the other hand, you should not be using .localhost if the target is not running on your loopback interface; resolving localhost to anything but loopback is considered harmful.
I find .test or .intranet to be more useful for such installations, they are either designated as "cannot be a TLD" or are very very unlikely to become a TLD, respectively.
Who do you want to trust more: the DNS root servers, or a wiki (which might be outdated, because "This page was last modified on 1 August 2014, at 10:04")? Is the wiki even run by ICANN, given it's on a separate domain, or by a separate entity or volunteers?
I think you're right on this, for example .moi, which Amazon is taking registrations for (and has been for six months or so), is still listed on that wiki as proposed.
If you really want to go down that rabbit hole: ".no-hsts" is still not taken, I believe, so you can probably set it up as a private TLD in your hosts file.
Use one of the four TLDs from RFC 2606 that are guaranteed to never be delegated. Picking another random TLD that hasn't been delegated yet (but may be in the future) isn't a good idea.
That is far from the worst potential problem with using it, though. The biggest problem is if it does get delegated and you do not end up owning the actual "domain names" that you are using. You are now leaking web traffic to some unknown third party.
> should not be using .localhost if the target is not running on your loopback interface, resolving localhost to anything but loopback is considered harmful.
I have heard this before, but not really seen any great arguments as to why. Sure, `localhost` should not be bound to anything other than loopback, but I don’t see the harm in remapping `whatever.localhost` to the IP address of the Vagrant box running on your local machine.
The problem is that while the loopback addresses can be mapped as a secure context, even without HTTPS, .localhost. cannot since it might not be a loopback context.
You can literally use anything else to map to your vagrant box including "whatever.i-dont-care" and "whatever.whatever".
Remapping .localhost. destroys the implicit assumption of many developers that a .localhost. means loopback and rightfully the above draft proposes to force any resolve to either fail or resolve to loopback.
Which developers? I’ve never heard of anyone using whatever.localhost in their code, assuming it will map to loopback? Sane developers use localhost, or localhost.localdomain in some cases.
* Publish mDNS records to give myself extra `.local` names (a sketch follows below), or
* Get a wildcard published in the organisation's internal DNS
If you can't do either of those, _please_ use `.test` as your test TLD, as it's explicitly set aside for that purpose so you know you're never going to collide with anyone.
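(For the first option above, a hedged one-liner using avahi-utils; name and address are placeholders:)

```sh
# Announce an additional .local name for a given address over mDNS.
# -a publishes an address record, -R skips the reverse (PTR) entry.
avahi-publish -a -R extra-name.local 192.168.1.10
```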
It allows you to easily announce CNAMEs in mDNS to do the sort of dev testing locally that requires alternate domains, without the possibility of leaking them outside of your own local network.
I've hacked together an Apache module that tries to announce local hostnames for vhosts, but while it's good enough for me it's never been quite good enough for me to share.
That means your local development machine needs to:
- Be able to serve HTTPS
- Have self-signed certificates in place to handle that
- You'll have to click through the annoying insecure-site warning every time
Such fun.
Part of HSTS is the requirement that certificate warnings become unskippable. So the above wouldn't work - you'll need an actual CA-signed certificate that is accepted by the browser, otherwise, you won't be able to access the site.
This is perfect and great. I'd love to see gradually (yes, GRADUALLY, without breaking anything!) all TLDs do this.
".localhost" has existed and been popular for local development for MANY years. I've no idea why somebody would use `.dev, but now that it's a registered TLD, using it locally is just asking for trouble.
Also, you can just use 127.0.0.1, 127.0.0.2, 127.0.0.3, etc.
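(A hedged sketch of that; on Linux the whole 127.0.0.0/8 block answers on loopback out of the box, while on macOS the extra addresses may need to be aliased onto lo0 first. Names are placeholders:)

```
# /etc/hosts: one loopback address per local service
127.0.0.2   api.myapp.test
127.0.0.3   web.myapp.test
```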
Web developers test their code in all the browsers their audience use. Switching to a different browser isn't a solution. This isn't about what we use for personal web use.
> Web developers test their code in all the browsers their audience use.
If only. Those days are long gone.
These days the majority of web-developers I see exclusively use Chrome and test in nothing else.
Google seems to be encouraging this behaviour too, with web pages promoting the use of unfinalized APIs found only in Chrome on production websites. It's the new MSIE, for sure.
This is why I said main. Most webdevs don't continuously test in each browser they target; they usually just use Chrome and then check if things break in other browsers. And some don't even check at all.
As a developer, I never used .dev for development and I don't know anyone who does. The number of developers around the world is a small percentage of all users, and not all of them have this redirect set up anyway. And those who do probably use /etc/hosts, so your specific host needs to be in there (since the hosts file does, unfortunately, not do wildcards)... yeah, I think people are not going to have any second thoughts registering .dev domains.
I use .dev via dnsmasq resolving *.dev to 127.0.0.1.
I wouldn't register any .dev domains, though; these are supposed to be throwaways based on the project (my "serve" script takes the folder name and uses that plus .dev as the domain).
I'll just switch to .test as recommended elsewhere
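(For anyone making the same switch, a hedged dnsmasq sketch, mirroring the *.dev setup above:)

```
# dnsmasq.conf: send every *.test name to loopback instead of *.dev
address=/test/127.0.0.1
```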
127.0.53.53 is the IP address used for ICANN's Controlled Interruption process. It essentially means "fix your shit; you're unintentionally leaking requests to the Internet".
I've never used .dev - but going back five or six years we set up a dev subdomain of our domain and use that exclusively for development.
dev.ourdomain.net is a web-accessible server on our local network, configured as the dns server for that sub domain and is our internal CA trusted to issue the certs we use for development.
We have always used local.{site}.com as a subdomain rather than a TLD. It makes CORS rules simpler, and we actually have a real DNS record pointing to 127.0.0.1, so we don't have to bother with the hosts file.
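(In zone-file terms it's roughly this, with example.com standing in for {site}.com and the wildcard optional:)

```
; real, publicly-served records that just point at loopback
local.example.com.     300  IN  A  127.0.0.1
*.local.example.com.   300  IN  A  127.0.0.1
```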
With .test thrown around a lot, would there be any complementary support from browser vendors for that TLD to be specifically a development TLD? localhost is recognised as one by Chrome, for example: it's the only domain where the HTML5 geolocation API works without HTTPS, and the "your passwords are transferred via plain text" warning is not displayed. To help the shift to .test, Google might alter its heuristics to recognise .test as a common TLD used for development.
The main issue here is how much of a PITA it is to work with HTTPS locally (totally true that .dev is the wrong thing to use for dev boxes here). Self-signing certs and forcing /etc/resolver/ configs is only half of it. Then you run into trouble with mobile emulators, proxying, etc.
We have an automated setup of it for devs, but it's out of necessity rather than anything else. It's a pain to deal with.
I don't really see this as a problem. In fact, I wish Chrome would do that for every gTLD, but obviously that's not going to happen any time soon. Secure by default would be great.
The real issue (for me at least) is that it's far too much of a pain to run an SSL secured site locally. It can be done, but doesn't work well across teams given you need to register your certificates locally. Being able to serve a site from a Vagrant box or a Docker container over https in a way that a browser will accept (or even just pretend to accept) would be immensely helpful. I'm sure web developers and browser vendors are trying to resolve the problem already, but it can't come soon enough in my opinion.
*.localhost is a cool idea. It would be cool if browsers treated self-signed certificates as valid there, or even did some magic and pretended the site had an SSL certificate.
Sorry for top-leveling a grand-child comment, but reading between the lines, this is the attack vector:

> And for the last question: Again, there are no .dev domain names. There never have been. It's never been available for registration. The recommendation for a long time has been to only use either (a) domain names that you actually own, or (b) domain names that are reserved for testing and are guaranteed never to exist a la RFC 2606. Using domain names for testing that don't yet exist but could in the future is a huge security hole that you must fix now. Do it now while the domain names still fail to resolve. Once they resolve, and you don't own them, then your security situation gets a lot worse.

Google is concerned with nation-state attacks. This means they have to assume ninja-assassin-scuba-divers have tapped all their cables underground. They're also concerned about ninja-assassin-usb-stick-droppers, and all kinds of other use cases.

What they're doing is:

1) Requiring *.dev to match PRE-LOADED HSTS certs. This allows Google to "safely" boot up a computer from scratch. As long as "clone-a-computer-from-scratch.dev" matches the public/private handshake for HSTS/HTTPS, Google knows that no MITM, no nation-state DNS takeover, etc. is possible. So long as the VERY FIRST CONTACT WITH THE INTERNET is a *.dev domain, that computer can be "as secure as possibly known".

2) Forcing people to bounce "off" of invalid TLDs as a network administration method.

Remember, Google is concerned about nation states. Remember WannaCry? How it was disabled by some random researcher registering xyz-abc-123.com? That attack costs $15. Now imagine a nation state intentionally registering a gTLD of "*.haha-now-your-company-infra-is-pwnd" which they somehow glean is the gTLD your developers use for local development / testing / intranet portal.

If you could spoof IBM's intranet by doing something like "http://www.welcome.ibm" or "https://www.welcome.ibm" (because *.ibm wasn't cert-pinned...), then you could trivially cause *.ibm to resolve to some sort of spoofed site to collect passwords. Or what if they're catching `mysql -uroot -pxyz staging.product.ibm`? Whoops.

Or perhaps another gTLD we'll see Google register is "*.go", or maybe their internal builds of Chrome already do cert-pinning on that. (Reason being I've seen/heard they allow 'http://go/my-internal-shortlink'; I know that other tech companies have had similar setups.)

Same attack vector. You control the DNS, you control ALL responses. And when somebody types www.microsoft.com, it may be _impossible_ to know if that "Down for Maintenance" banner is real or fake if their DNS is controlled by somebody who really is your enemy.
This forcing of opinionated things gets on my nerves. How about developing the browser and letting the masses decide what they use? Amazon was 100% HTTP for 20 years (except the single login page) - it worked very well.
> This forcing of opinionated things gets on my nerves. How about developing the browser and letting the masses decide what they use?
It's Google's browser forcing one of Google's own gTLDs to HTTPS. There are no masses involved. Anything else on the HSTS preload list is there at the request of the domain or gTLD owners.
> Amazon was 100% HTTP for 20 years (except the single login page) - it worked very well.
Sure it did! It also allowed any interested party to observe all your interactions with Amazon. What worked 20 years ago doesn't necessarily work today. Standards evolve, new attack vectors emerge, and people's needs for privacy or what they expect to be private changes.
It's funny because people complain all the time about how strict security policies are....right up until something gets hacked and then they moan about how the organisations should have done more to protect their accounts.
I'm in the former group but not the latter. I would gladly have some insecurity because it's an indication that we have not become a completely authoritarian society (yet). Bad things happen, that's just how life is and should continue to be; and trying to stop them all will result in something very dystopian.
Online security has nothing to do with dystopia. Running a site over HTTPS works completely transparently to the visitor. Yes there is a little extra pain for the sysadmin setting up the service but that's what they are paid for (and I say this as a systems administrator).
Plus this isn't even government intervention; this is browser vendors raising the bar for online security.
HTTP can work if your "local" means strictly "on localhost domain". These days local can also mean that you are running a VM and accessing something that's running there (docker-machine for example). In that case you wouldn't be able to use localhost very trivially (you can do port forwarding, but that's pretty annoying). So you then need to access non-localhost domain for local stuff and browsers don't like this.
I use .local for, well, my local network and I find that Chrome can't find them half the time (telling me that the server can't be found or that I don't have internet connection). Safari, Firefox and even command line tools find them without skipping a beat while Chrome insists that I'm offline.
I think the culprit is the internal DNS cache. If, for whatever reason it can't find the server once, then it gets stored in the cache as unreachable and needs a cache flush to fix. To add insult to injury, in previous versions of Chrome you could disable the internal DNS system, but apparently not anymore.
In a way yes, it's trying to avoid the initial redirect. But not in order to save you from the potential latency of the initial redirect:
> These sites do not depend on the issuing of the HSTS response header to enforce the policy; instead the browser is already aware that the host requires the use of SSL/TLS before any connection or communication even takes place. This removes the opportunity an attacker has to intercept and tamper with redirects that take place over HTTP.
> Also, wouldn't bundling (tens of) thousands of domains start to add up, and slow down first page load for regular browser use?
Why would it? Checking in a data structure whether the domain the user requested should be loaded over HTTPS can be done in a perfectly efficient way. A hash table would give you O(1) lookup times on average, and there are other things you can use to mitigate the worst-case lookup of O(n).
I was hoping the article would cover the scaling aspect a bit more. I guess it's just meant to be a mid-step towards browsers defaulting to HTTPS at some unknown point in the future.
The most annoying part here is that Google isn't even using .dev as a public TLD – they purely use it for internal testing, and all registered .dev domains resolve to 127.x.x.x addresses.
.dev should have been entirely reserved, or made available publicly. Registering a TLD just for your own internal testing, and forcing everyone to switch away, is the most user unfriendly move you can do.
So, why is this not documented with the TLD, or in any of the informational material?
(And why is Google in the TLD market at all? Google’s already far too large as company – impossible to democratically control, any further growth of Google should be immediately and forcefully stopped).
I don't know why people thought they could start using random TLDs on their own; there was always the risk they could be delegated officially.
https://www.iana.org/assignments/special-use-domain-names/sp...