I had to maintain a lot of old custom appliances and the like for which no ready replacement existed. Niche stuff where you had one or two vendors in the whole arena. If they were running on Apache, it was hidden somewhere with a big "WARRANTY NULL AND VOID" sticker. For all of these custom jobs, HTTPS was an afterthought, if it was present at all. For these, I had to generate cert requests through a menu system, load up certs through a menu system, and the like. I couldn't just CertBot it. That's never going to happen.
Frankly, the latest cult fad ("all must be encrypted, heretic!") has not made my life easier nor brought any benefit to my users. Nobody was going to MitM for this stuff. It's just another tiresome fashion like the people who relentlessly put W3C validation boxes at the bottom of their webpages, only now it is getting shoved through by the same browser vendors who wonder if us mere mortals need to see URLs at all.
As someone who lives in a country with no net neutrality and government-mandated censorship, I'm glad HTTPS is everywhere now. My government requires all ISPs to intercept all DNS requests and plain HTTP requests to block access to banned domains. Not only that, the ISPs are doubling down by blocking domains of competing services (the ISP I use blocked Netflix for years to promote their own streaming service, only unblocking it recently after Netflix finally partnered with them).
They also implement the blocking by intercepting the HTTP Host header, so even if you somehow bypass their DNS filtering, they can still get you if the censored site uses plain HTTP (they'll replace the response with a redirect to their censorship portal, which is fully laden with ads). They also inject ads into any non-HTTPS page at least once a day. Thank goodness for Pi-hole and a VPN (when I'm on a mobile connection), I can still browse whatever I want.
That seems like a case where a reverse proxy like NGINX[1] would be useful. The configuration would basically be:
User->HTTPS over public internet->NGINX->HTTP on your private network->Custom appliances
The most vulnerable link (the public internet part) would still be encrypted, and browsers won't show the scary warnings. The HTTP connection would be confined to your private network, where MITM is less of a concern.
You can use one NGINX instance to reverse proxy multiple different appliances, so the maintenance overhead wouldn't be that high.
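As a rough sketch, the TLS-terminating proxy described above might look like this (the hostname, certificate paths, and upstream address are all placeholders, and your appliance's port and any required headers will differ):

```nginx
# Terminate TLS at the proxy; forward plain HTTP to the appliance
# on the private network. Hostname, cert paths, and the upstream
# address are placeholders for illustration only.
server {
    listen 443 ssl;
    server_name catalog.example.org;

    ssl_certificate     /etc/ssl/certs/catalog.example.org.pem;
    ssl_certificate_key /etc/ssl/private/catalog.example.org.key;

    location / {
        proxy_pass http://10.0.0.5:8080;  # the appliance, HTTP only
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Adding another appliance is just another `server` block on the same instance, which is why the per-appliance maintenance cost stays low.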
It's funny you should mention that. A new guy (who has since left) was super into the Let's Encrypt thing before running up against the org policy of only using a certain certificate vendor. He got excited about the "problem," set up NGINX, and then had to take it down for reasons I wasn't privy to and never looked into.
Probably stepped on some toes. If they're not familiar with LE, it's because they think anything free has a catch, or they're just ignorant and want to keep sending money to Norton because that's the antivirus that came with their computer all those years ago.
Oh, the guy who wanted NGINX was familiar with Let's Encrypt, but the org-wide policy was set somewhere far, far away from either of us. The approved vendor changed at least twice that I can recall.
So, I'm definitely not against vanilla HTTP where appropriate, but here are three arguments for the "all must be encrypted, heretic!" side of things:
1. The vast majority of web developers/IT people/etc. aren't really knowledgeable enough to make the call whether something actually needs to be encrypted. Very smart developers have told me they weren't worried about security because their application didn't hold any sensitive data. Sometimes this was because they thought emails and/or passwords weren't sensitive data. I've also had very smart people tell me they weren't worried about security because nobody had any incentive to hack their server. But there is always incentive for bad people to hack a server: even if they can't ransomware you, they can host botnets or child pornography. Having seen situations like this often enough, I'm no longer convinced that even I am correct when I think security isn't necessary. Maybe you're qualified to make that call, but even then, evaluating the system may be harder than just adding security.[1]
2. Just because your system doesn't need HTTPS now, doesn't mean it never will. YAGNI conveniently doesn't specify whether "A" stands for "Are" or "Aren't". In my experience, most systems involving HTTP do eventually have a requirement that passes sensitive data over HTTP, and it would be all-too-easy to forget that channel wasn't encrypted.
3. Even if you don't need encryption for yourself, do it for everyone else who does need encryption. Before HTTPS was required in every browser, users would happily submit their passwords over completely unencrypted channels. Now, browsers will give big warnings that hopefully prevent users from doing this. Those warnings probably aren't necessary if, for example, you're submitting an anonymous comment to a Disqus conversation. But it's important not to normalize clicking past those warnings for nontechnical users, because if they click past those warnings to post an anonymous Disqus comment, they'll click past them to submit a password. And in a more political sense, normalizing encryption for stuff where it isn't important protects encryption for stuff where it is. "If you are encrypting you must be hiding something bad" is a dangerous narrative which is pervasive in politics, and default-on encryption is one of the best tools against that.
[1] As an example of this, it appears that there are a few attack vectors in your own posts, which you either didn't think of or don't care about, but which I would very much care about as a user of your library catalog.
I've heard these arguments. I am not unsympathetic.
For the first, I don't know what to tell you. If they aren't qualified to know when security must exist, they are not likely qualified to implement it.
For the second, when it is needed, it can be added. A staff directory with no interaction does not need it now. It may never need it.
As to the third, in a story, here is a great argument against that:
So years ago I ran this dumb little site, by request, for the users to submit stuff, nothing too special. They suddenly wanted the HTTPS, which was at that time outside of their budget, which was zero, and we would have had to move it off of a shared IP (this was some time ago). I told them it wasn't necessary for their use case.
"But the LOCK thing makes it safe!" They were so insistent ... but they still demanded the results be sent to them over SMTP. Oh, how I facepalmed. I explained that SMTP was its own channel. No, no, they insisted. The LOCK.
HTTPS can result in a false sense of security.
At the end of the day, I think I would rather have choices available even if mistakes could be made. Removing choice from people for their own good is fantastic if and only if it will never ever cause any kind of issue on its own. Here, in the article, we discuss some of those issues. If HTTPS were possible to just implement retroactively with nothing ever breaking, not even some ancient software on a floppy disk, fine. For anything other, it's a tradeoff, and I believe for tradeoffs we ought to have the ability to assess the tradeoffs and make a choice. Even choices others do not agree with!
> For the first, I don't know what to tell you. If they aren't qualified to know when security must exist, they are not likely qualified to implement it.
...which is why numerous high-quality implementations already exist.
There isn't a way to say this without risking offending you, but if you can take it for what it is, I think you would benefit from it: you have posted a few examples in this thread where you think no one would care about encryption, and there are people posting here saying they do care about encryption in those cases. It might behoove you to approach this with a bit of humility and acknowledge that you are one of the people who isn't qualified to know when people need/want security, and your opinions are borne from that fact.
This feels like a no-true-scotsman argument. A true security-minded person understands the benefits of TLS, and if you don't, then you're not qualified to understand security.
"If you have not yet attained Buddhahood, you cannot understand the value of enlightenment." is likewise a non-controversial tautology, but it doesn't leave any room for anyone to do anything else.
I think there are people who care about just about everything. Let's make those certificates expire every month. Also without a way to say it without risking offending someone: the tinfoil hat brigade can make everything grind to a halt. People can have concerns, and they can be very invested in those concerns, but they can be so concerned that I don't know if I could ever satisfy them. They could be concerned about what operating system that HTTPS-serving software runs on, and their set of requirements might be such that I cannot run anything.
> I think there are people who care about just about everything.
True. But I'm not one of those people. You'll note that there are a few of your examples which I haven't objected to. That's because I don't have objections to all of your examples.
> Let's make those certificates expire every month.
I get your point--people do ask for excessive security measures sometimes. But as an aside, the point of frequent certificate expiration has nothing to do with security. It's to encourage automation of renewals.
Yeah it is more work. If you're a sysadmin, that is your job and it's why you get paid. You're welcome to choose another job if it's too much work for you.
Yes, it is more work. And doing it once every month instead of once a year is yet more work. But is it valuable work? Does that work accomplish anything? Or is that effort wasted when I could have been doing other things?
I was not a sysadmin -- I kept trying to avoid that, but got sucked in, as a programmer (my title), with the reasoning I have been doing "web stuff" for a long time, as well as the sysadmin duties that were required for setting up websites "back in the day." I could have been programming and solving real problems on a more permanent basis, but here I am, carrying water with a hole in the bottom of the bucket. Once the DevOps wand got waved, I got out.
If people are truly concerned about security, the time spent on even the much-disliked sysadmin duties foisted upon me could have been used more wisely than flipping certs around.
1. I am extremely skeptical of your claim that cert renewals couldn't be automated.
2. Nobody here is saying that you need to renew your certs every month. Just because some completely different person asked for something that didn't provide any value, doesn't prove that what anyone here is saying is incorrect.
Okay, be skeptical. I didn't want to void our support contract over finding out. Would you come to my rescue if I did?
First, the certs needed to be renewed every three years. Then some browsers started raising warnings if they're more than a year old. Now Let's Encrypt is down to three months. Just waiting for the next click of the ratchet.
> Okay, be skeptical. I didn't want to void our support contract over finding out.
Really? Your support contract would be voided if you automated certificate renewal? If what you're saying is true, the problem isn't with short renewal times, it's with your absurd support contract. Automatic renewal is the industry standard at this point. If anything, your support contract should be voided if an error occurs because you failed to automate this. I suppose it's possible that your support contract is that absurd, but given it sounds like you didn't even check, it seems a lot more likely that you don't know how to automate it and refuse to learn.
You don't get to critique Let's Encrypt, a system built with automatic renewal as a fundamental component, if you refuse to use the very simple to use tools[1] they provide for automatic renewal. Maybe it's the support contract's fault, maybe it's your fault, but it's definitely not LetsEncrypt's fault that you're stuck in 2010 through incompetence or bad contract negotiation. You could spend literally 5 minutes per server and never have this problem again.
All of the cert stuff was hidden behind a very complex series of menuing and more. I keep saying it in different ways: This wasn't just an Apache server.
Could I have pried behind the curtain? Yes. Would it have worked? Most likely! Were there lots of big warnings about not screwing with various sections of the configuration even if I could get to them? Also yes. They really didn't want you to interact with the system beyond a few set points, and that was clarified to me in a call with tech support. I didn't like it, but that's what I had to work with.
This just wasn't one of a thousand bog-standard Apache rollouts that you could chuck certs into. Want your CSR? Go through their menu system to generate it. Want to install the cert? Upload it to this specific directory, then hit the menus again.
CertBot is fine for what it does. Great. But it wasn't appropriate for what we had.
Another one of the servers isn't based on Apache at all, it's a compiled executable (yeah, .exe) running on Windows. It starts off of .ini files.
CertBot doesn't work everywhere for everything that has ever served HTTPS in the history of computing. This shouldn't be something I have to mention but I guess I have to do it.
> Another one of the servers isn't based on Apache at all, it's a compiled executable (yeah, .exe) running on Windows. It starts off of .ini files.
So? If you think any of what you've said means Certbot can't be used: again, it's pretty clear that you don't know what Certbot does.
Certbot works anywhere you have a command line and an internet connection, and can be automated if you have cron or a cron-like utility.
There might be a bit more complexity if you have to open a port for callback or store the certs in memory or in a database, but we're still talking < 20 lines of shell code.
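To be concrete, the usual shape of that automation (a sketch, assuming a standard Certbot install and a hypothetical deploy script that pushes the renewed cert to wherever the server actually reads it from) is something like:

```shell
# Obtain a cert in standalone mode (Certbot briefly binds port 80),
# then run a deploy hook that copies the new cert into place and
# reloads whatever is serving it. The domain and the hook script
# are placeholders for illustration.
certbot certonly --standalone -d appliance.example.org \
    --deploy-hook /usr/local/bin/push-cert-to-appliance.sh

# Renewal is then a cron entry; `certbot renew` is a no-op unless
# a cert is close to expiry, so running it daily is cheap:
# 0 3 * * * certbot renew --quiet
```

The deploy hook is where any per-appliance weirdness (specific upload directories, restarting a Windows service, etc.) would live.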
> CertBot doesn't work everywhere for everything that has ever served HTTPS in the history of computing. This shouldn't be something I have to mention but I guess I have to do it.
That may be true, but so far all the examples you've given are just examples of you not knowing how Certbot works.
Really dude, I don't know why you want to have an argument about Certbot without knowing anything about Certbot. Anyone who spends a few minutes reading the Certbot docs can see you haven't researched it. Probably nobody is reading this besides me, but if they are, you aren't impressing them, and you aren't persuading me. There's nothing wrong with not knowing anything about Certbot, but it's a bit silly to pretend you do when you don't.
If there are technical reasons you can't use Certbot, you certainly haven't said any of them, and the things you think are reasons just demonstrate a lack of knowledge. Again, there's nothing wrong with not knowing things. But seriously, this would save you so much time! Why wouldn't you at least look into it?
> If they aren't qualified to know when security must exist, they are not likely qualified to implement it
Actually that's incorrect. It's simple to add a LetsEncrypt cert in the right way -- but not easy to come up with all possible ways people might want to attack your site.
@kerkeslager wrote: "But there is always incentive for bad people to hack a server ..."
Those are good reasons, the ones you listed. Yet another reason is that someone might want to use your website or app to break into someone else's website, e.g. by adding a link that claims to be Google's login page but in fact points to a password-stealing site. And it's impossible to know when or if someone will have the motivation to do such things to some of your visitors / users.
> 1. The vast majority of web developers/IT people/etc. aren't really knowledgeable enough to make the call whether something actually needs to be encrypted.
> 2. Just because your system doesn't need HTTPS now, doesn't mean it never will
The web... used to be a place where people, companies, and organizations shared linkable information publicly. The whole point was to make a massive web of interlinked information. If your goal is to contribute to this web, the cost of entry was to buy a DNS name and push out your content on an inexpensive server.
The modern web has greatly degraded this experience, and HTTPS adds friction and implementation costs that degrade it even further, so more and more information that used to be easily shared and accessible becomes trapped in Facebook and other walled gardens.
> 3. Even if you don't need encryption for yourself, do it for everyone else who does need encryption.
I get this, but understand that lots of small organizations and private individuals aren't going to pony up the time/money to bridge over to the fully encrypted web, and will just become nodes on Facebook or other large sites where hosting is "burden-free".
The open web is more or less a memory already and this is just another nail in the coffin.
I too mourn the failure of the open web to gain traction, but I don't think you've identified the causes correctly.
1) I disagree that the modern web has degraded this experience much, if at all. If anything, the tools today are much better, easier, and more mature. Perhaps there are a few extra steps now, but Apache today is much easier to use than Apache in 1996, including things like installing it from a package manager. If you don't consider things like NearlyFreeSpeech and DigitalOcean "walled gardens", these have gotten much better as well. A few years back, I spent 30 minutes installing Wordpress and my completely nontechnical father was able to build a website. So no, things have not gotten harder, they've gotten easier. HTTPS perhaps makes things minimally harder, but many hosts will set up HTTPS for you with a free support ticket these days.
2) Removing the burden of HTTPS, and indeed removing the burden of a great many other things, still wouldn't bring you close to the ease of entry of Facebook or other walled gardens. Even though entry into the open web has gotten easier, there has been a migration away from the open web because the closed web is so much easier. This isn't a problem that will be solved by getting rid of HTTPS. It's a problem that will only be solved by hacking open source software that does what Facebook et al. do.
I don't even disagree that HTTPS by itself is not needed for that use case; but in my experience those "old custom appliances" are riddled with unpatched security holes and shouldn't be left open to the Internet anyway, lest they become zombie members of a botnet. Are those an exception?
"Niche stuff where you had one or two vendors in the whole arena."
If we didn't have them open to the Internet, we would have no service. There are no choices in that arena besides strawberry and vanilla. That is it, that is all.
In this case, your requirement would result in the service not existing. This is not helpful to what we're trying to do.
I think the counter-argument here is that by existing in their current form, these products either are now, or will in the future be, used as part of botstorms, exploited for illegal purposes, etc.
I'm sympathetic to your argument, but I've also seen little tiny "I'm just storing a list of the CDs I own in a text file" apps blow up with multiple gigs of unwanted data, which _probably_ would not have happened with HTTPS.
Overall I'm on the side of "there must be some exceptions," but I have to admit that every month it's harder and harder to think of what the legitimate exceptions might actually be, given the toxic nature of the internet.
> has not made my life easier nor brought any benefit to my users
So I'm curious, is this appliance on a private network or something? That's the only way I see that it doesn't bring any benefit. If it is going out onto the public internet, I would think encryption and integrity would be a huge concern.
It goes out on the public internet. It's not a target because it is a library catalog. Nobody is going to man-in-the-middle looking up which shelf a book is on. I mean, I am sure you can come up with a Sneakers-style plot where it is important but no, not realistically. Could it be done? Certainly! Why would they? What's the benefit?
There's just tons of stuff out there that exists where, honest-to-gosh, credit card details and HIPAA compliance do not happen. I know Hacker News leans very heavy toward the Silicon Valley side, but I swear stuff happens in the flyover states that involves computers. It might be pedestrian and unexciting, and nobody will IPO, but they do happen.
> It's not a target because it is a library catalog.
Oh! That's actually a great little encapsulation of the larger problem here.
Before COVID hit, I was a volunteer teacher at a Girls Who Code class at a library. Most of the library's computers were running Windows 7, which of course lost security updates in January.
One of the other volunteers remarked: "Someone needs to update those to Windows 10."
I answered: "Someone should update them, yes. But no one is going to."
I don't even know if those old laptops could run Windows 10. Should the library really spend money on replacing all of them? As opposed to buying books, or teaching more students?
---
There was an article on Hacker News some time back about how the NYC subway system runs OS/2. A lot of commenters said this is a huge security risk—sure, it's ostensibly not connected to the internet, but who really knows?
But, what should be done about it? It doesn't make sense to switch out the architecture of a city's subway system every 30 years—I'm not convinced it makes sense even every 50 or 100 years. How do you set up a system like that?
We have some machines that still run Windows 7 at the office, while we don't use Windows 10 for anything beyond isolated testing environments. This is a result of assessing the risks of using Windows 10 as being greater overall than the risks of using Windows 7 in the relevant cases, despite the state of official support and lack of any further security patches. And that in turn is in no small part because Windows 10 has a track record of breaking things that would be important to our business operations. Newer is not always better, and being more secure in some respects is not always being more useful overall.
Yes, we did. The higher editions of Windows 10 do seem to be qualitatively different products that don't have the technical deal-breakers we are concerned about. Unfortunately, as far as we could tell, there was no (legal, properly supported) way to get hold of any of them for small businesses like ours through simple one-off purchases of permanent licences.
I have a lot of experience navigating Microsoft's licensing for small businesses. As a recent example of doing just what you're looking for, I have a Customer who needed Windows 10 LTSC for some PCs running expensive laboratory instruments.
You can acquire a permanent license for Windows 10 LTSC through Microsoft's Open Business licensing program. There is a minimum initial purchase quantity of 5 SKUs, but any competent reseller will just pad your order with the lowest-price-in-the-catalog SKU to get you up to that minimum.
The Open License Agreement itself expires in two years, meaning that you're subject to the minimum 5 SKU purchase to start a new one at the end of 2 years. During the term of the Agreement you can purchase licenses piecemeal. Regardless of the Agreement's term the software you license thru the program is perpetually licensed.
This won't make it any cheaper, though. Windows 10 LTSC is ridiculously expensive, to me, for what it is. Licenses acquired through this program are transferable to new hardware, at least. That's why I used it a lot over the years. Buying Office and transferring it to a new PC once in its useful life ended up being a cost savings over buying OEM Office with the original and replacement PCs.
The idea that you have to go through a dealer, join some overcomplicated volume licensing programme, possibly buy extra stuff, and probably pay a premium for the privilege just to get a legitimate copy of LTSC to use doesn't sit well with us.
If Microsoft offered LTSC as a one-time, off-the-shelf purchase with no strings attached, I expect we'd buy several copies immediately. It seems to be the only version of Windows 10 we might actually want, other than for having the same as our customers/clients for testing purposes. But as a small business, we have limited time and resources, and we have remarkably little interest in playing big business games.
I don't like it either, but that's proprietary software. Personally, I wish my Customers didn't have business-critical applications that keep them tied to Windows and other proprietary software.
I suppose my point is that it wasn't how proprietary software tended to work until relatively recently. For many years, we (assorted small businesses) were using the Pro editions of Windows with no drama. This only became an issue when Microsoft chose to make the Pro edition of Windows 10 unsuitable for professional use (in our humble opinion) while simultaneously locking the more suitable editions behind Big Organisation Hassle.
As a direct result of that decision, they have essentially lost our business, just like certain other large software organisations whose names start with A that have adopted similar customer-hostile practices in recent years. There are viable alternatives for almost anything these days when you're a small business with the flexibility to make intelligent policy decisions about your hardware and software purchases on a case by case basis and, if appropriate, to change those policies however you want later on.
I've been pretty displeased with how Microsoft has chosen to alienate Customers the last few years. I've made a good living since the late '90s installing and supporting Microsoft software in small businesses, and the changes in the last few years, particularly with Windows 10 and the associated Server versions, have been distressing.
I will say that I've never found the Open License program to be a tremendous hassle. There was good cost savings to be had using transferable licenses, and the product use rights and other terms and conditions were clearly spelled-out. Dealing with resellers was the worst part of it, but I managed to find good resellers who would mostly just do what I asked for and not hound me with sales-gerbil nonsense. The volume license management website was actually fairly nice, and was useful for keeping track of a Customer's license inventory.
I've had a hard time getting much free/open-source software adoption in my small business Customers. They almost always have a mission-critical application that keeps them locked to Windows (or SQL Server, Office, Exchange, etc), and no budget or desire to finance software development. The value proposition of spending money on the proprietary software is often just good enough to make it worth the cost and draconian licensing.
This is very helpful. I've bought a couple of LTSBs (now LTSC) over the years through oddball means, I strongly favor it, but it seems like Microsoft doesn't want me to be able to buy the licenses. Finding less shady-seeming resellers has not been something I've been able to manage.
The problem is not so much people snooping on your traffic, but injecting their own malicious content. Confidentiality isn't the only security goal of TLS.
Haven't ISPs been caught MITM'ing literally all traffic and injecting tracking or ad code on every page? They don't check whether it's a library catalogue or not.
A person on the library network runs Wireshark to watch the network traffic, because they're bored. They see a woman sit down at the library terminal and search for "abortion", or "cancer", or "rape". Now they know something about that woman's interests that she probably would not want them to know.
That's why I encrypt everything: you just never know what kind of things are going to catch the attention of the wrong people.
And if our users had ever mentioned this happening once, ever, in decades, it might be important.
I've never seen it happen when I used it from home, I've never heard of it happening. It's a nobody cares situation.
Now, I have had to defend university servers from quite a lot of things, in a large variety of situations. But this? This has never shown up as an issue.
What you are looking up in the library catalog is absolutely of interest to the surveillance state. Are you okay with anyone who looks up The Autobiography of Malcolm X being added to a watchlist?
And incidentally, if your ISP decided to collect that information and share it with the government, they could collect it with a MitM attack that would give you absolutely no indication an attack was taking place, so your assertion that "this has never shown up as an issue" doesn't hold much weight.
Can you say, with confidence, that it will never happen in the future? And if so, how much of your own money are you willing to put down on that? Would you stake your job/career on that claim that it won't happen in the future?
I’m not OP, but I’m willing to claim that there are going to be at least an order of magnitude more reported security vulnerabilities due to overly complex crypto-stacks than there are going to be reported successful internet-scale MITM-attacks.
For many things crypto itself is going to be a bigger security-risk than what it’s defending against.
Many bugs in "overly complex crypto stacks" are more like "complexity to decrypt was reduced by X%". If it took a regular attacker 1,000,000 years to crack the encryption, it now takes only 900,000 years. So if you are not up against state-sponsored actors, these bugs should be of no concern. Their impact is that instead of "nobody can read this data", it's now "only 10 people on this planet can read the data". That's still better than "anybody can read the data".
And sure, there are other kinds of bugs, like downgrade attacks, but even if an attacker can downgrade you to RC4 and decrypt data in real time, you are now in the same situation as you were when not using crypto at all. Actually, not even that: to be in the same situation as with no crypto at all, the attacker would need to downgrade/break both encryption and signing, because breaking only one of the two would still not allow modifying the data stream.
Bugs in crypto stacks that allow code execution or memory reads/writes on the server or client are rather rare.
> at least an order of magnitude more reported security vulnerabilities due to overly complex crypto-stacks than there are going to be reported successful internet-scale MITM-attacks.
On the other hand, the overly-complex crypto stacks might be the thing preventing most internet-scale MITM-attacks from being successful.
So, you will be betting your career/job on that assertion? And your money as well?
Following that, there must be a crypto-averse bank you're using to deposit your paychecks, right? And if so, can I have the URL to their public and unencrypted site?
Shouldn't the fact that there are exploitation frameworks for that specific attack vector (e.g. https://beefproject.com/) be motivation enough to not leave that vector unpatched?
> I know Hacker News leans very heavy toward the Silicon Valley side, but I swear stuff happens in the flyover states that involves computers.
Yep, I live and work in one too! That's why I asked what it was. This is an edge case where I agree, it would add little utility. However, like you said, there are a lot of cases where there should be this sort of compliance but there is not.
I actually thought you were referring to PLCs for public infrastructure or something similar to that. Sadly, I have gotten into arguments with people who do not want to even add encryption when talking to a PLC on the public internet.
That's definitely a case where I would find some way to lock that down in at least two ways. PLCs seem to be in that "interesting/terrifying/lucrative" triangle to me, with the middle part being the kind of thing I would protect with layers of security.
Overall, I have had to support some very odd security situations, back in the day. As an example: someone's philosophical position somehow got translated into an org-wide policy forbidding firewalls. I'll repeat that: they forbade firewalls. Because we were "open." And I had to support FTP sites. On Windows NT. I managed it without incident through layers of security and care, with watchfulness and compliance.
What does that have to do with https? https does not fix your server-side bugs, nor does it prevent anyone from connecting to your server. The only thing it provides (in general) is encryption of the content that travels over the internet. If you are hosting a public website, that content is already publicly available.
>The only thing it provides (in general) is encryption of the content that travels over the internet. If you are hosting a public website, that content is already publicly available.
Why are you confusing https encryption with private vs publicly available information?
Let's unpack your concepts a bit using COVID-19 as an example[0]. The information that Wikipedia displays about COVID-19 is public information, not top-secret classified files. But you'll notice that Wikipedia serves its pages over "https". Think about why they do that.
Even if you manually type "http" without the "s" in your web browser url bar[1], Wikipedia will still redirect you to "https". Why do they do that if COVID is public information?
As a web surfer, I also want wikipedia to serve its pages with "https". Think about why I would want that.
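For illustration, here is what that redirect looks like mechanically. This is a hypothetical minimal handler (not Wikipedia's actual setup), plus one plain-http client request showing the 301 pointing at the https URL:

```python
import http.client
import http.server
import threading

class RedirectToHTTPS(http.server.BaseHTTPRequestHandler):
    """Answer every plain-http request with a permanent redirect to https."""

    def do_GET(self):
        host = self.headers.get("Host", "localhost").split(":")[0]
        self.send_response(301)
        self.send_header("Location", f"https://{host}{self.path}")
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

# Serve on an ephemeral local port and issue one plain-http request to it.
server = http.server.HTTPServer(("127.0.0.1", 0), RedirectToHTTPS)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/wiki/COVID-19")
resp = conn.getresponse()
print(resp.status, resp.getheader("Location"))
server.shutdown()
```

Real deployments pair this with an HSTS header so the browser stops asking over plain http at all after the first visit.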
Lots of content is publicly available but detailed information about which people are accessing exactly which parts of the content in which patterns is something else altogether and is potentially more valuable and private. https doesn’t fully protect this information but it helps.
> Niche stuff where you had one or two vendors in the whole arena
Seeing that tells me it is not just some sort of public website, and not something you want any random person to access. Think PLCs controlling public infrastructure (this is a real scenario in places). If it is on the public internet, I imagine the first thing you do is log in to it. If it is HTTP, those credentials are going over the internet in the clear.
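To make "in the clear" concrete: HTTP Basic auth is base64, not encryption, so any on-path observer can recover the credentials. A sketch with a made-up captured header (the value here is invented, not from any real device):

```python
import base64

# A plain-HTTP login exposes something like this to every hop on the path.
# The header value is a fabricated example for illustration only.
captured_header = "Authorization: Basic YWRtaW46aHVudGVyMg=="

# "Decoding" it takes one stdlib call, because it was never encrypted.
encoded = captured_header.split("Basic ", 1)[1]
username, password = base64.b64decode(encoded).decode().split(":", 1)
print(username, password)
```

Over https the same header still crosses the wire, but inside the encrypted tunnel, which is the whole point.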
No, the first thing you do is not log in to it. That's the one-size-fits-all mindset that is part of the problem. It says, "Everything is running on these two web servers, therefore we just have to cover those two use cases. Everything is super-critical top secret, so of course we will want to do it."
The first thing you do is see if the book exists. Logins go to a different system, for something else.
If you stop thinking of the Internet as a place where people put in login information and credit card details to get products shipped to them, it becomes a great deal wider and more complex. Sometimes websites are just ... present to provide information. It might be backed up by a database but that is it.
>Sometimes websites are just ... present to provide information.
The wikipedia.org website just provides information rather than taking input of sensitive private data such as credit cards. But that doesn't mean high-value targets like Wikipedia should serve plain http.
Fortunately, I am not saying that Wikipedia should serve plain HTTP. "That which is not mandatory is forbidden" is what I am trying to avoid; I am moving toward options and choices. HTTP should be an option for people depending on what their needs are and how comfortable they feel with various threat models.
> I am moving toward options and choices. HTTP should be an option for people depending on what their needs are and how comfortable they feel with various threat models.
That's fine and I agree with "http" sometimes being a valid choice.
I disagree with how you argued it using phrases like "sometimes a website just provides information instead of credit-cards". The "provides information" is a flawed mental model to base a decision tree on and just confuses people about why https is also important for non-credit-card data.
Your later qualification in terms of "threat models" is much better argued. Yes, my internal git web server doesn't need https, and I don't want the hassle of getting a LetsEncrypt certificate for it. And a toy website on my Raspberry Pi on my local private firewalled NAT'd LAN doesn't need https either.
It's not about "public information"; it's about "threats".