"HTTPS-only" goes directly against the architectural principles laid out in "REST", where intermediaries should be able to understand (in a limited sense) the request and responses that pass through, do caching, differentiate idempotent from non-idempotent actions etc.
The ability for intermediaries to see what goes through is in large part why "REST" is said to aid scalability, the same point this article seems to address.
Now, both movements, "HTTPS-only" and "REST" are widely popular in dev communities. Yet I never see one acknowledge the existence of the other, which threatens it. In fact, I'd see people religiously support both, unaware of their cognitive dissonance.
Because your initial premise is flawed. Equal GET requests will often have different results based on the user doing them. Either because they are requesting their "own" data or because they have different privileges and see different results. While not perfect, it's the reality.
This throws out most possibilities for caching. And I cannot see why intermediaries should need to differentiate beyond that. So HTTPS is in no way limiting REST.
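The distinction the parent draws (per-user vs. shared responses) is exactly what HTTP cache headers express. A minimal sketch of the server side, using standard `Cache-Control` and `Vary` headers; the helper function and its name are mine, purely for illustration:

```python
# Sketch: a server marking per-user responses so shared caches
# (the intermediaries REST relies on) skip them, while public
# resources stay cacheable by anyone on the path.

def cache_headers(authenticated: bool) -> dict:
    """Return response headers appropriate for shared-cache handling."""
    if authenticated:
        # Only the user's own (non-shared) cache may store this response,
        # and the cached copy varies with the Authorization header.
        return {"Cache-Control": "private", "Vary": "Authorization"}
    # Anyone, including shared proxy caches, may store it for an hour.
    return {"Cache-Control": "public, max-age=3600"}

print(cache_headers(True))
print(cache_headers(False))
```

So "different results per user" doesn't rule caching out entirely; it pushes those responses into private caches and leaves shared caches for the rest.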
My premise is that HTTPS-only and REST have opposing constraints.
You have not demonstrated any flaws in it. REST says communication is stateless and cacheable, while acknowledging a select minority of cases where it is not.
Turning the minority cases into the only mode of communication nullifies most of the benefits of REST, because the whole rationale of the paper - intelligent shared processing and caching by intermediaries - is lost.
I'm taking no stance on what "the reality" is, nor on which side is more correct. I'm stating what both sides want, and finding it curious that they don't see the contradiction.
I think the description of REST you've outlined is not entirely right. The statelessness relates to client state, not the system state - i.e., POST/PUT/DELETE etc. can very well change the system state, and that's the whole point of them - and session state is allowed too; it's just not part of the REST architecture but is assumed to be implemented externally.
It is true that HTTPS may impede some cacheable resources. Maybe HTTPS could be improved to allow transparent caching of _some_ content, but the security implications may be hard to predict, and it would require very careful implementation to avoid introducing new security issues via attacks on the caches themselves (the DNS system still has this problem AFAIK).
The statelessness relates to communication state. A client can hold state and it most certainly will hold state (consider your browser: open tabs with URLs, bookmarks, local browser cache; form autocompletion; settings; all of this is "state").
Instead, REST talks about a request being stateless and a response being stateless (i.e. sufficient on its own and not dependent on preceding or future communication between that client and server).
This is, again, done for the benefit of intermediaries, because intermediaries should not be forced to hold state in order to interpret REST communication. Every request and response should be sufficient on its own to be understood.
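A minimal sketch of that constraint: the handler below derives everything from the request itself, so any intermediary (or any one of N interchangeable servers) can process it with no prior history. The request shape and names here are illustrative, not from Fielding:

```python
# Stateless handling: all needed context travels in the request
# (method, resource, credentials), so no server-side session lookup
# is required and any replica can answer.

def handle(request: dict) -> dict:
    # The identity comes from the request itself, not a stored session.
    user = request["headers"].get("Authorization", "anonymous")
    return {
        "status": 200,
        "body": f"{request['method']} {request['path']} as {user}",
    }

req = {"method": "GET", "path": "/orders/42",
       "headers": {"Authorization": "token-abc"}}
print(handle(req))
```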
Section 5.2.2 of Fielding's thesis specifically says that per-user authenticated responses cannot be cached in a shared cache, because they may vary per request. There are other cases mentioned too.
"All REST interactions are stateless. That is, each request contains all of the information necessary for a connector to understand the request, independent of any requests that may have preceded it. This restriction accomplishes four functions: 1) it removes any need for the connectors to retain application state between requests, thus reducing consumption of physical resources and improving scalability; 2) it allows interactions to be processed in parallel without requiring that the processing mechanism understand the interaction semantics; 3) it allows an intermediary to view and understand a request in isolation, which may be necessary when services are dynamically rearranged; and, 4) it forces all of the information that might factor into the reusability of a cached response to be present in each request."
When the paper was written, the per-user requests were supposed to be an exception, a minority case.
HTTPS will effectively make everything opaque and "per-user", and hence everything I quoted above which refers to intermediaries will no longer matter.
Restrictions in 5.1.3 ("Stateless"), 5.1.4 ("Cache"), 5.1.5 ("Uniform Interface") and 5.1.6 ("Layered System") would no longer apply either. All that intermediaries will see is encrypted data, so shared data and functionality, as explained there, can no longer be moved to an intermediary.
BTW, parent, way to selectively refer to a phrase in Fielding's paper while missing the point of 99% of the rest of it.
"If some form of user authentication is part of the request, or if the response indicates that it should not be shared, then the response is only cacheable by a non-shared cache."
This clearly indicates that returning different versions of the same resource on a per-user basis is valid REST architecture (and I was responding to the claim that "Equal GET requests will often have different results based on the user doing them" is very non-REST).
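The quoted rule is mechanical enough to sketch from the other side, as a shared cache might apply it: authenticated requests, and responses marked non-shareable, stay out. The function name and header handling are mine, simplified from the real HTTP caching rules:

```python
# Sketch of the quoted rule from a shared cache's point of view.

def shared_cache_may_store(request_headers: dict,
                           response_headers: dict) -> bool:
    if "Authorization" in request_headers:
        # User authentication is part of the request:
        # only a non-shared cache may store the response.
        return False
    cc = response_headers.get("Cache-Control", "")
    # "private" / "no-store" mark the response as not shareable.
    return "private" not in cc and "no-store" not in cc

print(shared_cache_may_store({}, {"Cache-Control": "public"}))
```

Per-user responses remain valid REST; they simply fall through to the non-shared cache.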
While it is only one phrase, it is the only phrase that deals with user authentication and matches the discussion very well. As such I think your comment about missing 99% of his thesis is incorrect - indeed, I think choosing the correct and relevant part is precisely what "getting the point" is about.
I agree with your points about HTTPS, but that is orthogonal to the user authentication discussion, in the sense that the transport layer is separate from the API design.
However, I appreciate that your points about how the transport layer affects the assumptions around API design are correct, and that going to an HTTPS-only transport mechanism may have performance impacts in many cases (especially high-volume ones).
I know which phrase you're referring to, but if you read it in context, it's apparent this is an exception case, because the very same section talks about cacheable, stateless requests and responses.
All of REST's constraints are about encouraging cacheability and "visibility" to intermediaries. Intermediaries should in most cases be able to see which resource is being requested/returned, read the method, read the content-type and other headers.
None of this is available during an HTTPS session. So "HTTP + a bit of HTTPS" is REST + a dose of realism.
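To make the "visibility" point concrete, here is what a plaintext intermediary can actually read off the wire: the request line and headers parse trivially, so a proxy can route, cache, or classify by method. Under TLS the same bytes are opaque ciphertext. The parsing below is a deliberately minimal sketch, not a full HTTP parser:

```python
# What a plaintext intermediary "sees": method, resource, and headers
# are readable from the raw bytes without any shared state.

raw = (b"GET /articles/7 HTTP/1.1\r\n"
       b"Host: example.com\r\n"
       b"Accept: application/json\r\n\r\n")

request_line, *header_lines = raw.decode("ascii").split("\r\n")
method, path, _version = request_line.split(" ")
headers = dict(h.split(": ", 1) for h in header_lines if ": " in h)

print(method, path, headers["Accept"])
```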
First, proxies are certainly not one of the necessary principles of REST. Even without proxies, there can be REST. More importantly, most REST APIs can't take advantage of proxies anyway, because most responses must not be cached.
Second, HTTPS is MITMed by proxies all the time, using certificates signed by custom CAs that are installed in the browsers on the network. This is very common in corporate networks. Therefore, HTTPS currently works with proxies.
> Yet I never see one acknowledge the existence of the other, which threatens it. In fact, I'd see people religiously support both, unaware of their cognitive dissonance.
Are you sure you're not just seeing different groups of people support one or the other? I support HTTPS-only and am more or less anti-REST.
I think the big advantage of "REST" was being easy to use from the browser, but modern REST (with e.g. content negotiation and HTTP verbs) actually goes against that. I think strict, automatically-checked schemata for APIs are very valuable, so I'd prefer to use something like thrift or even WS-* rather than REST.
> I think strict, automatically-checked schemata for APIs are very valuable, so I'd prefer to use something like thrift or even WS-* rather than REST.
Strict, automatically-checked schemata for APIs are perfectly doable with REST, in JSON or whatever Protobuf flavor. OTOH, with SOAP and WS-*, automatically-generated schema behemoths have been created that I have very creative ideas about how to deal with and dispose of.
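"Strict, automatically-checked schemata" needn't mean WS-* machinery; even a hand-rolled check over JSON-style payloads captures the idea. A toy sketch - real projects would use something like JSON Schema or Protobuf; the `check` helper and `{field: type}` schema shape are mine:

```python
# Minimal illustration of schema checking for JSON-style API payloads.

def check(schema: dict, doc: dict) -> list:
    """Return a list of violations of a {field: type} schema."""
    errors = [f"missing field: {k}" for k in schema if k not in doc]
    errors += [f"{k}: expected {t.__name__}"
               for k, t in schema.items()
               if k in doc and not isinstance(doc[k], t)]
    errors += [f"unexpected field: {k}" for k in doc if k not in schema]
    return errors

user_schema = {"id": int, "name": str}
print(check(user_schema, {"id": 1, "name": "ada"}))  # no violations
print(check(user_schema, {"id": "1"}))               # violations listed
```

The value of a single standard, as the parent says, is that this check is generated from the schema and enforced by tooling rather than written by hand.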
As for the easy-in-the-browser part (whether it is for tests or implementation), it was merely a side effect of reusing the HTTP spec semantics as a common-ground general purpose vocabulary. REST in itself doesn't even mandate HTTP.
> Strict, automatically-checked schemata for APIs are perfectly doable with REST, JSON or whatever Protobuf flavor.
Up to a point, but having a single standard that's built into all the tooling is huge. Hopefully one or other approach will "win" in the REST world and we'll start to see some convergence.
> As for the easy-in-the-browser part (whether it is for tests or implementation), it was merely a side effect of reusing the HTTP spec semantics as a common-ground general purpose vocabulary.
Intended or otherwise, it was a big advantage, and I think it was the real reason for "REST"'s success.
So why not have the TLS applied at the edge of your network of machines that provide the service, and plain comms between them? Or is it somehow important that everyone, everywhere be able to read the stuff?