Great point! I have a "cheap" Samsung M32 that I bought for about $170. One reason I chose this Samsung phone over a similarly priced phone from a competitor was that I wrongly believed that Samsung provided several years more OS and security updates. After buying the phone I realized that longer support only applied to flagship models :/
Devil's advocate question: How did you expect Samsung could fund SW dev costs to support 7 years of updates on the margins of a $170 phone while still turning a profit?
Not sure how much you were expecting at $170, but you might have been penny wise and pound foolish here trying to scrape the bottom of the barrel, as nobody else gives you more than 2 years of updates at those rock-bottom prices. Sometimes it pays to spend a bit more and get something worthwhile.
Samsung's other budget phones from last year in the ~300 Euro range, like the A54, have 5 years of guaranteed support. Maybe the mid-rangers of 2024 will also get 7 years of updates, which would be killer value.
"In 2022 alone, the South Korean company sold almost 260 million smartphones worldwide."
Say there's a model that sold 10m units total. I think it's fair to say Samsung could reasonably increase the price by $1 (~$0.75 after tax) for 10 years of support instead of ~3 years.
That's $7.5m. I used to flash CyanogenMod on my phones (Motorola Defy etc.); IIRC it was often a single guy making the ROMs, I guess part time, doing a decent job of it. $1m/year for years 4-10 should cover a team of 5.
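For what it's worth, the back-of-envelope math above works out. A quick sketch (the 10m units, ~$0.75 net per dollar, and ~$1m/year team cost are all the rough assumptions from the comment, not real Samsung figures):

```python
# Back-of-envelope check of the numbers above (all figures are the
# rough assumptions from the comment, not real Samsung costs).
units_sold = 10_000_000       # hypothetical model that sold 10m units
net_per_unit = 0.75           # ~$0.75 kept from each $1 price bump
fund = units_sold * net_per_unit
print(f"fund raised: ${fund / 1e6:.1f}m")                # $7.5m

years = 7                     # extended support, years 4 through 10
team_cost = 1_000_000         # ~$1m/year for a small team of ~5
print(f"support cost: ${years * team_cost / 1e6:.1f}m")  # $7.0m
```

So the $1 bump roughly covers the hypothetical team cost, with a small margin left over.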
I think difficulties arose when newer kernels wouldn't work with the older hardware drivers that were available. But there are fewer SoCs than smartphone models... I guess maybe $0.10 to Qualcomm for every SoC sale should cover updating drivers.
Not sure I'd want to be using a 10-year-old (2013) phone now, but a 5-year-old (2018) phone with fresh software would be fine. Today's higher-end phones should still be usable beyond 5 years.
> I think difficulties arose when newer kernels wouldn't work with the older hardware drivers that were available.
That's a problem of Samsung's own doing. They can mainline their drivers and force their subcontractors to do the same if those subcontractors want to keep selling to them. They're definitely big enough to be able to do it if they wanted to.
> How did you expect Samsung could fund SW dev costs to support 7 years of updates on the margins of a $170 phone while still turning a profit?
Release the source code and accept patches. Nobody even cares if you provide further updates at all if you release enough code or documentation to begin with that third parties can feasibly get up to date versions of stock Android running on it.
I can absolutely use an A series as my daily driver, but an M series I will not touch. And you paid way too much for an M series phone; I would say they are worth $80-100.
Based on that, I can see why they can't give you 7 years of support. Besides, most M series use some unknown Chinese CPU that will never receive any kernel updates after release.
It's not a toggle, but Google Pixel phones (or at least the one I owned a few years ago) come with very few if any bloatware type apps, since the default Android apps are the Google apps anyway. Contrast with Samsung that duplicates a bunch of core apps/functionality.
Motorola is also super minimal/mostly Google. I think the only bloatware on my newest was an app to control 'moto actions', which I find gimmicky but some tend to like.
Yeah, my previous Samsung I had to spend a half hour with adb to get all their junk disabled.
Not sure what you mean by default Android apps, but Google's Pixel apps != AOSP's stock apps. AFAIK most apps can now be disabled in Settings on recent Samsung phones; I'm not a fan, but I don't think they're that much worse than Pixels, especially on the flagship devices.
Sony’s phones also come with minimally modified Android. They still have 3-4 bloatware apps to remove with ADB, but it’s pretty manageable. I picked up an Xperia 1 V on discount to take the position of “flagship phone” in my Android app dev testing lineup, and if I were to switch my daily driver away from iOS, it would be on my list of considerations.
In my experience Samsung's flagship phones are top notch hardware with superb build quality. According to user reports on the internet the latest Google Pixel can't even manage to connect to the cellular networks reliably and without overheating. I wish Samsung would step their security game up to GrapheneOS standards because I just can't trust Google not to fuck the phones up.
Are you aware of /The Implementation of Functional Programming Languages/ by Simon Peyton Jones, the main person behind the front end of the GHC Haskell Compiler? The full book is available free on his web site as PDF:
Yes. I appreciate that he released it as a PDF, but that book is one of the old, dense, academic books I described. I should try it again, but it's very different from the step-by-step books posted above, or the partial book by Stephen Diehl.
Do you mean as opposed to e.g. verifying the absence of timing attacks? While I agree that verifying the absence of timing attacks is probably much harder than what was done here, the difficult part of the s2n verification I linked to was that we verified equivalence between imperative C code and a functional mathematical specification.
Right, it says "convincing argument that the C implementation does the same thing as the mathematical specification" and "Assuming that we didn’t accidentally program the same “bug” into our Cryptol spec".
My understanding is it's another way of white-box testing the code against specified behaviour, but just that using a (proven?) mathematical specification for algorithms is probably easier than writing unit tests that have to capture all edge cases. (In essence, it sounds like verification software is probably set up to detect such edge cases, which I do think is a good idea, because you only have to program such software once.)
I don't think I understand what you mean by "white-box testing" here, but perhaps it's helpful to clarify what I meant by "equivalence" above, and how it relates to testing: what we did here was verify input/output equivalence between the imperative C code and our functional mathematical spec in Cryptol, for a range of key and input buffer sizes. This corresponds to testing all inputs of those sizes, which is not possible to do by direct testing: e.g., for a 64 byte key and a 1000 byte message, the equivalence corresponds to checking 256^(64+1000) = 2^8512 tests, which would take "forever" to verify by direct testing.
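To make the size of that input space concrete, a quick sketch (the 64-byte key and 1000-byte message are the sizes mentioned above):

```python
# Count the distinct (key, message) pairs for a 64-byte key and a
# 1000-byte message: each byte independently takes one of 256 values.
key_bytes, msg_bytes = 64, 1000
num_inputs = 256 ** (key_bytes + msg_bytes)
print(num_inputs == 2 ** 8512)        # True: 8 bits per byte, 1064 bytes
print(num_inputs.bit_length() - 1)    # 8512
```

Even testing a billion inputs per second would cover a vanishingly small fraction of a 2^8512-element space, which is why symbolic equivalence checking is the only practical option.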
We did not prove any properties of our mathematical specification in Cryptol, but the claim is that it's close enough to the official FIPS mathematical specification for HMAC [1] that it's easy to believe that it's correct. However, a group at Princeton has also verified HMAC in the past, and gone further than us by not only proving that the imperative C code is input/output equivalent to their mathematical spec in Coq, but also proving that their mathematical spec has the security properties of a secure hash function [2].
AFAIK, white-box testing is simply when you can look at the source code (as opposed to black-box testing); a unit test, for example, is a type of white-box test.
What I was struggling to express is that in the mathematical notation, the operations are well defined (right?); in C that's not necessarily the case. So you could argue that if you were writing direct tests, you don't need to check all inputs, but testing edge cases will do. And maybe that's true, but practically impossible for complex algos, because how do you know which inputs cause edge-case behaviour? So I was agreeing that this approach is probably better than having some fallible human write test cases :) (better = more thorough and reliable). And although you'd have to make sure the same fallible human hasn't put bugs into the mathematical spec, as you've said, that's probably easier to check.
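As a toy illustration of why hand-picked edge cases are unreliable: here's a hypothetical bit-trick midpoint function (invented for this example, not from the s2n code) that passes obvious unit tests but disagrees with its "spec" on inputs the tester didn't think of. An exhaustive equivalence check over even a small domain catches it immediately:

```python
def midpoint_fast(a: int, b: int) -> int:
    # Hypothetical buggy bit-trick midpoint. A correct variant is
    # (a & b) + ((a ^ b) >> 1); this one rounds *up* instead of down,
    # which only shows when a + b is odd.
    return (a | b) - ((a ^ b) >> 1)

def midpoint_spec(a: int, b: int) -> int:
    # The "mathematical spec": floor of the average.
    return (a + b) // 2

# Hand-picked unit tests: all pass, because every sum here is even.
assert midpoint_fast(2, 4) == 3
assert midpoint_fast(0, 0) == 0
assert midpoint_fast(10, 10) == 10

# Exhaustive equivalence check over a small domain finds the bug:
# every pair with an odd sum disagrees with the spec.
mismatches = [(a, b) for a in range(16) for b in range(16)
              if midpoint_fast(a, b) != midpoint_spec(a, b)]
print(len(mismatches))   # 128 failing pairs out of 256
```

The verification described above does the same kind of equivalence check, but symbolically over the entire input space rather than a 16x16 toy domain.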
EDIT: Nevermind, I found part three about undefined behaviour. I had written: You seem to know loads about this, maybe you could say how undefined C behaviour is handled when comparing against a spec? Is e.g. shift-past-bitwidth simply forbidden? The only alternative I can think of is looking at the disassembly on a certain platform and checking those instructions, which sounds less than ideal.
* the operations in the mathematical spec are mostly well defined, but e.g. division by zero is not defined. However, the verification handles this by checking that all operations are well-defined on all possible inputs.
* yes, identifying the "edge cases" is not something you can do easily, and hard to make formal. In some sense, the fact the non-edge-case inputs are treated in a uniform way is probably what allows the verification to succeed at all.
* a short summary of the answer you already found in the third blog post: what we actually verify is the LLVM assembly that Clang produces when compiling the C program. Much of the potentially undefined behavior in a C program is translated away by the compiler on the way to LLVM assembly. For any potential undefined behavior that remains in the LLVM assembly, the verification checks that it cannot happen at runtime.
In the context of the article, I think the justification was: assuming you're more productive in quality and not quantity, because you're working towards ambitious goals, then on most days you will not complete a recognized goal. The suggestion is to go out of your way to recognize your smaller day-to-day accomplishments which help you reach the big goal.
"I need to apologize to everyone in CS 367 for providing data sets
containing offensive material. I had not looked at the contents of the
large and huge data sets until well after the assignment had been
released. Had I realized what the data sets contained, I never would have
used them. I am sorry that this happened."