Devpressed seems like a really good initiative! Looking at HN's submission history, and from personal experience, I believe such a forum is much needed. As a student or professional struggling with mental illness, having someone to speak to who knows your world is extremely valuable. Keep up the good work!
I broadly agree with your sentiment, but your last sentence is IMO a bit too harsh with the "resort to a bunch of drugs" part.
Any psychiatrist worth their salt will primarily prescribe drugs as a means of helping people cope while working on the actual underlying causes. Indeed, in many cases, using drugs in the treatment may be the only way to make the patient's day-to-day life bearable. Drugs may also be essential for allowing the treatment team to examine possibly very problematic underlying issues.
Also, an SSRI, for instance, of course cannot by itself "cure" clinical depression; you will virtually always need professional cause-oriented therapy to get better. Most patients know this, or will realize it as they experience how their meds work.
Finally, some mental disorders, and many individual cases, require constant medication regardless of the quality of the other treatment given. For example, full-fledged bipolar disorder, where a manic or depressive episode may have very severe consequences.
I'm using PyPy for real-world code every day, because it runs fast as^H^H^H^H, uhm, very fast! I regularly see 6-15X speedups compared to CPython.
That being said, if your code is mostly IO-bound or calls into foreign-language libraries, PyPy's speedup of the Python code of course won't help much with your wall time.
Note that for for-loop-style number-crunching code, the speedup you are aiming for over CPython is on the order of 50x-750x, depending on cache locality.
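To make concrete what "for-loop-style number crunching" means here (a hypothetical illustration, not one of the actual benchmarks): a naive dot product written as a plain Python loop. CPython interprets every iteration, while PyPy's tracing JIT compiles the hot loop to machine code, which is where the large speedups come from.

```python
def dot(xs, ys):
    # Plain Python loop: the kind of code a tracing JIT
    # can compile down to a tight machine-code loop.
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total

a = [float(i) for i in range(1000)]
b = [2.0] * 1000
print(dot(a, b))  # 999000.0 under both CPython and PyPy
```

Under CPython you'd typically hand this off to NumPy instead; the point of the comparison is what happens when you *don't*.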
I'm definitely not saying that PyPy won't do better than 15x speedup on number crunching code, just pointing out that in the context of NumPyPy a 15x speedup is not very relevant if you want PyPy to be a viable alternative to, say, Julia.
Yeah. It depends on the hardware and the problem. A 50x speedup with optimized routines (memory-optimized, branch-reduced, using SIMD, etc.) multiplied by 8 cores gives you a total 400x speedup. This is what I've seen in real-life code I've written. Also, if you offload to some other processor (a GPU or dedicated hardware), then of course you can get even faster (again, depending on the hardware and the problem).
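A minimal sketch of that "multiply by N cores" point, using the stdlib multiprocessing module; the kernel and the chunking scheme here are just illustrative:

```python
from multiprocessing import Pool

def sum_squares(chunk):
    # The per-core numeric kernel: a plain Python loop.
    total = 0
    for x in chunk:
        total += x * x
    return total

if __name__ == "__main__":
    data = list(range(10_000))
    # Strided split so every worker gets an equal share.
    chunks = [data[i::4] for i in range(4)]
    with Pool(4) as pool:
        partials = pool.map(sum_squares, chunks)
    print(sum(partials))  # same answer as sum_squares(data)
```

Whether you actually get near-linear scaling depends on the problem: process startup and pickling overhead can easily dominate for small workloads like this one.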
So PyPy speedups aren't very good compared to the best you can achieve with other techniques... but you can mostly use the same tricks with PyPy as with CPython to get those results there too :)
PyPy speedups over CPython in numeric code are in the 50-200x ballpark; it really depends on what you do. We can certainly do better, but it's within 2x of optimized C for most of it (unless there is very good vectorization going on).
For the stuff that I tried, PyPy was universally faster than Cython on non-type-annotated code, and mostly faster on the type-annotated code from the Cython benchmarks.
It would be great to see a repo with this set of examples with code for both what is run under PyPy and the type-annotated cython code, and the exact settings that were used.
I only ran stuff I found in the Cython repo (good or bad), in benchmarks/ or so. It was also ages ago, so take it with a grain of salt. My point is that there is no fundamental reason why Cython should be faster than PyPy (even with type annotations), because the equivalent of type annotations is essentially done during JITting. In fact, PyPy should be faster because of other things, like a faster GC or a better object representation.
It's hard to estimate how much time you'll need, since it depends so much on which ideas you are already familiar with, but I'd say that SICP is generally a nice and "easy" read for a programmer with some experience. I don't mean to say that the material is "easy" -- far from it; there are a lot of challenges and brain-twisters -- but the writing is IMO very good, and the book builds up nicely. You should also complement your reading by watching some of the videos of Abelson and Sussman teaching 6.001 (especially Lecture 1A, and Sussman's lectures on metacircular interpreters).
Also, yeah, read the entire thing! You could skip some sections, but I would advise you to mostly move linearly from the start.