I hope this is the first of many, and that genetic testing for more conditions gets approved soon.
I'm currently paying for a test suite during my IVF process that will also screen for things like ADHD risk, diabetes risk, and low-IQ risk[1]. I think it will soon be basically immoral not to do this type of screening, and it should be accelerated.
If you're using GP, they don't do monogenic screening like the hereditary cancer tests described here (their assay works well for GRS but can't do monogenics). Orchid does monogenic screening[1] because it's WGS; feel free to get in touch.
In a scenario where embryo selection is possible, it does seem like new moral/ethical considerations arise if you have the ability to choose an embryo that is at a lower risk for long term issues.
That said, other questions arise like: what happens to the broader population when we start manually selecting for certain traits? It seems like unintended consequences could be lurking as well.
To suggest we should choose an embryo that is at lower risk of long-term issues reveals an underlying presupposition that suffering should be avoided or is bad. There are many examples throughout history of people who rose to great heights as a result of facing their suffering bravely.
While you will always find stories of people who've overcome great difficulties, there is no virtue in suffering for no reason, and when possible, we use medical interventions to prevent or avoid it. The success scenarios shouldn't imply that the state of suffering is itself desirable, even if it does sometimes lead to inspiring outcomes. There are plenty of examples of the opposite, with suicides providing a counterpoint at the extreme.
The way I see this, suffering is part of life, and isn't something we can entirely eliminate. When suffering happens, there are better and worse ways to handle it, and good can come from it. But most of cultural/societal evolution is focused on finding ways to reduce or avoid it, and that's a good thing. Obvious examples like improving food security and reducing the impact of previously debilitating illnesses come to mind. I personally grew up not knowing where the next meal would come from some days. That early experience influenced the person I am today. Poverty/starvation are still situations we should continue to try to solve.
So the question really becomes: what differentiates the standard ways we avoid unnecessary suffering from preemptive options that are on the horizon, if anything? i.e. if you have a bad knee, it makes sense to see someone and fix it. Given the choice, there's no reason to choose immobility. If an embryo has a high chance of developing lifelong congenital issues and a future parent has the option to choose an embryo that is significantly less likely to suffer the same fate, why not make that choice, especially if the affliction leads to high child mortality rates?
So yes, I think there is a presupposition that suffering should be avoided when possible (it is often not possible). And I think most of the world runs on this presupposition. This should not be conflated with believing that suffering is inherently "bad", nor should the possibility of positive outcomes lead one to believe that suffering is "good".
Yeah, sorry, I didn't make it clear: specifically, in the IVF process, when you get to the decision of which embryo to implant, it'll be immoral not to screen them all and choose the one with the lowest health risks.
On the main topic, have you heard about RWKV (an RNN-based GPT-style network)? The project is actively working on implementing "infinite" context length support, which would probably pair very well with a project like yours.
I'm waiting on my GPT-4 API access so I can use gpt-4-32k which maybe can soak up 10k LOC?
Clearly this will break eventually, but I'm playing around with some ideas to extend how much context I can give it. One is to base64-encode file contents. I've seen some early success showing GPT-4 knows how to decode it, so that'll let me stuff more characters into it. I'm also hoping that with .gptignore I can selectively include just the files I think are relevant for whatever prompt I'm writing.
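For what it's worth, the selective-include idea can be sketched in a few lines. Everything here is hypothetical: I'm assuming .gptignore uses gitignore-style glob patterns, and `files_for_prompt` / `load_ignore_patterns` are made-up helper names, not the actual project's API. One caveat worth noting: base64 output is about 4/3 the size of the raw bytes, so it only pays off if the model tokenizes the encoded text more compactly.

```python
import base64
from fnmatch import fnmatch
from pathlib import Path

def load_ignore_patterns(root: Path) -> list[str]:
    """Read glob patterns from .gptignore (assumed gitignore-like, one per line)."""
    ignore_file = root / ".gptignore"
    if not ignore_file.exists():
        return []
    lines = ignore_file.read_text().splitlines()
    return [ln.strip() for ln in lines if ln.strip() and not ln.startswith("#")]

def files_for_prompt(root: Path) -> str:
    """Concatenate non-ignored files under root, base64-encoding each body."""
    patterns = load_ignore_patterns(root)
    chunks = []
    for path in sorted(root.rglob("*")):
        if not path.is_file():
            continue
        rel = path.relative_to(root).as_posix()
        # skip the ignore file itself and anything matching its patterns
        if rel == ".gptignore" or any(fnmatch(rel, p) for p in patterns):
            continue
        encoded = base64.b64encode(path.read_bytes()).decode("ascii")
        chunks.append(f"FILE {rel} (base64):\n{encoded}")
    return "\n\n".join(chunks)
```

The result is a single string you can drop into the prompt, with ignored files (logs, build output, etc.) already filtered out.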
I wonder if you could teach it to understand a binary encoding using the raw bytestream, feed it compressed text, and just tell it to decompress it first.
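A minimal sketch of that idea, assuming zlib for compression and base64 so the payload survives as prompt text (whether the model can actually decompress it in-context is exactly the open question here):

```python
import base64
import zlib

# Repetitive source code compresses well, which is the best case for this trick.
text = "def add(a, b):\n    return a + b\n" * 40
payload = base64.b64encode(zlib.compress(text.encode("utf-8"))).decode("ascii")

prompt = f"Decompress this zlib+base64 payload, then answer questions about it:\n{payload}"
print(len(text), len(payload))  # the payload is much shorter for repetitive input
```

The round trip is trivial for a real decompressor; the gamble is whether the model can do it from text alone rather than just pattern-matching a plausible-looking answer.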
Here is what GPT-4 says about it. "As an AI language model, I can understand and work with various text encoding schemes and compression algorithms. However, to work with a raw bytestream, you would need to provide specific details about the encoding and compression used.
To teach me to understand a particular binary encoding and compressed text format, you should provide the following information:
1. The binary encoding used (e.g., ASCII, UTF-8, UTF-16, etc.).
2. The compression algorithm employed (e.g., gzip, Lempel-Ziv-Welch (LZW), Huffman coding, etc.).
Once you provide these details, I can help you process the raw bytestream and decompress the text. However, keep in mind that my primary focus is on natural language understanding and generation, and I might not be as efficient at handling compressed data as a dedicated compression/decompression tool."
When GPT gives an answer like that, is it actually a meaningful description of its capabilities? Does it have that kind of self-awareness? Or is it just a plausible answer based on the training corpus?
My guess is that the training data includes things specifically about the GPT itself and its capabilities, so it would be somewhat correct. But it's also known to just make shit up when it feels like it, so you can't 100% trust it, same as with all other prompts/responses.
One huge example is the assumption that portfolio returns are normally distributed. That's not a minor nitpick; it invalidates every formula that follows.
That's a fair assumption. But I did skim to the part that attempted to address non-normality: it doesn't. Under non-normality, not a single formula in this post holds ("standard deviation" need not even exist).
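The fat-tails point is easy to illustrate numerically. A sketch, using a Student-t sample as a stand-in for real return data (an assumption for illustration, not actual market data):

```python
import numpy as np

def excess_kurtosis(x: np.ndarray) -> float:
    """Sample excess kurtosis: ~0 for a normal distribution, >0 for fat tails."""
    z = (x - x.mean()) / x.std()
    return float((z ** 4).mean() - 3.0)

rng = np.random.default_rng(0)
normal = rng.normal(size=100_000)          # what the formulas assume
fat = rng.standard_t(df=3, size=100_000)   # heavy-tailed stand-in for daily returns

print(excess_kurtosis(normal))  # close to 0
print(excess_kurtosis(fat))     # large and positive
```

Any normal-based risk number fit to the second sample will understate how often extreme moves occur, which is the practical content of the objection.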
Hi, I wrote the post. I discussed the problem of non-normality in the post. Moreover, if non-normality is the problem, then pretty much all of modern finance is invalidated. Instead of saying it's all bullshit, a better approach is to recognize the model's assumptions and use some discretion in trading based on its output.
Black-Scholes, VaR, factor models, CAPM, modern portfolio theory, etc. are all based on the normal distribution and are all still used in industry today. Every quant fund in the world uses models that assume a lognormal distribution of returns. Moreover, Taleb certainly used models based on lognormal distributions at his hedge funds.
The headline is disingenuous. I suspect a lot of folks will take away from it that Facebook the company is on the ropes, but the primary source[1] indicates they are only talking about Facebook the product. The press is not making that distinction here. A vast number of people leaving "Facebook" are just going to Instagram.
You should make this happen! I ended up getting a part-time weekend gig at a San Francisco coffee shop - it was super satisfying to be on the other side.
I made the effort to befriend the baristas since I saw them every day anyway. Eventually I asked one of them to teach me during slow hours, and I started making my own drink each time I came in. One of the baristas ended up referring me to a different cafe (which let me avoid the awkward conversation about a weird-looking resume), and I passed the interview by making decent drinks.