
Ball also lectures on quantum information theory - https://www.youtube.com/watch?v=q7v5NtV8v6I


From the YouTube description:

00:00 Intro - what is the secret of life?

04:09 Is the human genome a blueprint or a musical score?

07:58 Crick's central dogma of biology

12:03 What scientists got wrong about genes and proteins

18:50 Why evolution chose disordered proteins

22:27 The process of gene regulation

27:03 Why life doesn't work like clockwork

30:29 The growth of intestinal villi

32:18 Why do we have five fingers?

34:55 Causal emergence

38:09 Do all parts of us have their own agency?

42:46 How does this affect genetic approaches to medicine?

48:09 Why do organisms exist at all?

Philip Ball explores the new biology, revealing life to be a far richer, more ingenious affair than we had guessed. There is no unique place to look for an answer to the question of what life is: life is a system of many levels—genes, proteins, cells, tissues, and body modules such as the immune system and the nervous system—each with its own rules and principles.

In this talk, discover why some researchers believe that, thanks to incredible scientific advancements, we will be able to regenerate limbs and organs, and perhaps even create new life forms that evolution has never imagined.

Philip Ball is a freelance writer and broadcaster, and was an editor at Nature for more than twenty years. He writes regularly in the scientific and popular media and has written many books on the interactions of the sciences, the arts, and wider culture, including 'H2O: A Biography of Water', 'Bright Earth: The Invention of Colour', 'The Music Instinct', and 'Curiosity: How Science Became Interested in Everything'.

Philip's book 'Critical Mass' won the 2005 Aventis Prize for Science Books. He is also a presenter of Science Stories, the BBC Radio 4 series on the history of science. He trained as a chemist at the University of Oxford and as a physicist at the University of Bristol. He is the author of 'The Modern Myths' and lives in London.


Abstract

The Double Asteroid Redirection Test (DART) mission impacted Dimorphos, the satellite of binary near-Earth asteroid (65803) Didymos, on 2022 September 26 UTC. We estimate the changes in the orbital and physical properties of the system due to the impact using ground-based photometric and radar observations, as well as DART camera observations. Under the assumption that Didymos is an oblate spheroid, we estimate that its equatorial and polar radii are 394 ± 11 m and 290 ± 16 m, respectively. We estimate that the DART impact instantaneously changed the along-track velocity of Dimorphos by −2.63 ± 0.06 mm s⁻¹. Immediately after the impact, Dimorphos's orbital period had changed by −32.7 minutes ± 16 s, to 11.377 ± 0.004 hr. We find that over the subsequent several weeks the orbital period changed by an additional −34 ± 15 s, eventually stabilizing at 11.3674 ± 0.0004 hr. The total change in the orbital period was −33.25 minutes ± 1.5 s. The postimpact orbit exhibits an apsidal precession rate of 6.7° ± 0.2° day⁻¹. Under our model, this rate is driven by the oblateness parameter of Didymos, J₂, as well as the spherical harmonics coefficients, C₂₀ and C₂₂, of Dimorphos's gravity. Under the assumption that Dimorphos is a triaxial ellipsoid with a uniform density, its C₂₀ and C₂₂ estimates imply axial ratios, a/b and a/c, of about 1.3 and 1.6, respectively. Preimpact images from DART indicate Dimorphos's shape was close to that of an oblate spheroid, and thus our results indicate that the DART impact significantly altered the shape of Dimorphos.
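As a back-of-envelope plausibility check (mine, not the paper's): for a circular orbit, an along-track kick Δv changes the period by ΔP/P = 3Δv/v. Plugging the abstract's −2.63 mm s⁻¹ into literature values for the pre-impact period (~11.92 hr) and separation (~1.19 km), which are assumptions here, lands within a few percent of the measured change:

    # Rough consistency check of the quoted numbers, assuming a circular
    # pre-impact orbit. P0 and r are approximate literature values for the
    # Didymos system, not taken from this abstract.
    import math

    P0 = 11.92 * 3600.0   # pre-impact orbital period of Dimorphos, s (~11.92 hr)
    r  = 1190.0           # Didymos-Dimorphos separation, m (~1.19 km)
    dv = -2.63e-3         # along-track velocity change from the abstract, m/s

    v = 2.0 * math.pi * r / P0   # circular orbital speed, ~0.17 m/s

    # For a circular orbit, v^2 = mu/a and P^2 is proportional to a^3, so an
    # along-track kick gives da/a = 2*dv/v and hence dP/P = (3/2)*(da/a) = 3*dv/v.
    dP = 3.0 * (dv / v) * P0

    print(f"orbital speed : {v * 1e3:.0f} mm/s")     # ~174 mm/s
    print(f"period change : {dP / 60.0:.1f} min")    # ~ -32 min vs. -33.25 min measured

The small remainder is consistent with the abstract's point that the period kept drifting for several weeks as the system settled.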


Is there any evidence that this is a consequence of DEI rather than a deeper technical issue?



I don't understand how people could even argue that this is in any way acceptable. Fighting "bias" has become some bogeyman, and anything "non-white" is now beyond reproach. Shocking.


Fighting bias is a good thing, you'd have to be pretty...er...biased to believe otherwise. Bias is fundamentally a distortion or deviation from objective reality.

This, on the other hand, is just fucking stupid political showboating that's hurting their SV white knight cause. It's just differently flavored bias.


Seriously, I've basically written off Gemini for good after this HR-style nonsense. It's a shame that Google, who invented much of this tech, is so crippled by their own people's politics.


"I can't generate white British royalty because they exist, but I can make up black ones" is pretty close to an actually valid reason.


You get 4 images per prompt and are lucky to get one white person when you ask for one; no other model has that issue. Other models have no problem generating black people either, so it isn't that other models only generate white people.

So either it isn't a technical issue, or Google failed to solve a problem everyone else easily solved. The chances of this having nothing to do with DEI are basically 0.


Depending on how broadly you define it, something like 10-30% of the world's population is white. Africa is about 20% of the world population; Asia is 60% of it.

One in four sounds about right?
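A toy calculation to make that concrete, assuming (hypothetically) that each generated person is an independent draw with a 25% chance of being white:

    # Toy arithmetic for the "one in four" claim above: with p = 0.25 per
    # image (an assumption, not a measured value), how often does a batch
    # of 4 images contain at least one white person?
    p = 0.25
    n = 4
    p_at_least_one = 1.0 - (1.0 - p) ** n   # complement of "none in n draws"
    expected = n * p                        # mean count per batch

    print(f"P(at least 1 in {n} images): {p_at_least_one:.2f}")  # ~0.68
    print(f"expected count per batch  : {expected:.1f}")         # 1.0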


It does the same if you ask for pictures of past popes, 1945 German soldiers, etc.


It'll also add extra fingers to human hands. Presumably that's not because of DEI guardrails about polydactyly, right?

The current state of the art in AI gets things wrong regularly.


Sure, but this one is from Google adding a tag to make every image of people diverse, not AI randomness.


Am I missing something in the link demonstrating that, or is it conjecture?


If you look closely at the response text that accompanies many of these images, you'll find recurring wording like "Here's a diverse image of ... showcasing a variety of ethnicities and genders". The fact that it uses the same wording strongly implies that this is coming out of the prompt used for generation. My bet is that they have a simple classifier trained to detect whether a prompt requests a depiction of a human, and if so, it appends "diverse image showcasing a variety of ethnicities and genders" to the prompt the user provided. This would explain all the images seen so far, as well as the fact that other models don't have this kind of bias.
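A minimal sketch of the classify-then-append mechanism being hypothesized here; the function names, the naive keyword "classifier", and the appended phrase are all assumptions for illustration, not anything known about Google's pipeline:

    # Hypothetical sketch of a classify-then-append prompt rewriter.
    # Nothing here reflects Google's actual implementation.

    def mentions_people(prompt: str) -> bool:
        """Naive keyword stand-in for a trained 'depicts humans?' classifier."""
        people_terms = {"person", "people", "man", "woman", "family",
                        "soldier", "pope", "samurai", "king", "queen"}
        return any(term in prompt.lower().split() for term in people_terms)

    def rewrite_prompt(user_prompt: str) -> str:
        """Append a diversity instruction whenever the prompt depicts humans."""
        if mentions_people(user_prompt):
            # This appended wording would leak into the model's own response
            # text, producing the recurring "diverse image ... showcasing a
            # variety of ethnicities and genders" phrasing seen in the wild.
            return (user_prompt
                    + ", a diverse image showcasing a variety of ethnicities and genders")
        return user_prompt

    print(rewrite_prompt("a portrait of a medieval English king"))
    print(rewrite_prompt("a bowl of ramen"))  # untouched: no people detected

If something like this exists, the giveaway would be exactly what's observed: the same appended phrase echoing back in the model's description of every image of people.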


Have you bothered to look at all? Read the output of the model when asked why it behaves the way it does. Look at the plethora of images it generates that are not just historically inaccurate but absurdly so. It tells you "here's a diverse X" when you ask for X. Yet asking for pictures of Koreans generates only Asian people, while prompts for Scots or French people in historical periods generate mostly non-white people. You're being purposefully obtuse. Google has had racism complaints about previous models and talks often about AI safety and avoiding 'bias'. Are you really arguing that it's more likely the training data just happened, purely by chance, to have an inherent bias against generating white people in images?


It's been demonstrated on Twitter a few times; I can't find a link handy.


OpenAI has no problem showing accurate pictures. You know it's Google-induced bias, but feign ignorance.

If you ask for a picture of Nazi soldiers, it shouldn't be 60% Asian people like you say. You know you're wrong, but instead of admitting it, you're moving the goalposts to "hands".

This entire thread is you being insincere.


https://twitter.com/altryne/status/1760358916624719938

Here's some corporate-lawyer-speak straight from Google:

> We are aware that Gemini is offering inaccuracies...

> As part of our AI principles, we design our image generation capabilities to reflect our global user base, and we take representation and bias seriously.


That doesn't back up the assertion; it's easily read as "we make sure our training sets reflect the 85% of the world that doesn't live in Europe and North America". Again, 1/4 white people is statistically what you'd expect.


Fuck, this is going to sound fucked up... but even though a random person from the globe has about a 1/4 chance of being white, people generally tend to clump together. For example, you generally find a shitload of Asian people in Asia, white people in Europe, African people in Africa, and Indian people in India.

Probably the only places where you wouldn't expect this are heavily colonized ones like South Africa, Australia, and the Americas.


Sure, but I see three 200 responses and a 400 - not 1/4 white people as statistically expected.


This specific thing is a much more blatant class of error, and one that has been known to occur in several previous models because of DEI systems (e.g. in cases where prompts have been leaked), and has never been known to occur for any other reason. Yes, it's conceivable that Google's newer, better-than-ever-before AI system somehow has a fundamental technical problem that coincidentally just happens to cause the same kind of bad output as previous ham-fisted DEI systems, but come on, you don't really believe that. (Or if you do, how much do you want to bet? I would absolutely stake a significant proportion of my net worth - say, $20k - on this.)


> has never been known to occur for any other reason

Of course it has. Again, these things regularly give humans extra fingers and arms. They don't even know what humans fundamentally look like.

On the flip side, humans are shitty at recognizing bias. This comment thread stems from someone complaining the AI only rarely generated white people, but that's statistically accurate. It feels biased to someone in a majority-white nation with majority-white friends and coworkers, but it fundamentally isn't.

I don't doubt that there are some attempts to get LLMs to go outside the "white westerner" bubble in training sets and prompts. I suspect the extent of it is also deeply exaggerated by those who like to throw around woke-this and woke-that as derogatories.


A very impressive display of crimestop you've got going in this thread. How did you end up like this?


> Of course it has. Again, these things regularly give humans extra fingers and arms. They don't even know what humans fundamentally look like.

> This comment thread stems from someone complaining the AI only rarely generated white people, but that's statistically accurate. It feels biased to someone in a majority-white nation with majority-white friends and coworkers, but it fundamentally isn't.

So the AI is simultaneously too dumb to figure out what humans look like, but also so super smart that it uses precisely accurate racial proportions when generating people (not because it's been specifically adjusted to, but naturally)? Bullshit.

> I don't doubt that there are some attempts to get LLMs to go outside the "white westerner" bubble in training sets and prompts. I suspect the extent of it is also deeply exaggerated by those who like to throw around woke-this and woke-that as derogatories.

You're dodging the question. Do you actually believe the reason that the last example in the article looks very much not like a man is a deep technical issue, or a DEI initiative? If the former, how much are you willing to bet? If the latter, why are you throwing out these insincere arguments?


Congratulations, here is your gold medal in mental gymnastics. Enough now.

It literally refuses to generate images of white people when prompted directly, while for any other race it not only happily obliges but produces that specific race in all 4 results. It's discriminatory, and based on your inability to see that, you may be too.


The AI will literally scold you for asking it to make white characters and insists that you need to be inclusive; it is intentionally dishonest in how it forces the issue.


If it does, shouldn't there be 60% Asians?


When DALL-E 2 was released in 2022, OpenAI published an article noting that the inclusion of guardrails was a correction for bias: https://openai.com/blog/reducing-bias-and-improving-safety-i...

It was widely criticized back then: the fact that Google both brought it back and made it more prominent is weird. Notably, OpenAI's implementation is more scoped.


I don't think so. My boss wanted me to generate a birthday image for a co-worker of John Cena fly fishing. ChatGPT refused to do so, so I had to describe the type of person John Cena is instead of using his name. It kept giving me bearded people no matter what. I thought this would be the perfect time to try out Gemini for the first time. Well shit, it won't even give me a white guy. But all the black dudes are beardless.

update: google agrees there is an issue. https://news.ycombinator.com/item?id=39459270


It feels like the image generation it offers is perfect for a sort of California-Corporate style, e.g. you ask it for a "photo of people at the board room" or "people at the company cafeteria" and you get the corporate-friendly ratio of colors, ability levels, sizes, etc. See Google's various image assets: https://www.google.com/about/careers/applications/ . It's great for coastal and urban marketing brochures.

But then the same California-Corporate style makes no sense for historical images, so perhaps this is where Midjourney comes in.


Depending on what you ask for, it injects the word 'diverse' into the response description, so it's pretty obvious they're brute-forcing diversity into it. E.g. ask "Generate me an image of a family" and you will get back "Here are some images of a diverse family".


Yes, there's irrefutable evidence that models are wrangled into abiding by the commissars' vision rather than just doing their job and outputting the product of their training data.

https://cdn.openai.com/papers/DALL_E_3_System_Card.pdf


It is possible Google tried to avoid likenesses of well-known people by removing any image that contained a face from the training data and then adding back a controlled set of images of people.

If you give a contractor a project where you want 200k images of people who are not famous, they will send teams to regions where you may only have to pay each person a few dollars to be photographed. Likely SE Asia and Africa.


Why would this be flagged / shut down?

Also, what Gemini stuff are you referring to?


Carmack's tweet is about what's going around Twitter today regarding the implicit biases Gemini (Google's chatbot) has when drawing images. It will refuse to draw white people (and perhaps more strongly so, refuses to draw white men?) even in prompts where it's appropriate, like "Draw me a Pope", where Gemini drew an Indian woman and a Black man - here's the thread: https://x.com/imao_/status/1760093853430710557?s=46 Maybe in isolation this isn't so bad, but it will NEVER draw these sorts of diverse characters when you ask for a non-Anglo/Western background, e.g. "draw me a Korean woman".

Discussion on this has been flagged and shut down all day https://news.ycombinator.com/item?id=39449890


I don't even know how people get it to draw images; the version I have access to is literally just text.


Europeans don't get to draw images yet.


I'm in the US but maybe they didn't release it to me yet.


EDIT: Nevermind.


It’s quite non-deterministic and it’s been patched since the middle of the day, as per a Google director https://x.com/jackk/status/1760334258722250785?s=46

Fwiw, it seems to have gone deeper than outright historical replacement: https://x.com/iamyesyouareno/status/1760350903511449717?s=46


It's half-patched. It will still randomly insert words into your prompts. As a test I just asked for a samurai; it enhanced the prompt to "a diverse samurai" and half the outputs looked more like fantasy Native Americans.


This post reporting on the issue was https://news.ycombinator.com/item?id=39443459

Posts criticizing "DEI" measures (or even stating that they do exist) get flagged quite a lot


Wrong link? Nothing looks flagged


[flagged]


Can you explain what I said that was racist?


They mean the guardrail designers.


I do not.


> Why would this be flagged / shut down

A lot of people believe (based on a fair amount of evidence) that public AI tools like ChatGPT are forced by the guardrails to follow a particular (left-wing) script. There's no absolute proof of that, though, because they're kept a closely-guarded secret. These discussions get shut down when people start presenting evidence of baked-in bias.


The rationalization for injecting bias rests on two core ideas:

A. It is claimed that all perspectives are 'inherently biased': there is no objective truth, and the bias the actor injects is just as valid as any other.

B. It is claimed that some perspectives carry an inherent 'harmful bias', and it is the mission of the actor to protect the world from this harm. There is no open definition of what the harm is or how to measure it.

I don't see how we can build a stable democratic society on these ideas. They place too much power in too few hands. He who wields the levers of power gets to define the biases that underpin society's very perception of reality, including rewriting history to fit his agenda. There are no checks and balances.

Arguably there were never checks and balances, other than market competition. The trouble is that information technology and globalization have produced a hyper-scale society in which, by Pareto's law, power is concentrated in the hands of a very few, at the helm of a handful of global-scale behemoths.


The only conclusion I've been able to come to is that "placing too much power in too few hands" is actually the goal. You have a lot of power if you're the one who gets to decide what's biased and what's not.


I would also love to see more transparency around AI behavior guardrails, but I don't expect that will happen anytime soon. Transparency would make it much easier to circumvent guardrails.


Why is it an issue that you can circumvent the guardrails? I never understood that. The guardrails are there so that innocent people don't get bad responses with porn or racism; a user looking for porn or racism and getting it doesn't seem like a big deal.


The problem is bad actors who think porn or racism are intolerable in any form, who will publish mountains of articles condemning your chatbot for producing such things, even if they had to go out of their way to break the guardrails to make it do so.

They will create boycotts against you, they will lobby government to make your life harder, they will petition payment processors and cloud service providers to not work with you.

We've seen this behavior before; it's nothing new. Now, if you're the type to fight them, that might not be a problem. If you are a super risk-averse board of directors who doesn't want that sort of controversy, then you will take steps not to draw their attention in the first place.


But I can find porn and racism using Google search right now; how is that different? You have to disable their filters, but you can find it. Why is there no such thing for the Google generation bots? I don't see why it would be so much worse here.


I'm leaning towards "there is a difference between being the one who enables access to x and being the one who created x" (albeit not a substantive one for the end user), but that leaves open the question of why that doesn't apply to, e.g., social media platforms. Maybe people think of Google search as closer to an ISP than a platform?


It's not fundamentally different. It's just not making that big of a headline because Google search isn't "new and exciting". But to give you some examples:

https://www.bloomberg.com/news/articles/2021-10-19/google-qu...

https://ischool.uw.edu/news/2022/02/googles-ceo-image-search...


I think users are desensitized to what google search turns up. Generative AI is the latest and greatest thing and so people are curious and wary, hustlers are taking advantage of these people to drive monetized "engagement".


> how is that different?

Because those legal battles over search have already been fought, and the outcomes are established law across most countries.

When you throw in some new application, all that same stuff goes back to court and gets fought again. Section 230 is already legally contentious enough these days.


I cannot explain why Google gets a pass, possibly just because they are well entrenched and not an easy target.

But AI models are new, they are vulnerable to criticism, and they are absolutely ripe for a group of "antis" to form around.


Well, if you have no explanation for that, I don't see why we should use your model to understand anything about being risk-averse. They don't care about being sued; they want to change reality.


That's a pretty unreasonably high standard to hold.

It's an offhand comment in a discussion on the internet, not a research paper; expecting me to immediately have an answer to every possible angle I haven't yet considered is a bit much.

Take it or leave it, I don't really care. I was just hoping to have an interesting conversation.


Yeah, you can find incorrect information on Google too, but you'll find a lot more wailing and gnashing of teeth on HN about "hallucination". So the simple answer is that lots of people treat them differently.


Sounds like we need to relentlessly fight those psychopaths until they're utterly defeated.

Or we could just cave to their insane demands. I'm sure that will placate them, and they won't be back for more. It's never worked before... but it might work for us!


If you can get it on purpose, you can get it by accident. There's no perfect filter available, so companies choose to cut more and stay on the safe side. It's not even just the overt cases: their systems are used by businesses, and getting a bad response is a risk. Think of the recent incident with an airline chatbot giving wrong answers, or the cases where GPT gave racially biased answers in code.

As a user who makes any business decision or does user communication involving an LLM, you really don't want to have a bad day because the LLM learned some bias and decided to merge it into your answer.


> The guardrails are there so that innocent people don't get bad responses with porn or racism

That seems pretty naive. The "guardrails" are there to ensure that AI is comfortable for PMC people; making it uncomfortable for people who experience differences between races (i.e. working-class people) is a feature, not a bug.


Racism victims are being defined in 2024 as anyone but Western/white people; being erased seems OK. Can you bet that in 20 years the standard will not shift against mixed-race people like me? Then you will also call the people complaining racist and put guardrails against them... this is where it is going.


At some point someone will open a book and see that whites were slaves too. Reparations all around. The Baber's descendants will be bankrupt.


Like a lot of potentially controversial things it comes down to brand risk.


> The guardrails are there so that innocent people don't get bad responses

The guardrails are also there so bad actors can't use the most powerful tools to generate deepfakes, disinformation videos and racist manifestos.

That Pandora's box will be opened soon, when local models running on cell phones and workstations reach current datacenter-scale performance. In the meantime, they're holding back the tsunami of evil shit that will occur when AI goes uncontrolled.


No legal or financial strategist at OpenAI or Google is going to be worried about buying the world a couple of months or years with fewer deepfakes.

Their concern is liability and brand. With the opportunity to stake out territory in an extremely promising new market, they don't want their brand associated with anything that's awkward to defend right now.

There may be a few idealist stewards who share the (debatable) anxieties you do and are advocating as you say, but they'd still need to get sign-off from the more coldly strategic $$$$$ people.


Little bit of A, little bit of B.

I am almost certain the federal government is working with these companies to dampen the technology's full power for the public until we get more accustomed to its impact and are better able to search for credible sources of truth.


Are you saying that the government WANTS us to be able to search for more credible sources?


Transparency may also subject these companies to litigation from groups that feel they are misrepresented in whatever way in the model.


This makes me wonder, how much lawyering is involved in the development of these tools?


I often wonder if corporate lawyers just tell tech founders whatever they want to hear.

At a previous healthcare startup our founder asked us to build some really dodgy stuff with healthcare data. He assured us that it "cleared legal", but from everything I could tell it was in direct violation of the local healthcare info privacy acts.

I chose to find a new job at the time.


I've had 'AI attorneys' on Twitter prove unable to debate even the most basic of arguments. It is definitely a self-fulfilling death spiral, and no one wants to check reality.


Security through obscurity?


I am not the author.


What are the most serious secure messengers?


I personally like the approach Threema has. They provide their own push service called Threema Push[1], which is opt-in for the Google Play Store version. The push notifications for Threema do not contain any sensitive information either way.[2] They also have a libre version on F-Droid.

[1]https://threema.ch/en/faq/threema_push

[2]https://threema.ch/en/faq/privacy_push
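To illustrate what [2] is describing, here is a minimal sketch of the usual "empty push" pattern: the push provider only delivers a wake-up signal, and the client then pulls messages over its own encrypted channel. The function names and payload shape are hypothetical, not Threema's actual code:

    # Sketch of the "empty push" pattern: the payload that transits the
    # push provider's infrastructure carries no message content at all.

    def fetch_and_decrypt_messages() -> None:
        # Hypothetical stub: the client opens its own end-to-end encrypted
        # connection to the chat server and pulls whatever is queued.
        print("connecting to chat server over the app's own encrypted channel")
        print("downloading and decrypting queued messages locally")

    def handle_push(payload: dict) -> None:
        # No sender, no text, no metadata -- at most an opaque wake-up flag,
        # so the push provider learns nothing about the conversation.
        if payload.get("wakeup"):
            fetch_and_decrypt_messages()

    handle_push({"wakeup": True})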


Their web client is based on angular.js - I wouldn't dare trust it with private information.


I'm not a huge expert on web frameworks. Can you clarify your concerns?


Threema may very well be the Crypto AG of our times


Why would you say that? It's open source and has reproducible builds


The Play Store version could be anything.


I fail to see how the Play Store version could be "anything" considering you can reproduce the builds. Can you enlighten me how something like this would be possible?


You can reproduce the builds yourself, but you have no control over what happens to the APK once it is uploaded to Google and then distributed via the Play Store. I suppose you could checksum the APK before and after, and make sure your app is exactly the same before and after sending it to Google to distribute via the Play Store. Google doesn't have much motivation TODAY to mess with APKs directly, since they have Google Play Services, which is essentially a rootkit running on your phone all the time, easily accessible by the NSA through Google's infrastructure, probably via a secret FISA warrant with a gag order. Maybe they don't need a warrant. Think we would ever find out?


I think I am still missing what you are referring to. The guide on Threema's site prompts you to extract the APK from your phone via adb, which you then `diff -r` against the locally compiled version. [1] As far as I am aware, it does not matter whether Google or Threema modified the APK before uploading it to the Play Store, since you would notice either way.

[1]https://threema.ch/en/open-source/reproducible-builds
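A rough sketch of automating that check; the package name and local build path below are placeholders, it assumes a single (non-split) APK, and Threema's own guide [1] remains the authoritative procedure:

    # Pull the installed APK off the device and compare its contents
    # byte-for-byte against a local reproducible build.
    import hashlib
    import subprocess
    import zipfile

    PACKAGE = "ch.threema.app"                   # placeholder package name
    LOCAL_APK = "app/build/outputs/release.apk"  # placeholder local build path
    PULLED_APK = "pulled.apk"

    # Locate the installed APK on the device and pull it (assumes one APK).
    out = subprocess.run(["adb", "shell", "pm", "path", PACKAGE],
                         capture_output=True, text=True, check=True).stdout
    device_path = out.strip().removeprefix("package:")
    subprocess.run(["adb", "pull", device_path, PULLED_APK], check=True)

    def entry_digests(apk_path: str) -> dict:
        """SHA-256 of every file inside the archive, keyed by entry name."""
        with zipfile.ZipFile(apk_path) as zf:
            return {name: hashlib.sha256(zf.read(name)).hexdigest()
                    for name in zf.namelist()}

    local, pulled = entry_digests(LOCAL_APK), entry_digests(PULLED_APK)
    diffs = {name for name in local.keys() | pulled.keys()
             if not name.startswith("META-INF/")   # signing data may differ
             and local.get(name) != pulled.get(name)}
    print("builds match" if not diffs else f"mismatched entries: {sorted(diffs)}")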


That works as long as you always check after each update.


I agree with you. That's the case with every software project.


Signal



I do not want an upgrade, I really appreciate the minimalism of the Hacker News website.

