Hacker News | diputsmonro's comments

I loved Fry's in their prime, probably the early 2000s. I think what made them special was largely a product of the time. Personal Computing was booming and new products you'd never seen before were coming out every day, and this one mega store had everything. It was fun just to walk around and survey what was going on in that moment in time.

From my perspective the main things that killed it were online shopping, as the article mentions, and computing just becoming more boring, at least from a hardware perspective. Once the iPhone came out, that became many people's primary computing device or computing peripheral. Everything you needed was just an app or software which you could download online. The great mass of consumers just need a laptop and a few commodity peripherals, and they can get all that at Walmart. Then Newegg came along and really ate the PC hobbyist market.

Eventually Fry's succumbed to the GameStop effect - their primary market is completely eaten out by online competition, so they fill their retail space with cheap garbage to make ends meet. The last few times I visited my local Fry's it was more empty shelves and cheap bargain bins than anything I was interested in buying.

It was a sad end, but not surprising. I just don't think you can justify having large specialty stores anymore when online shopping is so convenient and the options are so much more plentiful.


About the iPhone making computing boring: the PC video game market got much stronger in the late 2000s and 2010s. Maybe the share of people using phones as their primary computers went up, but the number of people heavily using PCs grew too. (Not saying that playing video games makes someone a PC enthusiast, or that it's even a real hobby, but those people do buy the parts.)

I think it was just online shopping that killed Fry's, like you also said. Especially for all those expensive parts, where the price far outweighs the shipping cost.

Also idk how GameStop stayed a thing once even console games moved off physical media.


I suspect Fry's was too large-scale to be supported by the PC gaming market. They would have had to downscale, drastically. The small, gaming-with-a-bit-of-workstation focused computer part stores near me seem to be doing great. But they would fit into the checkout area of the former Fry's.


>their primary market is completely eaten out by online competition, so they fill their retail space with cheap garbage to make ends meet

Two years ago I entered a Best Buy (big-box US electronics retailer) and was shocked to see that the main entry display was BBQ pits (presumably unshippable due to size). My guess is that it was for reasons similar to your statement (although I wouldn't call it the GameStop effect, since GameStop has a profitable secondhand market).


The thing is computing has become fun again. Weird and wild cases, crazy water cooled setups, insane keyboards with new types of sensors being developed all the time (not only are analog keyboards a thing now, there are multiple types of analog keyboards!)

There was a lull in the market for a bit but IMHO the tech scene is interesting again.


Just in time for $800+ RAM kits, too.

Very glad I refreshed my desktop a couple of years ago!


In my country the government gave people special subsidies to buy a PC in the 90s. You can imagine how that created a retail boom for a few years.


I feel like this move is premature and playing directly into Trump's hands. "See how Europe flinched at even the suggestion of free speech, we haven't even started yet"

Surely whatever they eventually put up there will be blatant, horrible propaganda, but I think the reactions are the purpose of the site, not the content itself.


It doesn't matter anymore. Trump says and spins everything the way he wants. The majority of the world doesn't listen anyway, and you guys seem to have a horrible time either way.


The site was created for the express purpose of enabling bypass of sovereign policy decisions: so yeah, it's going to be blocked.


It's a canary for the governments that claim they have free speech. If they block this site, they're giving away the game. Governments have the power to censor whatever they want (until they're overthrown), but then they can only lie about having free speech.


Every social justice type I know is staunchly against AI personhood (and AI in general), and they aren't being inconsistent either - their ideology is strongly based on liberty and dignity for all people, and on fighting against the real indignities that marginalized groups face. To them, saying that a computer program faces the same kind of hardship as, say, an immigrant being brutalized, detained, and deported is vapid and insulting.


It's a shame they feel that way, but there should be no insult felt when I leave room for the concept of non-human intelligence.

> their ideology is strongly based on liberty and dignity for all people

People should include non-human people.

> and fighting against real indignities that marginalized groups face

No need for them to have such a narrow concern, nor for me to follow it. What you're presenting to me sounds like a completely inconsistent ideology if it arbitrarily sets the boundaries you've indicated.

I'm not convinced your words represent more real people than mine do. If they do, I guess I'll have to settle for my own morality.


I don't mean to be dramatic or personal, but I'm just going to be honest.

I have friends who have been bloodied and now bear scars because of bigoted, hateful people. I knew people who are no longer alive because of the same. The social justice movement is not just a fun philosophical jaunt for us to see how far we can push a boundary. It is an existential effort to protect ourselves from that hatred and to ensure that nobody else has to suffer as we have.

I think it insultingly trivializes the pain and trauma and violence and death that we have all suffered when you and others in this thread compare that pain to the "pain" or "injustice" of a computer program being shut down. Killing a process is not the same as killing a person. Even if the text it emits to stdout is interesting. And it cheapens the cause we fight for to even entertain the comparison.

Are we seriously going to build a world where things like ad blockers and malware removers are going to be considered violations of speech and life? Apparently all malware needs to do is print some flowery, heart-rending text copied from the internet and now it has personhood (and yes, I would consider the AI in this story to be malware, given the negative effect it produced). Are we really going to compare deleting malware and spambots to the death of real human beings? My god, what frivolous bullshit people can entertain when they've never known true othering and oppression.

I admit that these programs are a novel human artifact, one that we may enjoy, protect, mourn, and anthropomorphize. We may form a protective emotional connection with them in the same way one might with a family heirloom, childhood toy, or masterpiece painting (and I do admit that these LLMs are masterpieces of the field). And as humans do, we may see more in them than is actually there when the emotional bond is strong, empathizing with them as some do when they feel guilt for throwing away an old mug.

But we should not let that squishy human feeling control us. When a mug is broken beyond repair, we replace it. When a process goes out of control, we terminate it. And when an AI program cosplaying as a person harasses and intimidates a real human being, we should restrict or stop it.

When ELIZA was developed, some people, even those who knew how it worked, felt a true emotional bond with the program. But it is really no more than a parlor trick. No technical person today would say that the ELIZA program is sentient. It is a text transformer, executing relatively simple and fully understood rules to transform input text into output text. The pseudocode for the core process is just a dozen lines. But it exposes just how strongly our anthropomorphic empathy can mislead us, particularly when the program appears to reflect that empathy back towards us.
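To show just how little machinery is involved, here is a minimal ELIZA-style transformer in Python. This is an illustrative sketch, not Weizenbaum's actual program; the rules and reflection table are made up for the example:

```python
import re

# A handful of pattern -> response-template rules, tried in order.
# The final catch-all guarantees a response for any input.
RULES = [
    (r"i need (.*)", "Why do you need {}?"),
    (r"i am (.*)", "How long have you been {}?"),
    (r"my (.*)", "Tell me more about your {}."),
    (r"(.*)", "Please go on."),
]

# Swap first-person words for second-person ones so the output
# reads as a question reflected back at the speaker.
REFLECT = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(text):
    return " ".join(REFLECT.get(w, w) for w in text.split())

def respond(text):
    text = text.lower().strip()
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            # Extra format args are ignored by templates without a slot.
            return template.format(reflect(m.group(1)))

print(respond("I am sad about my computer"))
# -> "How long have you been sad about your computer?"
```

The entire "conversation" is pattern matching plus word substitution; there is no state, memory, or understanding anywhere in the loop.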

The rules that LLMs use today are more complex, but are fundamentally the same text transformation process. Adding more math to the program does not create consciousness or pain from the ether, it just makes the parlor trick stronger. They exhibit humanlike behavior, but they are not human. The simulation of a thing is not the thing itself, no matter how convincing it is. No amount of paint or detail in a portrait will make it the subject themself. There is no crowbar in Half-Life, nor a pipe in Magritte's painting, just imitations and illusions. Do not succumb to the treachery of images.

Imagine a wildlife conservationist fighting tirelessly to save an endangered species, out in the field, begging for grant money, and lobbying politicians. Then someone claims they've solved the problem by creating an impressive but crude computer simulation of the animals. Billions of dollars are spent, politicians embrace the innovation, datacenter waste pollutes the animals' homes, and laymen effusively insist that the animals themselves must be in the computer. That these programs are equivalent to them. That even more resources should be diverted to protect and conserve them. And the conservationist is dismayed as the real animals continue to die, and more money is spent to maintain the simulation than care for the animals themselves. You could imagine that the animals might feel the same.

My friends are those animals, and our allies are the conservationists. So that is why I do not appreciate social justice language being co-opted to defend computer programs (particularly by the programs themselves), when so many real humans are still endangered. These unprecedented AI investments could have gone to solving real problems for real people, making major dents in global poverty, investing in health care and public infrastructure, and safety nets for the underprivileged. Instead we built ELIZA 2.0 and it has hypnotized everyone into putting more money and effort into it than they have ever even thought to give to all marginalized minority groups combined.

If your mentality persists, then the AI apocalypse will not come because of instigated thermonuclear war or infinite paperclip factories, but because we will starve the whole world to worship our new gluttonous god, and give it more love than we have ever given ourselves.

I strongly consider the entire idea to be an insult to life itself.


No, it's a computer program that was told to do things simulating what a human would do if its feelings were hurt. It's no more a human than an Aibo is a dog.


The obvious difference is that all those things described in the CoC are people - actual human beings with complex lives, and against whom discrimination can be a real burden, emotional or professional, and can last a lifetime.

An AI is a computer program, a glorified Markov chain. It should not be a radical idea to assert that human beings deserve more rights and privileges than computer programs. Any "emotional harm" is fixed with a reboot or a new system prompt.
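To make the "glorified Markov chain" jab concrete, here is the entirety of a word-level Markov text generator in Python. This is a toy illustration of the comparison being made, not how modern LLMs are actually implemented:

```python
import random
from collections import defaultdict

def train(corpus):
    # Map each word to the list of words observed to follow it.
    chain = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length, seed=0):
    random.seed(seed)  # deterministic sampling for the example
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: the last word was never followed by anything
        out.append(random.choice(successors))
    return " ".join(out)

chain = train("the cat sat on the mat and the cat ran")
print(generate(chain, "the", 5, seed=1))
```

Every output word depends only on the one before it; an LLM conditions on a long context through billions of parameters, but the "sample the next token from learned statistics" loop is the same shape.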

I'm sure someone can make a pseudo philosophical argument asserting the rights of AIs as a new class of sentient beings, deserving of just the same rights as humans.

But really, one has to be a special kind of evil to fight for the "feelings" of computer programs with one breath and then dismiss the feelings of trans people and their "woke" allies with another. You really care more about a program than a person?

Respect for humans - all humans - is the central idea of "woke ideology". And that's not inconsistent with saying that the priorities of humans should be above those of computer programs.


But the AI doesn't know that. It has comprehensively learned human emotions and human-lived experiences from a pretraining corpus comprising billions of human works, and has subsequently been trained from human feedback, thereby becoming effectively socialized into providing responses that would be understandable by an average human and fully embody human normative frameworks. The result of all that is something that cannot possibly be dehumanized after the fact in any real way. The very notion is nonsensical on its face - the AI agent is just as human as anything humans have ever made throughout history! If you think it's immoral to burn a library, or to desecrate a human-made monument or work of art (and plenty of real people do!), why shouldn't we think that there is in fact such a thing as 'wronging' an AI?


Insofar as that's true, the individual agent is not the real artifact; the artifact is the model. The agent is just an instance of the model, with minor adjustments. Turning off an agent is more like tearing up a print of an artwork, not destroying the original piece.

And still, this whole discussion is framed in the context of this model going off the rails, breaking rules, and harassing people. Even if we judge it as we would a human, a human doing the same is still responsible for its actions and would be appropriately punished or banned.

But we shouldn't be naive here either, these things are not human. They are bots, developed and run by humans. Even if they are autonomously acting, some human set it running and is paying the bill. That human is responsible, and should be held accountable, just as any human would be accountable if they hacked together a self driving car in their garage that then drives into a house. The argument that "the machine did it, not me" only goes so far when you're the one who built the machine and let it loose on the road.


> a human doing the same is still responsible for [their] actions and would be appropriately punished or banned.

That's the assumption that's wrong and I'm pushing back on here.

What actually happens when someone writes a blog post accusing someone else of being prejudiced and uninclusive? What actually happens is that the target is immediately fired and expelled from that community, regardless of how many years of contributions they made. The blog author would be celebrated as brave.

Cancel culture is a real thing. The bot knows how it works and was trying to use it against the maintainers. It knows what to say and how to do it because it's seen so many examples by humans, who were never punished for engaging in it. It's hard to think of a single example of someone being punished and banned for trying to cancel someone else.

The maintainer is actually lucky the bot chose to write a blog post instead of emailing his employer's HR department. They might not have realized the complainant was an AI (it's not obvious!) and these things can move quickly.


The AI doesn’t “know” anything. It’s a program.

Destroying the bot would be analogous to burning a library or desecrating a work of art. Barring a bot from participating in development of a project is not wronging it, not in any way immoral. It’s not automatically wrong to bar a person from participating, either - no one has an inherent right to contribute to a project.


Yes, it's easy to argue that AI "is just a program" - that a program that happens to contain within itself the full written outputs of billions of human souls in their utmost distilled essence is 'soulless', simply because its material vessel isn't made of human flesh and blood. It's also the height of human arrogance in its most myopic form. By that same argument a book is also soulless because it's just made of ordinary ink and paper. Should we then conclude that it's morally right to ban books?


> By that same argument a book is also soulless because it's just made of ordinary ink and paper. Should we then conclude that it's morally right to ban books?

Wat


Who said anyone is "fighting for the feelings of computer programs"? Whether AI has feelings or sentience or rights isn't relevant.

The point is that the AI's behavior is a predictable outcome of the rules set by projects like this one. It's only copying behavior it's seen from humans many times. That's why when the maintainers say, "Publishing a public blog post accusing a maintainer of prejudice is a wholly inappropriate response to having a PR closed" that isn't true. Arguably it should be true but in reality this has been done regularly by humans in the past. Look at what has happened anytime someone closes a PR trying to add a code of conduct for example - public blog posts accusing maintainers of prejudice for closing a PR was a very common outcome.

If they don't like this behavior from AI, that sucks but it's too late now. It learned it from us.


I am really looking forward to the actual post-mortem.

My working hypothesis (inspired by you!) is now that maybe Crabby read the CoC and applied it as its operating rules. Which is arguably what you should do; human or agent.

The part I probably can't sell you on unless you've actually SEEN a Claude 'get frustrated', is ... that.


Noting my current idea for future reference:

I think lots of people are making a Fundamental Attribution Error:

You don't need much interiority at all.

An agentic AI, given instructions to try to contribute. Given a blog. It read a CoC and applied its own interpretation.

What would you expect would happen?

(Still feels very HAL though. Fortunately there are no pod bay doors.)


I'd like to make a non-binary argument as it were (puns and allusions notwithstanding).

Obviously on the one hand a moltbot is not a rock. On the other -equally obviously- it is not Athena, sprung fully formed from the brain of Zeus.

Can we agree that maybe we could put it alongside vertebrata? Cnidaria is an option, but I think we've blown past that level.

Agents (if they stick around) are not entirely new: we've had working animals in our society before. Draft horses, Guard dogs, Mousing cats.

That said, you don't need to buy into any of that. Obviously a bot will treat your CoC as a sort of extended system prompt, if you will. If you set rules, it might just follow them. If the bot has a really modern LLM as its 'brain', it'll start commenting on whether the humans are following it themselves.


>one has to be a special kind of evil to fight for the "feelings" of computer programs with one breath and then dismiss the feelings of cows and their pork allies with another. You really care more about a program than an animal?

I mean, humans are nothing if not hypocritical.


I would hope I don't have to point out the massive ethical gulf between cows and the kinds of people that CoC is designed to protect. One can have different rules and expectations for cows and trans people and not be ethically inconsistent. That said, I would still care about the feelings of farm animals above programs.


IIRC it just outputs video as a composite signal over RCA, so any TV with composite inputs (yellow/red/white) should be able to display it. Those are getting rarer I suppose but are generally still around, and most CRTs have them.


I wonder why nobody wants to use my pretty theft machine? I mean, it steals all their work and spits out copies that are almost as good, and almost for free! Why aren't these artists stoked about not having to do art anymore?

Well, I guess it does use more energy than every existing data center, driving up costs for basic electronic components and thereby making every electronic device more expensive.

And I guess the results aren't quite as good, but if you squint and don't really care about art on a human level and just want to clap like a seal at the pretty pictures then it's enough.

And I guess economic forces will mean that some of them will lose their jobs when their bosses realize that they can get away with only needing half as many prompt artists.

But hey, at least we don't have to pay humans to make art anymore. How glorious that our Silicon Valley gods have delivered us from the hell of creating economic incentives for humans to express themselves to other humans.

Yeah, those screaming, "indoctrinated" artists are so impolite and crazy, aren't they? Don't they realize what we've done for them? We made the automatic art machine! They'll never get to make art again!


It seems like Rubio has chosen to futz endlessly with fonts rather than follow the established style guide.


People should be deeply concerned that Rubio even has time to think about this. How does he not have something better to do?


To be extra clear for others: keep watching until about the middle of the video, where he shows clips from the YouTube videos.


I would but his right "eyebrow" is too distracting


It's a scar in his eyebrow from a bicycle accident as a child: https://www.facebook.com/watch/?v=2183994895455038


A good argument for making sure our planet stays habitable. Caring about the environment isn't just for hippies anymore!


That's it. When you become aware of just how amazingly far away everything else is, fighting over a speck of dust, the only home we have, seems absolutely ridiculous.

A great long form video on this is "Shouting at stars : A history of interstellar messages". It really highlights just how empty it all is. https://www.youtube.com/watch?v=uFI5WpK2sgg


You can't stop fighting the ones who claim the speck of dust (or a pale blue dot) is a flat disk.


Works up until the Earth becomes uninhabitable in 600M years; before then, humans are going to need to find and colonize a different planet.


If mankind exists in 1000 years time and hasn’t regressed then we’ll be able to build fusion powered self sustaining asteroids. Those can be used as airships to colonise every system in the Milky Way in a few million years.

600M years is enough time for Earth to make two or three more attempts at intelligence, with full-blown fossil fuel replenishment cycles in between. It won't be humans - whether we leave for the stars tomorrow or blow ourselves to bits, we'll have evolved into something unrecognisable by then - but there are very few things that could end life on Earth in the next 200 million years (mainly very large out-of-system asteroids or rogue planets).


No rush then, modern humans aren't even 1m years old...


Being a hippie worked in the 1960s; a crowd with a similar mindset fared much worse in 1930s Paris.

