Hacker News
GPT-4: A Copilot for the Mind (every.to/chain-of-thought)
72 points by dshipper on March 17, 2023 | hide | past | favorite | 58 comments


Am I the only one who thinks this is a really bad idea?

By offloading your cognitive tasks to an AI, even though you now look smarter, you're becoming dumber in the long run, because you're never really challenging and exercising your intellect. This book[1] goes into a lot of detail about how rote memorization and recall are essential to critical thinking (you have a limited working memory, and the way you're able to critically think about complex subjects is by chunking, which only works with concepts you've previously memorized). If you just stop exercising your recall and critical thinking, they'll get weaker and weaker.

I feel that already with ChatGPT. Before, whenever I needed to learn some programming concept, I'd have to search vast amounts of resources to learn it. By being exposed to many different points of view, I always felt that what I had learned stuck with me for much longer. If I just ask ChatGPT, I get the answer faster, but I also forget faster. It's not learning.

Learning, with capital L, is not supposed to be easy. It's supposed to be hard. Education is about making what is hard a worthwhile pursuit. The people who get lured into thinking they'll be smarter if they plug themselves to the matrix will be shooting themselves in the foot.

For me, relying on OpenAI to function cognitively is like relying on Google to turn my lightbulb on. It looks cool, but it doesn't make any sense.

[1] https://www.goodreads.com/book/show/4959061-why-don-t-studen...


I haven't formed a definite opinion on this, and I think I largely agree, but what about a counterargument like this one:

Virtually no one does long division manually anymore, or really any basic arithmetic greater than two digits, because we invented pocket calculators and smartphones that do this for us. And are we any worse mathematicians or engineers because of this? If anything, this has freed us to perform more higher-order reasoning.

And so with these kinds of "AI" assistants, is it possible that the types of reasoning that we offload onto them will free us to reason in even higher orders?


Well we're pretty confident that calculators work and do so in a fairly deterministic manner.

ChatGPT tends to be extremely incoherent and often provides answers which directly disagree with what it previously said (at least on some topics). My fear is that while you're right in theory, we'll have to spend huge amounts of brain power and time to discern whether what it's saying is total BS or not. And I really don't know how I could even do that if I wasn't particularly knowledgeable on the topic.

If it could provide citations or some context on why it decided to answer the way it did, it might not be so bad.

Fairly straightforward areas like software engineering are not that bad, I guess... but its answers to even mildly complex questions on history, anthropology, or related fields, where there is often no clear and straightforward answer, just seem absolutely awful. Just tweaking the input a bit, without actually changing the core of the question, can result in something that completely contradicts what it just said before.


Socrates didn't want to write anything down because he thought it would make you stupid too.

Though maybe he might be right as well...


If you think of cognitive tasks as a hierarchy it makes more sense. It's a big task to think through and plan an essay, but it's a little cognitive task to check your grammar and citations. If you can get an LLM to do the little things, you can practice the higher level stuff.

I guess the question is whether you are actually learning the higher level stuff by getting help with the lower level stuff. I think on some level you would, like how having a calculator when you're doing higher level math helps you think about the problem rather than the details.


I recently asked an AI for help writing a python program with a mutex to prevent outdated information from being accessed. It presented me with a solution that I didn't fully understand, so I asked it to explain that part of the code. And then I asked it to explain part of the explanation. It just kept answering, never getting tired or irritated with me asking for clarification, and generating information that couldn't exist in a book or a blog post. It reminded me of the primer in "The Diamond Age". It catered the answers to my needs and deficiencies instead of making me adapt to it.
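For context, here is a minimal sketch of the kind of pattern that exchange was presumably about: using a `threading.Lock` so a reader can never observe a half-finished update. The class and names are illustrative, not the actual code from the conversation.

```python
import threading

class FreshValue:
    """Holds a value; a mutex ensures readers never see a half-finished update."""

    def __init__(self, value):
        self._lock = threading.Lock()
        self._value = value
        self._version = 0

    def update(self, value):
        with self._lock:  # writers take the lock while mutating...
            self._value = value
            self._version += 1

    def read(self):
        with self._lock:  # ...and readers take it too, so value and version always match
            return self._value, self._version

store = FreshValue("stale")
store.update("fresh")
print(store.read())  # -> ('fresh', 1)
```

Without the lock, a reader running concurrently with `update` could see the new value paired with the old version number (or vice versa); the `with self._lock:` blocks make each read and write atomic with respect to each other.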


> information that couldn't exist in a book or a blog post

I think that exists somewhere. Maybe not something specific to my situation, but there is enough information out there that can lead you easily to the solution.

That’s why I prefer research instead of this AI-assisted workflow. Instead of giving me the specific knowledge I need, it lets me know what I didn't know and gives me much more information to reason about.


I think your point about research is valid, but once that is done and you have a rough understanding of a solution, you just want things to work. You don't want a bunch of ways to do one thing; you want a single opinionated solution. Part of the information encoded in these models is from trainers deciding which answer is best, and that is information itself which isn't necessarily online.


Interesting, this isn't the only life-changing invention. Riding horseback was one of them. Cars, then GPS. The internet instead of books. Social sites instead of in-person skills. And so on.

The main danger of AI 'experts' is, in my opinion, the information bubble they create. They not only suggest; eventually they will drive your thoughts, in the direction their builders want. That will be crowd control even worse than Facebook was at its peak. Currently we know that ChatGPT has woke bias embedded. But that's not the end, right?


It has already happened with people's sense of direction. Lots of people have no idea where they are when they are driving due to being completely dependent on GPS. I see so much lack of situational and spatial awareness on the road. Since people don't know where they are and where they are supposed to go, there's a lot of last moment lane switches to make an exit etc. Lots of very bad decision making because of being continuously lost.


You can still seek challenge, it just has to be more ambitious and further out at the edges now.

A challenge closer to leading a team of researchers rather than plugging along alone at a problem.


I agree with you. As a more extreme example, I wish I had never offloaded my address book to my phone; I can hardly remember a single phone number in my head now!


Would you have remembered every phone number if it were in a Rolodex?

It’s a skill you have to exercise no matter where you keep contact information. Same deal with outsourcing directions to GPS-using maps apps: you can still maintain a basic sense of direction and how to navigate a city without an app as long as you make it a point to do so.


Counterpoint/related: LLMs penalize those who spent most of their education memorizing things/doing rote learning and encourage actual thinking.


Honestly when I read comments about how LLMs will make us all dumb, all I can think of is Steve Jobs telling people they're holding it wrong.

There hasn't been a subject that's gotten me thinking as deeply as LLMs in a minute, and I don't even work at the implementation level.

Just coming up with novel ways to use them is a delightful brain exercise that requires ways of thinking you don't normally exercise just writing code. And since I started interacting with ChatGPT, for example, primarily through APIs rather than the web interface, I've started to scratch a mental itch that "normal" programming had long since stopped scratching.


> encourage actual thinking

You mean when there is an X chance that the answer it provided is BS, but in a subtle, not immediately obvious way, and you have to spend some amount of time 'actually thinking' before you figure it out?


Not a copilot, an auto pilot. Most people will not be able to resist the temptation.


It's much like relying on Google Maps for directions. When you want to practice your navigation skills, you're free to not use it.

How much practice you need depends on what you're interested in learning to do.


"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise." - Socrates, from Plato's dialogue Phaedrus

Yes, this is going to make us smarter. Just like the personal computer was the 20th century "bicycle for the mind", large language models will be the 21st century "copilot for the mind". The only scary part is that it feels like we're handing over some of the reins.


Plato was largely right, though, when looking at the individual alone. It is still a problem, thinking you understand something because you could look it up. It permits hypothetical access to facts but not the synthesis of knowledge from those facts. Anyone reading this could look up the Latin terms "manus" and "facere", but unless they do, they probably won't spontaneously understand the etymology of "manufacture" as I mention it.

What he didn't factor in was the effect on information logistics. This turned out to outweigh the drawbacks of more false wisdom.


Plato was an elitist.

He didn't believe that lower-class Athenians should have access to education. He very much believed in an oligarchy where the state would be ruled by a select few.


Such a big statement about someone we know so little about.

But regardless of that, judging from the available works alone that is simply not true. Cases in point: Meno, the slave boy moment; the whole education project from the Republic.


Oh sure, but he still made an important point.


The full story for those curious, from Plato's dialogue Phaedrus 14, 274c-275b:

Socrates: I heard, then, that at Naucratis, in Egypt, was one of the ancient gods of that country, the one whose sacred bird is called the ibis, and the name of the god himself was Theuth. He it was who invented numbers and arithmetic and geometry and astronomy, also draughts and dice, and, most important of all, letters.

Now the king of all Egypt at that time was the god Thamus, who lived in the great city of the upper region, which the Greeks call the Egyptian Thebes, and they call the god himself Ammon. To him came Theuth to show his inventions, saying that they ought to be imparted to the other Egyptians. But Thamus asked what use there was in each, and as Theuth enumerated their uses, expressed praise or blame, according as he approved or disapproved.

"The story goes that Thamus said many things to Theuth in praise or blame of the various arts, which it would take too long to repeat; but when they came to the letters, "This invention, O king," said Theuth, "will make the Egyptians wiser and will improve their memories; for it is an elixir of memory and wisdom that I have discovered." But Thamus replied, "Most ingenious Theuth, one man has the ability to beget arts, but the ability to judge of their usefulness or harmfulness to their users belongs to another; and now you, who are the father of letters, have been led by your affection to ascribe to them a power the opposite of that which they really possess.

"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."


It's enough for me to not use it, honestly.

I can tolerate no longer remembering everyone's phone numbers since I started using a cell phone, but I'm not eager to give up thinking. I don't want to get into the habit of reaching for ChatGPT when I want to write someone an email or post a comment on HN.


All week I've seen basically two dominant takes about how AI is going to impact creative work in the future (three if you count "generative AI is a fad with no value," which I do not): (1) It's going to take people's jobs (rip); or (2) It's going to help people do their jobs better.

I'm a writer and have been thinking a lot about use case (2)—since I'm not emotionally ready for my job to be taken by AI, I'm trying to figure out how to use AI to do it better. So far, I've been exclusively considering the use of AI in the sense of the "photographers using photoshop" analogy. That is, using GPT-4 to quickly draft or edit things based on prompts, while I, the human, am still "doing the work" creatively speaking. Obviously this is feasible today and going to become normalized soon, to some extent.

However, this article makes an interesting case for (2) in a way I hadn't considered before: GPT-4 opens up entirely new ways for humans to work, period. ChatGPT already improves on Google for high-level research—ask it for a summary of some well-established topic or field and chances are you'll get a coherent set of pretty accurate facts cobbled together from its training data (much of the Web, Wikipedia, scientific papers). But when tools become available that let ChatGPT provide this kind of summary from my own works, notes, and prior research? That is going to totally change the game.

In the last couple years I've already seen easily a 2-3x boost in my writing productivity thanks to Obsidian, a research tool that—at least the way I use it—is entirely "manual" (i.e. not automated or "smart"). If I could get the benefits of Obsidian for making connections between information and ideas, powered by an intelligent assistant that "knows" how I think and what I think about... it's cliché to say the possibilities are endless, but that's really what I'm looking at here.

Anyway, I want to inject some optimism into this hot topic. It may end up that in 10 years we're all unemployed. But I still need to do my job today. I see a lot of reasons to be excited, rather than defeatist, for the applications and value of GPT-4 in this regard.


I can't remember the last time I came up with the majority of my scene ideas and beats and character arcs and world stuff independently of actually writing the first draft. I dunno, maybe getting GPT to spit out a draft and then editing it would have the same effect. But I kinda think writing prose is the thing that distinguishes a writer from a person with an idea. I'm not sure I want to read a story someone coaxed out of an LLM. I want to read the words they wrote at 3am when they finally, finally got that chapter or dialogue or whatever to work.

Love the notes/research/et cetera idea though. That's totally different and would be a real game changer.


Everyone works in their own unique way! I think a big part of the promise of this sort of AI is that it can tailor itself to what you need :)


It's going to do both, and it's the same thing.

If AI makes people 10% more efficient, companies will hire 10% fewer people to get the same job done. It has both helped people do their jobs and stolen jobs.


Even more so -- if this tech makes people 10% more efficient, then maybe there's a strata of business ideas that wouldn't have been profitable without the tech, but can be with it, and they'll be able to grow and hire to absorb the displaced (or perhaps more).


Honest question: is there an example in history of when a new technology led to the loss of jobs and didn’t just lead to a shift and further growth? I’ve tried to think of one, but I’m stumped.


I just searched Obsidian plugins and there are 3 that might work for you:

- https://github.com/louis030195/obsidian-ava

- https://github.com/brianpetro/obsidian-smart-connections

- https://github.com/bramses/chatgpt-md

I haven't looked into any of them in depth, but searching through a large corpus of text and using that to interact with GPT-3 or GPT-4 already has some pretty good solutions.
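The basic shape these plugins share is retrieval: embed each note, rank notes by similarity to the query, and feed the best match into the prompt. Here is a hedged, self-contained toy of that idea; real plugins use learned embeddings from an embeddings API rather than the bag-of-words stand-in below, and the note names and corpus are invented for illustration.

```python
import math
from collections import Counter

# Toy note retrieval: bag-of-words "embeddings" ranked by cosine similarity.
# Real plugins swap embed() for a learned embedding model; the pipeline shape
# (embed corpus -> embed query -> rank -> build prompt) is the same.

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

notes = {  # hypothetical vault contents
    "mutex.md": "use a lock around shared state to avoid stale reads",
    "gardening.md": "tomatoes need full sun and regular watering",
}

def best_note(query):
    q = embed(query)
    return max(notes, key=lambda name: cosine(q, embed(notes[name])))

match = best_note("how do I protect shared state with a lock")
prompt = f"Answer using this note:\n{notes[match]}\n\nQuestion: ..."
print(match)  # -> mutex.md
```

The retrieved note text is then pasted into the model's context window, which is how these tools get GPT to "know" your vault without retraining anything.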


> If I could get the benefits of Obsidian for making connections between information and ideas, powered by an intelligent assistant that "knows" how I think and what I think about...

Damn! I'm working on how to get an algo to set up links between my Obsidian files, but you've taken it to a whole new level.

How would you even encode "how I think"? <thinking.png>


I've just watched the Microsoft 365 copilot presentation. They never mentioned hallucinations, and talked about errors maybe once?

This thing will definitely make stupid errors and will make things up when summarizing, doing presentations, etc. - unless it achieves near-human-level intelligence, of course, but in that case everyone'll lose their job.

I'm really curious what'll eventually happen. How are we going to live with it: strange presentation points, wrong numbers in reports, enormous amounts of auto-generated business-talk text? Will the knowledge be corrupted more and more?


You're looking at this in a black and white way, overlooking all of the uses where it doesn't have to be perfect. If it saves humans even a few minutes, it's a good tool.


Absolutely. The user still needs good taste right now, too: if it writes you a mission statement or something else, you still have to know whether it's good or not.


That itself is an order of magnitude speed up. Even if AI never improves beyond this point (and it almost certainly will improve), it's already a game changer.


Sure, but for less than $20 a month.


> I'm really curious what'll eventually happen?

Maybe it will produce fewer errors than humans and also improve over time.


Is there any indication based on current data that it might?

I mean, currently it's just regurgitating what humans actually wrote down at some point. It's capable of synthesizing this information to some degree, but it is almost completely incapable of criticizing it on its own.


Productivity junkies need to remind themselves that their goals aren't necessarily useful or practical goals.

The mind is working exactly how it should: there is no practical need to remember everything you've read or seen (it has been like this ever since Google Search existed and since 2008 even more so with the ability to google the entire world from your pocket). The mind does a great job at filtering all the noise already and ensures you only remember the most important stuff.


The mind is also highly plastic. It's like clay: the more it's worked, the more dynamic it can be. If it is never worked, then it will be very hard to bring the heat back in to make it dynamic again.

People who rely on ChatGPT or any LLM will be outsourcing their mind in a detrimental way. A calculator allows you to build up to bigger ideas. An LLM, in the use case being described, is a replacement for critical thought, not an enhancer of it so the user can think bigger. Laborious tasks can frequently be useful if they're the right ones.

To fully use an LLM as a copilot (sending texts, writing letters, important emails, generating business ideas) is to outsource your own mind. The territory still left for machines to conquer is our own consciousness, which is a product of simultaneous multi-sensory input. We can feel a table, see a plane, smell the air, hear music, experience time. These build into understanding the consequences of our actions in a more complex way than an LLM can understand. An LLM is still simply a 1-1.5D being.


"Over the next year or two, I expect GPT-4 and its successors to become a copilot for the mind: a digital research assistant that will bring to bear the sum total of everything you’ve read, everything you’ve thought, and everything you’ve forgotten every time you touch a keyboard. "

I'm not sure everybody is comfortable with handing all that over to Microsoft or other tech giants. Would you be comfortable letting Facebook have video cameras in all the rooms of your house? If not, then why let Microsoft get all your emails, browsing history, and everything else you have on your computer?

Letting some AI, that is controlled by some megacorp, know and control everything about your life, isn't necessarily the best idea.


> why let Microsoft get all your emails, browsing history and everything else you have on your computer

I think you'll also have to give Microsoft access to what you have in your head for this to work...


> In Jorge Luis Borges’ short story “The Library of Babel,” he creates an infinite library that contains all possible books … a book that predicts your future accurately, a book that unifies quantum mechanics with general relativity … But again, this library contains every possible book. So it also contains a lot of gibberish. Most of the books, in fact, are complete gibberish.

Google is converging on this "Library of Babel" now, and if it isn't there already LLMs will ensure that is its final state.

This idea of LLMs + curated authoritative sources is an interesting and potentially really powerful antidote.


So, note-taking? I already use Google Keep as an extension of my long-term memory, for storing everything from book recommendations to birthdays. For short-term "volatile" memory, a pen and paper or a .txt on my computer works great.

In addition, I have hundreds of pages of typed notes in per-topic gdocs that I can refer to quite handily using Ctrl+F.

I actually wouldn't want GPT to summarize or reduce these notes for me because I would always doubt the accuracy, and reading the exact words that I've typed invokes much better contextual recall than reading a rephrasing. Also, usually when I can't find something immediately using Ctrl+F, it means it's time to reorganize and that grows the body of knowledge.

Not to mention, much of the good that comes from note-taking comes from the act of writing notes. Sending off large chunks of other people's text for an AI to summarize defeats this benefit.

GPT is impressive in some ways but such "pockets" of the GPT ecosystem are very reminiscent of web3/blockchain - forcing inferior solutions on already well-solved problems.


For me this would work only in a "work" setting. Like, if I'm being paid to perform some tasks, then I want to be able to use tech like "a copilot for my mind". But outside of that, I would rather not. If you come to my party using such a copilot and start quoting nice books always at the right time, and telling really good jokes and all, well, what can I say, you won't surprise me at all. I actually wouldn't want you at my party.


I think that's totally going to be a problem in X years with devices like Neuralink.

Maybe such devices will be expected to bear some kind of sign to indicate usage, and using them when interacting with others in social gatherings will be something that's frowned upon. Who knows.


> It will bring back the ideas, quotes, and memories you need, when you need them most, with no organizing, tagging, or linking required. It will work as a personalized extension of your intelligence available 24/7 at the touch of a button.

How can people come to this conclusion after using chatgpt?

(1) Its responses are often good, but they are routinely wrong as well.

(2) Search engines already do these things, without the same loss of fidelity.


1) Its mistaken responses ('hallucinations') come largely from it missing data and having to interpolate something in spite of that; if you ask it about things that it definitely has information on, it can be pretty reliable (though not perfect, I'm sure). ('Missing data' is definitely a simplification, but it's roughly the idea.)

2) It has the potential to be far superior to a search engine because its data is indexed not only by casual natural language but via your personal 'vocabulary': e.g. you could say things like, "what was that song Jules' friend recommended to me last week?". Not that you could get very far with current models, but what'd be required is a matter of degree rather than kind: e.g. for the above scenario, for it to aggregate info over time about an acquaintance or your schedule etc., it mostly just needs an extended 'memory', which is something inroads are already being made on in GPT-4.


Warning, this is blatant content marketing.


I would like an AI assistant that can leverage personal/work emails (via API), texts, phone calls, photos (friends' social media posts with me tagged), movies/music I like, Spotify likes, Netflix likes, browser history, health data, YouTube history, etc. to train on and answer (and let me ask it) personal questions about me. When was the last time I played softball? Use it to determine interests and hobbies (ranked list).

This may be a security and privacy nightmare. However, I do believe that Google, Microsoft and more have already begun this process. So why should we not have detailed access of all combined accounts?


I am extremely bullish on this. I use ChatGPT every day. I am learning at a rate that I haven‘t in years and I love it: the pure fact that I can ask follow-up questions or ask it to shorten things. This is the core problem of learning: "I already know this, I am bored" or "I can‘t understand this right now." For the first: hey ChatGPT, sum this up for me; for the second: hey ChatGPT, give me more examples, set this in relation to that. I am so excited for the future.


Be careful. I had that same excitement in the first few minutes, but then I asked it about things that I understood well, and it was just completely wrong or lacked some critical nuance. Granted, this was in the early days, before their accuracy update, but the impression was made that it doesn't really have a logically consistent "knowledge graph" of things.


I guarantee that you are not learning anything and if forced to go without ChatGPT you would barely be able to summarize high level overviews about the topics it has "taught you".


> If I just ask ChatGPT, I get the answer faster, but I also forget faster. It's not learning.

I agree. It's necessary to put in the effort to internalize knowledge, as well as to recall it once in a while. It does seem important to internalize _some_ knowledge.


Sounds like what Humane is building.


HN algorithm promotes broken links?



