As an academic I will again make the dull but necessary point: figuring out the fundamental laws of nature is hard, stagnation is the norm, and the advances of the mid-20th century were the aftershocks of the great revolutions of quantum mechanics and relativity. You can't just make that happen again on demand. Fundamental laws are in short supply.
Whenever people tell me we're stagnating I ask them to name an alternative. So far, from best to worst case, every answer has fallen into one of the following buckets:
- a subfield that already exists and already has plenty of people working on it
- an idea that was extensively investigated and carefully ruled out over 50 years ago
- something that requires more money than the field will receive in total over the next 50 years
- a mathematical formalism which simply refactors existing laws in a way that makes them unreadable to almost everyone, without a chance of leading to any new predictions
- a complicated, ad hoc model that isn't any more predictive than simpler models, but gives the modeller hundreds of shiny knobs to tune
- a metaphysical suggestion about how to view what nature "truly" is, which amounts at best to rewriting existing laws with bigger, more "profound" words
I can't see how this is going to be fixed by changing how citations work. If anything, in my experience the particularly bad stuff has been correctly punished by low citations.
I see that you work in particle physics. I don't think particle physics is representative of science in general. My impression is that particle physics is seen as sexy and attracts a larger proportion of better researchers than many other fields for that reason.
The stagnation in fluid dynamics (my general field) seems fairly obviously linked to incentives. Pumping out papers is seen as more important than doing a good job as far as I can tell. In the past few years I think I've made a lot of progress in my subfield by simply compiling tons of data from the open literature [1], which apparently no one thought to do on the scale that I've been doing. (It's a lot of work up front, which would lead to a delay in publishing results.) In the process I've found that a lot of what's written in review articles and books on the subject is obviously wrong. This is not a sign of a healthy field! And I don't think you can do this in particle physics, but I think you can in many fields of engineering.
My experience in research comes from starting in very fundamental fluid mechanics (vortex dynamics and instabilities), moving into biomedical (fundamental and applied) and turbulence (fundamental) and now working in applied stuff.
I think the field is stagnating partly because the amazing work done in the last century ticked soooo many of the boxes. It was really the golden period ending with Prandtl and Karman - like the end of the ultimate share house. A lot of the tree had been stripped bare by the 70s, and most of the advancements since have been refinements or application-specific. Obviously computational techniques have exploded and experimental methods have improved, and there are still new discoveries, but in terms of impact, outside of microfluidics and the never-ending grant gift of the turbulence closure model, we are entering a dry season where grants are very much application-specific and fundamental physics is simply not rewarded. We used to have a bet on what year the fluid mechanics chair would no longer be a thing at universities. It doesn't help that the fluids community is not super tight-knit (lots of beef!)
I also 100% agree with you in terms of papers vs doing a good job - but that is science in general nowadays - it will always be a tiny percentage that actually progresses the field, a bunch that try and fail as research is wont to do, and a majority that just try to stay relevant by pumping out rubbish.
Could you elaborate on some of the obviously wrong stuff that you've seen? I'm curious. I have an ME background but strayed away from it into electronics, then oddly enough ended up in a fluids heavy business.
This paper is on the different varieties of liquid jet breakup, e.g., for applications like fire hose streams and fuel sprays. At low speeds you have more regular breakup but this changes to various types of less regular breakup at higher speeds. (There are photos of each type in the preprint.) These varieties are called "regimes" in the literature. Because models are typically only valid in a particular regime, it's important to identify the correct regime. This is often done with a "regime diagram".
There's a lot I could write about what was wrong before. The easiest thing to do is compare figure 3 (old regime diagram, p. 5) and figure 4 (my new diagram, p. 13). These are in exactly the same coordinates but don't resemble each other much. If science were working correctly then the change would not be anywhere near as dramatic as it was.
Now, with the small amount of data past researchers used to construct these diagrams, they couldn't see that the diagram was wrong. The data was simply too sparse to see the big picture. But once you start adding tons of data it becomes dead obvious that the diagrams you see in textbooks and review articles are wrong. So I can't blame previous researchers that much, but compiling open data is something that should happen regularly. The current academic system does not incentivize data compilation, at least in engineering.
The most recent study mentioned in the paper used a grand total of 11 data points and claimed to have enough resolution to move some of the established boundaries slightly. This actually is a regression. The first study to construct a regime diagram had 63 data points. Mine has roughly 1200, and I still want more data! 11 data points is not scientifically acceptable, but it was enough to get a publication.
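(For a sense of the mechanics involved, here is a minimal sketch in Python. The file names, column names, and the choice of Reynolds-Ohnesorge axes are illustrative assumptions, not my actual pipeline or the coordinates used in the paper.)

```python
# Minimal sketch: pool digitized data from many published sources and plot
# a regime diagram. File names, column names, and axes are assumptions.
import glob

import matplotlib.pyplot as plt
import pandas as pd

frames = []
for path in glob.glob("literature_data/*.csv"):  # one hypothetical CSV per source paper
    df = pd.read_csv(path)
    df["source"] = path  # keep provenance so every point can be audited later
    frames.append(df)

data = pd.concat(frames, ignore_index=True)

fig, ax = plt.subplots()
for regime, group in data.groupby("regime"):  # e.g. "Rayleigh", "wind-induced", "atomization"
    ax.scatter(group["Re"], group["Oh"], s=10, label=regime)

ax.set_xscale("log")
ax.set_yscale("log")
ax.set_xlabel("Reynolds number")
ax.set_ylabel("Ohnesorge number")
ax.legend()
fig.savefig("regime_diagram.png", dpi=200)
```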
(I've since revised and extended this paper but I can't upload the new version yet due to the publisher's policies. One example: I determined that "turbulent dripping" is probably not possible so I removed it from the diagram. ;-)
I could list more if you want. There's no shortage of problems. But keep in mind that these problems are usually only obvious after you get enough data.
I strongly disagree. You make a great argument for why stagnation in physics is to be expected, but advances in physics were a minority factor in the technological advances of the 20th century. Advances in chemistry were by far the primary driver, followed by biology, and then by physics. (I love physics, I minored in it in undergrad, and didn't minor or major in chemistry, lest you think I'm biased.)
Chemistry brought us:
- the Haber-Bosch process, artificial nitrogen fixation that revolutionized food production
- the Bessemer process, modern steel production, necessary not just for modern buildings but also engines, modern guns, cars/boats/planes, modern supply chain via shipping and trucks, industrial factory and farm equipment, etc
- modern plastics. Seriously, look around your room, wherever you are. How many things don't have some amount of plastic in them? Less than half, probably?
- petroleum extraction and processing, which powers those steel engines, powers most of our electricity, and is the raw materials for plastic
- modern explosives, especially smokeless gunpowder (did you know that people totally built Gatling-style rotary machine guns in the Victorian Era? They were useless because black powder creates too much smoke for you to be able to aim a black-powder-powered machine gun)
Biology brought us germ theory and modern medicine, and of course these fields are all interrelated because engines required thermodynamics and electricity is mostly physics, but none of that required relativity or quantum mechanics.
Quantum mechanics is the foundation for the physical chemistry subfield of chemistry, is crucial to modern silicon integrated circuit manufacturing, and of course—the atomic age. But none of those have had the impact on society as oil alone, for example.
As mentioned, I love physics, but the fact is, details about the fundamental laws of nature just did not have the impact on society that higher-level, less fundamental scientific advancements have had. The Wright brothers created aviation without any quantitative understanding of aerodynamics; and 100% of the societally impactful advancements in aerodynamics since then have had no relation to relativity or quantum mechanics.
Hmm, I agree, the answer to the question posed varies a lot between different fields. That's another aspect of these articles that I don't like: they talk about "problems with science" as a whole as if it were a monolith.
Yeah, I totally agree. Within fundamental physics, I think you're totally right that people were, in a sense, "spoiled" by the great revolutions of quantum mechanics and relativity, and seem to think the next revolution is right around the corner if they can just gedankenexperiment hard enough in their heads. But it probably isn't. It's probably going to be a long slog of hard work and incremental advances that will be necessary to set us up for the next breakthrough.
> Advances in chemistry were by far the primary driver
Seems like a pretty biased view of recent progress, to be honest. One could come up with similar lists from any number of other areas (Physics - Nuclear Energy, Biology - DNA & biotech, Computer Science - The Internet, etc.)
I'm not sure what your point is. As I explained, I am biased towards physics, if anything. And I acknowledged that each field has had societally impactful advances, I never claimed chemistry was the sole driver. Surely you're not arguing that physics, chemistry, and biology have all had exactly equally-sized impacts on society over the 20th century? What a surprising coincidence that would be!
If you think one could come up with similar lists, by all means, please do. I actually did mention the impact of nuclear energy (I called it "the atomic age" to include nuclear weapons as well), but surely you wouldn't claim that has had as large an impact on society as petroleum alone, which powers most of our electricity, internal combustion engines that made modern trade and supply chains possible, and is the raw material for the hundreds of pieces of plastic in the very room you're sitting in?
> As I explained, I am biased towards physics, if anything.
It's one thing what you say you're biased towards, it's another thing what you're actually biased towards :)
> I'm not sure what your point is.
You're claiming that chemistry had by far the biggest impact, and at the very least your post does not substantiate that claim (and listing a few key results from field A does not logically prove that those results dominate over other fields B, C, D in terms of impact).
I did substantiate my claim, I just had no need to do so with a logical proof. Logical proof is an unnecessarily (in fact, illogically) high bar to demand from a Hacker News comment.
All I needed to substantiate my claim was convincing examples of results from field A whose impact, added together, most people would agree is more than the combined impact of field B in the 20th century. The fact that the person I was replying to then agreed with me is strong evidence I succeeded. So is the fact that, of the people who read my comment, 19 more of them upvoted it than downvoted it.
Good luck trying to convince people that someone is wrong not by providing convincing examples or reasoning against their conclusion, but merely by pointing out that their reasoning, though convincing, didn't amount to a logical proof.
Good luck believing, and acting on, solely information for which you have found a logical proof.
(Ironically, I mentioned my minor in undergrad was physics, but my major was pure math, so I used to live and breathe logical proofs, and I still love them. But acting like they're the only kind of argument worth making is pure foolishness.)
In physics we have two (smaller?) areas that are ripe for some real progress: Optics, electronics.
Electronics: The memristor. Though it seems that the HP breakthrough was a bit premature, the promise of a solid-state memristor would allow for very energy efficient computation. Think top-of-the-line modern GPUs that can run off a small solar cell or watch battery. The actual physics of such circuits may have some fun things hidden in them. I think it'll really change how we use computers.
Optics: As our manufacturing revolution reaches out of semi-conductors and into more difficult materials, we're seeing cheap and interesting optics happen. I'm not sure where it's going, but grads are 'playing' more in the lab with cheaper stuff, allowing for kismet to happen faster. Especially as bio becomes 'thirstier' for optics, as there is a LOT of money there from disease research.
Though these aren't re-writing the fundamental laws, they are looking to have real impact on human life, and not in ways that are just making things more efficient (though there is a lot of that too).
What you wrote makes total sense for fundamental sciences. However, many if not most scientific fields are removed from that, and the supply of ideas or things to investigate is far larger. In this context, gaming the system is easier. I see it in my field, where a lot of published stuff is junk written for publication count.
1. You have effectively attempted to straw man all possible suggestions and conversation around modifying STEM-based progress by using a few anecdotes as a basis. This is not 'a point'. These are only a handful of observations.
2. You seem to be framing all possible STEM-based progress from the perspective of progress within the physics community. And this comment being at the top (currently) really diverts so many other possible enriching discussions around this topic. As another comment pointed out, progress in chemistry during the 20th century contributed as much to human progress as physics, even though it goes mostly unnoticed by a larger audience.
3. Your argument only mentions how citations have allowed for better filtering of true-negative papers/authors, but it does not explain away over-cited papers/authors, which are essentially false positives with respect to being progressive.
4. It does not take extraordinary logic to find flaws in any system, so I am not sure why so many from an academic background (although not all) adopt a more defensive stance towards the existing system as if it was provably a global optimum. Would you mind engaging in conversation around this?
> It does not take extraordinary logic to find flaws in any system, so I am not sure why so many from an academic background (although not all) adopt a more defensive stance towards the existing system as if it was provably a global optimum. Would you mind engaging in conversation around this?
I have seen similar defenses of various things in academia over the years. It's very easy for someone who is successful in a particular system to think that system must be generally good. I can recall going to a talk nominally about writing good grant proposals where the speaker would add random comments about how the current funding system is the most effective known to man, etc. The speaker was a tenured professor. They're going to be inclined to think that whatever system they succeed in must be fundamentally good. After all, it recognized their brilliance!
Similarly, knzhou is a graduate student at Stanford on a NSF fellowship. He seems like a very smart guy. But his experiences are not representative of science as a whole. It seems obvious that his opinions are going to differ from someone like me. I was rejected by Stanford, MIT, and Caltech and also rejected from every graduate fellowship I applied to. (Don't read too much into the schools: Later I decided that those schools would not have been a good fit for me, so I'm glad to have been rejected from them.) I think I do good quality research that isn't recognized by the current system. To do my research I've had to take whatever scraps of funding were available or work as a TA. This doesn't strike me as optimal. People like knzhou haven't had these experiences. This is triply true because I've been in grad school for about 9 years now, but knzhou has only been in grad school for about 3. Maybe in a couple of years, knzhou's opinions will sour? After 3 years I didn't have the same opinions I do now.
You are right. I missed this. It was mostly disappointing that the top comment was practically a diversion from a much richer discussion on the topic.
> Maybe in a couple of years, knzhou's opinions will sour
Is his opinion "allowed" to sour (publicly) until he is a Tenured professor? I am genuinely asking as I believe the incentives of academia have become in some ways (and perhaps in many) entirely anti-science. It feels more like students are filtered for higher positions within a large religion e.g., Catholicism. And we really should not be surprised by this phenomenon because game theory tells us there are tendencies of these dynamic systems to evolve towards similar outcomes/structures. So, science becoming more socially popular over the last 100-200 years and people behaving religiously around it (with the similarity being between the structuring of a religion and modern academia) seems inevitable and actually needs to be proactively addressed and corrected.
> Is his opinion "allowed" to sour (publicly) until he is a Tenured professor?
I don't think getting a negative opinion of the status quo would help. If an assistant professor is not vocal about their opinions then the tenure committee might not notice or care.
But don't get the impression that tenure provides much protection for unorthodox views. If the higher ups want to get rid of you, they'll find a way. They might not be able to outright fire you but they could (for example) increase your workload to the point where staying as a professor is untenable.
And note that the tenure process itself filters out people who would challenge the status quo. It's a protection that tends to go unused.
I'm not following your analogy with religion. If you mean that to advance requires unquestioning belief then I would largely agree.
> But don't get the impression that tenure provides much protection for unorthodox views.
I understand. I just think every degree of security/authority is a degree of "flexibility" (not quite "freedom") within the institutional structure of academia. Which is on par with totalitarian-oriented structures.
> If the higher ups want to get rid of you, they'll find a way
I am very curious who the "higher-ups" are in an institution dedicated to the production and enrichment of intelligence within humanity. If you are a brilliant professor exercising scientific thinking against your own establishment, then who really is considered "higher" than you? Are these higher-ups objectively more intelligent? Perhaps, sometimes. But my guess is that the vast majority of the time these higher-ups are just politically savvy administrators who are in it for the prestige and money. Science has become adulterated by the same incentive structures as religion, e.g., endowments, donations, etc. It is not run for the promotion of intelligence but rather for the optimization of the clerical. Administrator is practically synonymous with clergy.
> If you mean that to advance requires unquestioning belief then I would largely agree.
I am not even attempting to argue anything with regards to belief or faith. I am simply trying to make obvious the simple observation that structurally things tend to converge towards a certain status-quo (equilibrium) in almost everything. Academia is not free of this reality even if many within like to (naively or blindly) believe they are. But, it is most frightening that science itself (almost the epitome of which is to question) has converged towards these exact same structures (as religion and government often do) almost without any hindrance to becoming such. It is actually scary.
I am genuinely hoping the younger generation changes this dramatically. But, I am not seeing it. Most millennial academics are either supportive or entirely silent towards the status-quo.
> I am very curious who are the "higher-ups" in an institution dedicated to the production and enrichment of intelligence within humanity?
I was thinking about both university and department administration, though for the latter I'm thinking only about faculty members with administrative positions. Also included are committee members.
> But, my guess is the vast majority of the time these higher ups are just politically-savvy administrators who are just in it for the prestige and money.
I think most genuinely believe they are doing what is in the best interest of science in general, perhaps limited by their own capabilities as they see it, i.e., they can't do everything and need to follow their bosses and incentives. But these people aren't always helping.
> I am genuinely hoping the younger generation changes this dramatically. But, I am not seeing it. Most millennial academics are either supportive or entirely silent towards the status-quo.
I would tend to agree that the younger generations don't seem any better in these respects than the older. But there is a growing independent science movement, and I'm hoping eventually they'll move the needle. Many of them are here: https://forum.igdore.org/
The opportunity today to do great work is seemingly better than at any other time in history (the internet, the number of people interested in a field, the absence of war/famine), so it does feel like we are stagnant despite having many great tools and collaborative potential. Of course, in reality the most important work often happens under very difficult conditions; Hamming said that.
The most difficult thing of allowing for more novel science is accepting that more 'useless' research will be done. If you read stories about the old days, you'll see how often professors/researchers used the freedom they had to do absolutely useless things, just based on their own personal preferences. The same freedom of course allowed others to do great things.
I think nowadays there is much more awareness about what researchers are doing, and they will be held accountable, not just by the people who fund them, but also by the general public (Why are we funding this research about levitating toads?!?!). Shielding the researchers from such outrage and building acceptance for (seemingly) useless research will be just as important as the article's suggested new metrics for novelty.
People say you need to let scientists do useless exploration in order to promote innovation. I think that is true to a degree, but definitely should not be the spirit of any policy where innovation is the goal.
Many of the scientific innovations that pushed the 20th century forward were in fact purposefully done. The Von Neumann architecture was not invented to have fun, it was invented to aim artillery and design nukes.
Similarly, much of the tech infrastructure in Silicon Valley descends directly from radio engineers coming out of WWII theater.
Bardeen, Brattain, and Shockley didn't invent the solid-state transistor because it would be interesting, they invented it because it would allow miniaturization of vacuum tube computers.
The laser was invented for telecommunications purposes. There is now an entire field of "quantum electronics" describing the theory behind them.
Even going back further, the invention of the steam engine preceded the development of thermodynamics, which initially sought to describe the limitations of these engines. If "science leads technology", then one would expect steam engines to be rationally designed from the results of thermodynamics. The opposite is true.
I think there are enough examples of scientific fields emerging from technological innovation that "uselessness" should not be considered correlated with innovation.
The von Neumann architecture became possible because Church and Turing wrote their highly theoretical, mathematical works.
Lasers became possible because people like Fresnel studied the properties of a highly impractical phenomenon, coherent light, while other people discovered another short-lived curiosity, inverse energy levels population.
Many such works had to be done decades before any engineering applications, or a prospect thereof.
Science is when you study the literally unknown, including no known practical applications.
When you study small pockets of unknown in a generally understood and practically fertile field, it's engineering.
> The von Neumann architecture became possible because Church and Turing wrote their highly theoretical, mathematical works.
I don't know about others but the Von Neumann architecture is much more described by Church and Turing's work than derived from it, like steam engines are described by thermodynamics. Neither invention was actually dependent on the theory.
Science often derives from engineering, not the other way around. Get over it.
> I don't know about others but the Von Neumann architecture is much more described by Church and Turing's work than derived from it, like steam engines are described by thermodynamics. Neither invention was actually dependent on the theory.
Huh? Turing's paper on the halting problem and Universal Turing Machines was released in 1936. Von Neumann, Eckert, and Mauchly's memo was not released until Jan 1944. There is no way that Von Neumann's architecture is described by Turing's work, given the 8 year difference between them!
> I don't know about others
I'm puzzled by how you can make such a strong claim about the nature of innovation without precisely drilling down into the evidence backing your theory.
I think your comment is excellent but doomed to be undervalued by those outside science. I'd guess this is in part because science textbooks and pop sci always present science as a fait accompli. To put it differently, it's as if you listen to a wonderful jazz improvisation and ignore the thousands of hours of 'doodling' on the scales the musician might have done.
Are you trying to suggest that Zuse's computer didn't work because the NVA was not out yet?
That Babbage's machine (also NVA) wouldn't have worked because von Neumann wasn't born yet?
Babbage's machine would have worked! But it'd be more limited than it needed to be. It was a specialized calculator by design, not a general-purpose computer, partly because of the lack of a theoretical framework.
> I think there are enough examples of scientific fields emerging from technological innovation that "uselessness" should not be considered correlated with innovation.
I was going to cite lots of "useless" stuff that suddenly became really important when some "blocking technology" got removed, but "useless" is the wrong problem and phrasing.
The problem really is: "How do you compensate someone who tackles a big problem but winds up making no progress or even being wrong?"
From personal experience: pressure and flexibility. The investigator should be put under pressure to produce results quickly, justifying further time investment; in addition, there needs to be flexibility in the form of alternative projects they can tackle that have a high probability of working.
But I think it's crucial that they be allowed to place themselves under that pressure, if they truly believe they can accomplish whatever big thing it is. And if they have a really good reason that can be written up, that itself is a result that justifies further time investment.
I think current science eliminates the possibility of that pressure, and focuses on the high probability projects. I think places like Google X actually don't put enough pressure on, though I don't really know that organization very well.
Right. However, the inventions you mention, were not expected to be useful the same year. The problem today is not that there is a demand to focus work on potentially useful areas; that's fine as far as it goes. The problem is that there is too much demand that work bear fruit in the short term, whereas in fact we have picked most of the fruit from the branches currently in reach, to mix metaphors, and we need to go back and do some more long-term investment.
My father is a researcher in electrical engineering. A while ago, he and his colleagues envisioned a method of developing solar panels that could have blown past the efficiency of the day. Only problem is it involved creating and working with very small crystalline structures. Their research into said structures was novel for its time. Turned out once they learned more about the actual physics, the theory behind the panel design broke down and they had to abandon the project.
But was the result useless? Hardly. To this day he gets contacted by scientists from time to time trying to do the same thing and he has to tell them why their plan won’t work.
Preventing people from going down dead ends is valuable, like knowing how to look for the solution to an obscure software error. Or, to paraphrase Edison, it’s knowing ahead of time how not to make a lightbulb.
This problem exists in schools as well: education is being increasingly restricted and tuned to focus on where the puck is, not where it will be, and increasing numbers of worthless metrics are demanded to ensure this is the case.
Not that this is a new problem (consider Dickens’ Hard Times) but I have been shocked how my 1970s/80s education (mostly broad, fun stuff with the only “skills” being mathematics and very concrete things like operating a car or camera or making a nutritious meal) contrasts with that offered to my kid or even worse to my gf’s kids in the Palo Alto schools.
This especially astonishes me as I consider one of the best things in the US is its non-specialized approach to undergraduate education.
Good point. That said I imagine another facet is the risk of fraud and graft as some would love to be paid for navel gazing 'studies' or generally slap-dash work.
> This emphasis on citations in the measurement of scientific productivity shifted scientist rewards and behavior on the margin toward incremental science
this sounds like the science equivalent of how search engines lead to the creation of content farms. citation scores are to spam science as pagerank is to wikihow?
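To stretch the analogy: here's a toy sketch of PageRank run over a tiny invented citation graph (papers and links are made up). A link-based score rewards genuinely central work, but it also rewards tight mutual-citation loops, which is roughly the gaming worry.

```python
# Toy citation graph; an edge A -> B means "paper A cites paper B".
# All paper names and links are invented for illustration.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("review_1", "seminal_paper"),
    ("review_2", "seminal_paper"),
    ("incremental_1", "review_1"),
    ("incremental_2", "review_1"),
    ("incremental_3", "incremental_1"),
    ("clique_a", "clique_b"),  # a mutual-citation loop: the "content farm" of science
    ("clique_b", "clique_a"),
])

scores = nx.pagerank(G, alpha=0.85)
for paper, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{paper:15s} {score:.3f}")
```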
Imagine going through life with a high-tech spam filter from the future that can filter out wikihow and boring science. Impossible to build, I suspect, but that would be the life.
> this sounds like the science equivalent of how search engines lead to the creation of content farms. citation scores are to spam science as pagerank is to wikihow?
Or a science equivalent of what money and market competition do to all human ventures. You need to produce more, and faster, than your competitors to progress, or else you'll fall into obscurity. "First to publish" is isomorphic to "first to market". If you treat research as a videogame, citations fit perfectly as an in-game currency.
That is to say, it's an example of a general problem of optimizing short-term metrics, which are a decent proxy of the actual goal up to a point.
Science could definitely use a spam filter. Journals unfortunately do a poor job of being one; they're thoroughly gamed, the same way Google is via SEO.
At least in the market a junk product doesn't survive despite being first to market. The product needs to be good or at least inspire others to be regarded as a real contribution. This additional requirement is what's missing from the incentive system in science today.
Tons of junk survives and thrives in the market. Your great grandparent specifically brought up content farms as an example. Homeopathic remedies, counterfeit SD cards, the list goes on.
Nothing about markets magically solves the problem that if it's hard to evaluate quality, there will be junk masquerading as quality.
About the search aspect, I wonder if there is any way to create a custom search engine or filter out crap from google.
I would love a browser addon that auto-filters out quora, wikihow, w3schools, techcrunch, etc...
None of those garbage sites are what I'm looking for, ever. I think that, amusingly, filtering out the top 5% of best SEO optimized sites would make results much better.
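A real add-on would be written in JavaScript, but the core of it is just a domain blocklist. A rough sketch of that logic in Python (the domains and URLs here are only examples):

```python
# Keep only search results whose host is not on a personal blocklist.
from urllib.parse import urlparse

BLOCKLIST = {"quora.com", "wikihow.com", "w3schools.com", "techcrunch.com"}

def keep(url: str) -> bool:
    host = urlparse(url).netloc.lower()
    # Block the domain itself and any subdomain (e.g. "www.quora.com").
    return not any(host == d or host.endswith("." + d) for d in BLOCKLIST)

results = [
    "https://www.quora.com/some-question",
    "https://developer.mozilla.org/en-US/docs/Web/JavaScript",
]
print([u for u in results if keep(u)])  # only the MDN link survives
```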
w3schools loads faster than MDN but I'm mostly with you
reevaluating our social attitudes towards 'trust in experts' will be an aftermath of the covid crisis, and it may bleed into search & social media as well -- is there value in 'peer-reviewed everything'?
Maybe more multi-disciplinary and group-based (given how much more complex most problems are for individual experts to grapple with by themselves). Probably rated like team sports.
It's doubly tragic if it simultaneously incentivized the mass production of boring, low-value science and punished the boring but valuable work of replication.
>Imagine going through life with a high-tech spam filter from the future that can filter out wikihow and boring science.
Happily it's available now and it's called intuition. If something is boring then avoid; if something is exciting then pursue.
Problems being that it's purely anecdotal and you have to know and trust yourself. If you're too attracted to prestige, money or job security then it's going to return a distorted signal. Which is why organised science is now bureaucratic and slow despite the fact that there are more scientists than ever before.
This premise that novelty should be better-rewarded seems odd to me, because I thought a common complaint about scientific journals was overly rewarding novelty instead of robustness? Isn't that considered one of the contributing factors to the replication crisis?
I think it's interesting to think about the incentives in scientific publication by comparison with the incentives in HN (or Reddit) comment posting.
Someone could spend days writing a well-researched HN comment that was exceptionally informative and accurate. There are occasionally comments that represent an hour or two of work... but multiday-effort comments are non-existent: the incentives of the venue don't reward that effort. And if you do take the time, the discussion will have moved on before you get it published. If you published them anyway, they would likely be lost in a sea of other low-effort comments, or, if they were acknowledged, not much more than comments that merely took an hour.
You don't just see fewer multiday comments, you see essentially none at all. Nothing technical about HN or Reddit prevents people from writing some epic work of commentary or research in a comment, and there would often be value from such works existing...
The same pattern exists in academic publishing but with the effort levels shifted up one or two orders of magnitude: the maximum moves from an hour to (say) weeks (exact threshold varies by field). Works taking more effort than some field specific cutoff are extremely seldom done. Why make one 10x effort paper when you will serve your interests much better and with lower risks making 10 1x effort papers?
This would be fine if all of science in that field could be done in the window of efforts allowed by those incentives, but that isn't the case... especially since a lot of the low hanging fruit-- results that can be obtained below the cutoff-- is already picked.
So this is part of why you see things like immunologists pointing out that there has been relatively little academic investigation of virus seasonality, though it's an apparent, interesting, and seemingly important phenomenon. Studying it in any depth would require experiments spanning years and likely dealing with human subjects, on top of the possibility that your ideas don't turn up anything new... a lot of risk for someone who actually needs to get things published.
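To make the 10x-vs-1x arithmetic concrete, here is a toy model with invented numbers; it is not data, just the incentive written out as expected values:

```python
# Toy expected-payoff comparison: ten low-risk incremental papers vs. one
# ambitious paper that might fail. All numbers are invented for illustration.
n_small = 10
citations_per_small = 10   # assumed typical payoff of an incremental paper
p_small = 0.9              # incremental work almost always publishes

citations_big = 150        # assumed payoff if the ambitious project works
p_big = 0.3                # high risk of producing nothing publishable

expected_small = n_small * p_small * citations_per_small   # 90
expected_big = p_big * citations_big                       # 45

print(f"expected citations, {n_small} small papers: {expected_small:.0f}")
print(f"expected citations, 1 ambitious paper:   {expected_big:.0f}")
# Under these assumptions the incremental strategy wins on expected value
# and has far lower variance, which is the whole point of the comparison.
```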
The extent to which this is an issue depends on the field, though. In my field, you can get massively rewarded, in both soft and hard criteria, if you put effort into writing a good review paper. And many of the giants of the field have sunk even more effort into writing textbooks, which yield even greater rewards. People can do this precisely because the incentives are better than for internet comments; a good textbook can be celebrated for decades.
There have been conversations for years about the slowdown in scientific discovery. Oftentimes the explanation is that science is getting more expensive, something I've often felt was a bit of a straw man argument.
Instead, pointing at the current incentive structure (that everyone already agrees is broken) makes a lot more sense.
So let's start a conversation about a better incentive structure. What do we want to incentivize? How do we do that? How do we fix science?
I would start by cutting academic salaries at elite schools. These are $150k-$200k for a typical professor, and $1 million at the top. Maybe $100k at the bottom.
I'd place these at $60k-$100k: enough to live on, but not enough to go into it for anything other than love-of-science. A university president might hit $200k.
I'd hire many more academics, and give them much more freedom. Not as much publish-or-perish, and more intellectual exploration. Anyone qualified to do research should have the option to do so in their field of interest.
That's kind of how academia used to work before massive endowments.
I might also do something about tenure. As structured right now, it seems like an obsolete idea. It's not a horrible idea, but it's obsolete in a lot of ways: it forces people to put in massive efforts early-career, it doesn't line up with biological clocks, and it creates many other bizarre incentives. I don't mind someone gaining tenure if they've done fantastic work, mind you, but it shouldn't be a 7-year clock. For example, perhaps you're a professor with a 5-year renewable contract. If you do fantastic work, you become a professor-with-tenure, whether that's 4 years in or 40.
I will disagree with every single suggestion you make:
1. Salaries should be -higher-, not lower! Why would any self-respecting smart person want to throw their intelligence away for a pittance? You want them to be smart with science and stupid with money, is it? Live like Diogenes?
2. There shouldn't be more researchers, there should be fewer. My decade-long experience with academia has been that too many people who aren't exactly scientifically smart (more smart at socializing and grant writing) are too established. We need to recruit the type of minds that are truly capable of innovation and make sure they don't have to compete with bureaucrats who're there simply because they chose biology in undergrad and just kept making the default career choice every time they were presented one. These new people should also be REALLY smart, not just marginally better than the public. Which means that there can't be too many of them anyway. They should then be given resources that don't inherently convert the entire system into a Ponzi scheme (like the PhD system now does). In the grand scheme of things they can be given fewer resources if they are given structures to manage things well.
3. I'd argue that the tenure system worked quite well despite its flaws. If anything, tenure doesn't give the same guarantees it gave half a century ago, so people are still incentivized to continue running the rat race. If you still want to hold them accountable, maybe a much longer cycle might be okay, perhaps 15 years? A 5-year contract sounds like hell for most fields. Some of the most interesting work I did took more than that time to come to fruition, and that's not uncommon.
Only thing I'll agree with you is that we should make sure that whatever new process is conceived must try to correct perverse incentives for women, given how the current system plays against some common life choices they might want to make (having kids).
> Why will any self respective smart person want to throw their intelligence away for a pittance? You want them to be smart with science and stupid with money is it?
Because it buys them the freedom to do what they want in terms of research, rather than being a slave to a certain agenda, or a quota, or what have you. Science progresses by empowering curious people. If a researcher was interested in making money, then they would focus on money-making discoveries and spin off startups (which already happens plenty now).
> There shouldn't be more researchers, there should be less. My decade-long experience with academia has been that too many people who aren't exactly scientifically smart (more smart at socializing and grant writing) are too established.
That's because the academic and publishing incentives are all skewed, like the OP said, not because we have too many people doing research. If the low-hanging fruit has been plucked, then we need more eyeballs looking at things from all sorts of different perspectives to find pieces that don't quite fit our established theories.
I have an agreement and a disagreement with your take:
* I agree that reducing salaries for academics will only make the currently misplaced incentives worse. It will deprive society of the valuable research that more competent and talented folks would have done. In my mind, the key point in all these budgetary conversations is the ballooning bureaucratic/administrative layer that is eating up more and more of the budget and making it harder for universities, as collectives of teachers and researchers, to adapt to the evolving priorities of the real world.
* I disagree that there should be fewer researchers. Fewer academics, maybe, but we definitely need more people who can make a lifestyle out of entirely or partially doing research. My admittedly anecdotal impression is that for every "not exactly scientifically smart" person who secures tenure, there are a handful of would-have-been-great researchers who just get fed up and leave academia despite having the passion, competence, and the willingness to even make a few sacrifices.
IMHO we have cornered ourselves into a false dichotomy (broad strokes here, there are exceptions of course): either (a) you are an academic, you have to pull 60 hr weeks to do any meaningful research, and you have to deal with the ossified structural issues of academia, or (b) you are out in the wild, make a lot more money, but you spend your 40 hr weeks maximizing the short-term profits of your employer. I would think, were there a viable alternative to this dichotomy, that many passionate and competent people would happily make some reasonable sacrifices (in pay, work hours) to engage meaningfully in much-needed research.
As someone who has now experienced both sides of the dichotomy (80 hr weeks underpaid and overworked in academia, 40hr weeks bored out of mind doing coding), a compromise would be great.
Personally I'm hoping to keep my job to pay the bills and start a garage lab to pursue my passions in science. It'll be severely underfunded, but I'm hoping to conceive of things that can be done with machines that hopefully cost only as much as a boat would. Further, Richard Hamming is on to something [1] when he suggests that great science happens when resources are scarce.
I don't think reducing salaries will have any effect on the caliber of academics. There are a ton of graduate students, postdocs, and research scientists glad to work for less. Most are just as qualified as the faculty who supervise them. Likewise, the quality of faculty at elite schools and state schools is identical; the key difference in research output comes from access to resources.
Money isn't everything that motivates people. And you want to get exactly the kinds of people who are motivated by curiosity, not power and prestige.
The salary needs to be enough to live on without financial stress. On the other hand, it doesn't need to guarantee anything beyond cafeteria-grade food, basic housing, and a beat-up old car.
Benefits are important too. Things like university medical, insurance, and retirement policies reduce the risk profile. Academics shouldn't be distracted or stressed by financial constraints, but neither should they be motivated by them.
I don't think you really understand what professors make already. Even a CS professor at UCLA is barely breaking $100k, and they would be in dire straits in their market if the university didn't back their housing loan. Sometimes professors have the option of topping up their salary out of their research grants, but that has a few problems in itself.
It's a little tough to generalize about salary in academia.
Obviously, a 'professor' covers multiple ranks in the academic hierarchy, but if you look at the published pay scales for assistant professors (I believe the lowest professor ranking with tenure) at the University of California, it looks like a large majority of them are breaking $100k [1]. I don't see a way of breaking those results down by discipline, but I'd imagine that individuals in fields like CS are making towards the higher end of that scale compared with more academia-exclusive fields like literature or geography (relatively, anyway). Some of them with clearly medical-related titles are making much more, but I think that's to be expected.
Also, it's possible I'm mistaken, but I believe professors are allowed to be paid to act as advisors or "resources" for external organizations to act as a secondary revenue stream? Combined with topping off their salary with research grants as you pointed out, and the nice perks academia offers if you can get tenure (job security, etc.), I'm not sure that even this data fully encapsulates compensation either way
AFAIK an assistant professor typically is on the tenure track, but doesn't have it yet (= they'll get tenure if they meet certain goals during this time)
Ah, you're correct. I was thinking about associate professorship, which it looks like is usually accompanied by tenure. Sorry, I sometimes get a little mixed up between which one of those is on the tenured/non-tenured side of things.
Wouldn't that pay scale push many talented people to work in private industry? At that pay it would be difficult to retain computer science faculty for example, where universities are already struggling with a shortage.
I think that's a naive view. It's not just academics optimising for money. It's academics optimising for a balanced relationship, or a family, or even just friends (since the current academic system will also often mean displacement).
I was very passionate about the work I did in academia, but I did not want to force my wife to live in a region where she could not really get a decent job, and my income would barely have been enough to support children in the early part of the academic career. I am strongly considering going back to academia in a few years, after the income doesn't matter as much anymore.
Making the financial disincentive even stronger would filter out a certain type of person, and is not a great way of achieving diversity of people.
This is what I'm going to do. I left working in academia in 2015, even though I'm still super fascinated by the research we were doing (although I thought our lab's application was a bit banal for my tastes, but it's what got the grants), and started expanding on it on my own in my free time while working in industry.
I'm about to quit my job and work on it again for fun.
Plenty of people who love science also want to be able to go on holidays, buy houses and have children. They want to live the kinds of lives people of their social class do. Your proposal would turn science into a career for the kind of people who were monks or nuns in the Middle Ages. Back then that was the way to live the life of the mind and the outside alternatives were much worse. Things have changed.
Not everyone can afford to pursue their passion at the cost of a cut salary. A common case would be the need to support a family, parents, relatives, etc.
In my view, $150k-$200k is already low. Anybody with the drive and focus to become a professor, can already earn more as a doctor, or even as a computer programmer. It's already the case that it only attracts people who are in it for the love of science.
In a similar fashion, lowering the incomes of classical musicians won't make classical music any more creative.
> It forces people to put in massive efforts early-career.
From what I understand and have heard, a lot of advancements in math and physics in particular have come from people that are quite young, often in their 20s. That's apparently when brains are at their peak; it seems to me that's the period in which massive efforts are most likely to pay off big.
Einstein was 26 when he published his annus mirabilis papers. I believe Newton was in his mid-20s when he began developing calculus.
Incentives, and the corresponding KPIs, are always a good place to start looking if one is trying to find out why systems and people behave the way they do.
When I read the title I thought the paper was about business ideas and similar things. Having quickly read the paper, citations are a likely reason for what the authors call "me too" science.
Well, yeah, but bold is nearly impossible to tell from crazy, and people really don't like being told that "actually, the most efficient management technique in this case is to throw money randomly at a lightly-filtered set of projects."
People want to believe that blue-sky research can be predicted, managed, and optimized, no matter the staggering mountains of evidence to the contrary.
That seems to be the problem, doesn't it? How to tell the truly crazy ideas from the simply bold ones. Especially as nobody "gets fired for buying IBM", which kind of applies to everything from buying and hiring, through startup funding, all the way to research.
A lot of things, but this paper points to one issue:
* In order to get an academic position, you need to have letters from a research community.
* In order to get citations, you need your papers to be used by a research community.
This gives a strong advantage to work done within existing lines of research. Virtually anything outside of one of the existing "academic cottage industries" or "mutual adoration societies" is at a huge disadvantage.
You end up with research communities going down long rabbit holes with tunnel vision, and big broad areas of potential research are never explored.
There is so much about papers like this that makes me skeptical.
1) It assumes we've been good at measuring growth, which to me is dubious. Our current system counts the production, deployment, and detonation of a bomb all as positive production while not counting unpaid domestic work. I know economists don't like to put a value on different types of services and goods because they feel they are putting a finger on the scale, but they are doing just as much by using a blind metric which ends up counting some work and not other work.
2) It never cost-adjusts growth. For example, all that wonderful growth in the middle of the 20th century incurred significant externalities. The system we have now tries much harder to make producers internalize their externalities. This curbs growth, which is not necessarily a bad thing.
3) It assumes we can atomize historic growth and single out what contributed to what. But this seems ridiculous, especially since these factors interact and are not necessarily separable. For example, we happen to live in a universe in which the movement of electrons can be used to transmit energy and apply it to many tasks. The story of the 20th century's economy is very much the story of how we mastered this one property of nature. Electrification doesn't just provide heat and light. It enables the transportation of water. It enables the construction of more and larger structures. It enables the creation of aluminum and plated metals. Aluminum itself allows many features of our world we take for granted, from airplanes to electronics. But does abundant aluminum lay at the feet of scientific innovations? In its infancy, for sure, but after some basic science it is widespread cheap electrification that allows us to mass produce aluminum. How much of that economy do we credit to cutting-edge science? It's not easily disentangled from the popular politics which allowed the mass construction of hydroelectric dams throughout the U.S. and other countries. There is a possible future in which, for centuries, growth slows to something above Renaissance levels but well below mid-20th-century levels. In such a future we might look back at this period as the time we mastered the single most useful physical property of our universe, and so of course it was an era of unprecedented growth.
I'm not saying they're wrong. But every time someone from Bill Gates to the NBER talks about this issue of growth (whether it has slowed, why it has slowed, how it can be increased), I just get this feeling that people are failing to realize what a unique time we live in, how unique the 150 years preceding it really are, and how little we know about why it happened.
Economists, policymakers, and scientists studying how to improve science output have themselves been hamstrung for decades because economists have to measure something, and for so long it's been nothing but bibliometrics.
Until we can come up with a way to measure the other outputs of science productivity, we're stuck with this citation-based machine that has all of this institutional and cultural inertia behind it. Which is why I've come to believe this kind of change or new introduction of value isn't going to come from within academia. E.g. novelty is in some fields directly at odds with what makes you a "productive" professional scientist.
Last year I was in the NBER's Science of Science Funding working paper session, and most of the datasets discussed are still heavily focused on patents, citations, and bibliometrics (https://projects.nber.org/drupal/SOSF/data).
I think the answer is that trust should be part of the equation.
"If you can't measure it, you can't manage it" is a description of how to keep track of a low-trust system. Scientists are generally conscientious people, and bean counting is demoralizing.
If there is some organizational solution, I think it should be to keep organization size small enough that it can be governed by personal relationships and trust.
Otherwise social trust takes decades to build up, and I don't think there is a quick fix.
things that have a chance to be quantum leaps for a software project get shot down. instead we choose incremental improvements with less risk of failure.
incentives driving this include performance review cycle length, focus on immediate impact, and the startup/vc fundraising norms.
First of all, I believe the stagnation today in scientific discoveries results from the lack of big, grand visions of the future from which people can draw inspiration.
Over the years, society has shifted toward favoring financial language and metrics in most of today's communications, instead of telling stories that are often associated with lifetime, generational experiences. Most things are calculated based on precise risks and probabilities, so naturally we opt for the least risky path. As a result, the system has evolved to favor incremental improvements rather than the much riskier exploration of uncharted territories.
In scientific publishing, this is represented by an over-emphasis on citations, which has become the main criterion by which publications are evaluated. Novelty, a desire for new experiences that can generate large and meaningful impact, or even simply playful experimental ideas are no longer valued as much. Citation counts have become the main currency of scientific publishing, and understandably this has led the community to prioritize incremental improvements.
The paper mentions that many seemingly irrelevant or uninteresting new scientific discoveries initially took a long time for the community to appreciate, but those very same discoveries later led to much bigger and more meaningful inventions, such as the gene-editing tool CRISPR we have today. It took 20 years for this to happen, counting from the initial discovery, and this is where the disconnect occurred.
In that sense, there is a great need to help propagate those initial discoveries, in both magnitude and speed, so that they can receive more attention from other scientists and the community. Novelty should once again be the main focus to drive motivation and inspiration. Scientific publications shouldn't just prioritize cold, hard metrics like citations, but should instead attach more metaphors and new visions of future possibilities that can excite and propel both the science community and the public.
More than ever, people today are craving that common naivety which used to connect everyone in the belief that the impossible can be made possible. That's precisely what has made Elon Musk and his companies so successful.
Perhaps initially, citation indexes were useful (i.e. they were a good metric to solve certain productivity issues), but lost their usefulness since people started to optimize and target that specific metric (along the lines of Goodhart's law).
There's a much much simpler explanation for the stagnation of real GDP. You have to look at energy consumption (joules / year). The correlations are uncanny and seem to explain just about everything.
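For what it's worth, the check being suggested is easy to sketch: correlate a real-GDP series with an energy-consumption series. The numbers below are placeholders, not real data; you'd substitute, e.g., World Bank GDP and EIA energy figures:

```python
# Placeholder series only; swap in real GDP (constant dollars) and primary
# energy consumption (joules/year) to run the actual comparison.
import numpy as np

years = np.arange(2000, 2010)
gdp = np.array([10.3, 10.6, 10.9, 11.3, 11.7, 12.1, 12.4, 12.6, 12.6, 12.3])
energy = np.array([98.0, 99.0, 100.0, 102.0, 104.0, 106.0, 107.0, 108.0, 107.0, 103.0])

r = np.corrcoef(gdp, energy)[0, 1]
print(f"Pearson correlation, {years[0]}-{years[-1]}: {r:.3f}")
```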
Part of it is also the research workforce composition. Research students are increasingly people without other good options, or who feel like they need to do it for status reasons. It is a form of low-paid labour that they take in preference to working low-status jobs or moving away from where they are currently living. Of course it is hard for them to take on bold projects; they are too busy trying to survive and ultimately aren't that interested in the material. The recent protests from UCSC students saying they can't afford the rent, and the constant blog posts from long-suffering adjuncts, are classic examples of this - why would anyone put up with these ridiculous arrangements? PIs don't care about this stuff; they need warm bodies to churn out papers, and the pay is so low it doesn't matter. This transition is enabled by the massive expansion of university footprint, staffing, and student count.
The higher degree has become the current generation’s desperate attempt to show they are making progress after the boomers.