Ramón y Cajal was a contrarian when this was written, but he had great timing. In the late 19th century, it was fairly popular to believe that all the laws of physics had already been established, and that remaining progress would come from improvements in experimental methods. There's a famous "physics is over" quote misattributed to Lord Kelvin (actually said by Michelson, the man who measured the speed of light).
A few years after this was written, Planck proposed energy quanta. And in 1905, Einstein published his four Annus Mirabilis papers, covering the photoelectric effect (an application of energy quanta), Brownian motion, special relativity, and the mass-energy relationship.
As is typical with contrarians, Ramón y Cajal said some things that held up well and others that didn't. In the same book this excerpt is from, "Advice for a Young Investigator," he also gave his view of theorists: "Basically a theorist is a lazy person masquerading as a diligent one because it is easier to fashion a theory than to discover a phenomenon"!
Well, it's true that we can't really tell how serious he was being. And it is worth remembering he was a neuroscientist who studied neurons. He was probably thinking of people who made complex theories about "how the brain works" without ever designing experiments to test them, not physics theorists.
Didn't we recently confirm gravitational waves by observing a pair of merging black holes, as theorized by Einstein 100 years ago? I think even Einstein would agree that it was much harder to discover than to theorize.
The experiment would never have been done without the theory, and without a long line of earlier experiments suggesting that Nature and the theory had a lot to do with one another. We knew that gravitational waves, or something very much like them, existed thanks to the Hulse–Taylor binary pulsar (discovered in 1974), whose orbital decay matches the energy loss general relativity predicts from gravitational radiation.
Einstein's advance was quite spectacular, and early.
Without the theory, how well do you think the funding submission for the experiment would have gone? Two laser interferometers a couple of kilometres long in specially built tunnels, with super-sensitive custom equipment costing millions of dollars, running for years, just on the off chance? Sure, here's the cheque!
So their method for finding the signal was to calculate the expected signal, subtract it from the data, and check whether the residual looks like noise... how is that good science? It seems like it would be too easy to make your data fit the theory.
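For what it's worth, the residual test is less circular than it sounds: only a few parameters (amplitude, phase, arrival time) get fitted, while the waveform's shape is fixed by the theory, so a wrong template leaves structure behind in the residual. A toy sketch of the idea (made-up chirp, white Gaussian noise, nothing like a real LIGO pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
t = np.linspace(0, 1, n)

# Hypothetical "chirp" template standing in for a theoretically predicted waveform.
template = np.sin(2 * np.pi * (20 * t + 40 * t ** 2)) * np.exp(-4 * (1 - t))

noise = rng.normal(0.0, 1.0, n)   # toy detector noise: white, unit variance
data = noise + 0.5 * template     # observed data = noise + buried signal

# Fit only the template's amplitude by least squares, then subtract it.
amp = data @ template / (template @ template)
residual = data - amp * template

# If the template matches reality, the residual should be statistically
# indistinguishable from noise (here: variance close to the known value 1.0).
print(abs(amp - 0.5) < 0.2)                # True: amplitude recovered
print(abs(np.var(residual) - 1.0) < 0.1)   # True: residual looks like noise
```

The fit has one free parameter against 4096 data points, so "making the data fit the theory" would fail loudly if the template's shape were wrong.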
There is a cool documentary about how Einstein was trying to prove relativity theory, and it involved a lot of work, a special telescope, traveling, war... I think it succeeded on the third attempt.
Imagine if he had died before proving it...
My point is, he didn't stop at the theory; he also designed an experiment and worked hard to get it proved.
Cajal's statement clearly doesn't apply to him.
I'll try to find the video, or if someone remembers please link it.
I would say that successful theorists are exactly those who discover phenomena. Saunders Mac Lane is perhaps the epitome in mathematics of someone who was guided by phenomena.
This is why category theory was not discovered; it was reverse engineered! The reverse-engineering steps were:
3. Natural transformations
2. Functors
1. Categories
Edit: Of course, when he said theorist I think he meant people who don't experiment physically.
Yes, but that would be a more generic description. All of mathematics abstracts knowledge that was previously only partially understood.
When we went from 1 coconut -> the set {1}, then we were being really abstract for the times.
But I think your point is that category theory synthesises group theory, linear algebra, topology, etc. into one concept, which was very much the spirit of the origins of category theory. However, Mac Lane and Eilenberg thought that their diagrams were just an aid to mathematics (much like a Venn diagram, Cayley diagram or a Feynman diagram). But when they realised that natural transformations are so ubiquitous and fundamental, they realised that their graphs were not just a useful shorthand, but in fact would lead to a whole new type of mathematics. When people thought (not Mac Lane though) category theory was "abstract nonsense", they were making this mistake of thinking that the diagrams are illustrations rather than concrete mathematics.
In the same way, you might think that {1,2,3} is just an illustration, but in fact it is a rigorous shorthand for a very specific set.
The real meat behind category theory are things like natural transformations and adjunctions. But to get to category theory from there, you do a kind of reverse engineering.
I would say theories and discoveries go hand in hand, or maybe leapfrog each other. They both travel in the direction of the fog at the edge of our understanding.
And like all such books, almost certainly it is wrong.
Backwards time travel and FTL are not measures of progress. Even within the field of "fundamental" physics:
1. There are lots of things we do not understand in cosmology (the cosmological constant, the nature of dark matter, matter/antimatter asymmetry, force unification at very high energy scales, gravity at high energies, etc). Each of those could potentially revolutionize our understanding of the universe.
2. There are lots of things we do not understand at small scales (the Casimir effect/vacuum energy relationship, Planck-scale effects, why the particle soup, gravity at very small scales, the reason behind asymmetry in helicity/weak interaction and other parity/symmetry related effects, doing "useful" calculation with the renormalization group, etc). Each of those could potentially revolutionize our understanding of the universe.
There is also a lot to be done in our understanding of computing (as in, the nature of computation):
1. Computation related problems (Church-Turing thesis, novel algorithmics + computing platforms such as quantum computing). Is approximately correct/probabilistic computing a loophole for getting essentially/mostly correct results in P time for NP-hard problems? Nature of AGI/what enables sapience when doing computing.
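On the "approximately correct" angle: for many NP-hard problems we already have fast algorithms with provable approximation guarantees, which sidesteps rather than resolves P vs NP. A minimal classic example (toy graph made up for illustration) is the greedy 2-approximation for minimum vertex cover:

```python
def approx_vertex_cover(edges):
    """Greedy 2-approximation: for each uncovered edge, take BOTH endpoints.
    The returned cover is at most twice the size of an optimal cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Toy graph: a path 0-1-2-3-4, whose optimal vertex cover is {1, 3} (size 2).
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
cover = approx_vertex_cover(edges)

assert all(u in cover or v in cover for u, v in edges)  # every edge is covered
assert len(cover) <= 2 * 2                              # within 2x of optimum
print(sorted(cover))  # prints [0, 1, 2, 3]
```

Of course, a guaranteed 2x answer in linear time is not the same as an exact answer in polynomial time, which is why this is a workaround rather than a loophole.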
Of course, as we go into "less fundamental" sciences like chemistry/biology/etc., the amount to be learned is just overwhelming; we truly know very little.
Not a physicist so excuse the ignorance, but do we understand gravity at all?
I mean, AFAIK we can observe and predict its behavior, but do we understand what underlying force causes it, and whether it potentially has a counter-force?
Well, at sufficiently low energies and sufficiently large length scales, we can say gravity is how spacetime curves under the influence of energy (principally from gravitational mass; another mystery is why gravitational and inertial masses are proportional to each other). This picture is called general relativity, and it works very well across a wide range of regimes (from the motion of planets and galaxies to the merger of black holes). However, there should be a force-carrier description (gravitons as the quanta) at small enough scales or very large energy densities, and there we fail to make the math work (because we expect certain properties/symmetries from gravitons).
So the word "understand" is a bit loaded. GR is a certain understanding of how gravity works, but it is not a "quantum mechanical" understanding.
The difference between the problems physicists were tackling at the turn of the 20th century and the ones we're trying to tackle today is that we don't have the technology or the resources required to build the kind of experiments that would allow us to study these phenomena at a meaningful scale and test out our hypotheses.
But the same was true 100, 200, 300 years ago. There were always limitations of some sort, mostly technological. I'm not saying that we'll build solar-system-sized colliders any time soon, but in 50-100 more years we should be much, much more sophisticated in terms of what we can do in space and back on Earth.
This is not clear to me at all. I don't think "bigger particle accelerators" is what we need, and lots of the questions I posted are not necessarily solvable by particle accelerators. We do need new experiments, but probably in a sense of "new ways to observe the universe" rather than "old ways to observe the universe scaled bigger."
I personally believe a lot of that stuff cannot be further studied unless we first solve other problems in our society. I'm saying that we need things like sustainable energy in massive quantities, significant automation, global unification and standards, and higher minimum education levels.
I'm saying: imagine 50% of the population works in blue-collar general labor or semi-skilled fields, and in this hypothetical world all those jobs are handled by autonomous robots. We also have green power that is sustainable, storable, sufficient for even double the population, and can be held at high density in low volumes. There are now innumerable sectors of the economy that no longer need people, which leaves people more time to take extended periods to learn and study. I mean quite literally a Star Trek "post-scarcity world" in a lot of ways: people use their time to further themselves and spend it on cultural or scientific endeavors. Life is no longer about struggle and survival, since money would clearly have no value if anything and everything can be made or consumed for free. It's really interesting to think that the only "conflict" left would be between people trying to min-max life in terms of achievement. There would be no achievement in religion, money, or ownership, since everybody can have those.
Ultimately what I'm saying is that a lot of our advances are contingent upon other sectors becoming automated, allowing more people to move into academic sectors.
The 50% of the population who are blue-collar workers aren't going to retrain as particle physicists or theoretical computer scientists once they get their UBI.
I mean, we could hash out hypotheticals all day. I'm just saying that the needs of the economy would shift to where a PhD basically becomes the new high school diploma. In essence, entry-level jobs would be ones that today would legitimately require a master's or higher.
I'm not saying that 100% of that 50% will be employable in this world. I'm saying that over time, that level will inevitably become the bare minimum. The way I see it: 150 years ago, the idea that we could get all children educated to an 8th-grade level would have seemed impossible, yet here we are, having made a high school diploma the bare minimum.
Eventually your master's thesis will be an area for you to study and pursue in an attempt to further society.
I am inclined to say the real problem is that smart people don’t have kids. But we’ll probably figure out how to make all babies intelligent before that becomes a problem. Incidentally, the problem you mention will also be solved in the same way.
Genuine question - what do you mean by smart people? High IQ? Deep knowledge about mankind's place in the Universe / natural order? Or their ability to maximise their own personal happiness over their lifetime? If the last point, then personal experience would suggest that having kids makes you a happier person (once they are sleeping through the night). YMMV.
Pretty much any study of the past century and any projection from reputable sources says that populations reach equilibrium and stop growing as the level of education, in particular education of women, rises.
This is a non-problem that has always taken care of itself in any developed country and we have no reason to believe it will not take care of itself in the developing world as well.
The UN for instance does not believe there will be 10 billion humans on earth ever (where "ever" means "as long as projections have any value").
I wouldn't quite call world population growth a non-problem, considering it is already a problem. The future predictions may well turn out correct, and have some solid reasoning behind them, but the numbers still look fairly crude to me. Plots of historic growth rates are bouncing up and down, while the prediction is they will suddenly turn a corner tomorrow and drop monotonically to zero and all will be peachy. How often has that happened? Considering the current record-breaking population numbers, the ultimate answer is it has never happened, at least not permanently. There's a lot of risk in betting on things to just work themselves out. Development can actually work against us in a sense, as new food production technology and cures allow faster-than-ever growth in new regions and populations.
Below replacement fertility is a recent phenomenon (last couple of generations). Evolution works in multiples of generations. We’re at the very beginning of evolving resistance to modernity.
Education, quality of life and equal opportunities for women. As these increase, the birth rate goes down. Most countries that score highly in these areas actually have below replacement birth rates and only grow in population due to immigration.
Why are time machines and Starship Enterprises a good standard of scientific progress? Why not immorality, sentient computers, and stuff like that, which is equally science-fictiony but possible?
"Immorality" is not only theoretically possible, it has been thoroughly mastered by many ;)
I think there's a noteworthy distinction between science and its applications. In my mind, science is about understanding the world, whereas fields like engineering/medicine are about their practical applications.
I do think that there's a tremendous amount of progress that could be made in sciences like Biology, Psychology etc. But I would draw a distinction between the things that fundamentally change the way we understand the world, vs building really cool toys that we would love to have.
I think the point of the "End of Science" argument is that any discoveries in biology (and physics) will merely be elaborations of basic principles that already exist, rather than elucidations of any as-yet undiscovered principles.
For example, CRISPR. Many people think CRISPR was an amazing discovery, but really, it's a biological system that has existed for a long, long time; a collection of smart people realized that with some engineering it could be used for precise, effective genetic modification without needing to engineer custom proteins to bind specific sequences. That seems fundamentally different from, for example, the experiments that established DNA as the molecule of heredity, when nobody had any idea how DNA could encode information.
DNA as the molecule encoding hereditary information is also "merely" a discovery of an ancient biological system. However, it's not as though physics predicts the existence of DNA specifically, or of CRISPR, yet these things are important for understanding biology, and in CRISPR's case it's been turned into a technology that humans can use. Which is why I have a lot of complaints about commonly held beliefs such as this one:
> merely be elaborations of basic principles that already exist, rather than elucidations of any as-yet undiscovered principles.
This is not a meaningful or thoughtful examination of even chemistry. The 3D structure of proteins is "merely" an elaboration of physical properties, yet "physics" doesn't have the tools to make much progress on solving the 3D structure of a sequence of amino acids, despite folding being a purely physical process.
Is the world "physical" in the sense that probably don't have new fundamental forces of nature? Of course. That doesn't mean that physics helps understand much of the physical world, because the "elaboration" in the "merely elaboration" has nothing to do what physicists or other scientists consider "physics."
No, the elucidation of the structure of DNA wasn't merely the discovery of an ancient biological system. It was the recognition that the structure is formed by antiparallel strands encoding information in a reversible molecular form; that represents a real level-up in human understanding of the universe. That's the whole point of the throwaway sentence at the end: "It has not escaped our notice (12) that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material."
Also, we can simulate protein folding well enough from classical physics and quantum approximations such that "rapid two-state folders" are considered solved. That was a major outcome in the course of my career, to which I contributed significantly :)
If we can simulate protein folding well enough, why was the Google announcement last year such a big deal?
I worked in protein folding over 30 years ago at EMBL, and have loosely followed it since. I could easily have been led astray, but I was absolutely not under the impression that we can do this even close to "well enough".
I work at Google and did protein folding before the Deepmind (not Google) announcement.
The CASP results weren't really a big deal. It was a modest advancement using techniques that were already spreading throughout the community, coupled with a skilled team that understood the score metric very well.
Two state folders can be reversibly folded using empirically determined force fields (two state folders basically go from "any totally unfolded configuration" to "fully folded single structure" in milliseconds); we can just run simulations and let the (quantum-inspired, classically embedded) physics do the folding, or we can use other techniques, like Rosetta (monte carlo plus lots of empirical data from known structures), or evolutionary data-based techniques (like Deepmind and others used).
If I'm filling in the blanks here correctly, what we're still far away from is determining the folded configuration of an arbitrary polypeptide. Is that correct? Or have there been real breakthroughs there? 10 years ago, when I last checked in with some folk I knew from EMBL, this still seemed to be a complete pipe dream.
Is there a paper that describes the parameters of the peptide structure that go into the "physics do the folding" part? When I was at EMBL, I was focused on using local hydrophobicity to see how predictive it was (not at all). Is the physics model operating at this level, above it, or below it?
> But I would draw a distinction between the things that fundamentally change the way we understand the world, vs building really cool toys that we would love to have.
It may very well be that understanding emergent phenomena at the appropriate level of emergence will turn out to be vitally important, and that reductionism (while undoubtedly useful in many scenarios) is impeding our understanding of emergent phenomena like consciousness and evolution.
Obviously, we are discussing immortality of non-human species.
However, we more or less understand that morality of larger lifeforms is encoded in our DNA (e.g. telomeres). Mortality seems to be a defense against cancer.
There is no particular reason that a human needs to grow old, except for the accidents of evolution.
They do say “laughter is the best medicine”, and morality is obviously funnier than mortality. No mere telomere can tell me my mortality, as long as I’m defending myself by laughing at spelling errors.
Did I tell you about the time, in 7th grade biology class, the kid sitting next to me was asked to read something aloud from the textbook about ”organisms”, but of course she said “orgasms”. She might have died a little right there, whilst the rest of us got her energy recycled as a power-up.
Mortality is not a defence against cancer. Mortality is a tool that allows species to adapt more speedily to a changing environment. Sexual reproduction is another such tool; it ensures greater diversity in future generations, so that some descendants will adapt better and leave more, better-adapted descendants of their own.
Btw, in this sense cancer is just another tool, that limits lifespan and ensures generations change. Of course, it didn't appear as such, but most species have no natural incentive to develop a resistance to it.
>"Btw, in this sense cancer is just another tool, that limits lifespan and ensures generations change. Of course, it didn't appear as such, but most species have no natural incentive to develop a resistance to it."
Not exactly, and I believe there are a rare few species that are much, much more resistant to it and as such have become subjects of research (through TP53 in elephants, or P16 and P27 in naked mole rats).
At the end of the day, this natural incentive depends on when cancer can appear (generally right away, at every step of the cell cycle) and how likely it is, which probably depends on the turnover and number of cells of a particular type (or in the organism overall, which one would assume increases as the organism grows), and additionally on how likely it is to inhibit reproduction.
As it stands, I'd say that whilst cancers are not too inhibiting on this front (for example, humans are most fertile at a relatively young age, well before most cancer occurrences become a problem; we've just extended our lifespan quite a bit), they still can be (kids can die of cancer too), and thus an evolutionary incentive against cancer, however minor, is present.
I just said time machines and faster than light travel are unlikely to ever occur; I do think life extension and AGI are not technically impossible, but rather inevitable (my training is in biology, and I work on machine learning).
For the first two, we'd need radically different physics from the current model, while the last two seem like reasonable extrapolations from modern technology.
The nature of a paradigm shift is that it recasts all the existing laws of nature as special cases of a more powerful, more general model of reality.
Who knows, maybe we'll find that we actually are living in a simulation and then figure out how to hack the matrix. The idea of "travel" and "time" would become obsolete then; you'd just poke new values for your wave function into the simulation's RAM.
This seems implausible for many reasons, it seems more likely that if we're living in a simulation, we'll have trouble figuring out how to proloxify the feeblegarps.
Yes. The idea is that if we "break out" of whatever simulation we purportedly exist within, none of the concepts in the enclosing universe would make any sense to us.
(I happened to be watching How A Plumbus is Made when I wrote the comment, btw).
A radically different paradigm of physics would result in technologies that can't be imagined in the present paradigm. For example, nobody had even remotely guessed that transistors were possible until well into the development of solid state theory.
We have plenty of unrealized technologies that can be imagined in the present paradigm but that we're not exploiting yet (see, for example, recent advances in 2D topological materials).
(Based on my understanding of transistors, the first ones were conceived before the theory for them existed, and the first ones were built around the same time the quantum theory for them was expressed.)
You don't send a message if the receiver can't understand it.
People need a point of contact with that extrapolated future for a work to become popular science fiction; even the culture in far-future fiction is usually pretty similar to our own (or at least to that of the moment when the book was written).
Present works (not the ones with universes inherited from older ones) are updated to our current expectations of the future, so you have sentient computers and other "possible" technology; in 50 years we will probably have a different set of standards, not something as naive as what stood as possible 50 years before.
Why not better understanding of complexity and complex systems?
The assumption that any of these new technologies would be desirable and create a net positive effect in the world sounds very naive after seeing the results of something as simple as "connecting the world".
We need to have a better understanding of how new technologies interact with our existing technologies (including institutions and communities) and our environment, or else we risk (further) destabilizing everything that has allowed us to get this far.
Personally, I believe that improving the techniques we use to study complexity is the most important thing in science today. In many fields we are now drowning in tons of high-quality data, yet scientists struggle to store, process, and turn that data into knowledge.
The origin of life, why life is evolvable, the evolution of the complexity of the eukaryotic cell, and multicellular consciousness/intelligence are, to me, the big unanswered questions in science.
Although life can be reduced to chemistry, and chemistry to physics, I feel we are missing some high-level self-organizing principle of the universe.
Sorry, could you explain why you think life is not evolvable exactly? Assuming you take the existence of a single celled organism with DNA as a given (we still don't know the origin of life), evolution gets you the rest of the way rather nicely. Notably, "life" usually contains the assumption that it is evolvable as part of the definition. If the children of the organism can't adapt to the environment, we don't consider those things to be "alive" (e.g. a 3d printer that can print a copy of itself isn't alive).
As for the origin of life, all serious scientists are on board with abiogenesis, though we don't know the mechanism. Every year, new science comes out showing how microscopic fluid droplets with organic compounds, plus the natural environment, can result in behavior that looks similar to a cell.
For example, this one shows fairly interesting "cell-like" movement without any life, and there was another last year that proposed a possible abiogenesis of cell walls through evaporation, with organic compounds sucking large molecules into the interior as they evaporated.
We know that life is evolvable because life exists and we know the biochemical mechanisms involved (DNA + cellular biochemistry).
Evolution implies a relatively smooth path through "DNA space" from, say for example, an early single cell eukaryote to a mushroom.
However, the search space is enormous. Even if we account for billions of years of evolution and trillions of evolutionary experiments each year, a simple random walk with selection through DNA space should go nowhere because of the numbers involved. The curse of dimensionality[0] means there has to be some other principle of nature to make the search space yield a path from one viable life form to another. The search space of life would have to be 'smooth' in some sense. That 'smoothness' is something we don't understand.
If DNA space is just 256 bits (as a dramatic simplification), then 2^256 is a very very big space to search just by chance [1].
Now imagine a space orders of magnitude bigger.
This is an interesting way of framing the idea, but it's not a question of traveling in DNA space from some point (eukaryotic cell) to another specific point (mushroom): that would be very difficult in the way that you're talking about.
Imagine flipping a fair coin 256 times. The particular outcome ('HTTTTHHTTTTTTTTHTHHHTHHTHTHHHHH...') is extremely difficult to replicate, but getting any outcome is very easy: just flip the coins again. In this case we also have a lot of selection bias: all the paths through DNA space that don't result in intelligent life don't result in anyone having this conversation.
Regarding the curse of dimensionality: it's a statement about the available data rapidly becoming sparse in high dimensional spaces. It doesn't really say that high dimensional spaces are necessarily sparse, it's just hard to "fill" them in with the amount of data available.
Comparing evolution to a random walk with selection doesn’t quite sit right with me. In practice much of evolution occurs via gene duplication and recombination. At that point you can evolve complex changes very quickly. Evolving novel phenotypes is much easier if your starting material is an existing functional gene. Many motifs can be reused and reapplied.
Comparing a mule with its parents shows how much novelty can be produced in a single generation (in this case an evolutionary dead end, of course).
It's not really a random walk in any way. Having designed artificial evolutionary systems, even if you screw up the implementation and the search space is really bumpy, it usually still makes progress, albeit very slowly.
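To make the disagreement concrete, here is a toy (1+1) hill climber on a 256-bit "genome" (everything here is made up, and in particular the single-peak fitness function assumes exactly the kind of smooth landscape being debated). Blind sampling of 2^256 states is hopeless, but keeping any non-worsening single-bit mutation reaches the target in a few thousand steps:

```python
import random

random.seed(1)
N = 256  # genome length; the full space has 2**256 states

target = [random.randrange(2) for _ in range(N)]  # the "viable" genome
genome = [random.randrange(2) for _ in range(N)]  # random starting point

def fitness(g):
    # Number of bits matching the target (a maximally smooth landscape).
    return sum(a == b for a, b in zip(g, target))

steps = 0
while fitness(genome) < N:
    child = genome[:]
    child[random.randrange(N)] ^= 1      # mutate one random bit
    if fitness(child) >= fitness(genome):
        genome = child                   # selection: keep non-worse children
    steps += 1

# Expected step count is roughly N * H(N/2), i.e. a couple of thousand,
# nothing remotely like 2**256 blind guesses.
print(steps < 20_000)  # True
```

Whether real fitness landscapes are anywhere near this smooth is, of course, exactly the open question raised above.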
Physics might be bumping up against limits of knowledge, but I think it's quite short sighted to claim that this means all of Science is bumping up against fundamental limits.
Just because many of the questions we want to solve today are of practical significance (inventing new medicines, perhaps) doesn't make it any less scientific.
Indeed, almost 20 years after the Human Genome Project, we have only scratched the surface on how to understand what any particular genes are doing, and are very far from doing anything more than "hacking" on existing genes, let alone writing a biological program from the ground up.