The Truth About Nuclear Fusion Q Factors (backreaction.blogspot.com)
90 points by terryf on Oct 6, 2021 | hide | past | favorite | 116 comments


It's a valid point that ITER will not produce net power, but it's still a big step forward.

Years ago I worked for a brilliant physicist who believed (usually correctly) that any factor less than an order of magnitude was just an "engineering problem".


Unfortunately for that viewpoint, the power density of a DT fusion reactor is at least an order of magnitude worse than commercial fission reactors. PWR reactor vessel = 20 MW/m^3, ITER = 0.05 MW/m^3, ARC = 0.5 MW/m^3.
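Just dividing the quoted figures through to put the gap in perspective (a trivial sketch, using only the numbers above):

```python
# Volumetric power densities quoted above, in MW per cubic metre
pwr, iter_, arc = 20.0, 0.05, 0.5
print(f"PWR / ITER = {pwr / iter_:.0f}x")  # 400x
print(f"PWR / ARC  = {pwr / arc:.0f}x")    # 40x
```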


Did you read TFA? The author is not arguing that it's not worthwhile to fund fusion research; she's arguing against the misleading discourse surrounding it. "But it's still a step forward" is exactly the reason that science is misleading the public.


Yes. I RTFA and saw that. I was not disagreeing with her.


Good documentary on ITER's claims by investigative journalist Steve Krivit: https://www.youtube.com/watch?v=xnikAFWDhNw It shows that the public claims are not just an exaggeration but outright fraud for the purpose of getting more funding. It's one thing to say that the purpose is experimental research; it's another entirely to claim that it will produce net energy, which is not backed by any evidence or even a reasonable scientific analysis. There's a long history of fraud in fusion research, and it's really important that the public is well educated on the history here.


I think the gap is not as wide as the author purports. I've followed fusion research as a layperson for years, and I clearly understand that the point of these designs is to prove the most meaningful piece of the process of generating energy, not to actually do it, and that it would be a lot more work going from here to there.

I think the summary on fusion is:

- it's the most amazing energy source we have lined up in the mid term

- it's almost certainly going to work

- we are making meaningful but slower progress than we would like

- it's dramatically underfunded


My thinking on fusion is quite different. It's a technology that used to be considered the default future, but never made a lot of practical sense when examined closely, and now the justification for continuing to pursue it is becoming increasingly threadbare. In a decade or so this will become impossible for even fusion supporters to deny as the alternatives crash in cost.


I have a question. I've read a book recommended on here about fusion, called "The Future of Fusion Energy". (It was very good; I can recommend it for a general overview of the topic and the history of progress.)

But I do not understand the general approach to plasma-vessel construction, in particular why it is so passive. Either the vessel is "bent" at construction time into the orbital shape dictated by the plasma physics (Wendelstein, stellarators, etc.).

Or the containment vessel's magnetic field strength is increased (scaling up the reactor, high-temperature superconductors) until it can hold all the growing chaotic fluctuations inside no matter what.

My question: Is there not a more active, less brute-force approach to containing the plasma? A sort of "whack-a-mole" countering of escape events as they happen in a tokamak?

If I understood correctly, the system constantly adds energy to itself (it's a reactor, after all), and any chaotic butterfly flapping its wings on the plasma current can grow into a tornado that breaches the containment and ends fusion.


I'm a PhD student in robotics at Carnegie Mellon working on exactly this. It's extremely challenging for a few reasons:

- the dataset is a mess. The experiments conducted on the tokamak we have access to were done for many different reasons and under many different machine configurations, so there is no clear method for disambiguating which dynamical changes are due to differences in the system vs. underlying dynamical truths

- the simulators available are very slow and not that accurate

- the physics is hard enough that it's not possible to derive a controller in closed form (obviously)

This implies that we need a version of reinforcement learning or model-predictive control that is substantially more robust and sample-efficient than currently exists. We're working on that but obviously it's an open research problem.
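For readers unfamiliar with model-predictive control: a toy receding-horizon loop on a made-up scalar linear system, nothing plasma-specific, with a brute-force action search standing in for a real optimizer:

```python
import numpy as np

# Toy MPC-style loop: 1-D system x_next = a*x + b*u with assumed, made-up
# dynamics; at each step pick the action minimizing a one-step cost.
a, b = 1.05, 0.5                     # mildly unstable toy dynamics
x, target = 1.0, 0.0
for step in range(20):
    candidates = np.linspace(-2, 2, 81)              # crude action grid
    costs = [(a * x + b * u - target) ** 2 + 0.01 * u ** 2 for u in candidates]
    u = candidates[int(np.argmin(costs))]            # best one-step action
    x = a * x + b * u                                # apply it, observe new state
print(f"state after 20 steps: {x:.4f}")
```

The hard parts the comment describes (messy data, slow simulators, no closed form) are exactly what this toy version sweeps under the rug: here the dynamics are known and cheap to evaluate.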


I'm assuming disruption mitigation is a big part of what you're looking at? Controlling a normal state plasma would be nice, but recognizing incipient disruptions is absolutely critical if ITER (or ARC) is to function at all. One unmitigated disruption could break ITER.


Yeah, so the paper I linked on contextual Bayesian optimization (https://papers.nips.cc/paper/2019/hash/7876acb66640bad41f1e1...) does a combination of controlling for beta and optimizing linear MHD stability, which is part of the problem. One of our collaborators has been working precisely on disruption prediction and mitigation, and it's on most of our minds: https://www.pppl.gov/news/2021/artificial-intelligence-helps....


Any papers you can link to on this? Sounds interesting.


I'll repost my comment from the other thread:

Papers I would recommend from our collaboration on control of normalized plasma pressure: https://papers.nips.cc/paper/2019/hash/7876acb66640bad41f1e1...

plasma profile transport modeling: https://iopscience.iop.org/article/10.1088/1741-4326/abe08d/...

hybrid dynamical modeling of gross plasma quantities: https://arxiv.org/abs/2006.12682

uncertainty quantification for plasma dynamics: https://arxiv.org/abs/2011.09588

It's still early days for this work and for us but we're looking at pushing reinforcement learning in methods and engineering to solve this problem.


I thought the design of the Wendelstein was actually to be able to control the plasma more completely by leveraging modern computing. Its design allows for a lot more input than a simple Tokamak would. But I only have a very superficial understanding of this.

I don't understand why you're saying the Wendelstein follows the path of the plasma; it's not like the plasma has a shape it wants to be in. You just sort of push it the way you want it to go, and wherever it goes you push it the other way again, to make it go in a circle.

What shape other than the stellarator would someone choose if they wanted even finer control over the plasma?


There is a fusion startup called CT Fusion that has an innovation in this vein. [Warning: I'm not a plasma physicist; this is just what I understand from talking to actual physics nerds who follow this.] Instead of using giant magnets for plasma confinement, they use a continuously variable magnetic field provided by helical coils surrounding the plasma. By monitoring the shape of the plasma surface and varying the electrical input to the coils, they play whack-a-mole, as you put it, with the shape of the plasma.


ITER has/will have a multitude of different control systems to control the inherently unruly plasma too. As I understand it, any tokamak will have to do something like that.

https://www.iter.org/newsline/-/3297


Poloidal field coils in ITER (and really any nuclear MCF machine that will ever be made) integrate dynamic control. They operate on the microsecond timescale, which is expected to be fast enough for the anticipated instabilities. You'd never move a vessel that fast. It's my understanding that instabilities aren't heavily affected by the vacuum wall's location anyway.


> Is there not a more active, not brute force approach for containing the plasma?

I think General Fusion's approach is interesting, they're compressing the plasma with a liquid metal wall.


This problem is clearly a fabulous use case for machine learning.


Yes, but perhaps more so in the more traditional ways. Stellarator geometries have several layers of optimization. You optimize for a last closed flux surface weighing MHD and turbulence models, then you optionally optimize nested flux surfaces, then you optimize single-filament coils, then you optimize multi-filament coils. The computations are all expensive, but more efficient optimizations buy performance for less computing.


I don't think so. Don't prematurely optimise. First thing to try is to hook up an RL algorithm and let her rip. If that doesn't work move on from there.


Like I said above, we certainly hope so! It has been slow progress so far, but applying modern ML / control techniques to tokamaks is one of the truly exciting applications of the current generation of AI, in my opinion. Biased, because this is literally what I do all day.


Do you have a website or any papers on your work I could read?


I need to redo my website; I'm just getting into the more public part of my PhD. Papers I would recommend from our collaboration on

control of normalized plasma pressure: https://papers.nips.cc/paper/2019/hash/7876acb66640bad41f1e1...

plasma profile transport modeling: https://iopscience.iop.org/article/10.1088/1741-4326/abe08d/...

hybrid dynamical modeling of gross plasma quantities: https://arxiv.org/abs/2006.12682

uncertainty quantification for plasma dynamics: https://arxiv.org/abs/2011.09588

It's still early days for this work and for us, but we're looking at pushing reinforcement learning in methods and engineering to solve this problem.


Do you have a GitHub repo for the controller software? Will it be fiber optic to the sensors, and current control by the AI?

Have you considered adversarial training? One AI tries to destabilize the process; the other trains not against a simulation but against the destabilizing input and a success metric.


As I understand it, the bigger problem around fusion that we're still working towards fixing is being able to maintain an actively fusing plasma. All of these Q measurements come from very short runs, sub-second I think. ITER has a goal of 400-second runs, which sounds more like it. Once we understand plasma physics and confining-magnet behavior well enough, it will hopefully be more straightforward to optimize the energy going into the plasma, the magnets, the cooling system, etc., to achieve a Q-total > 1. And maybe there will be enough heat generated that it's necessary and reasonable to attach a boiler, hook it up to a steam turbine and generator, and make some electricity.


W7-X was supposed to demonstrate a 30-minute run in 2021, but that's been pushed back a year.


Mostly because of the you-know-what.

W7-X is not a tokamak nor a nuclear machine. It's designed to run all day and no one on the planet doubts it. The upgrade to get it to 30 minute pulses is primarily to the heat dissipation subsystem. The ECRH subsystem of W7-X is eyewateringly beautiful.


This is what I like about scientists, they can change their minds based on the evidence. Well done, Sabine!


To be fair though, I don't believe Sabine is involved in any nuclear fusion research, so changing her mind on this doesn't necessarily put her own funding at risk. Changing your mind is easy when your next meal doesn't depend on continuing to believe something.

In fact, she might even stand to gain if funding gets diverted from fusion research towards other projects.


If she were, she would have realized how ridiculous it is to directly compare Q performance between MCF and ICF machines, to use Q as the metric to track progress, and to never once utter the words "triple product". This piece has about as much investigation as I put into the doneness of my toast.


Can you expand on that?

I find the claim almost trivially true: we care about electricity production, so that should be the main goal we're tracking.


A high-Q machine is expensive. It's by nature a nuclear facility, which adds a few zeroes to the cost. Nuclear fusion machines need tritium fuel, which is made in exactly one breeder on the planet. In general, a fusion machine merely being nuclear doesn't add much scientific value.

The plasma physics for making a better (cheaper) reactor has been pushed in smaller machines, as shown by better triple product values and empirical scaling laws. Triple product is a measure of how well the plasma is confined. That's the metric to focus on (when accounting for the fact that the science machines are small with scaling laws). In that regard, we have made excellent progress.
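For anyone unfamiliar with the term: the triple product is density × temperature × energy confinement time, and for DT the ignition-relevant target is commonly quoted around 3×10^21 keV·s/m³. A toy check with made-up example values (not any real machine):

```python
# Toy Lawson triple-product check; DT target commonly quoted ~3e21 keV*s/m^3
n, T, tau = 1e20, 15.0, 1.0   # density (m^-3), temperature (keV), confinement (s): made-up values
triple = n * T * tau
status = "above" if triple >= 3e21 else "below"
print(f"n*T*tau = {triple:.2e} keV*s/m^3 ({status} the ~3e21 DT target)")
```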

A high-Q machine (such as ITER) is valuable because it lets us study burning plasmas. It's an expensive pill to swallow but an essential step towards making a real reactor. Qtotal = 1 is a nothingburger. Qplasma = 1 is also a nothingburger, but at least going past that barrier grants new knowledge.


ITER isn't meant to have Q-total > 1, though; it is meant as an experimental platform, not energy generation. I wonder if the author has similar thoughts on the promise of smaller reactors that take advantage of HTSC magnets. Should they have an easier time achieving breakeven Q-total?


Her point is that almost all coverage of fusion elides the difference between Q-total and Q-plasma. I didn't know how big the difference was until reading her article and I'm guessing almost no one else does either.
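A back-of-envelope sketch of why the two Qs differ so much (the efficiencies here are assumed round numbers, purely illustrative):

```python
# Q_plasma = fusion power / heating power absorbed by the plasma.
# Q_total also has to pay for wall-plug heating efficiency and
# thermal-to-electric conversion, so roughly:
#   Q_total ~= Q_plasma * eta_heating * eta_electric
q_plasma = 10.0      # hypothetical plasma gain
eta_heating = 0.4    # assumed wall-plug -> plasma heating efficiency
eta_electric = 0.4   # assumed thermal -> electric efficiency
q_total = q_plasma * eta_heating * eta_electric
print(f"Q_plasma = {q_plasma:.0f}, Q_total ~ {q_total:.1f}")
```

So under these assumptions a headline "Q = 10" plasma still only barely breaks even at the wall plug.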


Her point is that the "Q" that has been used to sell ITER leads the public and policy makers to believe it is Q-Total, not Q-Plasma.


And beyond Q-Total is ROI: the ratio of $ in to $ out. An ignited fusion reactor with very high Q could very well still be below economic breakeven.


> should they have an easier time achieving breakeven Q-total?

I believe that's why they're being made: Q is a function of both physical size and magnetic field strength, and the new superconductors are useful for increasing the latter so the former can be reduced.

But I have no idea if any of them are currently attempting Q_total > 1 or not: too much hopeful PR for me to know what stage they’re really at.


No currently operating device is 'attempting' to achieve Q_total > 1; they are all physics experiments, not simply searching for a magic combination of knobs that will ignite the plasma.

The Europeans are doing quite a bit of preliminary design scoping and engineering for DEMO, a 'demonstration' reactor which is supposed to come after ITER and generate net electricity, but there's not been a site selected, for example. Similarly, Chinese researchers are working on their CFETR, 'Chinese Fusion Engineering Test Reactor'.


Those are all (government funded?) mega-projects.

I’m not talking about that category.

I’m talking about private sector stuff by groups which claim developments in superconductors means they can do it privately for much less, e.g.: https://arstechnica.com/science/2021/09/mit-backed-fusion-st...

I lack the knowledge to tell real projects in this sector from snake oil.


How could they possibly be attempting Q-total > 1 when they aren’t even near Q-plasma = 1 yet?

If the gain of the system overall is greater than Q-plasma, you’ve invented some other energy source in there somewhere and should throw away your reactor and focus on that thing. :D


I’m not sure what you’re arguing against here. The new fusion startups claim the tech now exists to make compact useful Tokamaks. I’m not a physicist of any kind, so I don’t know what they think their MVPs are. I don’t know, and am not claiming to know, if they’re jumping straight to {Q_total, Q_plasma} > 1 or if they’re going to make a mere Q_plasma > 1, Q_total < 1 “fund-raising only” demo reactor first.


I’m responding to “but I have no idea if any of them are currently attempting Q_total > 1 or not.” My point is that literally nobody is going to set their target on Q-total > 1 without first setting a target of Q-plasma > 1. It would be like wondering if a rocket company was setting their sights on landing on the moon straight away or if they were going to attempt to escape earth’s atmosphere first.

Edit: one might think Q-plasma > 1 is just an internal milestone, not a public goal, but it’s not. Fusion startups are all going to be forced to make a public fuss over Q-plasma > 1 because the amount of funding you need to get from there to Q-total > 1 is MASSIVE and that milestone will be an investor aphrodisiac.


It feels like the 2000s all over again, with ITER making headlines alongside some dubious claims of fusion being around the corner.


Fusion research is fundamentally good research to pursue, and vigorously. But in terms of energy for the world, we just happen to have an already-made fusion plant some 92 million miles away. It's just an engineering problem of how to get more of its wasted energy onto the Earth using space solar arrays. (Only one-billionth of the Sun's total energy output actually reaches the Earth, per a Google search... which seems too high, honestly.)


Space solar is now something of a fossil idea from the '70s. At the time it was thought that solar panels would be comparatively expensive, and space launch / space manufacturing would be comparatively cheap. But the economics changed out from under its feet: solar panels became a hundred times cheaper (http://costofsolar.com/management/uploads/2013/06/price-of-s...) but space hardware remained the same price. Now the most economical option is to put solar arrays on the Earth's surface, but build ten times more of them than you need.

Specifically, one of the killer engineering problems of space based solar is getting the power back to Earth. Tom Murphy wrote the canonical post on this: https://dothemath.ucsd.edu/2012/03/space-based-solar-power/ Depending on what frequency you're using and how big the orbital antenna is, the receiving rectenna array is going to be a couple kilometers across. If you use a beam power density that isn't going to cook birds flying across the antenna, then you're not receiving too much more power than regular solar irradiation. The atmosphere eats some of the power, the receiving antenna eats some power, and then the rectified power has to be converted to AC, losing a few percent more power. Maybe 50% total transmission loss. For the cost of putting a million tons into orbit, you might want to just build more ground solar.
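To put rough numbers on the power-density point (these figures are my own illustrative assumptions, not taken from the linked post):

```python
import math

# Hypothetical downlink: 1 GW delivered over a rectenna of 2 km radius
power_w = 1e9
radius_m = 2000.0
area_m2 = math.pi * radius_m ** 2
density = power_w / area_m2
print(f"beam power density ~ {density:.0f} W/m^2 "
      f"(vs ~1000 W/m^2 peak solar irradiance)")
```

Under these assumptions the beam delivers well under a tenth of peak ground-level sunlight per square metre, which is the "you might as well build more ground solar" argument in miniature.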


Its advantage is continuity of supply - ground based PV in a single location is only available for about a third of a day.

Building transcontinental and transoceanic UHV power transmission lines circling the northern hemisphere, providing the same function, is probably cheaper, but may be more politically difficult.

But that's moot. We have a cornucopia of viable storage methods in active development--viable meaning capable of being scaled up to global scale in 20 years or so.


> We have a cornucopia of viable storage methods in active development--viable meaning capable of being scaled up to global scale in 20 years or so.

Do you have more info on this? I’m skeptical of any claims of scaling to global scale that quickly.


Form Energy's iron-air battery, for instance - good at the two days to two weeks duration, needed as backup for wind during prolonged calms. We already mine a great deal of iron; the extra needed globally wouldn't really be noticed. Others are working on iron-air also.

At least a couple of teams are quite far along with hot rock energy storage. Good for the one week to one month timescale.

Various other battery technologies for sub-day timescales - lithium for grid frequency stabilisation, flow batteries.

I don't know of any active trials of ammonia energy storage, one of the most scalable candidates for seasonal storage, but ammonia engineering is very old and very widespread, so there don't appear to be roadblocks to scaling up rapidly once required.

The YouTube channels "Just Have a Think"[1] and "Undecided with Matt Ferrell"[2] focus on this stuff, and do some research.

Edit: you should be sceptical; well done! Scaling really does take forever. That's why the only viable technologies are those already lying around all over the place, being used for other things.

1. https://www.youtube.com/c/JustHaveaThink/videos

2. https://www.youtube.com/c/UndecidedMF/videos


Thermal storage in sand, good for weeks of storage at modest cost per kWh of storage capacity. Efficiency is meh, but that's ok.

https://arpa-e.energy.gov/sites/default/files/2021-03/07%20D...


This is great, thanks for the links!


Space solar was an idea from when PV was expensive. Put those expensive PV up in space with cheap launchers so they're more effective, that was the idea. But now PV is very cheap, so there's no need to put it in 24/7 sunlight. And with storage improving transmitting the power in time will be cheaper than transmitting it in space.


Besides, there is no need for space solar. The amount of solar radiation hitting deserts around the planet is way more than enough to generate electricity for the whole human society.


And soon we'll all live in a desert, so it will finally make sense to generate our power with solar.


Yes, this IS the problem. Could not a space tether system help with that? Although, space elevators are quite a daunting challenge too.


The problem with space-based solar power is RF interference. The transmitter will interfere with RF comms on the same band thousands of kilometers from the receiver. And I mean thousands. Figures 55 and 56 show how much RF power is incident at a given distance from the center of the receiver for 2.4 GHz and 5.8 GHz.[0]

2.4 GHz is one of the best frequencies to use because losses through storms and the like are quite low. However, we now have Bluetooth and WiFi devices that operate at this frequency. 1000 km from the receiver, the incident power is stronger than the minimum sensitivity of a Bluetooth receiver.

[0]https://www.researchgate.net/publication/348442155_Microwave...


Space solar arrays would be the perfect solution if we needed the energy in space. Since we need it on Earth, and since the sun is already sending exactly that energy to Earth, they are almost entirely pointless.


In fact, space sunshades would be much more useful to us at this point than space PV.


> Fusion research is fundamentally good research to pursue, and vigorously.

Fusion research is applied. It is justified by the useful end goal, being an energy source, not by pure science. A pure plasma physics program would look very different (and likely not be able to justify the current fusion budget.)

So, simultaneously advocating fusion research, while suggesting solar will win, is internally inconsistent.


> It's just an engineering problem of how to get more of its wasted energy onto the earth using space solar arrays

Considering even the best solar panels have under 50% efficiency, increasing the total solar radiation to earth would be very bad for the climate.


It's actually true that increasing the overall input of energy into the planet's atmosphere is bad.

Yet, this also happens with nuclear, coal, gas, oil...

The only solution is to capture the energy that is already being received from the sun in order to prevent it from heating up the atmosphere on site.

E.g. solar panels in the desert and wind turbines.


> The only solution is to capture the energy that is already being received from the sun.

Or stop screwing with it entirely (it worked fine without our help for quite a while) and build nuclear reactors everywhere, which scale wonderfully and work even when it's dark, cloudy, or shrouded in smoke.


And also works in space..


missing the point much?


> increasing the total solar radiation to earth would be very bad for the climate.

Citation needed.

The radiation to earth varies already, sometimes we're farther away from the sun and the output of the sun isn't constant either. The earth seems to handle that just fine.


> The radiation to earth varies already, sometimes we're farther away from the sun and the output of the sun isn't constant either. The earth seems to handle that just fine.

That sounds remarkably similar to "how can there be global warming if we still have a cold season and a hot season".

I don't say space-based solar arrays are necessarily a problem, but arguing that earth handles variations, as if the net total doesn't matter, is a little blinkered.


It's not like global warming necessarily has to do with the net total. It's the heat not getting out that's the biggest problem, hence why we focus on the gas(es) that keep the heat in?


The net total is the sum of heat in and heat out. :-)


This is a silly comment. First, it's no different from creating your land-based fusion energy. Yes, you are adding more energy into the system, thus more heat. In any case, space solar arrays that inefficiently convert energy while in space would not be heating the Earth directly, just via the inefficiencies in how that energy is used later.


It also applies to fossil fuels: the energy would otherwise be locked up underground.

It'd be interesting to compare the energy directly released as usable heat from burning hydrocarbons, to the indirect increase in energy absorption over the life of the released greenhouse gases.

Not sure, but I reckon the cumulative greenhouse effect would be massively more than the usable energy released during combustion.


Yeah except that's not what he said. He proposed space mirrors with the photovoltaics on earth. Doing the inefficient part in space is fine.


Can anyone comment on the heat > electricity conversion ratio? My understanding has been that modern multi-stage steam turbines used in power plants are around 90% efficient, not 50%. I might be missing other energy losses along the way, though.


Carnot's law is what you want. Max efficiency is temperature differential divided by hot absolute temperature.

So, with super heated steam of say 600 degrees C, I'd make that a maximum of about 66% theoretical, so 50% actual sounds pretty good.

If you could build a plasma heat engine, you could get near 100% but I've no idea what that would technically look like.
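Plugging numbers into that Carnot formula (the ~30 °C condenser temperature is my own assumption):

```python
# Carnot limit: eta_max = 1 - T_cold / T_hot, temperatures in kelvin
t_hot = 600 + 273.15    # superheated steam at ~600 C
t_cold = 30 + 273.15    # assumed condenser temperature
eta_max = 1 - t_cold / t_hot
print(f"Carnot limit ~ {eta_max:.1%}")  # roughly 65%
```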


You can likely harvest a little more by using a lower temperature differential in another stage: using, say, substances that boil at room temperature to cool the water after it condenses and can no longer produce work by expansion.


>> Can anyone comment on the heat > electricity conversion ratio? My understanding has been that modern multi-stage steam turbines used in power plants are around 90% efficient, not 50%.

They probably mean 90% of a theoretical maximum.

For example, one of the results that stood out to me in thermodynamics class was that the maximum possible efficiency of an internal combustion engine (I forget which cycle) was a function of the compression ratio. In other words, you could never reach 100% conversion of chemical energy to mechanical energy, and the theoretical "efficiency" was a function that increased with compression ratio but could never reach 100 percent. So let's assume an engine with a given compression ratio could have a theoretical efficiency of 50 percent, but a real-world design only achieved 45 percent. Someone might say 45/50 is 0.9, or 90 percent of the theoretical limit. That's a measure of how good the design was vs. what's theoretically possible, but it has little to do with the actual efficiency of the energy conversion.
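The cycle described is the Otto cycle, whose ideal efficiency is 1 - r^(1-γ); a quick sketch with sample compression ratios:

```python
# Ideal Otto-cycle efficiency: eta = 1 - r**(1 - gamma)
gamma = 1.4                  # heat capacity ratio for air
def otto_efficiency(r):
    return 1 - r ** (1 - gamma)
for r in (8, 10, 12):        # sample compression ratios
    print(f"compression ratio {r:2d}: eta = {otto_efficiency(r):.1%}")
```

As the comment says, the limit rises with compression ratio but never reaches 100 percent.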

IIRC the best heat-to-mechanical energy conversion devices are rockets with a theoretical efficiency of 50% but it's been half my life since I studied thermodynamics, so maybe I'm off on some of this.


The maximum theoretical efficiency of ideal rocket engines is much higher than 50%. At very high expansion ratio in vacuum the gas can become so cold it condenses, so almost all the internal energy of the initial hot gas will have been converted to jet kinetic energy.


Won't you need a colossal nozzle to actually harvest these last bits of energy?


You need a very high area ratio (nozzle exit area/nozzle throat area), but that's possible in vacuum, especially if your thrust requirements are modest.


I don't think so; checking Siemens, they're touting a 65% efficient 600 MW turbine. The higher number might include some application for the waste heat.


If you haven't read Sabine Hossenfelder, you should check out her other work. She is pretty acerbic about the politics of Big Physics.


One should be able to tell from her bad-faith implications, even without grasping the basics of the field.


what are her "bad faith implications" ?


That the physicists are deceiving the public. She's made a narrative around fusion progress without understanding fusion progress.


Let's not forget the total energy input it takes to construct the ITER project. We should not be wasting finite resources on this project.


Wasted resources? How about $6 Tn / year subsidy just to keep the lights on with an energy source that is vanishingly finite and making the planet uninhabitable? We're racking up a debt that nature is certain to collect.

It is absurd to say that fusion research's current level of funding is a waste.

https://www.theguardian.com/environment/2021/oct/06/fossil-f...


The relatively minuscule resources "wasted" on this project could eventually solve the "climate crisis" in one fell swoop via delivering nearly inexhaustible sources of clean energy. So inexhaustible, in fact, that we could reduce CO2 by entirely engineering-driven (rather than ineffectual policy) means. And fusion would do so while also _increasing_ not _reducing_ everyone's standard of living. You could even use your gas guzzler still, by making gas from CO2 and water, and it'd be "carbon neutral" even if it belches a column of black smoke straight into the air. And this gas would be like 50 cents a gallon, too.

I'd rather spend a trillion on fusion and get it done Manhattan project style than continue droning children in Afghanistan or funding $3.5T in pork barrel bullshit here at home. Just drown it in money completely until it's done. Put the very best people on it and let them have at it for real. We've done it before, several times, it worked. There's no reason why it wouldn't work this time. If there is any 21st century "Moon landing" type goal, this is it, and the United States is still best positioned to accomplish something of this magnitude. For how long that will be true I do not know, but it is true now.


ITER has zero chance of solving the climate crisis. Anything derived from it will be far too large, expensive, and late.


I'm not even talking about ITER here. Let them have their own nice European pork barrel boondoggle, though I'd point out that even if it fails miserably, it's still generating valuable science and technological capabilities that cannot be generated or sustained any other way, so it's not a "waste of resources". We're talking about these projects as if they'd offer marginal, rather than _radical_ improvements in our situation, which is simply not true. If this is done this will completely upend the world and make it a better place - there's no doubt in my mind on this whatsoever. This is the _only_ environmental thing I wouldn't mind paying more taxes for, but I'd argue we don't even need that - we just need to look a little bit ahead and allocate existing resources towards longer term, riskier things benefits from which we _know_ will absolutely dwarf the investment. The reason why this is not happening is because we have trillions of dollars flowing into other troughs from which everyone involved is used to feeding. Until we disrupt those arrangements somehow our only chance of solving this is private sector investment, but private sector can't sustain things of this magnitude over an extended period of time. Not unless Musk gets involved.


Any burning plasma MCF machine will be "derived from it" in that it will incorporate the physics learned from it. The machines that come later will not be as large.

And "late" is better than never.


Commonwealth Fusion expects to start commercialization in 2025. They say it will put power onto the grid.


"This HTS magnet technology will next be used in SPARC, which is under construction in Devens, Massachusetts and on track to demonstrate net energy from fusion by 2025. SPARC will pave the way for the first commercially viable fusion power plant called ARC." [0]

But this does not say that they are going to put power on the grid in 2025. They say "net energy from fusion by 2025", which may just mean Q-plasma > 1 - they don't specify (which kind of proves the point of the article). They don't have any estimate for an on-the-grid fusion plant.

https://cfs.energy/news-and-media/cfs-commercial-fusion-powe...


SPARC will not generate electricity[0]. It's intended to show that the ARC concept, which should be able to generate net electricity[1], is viable. If SPARC works, ARC might be built as early as 2030. 'As early as' being the keywords here. Showing that the difficult physics work is important to do before moving on to the difficult engineering.

[0]https://www.psfc.mit.edu/sparc/faq [1]https://arxiv.org/pdf/1409.3540.pdf


I've seen various youtube videos from their employees and CEO. From what I remember, Dennis Whyte (head of the fusion lab at MIT and CFS cofounder) says you need at least about Q > 5 to make fusion economically viable. So that must mean Qplasma > 5. SPARC is working towards that by 2025.

And so far, the company, CFS, has been hitting its milestones for the performance of its magnets. The HTS magnets and coils are the main ingredient the startup is optimizing for. Sometime in the last year, they implied that their projected Qplasma was much better than they minimally hoped for. I think their projections show Qplasma > 10, but Dennis was keeping the details proprietary.

From all the talks I remember, they are targeting power on the grid with the ARC reactor by 2035.

Edit:

This is the best recent video I've seen about SPARC and ARC. Jump to 2:25 for Dennis Whyte.

https://www.youtube.com/watch?v=bHJyoqDO0zw

It's targeted to MechE students and gives a lot of details about other aspects of a potential ARC reactor design, e.g. how to get the heat out.


Yes. They start commercialization in 2025. Not that a commercial reactor will be ready by 2025, but that's when they begin the work of commercializing SPARC -> ARC.

https://cfs.energy/technology


Ah, yes, I understand what you meant with your wording. Proof-of-concept by 2025, and commercialization after that.


They plan in 2025 to be where fission was in 1942. Only a small matter of engineering after that that can't take very long, right?


I added a link above to a good talk from Dennis Whyte. I think it will help address your question about the additional engineering issues in a commercial reactor based on ARC.

In short, from my basic understanding, there are quite a few additional major challenges involved with fusion power generation. For example, after achieving net-positive fusion, the heat must be efficiently extracted without stopping the reaction. Tritium must also be continually extracted from the FLiBe (fluorine-lithium-beryllium) bath that absorbs the neutrons and carries away the heat.

Since I'm a total novice in all this, I don't know if these require incremental innovations or major advances. But from the video, the problems mentioned seem to be more tractable than achieving fusion ignition or Q>10.


One major issue is that providing enough fusion power with ARC reactors to power the world economy would require 100x more beryllium than the current estimated global resource (not reserve). Only a bit over 200 tonnes of Be is mined every year right now. It's not a common element.

Abdou's team (the fusion engineering guy at UCLA) rejected molten salt blankets for this reason, among others, after trying really hard to get them to work in studies.

One big problem with fusion is the low power density. I harp on that a lot, but it's been known to be a very serious problem for decades. ARC's power density is 40x worse than a PWR's reactor vessel. It's difficult to see how fusion can beat fission given this. I suspect the optimistic numbers for fusion come from using a way too cheery cost estimation methodology, something that would predict fission is far cheaper than it actually turned out to be.
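To make the power-density gap concrete, here's a quick sketch of vessel volume needed per gigawatt of thermal power, using the figures quoted in this thread (the 1 GWt reference point is just for illustration):

```python
# Reactor-vessel power densities quoted above (MW thermal per m^3).
densities = {
    "PWR": 20.0,   # commercial fission reactor vessel
    "ARC": 0.5,    # compact HTS tokamak design
    "ITER": 0.05,  # experimental tokamak
}

gw_thermal_mw = 1000.0  # 1 GW thermal, an arbitrary reference output

# Volume required to produce 1 GWt at each power density.
volumes = {name: gw_thermal_mw / d for name, d in densities.items()}
for name, vol in volumes.items():
    print(f"{name}: roughly {vol:g} m^3 per GWt")
# roughly: PWR 50, ARC 2000, ITER 20000 m^3 per GWt
```

The 40x figure in the comment above is just the ratio of the first two densities (20 / 0.5).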


Interesting, thanks for the info.

I was always somewhat concerned about the abundance of critical raw materials for ARC or other fusion projects. I had assumed that the rare-earth elements in the REBCO tape, e.g. yttrium, would be constrained. I didn't suspect that beryllium could be a limiting resource. But I wonder if the lack of supply is related to true scarcity or just to a lack of a profitable market currently.

These are all questions I'd want to ask domain experts in mining and fusion.

But I agree that fission would be a better solution for baseload, at least for the next 10-20 years. If only newer modular designs were actually approved...


The ARC design (see https://arxiv.org/abs/1409.3540 for details) uses surprisingly little yttrium. The superconductor is only about 1% of the tapes, the tapes are only a fraction of the coils, and the coils' mass is just a fraction of the mass of the coil support structure. I think the total amount of yttrium will be in the tens of kilograms, if that.

I have a suspicion that this is being funded at all because the magnet technology would be useful in non-fusion contexts (hybrid electric aircraft, superconducting generators in wind turbines.)

In contrast, IIRC a single ARC reactor of that design would have 90 tonnes of beryllium (although that could be reduced by half if the secondary loop used a different molten salt.)
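A rough sanity check on the supply question. The 90 t/reactor and ~200 t/yr figures come from the comments above; the fleet size is my own hypothetical (roughly 3 TW average world demand divided by ~300 MWe per plant):

```python
be_per_reactor_t = 90      # tonnes of beryllium per ARC-class reactor (quoted above)
annual_be_mined_t = 200    # tonnes mined globally per year (quoted above)

# Hypothetical fleet: ~10,000 ARC-class plants to power the world economy
# (assumed for illustration only, ~3 TW average demand / ~300 MWe each).
fleet = 10_000

total_be_t = fleet * be_per_reactor_t
years_of_mining = total_be_t / annual_be_mined_t

print(total_be_t)        # 900,000 tonnes of beryllium
print(years_of_mining)   # 4,500 years at current production rates
```

Even if the fleet assumption is off by several-fold, the conclusion doesn't change much: beryllium supply would have to scale enormously.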


What I’ve heard is that they will have a demonstration of “net energy” by 2025. This is still a long way from solving the engineering problem of putting energy on the grid. And if that “net energy” definition is actually Q-plasma, as this video claims, it’s a lot farther still.


SPARC is aiming for Q_plasma of about 10 in 2025, but not to generate energy at all: it doesn't have any of the required 'blanket' system for breeding tritium. SPARC is a physics test, to make sure our theories of burning plasma are correct and to test the stability of these high field tokamaks.

ITER is (among other things) supposed to be a physics test, but won't come until 2035. So SPARC's goal is to 'leapfrog' that by 10 years.

You're right that there's a lot of work to be done between Q_plasma = 10 and generating energy. A lot of this work is on the materials side: engineering materials that can last for years in a fusion reactor environment.


Fusion as a power source has been several years away for several decades.


Theranos Fusion.

Edit: There has been Edison, and there has been Musk. If you're not either of them, and you claim other people are doing their engineering wrong ... consider that you might be mistaken.

Make a big song and dance about it and you look like a charlatan.


I think this take is too skeptical to apply to all the startups. For all the talks from CFS I've seen, they've been very measured and sober about the viability of near-term fusion.

The game changer here is not new reactor designs, but cheap, high-performance high-temperature superconductors that enable much higher fields and therefore much smaller machines.

Other fusion efforts, like ICF (the LLNL laser fusion program), Lockheed, or General Fusion, all seem to be built on designs that are less well understood and have shown worse Q performance so far.


The high-Tc superconductors are nice, but they are not a panacea. They move fusion from "absurdly" to just "desperately" uncompetitive.


Why would nuclear fusion scientists bother with anything other than Qplasma? Heat to electricity is a separate problem.


Because the ultimate goal is a power plant, not just a paper saying that Q-Plasma of 1 (or 10) has been reached.


I can imagine it would be reasonable for fusion scientists not to spend effort on the heat -> electricity part. What I think the article is highlighting is that they are currently totally ignoring all the energy that goes into anything other than the plasma (i.e., the power to run magnets, vacuum pumps, or lasers), which from the examples given is on the order of 10s-100s of times the power delivered to the plasma.

This energy should be considered by fusion scientists, because it may end up invalidating certain topologies that can achieve a Qplasma > 1 but can't achieve a Qtotal > 1. For instance, from the article, it seems like pulsed lasers are relatively inefficient at turning input energy into laser energy, which might mean they would need a Qplasma > 100 before Qtotal approaches 1. I don't know which of those inefficiencies are deemed to be fundamental and which are potentially improvable, but it could suggest the whole line of research is not worth pursuing.
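As a back-of-the-envelope illustration of how driver efficiency eats into Qplasma (the efficiency numbers below are hypothetical placeholders, not published figures for any real machine):

```python
def q_total(q_plasma, eta_driver, eta_thermal):
    """Rough wall-plug gain: fusion heat converted to electricity,
    divided by the electricity drawn to power the driver.
    Ignores recirculating-power details; illustration only."""
    return q_plasma * eta_driver * eta_thermal

# Assumed: a pulsed-laser driver at ~1% wall-plug efficiency and
# ~40% heat-to-electricity conversion.
print(q_total(100, 0.01, 0.40))  # even Qplasma = 100 gives Qtotal < 1
```

With those (assumed) efficiencies, Qplasma would have to exceed roughly 250 before Qtotal crosses 1, which is the article's point about quoting Qplasma alone.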


Qtotal doesn't affect the physics of the plasma. Qplasma does. Making a machine that burns plasma is expensive. Actually understanding burning plasma is necessary to build a reactor. This basic understanding appears to be lost on Sabine.


Things like the NIF also have power losses in feeding the ignition. The NIF lasers are maybe 0.1% efficient, or less, meaning that even if the energy delivered by the lasers is less than the energy created by the reaction, the reaction must output roughly 1000x the laser energy just for the laser system to break even.


Because Qplasma >> 1 might mean there's enough energy to make Qtotal > 1 as well.


They can bother with whatever they want. But stop misleading the public and the media.


What's the significance of reaching the Qplasma = 1 milestone?

They get to uncork some champagne, but they still need to get Q up to ~10 before handing it off to the powerplant engineers.


I would be interested in reading a broader analysis of how scientists and other experts are misleading the public. I imagine there is enough on this subject to write a book. My layman's perspective is that it seems to follow from the belief that the public cannot understand the scientists' preferred decision, and that therefore the scientists must mislead the public "for their own good". The obvious example of course is the health authorities misleading the public on masks early on so that hospitals would be able to stockpile supplies, but I'd like to see even more places where the narrative is being twisted.


An honest assessment acknowledges that current fusion research, like string theory and even HEP, has become an employment program. You can tell this is the case when 1) no efforts are made to change course once it is clear the current dogma is impractical, unprovable, or yields increasingly marginal gains in knowledge at extreme cost; and 2) anyone who questions the accepted view is shouted down and pushed aside.

That is not to say there should be no money spent on researching fusion or the standard model (or beyond). Just that the current system does a poor job of allocating limited funds, putting most of the money in one basket until well after other ideas and technologies should have been given the time of day and ample funding.



