Hacker News | eric_t's comments

Associate professor in fluid dynamics here. There is typically a split between the people who want good looking simulations for movies/games, and those who need accurate results for engineering. For engineering, you need more effort on how you resolve the complex geometry and how you model the turbulence in the flow. This adds significant complexity both for problem setup and solution time. The only product I’m aware of that is built for real-time, 3D flow design is Ansys Discovery: https://www.ansys.com/products/3d-design/ansys-discovery

OpenFOAM is high quality, but as you’ve seen complex to use. There is a web-based GUI that can lighten the burden to get started somewhat: https://www.simscale.com/ I think they give you ~3000 simulation hours for a trial.


I’m also teaching fluid dynamics, and use Blender quite a bit for geometry work. Would love to hear more about your use of Blender for visualization if you’re willing to share.


Here are some of the notes for Mechanics of Fluids 1:

https://nbviewer.jupyter.org/github/nolankucd/MEEN20010/tree...


Hypothetical is the key here. Everyone can dream up a perfect system, but in the real world, it will have to make compromises, which means that it won't be perfect for everyone.


Here's the summation problem in Fortran, quite succinct, IMO:

    real, dimension(30000000) :: a

    call random_number(a)

    print *, sum(a)
That's 30 million reals (single precision by default), takes 0.04s on my machine.
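For reference, the same exercise in NumPy is about as terse (a sketch for comparison only; the timing claim above is for the Fortran version):

```python
import numpy as np

# Same exercise: draw 30 million uniform random numbers and sum them.
a = np.random.random(30_000_000)   # float64 by default, uniform in [0, 1)
total = a.sum()
print(total)
```

Since the values are uniform on [0, 1), the sum should land very close to 15 million.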


A great quote from George E P Box:

All models are wrong. Some are useful.


So are those types of models useful? The ones that need to be adjusted constantly?


Sounds great, until you have kids...


That doesn't seem like an issue for the OP. Leo Babauta has six kids.


If your kids are toddler-age and older, it is certainly possible. But when you have babies, I consider step 1 impossible. I do all of the steps he mentions except number 1, and I am tired.


I think it's wrong to say "Amateurs, happy to accept small checks for snapshots of children and sunsets". It's more of a democratization of the tools needed to do a successful photo shoot. Anyone can buy a DSLR, the strobist movement means good-looking lighting is cheap, and excellent post-processing tools are available for free.

This means that amateur photographers in many cases can do just as good a job as a professional. It's pretty much the same reason why magazines and newspapers are declining: the internet and social networks have led to a democratization of news.

To be successful as a professional photographer, you have to offer more than what amateurs are capable of, and in addition be able to educate your customer why you're worth the extra cash. As always, it's about staying competitive, instead of weeping over the past.


Yeah, despite the protestations, I think in a lot of relevant ways the amateur stuff is up to par these days. There is plenty of professional photography that really is unlikely to be replicated by amateurs, but from a market perspective, the photographers' biggest problem is that they don't have a monopoly on the profitable mid-range stuff anymore. It used to be that to have even a decent looking shot of like, a computer on a desk, you needed to pay a professional. Amateurs owned the "shitty snapshot" market, and professionals owned everything above that. But these days there are 100 decent shots of computers on desks on the internet, and professionals only really still own the high-end market, which is much smaller.


Most stock usage isn't sunsets - how many sunset photos do you need? It's smiling attractive people sitting at desks, or smiling in front of power poles wearing hard hats.

There is still a business of stock photos for people that want it. You just have to know what the market wants - and it isn't another misty dawn view of the Golden Gate.


For a while I've been trying to think of other fields where this will happen. iStockPhoto has already expanded into stock video, stock illustrations and stock audio (environmental and music). It's also clearly happening with written articles and perhaps even entire books.

What's next? Software? 3D models?


You already see that with software both GNU and non-GNU. The internet means that any good that can be reduced to bits on the wire will become widely available at low or no cost. And it's not just piracy, but competition between substitutes that cost almost nothing to reproduce. Automated fabrication means that this is beginning to happen to physical goods as well.

What you see happening is producers competing against all other producers in their field, past and present.

Uniqueness, originality and authenticity become the premium values for any creative work.


No, as the paper says, explicit Euler integration is extremely unstable. Implicit Euler, as the paper uses, is stable but only first order accurate.

If you're dealing with flows where advection dominates diffusion, some higher-order explicit Runge-Kutta scheme is typically used. For flows with strong diffusion, however, treating the diffusion term explicitly gives you very severe restrictions on the time step. In these cases, a hybrid scheme is often used, where the advection term is integrated explicitly using an Adams-Bashforth scheme and the diffusion term is treated semi-implicitly with a Crank-Nicolson scheme. This means you have to solve additional linear equation systems for the velocities at each time step (Helmholtz-type equations), but this is still faster since you can use longer time steps.

All of this may sound very complicated, but it all follows the same patterns, so it's really fairly straight forward.
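To make the pattern concrete, here is a minimal 1D sketch of such a hybrid scheme (illustrative NumPy/SciPy, not anyone's production code): second-order Adams-Bashforth for advection, Crank-Nicolson for diffusion, on a periodic advection-diffusion equation u_t + c*u_x = nu*u_xx.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, c, nu = 128, 1.0, 0.1
dx = 2 * np.pi / n
dt = 0.01
x = np.arange(n) * dx

# Periodic central-difference operators (sparse).
Dx = sp.diags([-1.0, 1.0], [-1, 1], shape=(n, n), format="lil")
Dx[0, n - 1], Dx[n - 1, 0] = -1.0, 1.0          # periodic wrap-around
Dx = Dx.tocsr() / (2 * dx)
Dxx = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="lil")
Dxx[0, n - 1], Dxx[n - 1, 0] = 1.0, 1.0
Dxx = Dxx.tocsr() / dx**2

# Crank-Nicolson leaves a Helmholtz-type system to solve each step.
I = sp.identity(n, format="csr")
lhs = (I - 0.5 * dt * nu * Dxx).tocsc()

def adv(u):
    return -c * (Dx @ u)

u = np.sin(x)
adv_old = adv(u)                                 # bootstrap the AB2 history
for step in range(100):                          # integrate to t = 1
    adv_new = adv(u)
    rhs = u + dt * (1.5 * adv_new - 0.5 * adv_old) + 0.5 * dt * nu * (Dxx @ u)
    u = spla.spsolve(lhs, rhs)
    adv_old = adv_new
```

At t = 1 the exact solution is exp(-nu*t)*sin(x - c*t), so the amplitude should have decayed to about exp(-0.1), which the scheme reproduces closely even though the explicit diffusion limit would be nearly violated at this time step.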


I have a PhD in computational fluid dynamics (CFD), and have written several CFD codes in Fortran, C, Matlab and Python. I'm also a programming language geek, and have been interested in functional programming languages for quite some time. When I see something like this, however, I lose some of my faith. It just seems like too much effort to get decent speed, and even setting speed aside it doesn't seem to give much benefit. For instance, this is how the diffusion function would look in Fortran (skipping some details):

  pure function diffusion(x0) result(x)

    real, intent(in), dimension(0:,0:) :: x0  ! interior plus one-cell halo
    real, dimension(n,n) :: x                 ! function result takes no intent

    x =  x0(2:n+1,1:n  ) &
      +  x0(0:n-1,1:n  ) &
      +  x0(1:n  ,2:n+1) &
      +  x0(1:n  ,0:n-1) &
      -a*x0(1:n  ,1:n  )

  end function
Some things to note:

- The "pure" keyword guarantees that this function has no side effects.

- No do loops are needed! Fortran array slicing is very handy.

- The compiler will convert this to use SIMD instructions.

- Adding some OpenMP hints to make it run on all cores is also very easy.

So this type of code in Fortran is short, very easy to understand and you are guaranteed extreme performance. Maybe functional programming has some benefits when you're dealing with more complex data structures (for instance I'm working on a code right now which uses parallel octrees, kind of a pain in Fortran), but for simple things like this, I fail to see the point.
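(For comparison, the same five-point stencil via NumPy slicing, a rough translation in which `a` is passed in explicitly since the Fortran version picks it up from the enclosing scope:)

```python
import numpy as np

def diffusion(x0, a):
    """Five-point stencil via array slicing, mirroring the Fortran version.

    x0 carries a one-cell halo on each side, i.e. shape (n+2, n+2);
    the result covers the n x n interior.
    """
    return (x0[2:,   1:-1]       # i+1 neighbor
          + x0[:-2,  1:-1]       # i-1 neighbor
          + x0[1:-1, 2:  ]       # j+1 neighbor
          + x0[1:-1, :-2 ]       # j-1 neighbor
          - a * x0[1:-1, 1:-1])  # center point
```

On a constant field with a = 4 the stencil sums to zero, which makes for a quick sanity check.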

I want to believe, so perhaps someone here can enlighten me?


Clojure's not very fast with numerical code if you write it idiomatically. One main reason for this is that functions can't (for now) take primitives as parameters, so all arithmetic is boxed unless it's done inline with explicit type hints. (Hence all the macros in the Clojure code.)

However, don't lose hope just yet. Languages like Haskell or OCaml, with more mature compilers, might not have such limitations. Also, I think that while many small examples might turn out essentially equivalent when comparing imperative and functional styles, the difference becomes more pronounced on a whole-program scale. Pure functions are easier to reason about, and compose better.

Fortran, C and C++ will probably keep their place as tools to use when absolute best performance is needed, but with modern compilers and virtual machines, functional languages do not have to be slow either.

Finally, I hope to see functional programming languages succeed simply because functional programming is a lot of fun.


OCaml is only really fast if you use the imperative features. The only high performance Haskell code I've seen is from the language shootout, and it looks pretty hairy as well. Also, note that my Fortran function actually _is_ pure, so the whole argument about purity doesn't hold up.

I kinda understand what you mean by whole-program scale, but in fact, most of our programs aren't much bigger than this! Typically, the number of lines of code doesn't exceed 50k.

I agree about functional programming being fun, but unfortunately I don't think my coworkers would agree, and especially not the companies funding our research!


Honestly, in a lot of cases language performance doesn't matter as long as you have a good numerics library. For example, here's one implementation of the projection step in my current project, which does fluid simulation on unstructured tetrahedral meshes and relies heavily on SciPy's sparse matrices:

  def __init__(self, op_mesh):
    div_matrix = op_mesh.n_to_nm1_boundary.T * op_mesh.F_matrix
    boundary_matrix = op_mesh.F_matrix - op_mesh.FB_matrix
    self.C_matrix = sparse.vstack((div_matrix, boundary_matrix))
    self.CCT_matrix = self.C_matrix * self.C_matrix.T

  def Project(self, velocities):
    lam = linalg.cg(-self.CCT_matrix, self.C_matrix * velocities)[0]
    return velocities + self.C_matrix.T * lam
It's pretty simple (constrained least squares optimization) and it's missing some stuff (preconditioner), but it runs fast enough (~1 minute/frame for > 1M tetrahedra), and only required minimal testing: it's all built up from operators I was already using elsewhere. Since it's the fourth or fifth projection method I've tried for this code, that makes a big difference. The fact that I'm using Python is almost immaterial: all of the "interesting" code is library calls.
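As a self-contained toy version of the same idea (a made-up random constraint matrix standing in for the mesh operators, so names and sizes here are illustrative only): project a vector onto the null space of C by solving the normal equations with CG, exactly as in the Project method above.

```python
import numpy as np
from scipy import sparse
from scipy.sparse import linalg

# Toy constraint matrix C (20 constraints on a 50-dof "velocity" vector).
rng = np.random.default_rng(0)
C = sparse.random(20, 50, density=0.5, random_state=0, format="csr")
v = rng.standard_normal(50)

# Solve -C C^T lam = C v, then correct: C (v + C^T lam) = 0.
CCT = C @ C.T
lam, info = linalg.cg(-CCT, C @ v)
v_proj = v + C.T @ lam
```

After the correction, `C @ v_proj` vanishes to solver tolerance, i.e. the projected vector satisfies the (toy) divergence-free constraint.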


Interesting. Is the code available somewhere? I looked at your SIGGRAPH paper, and that was very interesting.

Have you seen the FiPy project? It's a full-featured finite volume code in Python, which is very easy to extend (they did struggle with performance in early releases; I'm not sure what the current status is).

I guess that's the area where dynamic or functional languages can be useful, in that they are easier to build generic libraries for, and can give rise to codes with easier extensibility.


I would offer that compared with Clojure, Fortran is like a DSL for number crunching. Does that make sense?


I'd be curious to know if/how the Clojure code could be written to do this kind of number crunching on GPU. Penumbra has a very idiomatic library for offloading work to the GPU.


I don't have time for a full conversion just now, but an eight-way diffusion on the GPU in Penumbra looks like this:

  (let [sum 0.0
        count 0.0]
    (convolution 1
      (+= sum %1)
      (+= count 1))
    (/ sum count))
"convolution" is a keyword that iterates over the neighbors (with a radius of 1, in this case), and does not overrun the boundaries of the source textures, hence the need to keep a running count.

For a more extensive example, see this implementation of Sobel edge detection on the GPU: http://github.com/ztellman/penumbra/blob/master/src/example/...


In the Fortran code, this could be run on the GPU by simply surrounding the code by

   !$acc region
   !$acc end region
with the PGI Accelerator. It would have to be a pretty big loop for it to pay off, though, since transfer to/from the GPU is very expensive.


what Fortran compiler(s) do you use and recommend for things like this?


We use a lot of compilers. They all vary in their ability to detect bugs and in their optimization features. The fastest used to be Intel, but right now the Portland Group compiler seems to be the best. These are both commercial, though. Of the free compilers, Sun's compiler has been the fastest for us. It will be interesting to see if Oracle will continue that effort.

