The thing is, whatever the hell it is that human brains actually do in the background to produce our 'understanding' of the world and our ability to synthesize new ways to manipulate it, we're also very good at back-fitting explanations onto it in terms of symbolic reasoning. So it looks like machines need symbolic reasoning to replicate human abilities, whereas I'd bet a dollar that we're actually doing something quite different (messy, Bayesian, statistical) in the background, and then using that same process to come up with a story that explains our outcome semantically. It's not insight so much as parallel construction.
I fully agree, as I wrote in my other comment here. Logical symbolic reasoning is usually post-hoc rationalisation, built constructively to arrive at an already held conclusion that "feels right". It's rare for someone to change their mind due to logic, especially when the topic isn't abstract and has real-world consequences and emotional engagement.
> usually post hoc rationalisation built constructively to come to an already held conclusion that "feels right"
Counterfactual reasoning is a promising direction for AI. What would have happened if the situation were slightly different? Asking that implies we have a 'world model' in our heads and can try our ideas out 'in simulation' before applying them in reality. That's why a human driver doesn't need to crash 1000 times before learning to drive, unlike RL agents. This post-hoc rationalisation is our way of grounding intuition in logical models of the world; it's model-based RL.
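A minimal sketch of that "try it in simulation first" loop. Everything here is invented for illustration (a made-up 1-D dynamics model and hypothetical helper names); real model-based RL learns the model from data, but the core counterfactual step looks like this:

```python
# Toy sketch of the "world model" idea: instead of learning by trial and
# error in the real environment, the agent imagines rollouts in an internal
# model and picks the action whose simulated outcome looks best.

def world_model(position, velocity, action):
    """Agent's internal model of the dynamics: action is an acceleration."""
    velocity = velocity + action
    position = position + velocity
    return position, velocity

def imagined_return(position, velocity, action, target, horizon=5):
    """Roll the model forward and score how close we end up to the target."""
    for _ in range(horizon):
        position, velocity = world_model(position, velocity, action)
    return -abs(target - position)  # higher (less negative) is better

def plan(position, velocity, target,
         candidate_actions=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Counterfactual evaluation: 'what would happen if I did X?' for each X."""
    return max(candidate_actions,
               key=lambda a: imagined_return(position, velocity, a, target))

# The agent never acts in the real world until it has "thought it through":
best_action = plan(position=0.0, velocity=0.0, target=10.0)
print(best_action)
```

No crashes are needed to reject the bad candidates; they lose in imagination, which is the point of the comment above.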