The whole point of Turing Machines is to eliminate all of these different kinds of uncertainty. There is in point of actual physical fact no such thing as a Turing Machine. Digital computers are really analog under the hood, but they are constructed in such a way that their behavior corresponds to a deterministic model with extremely high fidelity. It turns out that this deterministic behavior can in turn be tweaked to correspond to the behavior of a wide range of real physical systems. Indeed, there is only one known exception: individual quantum measurements, which are non-deterministic at a very deep fundamental level. And that in turn also turns out to be useful in its own way, which is why quantum computing is a thing.
Well, yeah, obviously. But my point is that you do need a solution to the measurement problem in order to model measurements in any way other than simply punting and introducing randomness as a postulate.
Does a probabilistic Turing machine need aleatory uncertainty? (I would have called this ontological uncertainty, but (1) disagrees.)
Epistemic uncertainty would mean here:
We don't know which deterministic Turing machine we are running. Right now, I see no way to use this in algorithms.
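To make the epistemic reading concrete: a probabilistic machine can be viewed as a deterministic machine parameterized by a pre-supplied tape of random bits, so the only uncertainty is *which* tape (i.e., which deterministic machine) we happen to be running. A minimal sketch, with a toy membership test standing in for a real randomized algorithm (the task and names are hypothetical, purely for illustration):

```python
import random

def machine(x, tape):
    """Deterministic: the same input x and the same tape always
    give the same answer. Toy task: probabilistically test whether
    the list x contains a 7 by probing len(tape) positions."""
    n = len(x)
    for r in tape:
        if x[r % n] == 7:
            return True
    return False

def run_probabilistic(x, num_probes=32, seed=None):
    """Aleatory view: draw a fresh random tape on each run."""
    rng = random.Random(seed)
    tape = [rng.randrange(2**16) for _ in range(num_probes)]
    return machine(x, tape)

# Epistemic view: we are "really" running one of the deterministic
# machines machine(., tape) for some fixed tape -- we just don't
# know which one. Fixing the tape recovers determinism:
assert machine([7, 1, 2], [0, 0, 0]) == machine([7, 1, 2], [0, 0, 0])
```

The usual complexity-theoretic bounds (e.g., "correct with probability ≥ 2/3") are statements averaged over tapes, which is why simply not knowing the tape, by itself, doesn't obviously buy anything algorithmically.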
(1) https://dictionary.helmholtz-uq.de/content/types_of_uncertai...