Hi, good question; I haven't checked recently, but two years ago openLCA was calculating with double precision (the best you can do on e.g. Windows), while SimaPro used single precision combined with a somewhat smart digit shift to increase precision where needed. This could make SimaPro more vulnerable in these situations, but I haven't tested it yet.
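To illustrate the point about precision (this is just a toy sketch in Python/NumPy, not how either tool actually computes): solving the same ill-conditioned linear system A·s = f in single vs double precision can already give visibly different scaling vectors. The matrix values here are made up purely to produce an ill-conditioned case.

```python
import numpy as np

# An ill-conditioned 2x2 "technosphere-like" system; det(A) is ~2e-6,
# so small rounding errors in the entries are amplified strongly.
A64 = np.array([[1.0, 0.999999],
                [0.999999, 1.0]])
f64 = np.array([1.0, 0.0])
s64 = np.linalg.solve(A64, f64)   # double precision solve

# The same system, but stored and solved in single precision:
A32 = A64.astype(np.float32)
f32 = f64.astype(np.float32)
s32 = np.linalg.solve(A32, f32)

print("double:", s64)
print("single:", s32)
print("relative difference:", abs(s32[0] - s64[0]) / abs(s64[0]))
```

Already storing 0.999999 as a 32-bit float perturbs the (tiny) determinant by about a percent, so the single-precision result drifts noticeably from the double-precision one; for a well-conditioned matrix the two would agree to several digits.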
Consequential models are indeed less stable than attributional ones, but you can of course still perform a Monte Carlo simulation. As always, you should treat calculation results as model results and not as natural constants or bookkeeping sums; the message of extremely high variances obtained from a Monte Carlo simulation is therefore, I would say, not that you should not apply Monte Carlo simulation to your model, but that the model reacts very sensitively to uncertain data. This can, however, also happen if you have a larger, non-trivial foreground model of your own in an attributional "setup". And since in both cases the results will "blow up" and reach extreme values very quickly, I think it is easy to spot these cases.
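A small sketch of what that "blowing up" looks like (again a toy example with made-up numbers, not any real database): a two-process model where one uncertain exchange pushes the technosphere matrix towards singularity. Far from the singularity the Monte Carlo spread is small; close to it, the variance explodes.

```python
import numpy as np

rng = np.random.default_rng(42)

def lcia_scores(a_mean, n=10_000):
    """Monte Carlo LCIA scores for a hypothetical 2-process model
    where the off-diagonal exchange a is lognormally uncertain."""
    samples = rng.lognormal(mean=np.log(a_mean), sigma=0.2, size=n)
    f = np.array([1.0, 0.0])        # demand: 1 unit of product 1
    b = np.array([0.5, 2.0])        # elementary flow per unit process
    scores = []
    for x in samples:
        A = np.array([[1.0, -x],    # det(A) = 1 - 0.9*x
                      [-0.9, 1.0]])
        s = np.linalg.solve(A, f)
        scores.append(b @ s)
    return np.array(scores)

stable = lcia_scores(0.1)    # det stays near 1: small spread
unstable = lcia_scores(1.05) # 0.9*x crosses 1 in the samples: det ~ 0

print("stable   CV:", stable.std() / stable.mean())
print("unstable std vs stable std:", unstable.std() / stable.std())
```

With the mean exchange at 0.1 the coefficient of variation stays at a few percent; at 1.05 a good share of the samples land on the far side of the singularity, the scores flip sign and reach extreme values, and the standard deviation is orders of magnitude larger. That is the pattern you see in sensitive consequential (or large foreground) models.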