Q1: Most of the literature I've seen on simulation says to use Monte Carlo sampling. Is there any reason Latin Hypercube should not be used?
A1: We have not found any reason that Latin Hypercube sampling should not be used. There has been no demonstrated difference between the sampling types or any evidence that Latin Hypercube Sampling is biased in any way. It preserves the randomness that is a part of Monte Carlo sampling.
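To make the "preserves randomness" point concrete, here is a minimal sketch of the standard Latin Hypercube idea (illustrative only, not @RISK's implementation): divide [0, 1) into n equal-width strata, draw one uniform random point inside each stratum, then shuffle so the sample order is random. The function name `latin_hypercube` is our own for this example.

```python
import random

def latin_hypercube(n, seed=None):
    """Draw n stratified Uniform(0,1) samples: exactly one random
    point from each of the n equal-width strata, in random order."""
    rng = random.Random(seed)
    # One sample per stratum [i/n, (i+1)/n); the position within
    # each stratum is still random.
    samples = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(samples)
    return samples

u = latin_hypercube(10, seed=42)
```

Sorting the samples recovers one point per stratum, which is the stratification guarantee; to sample a non-uniform input distribution, these uniforms would be pushed through that distribution's inverse CDF.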
Q2: For a given number of iterations in a Monte Carlo simulation, what is the appropriate number of iterations for a Latin Hypercube simulation to achieve the same level of convergence?
A2: Our experience has shown that typically one third as many Latin Hypercube iterations are required to get results equal to or better than the equivalent number of Monte Carlo iterations. Of course, the number of iterations required for good, stable results depends on the nature of the model being analyzed. For most models, 300-500 iterations is more than sufficient. However, some models, because of the nature of the input distributions being used, require more iterations before the results stabilize. Highly skewed distributions typically require more iterations. If you're not sure how many iterations to run, use the "Convergence Monitor" feature in @RISK.
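A toy illustration (not @RISK itself) of why fewer Latin Hypercube iterations suffice: when estimating the mean of a Uniform(0,1) input with 300-iteration runs, the run-to-run spread of the estimate is far smaller for stratified sampling than for plain Monte Carlo, so a given convergence tolerance is reached with fewer iterations. The helper names `mc_mean` and `lhs_mean` are ours for this sketch.

```python
import random
import statistics

def mc_mean(n, rng):
    """Plain Monte Carlo estimate of the mean of Uniform(0,1)."""
    return sum(rng.random() for _ in range(n)) / n

def lhs_mean(n, rng):
    """Latin Hypercube estimate: one sample per stratum [i/n, (i+1)/n)."""
    return sum((i + rng.random()) / n for i in range(n)) / n

rng = random.Random(1)
mc = [mc_mean(300, rng) for _ in range(200)]
lhs = [lhs_mean(300, rng) for _ in range(200)]

# The spread across repeated runs is much tighter for LHS.
spread_mc = statistics.pstdev(mc)
spread_lhs = statistics.pstdev(lhs)
```

For this simple estimator the stratification removes almost all of the sampling variance; real models with skewed or interacting inputs see a smaller (but still substantial) reduction, which is why the answer above hedges to "typically" one third.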
Q3: Is Latin Hypercube sampling valid if a simulation is halted in the middle of running?
A3: If a Latin Hypercube simulation is stopped before the specified number of iterations has run, the simulation results are still valid. However, they do not reflect the full benefit of stratified Latin Hypercube sampling. Essentially the sampling becomes more "Monte Carloish," because not all of the input strata have been filled. The strata that were sampled were randomly selected from across the distribution, so the result is at least as good as the equivalent number of Monte Carlo iterations, but of course not as good as a complete Latin Hypercube simulation using the same number of samples.
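A small sketch (illustrative only, not @RISK's internals) of what halting mid-run leaves behind: because the stratum order is shuffled before sampling, the strata that did receive a sample form a random subset, so the partial results are still an unbiased random sample, just without full coverage of every stratum.

```python
import random

rng = random.Random(7)
n = 100

# Shuffle the stratum order up front, as LHS does, then sample
# one point per stratum in that order.
strata = list(range(n))
rng.shuffle(strata)
samples = [(i + rng.random()) / n for i in strata]

# Simulation halted after 40 of 100 iterations:
partial = samples[:40]
filled = set(strata[:40])   # which strata actually got a sample
```

The 40 filled strata are a random subset of the 100, spread across the whole distribution; the remaining 60 strata are simply never sampled, which is the lost Latin Hypercube benefit.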