Predictive models used in decision making, such as QSARs in chemical regulation or drug discovery, call for evaluated approaches to quantitatively assess the uncertainty associated with their predictions. Uncertainty in less reliable predictions may be captured by predictive errors that vary locally. In the current study, model-based bootstrapping was combined with analogy reasoning to generate predictive distributions varying in magnitude over a model's domain of applicability. A resampling experiment based on PLS regressions on four QSAR data sets demonstrated that predictive errors assessed by k-nearest-neighbour or weighted PRedicted Error Sum of Squares (PRESS) on samples of external test data, or by internal cross-validation, improved the performance of the uncertainty assessment. Analogy using similarity defined by Euclidean distances, or by differences in standard deviation of perturbed predictions, resulted in better performance than similarity defined by distance to, or density of, the training data. Locally assessed predictive distributions had, on average, at least as good coverage as a Gaussian distribution with variance assessed from the PRESS. R code is provided that evaluates the performance of the suggested algorithms for assessing predictive error, based on log-likelihood scores and empirical coverage graphs, and that applies these algorithms to derive confidence intervals or samples from the predictive distributions of query compounds.
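The study supplies its implementation in R, which is not reproduced here. As an illustrative sketch only, and not the authors' algorithm, the core idea of a locally varying predictive error can be outlined in Python: cross-validation residuals are pooled over the k nearest training neighbours (Euclidean similarity) of a query point to scale a Gaussian predictive distribution. All data, the ordinary-least-squares stand-in for PLS, and the parameter choices (k = 20, 5 folds) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta = np.arange(1.0, p + 1.0)
# heteroscedastic noise, so the true error magnitude varies over descriptor space
sigma_true = 0.3 + 0.5 * np.abs(X[:, 0])
y = X @ beta + rng.normal(size=n) * sigma_true

# ordinary least squares stands in for the PLS regressions used in the study
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# internal cross-validation residuals (5-fold)
resid = np.empty(n)
for idx in np.array_split(rng.permutation(n), 5):
    mask = np.ones(n, dtype=bool)
    mask[idx] = False
    b, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    resid[idx] = y[idx] - X[idx] @ b

def local_sigma(x_query, k=20):
    """RMS of cross-validation residuals among the k nearest training neighbours."""
    d = np.linalg.norm(X - x_query, axis=1)  # Euclidean similarity
    nn = np.argsort(d)[:k]
    return np.sqrt(np.mean(resid[nn] ** 2))

# Gaussian predictive distribution for a query compound
x_q = rng.normal(size=p)
y_hat = float(x_q @ coef)
s = local_sigma(x_q)
lo, hi = y_hat - 1.96 * s, y_hat + 1.96 * s  # 95% confidence interval
```

Queries landing in noisier regions of descriptor space inherit larger residuals from their neighbours and hence wider intervals, which is the sense in which the predictive distribution varies over the domain of applicability.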