Bayes R Squared
Bayes R² is a Bayesian adaptation of the classical R², designed for probabilistic models where uncertainty in predictions is explicitly modeled. It estimates the proportion of variance explained by the model while incorporating the posterior distribution of the parameters, yielding a distribution of R² values rather than a single number. Values near 1 indicate strong explanatory power, making the metric well suited to validating models in probabilistic frameworks.
The formula for Bayes R², as proposed by Gelman et al., is typically computed per posterior draw s as:

R²_s = Var(ŷ_s) / (Var(ŷ_s) + Var(y − ŷ_s))

Here, Var(ŷ_s) is the variance of the fitted (predicted) values and Var(y − ŷ_s) is the variance of the residuals. In Bayesian contexts, this is evaluated over posterior samples, often taking the median of the resulting R² draws as a point estimate.
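The per-draw computation above can be sketched in NumPy. The function and the simulated posterior (a normal distribution over a single slope) are illustrative assumptions, not a fixed API:

```python
import numpy as np

def bayes_r2(y, y_fit_draws):
    """Bayes R² per posterior draw (Gelman et al. style).

    y           : (n,) observed outcomes
    y_fit_draws : (S, n) fitted values, one row per posterior draw
    """
    var_fit = np.var(y_fit_draws, axis=1)      # variance of fits, per draw
    var_res = np.var(y - y_fit_draws, axis=1)  # variance of residuals, per draw
    return var_fit / (var_fit + var_res)       # (S,) distribution of R² values

# Toy example: fake posterior draws for a no-intercept linear model
rng = np.random.default_rng(0)
n, S = 100, 500
x = np.linspace(0.0, 1.0, n)
y = 2.0 * x + rng.normal(0.0, 0.5, n)
slopes = rng.normal(2.0, 0.1, S)            # pretend posterior over the slope
y_fit_draws = slopes[:, None] * x[None, :]  # (S, n) fitted values

r2_draws = bayes_r2(y, y_fit_draws)
print(np.median(r2_draws))  # posterior-median point estimate
```

Each posterior draw produces its own R², so the output is a distribution whose spread directly reflects parameter uncertainty.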
This metric addresses a limitation of classical R² in Bayesian settings: a point-estimate R² ignores parameter uncertainty, whereas Bayes R² propagates it into the fit assessment. For instance, when analyzing how features influence an outcome, Bayes R² shows how explanatory power varies across plausible parameter values.
Because it yields a full distribution, Bayes R² supports uncertainty-aware model assessment, for example via posterior intervals on explained variance. It does, however, require posterior samples, so the cost of sampling should be weighed for large datasets.
A key advantage is that it reflects regularization in hierarchical models, tempering the inflated fit that classical R² can report for overfit models. Comparing it with standard R² on the same model makes these Bayesian benefits concrete.
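To make the comparison suggested above concrete, the following sketch contrasts the single classical R² of a least-squares fit with the distribution of Bayes R² values. The posterior over the slope is again a stand-in assumption (a normal around the point estimate), not output from a real sampler:

```python
import numpy as np

rng = np.random.default_rng(1)
n, S = 50, 1000
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(0.0, 1.0, n)

# Classical R² from the single least-squares point fit
slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept
r2_classical = 1.0 - np.var(y - y_hat) / np.var(y)

# Bayes R²: one value per (assumed) posterior draw of the slope
slopes = rng.normal(slope, 0.15, S)
fits = slopes[:, None] * x[None, :] + intercept
var_fit = np.var(fits, axis=1)
var_res = np.var(y - fits, axis=1)
r2_bayes = var_fit / (var_fit + var_res)

print(f"classical R²: {r2_classical:.3f}")
lo, hi = np.percentile(r2_bayes, [5, 95])
print(f"Bayes R²: median {np.median(r2_bayes):.3f}, 90% interval [{lo:.3f}, {hi:.3f}]")
```

The classical metric reports one number; the Bayesian version reports an interval, which is what exposes fit uncertainty in hierarchical or heavily regularized models.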