Propensity Model: A statistical model that predicts the likelihood of an event occurring, often used in predictive analytics to forecast customer behavior, responses, or outcomes.
Predictive Analytics: The use of data, statistical algorithms, and machine learning techniques to estimate the likelihood of future outcomes based on patterns in historical data.
Propensity: The probability of a specific event or outcome occurring, as estimated by a propensity model.
Target Variable: The variable or outcome that a propensity model aims to predict, such as customer conversion, click-through rate, or the likelihood of default.
Training Data: Historical data used to train and develop a propensity model, typically consisting of examples with known outcomes that teach the model patterns and correlations.
Features: The variables or attributes used in a propensity model to make predictions, representing the input data that influences the likelihood of the target variable.
Binary Outcome: A scenario where the target variable has only two possible values, often encoded as 0 or 1, such as "converted" versus "not converted."
Logistic Regression: A statistical method commonly used in propensity models for binary outcomes, estimating the probability of an event occurring.
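As a minimal sketch of how logistic regression turns a linear score into a propensity: the sigmoid function maps any score to a probability in (0, 1), and the coefficients can be fit by gradient descent on the negative log-likelihood. The function names and toy data below are illustrative, not from the source.

```python
import math

def sigmoid(z):
    """Map a linear score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit a one-feature logistic regression by gradient descent.

    xs: feature values; ys: 0/1 outcomes. Returns (intercept, slope).
    """
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradient of the average negative log-likelihood.
        g0 = sum(sigmoid(b0 + b1 * x) - y for x, y in zip(xs, ys)) / n
        g1 = sum((sigmoid(b0 + b1 * x) - y) * x for x, y in zip(xs, ys)) / n
        b0 -= lr * g0
        b1 -= lr * g1
    return b0, b1

# Toy data: larger x tends to mean y = 1 ("converted").
xs = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]
ys = [0, 0, 0, 1, 1, 1]
b0, b1 = fit_logistic(xs, ys)
p = sigmoid(b0 + b1 * 2.0)  # predicted propensity for a customer with x = 2.0
```

In practice a library such as scikit-learn would be used instead of hand-rolled gradient descent; the sketch only makes the probability-estimation mechanics visible.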
Machine Learning: The application of artificial intelligence algorithms that enable computer systems to learn and improve from experience without being explicitly programmed.
Algorithm: A set of rules or procedures designed to perform a specific task, such as predicting outcomes in a propensity model.
Overfitting: A situation in which a model learns the training data too well, capturing noise or irrelevant patterns that do not generalize to new data.
Underfitting: A situation in which a model is too simplistic to capture the underlying patterns in the training data, resulting in poor predictive performance.
ROC Curve: A graphical representation of a propensity model's discriminative ability, plotting the true positive rate against the false positive rate at different classification thresholds.
AUC (Area Under the Curve): A summary measure of a propensity model's performance, computed as the area under the ROC curve; it indicates the model's ability to discriminate between positive and negative cases.
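AUC has a useful probabilistic reading: it equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (ties counted as half). A minimal sketch, with an illustrative `auc` helper and toy scores:

```python
def auc(scores, labels):
    """AUC = probability a random positive case outranks a random negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count pairwise "wins" for positives; ties count as half a win.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.10, 0.40, 0.35, 0.80]  # model's predicted propensities
labels = [0, 0, 1, 1]              # actual outcomes
print(auc(scores, labels))  # 0.75
```

An AUC of 0.5 corresponds to random ranking, 1.0 to perfect separation of positives from negatives.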
Cross-Validation: A technique for assessing a propensity model's performance by repeatedly splitting the data into training and testing subsets, helping to detect overfitting.
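The splitting step of k-fold cross-validation can be sketched in a few lines: the data indices are partitioned into k folds, and each fold in turn serves as the test set while the rest train the model. The helper name below is illustrative, not from the source.

```python
def kfold_indices(n, k):
    """Partition indices 0..n-1 into k (train, test) splits.

    Each index appears in exactly one test fold across the k splits.
    """
    folds = []
    start = 0
    for i in range(k):
        # Spread any remainder across the first n % k folds.
        size = n // k + (1 if i < n % k else 0)
        test = list(range(start, start + size))
        train = [j for j in range(n) if j < start or j >= start + size]
        folds.append((train, test))
        start += size
    return folds

# With 10 examples and 3 folds, each example is held out exactly once.
for train, test in kfold_indices(10, 3):
    print(len(train), len(test))
```

Real datasets are usually shuffled (or stratified by the target variable) before folding so each fold reflects the overall outcome distribution.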