Technology Cycle:
Data warehousing is a mature technology, with approximately 70 percent of Forrester Research survey respondents indicating they have one in production. Data mining has endured significant consolidation of products since 2000, in spite of initial high-profile success stories, and has sought shelter by encapsulating its algorithms in the recommendation engines of marketing and campaign management software. Statistical inference has been transformed into predictive modeling. As we shall see, the emerging trend in predictive analytics has been enabled by the convergence of a variety of factors.
Technology Hierarchy:
In the technology hierarchy, data warehousing is generally considered an architecture for data management. Of course, when implemented, a data warehouse is a database providing information about (among many other things) which customers are buying or using which products or services, and when and where they are doing so. Data mining is a process for knowledge discovery, primarily relying on generalizations of the "law of large numbers" and the principles of statistics applied to them. Predictive analytics emerges as an application that both builds on and delimits these two predecessor technologies, exploiting large volumes of data and forward-looking inference engines to provide, by definition, predictions about diverse domains.
Methods:
The method of data warehousing is structured query language (SQL) and its various extensions. Data mining employs the "law of large numbers" and the principles of statistics and probability that address decision making under uncertainty. Predictive analytics carries forward the work of the two predecessor domains. Though not a silver bullet, better algorithms in operations research, risk minimization and parallel processing, when combined with hardware improvements and the lessons of usability testing, have resulted in successful new predictive applications emerging in the market. (Again, see Figure 1 on predictive analytics enabling technologies.) Widely divergent domains such as consumer behavior, stocks and bonds, and fraud detection have been attacked with significant success by predictive analytics, on a progressively larger scale and scope. The work of the past decade in building the data warehouse, and especially its closely related techniques, particularly parallel processing, is a key enabling factor. Statistical processing has been useful in data preparation, model construction and model validation. However, it is only with predictive analytics that the inference and knowledge are actually encoded into the model that, in turn, is encapsulated in a business application.
Definition:
This results in the following definition of predictive analytics: methods of directed and undirected knowledge discovery, relying on statistical algorithms, neural networks and optimization research to prescribe (recommend) and predict (future) actions based on discovering, verifying and applying patterns in data to predict the behavior of customers, products, services, market dynamics and other critical business transactions. In general, tools in predictive analytics employ methods to identify and relate independent and dependent variables, the independent variable being "responsible for" the dependent one, with the way in which the variables "relate" providing a pattern and a model for the behavior of the downstream variables.
In data warehousing, the analyst asks a question of the data set with a predefined set of conditions and qualifications, and a known output structure. The traditional data cube addresses: which customers are buying or using which product or service, and when and where are they doing so? Typically, the question is represented in a piece of SQL against a relational database. The business insight needed to craft the question to be answered by the data warehouse remains hidden in a black box - the analyst's head. Data mining gives us tools with which to engage in question formulation based primarily on the "law of large numbers" of classic statistics. Predictive analytics has introduced decision trees, neural networks and other pattern-matching algorithms constrained by data percolation. It is true that in doing so, technologies such as neural networks have themselves become a black box. However, neural networks and related technologies have enabled significant progress in automating, formulating and answering questions not previously envisioned. In science, such a practice is called "hypothesis formation," where the hypothesis is treated as a question to be defined and then confirmed or refuted by the data.
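To make the contrast concrete, here is a minimal sketch of the kind of predefined question a warehouse answers, written in Python with the built-in sqlite3 module; the sales table and its columns are illustrative assumptions, not taken from the text above.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (customer TEXT, product TEXT, region TEXT, sale_date TEXT)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?, ?)",
    [("alice", "widget", "east", "2011-04-01"),
     ("bob", "widget", "west", "2011-04-02"),
     ("alice", "gadget", "east", "2011-04-03")],
)
# The warehouse question: which customers buy which products, and when and where?
query = """
    SELECT customer, product, region, MAX(sale_date) AS last_sale, COUNT(*) AS purchases
    FROM sales
    GROUP BY customer, product, region
    ORDER BY purchases DESC
"""
for row in conn.execute(query):
    print(row)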
Predictive Analytics
Predictive analytics encompasses a variety of techniques from statistics, data mining and game theory that analyze current and historical facts to make predictions about future events.
In business, predictive models exploit patterns found in historical and transactional data to identify risks and opportunities. Models capture relationships among many factors to allow assessment of risk or potential associated with a particular set of conditions, guiding decision making for candidate transactions.
Predictive analytics is used in actuarial science, financial services, insurance, telecommunications, retail, travel, healthcare, pharmaceuticals and other fields.
One of the most well-known applications is credit scoring, which is used throughout financial services. Scoring models process a customer’s credit history, loan application, customer data, etc., in order to rank-order individuals by their likelihood of making future credit payments on time. A well-known example would be the FICO score.
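As an illustrative sketch of the rank-ordering step (not the FICO methodology itself), a simple scoring model in Python with scikit-learn might look as follows; the features and synthetic data are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical features: years of credit history, debt-to-income ratio
X_history = rng.normal(size=(500, 2))
paid_on_time = (X_history[:, 0] - X_history[:, 1] + rng.normal(size=500) > 0).astype(int)

scorer = LogisticRegression().fit(X_history, paid_on_time)   # fit on past outcomes

applicants = rng.normal(size=(5, 2))
p_on_time = scorer.predict_proba(applicants)[:, 1]           # likelihood of paying on time
ranking = np.argsort(-p_on_time)                             # rank-order: best risks first
print(list(zip(ranking.tolist(), p_on_time[ranking].round(3))))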
Definition: Predictive analytics is an area of statistical analysis that deals with extracting information from data and using it to predict future trends and behavior patterns. The core of predictive analytics relies on capturing relationships between explanatory variables and the predicted variables from past occurrences, and exploiting them to predict future outcomes. It is important to note, however, that the accuracy and usability of results will depend greatly on the level of data analysis and the quality of assumptions.
Types: Generally, the term predictive analytics is used to mean predictive modeling, "scoring" data with predictive models, and forecasting. However, people are increasingly using the term to describe related analytical disciplines, such as descriptive modeling and decision modeling or optimization. These disciplines also involve rigorous data analysis, and are widely used in business for segmentation and decision making, but have different purposes and the statistical techniques underlying them vary.
Predictive models: Predictive models analyze past performance to assess how likely a customer is to exhibit a specific behavior in the future in order to improve marketing effectiveness. This category also encompasses models that seek out subtle data patterns to answer questions about customer performance, such as fraud detection models. Predictive models often perform calculations during live transactions, for example, to evaluate the risk or opportunity of a given customer or transaction, in order to guide a decision. With advancement in computing speed, individual agent modeling systems can simulate human behavior or reaction to given stimuli or scenarios. The new term for animating data specifically linked to an individual in a simulated environment is avatar analytics.
Descriptive models: Descriptive models quantify relationships in data in a way that is often used to classify customers or prospects into groups. Unlike predictive models that focus on predicting a single customer behavior (such as credit risk), descriptive models identify many different relationships between customers or products. Descriptive models do not rank-order customers by their likelihood of taking a particular action the way predictive models do. Descriptive models can be used, for example, to categorize customers by their product preferences and life stage. Descriptive modeling tools can be utilized to develop further models that can simulate a large number of individualized agents and make predictions.
Decision models: Decision models describe the relationship between all the elements of a decision — the known data (including results of predictive models), the decision and the forecast results of the decision — in order to predict the results of decisions involving many variables. These models can be used in optimization, maximizing certain outcomes while minimizing others. Decision models are generally used to develop decision logic or a set of business rules that will produce the desired action for every customer or circumstance.
Applications: Although predictive analytics can be put to use in many applications, we outline a few examples where predictive analytics has shown positive impact in recent years.
Analytical customer relationship management (CRM): Analytical Customer Relationship Management is a frequent commercial application of Predictive Analysis. Methods of predictive analysis are applied to customer data to pursue CRM objectives.
Clinical decision support systems: Experts use predictive analysis in health care primarily to determine which patients are at risk of developing certain conditions, like diabetes, asthma, heart disease and other lifetime illnesses. Additionally, sophisticated clinical decision support systems incorporate predictive analytics to support medical decision making at the point of care. A working definition has been proposed by Dr. Robert Hayward of the Centre for Health Evidence: "Clinical Decision Support systems link health observations with health knowledge to influence health choices by clinicians for improved health care."
Collection analytics: Every portfolio has a set of delinquent customers who do not make their payments on time. The financial institution has to undertake collection activities on these customers to recover the amounts due. A lot of collection resources are wasted on customers who are difficult or impossible to recover. Predictive analytics can help optimize the allocation of collection resources by identifying the most effective collection agencies, contact strategies and legal actions for each customer, thus significantly increasing recovery while reducing collection costs.
Cross-sell: Often corporate organizations collect and maintain abundant data (e.g. customer records, sale transactions) and exploiting hidden relationships in the data can provide a competitive advantage to the organization. For an organization that offers multiple products, an analysis of existing customer behavior can lead to efficient cross sell of products. This directly leads to higher profitability per customer and strengthening of the customer relationship. Predictive analytics can help analyze customers’ spending, usage and other behavior, and help cross-sell the right product at the right time.
Customer retention: With the number of competing services available, businesses need to focus efforts on maintaining continuous consumer satisfaction. In such a competitive scenario, consumer loyalty needs to be rewarded and customer attrition needs to be minimized. Businesses tend to respond to customer attrition on a reactive basis, acting only after the customer has initiated the process to terminate service. At this stage, the chance of changing the customer's decision is very slim. Proper application of predictive analytics can lead to a more proactive retention strategy. By frequent examination of a customer's past service usage, service performance, spending and other behavior patterns, predictive models can determine the likelihood of a customer wanting to terminate service sometime in the near future. An intervention with lucrative offers can increase the chance of retaining the customer. Silent attrition, in which a customer slowly but steadily reduces usage, is another problem faced by many companies. Predictive analytics can also predict this behavior accurately, before it occurs, so that the company can take proper actions to increase customer activity.
Direct marketing: When marketing consumer products and services there is the challenge of keeping up with competing products and consumer behavior. Apart from identifying prospects, predictive analytics can also help to identify the most effective combination of product versions, marketing material, communication channels and timing that should be used to target a given consumer. The goal of predictive analytics is typically to lower the cost per order or cost per action.
Fraud detection: Fraud is a big problem for many businesses and can be of various types. Inaccurate credit applications, fraudulent transactions (both offline and online), identity thefts and false insurance claims are some examples of this problem. These problems plague firms all across the spectrum and some examples of likely victims are credit card issuers, insurance companies, retail merchants, manufacturers, business to business suppliers and even services providers. This is an area where a predictive model is often used to help weed out the “bads” and reduce a business's exposure to fraud.
Predictive modeling can also be used to detect financial statement fraud in companies, allowing auditors to gauge a company's relative risk, and to increase substantive audit procedures as needed.
The Internal Revenue Service (IRS) of the United States also uses predictive analytics to try to locate tax fraud.
Portfolio, product or economy level prediction: Often the focus of analysis is not the consumer but the product, portfolio, firm, industry or even the economy. For example, a retailer might be interested in predicting store-level demand for inventory management purposes, or the Federal Reserve Board might be interested in predicting the unemployment rate for the next year. These types of problems can be addressed by predictive analytics using time series techniques (see below).
Underwriting: Many businesses have to account for risk exposure due to their different services and determine the cost needed to cover the risk. For example, auto insurance providers need to accurately determine the amount of premium to charge to cover each automobile and driver. A financial company needs to assess a borrower's potential and ability to pay before granting a loan. For a health insurance provider, predictive analytics can analyze a few years of past medical claims data, as well as lab, pharmacy and other records where available, to predict how expensive an enrollee is likely to be in the future. Predictive analytics can support underwriting of these risks by predicting the chances of illness, default, bankruptcy, etc. Predictive analytics can streamline the process of customer acquisition by predicting the future risk behavior of a customer using application-level data. Predictive analytics in the form of credit scores has reduced the amount of time it takes for loan approvals, especially in the mortgage market, where lending decisions are now made in a matter of hours rather than days or even weeks. Proper predictive analytics can lead to proper pricing decisions, which can help mitigate future risk of default.
Statistical techniques: The approaches and techniques used to conduct predictive analytics can broadly be grouped into regression techniques and machine learning techniques.
Regression techniques: Regression models are the mainstay of predictive analytics. The focus lies on establishing a mathematical equation as a model to represent the interactions between the different variables in consideration. Depending on the situation, there is a wide variety of models that can be applied while performing predictive analytics. Some of them are briefly discussed below.
Linear regression model: The linear regression model analyzes the relationship between the response or dependent variable and a set of independent or predictor variables. This relationship is expressed as an equation that predicts the response variable as a linear function of the parameters. These parameters are adjusted so that a measure of fit is optimized. Much of the effort in model fitting is focused on minimizing the size of the residual, as well as ensuring that it is randomly distributed with respect to the model predictions.
The goal of regression is to select the parameters of the model so as to minimize the sum of the squared residuals. This is referred to as ordinary least squares (OLS) estimation and results in best linear unbiased estimates (BLUE) of the parameters, provided the Gauss-Markov assumptions are satisfied.
Once the model has been estimated, we would like to know whether the predictor variables belong in the model, i.e. whether the estimate of each variable's contribution is reliable. To do this we can check the statistical significance of the model's coefficients, which can be measured using the t-statistic. This amounts to testing whether each coefficient is significantly different from zero. How well the model predicts the dependent variable based on the values of the independent variables can be assessed using the R² statistic. It measures the model's predictive power, i.e. the proportion of the total variation in the dependent variable that is "explained" (accounted for) by variation in the independent variables.
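A minimal sketch of OLS estimation on synthetic data, using Python and statsmodels, reporting the coefficient t-statistics and R² just discussed; the data-generating equation is an assumption chosen for illustration.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))                                 # two predictor variables
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.7, size=200)

model = sm.OLS(y, sm.add_constant(X)).fit()                   # ordinary least squares
print(model.params)     # estimated coefficients
print(model.tvalues)    # t-statistics: is each coefficient different from zero?
print(model.rsquared)   # share of variation in y explained by the predictors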
Discrete choice models: Multivariate regression (above) is generally used when the response variable is continuous and has an unbounded range. Often the response variable may not be continuous but rather discrete. While mathematically it is feasible to apply multivariate regression to discrete ordered dependent variables, some of the assumptions behind the theory of multivariate linear regression no longer hold, and there are other techniques such as discrete choice models which are better suited for this type of analysis. If the dependent variable is discrete, some of those superior methods are logistic regression, multinomial logit and probit models. Logistic regression and probit models are used when the dependent variable is binary.
Logistic regression: In a classification setting, assigning outcome probabilities to observations can be achieved through the use of a logistic model, which transforms information about the binary dependent variable into an unbounded continuous variable and estimates a regular multivariate model (see Allison's Logistic Regression for more on the theory of logistic regression).
The Wald and likelihood-ratio tests are used to test the statistical significance of each coefficient b in the model (analogous to the t-tests used in OLS regression; see above). Common checks of the goodness-of-fit of a classification model include the Hosmer-Lemeshow test and the percentage of cases correctly classified.
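A minimal sketch of a logit fit on synthetic binary data with statsmodels; the coefficients and sample size are assumptions, and the Wald z-statistics and likelihood-ratio p-value come directly from the fitted results object.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = sm.add_constant(rng.normal(size=(300, 2)))
p = 1.0 / (1.0 + np.exp(-(X @ np.array([0.5, 1.5, -1.0]))))    # true probabilities
y = rng.binomial(1, p)                                         # binary dependent variable

logit = sm.Logit(y, X).fit(disp=0)
print(logit.params)      # coefficients on the log-odds scale
print(logit.tvalues)     # Wald z-statistics for each coefficient
print(logit.llr_pvalue)  # likelihood-ratio test against an intercept-only model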
Multinomial logistic regression: An extension of the binary logit model to cases where the dependent variable has more than two categories is the multinomial logit model. In such cases collapsing the data into two categories might not make good sense or may lead to a loss in the richness of the data. The multinomial logit model is the appropriate technique in these cases, especially when the dependent variable categories are not ordered (for example, colors such as red, blue and green). Some authors have extended multinomial regression to include feature selection/importance methods such as random multinomial logit.
Probit regression: Probit models offer an alternative to logistic regression for modeling categorical dependent variables. Even though the outcomes tend to be similar, the underlying distributions are different. Probit models are popular in social sciences like economics.
A good way to understand the key difference between probit and logit models is to assume that there is a latent variable z. We do not observe z but instead observe y, which takes the value 0 or 1 according to whether z crosses a threshold. In the logit model we assume that z follows a logistic distribution; in the probit model we assume that z follows a standard normal distribution. Note that in the social sciences (for example, economics), probit is often used to model situations where the observed variable y is continuous but takes values between 0 and 1.
Logit versus probit: The probit model has been around longer than the logit model. They look almost identical, except that the logistic distribution tends to have slightly fatter tails. One of the reasons the logit model was formulated was that the probit model was difficult to compute because it involved calculating difficult integrals. Modern computing, however, has made this computation fairly simple. The coefficients obtained from the logit and probit models are also fairly close. However, the odds ratio makes the logit model easier to interpret. A brief sketch comparing the two follows the list below.
For practical purposes the only reasons for choosing the probit model over the logistic model would be:
There is a strong belief that the underlying distribution is normal
The actual event is not a binary outcome (e.g. Bankrupt/not bankrupt) but a proportion (e.g. Proportion of population at different debt levels).
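A minimal sketch comparing probit and logit fits on the same synthetic binary data with statsmodels; the coefficients differ in scale, but the fitted probabilities are usually very close. The data-generating values are assumptions.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
X = sm.add_constant(rng.normal(size=(400, 1)))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ np.array([0.3, 1.2])))))

logit_fit = sm.Logit(y, X).fit(disp=0)
probit_fit = sm.Probit(y, X).fit(disp=0)

print(logit_fit.params, probit_fit.params)   # logit coefficients are roughly 1.6-1.8x the probit ones
print(np.abs(logit_fit.predict(X) - probit_fit.predict(X)).max())   # fitted probabilities nearly match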
Time series models: Time series models are used for predicting or forecasting the future behavior of variables. These models account for the fact that data points taken over time may have an internal structure (such as autocorrelation, trend or seasonal variation) that should be accounted for. As a result, standard regression techniques cannot be applied directly to time series data, and methodology has been developed to decompose the series into trend, seasonal and cyclical components. Modeling the dynamic path of a variable can improve forecasts, since the predictable component of the series can be projected into the future.
Time series models estimate difference equations containing stochastic components. Two commonly used forms of these models are autoregressive (AR) models and moving average (MA) models. The Box-Jenkins methodology (1976), developed by George Box and G. M. Jenkins, combines the AR and MA models to produce the ARMA (autoregressive moving average) model, which is the cornerstone of stationary time series analysis. ARIMA (autoregressive integrated moving average) models, on the other hand, are used to describe non-stationary time series. Box and Jenkins suggest differencing a non-stationary time series to obtain a stationary series to which an ARMA model can be applied. Non-stationary time series have a pronounced trend and do not have a constant long-run mean or variance.
Box and Jenkins proposed a three-stage methodology: model identification, estimation and validation. The identification stage involves determining whether the series is stationary and whether seasonality is present, by examining plots of the series and its autocorrelation and partial autocorrelation functions. In the estimation stage, models are estimated using non-linear time series or maximum likelihood estimation procedures. Finally, the validation stage involves diagnostic checking, such as plotting the residuals to detect outliers and assess the model fit.
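A minimal sketch of the three Box-Jenkins stages on a synthetic trending series, using Python and statsmodels; the series, the ARIMA(1,1,1) order and the diagnostics shown are assumptions for illustration.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import adfuller, acf

rng = np.random.default_rng(4)
y = np.cumsum(rng.normal(size=300)) + 0.05 * np.arange(300)   # trending, non-stationary series

# Identification: unit-root test on the levels, ACF of the first differences
print("ADF p-value (levels):", adfuller(y)[1])
print("ACF of differences:", acf(np.diff(y), nlags=5).round(2))

# Estimation: a single difference handles the trend, so fit ARIMA(1,1,1)
result = ARIMA(y, order=(1, 1, 1)).fit()

# Validation: residuals should resemble white noise, then project the series forward
print("Residual mean:", round(float(result.resid.mean()), 3))
print(result.forecast(steps=5))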
In recent years, time series models have become more sophisticated and attempt to model conditional heteroskedasticity; ARCH (autoregressive conditional heteroskedasticity) and GARCH (generalized autoregressive conditional heteroskedasticity) models are frequently used for financial time series. In addition, time series models are used to understand inter-relationships among economic variables represented by systems of equations, using VAR (vector autoregression) and structural VAR models.
Survival or duration analysis: Survival analysis is another name for time to event analysis. These techniques were primarily developed in the medical and biological sciences, but they are also widely used in the social sciences like economics, as well as in engineering (reliability and failure time analysis).
Censoring and non-normality, which are characteristic of survival data, generate difficulty when trying to analyze the data using conventional statistical models such as multiple linear regression. The normal distribution, being a symmetric distribution, takes positive as well as negative values, but duration by its very nature cannot be negative and therefore normality cannot be assumed when dealing with duration/survival data. Hence the normality assumption of regression models is violated.
The assumption is that if the data were not censored it would be representative of the population of interest. In survival analysis, censored observations arise whenever the dependent variable of interest represents the time to a terminal event, and the duration of the study is limited in time.
An important concept in survival analysis is the hazard rate, defined as the probability that the event will occur at time t conditional on surviving until time t. Another concept related to the hazard rate is the survival function which can be defined as the probability of surviving to time t.
Most models try to model the hazard rate by choosing an underlying distribution based on the shape of the hazard function. A distribution whose hazard function slopes upward is said to have positive duration dependence, a decreasing hazard shows negative duration dependence, and a constant hazard indicates a memoryless process, usually characterized by the exponential distribution. Some of the distributional choices in survival models are the generalized F, gamma, Weibull, log-normal, inverse Gaussian and exponential distributions, all of which are defined for a non-negative random variable.
Duration models can be parametric, non-parametric or semi-parametric. Two commonly used approaches are the Kaplan-Meier estimator (non-parametric) and the Cox proportional hazards model (semi-parametric).
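A minimal sketch of both approaches in Python, assuming the third-party lifelines package (not named in the text; any survival library would do) and a synthetic data set with censoring.

import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(5)
n = 200
data = pd.DataFrame({
    "age": rng.normal(50, 10, n),
    "duration": rng.exponential(12, n),      # time until the terminal event
    "observed": rng.binomial(1, 0.7, n),     # 0 = censored before the event occurred
})

kmf = KaplanMeierFitter().fit(data["duration"], event_observed=data["observed"])
print(kmf.median_survival_time_)             # non-parametric survival summary

cph = CoxPHFitter().fit(data, duration_col="duration", event_col="observed")
print(cph.params_)                           # log hazard ratios for the covariates (here: age)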
Classification and regression trees: Classification and regression trees (CART) is a non-parametric decision tree learning technique that produces either classification or regression trees, depending on whether the dependent variable is categorical or numeric, respectively.
Decision trees are formed by a collection of rules based on variables in the modeling data set:
Rules based on variables’ values are selected to get the best split to differentiate observations based on the dependent variable
Once a rule is selected and splits a node into two, the same process is applied to each “child” node (i.e. it is a recursive procedure)
Splitting stops when CART detects no further gain can be made, or some pre-set stopping rules are met. (Alternatively, the data is split as much as possible and then the tree is later pruned.)
Each branch of the tree ends in a terminal node. Each observation falls into one and exactly one terminal node, and each terminal node is uniquely defined by a set of rules.
A very popular method for predictive analytics is Leo Breiman's Random forests or derived versions of this technique like Random multinomial logit.
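A minimal sketch of a classification tree and a random forest on synthetic data with scikit-learn; the data and hyperparameters are assumptions, and the printed rules correspond to the recursive splits described above.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)        # categorical target -> classification tree

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree))                              # the recursive splitting rules

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(forest.predict_proba(X[:5])[:, 1])              # scores from an ensemble of trees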
Multivariate adaptive regression splines: Multivariate adaptive regression splines (MARS) is a non-parametric technique that builds flexible models by fitting piecewise linear regressions.
An important concept associated with regression splines is that of a knot. A knot is where one local regression model gives way to another; it is the point of intersection between two splines.
In MARS, basis functions are the tool used for generalizing the search for knots. Basis functions are a set of functions used to represent the information contained in one or more variables. The MARS model almost always creates the basis functions in pairs.
The MARS approach deliberately overfits the model and then prunes back to reach an optimal model. The algorithm is computationally very intensive, and in practice an upper limit on the number of basis functions must be specified.
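A minimal sketch of the core MARS idea, not the full algorithm: create a mirrored pair of hinge basis functions around one candidate knot and fit their coefficients by least squares. The knot location and synthetic data are assumptions.

import numpy as np

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 10, 200))
y = np.where(x < 4, x, 4 + 3 * (x - 4)) + rng.normal(scale=0.5, size=200)

knot = 4.0                                            # candidate knot location
basis = np.column_stack([
    np.ones_like(x),                                  # intercept
    np.maximum(0, x - knot),                          # hinge active to the right of the knot
    np.maximum(0, knot - x),                          # mirrored hinge active to the left
])
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)      # least-squares fit of the spline pair
print(coef.round(2))                                  # roughly [4, 3, -1] for this synthetic data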
Machine learning techniques: Machine learning, a branch of artificial intelligence, was originally concerned with developing techniques that enable computers to learn. Today, since it includes a number of advanced statistical methods for regression and classification, it finds application in a wide variety of fields including medical diagnostics, credit card fraud detection, face and speech recognition and analysis of the stock market. In certain applications it is sufficient to directly predict the dependent variable without focusing on the underlying relationships between variables. In other cases, the underlying relationships can be very complex and the mathematical form of the dependencies unknown. For such cases, machine learning techniques emulate human cognition and learn from training examples to predict future events.
A brief discussion of some of these methods used commonly for predictive analytics is provided below. A detailed study of machine learning can be found in Mitchell (1997).
Neural networks: Neural networks are sophisticated nonlinear modeling techniques that are able to model complex functions. They can be applied to problems of prediction, classification or control in a wide spectrum of fields such as finance, cognitive psychology/neuroscience, medicine, engineering and physics.
Neural networks are used when the exact nature of the relationship between inputs and output is not known. A key feature of neural networks is that they learn the relationship between inputs and output through training. There are two types of training, supervised and unsupervised, with supervised training being the most common.
Some examples of neural network training techniques are backpropagation, quick propagation, conjugate gradient descent, projection operator and Delta-Bar-Delta. The multilayer perceptron is the most common supervised architecture, while unsupervised architectures include Kohonen self-organizing networks and Hopfield networks.
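A minimal sketch of supervised training of a small feed-forward network (multilayer perceptron) with scikit-learn; the layer sizes and synthetic nonlinear target are assumptions.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(8)
X = rng.normal(size=(500, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1).astype(int)     # a nonlinear input-output relationship

net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
net.fit(X, y)                                          # supervised training from labelled examples
print(net.score(X, y))                                 # training accuracy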
Radial basis functions: A radial basis function (RBF) is a function which has built into it a distance criterion with respect to a center. Such functions can be used very efficiently for interpolation and for smoothing of data. Radial basis functions have been applied in the area of neural networks, where they are used as a replacement for the sigmoidal transfer function. Such networks have three layers: the input layer, the hidden layer with the RBF non-linearity and a linear output layer. The most popular choice for the non-linearity is the Gaussian. RBF networks have the advantage of not getting locked into local minima in the way that feed-forward networks such as the multilayer perceptron do.
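A minimal sketch of an RBF network built with numpy only: a Gaussian hidden layer around fixed centers and a linear output layer fitted by least squares. The centers, width and data are assumptions.

import numpy as np

rng = np.random.default_rng(9)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

centers = np.linspace(-3, 3, 10).reshape(-1, 1)        # hidden-unit centers
width = 0.8

def rbf_layer(inputs):
    # Gaussian non-linearity based on the distance to each center
    d2 = ((inputs[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

H = rbf_layer(X)                                       # hidden-layer activations
w, *_ = np.linalg.lstsq(H, y, rcond=None)              # linear output layer
print(np.abs(H @ w - y).mean())                        # mean absolute training error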
Support vector machines: Support vector machines (SVMs) are used to detect and exploit complex patterns in data by clustering, classifying and ranking the data. They are learning machines that are used to perform binary classifications and regression estimations. They commonly use kernel-based methods to apply linear classification techniques to non-linear classification problems. There are a number of kernel choices, such as linear, polynomial and sigmoid.
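A minimal sketch of kernel SVM classification with scikit-learn, trying several kernel choices on the same synthetic, non-linearly-separable data (which is an assumption for illustration).

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(10)
X = rng.normal(size=(300, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)                # not linearly separable

for kernel in ("linear", "poly", "rbf", "sigmoid"):    # different kernel choices
    clf = SVC(kernel=kernel).fit(X, y)
    print(kernel, clf.score(X, y))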
Naïve Bayes: Naïve Bayes, based on Bayes' rule of conditional probability, is used for performing classification tasks. Naïve Bayes assumes the predictors are statistically independent, which makes it an effective classification tool that is easy to interpret. It is best employed when faced with the "curse of dimensionality", i.e. when the number of predictors is very high.
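A minimal sketch of a Gaussian naive Bayes classifier with scikit-learn on synthetic data with many predictors; the data itself is an assumption.

import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(11)
X = rng.normal(size=(400, 20))                         # many predictors
y = (X[:, 0] + X[:, 1] > 0).astype(int)

nb = GaussianNB().fit(X, y)                            # treats predictors as conditionally independent
print(nb.predict_proba(X[:3]))                         # class probabilities per observation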
K-nearest neighbours: The k-nearest neighbour algorithm (kNN) belongs to the class of pattern recognition statistical methods. The method does not impose a priori any assumptions about the distribution from which the modeling sample is drawn. It involves a training set with both positive and negative cases. A new sample is classified by calculating the distance to the nearest training case; the sign of that point then determines the classification of the sample. In the k-nearest neighbour classifier, the k nearest points are considered and the sign of the majority is used to classify the sample. The performance of the kNN algorithm is influenced by three main factors: (1) the distance measure used to locate the nearest neighbours; (2) the decision rule used to derive a classification from the k nearest neighbours; and (3) the number of neighbours used to classify the new sample. It can be proved that, unlike other methods, this method is universally asymptotically convergent: as the size of the training set increases, if the observations are independent and identically distributed (i.i.d.), regardless of the distribution from which the sample is drawn, the predicted class will converge to the class assignment that minimizes misclassification error.
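A minimal sketch of k-nearest-neighbour classification with scikit-learn, making the three factors above explicit as parameters (metric, weights and n_neighbors); the data is synthetic and assumed.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(12)
X = rng.normal(size=(300, 2))
y = (X[:, 0] > 0).astype(int)

knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean", weights="uniform")
knn.fit(X, y)                                          # "training" essentially stores the sample
print(knn.predict(rng.normal(size=(3, 2))))            # majority vote of the 5 nearest points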
Geospatial predictive modeling: Conceptually, geospatial predictive modeling is rooted in the principle that the occurrences of the events being modeled are limited in distribution. Occurrences of events are neither uniform nor random in distribution; there are spatial environment factors (infrastructure, sociocultural, topographic, etc.) that constrain and influence where events occur. Geospatial predictive modeling attempts to describe those constraints and influences by spatially correlating occurrences of historical geospatial locations with the environmental factors that represent them. It is a process for analyzing events through a geographic filter in order to make statements of likelihood for event occurrence or emergence.
Tools: There are numerous tools available in the marketplace which help with the execution of predictive analytics. These range from those which need very little user sophistication to those that are designed for the expert practitioner. The difference between these tools is often in the level of customization and heavy data lifting allowed.
In an attempt to provide a standard language for expressing predictive models, the Predictive Model Markup Language (PMML) has been proposed. This XML-based language gives different tools a way to define predictive models and to share those models between PMML-compliant applications. PMML 4.0 was released in June 2009.
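A minimal sketch of exporting a fitted scikit-learn model to PMML, assuming the third-party sklearn2pmml package (one of several PMML exporters, not named in the text; it also requires a Java runtime to be installed).

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

rng = np.random.default_rng(13)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)

pipeline = PMMLPipeline([("classifier", DecisionTreeClassifier(max_depth=3))])
pipeline.fit(X, y)
sklearn2pmml(pipeline, "model.pmml")    # the file can then be loaded by any PMML-compliant scorer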
References:
Devroye, Luc; Györfi, László; Lugosi, Gábor (1996). A Probabilistic Theory of Pattern Recognition. New York: Springer-Verlag.
Davies, John R.; Coggeshall, Stephen V.; Jones, Roger D.; Schutzer, Daniel, "Intelligent Security Systems," in Freedman, Roy S.; Klein, Robert A.; Lederman, Jess, eds. (1995). Artificial Intelligence in the Capital Markets. Chicago: Irwin. ISBN 1-55738-811-3.
Agresti, Alan (2002). Categorical Data Analysis. Hoboken: John Wiley and Sons. ISBN 0-471-36093-7.
Enders, Walter (2004). Applied Econometric Time Series. Hoboken: John Wiley and Sons.
Greene, William (2000). Econometric Analysis. Prentice Hall. ISBN 0-13-013297-7.
Mitchell, Tom (1997). Machine Learning. New York: McGraw-Hill. ISBN 0-07-042807-7.
Tukey, John (1977). Exploratory Data Analysis. New York: Addison-Wesley. ISBN 0-201-07616-0.
Guidère, Mathieu; Howard, Newton; Argamon, Shlomo (2009). Rich Language Analysis for Counterterrorism. Berlin, London, New York: Springer-Verlag. ISBN 978-3-642-01140-5.
In business, predictive models exploit patterns found in historical and transactional data to identify risks and opportunities. Models capture relationships among many factors to allow assessment of risk or potential associated with a particular set of conditions, guiding decision making for candidate transactions.
Predictive analytics is used in actuarial science, financial services, insurance, telecommunications, retail, travel, healthcare, pharmaceuticals and other fields.
One of the most well-known applications is credit scoring, which is used throughout financial services. Scoring models process a customer’s credit history, loan application, customer data, etc., in order to rank-order individuals by their likelihood of making future credit payments on time. A well-known example would be the FICO score.
DefinitionPredictive analytics is an area of statistical analysis that deals with extracting information from data and using it to predict future trends and behavior patterns. The core of predictive analytics relies on capturing relationships between explanatory variables and the predicted variables from past occurrences, and exploiting it to predict future outcomes. It is important to note, however, that the accuracy and usability of results will depend greatly on the level of data analysis and the quality of assumptions.
Types: Generally, the term predictive analytics is used to mean predictive modeling, "scoring" data with predictive models, and forecasting. However, people are increasingly using the term to describe related analytical disciplines, such as descriptive modeling and decision modeling or optimization. These disciplines also involve rigorous data analysis, and are widely used in business for segmentation and decision making, but have different purposes and the statistical techniques underlying them vary.
Predictive models: Predictive models analyze past performance to assess how likely a customer is to exhibit a specific behavior in the future in order to improve marketing effectiveness. This category also encompasses models that seek out subtle data patterns to answer questions about customer performance, such as fraud detection models. Predictive models often perform calculations during live transactions, for example, to evaluate the risk or opportunity of a given customer or transaction, in order to guide a decision. With advancement in computing speed, individual agent modeling systems can simulate human behavior or reaction to given stimuli or scenarios. The new term for animating data specifically linked to an individual in a simulated environment is avatar analytics.
Descriptive models: Descriptive models quantify relationships in data in a way that is often used to classify customers or prospects into groups. Unlike predictive models that focus on predicting a single customer behavior (such as credit risk), descriptive models identify many different relationships between customers or products. Descriptive models do not rank-order customers by their likelihood of taking a particular action the way predictive models do. Descriptive models can be used, for example, to categorize customers by their product preferences and life stage. Descriptive modeling tools can be utilized to develop further models that can simulate large number of individualized agents and make predictions.
Decision models: Decision models describe the relationship between all the elements of a decision — the known data (including results of predictive models), the decision and the forecast results of the decision — in order to predict the results of decisions involving many variables. These models can be used in optimization, maximizing certain outcomes while minimizing others. Decision models are generally used to develop decision logic or a set of business rules that will produce the desired action for every customer or circumstance.
Applications: Although predictive analytics can be put to use in many applications, we outline a few examples where predictive analytics has shown positive impact in recent years.
Analytical customer relationship management (CRM): Analytical Customer Relationship Management is a frequent commercial application of Predictive Analysis. Methods of predictive analysis are applied to customer data to pursue CRM objectives.
Clinical decision support systems: Experts use predictive analysis in health care primarily to determine which patients are at risk of developing certain conditions, like diabetes, asthma, heart disease and other lifetime illnesses. Additionally, sophisticated clinical decision support systems incorporate predictive analytics to support medical decision making at the point of care. A working definition has been proposed by Dr. Robert Hayward of the Centre for Health Evidence: "Clinical Decision Support systems link health observations with health knowledge to influence health choices by clinicians for improved health care."
Collection analytics: Every portfolio has a set of delinquent customers who do not make their payments on time. The financial institution has to undertake collection activities on these customers to recover the amounts due. A lot of collection resources are wasted on customers who are difficult or impossible to recover. Predictive analytics can help optimize the allocation of collection resources by identifying the most effective collection agencies, contact strategies, legal actions and other strategies to each customer, thus significantly increasing recovery at the same time reducing collection costs.
Cross-sell: Often corporate organizations collect and maintain abundant data (e.g. customer records, sale transactions) and exploiting hidden relationships in the data can provide a competitive advantage to the organization. For an organization that offers multiple products, an analysis of existing customer behavior can lead to efficient cross sell of products. This directly leads to higher profitability per customer and strengthening of the customer relationship. Predictive analytics can help analyze customers’ spending, usage and other behavior, and help cross-sell the right product at the right time.
Customer retention: With the number of competing services available, businesses need to focus efforts on maintaining continuous consumer satisfaction. In such a competitive scenario, consumer loyalty needs to be rewarded and customer attrition needs to be minimized. Businesses tend to respond to customer attrition on a reactive basis, acting only after the customer has initiated the process to terminate service. At this stage, the chance of changing the customer’s decision is almost impossible. Proper application of predictive analytics can lead to a more proactive retention strategy. By a frequent examination of a customer’s past service usage, service performance, spending and other behavior patterns, predictive models can determine the likelihood of a customer wanting to terminate service sometime in the near future. An intervention with lucrative offers can increase the chance of retaining the customer. Silent attrition is the behavior of a customer to slowly but steadily reduce usage and is another problem faced by many companies. Predictive analytics can also predict this behavior accurately and before it occurs, so that the company can take proper actions to increase customer activity.
Direct marketing: When marketing consumer products and services there is the challenge of keeping up with competing products and consumer behavior. Apart from identifying prospects, predictive analytics can also help to identify the most effective combination of product versions, marketing material, communication channels and timing that should be used to target a given consumer. The goal of predictive analytics is typically to lower the cost per order or cost per action.
Fraud detection: Fraud is a big problem for many businesses and can be of various types. Inaccurate credit applications, fraudulent transactions (both offline and online), identity thefts and false insurance claims are some examples of this problem. These problems plague firms all across the spectrum and some examples of likely victims are credit card issuers, insurance companies, retail merchants, manufacturers, business to business suppliers and even services providers. This is an area where a predictive model is often used to help weed out the “bads” and reduce a business's exposure to fraud.
Predictive modeling can also be used to detect financial statement fraud in companies, allowing auditors to gauge a company's relative risk, and to increase substantive audit procedures as needed.
The Internal Revenue Service (IRS) of the United States also uses predictive analytics to try to locate tax fraud.
Portfolio, product or economy level prediction: Often the focus of analysis is not the consumer but the product, portfolio, firm, industry or even the economy. For example a retailer might be interested in predicting store level demand for inventory management purposes. Or the Federal Reserve Board might be interested in predicting the unemployment rate for the next year. These type of problems can be addressed by predictive analytics using Time Series techniques (see below).
Underwriting: Many businesses have to account for risk exposure due to their different services and determine the cost needed to cover the risk. For example, auto insurance providers need to accurately determine the amount of premium to charge to cover each automobile and driver. A financial company needs to assess a borrower’s potential and ability to pay before granting a loan. For a health insurance provider, predictive analytics can analyze a few years of past medical claims data, as well as lab, pharmacy and other records where available, to predict how expensive an enrollee is likely to be in the future. Predictive analytics can help underwriting of these quantities by predicting the chances of illness, default, bankruptcy, etc. Predictive analytics can streamline the process of customer acquisition, by predicting the future risk behavior of a customer using application level data. Predictive analytics in the form of credit scores have reduced the amount of time it takes for loan approvals, especially in the mortgage market where lending decisions are now made in a matter of hours rather than days or even weeks. Proper predictive analytics can lead to proper pricing decisions, which can help mitigate future risk of default.
Statistical techniques: The approaches and techniques used to conduct predictive analytics can broadly be grouped into regression techniques and machine learning techniques.
Regression techniques: Regression models are the mainstay of predictive analytics. The focus lies on establishing a mathematical equation as a model to represent the interactions between the different variables in consideration. Depending on the situation, there is a wide variety of models that can be applied while performing predictive analytics. Some of them are briefly discussed below.
Linear regression model: The linear regression model analyzes the relationship between the response or dependent variable and a set of independent or predictor variables. This relationship is expressed as an equation that predicts the response variable as a linear function of the parameters. These parameters are adjusted so that a measure of fit is optimized. Much of the effort in model fitting is focused on minimizing the size of the residual, as well as ensuring that it is randomly distributed with respect to the model predictions.
The goal of regression is to select the parameters of the model so as to minimize the sum of the squared residuals. This is referred to as ordinary least squares (OLS) estimation and results in best linear unbiased estimates (BLUE) of the parameters if and only if the Gauss-Markov assumptions are satisfied.
Once the model has been estimated we would be interested to know if the predictor variables belong in the model – i.e. is the estimate of each variable’s contribution reliable? To do this we can check the statistical significance of the model’s coefficients which can be measured using the t-statistic. This amounts to testing whether the coefficient is significantly different from zero. How well the model predicts the dependent variable based on the value of the independent variables can be assessed by using the R² statistic. It measures predictive power of the model i.e. the proportion of the total variation in the dependent variable that is “explained” (accounted for) by variation in the independent variables.
Discrete choice models: Multivariate regression (above) is generally used when the response variable is continuous and has an unbounded range. Often the response variable may not be continuous but rather discrete. While mathematically it is feasible to apply multivariate regression to discrete ordered dependent variables, some of the assumptions behind the theory of multivariate linear regression no longer hold, and there are other techniques such as discrete choice models which are better suited for this type of analysis. If the dependent variable is discrete, some of those superior methods are logistic regression, multinomial logit and probit models. Logistic regression and probit models are used when the dependent variable is binary.
Logistic regression: For more details on this topic, see logistic regression.
In a classification setting, assigning outcome probabilities to observations can be achieved through the use of a logistic model, which is basically a method which transforms information about the binary dependent variable into an unbounded continuous variable and estimates a regular multivariate model (See Allison’s Logistic Regression for more information on the theory of Logistic Regression).
The Wald and likelihood-ratio test are used to test the statistical significance of each coefficient b in the model (analogous to the t tests used in OLS regression; see above). A test assessing the goodness-of-fit of a classification model is the –.
Multinomial logistic regression: An extension of the binary logit model to cases where the dependent variable has more than 2 categories is the multinomial logit model. In such cases collapsing the data into two categories might not make good sense or may lead to loss in the richness of the data. The multinomial logit model is the appropriate technique in these cases, especially when the dependent variable categories are not ordered (for examples colors like red, blue, green). Some authors have extended multinomial regression to include feature selection/importance methods such as Random multinomial logit.
Probit regressionProbit models offer an alternative to logistic regression for modeling categorical dependent variables. Even though the outcomes tend to be similar, the underlying distributions are different. Probit models are popular in social sciences like economics.
A good way to understand the key difference between probit and logit models, is to assume that there is a latent variable z.
We do not observe z but instead observe y which takes the value 0 or 1. In the logit model we assume that y follows a logistic distribution. In the probit model we assume that y follows a standard normal distribution. Note that in social sciences (example economics), probit is often used to model situations where the observed variable y is continuous but takes values between 0 and 1.
Logit versus probit: The Probit model has been around longer than the logit model. They look identical, except that the logistic distribution tends to be a little flat tailed. One of the reasons the logit model was formulated was that the probit model was difficult to compute because it involved calculating difficult integrals. Modern computing however has made this computation fairly simple. The coefficients obtained from the logit and probit model are also fairly close. However, the odds ratio makes the logit model easier to interpret.
For practical purposes the only reasons for choosing the probit model over the logistic model would be:
There is a strong belief that the underlying distribution is normal
The actual event is not a binary outcome (e.g. Bankrupt/not bankrupt) but a proportion (e.g. Proportion of population at different debt levels).
Time series models: Time series models are used for predicting or forecasting the future behavior of variables. These models account for the fact that data points taken over time may have an internal structure (such as autocorrelation, trend or seasonal variation) that should be accounted for. As a result standard regression techniques cannot be applied to time series data and methodology has been developed to decompose the trend, seasonal and cyclical component of the series. Modeling the dynamic path of a variable can improve forecasts since the predictable component of the series can be projected into the future.
Time series models estimate difference equations containing stochastic components. Two commonly used forms of these models are autoregressive models (AR) and moving average (MA) models. The Box-Jenkins methodology (1976) developed by George Box and G.M. Jenkins combines the AR and MA models to produce the ARMA (autoregressive moving average) model which is the cornerstone of stationary time series analysis. ARIMA (autoregressive integrated moving average models) on the other hand are used to describe non-stationary time series. Box and Jenkins suggest differencing a non stationary time series to obtain a stationary series to which an ARMA model can be applied. Non stationary time series have a pronounced trend and do not have a constant long-run mean or variance.
Box and Jenkins proposed a three stage methodology which includes: model identification, estimation and validation. The identification stage involves identifying if the series is stationary or not and the presence of seasonality by examining plots of the series, autocorrelation and partial autocorrelation functions. In the estimation stage, models are estimated using non-linear time series or maximum likelihood estimation procedures. Finally the validation stage involves diagnostic checking such as plotting the residuals to detect outliers and evidence of model fit.
In recent years time series models have become more sophisticated and attempt to model conditional heteroskedasticity with models such as ARCH (autoregressive conditional heteroskedasticity) and GARCH (generalized autoregressive conditional heteroskedasticity) models frequently used for financial time series. In addition time series models are also used to understand inter-relationships among economic variables represented by systems of equations using VAR (vector autoregression) and structural VAR models.
Survival or duration analysis: Survival analysis is another name for time to event analysis. These techniques were primarily developed in the medical and biological sciences, but they are also widely used in the social sciences like economics, as well as in engineering (reliability and failure time analysis).
Censoring and non-normality, which are characteristic of survival data, generate difficulty when trying to analyze the data using conventional statistical models such as multiple linear regression. The normal distribution, being a symmetric distribution, takes positive as well as negative values, but duration by its very nature cannot be negative and therefore normality cannot be assumed when dealing with duration/survival data. Hence the normality assumption of regression models is violated.
The assumption is that if the data were not censored it would be representative of the population of interest. In survival analysis, censored observations arise whenever the dependent variable of interest represents the time to a terminal event, and the duration of the study is limited in time.
An important concept in survival analysis is the hazard rate, defined as the probability that the event will occur at time t conditional on surviving until time t. Another concept related to the hazard rate is the survival function which can be defined as the probability of surviving to time t.
Most models try to model the hazard rate by choosing the underlying distribution depending on the shape of the hazard function. A distribution whose hazard function slopes upward is said to have positive duration dependence, a decreasing hazard shows negative duration dependence whereas constant hazard is a process with no memory usually characterized by the exponential distribution. Some of the distributional choices in survival models are: F, gamma, Weibull, log normal, inverse normal, exponential etc. All these distributions are for a non-negative random variable.
Duration models can be parametric, non-parametric or semi-parametric. Some of the models commonly used are Kaplan-Meier and Cox proportional hazard model (non parametric).
Classification and regression trees: Classification and regression trees (CART) is a non-parametric decision tree learning technique that produces either classification or regression trees, depending on whether the dependent variable is categorical or numeric, respectively.
Decision trees are formed by a collection of rules based on variables in the modeling data set:
Rules based on variables’ values are selected to get the best split to differentiate observations based on the dependent variable
Once a rule is selected and splits a node into two, the same process is applied to each “child” node (i.e. it is a recursive procedure)
Splitting stops when CART detects no further gain can be made, or some pre-set stopping rules are met. (Alternatively, the data is split as much as possible and then the tree is later pruned.)
Each branch of the tree ends in a terminal node. Each observation falls into one and exactly one terminal node, and each terminal node is uniquely defined by a set of rules.
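A short scikit-learn sketch of such a tree is shown below; the library, the Iris sample data and the depth limit are illustrative assumptions rather than part of the CART description above.

# Sketch: a small classification tree. scikit-learn is an assumed dependency.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# max_depth acts as a pre-set stopping rule; cost-complexity pruning (ccp_alpha)
# corresponds to the grow-fully-then-prune alternative mentioned above.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

print(export_text(tree))       # the recursive splitting rules
print(tree.predict(X[:5]))     # each observation lands in exactly one terminal node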
A very popular method for predictive analytics is Leo Breiman's random forest, or derived versions of this technique such as the random multinomial logit.
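Continuing the same illustrative setup, a random forest can be fitted in a few lines; the number of trees and the cross-validation scheme are arbitrary choices.

# Sketch: random forest on the same illustrative data (scikit-learn assumed).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Each tree is grown on a bootstrap sample with random feature subsets;
# averaging their votes typically reduces the variance of a single tree.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(forest, X, y, cv=5).mean())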
Multivariate adaptive regression splines: Multivariate adaptive regression splines (MARS) is a non-parametric technique that builds flexible models by fitting piecewise linear regressions.
An important concept associated with regression splines is that of a knot: the point where one local regression model gives way to another, and thus the point of intersection between two splines.
In MARS, basis functions are the tool used to generalize the search for knots. Basis functions are a set of functions used to represent the information contained in one or more variables; the MARS model almost always creates them in pairs.
The MARS approach deliberately overfits the model and then prunes it back to obtain the optimal model. The algorithm is computationally intensive, and in practice an upper limit on the number of basis functions must be specified.
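The paired, knot-centred basis functions can be illustrated without any MARS library at all: the NumPy sketch below fixes a single knot by hand and fits the resulting hinge functions by least squares, whereas a real MARS implementation would search for the knots adaptively and then prune.

# Sketch: MARS-style hinge basis functions around a hand-picked knot (NumPy only).
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
# Piecewise-linear ground truth with a kink at x = 4, plus noise.
y = np.where(x < 4, 1.5 * x, 6 + 0.2 * (x - 4)) + rng.normal(scale=0.5, size=x.size)

knot = 4.0
basis = np.column_stack([
    np.ones_like(x),
    np.maximum(0, x - knot),     # (x - knot)+  : one half of the mirrored pair
    np.maximum(0, knot - x),     # (knot - x)+  : the other half
])

coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
print("fitted coefficients:", coef)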
Machine learning techniques: Machine learning, a branch of artificial intelligence, was originally concerned with developing techniques that enable computers to learn. Today, since it includes a number of advanced statistical methods for regression and classification, it finds application in a wide variety of fields, including medical diagnostics, credit card fraud detection, face and speech recognition and analysis of the stock market. In some applications it is sufficient to predict the dependent variable directly, without focusing on the underlying relationships between variables; in others, the underlying relationships can be very complex and the mathematical form of the dependencies unknown. For such cases, machine learning techniques emulate human cognition and learn from training examples to predict future events.
A brief discussion of some of these methods used commonly for predictive analytics is provided below. A detailed study of machine learning can be found in Mitchell (1997).
Neural networks: Neural networks are sophisticated nonlinear modeling techniques that are able to model complex functions. They can be applied to problems of prediction, classification or control in a wide spectrum of fields such as finance, cognitive psychology/neuroscience, medicine, engineering and physics.
Neural networks are used when the exact nature of the relationship between inputs and output is not known. A key feature of neural networks is that they learn this relationship through training. There are two types of training, supervised and unsupervised, with supervised training being the most common.
Examples of neural network training techniques are backpropagation, quick propagation, conjugate gradient descent, projection operator and Delta-Bar-Delta. The multilayer perceptron is the most common supervised architecture, while Kohonen self-organizing maps and Hopfield networks are examples of unsupervised architectures.
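A small supervised example is sketched below with scikit-learn's multilayer perceptron; the architecture, solver and synthetic data are placeholders chosen only to show the training step.

# Sketch: supervised training of a multilayer perceptron (scikit-learn assumed).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The network learns the input-output relationship from labelled examples;
# 'adam' is a gradient-based optimiser.
net = MLPClassifier(hidden_layer_sizes=(20, 10), solver="adam",
                    max_iter=1000, random_state=0)
net.fit(X_tr, y_tr)
print("test accuracy:", net.score(X_te, y_te))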
Radial basis functions: A radial basis function (RBF) is a function which has a built-in distance criterion with respect to a center. Such functions can be used very efficiently for interpolation and for smoothing of data. Radial basis functions have been applied in neural networks as a replacement for the sigmoidal transfer function. Such networks have three layers: the input layer, a hidden layer with the RBF non-linearity and a linear output layer. The most popular choice for the non-linearity is the Gaussian. Because the output-layer weights can be fitted by linear least squares, RBF networks are less prone to getting stuck in local minima than feed-forward networks such as the multilayer perceptron.
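The following NumPy sketch shows why the output layer of such a network can be fitted without gradient descent: with the Gaussian centres and width fixed (both chosen arbitrarily here), the remaining problem is ordinary linear least squares.

# Sketch: a tiny Gaussian RBF network; centres and width are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)

centres = x[::10]     # hidden-layer centres taken from the training inputs
width = 0.5

# Design matrix of Gaussian radial basis activations (the hidden layer).
phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))
weights, *_ = np.linalg.lstsq(phi, y, rcond=None)   # linear output layer

y_hat = phi @ weights
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))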
Support vector machines: Support vector machines (SVMs) are used to detect and exploit complex patterns in data by clustering, classifying and ranking it. They are learning machines used to perform binary classification and regression estimation, and they commonly use kernel-based methods to apply linear classification techniques to non-linear classification problems. Commonly used kernels include the linear, polynomial, radial basis function and sigmoid kernels.
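The effect of the kernel is easy to see on a toy problem; the scikit-learn sketch below compares a linear and an RBF kernel on concentric circles, with all parameters chosen purely for illustration.

# Sketch: kernel SVM on a non-linearly separable toy problem (scikit-learn assumed).
from sklearn.datasets import make_circles
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_circles(n_samples=400, noise=0.1, factor=0.4, random_state=0)

linear_svm = SVC(kernel="linear")
rbf_svm = SVC(kernel="rbf", gamma=1.0, C=1.0)

# The RBF kernel implicitly maps the circles into a space where they are
# linearly separable, so it should score far higher than the linear kernel.
print("linear kernel:", cross_val_score(linear_svm, X, y, cv=5).mean())
print("RBF kernel:   ", cross_val_score(rbf_svm, X, y, cv=5).mean())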
Naïve Bayes: Naïve Bayes, based on Bayes' conditional probability rule, is used for classification tasks. It assumes the predictors to be statistically independent, which makes it an effective classification tool that is easy to interpret. It is particularly useful when faced with the 'curse of dimensionality', i.e. when the number of predictors is very high.
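A minimal Gaussian Naïve Bayes sketch with scikit-learn is shown below; the synthetic data deliberately has many predictors relative to the sample size, the regime described above, though the exact figures are arbitrary.

# Sketch: Gaussian Naive Bayes, treating predictors as conditionally independent
# given the class (scikit-learn assumed; data is synthetic and illustrative).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=200, n_features=50, n_informative=10,
                           random_state=0)
print(cross_val_score(GaussianNB(), X, y, cv=5).mean())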
K-nearest neighbours: The k-nearest neighbour algorithm (kNN) belongs to the class of pattern recognition statistical methods. The method does not impose any a priori assumptions about the distribution from which the modeling sample is drawn. It involves a training set with both positive and negative values. A new sample is classified by calculating the distance to the nearest training case; the sign of that point then determines the classification of the sample. In the k-nearest neighbour classifier, the k nearest points are considered and the sign of the majority is used to classify the sample. The performance of the kNN algorithm is influenced by three main factors: (1) the distance measure used to locate the nearest neighbours; (2) the decision rule used to derive a classification from the k nearest neighbours; and (3) the number of neighbours used to classify the new sample. It can be shown that, provided k grows with the sample size while k/n tends to zero, the method is universally asymptotically convergent: as the size of an i.i.d. training set increases, regardless of the distribution from which the sample is drawn, the predicted class converges to the class assignment that minimizes misclassification error.
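The three factors listed above (distance measure, decision rule and k) map directly onto the parameters of a standard implementation, as in the scikit-learn sketch below; the values of k and the synthetic data are illustrative.

# Sketch: k-nearest-neighbour classification with a Euclidean distance measure
# and a majority-vote decision rule (scikit-learn assumed).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

for k in (1, 5, 15):
    knn = KNeighborsClassifier(n_neighbors=k, metric="euclidean")
    print(f"k={k:>2}  accuracy={cross_val_score(knn, X, y, cv=5).mean():.3f}")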
Geospatial predictive modeling: Conceptually, geospatial predictive modeling is rooted in the principle that the occurrences of the events being modeled are limited in distribution. Occurrences of events are neither uniform nor random in distribution; there are spatial environment factors (infrastructure, sociocultural, topographic, etc.) that constrain and influence where events occur. Geospatial predictive modeling attempts to describe those constraints and influences by spatially correlating historical event locations with the environmental factors that represent those constraints and influences. It is a process for analyzing events through a geographic filter in order to make statements about the likelihood of event occurrence or emergence.
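As a toy sketch of this idea (and nothing more), the snippet below generates two synthetic environmental layers, samples event occurrences that favour certain cells, and fits a logistic regression to produce a likelihood surface; real work would of course use actual infrastructure, terrain and sociocultural layers sampled at historical event locations.

# Sketch: a toy geospatial predictive model on a 50 x 50 grid (NumPy + scikit-learn).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
grid = 50
dist_to_road = rng.uniform(0, 5, (grid, grid))   # illustrative environmental layer 1
slope = rng.uniform(0, 30, (grid, grid))         # illustrative environmental layer 2

# Pretend historical events favour flat ground near roads.
prob = 1 / (1 + np.exp(-(2 - dist_to_road - 0.05 * slope)))
events = rng.random((grid, grid)) < prob * 0.2

X = np.column_stack([dist_to_road.ravel(), slope.ravel()])
y = events.ravel().astype(int)

model = LogisticRegression().fit(X, y)
likelihood = model.predict_proba(X)[:, 1].reshape(grid, grid)
print("highest-likelihood cell:", np.unravel_index(likelihood.argmax(), likelihood.shape))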
Tools: There are numerous tools available in the marketplace which help with the execution of predictive analytics. These range from those which need very little user sophistication to those that are designed for the expert practitioner. The difference between these tools is often in the level of customization and heavy data lifting allowed.
In an attempt to provide a standard language for expressing predictive models, the Predictive Model Markup Language (PMML) has been proposed. This XML-based language provides a way for different tools to define predictive models and to share them between PMML-compliant applications. PMML 4.0 was released in June 2009.
References:
L. Devroye, L. Györfi, G. Lugosi (1996). A Probabilistic Theory of Pattern Recognition. New York: Springer-Verlag.
John R. Davies, Stephen V. Coggeshall, Roger D. Jones, and Daniel Schutzer, "Intelligent Security Systems," in Freedman, Roy S., Klein, Robert A., and Lederman, Jess, Editors (1995). Artificial Intelligence in the Capital Markets. Chicago: Irwin. ISBN 1-55738-811-3.
Agresti, Alan (2002). Categorical Data Analysis. Hoboken: John Wiley and Sons. ISBN 0-471-36093-7.
Enders, Walter (2004). Applied Time Series Econometrics. Hoboken: John Wiley and Sons. ISBN 052183919X.
Greene, William (2000). Econometric Analysis. Prentice Hall. ISBN 0-13-013297-7.
Mitchell, Tom (1997). Machine Learning. New York: McGraw-Hill. ISBN 0-07-042807-7.
Tukey, John (1977). Exploratory Data Analysis. New York: Addison-Wesley. ISBN 0201076160.
Guidère, Mathieu; Howard, N.; Argamon, Sh. (2009). Rich Language Analysis for Counterterrorism. Berlin, London, New York: Springer-Verlag. ISBN 978-3-642-01140-5.
Labels: accenture, analysis, analytics, BID, business, Informatica, intelligence, microsoft, Predictive
Sunday, January 30, 2011
SSAS 2005: Cube Performance Tuning Lessons
Intro.
A recent project has forced me (which is a good thing) to learn both the internals of SSAS 2005 and various performance tuning techniques to get maximum performance out of the OLAP server. It goes without saying that the grain of your underlying data warehouse's Dimensions & Facts will drive how large your cubes are (the total cube space). It should also be a given that Processing Time and Query (MDX) Execution Time usually compete with one another. Given the same grain of a model, the more grain data, indexing and aggregations you process up front, the more performant the end-user experience will generally be (but not always). And while ETL & Cube Processing time is important, in the real world it is the end-user experience (capability and performance) that drives the DW/BI solution's adoption!
Throw-out unused Attributes/Optimizing Attributes/Leverage Member Properties
The more dimensional attributes you create, the larger the cube space, which also means more potential aggregations and indexes. Take the time to review the proposed set of attributes with your clients and be sure they all provide value as first-class Dimension Attributes. If you find attributes are used primarily for informative purposes, consider leveraging Member Properties instead of creating an entire Dimension Attribute. Also, the surrogate key for your dimensions almost never adds business value; delete those attributes and assign the KeyColumns of your dimension's grain (i.e. lowest-level) attribute to the surrogate key. If an attribute participates in a natural hierarchy but is not useful as a stand-alone attribute hierarchy, disable its hierarchy via the AttributeHierarchyEnabled setting. Finally, be aware that if you have a 'deep' dimension (e.g. 19 million members at its lowest grain), any additional attributes you add will incur considerable overhead because of their much higher cardinality.
Set Partition Slices
The question of whether or not you must explicitly set a partition's Slice property is documented incorrectly in SQL Server 2005 Books Online (BOL). For all but the simplest partition designs you should consider setting the Slice property to match the partition's source (i.e. the dataset definitions should match across both properties). For those who do not know, a partition's slice is useful at query-execution time: it tells the formula/storage engine which partition(s) hold the data being requested. Please see the resources section below for more information.
Optimizing Attribute Relationships
Attribute relationships are the developer's mechanism for informing the OLAP server of the relationships between attributes. Just as fact tables (measure groups) relate to dimensions in various ways (Regular, Referenced, Fact, Many-to-Many), dimension attributes can relate to one another in various forms (One-to-One or One-to-Many). You can also set the RelationshipType to Flexible or Rigid: if you know your members change over time (i.e. get reclassified), leave this setting as Flexible; otherwise set it to Rigid. Take the time to thoroughly review your attribute relationships and ensure they both represent their natural hierarchies and are optimized!
Scalable Measure Group Partitioning & Aggregation Strategies
This is one of the better-known techniques but it is still of utmost importance. Design your measure group's partitions to optimize both processing and query-execution performance. If your system has a 'rolling window' requirement, account for this in your ETL design/framework as well. You should almost always partition your measure groups by the DATE_KEY and match the underlying relational data warehouse (RDW) partitioning scheme. The basic idea is that your 'hot' (current-period) partition should be optimized for query-execution time by assigning it a different Aggregation Design than the 'colder' (older) partitions. Also, if your main storage device (usually a SAN) cannot hold all of your required data, consider leveraging Remote Partitions to offload the extremely 'cold' partitions to slower storage.
Continuously Tune Aggregations Based On Usage
Most documentation in the community clearly states the order for creating effective aggregations: first leverage the Aggregation Design Wizard, enable the Query Log, and then re-tune the aggregations using the Usage-Based Optimization Wizard. What is not mentioned nearly enough is to continuously re-tune your aggregations against a refreshed Query Log using that same wizard. By doing so you ensure your aggregations are periodically revised based upon recent, real-world usage of your cubes.
Warming the Cache
Another well-known technique: by issuing representative real-world MDX queries at startup of the MSOLAP service, your cube's cache will be pre-warmed before the first users arrive.
Be Mindful of Many-to-Many Dimensions Performance Implications
While many-to-many (M:M) dimensions are a powerful feature of SSAS 2005, that power comes at the cost of query-execution time. There are a few modeling scenarios where you almost have to leverage them, but be aware that if you are dealing with large amounts of data, they can have a huge performance impact at query time.
Control of the Client Application (MDX): That is the Question
A less-discussed yet still very important matter is how much control you have over the MDX issued against your cubes. For example, Excel Pivot Tables and other analytical tools allow the user to explore your cubes freely, subject only to security (and no, Perspectives are not a security measure). If you can write (or control) the MDX being issued by the end user, then obviously you have more ability to ensure that the actual MDX is optimized.
Got 64-Bit? Multi-Cores?
For enterprise-class Microsoft DW/BI engagements, forget about x86/single-core, period. Analysis Services can chew through (process) more data, with higher degrees of parallelization, on x64 multi-core CPUs. The storage and formula engine caches rely on memory; long story short, Analysis Services has been designed to scale to higher levels when running on multi-core/x64 hardware. Also, be sure to set Analysis Services' minimum/maximum thread settings properly for both querying and processing.
Conclusion
I am dedicated to life-long learning. I cannot take full credit for the content above, as much of this knowledge comes from the work of others as well as my own testing; the resources section below gives credit where it is due. Take the time to learn and implement highly performant SSAS 2005 cubes so that your project stakeholders' first query is a fast one!
Resources
Mosha Pasumansky's Blog (MDX 'Father'): http://sqlblog.com/blogs/mosha/default.aspx
Microsoft SQL Server 2005 Analysis Services (best SSAS 2005 OLAP internals book out!) by SAMS Publishing: http://safari.samspublishing.com/0672327821
SQL Server Analysis Services 2005 Performance Tuning Whitepaper (a great tuning document): download.microsoft.com/download/8/5/e/85eea4fa-b3bb-4426-97d0-7f7151b2011c/SSAS2005PerfGuide.doc
HP Solutions with Microsoft SQL Server: http://h71028.www7.hp.com/enterprise/cache/3887-0-0-0-121.html
My Friends & Colleagues at Scalability Experts
Labels: aggregation, attributes, cache, Cube, dimension, DW, mdx, OLAP, Optimization, Partitions, Performance, Query, relationships, ssas, Tune, Tuning, usage, warming
Friday, January 28, 2011
Article in Norwegian about Office 2007 and MS SQL 2005
Ten advantages of Microsoft Business Intelligence
Microsoft Business Intelligence, a fully integrated suite of server, client and developer products thoroughly integrated with the 2007 Microsoft Office System, gives you the right information at the right time and in the right format. Business Intelligence presents actionable information to people where they work, collaborate and make decisions. Here are the ten ways Business Intelligence can help you achieve better results by providing business intelligence throughout your organization.
Connect people and information simply and effectively
Business Intelligence makes it easier for decision makers to access and analyze information anywhere, anytime. Up-to-date information is available wherever people work, so they can collaborate and make decisions, whether at the desktop or over the Internet.
Empower your employees
When analytical data is accessible and understandable, employees can more easily act in ways that improve performance and support the overall business strategy. Business Intelligence includes robust, dynamic business scorecard, analysis and reporting tools so that everyone in the company can make better decisions faster.
Easier collaboration and sharing
Improve your organization's efficiency by using Microsoft's business information and collaboration technology. Tight integration with Microsoft Office SharePoint Server 2007 lets you share information easily in a secure, managed web environment with colleagues, customers and partners. You now have one central place where you can monitor performance indicators, access reports and analyze data, in addition to sharing documents, chatting live and finding relevant topics.
Analyze and gain insight
With the 2007 Microsoft Office System, fully integrated with Microsoft SQL Server 2005, users can gain new insight using familiar, user-friendly tools delivered on a proven, scalable Business Intelligence platform. When information is available and easy to access (adding data for analysis, performing calculations and drilling into detail), users are better able to analyze and assess information and then make well-informed business decisions.
Improve strategies
Business Intelligence improves strategy across the entire organization. Define strategy, set goals, monitor performance and perform group analyses before making well-informed decisions aligned with the business strategy. Managers can easily establish lines of accountability in a strategy map, and employees can align their goals with the company's objectives.
Take advantage of the 2007 Microsoft Office System
Improvements include increased capacity for interactive analysis in Office Excel 2007, search, delivery and collaboration in Microsoft Office SharePoint Server 2007, and improved visualization technology across the Microsoft Office suites. Business Intelligence is also integrated with SQL Server 2005 Reporting Services and SQL Server 2005 Analysis Services, industry-leading business intelligence platform technology, which increases the value of information at the desktop.
Deliver business intelligence to the entire organization
Business Intelligence supports the breadth of your organization's business intelligence needs. Strategic planning becomes easier with familiar tools, information management becomes easier in a centralized and fully integrated BI environment, and development is more cost-effective using industry-standard development environments. With the user-friendly interface of the Microsoft Office System, the SQL Server 2005 business intelligence platform behind it, and custom development through Microsoft Visual Studio, Business Intelligence supports everyone in your organization: information workers, IT staff and developers.
Reduce training needs
With Business Intelligence, users across the organization can interact with data wherever they want, in the 2007 Microsoft Office System. Using tools that are familiar, accessible and supported, you can achieve a significant reduction in costs and shorten the learning curve.
Create advanced analyses and reports
With advanced scorecard functionality supported by reports, charts, graphs and analyses, employees can easily track key performance indicators (KPIs) against business goals. By understanding and analyzing the relationship between KPIs and business goals, you gain a better understanding of how the business is doing today, not at the end of the month, quarter or year, when it is too late to act and influence the results.
Deliver enterprise-level business intelligence
With the complete and fully integrated platform of SQL Server 2005 behind it, Business Intelligence delivers ETL (Extract-Transform-Load) functionality, an online analytical processing (OLAP) database, data mining, predictive analytics and reporting, all in one package. Business Intelligence is fully scalable and provides power, stability, improved security and a low total cost of ownership.
Reference:
Microsoft Norway