Google Track
Tuesday, November 9, 2010
Information management concepts
Following the behavioral science theory of management, mainly developed at Carnegie Mellon University and prominently represented by Barnard, Richard M. Cyert, March and Simon, most of what goes on in service organizations is actually decision making and information processes. The crucial factor in the information and decision process analysis is thus individuals’ limited ability to process information and to make decisions under these limitations.
According to March and Simon [1], organizations have to be considered as cooperative systems with a high level of information processing and a vast need for decision making at various levels. They also claimed that there are factors that prevent individuals from acting in a strictly rational manner, in contrast to what classical theorists had proposed and advocated.
Instead of using the model of the economic man, as advocated in classical theory, they proposed the administrative man as an alternative, based on their argument about the cognitive limits of rationality.
While the theories developed at Carnegie Mellon clearly filled some theoretical gaps in the discipline, March and Simon [1] did not propose a particular organizational form that they considered especially suited to coping with the cognitive limitations and bounded rationality of decision-makers. Through their own argument against normative decision-making models, i.e., models that prescribe how people ought to choose, they also abandoned the idea of an ideal organizational form.
In addition to the factors mentioned by March and Simon, there are two other significant aspects, stemming from environmental and organizational dynamics. Firstly, it is not possible to access, collect and evaluate all the environmental information relevant to a given decision at a reasonable price, i.e., in terms of time and effort [2]. In other words, following an economic framework, the transaction cost associated with the information process is too high. Secondly, established organizational rules and procedures can prevent the most appropriate decision from being taken, i.e., a sub-optimal solution is chosen in accordance with the organizational rank structure or institutional rules, guidelines and procedures [3][4], an issue that has also been brought forward as a major critique of the principles of bureaucratic organizations [5].
According to the Carnegie Mellon School and its followers, information management, i.e., the organization's ability to process information, is at the core of organizational and managerial competencies. Consequently, strategies for organization design must aim at improving the organization's information processing capability. Jay Galbraith [6] has identified five main organization design strategies within two categories: reducing the need for information processing and increasing information processing capacity.
1. Reduction of information processing needs
   1. Environmental management
   2. Creation of slack resources
   3. Creation of self-contained tasks
2. Increasing the organizational information processing capacity
   1. Creation of lateral relations
   2. Vertical information systems
Environmental management. Instead of adapting to changing environmental circumstances, the organization can seek to modify its environment. Vertical and horizontal collaboration, i.e., cooperation or integration with other organizations in the industry value system, are typical means of reducing uncertainty. An example of reducing uncertainty in relation to the supplying or demanding stage of the industry system is the concept of Supplier-Retailer collaboration or Efficient Consumer Response.
Creation of slack resources. In order to reduce exceptions, performance levels can be reduced, thus decreasing the information load on the hierarchy. These additional slack resources, required to reduce information processing in the hierarchy, represent an additional cost to the organization. The choice of this method clearly depends on the alternative costs of other strategies.
Creation of self-contained tasks. Achieving a conceptual closure of tasks is another way of reducing information processing. In this case, the task-performing unit has all the resources required to perform the task. This approach is concerned with task (de-)composition and interaction between different organizational units, i.e. organizational and information interfaces.
Creation of lateral relations. In this case, lateral decision processes are established that cut across functional organizational units. The aim is to apply a system of decision subsidiarity, i.e. to move decision power to the process, instead of moving information from the process into the hierarchy for decision-making.
Investment in vertical information systems. Instead of processing information through the existing hierarchical channels, the organization can establish vertical information systems. In this case, the information flow for a specific task (or set of tasks) is routed in accordance with the applied business logic, rather than through the hierarchical organization.
Following the lateral relations concept, it also becomes possible to employ an organizational form that differs from the simple hierarchical structure. The matrix organization aims to bring together the functional and product departmental bases and to achieve a balance in information processing and decision making between the vertical (hierarchical) and the horizontal (product or project) structure. The creation of a matrix organization can also be considered as management's response to a persistent or permanent demand for adaptation to environmental dynamics, instead of a response to episodic demands.
Source: Wikipedia
Labels:
accenture,
Amazon,
analytics,
Avanade,
Business intelligence,
capgemini,
data,
forecasting,
Google,
information,
job,
jobs,
management,
math,
microsoft,
mining,
model,
modelling,
Platon,
statistics
Monday, August 9, 2010
My Brainbench Transcript of Free Exams
Here you have my full transcript of Brainbench Inc. testing.
Sunday, August 1, 2010
KPI for Hospitality Business
Important Calculations (BI) for Hospitality
Key Performance Indicators (KPIs) for the hospitality industry help remove the guesswork from managing the business by tracking the numbers that tell you what is really happening.
There's a business saying: 'If you can't measure it, you can't manage it!' Real, responsive management needs reliable and truthful figures on which decisions can be based. If there are problems, you can take corrective action quickly. If you are having success, you'll know to do more of what you're doing! Good figures also give you a wider understanding of your success: sometimes in a quiet month (when your suppliers are telling you that 'everyone's quiet!') you'll see that some of your KPIs are actually improving (e.g. sales per head).
KPIs in the hospitality industry can be categorized by function: Reception, Housekeeping, Maintenance, Kitchen, Restaurant, Sales, Store, Purchasing, etc.
Staff KPI:
- Wage Cost %: wage costs as a percentage of sales
- Total Labour Cost %: not just wages but also WorkCover insurance, retirement and superannuation charges, and the other taxes that apply to your payroll
- Total Labour Hours: how many hours worked in each section. This is useful to compare against sales to measure productivity
- Event Labour charge-out: Hotels usually charge-out service staff at a markup on the cost of the wages paid. Are you achieving a consistent mark-up?
- Labour turnover: number of new staff in any one week or month
- Average length of employment: another way to look at your success in keeping staff. Add up the total number of weeks all your people have worked for you and divide this by the total number of staff
- Average hourly pay: divide the total payroll by the number of hours worked by all staff
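A few of these staff ratios are easy to sketch as MDX calculated members. To be clear, this is only an illustration: [Hospitality] and the measures [Wage Cost], [Total Sales], [Payroll Cost] and [Labour Hours] are made-up names standing in for whatever your own cube exposes, not an existing model.
WITH
-- Hypothetical cube and measure names; replace with your own
MEMBER [Measures].[Wage Cost Pct] AS
'[Measures].[Wage Cost] / [Measures].[Total Sales]', FORMAT_STRING = 'Percent'
MEMBER [Measures].[Average Hourly Pay] AS
'[Measures].[Payroll Cost] / [Measures].[Labour Hours]'
select {[Measures].[Wage Cost Pct], [Measures].[Average Hourly Pay]} on Columns,
[Time].Children on Rows
from [Hospitality]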
Kitchen Management KPI:
- Food Cost %: measured by adding up food purchases for the week and measuring them against your food sales
- Total Food Costs: how much was the total food bill? Sometimes a useful figure to show staff who think you are made of money
- Food Costs per head: see every week how much it costs to feed an average customer
- Kitchen Labour %: measure kitchen productivity by comparing kitchen labour against food sales
- Kitchen Labour hours: how many hours worked in this section? Compare against sales to measure productivity
- Stock value: food stock holding. It should be less than a week's use, but can slip out if you are storing frozen food
- Main selling items: weekly sales from the POS or dockets; know the best sellers and map these against menu profitability
- Kitchen linen costs: cost of uniforms, aprons & tea-towels can be a shock! How many tea-towels are being used each day?
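Food cost follows the same pattern. Again a sketch against the hypothetical [Hospitality] cube, with [Food Purchases], [Food Sales] and [Customer Count] as placeholder measures:
WITH
-- Hypothetical measures; swap in your own cube's names
MEMBER [Measures].[Food Cost Pct] AS
'[Measures].[Food Purchases] / [Measures].[Food Sales]', FORMAT_STRING = 'Percent'
MEMBER [Measures].[Food Cost Per Head] AS
'[Measures].[Food Purchases] / [Measures].[Customer Count]'
select {[Measures].[Food Cost Pct], [Measures].[Food Cost Per Head]} on Columns,
[Time].Children on Rows
from [Hospitality]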
Front House Management KPI:
- Total Sales Per Head: total sales divided by number of customers. This may vary between different times of the day
- Number of customers: simple! A good measure of popularity
- Food, Dessert, Beverage Sales per head: how much your menu appeals to your customers (do you have all the choices they want), & how well your staff are selling.
- Seating Efficiency: how well are tables being turned over while still offering high quality customer service
- Basket Analysis: how many items do lunch customers buy? What else do morning coffee drinkers order? Grab a pile of dockets and look for ordering patterns
- Linen costs: uniforms, aprons etc.
- Front of House Labour %: front-of-house labour cost as a percentage of sales, to measure productivity in this section
- FOH Labour hours: how many hours worked in this section? Compare against sales to measure productivity
- Customer satisfaction: Feedback forms, complaints and other methods that are hard to quantify sometimes but worth making an attempt.
- Strike rate: if 500 people came to the hotel last night and only 100 ate at the bistro, your 'strike rate' would be 1 in 5, or 20%
- RevPASH (Revenue per Available Seat Hour): take the total number of 'seat hours' (seats available multiplied by hours open) and divide total revenue for a period by this number; a sketch of this and a few other front-of-house ratios follows this list
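RevPASH, strike rate and sales per head are again simple ratios. The sketch below keeps the same hypothetical cube; [Restaurant Revenue], [Seat Hours], [Bistro Covers] and [Hotel Guests] are invented measure names:
WITH
-- Hypothetical measures; [Seat Hours] would be seats available x opening hours
MEMBER [Measures].[RevPASH] AS
'[Measures].[Restaurant Revenue] / [Measures].[Seat Hours]'
MEMBER [Measures].[Strike Rate] AS
'[Measures].[Bistro Covers] / [Measures].[Hotel Guests]', FORMAT_STRING = 'Percent'
MEMBER [Measures].[Sales Per Head] AS
'[Measures].[Restaurant Revenue] / [Measures].[Bistro Covers]'
select {[Measures].[RevPASH], [Measures].[Strike Rate], [Measures].[Sales Per Head]} on Columns,
[Time].Children on Rows
from [Hospitality]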
Bar & Restaurant Management KPI:
- Sales per head: how much your beverage and wine appeals to your customers and how well your staff are selling
- Gross Profit on sales: difference between what you sold and what it cost you. The sales mix can influence this heavily
- Average Profit % on sales: useful to see if your sales are holding steady, although ultimately the actual Gross Profit (real money) will matter the most
- Stock value: It’s worth checking with your suppliers and seeing how much you can order ‘just in time’
- Stock turnover: how fast is your cellar stock selling?
- Carrying cost of stock: what is the cost of financing the stock?
- Sales / stock-take discrepancies: alcohol is a security problem, and keeping an eye on 'shrinkage', staff drinks and theft is a constant task
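The gross profit figures above reduce to a difference and a ratio; the measure names below are placeholders, just like in the earlier sketches:
WITH
-- Hypothetical measures for the bar: [Beverage Sales] and [Beverage Cost]
MEMBER [Measures].[Gross Profit] AS
'[Measures].[Beverage Sales] - [Measures].[Beverage Cost]'
MEMBER [Measures].[Gross Profit Pct] AS
'[Measures].[Gross Profit] / [Measures].[Beverage Sales]', FORMAT_STRING = 'Percent'
select {[Measures].[Gross Profit], [Measures].[Gross Profit Pct]} on Columns,
[Time].Children on Rows
from [Hospitality]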
Banquet Sales Management KPI:
- Number of customers: simple! A good measure of popularity.
- Visits by your top 100 or 200 customers: they provide a huge proportion of your sales! Track their frequency and spending – these people are gold!
- Sales per head: across all areas
- Marketing and advertising costs: total value of spend, always trying to measure it against response
- Response rates: how many people responded to different campaigns and what effect did this have on profit?
- Press mentions: keeping your eyes open for favourable mentions
- Bookings: in the current week and month and coming up. Also in peak times, eg Christmas.
- Event inquiries: No. of inquiries about large bookings & functions, especially if a campaign to promote them is on
- Sales inquiry conversion rate: number of inquiries that turn into actual sales. If few people were 'converted', ask why: was it the quality of the promotional material, the skill of the sales staff, pricing, or the make-up of your function menus and facilities?
Finance & Admin Management KPI:
- Cash position at bank: how much do you have available after reconciling your cheque book?
- Stock-take discrepancies: measure of efficiency of each department, but also of administrative systems in place
- Total accounts due: how much do you owe?
- Total accounts payable: needs careful management if you have accounts, eg large restaurants
- Return on Investment: the profit the business makes can be measured as a percentage return on the amount invested in it
- Taxes owed: to know how much is owed at any one time so it is not ’spent’
- Sales & costs: actual figures compared to what was budgeted for the period
- Administration labour costs: strong and skilful administrative support will be essential to manage the KPIs listed above!
- IT efficiency: how much down-time for IT systems? How accurate is the POS system?
Other KPIs:
- Revenue per available room
- Average daily rate of rooms
- % of occupancy of rooms
- Average cleaning costs per room
- % of reservation requests cancelled with / without penalty
- % of rooms with maintenance issues
- % of cancelled reservation requests
- Average number of guests per room
- Average length of stay of guests
- % of non-room revenue
- % of cancelled rooms occupied
- Kilowatt-hours (kWh) per room
- Number of hotel guests per employee
- Gross operating profits per available room
- % of guests who would rank their stay as exceeding expectations
- Waste per night per occupied bed space
Provided by: Maia Intelligence, edited by: Besim Ismaili
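The room-based KPIs at the top of that last list come down to three classic ratios: occupancy, average daily rate (ADR) and RevPAR, where RevPAR also equals ADR multiplied by occupancy. One more sketch of my own against the hypothetical cube, with [Rooms Sold], [Rooms Available] and [Rooms Revenue] as assumed measures:
WITH
-- Hypothetical room measures; RevPAR = Rooms Revenue / Rooms Available = ADR x Occupancy
MEMBER [Measures].[Occupancy Pct] AS
'[Measures].[Rooms Sold] / [Measures].[Rooms Available]', FORMAT_STRING = 'Percent'
MEMBER [Measures].[ADR] AS
'[Measures].[Rooms Revenue] / [Measures].[Rooms Sold]'
MEMBER [Measures].[RevPAR] AS
'[Measures].[Rooms Revenue] / [Measures].[Rooms Available]'
select {[Measures].[Occupancy Pct], [Measures].[ADR], [Measures].[RevPAR]} on Columns,
[Time].Children on Rows
from [Hospitality]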
Labels:
Big data,
business intelligence,
Data science,
hospitality,
kpi,
sentiment analysis,
tripadvisor
Friday, June 25, 2010
Important Calculated members
Date Range- This is a good one
WITH SET [MyRange] AS
'{[Time].[2009].&[5].[22].[26] : [Time].[2009].&[6].[26].[24]}'
MEMBER [Time].[Range] AS 'Aggregate([MyRange],[Measures].CurrentMember)'
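-- [Time].[Range] rolls the current measure up over the whole [MyRange] span, so the WHERE clause below restricts the query to that date window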
select {[Measures].[Sends],[Measures].[Views], [Measures].[Clicks]} on columns,
{[Zone Id].&[142]}
on rows
from [All Communication Analysis] where ([Time].[Range])
This is an OR query. I got this query from HERE
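-- Each tuple in the WHERE set below pins one attribute and leaves the other at its All member; combining the two tuples in one set behaves like an OR filter (calendar year 2003 OR country United States)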
SELECT [Measures].[Internet Sales Amount] ON 0
FROM [Adventure Works]
WHERE(
{([Date].[Calendar Year].&[2003],[Customer].[Country].[All Customers])
,
([Date].[Calendar Year].[All Periods],[Customer].[Country].&[United States])})
WITH SET [Top5Camps] AS ' TopCount({Descendants( [Zone Id].&[14],3)},200,([Measures].[Sends])) '
select {[Measures].[Sends],[Measures].[Views],[Measures].[Clicks],[Measures].[Clicks Text],[Measures].[Clicks Html],[Measures].[Bounces],[Measures].[Soft Bounces],[Measures].[Hard Bounces],[Measures].[Cliks To Buy],[Measures].[Url Count]} on Columns,
[Top5Camps] DIMENSION PROPERTIES [Zone].[uniq views],[Zone].[uniq soft bounces], [Zone].[uniq hard bounces],[Zone].[uniq clicks html], [Zone].[uniq clicks text],[Zone].[uniq clicks],[Zone].[uniq bounces], [Zone].[Campaign Launch Date],[Zone].[Url Count], MEMBER_CAPTION, MEMBER_KEY on rows
from [Communication]
-- Some of the queries below use custom code in Reporting Services; they are built as Visual Basic expressions. Maybe someday I'll take the time to make these real queries.
="select {[RunningTotalSubs]} on Columns, CrossJoin(TopCount( { [DMA].children}, 5000, ([RunningTotalSubs])),
{[Time].[2004].&[1].[1].[1], [Time].[2005].&[1].[1].[1],[Time]}) on rows
from [consumers] where ( [Zone Id].&[" & Code.EDecrypt(Parameters!ZoneId.Value,Convert.ToString(20)) & "])"
WITH
MEMBER [Measures].[Variant Percentage] AS
'(( [Time].[2004].&[1].[1].[1], [RunningTotalSubs] ) -( [Time].[2005].&[1].[1].[1], [RunningTotalSubs] ))/( [Time].[All Time], [RunningTotalSubs] ) '
select {[RunningTotalSubs],[Measures].[Variant Percentage] } on Columns, CrossJoin(TopCount( { [DMA].children}, 5000, ([RunningTotalSubs])),
{[Time].[2004].&[1].[1].[1], [Time].[2005].&[1].[1].[1],[Time]}) on rows
from [consumers] where ( [Zone Id].&[14])
WITH SET [Top5Camps] AS
' Descendants([Zone Id].&[14].&[74],2) '
SELECT {[Measures].[Sends],[Measures].[S Sends],[Measures].[Views],[Measures].[Uniq Views],[Measures].[Clicks],[Measures].[Uniq Clicks],[Measures].[Clicks Text],[Measures].[Uniq Clicks Text],[Measures].[Clicks Html],[Measures].[Uniq Clicks Html],[Measures].[Bounces],[Measures].[Uniq Bounces],[Measures].[Soft Bounces],[Measures].[Hard Bounces],[Measures].[Cliks To Buy],[Measures].[Url Count],[Measures].[F2f Count]}
ON COLUMNS,
Order([Top5Camps], [Top5Camps].currentmember.properties("Campaign Launch Date") ,desc) DIMENSION PROPERTIES [Zone].[Campaign Launch Date], [Zone].[Url Count], MEMBER_CAPTION, MEMBER_KEY
ON ROWS
from [All communication analysis]
="select { { ([Age Group].[All Age Group], [Gender].[All Gender]) },{ ([Age Group].[All Age Group], [Gender].[Male]) },{ ([Age Group].[All Age Group], [Gender].[Female]) },{ ([Age Group].[All Age Group], [Gender].[Unknown]) },{ ([Age Group].[13 and 17], [Gender].[All Gender]) },{ ([Age Group].[18 and 24], [Gender].[All Gender]) },{ ([Age Group].[25 and 34], [Gender].[All Gender]) } ,{ ([Age Group].[35 and 44], [Gender].[All Gender]) },{ ([Age Group].[45 and 54], [Gender].[All Gender]) },{ ([Age Group].[over 55], [Gender].[All Gender]) },{ ([Age Group].[Unknown], [Gender].[All Gender]) }
} on Columns, NON EMPTY { TopCount( { [Zone Id].&[" & Code.EDecrypt(Parameters!ZoneId.Value,Convert.ToString(20)) & "].children}, 10, ([RunningTotalSubs])) } on Rows from [consumers] where ("&
iif(Parameters!dt.Value="d",Code.getCubeDate(Parameters!p1.Value, Parameters!dt.Value),"[Time]") &",[RunningTotalSubs])"
="select { { ([Age Group].[All Age Group], [Gender].[All Gender]) },{ ([Age Group].[All Age Group], [Gender].[Male]) },{ ([Age Group].[All Age Group], [Gender].[Female]) },{ ([Age Group].[All Age Group], [Gender].[Unknown]) },{ ([Age Group].[13 and 17], [Gender].[All Gender]) },{ ([Age Group].[18 and 24], [Gender].[All Gender]) },{ ([Age Group].[25 and 34], [Gender].[All Gender]) } ,{ ([Age Group].[35 and 44], [Gender].[All Gender]) },{ ([Age Group].[45 and 54], [Gender].[All Gender]) },{ ([Age Group].[over 55], [Gender].[All Gender]) },{ ([Age Group].[Unknown], [Gender].[All Gender]) }
} on Columns, NON EMPTY { TopCount( { [Zone Id].&[" & Code.EDecrypt(Parameters!ZoneId.Value,Convert.ToString(20)) & "].children}, 10, ([RunningTotalSubs])) } DIMENSION PROPERTIES [Zone Id].[Campaign Type Id].[Key], [Zone Id].[Campaign Type Id].[Name] on Rows from [consumers] where ("&
iif(Parameters!dt.Value="d",Code.getCubeDate(Parameters!p1.Value, Parameters!dt.Value),"[Time]") &",[RunningTotalSubs])"
WITH SET [Top5Camps] AS
' Descendants([Zone Id].&[14].&[74],2) '
SELECT {[Measures].[Sends],[Measures].[S Sends],[Measures].[Views],[Measures].[Uniq Views],[Measures].[Clicks],[Measures].[Uniq Clicks],[Measures].[Clicks Text],[Measures].[Uniq Clicks Text],[Measures].[Clicks Html],[Measures].[Uniq Clicks Html],[Measures].[Bounces],[Measures].[Uniq Bounces],[Measures].[Soft Bounces],[Measures].[Hard Bounces],[Measures].[Cliks To Buy],[Measures].[Url Count],[Measures].[F2f Count]}
ON COLUMNS, [Top5Camps] DIMENSION PROPERTIES [Zone].[Campaign Launch Date], [Zone].[Url Count], MEMBER_CAPTION, MEMBER_KEY
ON ROWS
from [All communication analysis]
="WITH SET [Top5Camps] AS ' STRTOSET(IIF(ISERROR(STRTOVALUE(""[Zone Id].&[" & Code.EDecrypt(Parameters!ZoneId.Value,Convert.ToString(20)) & "].&[" & Code.EDecrypt(Parameters!PId.Value,Code.EDecrypt(Parameters!ZoneId.Value,Convert.ToString(20)) ) & "].&[" & Code.EDecrypt(Parameters!CTId.Value,Code.EDecrypt(Parameters!ZoneId.Value,Convert.ToString(20)) ) & "]"")), ""[Zone Id].&[3].&[45].&[4311].Children"", ""[Zone Id].&[" & Code.EDecrypt(Parameters!ZoneId.Value,Convert.ToString(20)) & "].&[" & Code.EDecrypt(Parameters!PId.Value,Code.EDecrypt(Parameters!ZoneId.Value,Convert.ToString(20)) ) & "].&[" & Code.EDecrypt(Parameters!CTId.Value,Code.EDecrypt(Parameters!ZoneId.Value,Convert.ToString(20)) ) & "].Children"")) '
="select {[Measures].[Sends],[Measures].[S Sends],[Measures].[Views],[Measures].[Uniq Views],[Measures].[Clicks],[Measures].[Uniq Clicks],[Measures].[Clicks Text],[Measures].[Uniq Clicks Text],[Measures].[Clicks Html],[Measures].[Uniq Clicks Html],[Measures].[Bounces],[Measures].[Uniq Bounces],[Measures].[Soft Bounces],[Measures].[Hard Bounces],[Measures].[Cliks To Buy],[Measures].[Url Count],[Measures].[F2f Count]} on Columns,
[Top5Camps] on rows
from [All Communication Analysis]"
="WITH SET [Top5Camps] AS ' STRTOSET(IIF(ISERROR(STRTOVALUE(""[Zone Id].), ""[Zone Id].&[3].&[45].&[4311].Children"", ""[Zone Id].&[" & Code.EDecrypt(Parameters!ZoneId.Value,Convert.ToString(20)) & "].&[" & Code.EDecrypt(Parameters!PId.Value,Code.EDecrypt(Parameters!ZoneId.Value,Convert.ToString(20)) ) & "].&[" & Code.EDecrypt(Parameters!CTId.Value,Code.EDecrypt(Parameters!ZoneId.Value,Convert.ToString(20)) ) & "].Children"")) '
="select {[Measures].[Sends],[Measures].[S Sends],[Measures].[Views],[Measures].[Uniq Views],[Measures].[Clicks],[Measures].[Uniq Clicks],[Measures].[Clicks Text],[Measures].[Uniq Clicks Text],[Measures].[Clicks Html],[Measures].[Uniq Clicks Html],[Measures].[Bounces],[Measures].[Uniq Bounces],[Measures].[Soft Bounces],[Measures].[Hard Bounces],[Measures].[Cliks To Buy],[Measures].[Url Count],[Measures].[F2f Count]} on Columns,
[Top5Camps] on rows
from [All Communication Analysis]"
="WITH SET Top5Camps AS ' Descendants( [Zone Id].&[" & Code.EDecrypt(Parameters!ZoneId.Value,Convert.ToString(20)) & "], 3)'
SELECT {[Measures].[Sends],[Measures].[S Sends],[Measures].[Views],[Measures].[Uniq Views],[Measures].[Clicks],[Measures].[Uniq Clicks],[Measures].[Clicks Text],[Measures].[Uniq Clicks Text],[Measures].[Clicks Html],[Measures].[Uniq Clicks Html],[Measures].[Bounces],[Measures].[Uniq Bounces],[Measures].[Soft Bounces],[Measures].[Hard Bounces],[Measures].[Cliks To Buy],[Measures].[Url Count],[Measures].[F2f Count]}
ON COLUMNS, [Top5Camps] DIMENSION PROPERTIES [Zone].[Campaign Launch Date], MEMBER_CAPTION, MEMBER_KEY
ON ROWS
from [All communication analysis]"
="WITH SET Top5Camps AS ' Descendants( [Zone Id].&[" & Code.EDecrypt(Parameters!ZoneId.Value,Convert.ToString(20)) & "].&[" & Code.EDecrypt(Parameters!PId.Value,Code.EDecrypt(Parameters!ZoneId.Value,Convert.ToString(20)) ) & "].&[" & Code.EDecrypt(Parameters!CTId.Value,Code.EDecrypt(Parameters!ZoneId.Value,Convert.ToString(20)) ) & "], 1)'
SELECT {[Measures].[Sends],[Measures].[S Sends],[Measures].[Views],[Measures].[Uniq Views],[Measures].[Clicks],[Measures].[Uniq Clicks],[Measures].[Clicks Text],[Measures].[Uniq Clicks Text],[Measures].[Clicks Html],[Measures].[Uniq Clicks Html],[Measures].[Bounces],[Measures].[Uniq Bounces],[Measures].[Soft Bounces],[Measures].[Hard Bounces],[Measures].[Cliks To Buy],[Measures].[Url Count],[Measures].[F2f Count]}
ON COLUMNS, [Top5Camps] DIMENSION PROPERTIES [Zone].[Campaign Launch Date], [Zone].[Url Count], MEMBER_CAPTION, MEMBER_KEY
ON ROWS
from [communication ]"
One link I came across when using dimension properties. Try not to write queries with member properties because they are slow. But if you have to:
http://www.sqlservercentral.com/blogs/mysqllearnings/archive/2009/04/15/key-column-of-a-dimension-in-ssas.aspx
Each product has a weight associated with it. The requirement is to filter all the products whose weight is 5.
To achieve this, weight should be a member property of the product. This time it cannot be made a key column. But again, instead of using the ".properties" function, functions like "EXISTS" can be used.
A general way of writing this would be:
Filter ([Prod Dim].[Prod Hier].[SKU Lvl].members,[Prod Dim].[Prod Hier].[SKU Lvl].currentmember.properties("Weight") = 5)
In this case, the query has to get the property of each member and check whether the weight is 5. Instead of using the ".properties" function, it can be written using the "EXISTS" function:
Exists ([Prod Dim].[Prod Hier].[SKU Lvl].members,filter([Prod Dim].[Weight].[Weight].members,[Prod Dim].[Weight].currentmember is [Prod Dim].[Weight].[5]))
Also, it is better to use the "IS" function instead of "=", because IS eliminates the conversion of the given string to a member.
Doing this gave a considerable improvement in query performance. These were the scenarios where I felt a natural key was useful when specified as the key column.
MDX Calculations
WITH
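-- Gender split: GMale, GFemale and GUnknown sum RunningTotalSubs for one gender over all time, and the *Per members divide by SuperSum (the total across all age groups) to turn each into a share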
MEMBER [Measures].[SumOfAG] AS
'( Sum({[Time]}, [Measures].[RunningTotalSubs]))'
MEMBER [Measures].[SuperSum] AS
' (Sum({[Age Group].[13 and 17],[Age Group].[18 and 24],[Age Group].[25 and 34],[Age Group].[35 and 44],[Age Group].[45 and 54],[Age Group].[over 55],[Age Group].[less then 13],[Age Group].[UNKNOWN] },[Measures].[SumOfAG] ))'
MEMBER [Measures].[GMale] AS
'( Sum({([Time],[Gender].[Male])}, [Measures].[RunningTotalSubs]))'
MEMBER [Measures].[GMalePer] AS
' ([Measures].[GMale] / [Measures].[SuperSum])'
MEMBER [Measures].[GFemale] AS
'( Sum({([Time],[Gender].[Female])}, [Measures].[RunningTotalSubs]))'
MEMBER [Measures].[GFemalePer] AS
' ([Measures].[GFemale] / [Measures].[SuperSum])'
MEMBER [Measures].[GUnknown] AS
'( Sum({([Time],[Gender].[Unknown])}, [Measures].[RunningTotalSubs]))'
MEMBER [Measures].[GUnknownPer] AS
' ([Measures].[GUnknown] / [Measures].[SuperSum])'
select {[Measures].[SumOfAG], [Measures].[GMale], [Measures].[GMalePer], [Measures].[GFemale], [Measures].[GFemalePer], [Measures].[GUnknown], [Measures].[GUnknownPer] }
on Columns, { ([Age Group].children) } on Rows from [consumers] where ([Zone Id].&[501])
WITH
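-- Period-over-period comparison: p1 and p2 pick out single dates, SubsDiff is the absolute change in [Net Subs] between the two dates and Variant Percentage is the relative change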
MEMBER [Measures].[p1] AS '( [Time].[2008].&[7].[31].[30], [Subs Count] ) '
MEMBER [Measures].[p2] AS '( [Time].[2008].&[6].[27].[30], [Net Subs] ) '
MEMBER [Measures].[Variant Percentage] AS '(( [Time].[2008].&[7].[31].[30], [Net Subs] ) -( [Time].[2008].&[6].[27].[30], [Net Subs] ))/( [Time].[2008].&[6].[27].[30], [Net Subs] ) '
MEMBER [Measures].[SubsDiff] AS '(( [Time].[2008].&[7].[31].[30], [Net Subs] ) -( [Time].[2008].&[6].[27].[30], [Net Subs] )) '
select {[Subs Count], [Net Subs],[RunningTotalSubs],[Measures].[p1], [Measures].[p2], [Measures].[SubsDiff], [Measures].[Variant Percentage] } on columns,
TopCount(NonEmptyCrossJoin({Descendants([Source].Currentmember, 3)},{Descendants([Zone Id].&[1178],2)}) ,150, ([Net Subs])) DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_KEY on rows
FROM [consumers]
Saturday, June 19, 2010
Business Intelligence over Microsoft Office Excel
Microsoft Office Excel
As one of the most widely used Microsoft Office products, the Excel spreadsheet program is designed to organize, analyze, and visualize data. Excel is one of the most powerful desktop applications in the Microsoft BI arsenal.
An analysis tool for everyone
Excel is such a popular data tool that most of the client organizations I visit use it to run some critical portion of their business. One good reason is that Excel can be installed on a local computer with no need for administrators and servers.
The Data Mining Add-in
Microsoft creates Add-ins (new sets of capabilities) as a way to expand what its products can do; the Data Mining Add-in allows the Excel program running on your local computer to serve as a data-mining resource for SQL Server Analysis Services. You can run SSAS Data Mining algorithms using data that resides in Excel cells to yield important information about your business.
Microsoft Office Visio
The general idea behind Visio is to create flow charts — and to publish these documents to the Web as interactive diagrams with drill-down capabilities (users can click their way down to specific data). Microsoft offers a Data Mining Add-In for Visio that allows users to create interactive documents with real inlaid data. For example, a decision tree can be published to the Web with actual business data built in. When users go to the Web site containing that document, they can click a decision to view its results.
Data mining
Computers can be programmed to sort through enormous amounts of data looking for patterns. It's an exciting new frontier that goes by many different names — in business, the most common ones are data mining, predictive analytics, and machine learning — but this book sticks to "data mining".
The Microsoft data-mining algorithms are part of SQL Server Analysis Services, but you don't have to be a super computer ninja to access and use them. Microsoft offers a free Excel Data Mining Add-In that transforms Excel into a simple, intuitive client program for the SSAS data-mining algorithms.
Analyzing data
As you can imagine, the amount of data contained in a modern business is enormous. If the data were very small, you could simply use Microsoft Excel and perform all of the ad-hoc analysis you need with a Pivot Table. However, when the rows of data reach into the billions, Excel is not capable of handling the analysis on its own. For these massive databases, a concept called Online Analytical Processing (OLAP) is required. Microsoft's implementation of OLAP is called SQL Server Analysis Services (SSAS), which I cover in detail in Chapter 8.
If you've used Excel Pivot Tables before, think of OLAP as essentially a massive Pivot Table with hundreds of possible pivot points and billions of rows of data. A Pivot Table allows you to re-order and sum your data based on different criteria. For example, you may want to see your sales broken down by region, product, and sales rep one minute and then quickly re-order the groupings to include product category, state, and store.
In Excel 2010 there is a new feature called PowerPivot that brings OLAP to your desktop. PowerPivot allows you to pull in millions of rows of data and work with it just like you would a smaller set of data. After you get your Excel sheet the way you want it, you can upload it to a SharePoint 2010 site and share it with the rest of your organization.
With PowerPivot you are building your own Cubes right on your desktop using Excel. If you use PowerPivot, you can brag to your friends and family that you are an OLAP developer. Just don't tell them you are simply using Excel and Microsoft did some magic under the covers.
When you need a predefined and structured Cube that is already built for you, then you turn to your IT department.
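To make the 'massive Pivot Table' idea concrete, here is a minimal pivot-style MDX query against the [Adventure Works] sample cube (the same cube used in the 'Important Calculated members' post above): years across the columns, countries down the rows, Internet sales at every intersection. It assumes the standard Adventure Works sample database is deployed on your SSAS instance.
-- Pivot layout: one hierarchy on columns, another on rows, one measure as the slicer
SELECT [Date].[Calendar Year].[Calendar Year].Members ON COLUMNS,
[Customer].[Country].[Country].Members ON ROWS
FROM [Adventure Works]
WHERE ([Measures].[Internet Sales Amount])
-- Re-pivoting is just a matter of swapping hierarchies, e.g. put [Product].[Category].[Category].Members on rows instead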
Integrating data from many sources
The many different systems and processes that make up an organization
create data in all shapes and forms. This data usually ends up stored in the
individual systems that generated it — but without any standard format.
Fortunately, SQL Server has a component — SQL Server Integration Services
(SSIS) — that can connect to these many different data sources and pull
the data back into the central data warehouse. As the data moves from the
source systems to the Data Warehouse, SSIS can also transform it into a standard,
useful format. The whole process is known as Extract, Transform, and
Load (ETL).
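To give a feel for the Extract, Transform, and Load pattern itself (SSIS packages are normally designed graphically, so this is only a conceptual sketch, not SSIS code), the short Python example below pulls rows from two pretend source systems, reshapes them into one standard format, and loads them into a small SQLite file standing in for the warehouse; all of the table and column names are made up.

```python
import sqlite3
import pandas as pd

# Extract: two pretend source systems that describe employees differently.
payroll = pd.DataFrame({"emp": ["Ann", "Bob"], "dept_code": ["SLS", "FIN"]})
crm = pd.DataFrame({"rep_name": ["ann", "cara"], "region": ["East", "West"]})

# Transform: standardize column names and casing so the data lines up.
payroll_std = payroll.rename(columns={"emp": "employee"})
crm_std = crm.rename(columns={"rep_name": "employee"})
crm_std["employee"] = crm_std["employee"].str.title()

# Load: write the cleaned-up tables into one central database file
# (SQLite here, standing in for the real data warehouse).
with sqlite3.connect("warehouse.db") as warehouse:
    payroll_std.to_sql("employee_department", warehouse,
                       if_exists="replace", index=False)
    crm_std.to_sql("employee_region", warehouse,
                   if_exists="replace", index=False)
```

In a real SSIS package the same three steps show up as data-flow sources, transformations, and destinations.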
Reporting on data
When you have a Data Warehouse, you likely don’t want to look at rows
and rows of data; instead, you want to visualize the data and give it meaning.
Building reports that answer a particular question (or set of questions)
means taking raw data and turning it into information that can be used to
make intelligent business decisions. SQL Server Reporting Services (SSRS), a component of SQL Server, builds reports by doing that bit of magic.
SSRS has features that can make your reports as fancy as you like — gauges,
charts, graphs, aggregates, and many other snazzy ways to visualize the data.
Data warehousing and data marts
Although computer systems help solve many problems in business, they use
so many different kinds of programs that they can’t always communicate
easily with each other. A tremendous number of systems make up a modern
organization — payroll, accounting, expenses, time, inventory, sales, customer
relations, software licensing, and so on. Many of these systems have
their own databases and ways of storing data. Combining data from the
tangle of systems — let alone doing something useful with the combined
data — becomes extremely difficult.
Business intelligence creates a “big picture” by storing and organizing data
from many disparate systems in one usable format. The idea is to make the
data readily accessible for reporting, analysis, and planning. A data warehouse
is a central database created for just that purpose: making the data
from all those sources useful and accessible for the organization. The idea is
to give decision-makers the information they need for making critical business
decisions.
A data mart is a more specialized tool with a similar purpose; it’s a functional
database that pulls particular information out of the overall Data Warehouse
(or even directly from source systems depending on who you ask) to answer
specific queries. For example, a manufacturing location may need to compile
some specialized data unique to the process used to make a particular product.
The overall data warehouse is too big and complex to do that job (or to modify
effectively to handle it), so a smaller version — in BI lingo, a data mart — can be
created for this one manufacturing location.
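As a rough sketch of that idea (not a recommendation for how to model a real mart), the Python/SQLite example below carves one plant's rows out of a pretend warehouse table and loads them into a separate, smaller database; the location, table, and column names are all invented.

```python
import sqlite3

# Pretend warehouse with production data for every manufacturing location.
warehouse = sqlite3.connect("warehouse.db")
warehouse.execute("CREATE TABLE IF NOT EXISTS production "
                  "(location TEXT, product TEXT, units INTEGER)")
warehouse.executemany("INSERT INTO production VALUES (?, ?, ?)",
                      [("Plant42", "Widget", 500),
                       ("Plant7", "Widget", 300),
                       ("Plant42", "Gadget", 120)])
warehouse.commit()

# The data mart keeps only what one location needs for its own questions.
rows = warehouse.execute(
    "SELECT location, product, SUM(units) FROM production "
    "WHERE location = ? GROUP BY location, product",
    ("Plant42",)).fetchall()

mart = sqlite3.connect("plant42_mart.db")
mart.execute("CREATE TABLE IF NOT EXISTS local_production "
             "(location TEXT, product TEXT, total_units INTEGER)")
mart.executemany("INSERT INTO local_production VALUES (?, ?, ?)", rows)
mart.commit()
mart.close()
warehouse.close()
```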
The Microsoft SQL Server Database Engine manages not only data warehouses,
but also data marts — and both types of data storage can become
massive. Fortunately, SQL Server addresses this problem by storing one
database across a cluster of many different servers. This approach accommodates
the enterprise as it grows in scale.
Sunday, March 28, 2010
Microsoft Business Intelligence
Think out of the Box, but not out of the Cube, Series
Online analytical processing, or OLAP, has played a key role in the analyst toolbox for navigating dimensional data sets and identifying trends, anomalies, and root causes. The technology driving OLAP has evolved considerably since the 1990s, providing greater visualization, scalability, and analytical horsepower. Yet, more is on the way, including in-memory analytics, improved visualization, increased scalability and performance, and integration with appliances and very large databases.
The future of data mining lies in predictive analytics. The technology innovations in data mining since 2000 have been truly Darwinian and show promise of consolidating and stabilizing around predictive analytics. Variations, novelties and new candidate features have been expressed in a proliferation of small start-ups that have been ruthlessly culled from the herd by a perfect storm of bad economic news. Nevertheless, the emerging market for predictive analytics has been sustained by professional services, service bureaus (rent a recommendation) and profitable applications in verticals such as retail, consumer finance, telecommunications, travel and leisure, and related analytic applications. Predictive analytics have successfully proliferated into applications to support customer recommendations, customer value and churn management, campaign optimization, and fraud detection. On the product side, success stories in demand planning, just in time inventory and market basket optimization are a staple of predictive analytics. Predictive analytics should be used to get to know the customer, segment and predict customer behavior and forecast product demand and related market dynamics. Be realistic about the required complex mixture of business acumen, statistical processing and information technology support as well as the fragility of the resulting predictive model; but make no assumptions about the limits of predictive analytics. Breakthroughs often occur in the application of the tools and methods to new commercial opportunities.
Unfulfilled Expectations: In addition to a perfect storm of tough economic times, now improving measurably, one reason data mining technology has not lived up to its promise is that "data mining" is a vague and ambiguous term. It overlaps with data profiling, data warehousing and even such approaches to data analysis as online analytic processing (OLAP) and enterprise analytic applications.
When high-profile success has occurred (see the front-page article in the Wall Street Journal, "Lucky Numbers: Casino Chain Mines Data on Its Gamblers, And Strikes Pay Dirt" by Christina Binkley, May 4, 2000), this has been a mixed blessing. Such results have attracted a variety of imitators with claims, solutions and products that ultimately fall short of the promises. The promises build on the mining metaphor and typically are made to sound like easy money - "gold in them thar hills." This has resulted in all the usual dilemmas of confused messages from vendors, hyperbole in the press and unfulfilled expectations from end-user enterprises.
Common Goals: The goals of data warehousing, data mining and the emerging trend in predictive analytics overlap. All aim at understanding consumer behaviour, forecasting product demand, managing and building the brand, tracking performance of customers or products in the market and driving incremental revenue from transforming data into information and information into knowledge. However, they cannot be substituted for one another. Ultimately, the path to predictive analytics lies through data mining, but the latter is like the parent who must step aside to let the child develop her or his full potential. This is a trends analysis, not a manifesto in predictive analytics. Yet the slogan rings true, "Data mining is dead! Long live predictive analytics!" The center of design for cutting-edge technology and breakthrough commercial business results has shifted from data warehousing and mining to predictive analytics. From a business perspective, they employ different methods. They are positioned in different places in the technology hierarchy. Finally, they are at different stages of growth in the life cycle of technology innovation.
Technology Cycle: Data warehousing is a mature technology, with approximately 70 percent of Forrester Research survey respondents indicating they have one in production. Data mining has endured significant consolidation of products since 2000, in spite of initial high-profile success stories, and has sought shelter in encapsulating its algorithms in the recommendation engines of marketing and campaign management software. Statistical inference has been transformed into predictive modelling. As we shall see, the emerging trend in predictive analytics has been enabled by the convergence of a variety of factors.
Technology Hierarchy: In the technology hierarchy, data warehousing is generally considered an architecture for data management. Of course, when implemented, a data warehouse is a database providing information about (among many other things) what customers are buying or using which products or services and when and where are they doing so. Data mining is a process for knowledge discovery, primarily relying on generalizations of the "law of large numbers" and the principles of statistics applied to them. Predictive analytics emerges as an application that both builds on and delimits these two predecessor technologies, exploiting large volumes of data and forward-looking inference engines, by definition, providing predictions about diverse domains.
Methods: The method of data warehousing is structured query language (SQL) and its various extensions. Data mining employs the "law of large numbers" and the principles of statistics and probability that address the issues around decision making in uncertainty. Predictive analytics carries forward the work of the two predecessor domains. Though not a silver bullet, better algorithms in operations research, risk minimization and parallel processing, when combined with hardware improvements and the lessons of usability testing, have resulted in successful new predictive applications emerging in the market. Widely diverging domains such as the behaviour of consumers, stocks and bonds, and fraud detection have been attacked with significant success by predictive analytics on a progressively incremental scale and scope. The work of the past decade in building the data warehouse and especially of its closely related techniques, particularly parallel processing, are key enabling factors. Statistical processing has been useful in data preparation, model construction and model validation. However, it is only with predictive analytics that the inference and knowledge are actually encoded into the model that, in turn, is encapsulated in a business application.
Definition
This results in the following definition of predictive analytics: Methods of directed and undirected knowledge discovery, relying on statistical algorithms, neural networks and optimization research to prescribe (recommend) and predict (future) actions based on discovering, verifying and applying patterns in data to predict the behavior of customers, products, services, market dynamics and other critical business transactions. In general, tools in predictive analytics employ methods to identify and relate independent and dependent variables - the independent variable being "responsible for" the dependent one and the way in which the variables "relate," providing a pattern and a model for the behavior of the downstream variables.
In data warehousing, the analyst asks a question of the data set with a predefined set of conditions and qualifications, and a known output structure. The traditional data cube addresses: What customers are buying or using which product or service and when and where are they doing so? Typically, the question is represented in a piece of SQL against a relational database. The business insight needed to craft the question to be answered by the data warehouse remains hidden in a black box - the analyst's head. Data mining gives us tools with which to engage in question formulation based primarily on the "law of large numbers" of classic statistics. Predictive analytics have introduced decision trees, neural networks and other pattern-matching algorithms constrained by data percolation. It is true that in doing so, technologies such as neural networks have themselves become a black box. However, neural networks and related technologies have enabled significant progress in automating, formulating and answering questions not previously envisioned. In science, such a practice is called "hypothesis formation," where the hypothesis is treated as a question to be defined, validated and refuted or confirmed by the data.
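As a toy illustration of that independent/dependent-variable relationship, and of letting an algorithm form the "hypothesis", the short Python sketch below fits a scikit-learn decision tree to a handful of invented customer records and then applies the learned pattern to a new case; it shows the mechanics only, not a realistic churn model.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Independent variables (invented): months as a customer, support calls
# last quarter. Dependent variable: 1 if the customer later churned.
X = [[2, 5], [3, 4], [24, 1], [36, 0], [5, 6], [48, 1]]
y = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# The discovered pattern is explicit, so it can be inspected and validated.
print(export_text(model, feature_names=["tenure_months", "support_calls"]))

# Apply the pattern forward to a brand-new customer record.
print(model.predict([[4, 5]]))  # on this toy data, predicts churn (1)
```

Swap the toy lists for real warehouse extracts and you have the basic loop described above: discover a pattern, verify it, then encapsulate it in a business application.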
For more visit my Blog:
http://bim-businessintelligence.blogspot.com/