This research work will be published in IJSER (International Journal of Scientific & Engineering Research), August 2012 edition. Part of the research paper will be posted here:
ELegant Analytics (ELA) presents
Personal Finance Intelligence
Medical doctors always say that the best medicine is preventive medicine, yet research shows that humans do very little in this direction. The subject of this research is not medicine but something no less important: the economy.
Financial crises have happened before, are happening now, and will happen again in the future if we do not create a preventive plan. One solution that aims to be part of a "preventive medicine" for financial crises has been produced in the data laboratories of ELA (ELegant Analytics), and its name is PFI.
What is PFI?
Personal Finance Intelligence (PFI) is the name of a Business Intelligence solution for personal finance and budget planning. Inspired by the TV show “Luksusfellen” (a show broadcast in Norway, roughly “The Luxury Trap”, about people struggling with their economy while living a luxurious life), this Business Intelligence approach may be a solution for all those who fail to manage their own economy well, for those who want to improve it, and, last but not least, for the bank itself.
The purpose of this project is to create a Customer Analytical Cube that processes each bank customer's data, using his/her history for his/her own benefit, and then returns the most important answers that both customers and the bank itself need.
This solution will also include benchmarking against an imaginary subject representing the Min, Max, or Avg of customers' values over a set defined by, for example, a certain region, a period of time, an age group, sex, or an income range.
To give users more control and help them plan their own economy, targeting will be an option: users (bank customers) can set targets for costs or income a month, a quarter, or a year ahead, and they will always be warned when they are about to reach the targeted cost amount.
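As a rough illustration of the targeting alert described above, here is a minimal sketch; the function name, the 90% warning threshold, and the numbers are invented for illustration and are not part of the PFI implementation:

```python
def check_target(spent_so_far, cost_target, warn_ratio=0.9):
    """Return an alert message when spending approaches or exceeds the target."""
    if spent_so_far >= cost_target:
        return "Target exceeded"
    if spent_so_far >= warn_ratio * cost_target:
        return "Warning: about to reach your cost target"
    return None

# A customer with a monthly cost target of 10,000 who has already spent
# 9,200 is inside the 90% warning band.
print(check_target(9200, 10000))  # Warning: about to reach your cost target
```

In a real deployment the same check would run against each customer's aggregated transactions rather than a single number, but the thresholding logic is the core of the alert.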
The project is also meant to be used by the bank itself when it wants to evaluate a customer and his/her behavior regarding financial stability, because today's credit-scoring systems lack some important data that could make decisions more accurate. Customer behavior will be equally important for the bank, so the bank will know what type of customer each one is and how he/she handles his/her economy.
Focus and goals
The main focus of this project is the customer: his/her history and his/her behavior.
Our first goal is to make it possible to collect customer data at the finest transaction granularity possible without tying it to personal identities. This way, the bank operates on the complete data, diving into details securely and lawfully. Our second goal is to apply Business Intelligence to the customer's historical data, giving alerts and advice where he/she is performing badly and support where he/she is doing well.
Our third goal is to show where the customer stands compared with the region where he/she lives and with his/her age group, sex, and income level. This will help him/her improve savings and cut costs by showing how people around him/her manage with the same budget.
Our fourth, but no less important, goal relates to the bank itself: the bank gains a clear financial picture of its customer and can make far better decisions than with a credit-scoring system alone.
Such a system lowers risk and improves customer loyalty.
A data center with the capacity for:
• Data transfer once a day or live-data (for the bank side)
• Centralized Customer Intelligence for the Entire Bank
• Live Data transfer and access
• Separate service for each client
• Client vs. Average, Max or Min of a set of clients (Benchmarking)
• Other Intelligence analysis (Geography, age, sex etc…)
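The benchmarking bullet above (client vs. Average, Max, or Min of a set of clients) can be sketched as follows; the category, customer values, and peer set are invented for illustration, not taken from the PFI design:

```python
from statistics import mean

def benchmark(customer_value, peer_values):
    """Compare one customer against the Min, Max and Avg of a peer set
    (e.g. same region, age group, sex or income range)."""
    avg = mean(peer_values)
    return {
        "min": min(peer_values),
        "max": max(peer_values),
        "avg": avg,
        "customer": customer_value,
        "vs_avg": customer_value - avg,  # positive = spends more than peers
    }

# Monthly grocery spend of one customer vs. four peers in the same region.
report = benchmark(3200, [2800, 3000, 3500, 2700])
print(report["vs_avg"])
```

In the actual solution this comparison would be expressed as calculated measures in the cube (e.g. in MDX), but the arithmetic is the same.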
Processes on the fly (Administration and Maintenance):
• Optimizing ETL
• Optimizing DB and DWH
• Optimizing Indexing and Data Volume
• Query Performance regarding MDX calculated measures
Project Closure Recommendations
It is strongly recommended that privacy and security issues regarding customers' confidential information be given high consideration.
It is also highly recommended to handle user impersonation, from the data source to the reported data, in the best possible way, covering security from the data source all the way to the cube's role-group security.
Another recommendation concerns planning for data volume and query performance against large volumes of data in the production environment.
Data Architecture of the PFI Solution
* If you want to download the full research just click here.
Tuesday, July 24, 2012
A fantastic post by Andrew McAfee
Human intuition can be astonishingly good, especially after it's improved by experience. Savvy poker players are so good at reading their opponents' cards and bluffs that they seem to have x-ray vision. Firefighters can, under extreme duress, anticipate how flames will spread through a building. And nurses in neonatal ICUs can tell if a baby has a dangerous infection even before blood test results come back from the lab.
The lexicon used to describe this phenomenon is mostly mystical in nature. Poker players have a sixth sense; firefighters feel the blaze's intentions; nurses just know what seems like an infection. They can't even tell us what data and cues they use to make their excellent judgments; their intuition springs from a deep place that can't be easily examined. Examples like these give many people the impression that human intuition is generally reliable, and that we should rely more on the decisions and predictions that come to us in the blink of an eye.
This is deeply misguided advice. We should rely less, not more, on intuition.
A huge body of research has clarified much about how intuition works, and how it doesn't. Here's some of what we've learned:
• It takes a long time to build good intuition. Chess players, for example, need 10 years of dedicated study and competition to assemble a sufficient mental repertoire of board patterns.
• Intuition only works well in specific environments, ones that provide a person with good cues and rapid feedback. Cues are accurate indications about what's going to happen next. They exist in poker and firefighting, but not in, say, stock markets. Despite what chartists think, it's impossible to build good intuition about future market moves because no publicly available information provides good cues about later stock movements. Feedback from the environment is information about what worked and what didn't. It exists in neonatal ICUs because babies stay there for a while. It's hard, though, to build medical intuition about conditions that change after the patient has left the care environment, since there's no feedback loop.
• We apply intuition inconsistently. Even experts are inconsistent. One study determined what criteria clinical psychologists used to diagnose their patients, and then created simple models based on these criteria. Then, the researchers presented the doctors with new patients to diagnose and also diagnosed those new patients with their models. The models did a better job diagnosing the new cases than did the humans whose knowledge was used to build them. The best explanation for this is that people applied what they knew inconsistently — their intuition varied. Models, though, don't have intuition.
• It's easy to make bad judgments quickly. We have many biases that lead us astray when making assessments. Here's just one example. If I ask a group of people "Is the average price of German cars more or less than $100,000?" and then ask them to estimate the average price of German cars, they'll "anchor" around BMWs and other high-end makes when estimating. If I ask a parallel group the same two questions but say "more or less than $30,000" instead, they'll anchor around VWs and give a much lower estimate. How much lower? About $35,000 on average, or half the difference in the two anchor prices. How information is presented affects what we think.
• We can't tell where our ideas come from. There's no way for even an experienced person to know if a spontaneous idea is the result of legitimate expert intuition or of a pernicious bias. In other words, we have lousy intuition about our intuition.
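The model-versus-expert study in the list above (the clinical psychologists example) is often called "bootstrapping" the expert: elicit the criteria weights a judge implicitly uses, then apply them identically every time. A minimal sketch, with invented criteria and weights:

```python
# Hypothetical criteria weights elicited from a clinician's past diagnoses.
WEIGHTS = {"symptom_a": 0.5, "symptom_b": 0.3, "history_flag": 0.2}

def model_score(patient):
    """Deterministic linear score: the same patient always gets the same score,
    which is exactly the consistency a tired or distracted human lacks."""
    return sum(WEIGHTS[k] * patient.get(k, 0) for k in WEIGHTS)

patient = {"symptom_a": 1, "symptom_b": 0, "history_flag": 1}
assert model_score(patient) == model_score(patient)  # perfectly consistent
print(model_score(patient))
```

The point is not that the weights are clever; it is that a fixed formula never has an off day.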
My conclusion from all of this research, and much more I've looked at, is that intuition is a lot like Tom Cruise's acting ability: real, but vastly overrated and deployed far too often.
So can we do better? Do we have an alternative to relying on human intuition, especially in complicated situations where there are a lot of factors at play? Sure. We have a large toolkit of statistical techniques designed to find patterns in masses of data (even big masses of messy data), and to deliver best guesses about cause-and-effect relationships. No responsible statistician would say that these techniques are perfect or guaranteed to work, but they're pretty good.
The arsenal of statistical techniques can be applied to almost any setting, including wine evaluation. Princeton economist Orley Ashenfelter predicts Bordeaux wine quality (and hence eventual price) using a model he developed that takes into account winter and harvest rainfall and growing season temperature. Massively influential wine critic Robert Parker has called Ashenfelter an "absolute total sham" and his approach "so absurd as to be laughable." But as Ian Ayres recounts in his great book Supercrunchers, Ashenfelter was right and Parker wrong about the '86 vintage, and the way-out-on-a-limb predictions Ashenfelter made about the sublime quality of the '89 and '90 wines turned out to be spot on.
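Ashenfelter's actual regression was fit to historical Bordeaux auction prices; the sketch below only shows the shape of such a model, with made-up coefficients, to illustrate how a handful of weather inputs can rank vintages:

```python
def predict_quality(winter_rain_mm, growing_temp_c, harvest_rain_mm):
    """Toy linear model in the spirit of Ashenfelter's: coefficients are
    invented for illustration, not his fitted values."""
    return (10.0
            + 0.001 * winter_rain_mm    # wet winters help the vines
            + 0.6 * growing_temp_c      # warm growing seasons help
            - 0.004 * harvest_rain_mm)  # rain at harvest hurts

# A warm vintage with a dry harvest scores above a cool, wet one.
good = predict_quality(600, 17.5, 100)
bad = predict_quality(600, 15.0, 300)
print(good > bad)  # True
```

The real model's power came from fitting those coefficients to decades of price data, which is precisely what no taster's palate can do.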
Those of us who aren't wine snobs or speculators probably don't care too much about the prices of first-growth Bordeaux, but most of us would benefit from accurate predictions about such things as academic performance in college; diagnoses of throat infections and gastrointestinal disorders; occupational choice; and whether or not someone is going to stay in a job, become a juvenile delinquent, or commit suicide.
I chose those seemingly random topics because they're ones where statistically based algorithms have demonstrated at least a 17 percent advantage over the judgments of human experts.
But aren't there at least as many areas where the humans beat the algorithms? Apparently not. A 2000 paper surveyed 136 studies in which human judgment was compared to algorithmic prediction. Sixty-five of the studies found no real difference between the two, and 63 found that the equation performed significantly better than the person. Only eight of the studies found that people were significantly better predictors of the task at hand. If you're keeping score, that's just under a 6% win rate for the people and their intuition, and a 46% rate of clear losses.
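The scorecard arithmetic from that survey, spelled out:

```python
# Study counts from the 2000 survey cited above.
studies = {"no_difference": 65, "model_wins": 63, "human_wins": 8}
total = sum(studies.values())  # 136 studies in all

human_rate = studies["human_wins"] / total
model_rate = studies["model_wins"] / total
print(f"{human_rate:.1%} human wins, {model_rate:.1%} clear model wins")
# 5.9% human wins, 46.3% clear model wins
```

That is the "just under 6%" win rate for intuition and the 46% rate of clear losses quoted in the text.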
So why do we continue to place so much stock in intuition and expert judgment? I ask this question in all seriousness. Overall, we get inferior decisions and outcomes in crucial situations when we rely on human judgment and intuition instead of on hard, cold, boring data and math. This may be an uncomfortable conclusion, especially for today's intuitive experts, but so what? I can't think of a good reason for putting their interests over the interests of patients, customers, shareholders, and others affected by their judgments.
So do we just dispense with the human experts altogether, or take away all their discretion and tell them to do whatever the computer says? In a few situations, this is exactly what's been done. For most of us, our credit scores are an excellent predictor of whether we'll pay back a loan, and banks have long relied on them to make automated yes/no decisions about offering credit. (The sub-prime mortgage meltdown stemmed in part from the fact that lenders started ignoring or downplaying credit scores in their desire to keep the money flowing. This wasn't intuition as much as rank greed, but it shows another important aspect of relying on algorithms: They're not greedy, either).
In most cases, though, it's not feasible or smart to take people out of the decision-making loop entirely. When this is the case, a wise move is to follow the trail being blazed by practitioners of evidence-based medicine, and to place human decision makers in the middle of a computer-mediated process that presents an initial answer or decision generated from the best available data and knowledge. In many cases, this answer will be computer generated and statistically based. The process gives the expert involved the opportunity to override the default decision; it monitors how often overrides occur, and why; it feeds back data on override frequency to both the experts and their bosses; and it monitors the outcomes of decisions (where possible) so that both the algorithms and intuition can be improved.
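A minimal sketch of such a computer-mediated, override-monitored process; all names, the credit example, and the logging scheme are hypothetical illustrations of the loop just described:

```python
override_log = []

def mediated_decision(case, model_predict, expert_review):
    """Present the model's default answer, let the expert override it,
    and record every override (with its reason) for later analysis."""
    default = model_predict(case)
    final, reason = expert_review(case, default)
    if final != default:
        override_log.append({"case": case, "default": default,
                             "final": final, "reason": reason})
    return final

# Hypothetical example: a credit model says "deny"; the expert overrides
# after seeing new income documentation. The override is logged.
decision = mediated_decision(
    {"id": 1},
    model_predict=lambda c: "deny",
    expert_review=lambda c, d: ("approve", "new income documentation"),
)
print(decision, len(override_log))  # approve 1
```

Reviewing `override_log` over time is what lets both sides improve: frequent overrides in one area suggest the model needs retraining, while overrides that turn out badly suggest the expert's intuition does.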
Over time, we'll get more data, more powerful computers, and better predictive algorithms. We'll also do better at helping group-level (as opposed to individual) decision making, since many organizations require consensus for important decisions. This means that the 'market share' of computer automated or mediated decisions should go up, and intuition's market share should go down. We can feel sorry for the human experts whose roles will be diminished as this happens. I'm more inclined, however, to feel sorry for the people on the receiving end of today's intuitive decisions and judgments.
What do you think? Am I being too hard on intuitive decision making, or not hard enough? Can experts and algorithms learn to get along? Have you seen cases where they're doing so? Leave a comment, please, and let us know.