Part 2: Quantitative Analysis

Reading 1: Probabilities
What is a PDF, probability density/distribution function?
Continuous → the likelihood of an outcome occurring between two points.
Discrete → the likelihood of a specific outcome.
What is a CDF, cumulative distribution function?
The probability that a random variable (continuous or discrete) is LESS than or equal to a certain value. Because probabilities cannot be negative, the CDF is a nondecreasing function. The CDF is derived from the PDF by summing all PDF values below the specific value; function-wise, take the primitive (integral) of the PDF.
Note: 1 − CDF gives the likelihood of being larger than the value.
What is the inverse CDF?
The inverse of a CDF: F⁻¹(p) = a instead of F(a) = p. Rearrange the left- and right-hand sides of the formula to solve for a. This makes it easy to calculate the loss a company would suffer at least at a certain probability, as in Value-at-Risk.
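The Value-at-Risk idea above can be sketched with Python's standard-library `statistics.NormalDist`, which exposes both `cdf` and `inv_cdf`. The distribution and the P&L figures here are assumptions for illustration, not from the source.

```python
from statistics import NormalDist

# Hypothetical example: daily P&L modeled as normal with mean 0 and
# standard deviation 1,000,000 (assumed figures, for illustration only).
pnl = NormalDist(mu=0, sigma=1_000_000)

# Inverse CDF: find the loss level a such that P[PnL <= a] = 5%,
# i.e. F^-1(0.05) = a instead of F(a) = 0.05.
var_95 = pnl.inv_cdf(0.05)  # 5th percentile of the P&L distribution

# Round-trip check: CDF(inv_cdf(p)) should return p.
assert abs(pnl.cdf(var_95) - 0.05) < 1e-9
print(f"95% VaR: a loss of {-var_95:,.0f} or worse with 5% probability")
```

Note that the 95% VaR comes out as a negative P&L value; it is conventionally reported as a positive loss amount.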

What are mutually exclusive events?
Two mutually exclusive events cannot happen at the same time (like rain yes/no), so the probability that EITHER of them occurs is just the sum of their individual probabilities.
Union: P[A ∪ B] = P[A] + P[B]
What are independent events?
Knowing that one event has happened tells us nothing about the probability of the other happening: the events do not influence each other.
Independent: P[A] = P[A | B]
Joint (both) probability of independent events: P[A ∩ B] = P[A] * P[B]
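The independence rule P[A ∩ B] = P[A] * P[B] can be verified by brute force on a small sample space. The two-dice setup below is an assumed example, not from the source; exact `Fraction` arithmetic avoids rounding issues.

```python
from fractions import Fraction

# Assumed example: two fair dice. A = "first die shows 6", B = "second die shows 6".
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def prob(event):
    """Probability of an event: favorable outcomes over total outcomes."""
    return Fraction(sum(event(o) for o in outcomes), len(outcomes))

p_a = prob(lambda o: o[0] == 6)                  # P[A] = 1/6
p_b = prob(lambda o: o[1] == 6)                  # P[B] = 1/6
p_ab = prob(lambda o: o[0] == 6 and o[1] == 6)   # P[A and B] = 1/36

# Independence: the joint probability equals the product of the marginals.
assert p_ab == p_a * p_b
```

The same check fails for dependent events, e.g. "first die shows 6" and "sum of both dice is 12", whose joint probability (1/36) exceeds the product of the marginals (1/6 * 1/36).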
What about probability matrices?
They are useful when dealing with the joint probabilities of two variables: put the joint probabilities in a probability table, where row and column sums give the marginal probabilities.
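A probability table can be sketched as a dictionary keyed by outcome pairs. The two variables and all joint probabilities below are assumed figures for illustration only.

```python
# Hypothetical 2x2 probability matrix for two binary variables
# (all joint probabilities are assumed figures, for illustration only):
# rows = equity market "up"/"down", columns = bonds "upgrade"/"downgrade".
table = {
    ("up",   "upgrade"):   0.15,
    ("up",   "downgrade"): 0.45,
    ("down", "upgrade"):   0.05,
    ("down", "downgrade"): 0.35,
}

# All joint probabilities must sum to 1.
assert abs(sum(table.values()) - 1.0) < 1e-12

# Marginal probabilities are row and column sums of the joint probabilities.
p_up = sum(v for (m, _), v in table.items() if m == "up")            # 0.60
p_upgrade = sum(v for (_, b), v in table.items() if b == "upgrade")  # 0.20
```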

Reading 2: Basic Statistics
What about BLUE, the Best Linear Unbiased Estimator?
Because the true parameters of a distribution are unknown, we use estimators, and an estimator can come out different every time we take a sample. By BLUE we aim for the best linear combination of the data that is unbiased, implying minimum variance among linear unbiased estimators; a larger sample brings us closer to the true parameter.
All estimators in this reading are BLUE or ratios of BLUE estimators.
Reading 3: Distributions
What are parametric and nonparametric distributions?
A parametric distribution can be described by a mathematical function. Although more insightful, it forces us to make assumptions, which may not be supported by real-world data.
A nonparametric distribution cannot be summarized by a mathematical formula; in its simplest form it is just a collection of data. Although true to real life, nonparametric distributions are mostly too specific to the data and therefore less useful for generalization.
What is a uniform distribution?
A continuous (or discrete) random variable with a constant PDF equal to c between b1 and b2. It is the most fundamental distribution in statistics. PDF = 1/(b2 − b1).
Mean: (b2 + b1)/2
Variance: (b2 − b1)^2 / 12
CDF: (a − b1)/(b2 − b1)
If b1 = 0 and b2 = 1 we have a standard uniform distribution. Uniform distributions are mostly useful in combination, as multiple uniform distributions can form more complex distributions.
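The uniform formulas above can be sketched in plain Python. The bounds b1 = 2 and b2 = 6 are assumed values for illustration; the Monte Carlo check at the end uses the stdlib `random.uniform`.

```python
import random

b1, b2 = 2.0, 6.0  # assumed bounds, for illustration only

pdf = 1 / (b2 - b1)             # constant density c = 1/(b2 - b1)
mean = (b1 + b2) / 2            # (b2 + b1)/2 = 4.0
variance = (b2 - b1) ** 2 / 12  # (b2 - b1)^2 / 12

def cdf(a):
    """P[X <= a] for X ~ Uniform(b1, b2), clamped to [0, 1]."""
    return min(max((a - b1) / (b2 - b1), 0.0), 1.0)

assert cdf(b1) == 0.0 and cdf(b2) == 1.0
assert cdf(mean) == 0.5  # half the mass lies below the mean

# Monte Carlo check of the mean: random.uniform draws from Uniform(b1, b2).
random.seed(0)
sample_mean = sum(random.uniform(b1, b2) for _ in range(100_000)) / 100_000
assert abs(sample_mean - mean) < 0.05
```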
What is a Bernoulli distribution?
Incredibly simple: the outcome is either 0 or 1, so just binary.
Mean: p
Variance: p*(1 − p)
Can be combined with a uniform distribution: for a standard uniform draw smaller than p set the outcome to 1, and for a draw larger than or equal to p set it to 0.
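The uniform-to-Bernoulli trick above can be sketched directly; the success probability p = 0.3 and the seed are assumed values for illustration.

```python
import random

def bernoulli(p, u):
    """Map a standard-uniform draw u to a Bernoulli(p) outcome:
    1 if u < p, else 0."""
    return 1 if u < p else 0

random.seed(42)
p = 0.3  # assumed success probability, for illustration only
draws = [bernoulli(p, random.random()) for _ in range(100_000)]

# The sample mean should be close to p, and the sample variance to p*(1 - p).
sample_mean = sum(draws) / len(draws)
assert abs(sample_mean - p) < 0.01
assert abs(sample_mean * (1 - sample_mean) - p * (1 - p)) < 0.01
```

This is the standard inverse-transform idea in its simplest form: the same uniform draw fed through any inverse CDF produces a draw from that distribution.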