Sample Statistics
A '''population parameter''' is the value of a statistic based on population data. This is almost always not measurable. A '''sample statistic''' is the value of a statistic based on sample data. We use sample statistics to estimate population parameters.
= Experiment and Events =
An '''experiment''' is a process that results in a random outcome that cannot be known in advance with certainty. We are interested in particular subsets of outcomes, called '''events'''.
For example, consider rolling a 6-sided die. The act of rolling it is the ''experiment''; rolling an odd number and rolling a 6 are both ''events''.
== Event Operations ==
Consider two events <math>A, B</math>.
The '''union''' of the two events is the set of outcomes that belong to <math>A</math>, <math>B</math>, or both.
The '''intersection''' of the two events is the set of outcomes that belong to both <math>A</math> and <math>B</math>.
The '''complement''' of event <math>A</math> is the set of all outcomes that do not belong to <math>A</math>.
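These operations map directly onto Python's built-in set type. A small sketch using the die-roll sample space from above (the specific events are chosen for illustration):

```python
# Sample space for one roll of a 6-sided die.
outcomes = {1, 2, 3, 4, 5, 6}
A = {1, 3, 5}  # event: rolled an odd number
B = {5, 6}     # event: rolled at least a 5

union = A | B                # outcomes in A, B, or both
intersection = A & B         # outcomes in both A and B
complement_A = outcomes - A  # outcomes not in A

print(union, intersection, complement_A)
```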
== Misc. Terminology ==
Two events are '''mutually disjoint''' if their intersection is the empty set.
When we select observations randomly from a population, if an observation can be selected again, the selection is done '''with replacement'''. Otherwise, it is '''without replacement'''.
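A quick sketch of the distinction using Python's standard library (the population values here are made up for illustration):

```python
import random

population = [10, 20, 30, 40, 50]

# With replacement: the same observation may be drawn more than once.
draw_with = random.choices(population, k=3)

# Without replacement: each observation may be drawn at most once.
draw_without = random.sample(population, k=3)

print(draw_with, draw_without)
```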
= Probability =
The '''probability''' of an event is a number between 0 and 1 representing how likely it is that the outcome of the experiment belongs to that event.
== Classic Probability ==
'''Logic based''' (or '''classic''') probability assumes all possible outcomes are equally likely based on a logical/natural assumption. We assume we know all possible outcomes in advance, and the probability is calculated by
<math>P(A) = \frac{\text{Number of outcomes in }A}{\text{Number of all possible outcomes}}</math>
This, naturally, requires a strong assumption. Who is to say that all outcomes are equally likely? The counterpart is experimental probability.
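Under the equally-likely assumption, the formula is just counting. A sketch for the die example:

```python
from fractions import Fraction

outcomes = [1, 2, 3, 4, 5, 6]            # all faces, assumed equally likely
A = [x for x in outcomes if x % 2 == 1]  # event: rolled an odd number

# P(A) = |A| / |all possible outcomes|
P_A = Fraction(len(A), len(outcomes))
print(P_A)  # 1/2
```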
== Experimental Probability ==
'''Experimental''' (or '''relative frequency''') probability is based on experiments and/or samples.
<math>\hat{P}(A) = \frac{\text{Number of times }A\text{ occurred in the sample}}{\text{Sample size }n}</math>
As the sample size increases, so does the accuracy of this estimate.
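The relative-frequency estimate can be checked by simulation. A sketch for estimating the probability of an odd die roll (the seed is arbitrary, chosen only so the demo is reproducible):

```python
import random

random.seed(0)  # arbitrary seed for reproducibility
n = 100_000
rolls = [random.randint(1, 6) for _ in range(n)]

# \hat{P}(odd) = (number of odd rolls) / n
p_hat = sum(1 for r in rolls if r % 2 == 1) / n
print(p_hat)  # close to the classical value 0.5 for large n
```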
== Fundamental Rules ==
There are several fundamental rules of probability:
# The probabilities of all possible outcomes add up to 1.
# Every probability is between 0 and 1.
# <math>P(A) = 1 - P(A^C)</math>
# <math>P(A \cup B) = P(A) + P(B) - P(A \cap B)</math>
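The complement and addition rules can be verified by brute-force counting on the die's sample space (the events are chosen for illustration):

```python
from fractions import Fraction

outcomes = {1, 2, 3, 4, 5, 6}
A = {1, 3, 5}  # event: odd roll
B = {5, 6}     # event: roll of at least 5

def P(event):
    # classical probability: all outcomes equally likely
    return Fraction(len(event), len(outcomes))

# Complement rule: P(A) = 1 - P(A^C)
print(P(A) == 1 - P(outcomes - A))          # True
# Addition rule: P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
print(P(A | B) == P(A) + P(B) - P(A & B))   # True
```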
== Conditional Probability ==
We are interested in the probability of an event given that another event has been observed. This is the '''conditional probability''' of ''A given B'':
<math>P(A|B) = \frac{P(A \cap B)}{P(B)}</math>
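For equally likely outcomes this is again just counting. A sketch asking for the probability of rolling a 6 given that the roll was even:

```python
from fractions import Fraction

outcomes = {1, 2, 3, 4, 5, 6}
A = {6}        # event: rolled a 6
B = {2, 4, 6}  # event: rolled an even number

def P(event):
    return Fraction(len(event), len(outcomes))

# P(A|B) = P(A ∩ B) / P(B) = (1/6) / (1/2)
P_A_given_B = P(A & B) / P(B)
print(P_A_given_B)  # 1/3
```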
Substituting <math>P(A \cap B) = P(B|A)P(A)</math> and expanding <math>P(B)</math> with the law of total probability gives '''Bayes' Theorem''', which lets us compute <math>P(A|B)</math> when we know <math>P(B|A)</math>:
<math>P(A|B) = \frac{P(B|A) P(A)}{P(B|A) P(A) + P(B|A^C) P(A^C)}</math>
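A sketch of the theorem with made-up numbers for a hypothetical diagnostic test, where <math>A</math> is "condition present" and <math>B</math> is "test positive". All three input probabilities are assumptions for illustration only:

```python
P_A = 0.01           # prior P(A): condition is rare (made-up number)
P_B_given_A = 0.95   # P(B|A): test detects the condition (made-up)
P_B_given_Ac = 0.05  # P(B|A^C): false-positive rate (made-up)

P_Ac = 1 - P_A
# The denominator is the total probability of B.
P_A_given_B = (P_B_given_A * P_A) / (P_B_given_A * P_A + P_B_given_Ac * P_Ac)
print(round(P_A_given_B, 3))  # ~0.161: small despite the accurate test, because A is rare
```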
Latest revision as of 06:11, 19 March 2024