2. Creating a Probability Model
A probability model is a mathematical description of an experiment listing all possible outcomes and their associated probabilities. For instance, if there is a 1% chance of winning a raffle and a 99% chance of losing the raffle, a probability model would look much like the table below.

Outcome   Probability (Fraction)   Probability (Decimal)   Probability (Percent)
Win       1/100                    0.01                     1%
Lose      99/100                   0.99                     99%
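As a sketch (my addition, not part of the original text), a probability model like the raffle table can be written as an outcome-to-probability mapping and checked against the basic rules: every probability lies in [0, 1] and the probabilities of all outcomes sum to 1.

```python
# Hypothetical raffle model: outcome -> probability.
raffle = {"win": 0.01, "lose": 0.99}

def is_valid_model(model, tol=1e-9):
    """Check the two basic rules: every probability lies in [0, 1]
    and the probabilities of all outcomes sum to 1."""
    in_range = all(0.0 <= p <= 1.0 for p in model.values())
    sums_to_one = abs(sum(model.values()) - 1.0) <= tol
    return in_range and sums_to_one

print(is_valid_model(raffle))                  # True
print(is_valid_model({"a": 0.5, "b": 0.6}))    # False: sums to 1.1
```

The tolerance parameter is there because floating-point probabilities rarely sum to exactly 1.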
This came up in a discussion a few years ago, where people were arguing about the meaning of probability: is it long-run frequency, is it subjective belief, is it betting odds, etc.? I wrote: Probability is a mathematical concept. Probabilities are probabilities to the extent that they follow the Kolmogorov axioms.
Let me set aside quantum probability for the moment. The different definitions of probabilities (betting, long-run frequency, etc.) can be usefully thought of as models rather than definitions. They are different examples of paradigmatic real-world scenarios in which the Kolmogorov axioms, and thus probability, apply.
To define it based on any imperfect real-world counterpart (such as betting or long-run frequency) makes about as much sense as defining a line in Euclidean space as the edge of a perfectly straight piece of metal, or as the space occupied by a very thin thread that is pulled taut.
Real-world models are important for the application of probability, and it makes a lot of sense to me that such an important concept has many different real-world analogies, none of which are perfect. My point is that none of these frameworks is the foundation of probability; rather, probability is a mathematical concept which applies to various problems, including long-run frequencies, betting, uncertainty, decision making, statistical inference, etc.
In practice, probability is not a perfect model for any of these scenarios: long-run frequencies are in practice not stationary, betting depends on your knowledge of the counterparty, uncertainty includes both known and unknown unknowns, decision making is open-ended, and statistical inference is conditional on assumptions that in practice will be false. That said, probability can be a useful tool for all these problems.
When you fire a gun, the bullet leaves the barrel at a velocity and at an angle relative to the ground. You can plot where it lands. You measure what happens and adjust your weapon, the amount of powder used, etc. As these project to planes that …. Speaking as a frequentist, I fully agree that anything fulfilling the Kolmogorov axioms is a probability, and that both Bayesian and frequentist probabilities typically meet this standard.
For example, Bayesians will sometimes describe p-values as probabilities conditional on the null parameter value. Frequentists will insist that there is no conditioning going on. Instead, there is a statistical model, which is a family of data distributions indexed by a parameter, and p-values are probabilities computed using the null distribution.
Conditioning on parameter values makes no sense when parameters are not random variables. This type of distinction may or may not lead to much confusion in practice, but a statement about p-values construed in optimal betting terms after learning that the null parameter value obtains is very different from a statement about p-values construed as limiting relative frequencies in a hypothetical world in which the null parameter value obtains.
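To make the frequentist reading concrete, here is a minimal sketch (the coin-flip numbers are hypothetical, my addition to the thread) of a p-value as a probability computed using the null distribution, with no conditioning notation anywhere:

```python
import random

random.seed(0)

# Hypothetical example: 20 coin flips, 15 heads observed.
# Null model: a fair coin (probability of heads = 0.5).
n, observed_heads = 20, 15

# The p-value is a probability computed *using the null distribution*:
# the chance, under the null model, of data at least as extreme as observed.
sims = 100_000
extreme = 0
for _ in range(sims):
    heads = sum(random.random() < 0.5 for _ in range(n))
    if abs(heads - n / 2) >= abs(observed_heads - n / 2):
        extreme += 1

p_value = extreme / sims
print(round(p_value, 3))   # close to the exact two-sided binomial tail, about 0.041
```

Nothing here treats the parameter as random; the null value is simply plugged into the simulation.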
The purist in me generally wants to be clear which of these statements (or some other statement, or both statements) is being made, and such loose talk tends to muddy the waters. There was another post re p-values and conditioning where Andrew seemed to take almost the opposite view, arguing that a p-value is a conditional probability on informal grounds. Though it is perhaps consistent with a desire for alternative axiomatic treatments of probability, e.g. those taking conditional probability as basic….
There is remarkable contrast between the two. While almost complete consensus exists about the mathematics, there is a wide divergence of opinions about the philosophy. The p-value is a conditional probability: conditioned on the model and the null hypothesis (although one might alternatively consider the null hypothesis part of the model).
I agree, but there has been some dispute about this point. So I write p(y|H). That is, to me, the conditional probability is the fundamental or atomic concept, and the joint distribution is the derived quantity.
You seem to be choosing an alternative axiomatic treatment (which do exist! Or not?). The problem then is that conditional probability is undefined purely based on those. Within the Kolmogorov system it then needs to be defined in terms of those axioms and primitives, giving the usual ratio form.
I think de Finetti and Popper etc. do this. So I do think people are prioritising intuitions, e.g. about conditional probability, over axiomatics, somewhat contrary to the theme of the post. See also below — how do you define conditional probability within the Kolmogorov system?
This is like set theory. Do you mean that you prefer to derive this formula as a result, that you prefer a different definition, or that you accept this definition but you dislike it for some reason? In the continuous case there is a problem conditioning on zero-measure events, but it can be handled by taking limits adequately. Perhaps if you rephrase your question, I can respond to it. From these, one can derive the formula for conditional probability, provided that (or, as I might say, given that) the system of probabilities given A does satisfy the three axioms.
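As an illustration (mine, not from the thread), the ratio definition P(A|B) = P(A ∩ B) / P(B) can be checked on a small discrete sample space, where conditioning on a positive-probability event causes no trouble:

```python
from fractions import Fraction

# Sample space: two fair six-sided dice, all 36 outcomes equally likely.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]
p = Fraction(1, 36)  # probability of each elementary outcome

A = {(i, j) for (i, j) in omega if i + j == 7}   # event: sum is 7
B = {(i, j) for (i, j) in omega if i == 3}       # event: first die shows 3

# Kolmogorov-style ratio definition: P(A | B) = P(A ∩ B) / P(B)
P_B = len(B) * p
P_A_and_B = len(A & B) * p
P_A_given_B = P_A_and_B / P_B

print(P_A_given_B)   # 1/6
```

Exact rational arithmetic (`Fraction`) sidesteps floating-point noise, so the ratio comes out exactly 1/6.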
Accept that p(A|K) is an assignment of a probability based on a state of information K, which is a set of true propositions, and that p(A) is a shorthand notation for this where K is implicit.
Now to get frequentism we restrict the whole thing to sequences of numbers, and K to knowledge of the properties of the sequences. There are certainly axiomatisations of probability where conditional probability is taken as basic and the Kolmogorov definition is a theorem — see also de Finetti and Popper, I think.
Which I think comes back to the point about axiomatics — if people have different intuitions about which are the basic concepts, then they may prefer different axiom systems.
Martha even sent a link to remove any ambiguity. Both Martha and Andrew have expressed a preference for conditioning to have a primary role. I realize now that my last sentence in the quote is wrong. I think I was remembering something else related that I had worked through carefully a few years ago. To attempt to describe the situation a little more clearly: in calculating a p-value, one assumes a particular type of distribution and particular values of parameters for that type of distribution, and uses these in calculating the particular probability that is called the p-value.
Thus the calculation depends on the type of distribution and the particular values of the parameters of that type of distribution. For the record, in his axiomatic formulation de Finetti defines conditional probability in the same way as Kolmogorov. Instead, there is a statistical model, which is a family of data distributions indexed by a parameter.
The parameter space need not have any of the additional structure required to make it a probability space, i.e. subsets etc. of the parameter space need not satisfy the Kolmogorov axioms. To me this structure makes reasonable sense over observables, but not over theoretical constructs. Did you ever skim that document I sent you? The other is to express that the probability function P itself is defined in terms of ….
In calculating p-values, there is an added complication: the p-value would be defined in the above terminology as something like P(A; normal model with mean mu_0 and standard deviation sigma), where sigma is unknown, so the p-value is estimated using an estimate of sigma from the data. On a related note, suppose you carry out a Bayesian data analysis (that is, building a generative probability model, fitting it, checking it, continuously expanding it, and so on until you converge on a single fitted model) and obtain a posterior distribution, or a sample from it, for a quantity of interest.
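A rough sketch of that added complication (the data and null value below are invented for illustration): plug the sample estimate of sigma into the normal null model and simulate the null distribution of the t-style statistic. Because the statistic is pivotal, the plugged-in sigma value does not affect its null distribution.

```python
import math
import random
import statistics

random.seed(1)

# Hypothetical data and null mean mu_0; sigma is unknown and must be estimated.
data = [5.1, 4.8, 5.6, 5.3, 4.9, 5.7, 5.2, 5.5]
mu0 = 5.0
n = len(data)

def t_stat(xs, mu):
    s = statistics.stdev(xs)                  # sample estimate of sigma
    return (statistics.mean(xs) - mu) / (s / math.sqrt(n))

t_obs = t_stat(data, mu0)

# Null distribution by simulation: draw samples from a normal model with
# mean mu_0 and the *estimated* sigma, re-estimating sigma in each draw.
s_hat = statistics.stdev(data)
sims = 20_000
extreme = sum(
    abs(t_stat([random.gauss(mu0, s_hat) for _ in range(n)], mu0)) >= abs(t_obs)
    for _ in range(sims)
)
p_value = extreme / sims
print(round(p_value, 3))
```

In practice one would use the closed-form t distribution rather than simulation; the Monte Carlo version just makes the "probability computed using the null distribution" reading explicit.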
How would you measure that? I disagree that probability is a mathematical concept. The use of mathematical concepts to represent phenomena does not make such phenomena mathematical, as it happens in Physics, for example.
I think probability is a measurement of the degree of uncertainty associated with phenomena. To follow Kelvin: it is only by associating numbers with any scientific concept that the concept can be properly understood. I would agree with this, but it is consistent with saying probability is a mathematical concept, and with using probability to represent the degree of uncertainty associated with phenomena.
That does not make the falling object a mathematical phenomenon. Probability is not just a mathematical concept. Fields of probabilities on a field of sets (subsets of the set E of elementary events) are mathematical concepts.
Suppose your answer is that they rigorously summarize our prior concept of probability. Now consider an alternative set of axioms. The alternative set of axioms would be less intuitive, but all of the truths about probability that we derive from Kolmogorov, we could derive from the alternative. You might respond that that is fine because the two systems are equivalent. But I did not say the two systems were equivalent.
I said that they were equivalent as far as we know. Maybe someone will tell me that there is one, but I doubt it. If truth outstrips provability for Kolmogorov probability, as it does for arithmetic (see Gödel), then probability is not reducible to the axioms.
Since Gödel, we cannot assume that our axiom systems fully embrace our mathematical concepts. This reductionist view of mathematical truth has been dead for around 90 years. Why did he choose those axioms? Because they matched with some kind of intuitive concept that people have about fundamentally unpredictable repeatable events, but in the end, you have a mathematical structure: axioms, definitions, and theorems that follow from the axioms.
The answer comes from model theory: an axiom system is consistent if and only if you can exhibit a model of the axioms. The axioms are so simple that all you need is the exponential distribution on the positive reals to exhibit an object that defines a sample space and a probability measure.
That the exponential distribution satisfies the axioms required to define a probability measure is pretty trivial to prove. Therefore the axioms are consistent. The existence of the exponential distribution as a model of the Kolmogorov axioms proves their consistency as axioms.
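A minimal numerical check of that claim (my sketch, using the rate-1 exponential distribution as the model of the axioms):

```python
import math

# Exponential distribution with rate 1 on the positive reals.
def pdf(x):
    return math.exp(-x)        # density: nonnegative everywhere

def cdf(x):
    return 1.0 - math.exp(-x)  # P((0, x])

# Axiom 1: probabilities are nonnegative.
assert all(pdf(x) >= 0 for x in (0.1, 1.0, 5.0, 50.0))

# Axiom 2: the whole sample space has probability 1 (limit of the CDF).
assert abs(cdf(700.0) - 1.0) < 1e-12

# Axiom 3: additivity for disjoint events, e.g. (0, 1] and (1, 2].
p_01 = cdf(1.0) - cdf(0.0)
p_12 = cdf(2.0) - cdf(1.0)
assert abs((p_01 + p_12) - cdf(2.0)) < 1e-12

print("checks passed")
```

These spot checks are of course not a proof; the actual proof is the elementary calculus argument the comment alludes to.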
The view that axiom systems can be devoid of content is a dead end. We know this from the work of Tarski, Gödel, Quine, et al. Tarski showed that truth cannot be defined in the object language, but only in a metalanguage. I know this is philosophy of mathematics and not statistics, but my point is those results hold outside of arithmetic as well. There is no longer any reason to suppose that an axiom system just presents a formalism that can be applied in certain cases.
Also, no consistent system can prove its own consistency. An inconsistent system can prove its own consistency because everything follows from a contradiction, given the law of the excluded middle.
Mathematical truths are not analytic, devoid of existential content, or merely a matter of convention. I disagree entirely with your interpretation of Gödel. Ultimately what we learn from Gödel is that meaning comes from how we use math, not from the axioms.
In fact there are multiple distinct meanings that we can assign to the numbers that come from probability calculations, and this is why there are arguments about Bayesian vs frequentist statistics. The numbers are the same and all follow the axioms, but the mappings to real-world concepts, which occur ENTIRELY outside the axioms, leave us with philosophical arguments. The axioms themselves define the formal manipulations that are allowed. They are basically computing rules.
What is probability?
A. Develop a uniform probability model by assigning equal probability to all outcomes, and use the model to determine probabilities of events. For example, if a student is selected at random from a class, find the probability that Jane will be selected and the probability that a girl will be selected.
If the proportion of occurrences of an outcome settles down to one value over the long run, that one value is then defined to be the probability of that outcome. There are two main rules that probabilities must satisfy for a given experiment. If an event is impossible, then its probability must be equal to 0.
If an event is a certainty, then its probability must be equal to 1. A probability model is a mathematical description of long-run regularity consisting of a sample space S and a way of assigning probabilities to events.
Probability models must satisfy both of the above rules. There are two main ways to assign probabilities to outcomes from a sample space. A basketball player shoots three free throws. We are interested in creating a probability model for the number of free throws that a basketball player makes when shooting three in a row.
Recall from above that the sample space for this event, writing H for a made (hit) free throw and M for a miss, is:

{HHH, HHM, HMH, MHH, HMM, MHM, MMH, MMM}

The probability model for the number of free throws made, assuming this player has an equal chance of making (hitting) or missing each free throw, is:

Number made:  0     1     2     3
Probability:  1/8   3/8   3/8   1/8
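The model for the number of free throws made can be reproduced by brute-force enumeration of the equally likely sequences (a sketch, assuming the equal-chance model):

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Sample space: every hit/miss sequence for three free throws (8 outcomes),
# each equally likely under the equal-chance assumption.
outcomes = list(product("HM", repeat=3))   # 'H' = hit (made), 'M' = miss

# Probability model for the number of free throws made.
counts = Counter(seq.count("H") for seq in outcomes)
model = {k: Fraction(counts[k], len(outcomes)) for k in sorted(counts)}

for made, prob in model.items():
    print(made, prob)   # 0 1/8, 1 3/8, 2 3/8, 3 1/8
```

The probabilities sum to 1, as every valid probability model requires.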