Outcome (probability)
In probability theory, an outcome is a possible result of an experiment or trial.[1] Each possible outcome of a particular experiment is unique, and different outcomes are mutually exclusive (only one outcome will occur on each trial of the experiment). All of the possible outcomes of an experiment form the elements of a sample space.[2] For the experiment where we flip a coin twice, the four possible outcomes that make up our sample space are (H, T), (T, H), (T, T) and (H, H), where "H" represents a "heads" and "T" represents a "tails". Outcomes should not be confused with events, which are sets (or informally, "groups") of outcomes. For comparison, we could define an event to occur when "at least one 'heads'" is flipped in the experiment - that is, when the outcome contains at least one 'heads'. This event would contain all outcomes in the sample space except the element (T, T).

Sets of outcomes: events

Since individual outcomes may be of little practical interest, or because there may be prohibitively (even infinitely) many of them, outcomes are grouped into sets of outcomes that satisfy some condition; such sets are called "events". The collection of all such events is a sigma-algebra.[3] An event containing exactly one outcome is called an elementary event. The event that contains all possible outcomes of an experiment is its sample space. A single outcome can be a part of many different events.[4] Typically, when the sample space is finite, any subset of the sample space is an event (that is, all elements of the power set of the sample space are defined as events). However, this approach does not work well in cases where the sample space is uncountably infinite (most notably when the outcome must be some real number). So, when defining a probability space it is possible, and often necessary, to exclude certain subsets of the sample space from being events.

Probability of an outcome

Outcomes may occur with probabilities between zero and one (inclusive). In a discrete probability distribution whose sample space is finite, each outcome is assigned a particular probability. In contrast, in a continuous distribution, individual outcomes all have zero probability, and non-zero probabilities can only be assigned to ranges of outcomes. Some "mixed" distributions contain both stretches of continuous outcomes and some discrete outcomes; the discrete outcomes in such distributions can be called atoms and can have non-zero probabilities.[5] Under the measure-theoretic definition of a probability space, the probability of an outcome need not even be defined. In particular, the set of events on which probability is defined may be some σ-algebra on the sample space and not necessarily its full power set.

Equally likely outcomes

In some sample spaces, it is reasonable to estimate or assume that all outcomes in the space are equally likely (that they occur with equal probability). For example, when tossing an ordinary coin, one typically assumes that the outcomes "head" and "tail" are equally likely to occur. An implicit assumption that all outcomes are equally likely underpins most randomization tools used in common games of chance (e.g. rolling dice, shuffling cards, spinning tops or wheels, drawing lots, etc.). Of course, players in such games can try to cheat by subtly introducing systematic deviations from equal likelihood (for example, with marked cards, loaded or shaved dice, and other methods).
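To make the coin-flipping example and the equal-likelihood assumption above concrete, here is a minimal Python sketch; the fair-coin model and the variable names are illustrative assumptions rather than anything prescribed by the article. It enumerates the four outcomes of two coin flips, treats them as equally likely, and computes the probability of the event "at least one 'heads'" as the fraction of outcomes that event contains.

    from itertools import product
    from fractions import Fraction

    # Sample space for flipping a coin twice: each outcome is an ordered pair.
    sample_space = set(product("HT", repeat=2))   # {('H','H'), ('H','T'), ('T','H'), ('T','T')}

    # Assume the ordinary-coin model from the text: all four outcomes equally likely.
    p_outcome = Fraction(1, len(sample_space))    # 1/4 for each outcome

    # An event is a subset of the sample space.
    # Event: "at least one heads" - every outcome except ('T', 'T').
    at_least_one_heads = {outcome for outcome in sample_space if "H" in outcome}

    # With equally likely outcomes, P(event) = |event| / |sample space|.
    p_event = Fraction(len(at_least_one_heads), len(sample_space))

    print(sorted(sample_space))       # [('H','H'), ('H','T'), ('T','H'), ('T','T')]
    print(p_outcome)                  # 1/4
    print(p_event)                    # 3/4

Because the sample space here is finite, any subset built this way (any element of its power set) is a legitimate event, as noted above.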
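The contrast drawn above between discrete and continuous distributions can be illustrated with the standard textbook case of a uniform distribution on [0, 1]; the helper function p_interval below is introduced purely for illustration.

    # For X uniformly distributed on [0, 1], probability is interval length:
    # a single outcome x corresponds to an interval of length zero,
    # while a range [a, b] has probability b - a.
    def p_interval(a: float, b: float) -> float:
        """P(a <= X <= b) for X uniform on [0, 1]; assumes 0 <= a <= b <= 1."""
        return b - a

    print(p_interval(0.5, 0.5))    # a single outcome: 0.0
    print(p_interval(0.25, 0.75))  # a range of outcomes: 0.5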
Some treatments of probability assume that the various outcomes of an experiment are always defined so as to be equally likely.[6] However, there are experiments that are not easily described by a set of equally likely outcomes: for example, if one were to toss a thumbtack many times and observe whether it landed with its point upward or downward, there is no symmetry to suggest that the two outcomes should be equally likely.
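To illustrate the thumbtack point, the following sketch assumes a purely hypothetical point-up probability of 0.6 (no real tack is being modeled) and estimates that probability from the relative frequency over many simulated tosses; in practice such a probability would have to be estimated empirically rather than deduced from symmetry.

    import random

    random.seed(0)

    # Hypothetical thumbtack: assume, purely for illustration, that it lands
    # point-up with probability 0.6. No symmetry argument forces this to be 0.5.
    P_POINT_UP = 0.6

    def toss_thumbtack():
        return "up" if random.random() < P_POINT_UP else "down"

    # Estimate the probability of "up" by its relative frequency over many trials.
    trials = 100_000
    ups = sum(toss_thumbtack() == "up" for _ in range(trials))
    print(ups / trials)   # close to 0.6, not 0.5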