The law of averages, or the secret of successful sellers. Average values and the strong law of large numbers

The words "large numbers" refer to the number of trials: one considers either a large number of values of a random variable or the cumulative effect of a large number of random variables. The essence of this law is as follows: although it is impossible to predict what value a single random variable will take in a single experiment, the total result of the action of a large number of independent random variables loses its random character and can be predicted almost reliably (i.e., with high probability). For example, it is impossible to predict which side a single coin will land on. But if you toss 2 tons of coins, you can assert with near certainty that the total weight of the coins that landed heads up is 1 ton.
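The effect is easy to see numerically. Below is a minimal Python sketch (the sample sizes are chosen arbitrarily for illustration): as the number of tossed coins grows, the fraction landing heads up stabilizes near 1/2.

```python
import random

# Fraction of coins landing heads up, for increasing numbers of coins.
for n in (10, 1_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"n = {n:>7}: fraction of heads = {heads / n:.4f}")
```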

First of all, the law of large numbers includes the so-called Chebyshev inequality, which estimates, for a single trial, the probability that a random variable takes a value deviating from its mean by no more than a given amount.

Chebyshev's inequality. Let X be an arbitrary random variable, a = M(X) its mathematical expectation, and D(X) its variance. Then for any ε > 0

P(|X − a| < ε) ≥ 1 − D(X)/ε².

Example. The nominal (i.e., required) diameter of a bushing machined on a lathe is 5 mm, and the variance is no more than 0.01 (this is the accuracy tolerance of the machine). Estimate the probability that, in the manufacture of one bushing, the deviation of its diameter from the nominal is less than 0.5 mm.

Solution. Let the r.v. X be the diameter of the manufactured bushing. By assumption, its mathematical expectation equals the nominal diameter (if there is no systematic error in the machine setup): a = M(X) = 5, and the variance is D(X) ≤ 0.01. Applying Chebyshev's inequality with ε = 0.5, we get:

P(|X − 5| < 0.5) ≥ 1 − 0.01/0.5² = 1 − 0.04 = 0.96.

Thus, the probability of such a deviation is quite high, and we may conclude that for a single manufactured part it is almost certain that the deviation of the diameter from the nominal will not exceed 0.5 mm.
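As a sanity check, we can compare the Chebyshev bound with a simulation. The sketch below assumes, purely for illustration, that the diameters are normally distributed with mean 5 and variance 0.01; Chebyshev's bound itself requires no such assumption.

```python
import random

MEAN, VAR, EPS = 5.0, 0.01, 0.5
N = 100_000

# Chebyshev lower bound: P(|X - a| < eps) >= 1 - D(X)/eps^2.
bound = 1 - VAR / EPS**2

# Monte Carlo estimate under an assumed normal distribution.
hits = sum(abs(random.gauss(MEAN, VAR**0.5) - MEAN) < EPS for _ in range(N))
print(f"Chebyshev bound: {bound:.3f}, simulated probability: {hits / N:.4f}")
```

The simulated probability comes out far above 0.96, which illustrates that the Chebyshev bound is universal but conservative.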

Essentially, the standard deviation σ characterizes the average deviation of a random variable from its center (i.e., from its mathematical expectation). Since it is an average deviation, larger deviations are quite possible in individual trials. How large can deviations be in practice? When studying normally distributed random variables, we derived the "three sigma" rule: in a single trial, a normally distributed random variable X practically never deviates from its mean by more than 3σ, where σ = σ(X) is the standard deviation of the r.v. X. We deduced this rule from the relation

P(|X − a| < 3σ) ≈ 0.9973.

Let us now estimate, for an arbitrary random variable X, the probability that it takes a value differing from its mean by no more than three standard deviations. Applying Chebyshev's inequality with ε = 3σ and noting that D(X) = σ², we get:

P(|X − a| < 3σ) ≥ 1 − σ²/(3σ)² = 1 − 1/9 ≈ 0.89.

Thus, in the general case the probability that a random variable deviates from its mean by no more than three standard deviations can be estimated by the number 0.89, whereas for a normal distribution it is guaranteed with probability 0.997.
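A short numerical comparison of the two guarantees (a sketch; the exact normal probability is computed via the error function from Python's standard library):

```python
import math

# Chebyshev bound for a deviation of k standard deviations: 1 - 1/k^2.
# Exact probability for a normal distribution: erf(k / sqrt(2)).
for k in (1, 2, 3):
    chebyshev = 1 - 1 / k**2
    normal = math.erf(k / math.sqrt(2))
    print(f"k = {k}: Chebyshev >= {chebyshev:.3f}, normal = {normal:.4f}")
```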

Chebyshev's inequality can be generalized to a system of independent identically distributed random variables.

Generalized Chebyshev inequality. If the independent random variables X1, X2, …, Xn have the same mathematical expectations M(Xi) = a and the same variances D(Xi) = D, then for any ε > 0

P(|(X1 + X2 + … + Xn)/n − a| < ε) ≥ 1 − D/(nε²).

For n = 1 this inequality reduces to the Chebyshev inequality formulated above.

The Chebyshev inequality, besides its independent value for solving the corresponding problems, is used to prove the so-called Chebyshev theorem. We first describe the essence of this theorem and then give its formal statement.

Let X1, X2, …, Xn be a large number of independent random variables with mathematical expectations M(X1) = a1, …, M(Xn) = an. Although each of them, as a result of an experiment, can take a value far from its mean (i.e., from its mathematical expectation), the random variable

X̄ = (X1 + X2 + … + Xn)/n,

equal to their arithmetic mean, will with high probability take a value close to the fixed number

ā = (a1 + a2 + … + an)/n

(the mean of all the mathematical expectations). This means the following. Suppose that, as a result of a trial, the independent random variables X1, X2, …, Xn (there are many of them!) take the values x1, x2, …, xn respectively. Then, while these values themselves may turn out to be far from the means of the corresponding random variables, their average

(x1 + x2 + … + xn)/n

is likely to be close to ā. Thus the arithmetic mean of a large number of random variables loses its random character and can be predicted with great accuracy. This can be explained by the fact that the random deviations of the values xi from ai may have different signs, so in the sum these deviations largely cancel with high probability.
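This cancellation effect is easy to demonstrate. The sketch below (with illustrative parameters) averages n independent uniform random variables, each with mean 0.5, and shows how the spread of the average shrinks as n grows.

```python
import random
import statistics

# Spread of the arithmetic mean of n uniform(0, 1) variables (each has mean 0.5).
for n in (1, 10, 100, 1000):
    means = [statistics.fmean(random.random() for _ in range(n)) for _ in range(2000)]
    print(f"n = {n:>4}: std of the average = {statistics.stdev(means):.4f}")
```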

Chebyshev's theorem (the law of large numbers in Chebyshev's form). Let X1, X2, …, Xn be a sequence of pairwise independent random variables whose variances are bounded by one and the same number: D(Xi) ≤ C. Then, however small the number ε > 0 we take, the probability of the inequality

|(X1 + … + Xn)/n − (a1 + … + an)/n| < ε

will be arbitrarily close to one if the number n of random variables is taken large enough. Formally, this means that under the conditions of the theorem

lim(n→∞) P(|(X1 + … + Xn)/n − (a1 + … + an)/n| < ε) = 1.

This type of convergence is called convergence in probability and is denoted by

(X1 + … + Xn)/n →P (a1 + … + an)/n.

Thus, Chebyshev's theorem says that if there is a sufficiently large number of independent random variables, then in a single trial their arithmetic mean will almost certainly take a value close to the mean of their mathematical expectations.

Most often, Chebyshev's theorem is applied in the situation where the random variables X1, X2, …, Xn have the same distribution (i.e., the same distribution law, or the same probability density). In effect, these are simply many instances of the same random variable.

Corollary (of the generalized Chebyshev inequality). If the independent random variables X1, X2, …, Xn have the same distribution with mathematical expectations M(Xi) = a and variances D(Xi) = D, then

lim(n→∞) P(|(X1 + … + Xn)/n − a| < ε) = 1,

i.e.

(X1 + … + Xn)/n →P a.

The proof follows from the generalized Chebyshev inequality by passing to the limit as n→∞ .

We note once again that the equalities written above do not guarantee that the value of the average (X1 + … + Xn)/n tends to a as n → ∞. This average remains a random variable, and its individual values can still be quite far from a. But the probability of such values (far from a) tends to 0 as n increases.

Remark. The conclusion of the corollary clearly remains valid in the more general case when the independent random variables X1, X2, …, Xn have different distributions but the same mathematical expectation (equal to a) and variances bounded in aggregate. This makes it possible to predict the accuracy of measuring a certain quantity even if the measurements are made by different instruments.

Let us consider in more detail the application of this corollary to the measurement of quantities. Suppose some instrument is used to perform n measurements of the same quantity, whose true value is a and is unknown to us. The results of such measurements x1, x2, …, xn may differ noticeably from each other (and from the true value a) owing to various random factors (pressure drops, temperature, random vibration, etc.). Consider the r.v. X, the instrument reading in a single measurement of the quantity, as well as the set of r.v.'s X1, X2, …, Xn, the instrument readings in the first, second, …, last measurement. Thus each of the quantities X1, X2, …, Xn is just one of the instances of the r.v. X, and therefore they all have the same distribution as the r.v. X. Since the measurement results are independent of one another, the r.v.'s X1, X2, …, Xn can be considered independent. If the instrument gives no systematic error (for example, the zero of the scale is not "knocked off", the spring is not stretched, etc.), then we may assume that the mathematical expectation M(X) = a, and therefore M(X1) = … = M(Xn) = a. Thus the conditions of the above corollary are satisfied, and as an approximate value of the quantity a we can take the "realization" of the random variable X̄ = (X1 + … + Xn)/n in our experiment (consisting of a series of n measurements), i.e.

a ≈ (x1 + x2 + … + xn)/n.

With a large number of measurements, good accuracy of this formula is practically assured. This justifies the practical principle that, with a large number of measurements, their arithmetic mean differs little from the true value of the measured quantity.
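A small simulation makes the principle concrete. In the sketch below we assume, for illustration only, a true value a = 10 and measurement noise uniform on [−1, 1]; the average of n readings approaches a as n grows.

```python
import random
import statistics

TRUE_VALUE = 10.0  # the unknown quantity a (assumed here for the demo)

def measure() -> float:
    """One instrument reading: true value plus symmetric random error."""
    return TRUE_VALUE + random.uniform(-1.0, 1.0)

for n in (1, 10, 100, 10_000):
    estimate = statistics.fmean(measure() for _ in range(n))
    print(f"n = {n:>5}: estimate = {estimate:.4f}, "
          f"error = {abs(estimate - TRUE_VALUE):.4f}")
```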

The widely used "sampling" method of mathematical statistics is based on the law of large numbers; it allows one to obtain objective characteristics of a random variable with acceptable accuracy from a relatively small sample of its values. This will be discussed in the next section.

Example. On a measuring instrument that introduces no systematic distortion, a certain quantity a is measured once (obtaining the value X1) and then 99 more times (obtaining the values X2, …, X100). As the true value of a, one first takes the result of the first measurement X1, and then the arithmetic mean of all the measurements X̄ = (X1 + … + X100)/100. The accuracy of the instrument is such that the standard deviation σ of a measurement is at most 1 (hence the variance D = σ² also does not exceed 1). For each of the two methods, estimate the probability that the measurement error does not exceed 2.

Solution. Let the r.v. X be the instrument reading in a single measurement. By assumption M(X) = a. To answer the questions posed, we apply the generalized Chebyshev inequality

P(|(X1 + … + Xn)/n − a| < ε) ≥ 1 − D/(nε²)

with ε = 2, first for n = 1 and then for n = 100. In the first case we get

P(|X1 − a| < 2) ≥ 1 − 1/4 = 0.75,

and in the second

P(|X̄ − a| < 2) ≥ 1 − 1/(100·4) = 0.9975.

Thus, the second method practically guarantees the required measurement accuracy, while the first leaves serious doubts.
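The same two bounds, computed directly (a minimal sketch of the arithmetic above):

```python
def chebyshev_bound(variance: float, n: int, eps: float) -> float:
    """Lower bound on P(|mean of n readings - a| < eps) from the
    generalized Chebyshev inequality: 1 - D / (n * eps^2)."""
    return 1 - variance / (n * eps**2)

for n in (1, 100):
    print(f"n = {n:>3}: P(error < 2) >= {chebyshev_bound(1.0, n, 2.0):.4f}")
```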

Let us apply the above statements to the random variables that arise in the Bernoulli scheme. Recall the essence of this scheme. Suppose n independent trials are performed, in each of which some event A can occur with the same probability p, and let q = 1 − p (by its meaning, this is the probability of the opposite event, the non-occurrence of A). Suppose we carry out n such trials. Consider the random variables: X1, the number of occurrences of the event A in the 1st trial, …, Xn, the number of occurrences of the event A in the nth trial. Each of the introduced r.v.'s can take the value 0 or 1 (the event A occurs in a trial or not); the value 1 is taken in each trial with probability p (the probability of occurrence of the event A in each trial), and the value 0 with probability q = 1 − p. Therefore these quantities have identical distribution laws:

X1:   value: 0, 1;   probability: q, p

…

Xn:   value: 0, 1;   probability: q, p

Therefore the means of these quantities and their variances are also the same: M(X1) = 0·q + 1·p = p, …, M(Xn) = p; D(X1) = (0²·q + 1²·p) − p² = p·(1 − p) = pq, …, D(Xn) = pq. Substituting these values into the generalized Chebyshev inequality, we obtain

P(|(X1 + … + Xn)/n − p| < ε) ≥ 1 − pq/(nε²).

Clearly, the r.v. X = X1 + … + Xn is the number of occurrences of the event A in all n trials (the "number of successes" in n trials). Suppose that in the n trials the event A occurred in k of them. Then the previous inequality can be written as

P(|k/n − p| < ε) ≥ 1 − pq/(nε²).

But the quantity Wn(A) = k/n, equal to the ratio of the number of occurrences of the event A in n independent trials to the total number of trials, was earlier called the relative frequency of the event A in n trials. Hence the inequality

P(|Wn(A) − p| < ε) ≥ 1 − pq/(nε²).

Passing now to the limit as n → ∞, we get

lim(n→∞) P(|Wn(A) − p| < ε) = 1,

i.e. Wn(A) →P p (convergence in probability). This is the content of the law of large numbers in Bernoulli's form. It follows that for a sufficiently large number of trials n, arbitrarily small deviations of the relative frequency Wn(A) of an event from its probability p are practically certain, and large deviations are practically impossible. This conclusion about the stability of relative frequencies (which we previously treated as an experimental fact) justifies the statistical definition of the probability of an event introduced earlier: a number around which the relative frequency of an event fluctuates.
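The stabilization of the relative frequency is easy to watch numerically. A minimal sketch (the choice p = 0.5 is arbitrary and purely illustrative):

```python
import random

P = 0.5  # probability of the event A in each trial (illustrative)

successes = 0
for n in range(1, 100_001):
    successes += random.random() < P
    if n in (10, 100, 1000, 10_000, 100_000):
        print(f"n = {n:>6}: relative frequency = {successes / n:.4f}")
```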

Since the expression pq = p·(1 − p) = p − p² does not exceed 1/4 on the interval of variation 0 ≤ p ≤ 1 (this is easy to verify by finding the maximum of this function on the segment), the above inequality

P(|Wn(A) − p| < ε) ≥ 1 − pq/(nε²)

readily yields

P(|Wn(A) − p| < ε) ≥ 1 − 1/(4nε²),

which is used in solving the corresponding problems (one of them is given below).

Example. A coin is flipped 1000 times. Estimate the probability that the deviation of the relative frequency of heads from its probability is less than 0.1.

Solution. Applying the inequality P(|Wn(A) − p| < ε) ≥ 1 − pq/(nε²) with p = q = 1/2, n = 1000, ε = 0.1, we get

P(|Wn(A) − 0.5| < 0.1) ≥ 1 − (1/4)/(1000·0.01) = 1 − 0.025 = 0.975.

Example. Estimate the probability that, under the conditions of the previous example, the number k of heads falls in the range from 400 to 600.

Solution. The condition 400 < k < 600 means that 400/1000 < k/n < 600/1000, i.e. 0.4 < Wn(A) < 0.6, or |Wn(A) − 0.5| < 0.1. As we have just seen in the previous example, the probability of such an event is at least 0.975.

Example. To estimate the probability of some event A, 1000 experiments were carried out, in which the event A occurred 300 times. Estimate the probability that the relative frequency (equal to 300/1000 = 0.3) differs from the true probability p by no more than 0.1.

Solution. Applying the inequality P(|Wn(A) − p| < ε) ≥ 1 − 1/(4nε²) with n = 1000, ε = 0.1, we get

P(|0.3 − p| < 0.1) ≥ 1 − 1/(4·1000·0.01) = 1 − 0.025 = 0.975.
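All three estimates rest on the same universal bound 1 − 1/(4nε²); a quick sketch computing it:

```python
def frequency_bound(n: int, eps: float) -> float:
    """Universal lower bound on P(|W_n(A) - p| < eps), valid for any p,
    since pq = p(1 - p) <= 1/4."""
    return 1 - 1 / (4 * n * eps**2)

# The bound behind all three examples above (n = 1000, eps = 0.1).
print(f"P >= {frequency_bound(1000, 0.1):.3f}")
```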

Lecture 8. Section 1. Probability theory

Issues under consideration

1) The law of large numbers.

2) Central limit theorem.

The law of large numbers.

The law of large numbers in the broad sense is the general principle according to which, for a large number of random variables, their average result ceases to be random and can be predicted with a high degree of certainty.

The law of large numbers in the narrow sense is a collection of mathematical theorems, each of which, under certain conditions, establishes the convergence of the average characteristics of a large number of trials to certain definite constants. In proving theorems of this kind, Markov's and Chebyshev's inequalities are used, which are also of independent interest.

Theorem 1 (Markov's inequality). If a random variable X takes non-negative values and has a mathematical expectation, then for any positive number A the inequality

P(X > A) ≤ M(X)/A

holds.

Proof. We carry out the proof for a discrete random variable X. Suppose X takes the values x1, x2, …, of which the first k are less than or equal to A and all the rest are greater than A. Then

M(X) = Σi xi·pi ≥ Σi>k xi·pi ≥ A·Σi>k pi = A·P(X > A),

where Σi>k pi = P(X > A) is the probability that X takes a value greater than A. Dividing both sides by A yields the required inequality.

Example 1. The average number of calls arriving at a factory switchboard in an hour is 300. Estimate the probability that in the next hour the number of calls to the switchboard:

1) will exceed 400;

2) will be no more than 500.

Solution. 1) Let the random variable X be the number of calls arriving at the switchboard during an hour. The mean value is M(X) = 300. We need to estimate P(X > 400). By Markov's inequality, P(X > 400) ≤ 300/400 = 0.75.

2) P(X ≤ 500) = 1 − P(X > 500) ≥ 1 − 300/500 = 0.4. Thus, the probability that the number of calls will be no more than 500 is at least 0.4.
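A minimal sketch of these two Markov bounds:

```python
def markov_upper_bound(mean: float, a: float) -> float:
    """Markov's inequality: P(X > a) <= M(X)/a for non-negative X."""
    return mean / a

print(f"P(X > 400) <= {markov_upper_bound(300, 400):.2f}")       # 0.75
print(f"P(X <= 500) >= {1 - markov_upper_bound(300, 500):.2f}")  # 0.40
```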

Example 2. The sum of all deposits in a bank branch is 2 million rubles, and the probability that a randomly chosen deposit does not exceed 10 thousand rubles is 0.6. What can be said about the number of depositors?

Solution. Let X be the size of a randomly chosen deposit and n the number of all deposits. Then M(X) = 2000/n (thousand rubles). By Markov's inequality, P(X > 10) ≤ M(X)/10 = 200/n. Since P(X > 10) = 1 − 0.6 = 0.4, we get 0.4 ≤ 200/n, whence n ≤ 500.

Example 3. Let X be the time a student is late for a lecture, and suppose it is known that on average he is 1 minute late. Estimate the probability that the student will be at least 5 minutes late.

Solution. By assumption M(X) = 1. Applying Markov's inequality, we obtain P(X ≥ 5) ≤ M(X)/5 = 1/5 = 0.2.

Thus, out of every 5 students, on average no more than 1 will be at least 5 minutes late.

Theorem 2 (Chebyshev's inequality). For any random variable X with finite variance and any ε > 0,

P(|X − M(X)| ≥ ε) ≤ D(X)/ε².

Proof. Let the random variable X be given by a distribution series with values xi and probabilities pi. By the definition of the variance,

D(X) = Σi (xi − M(X))²·pi.

Exclude from this sum those terms for which |xi − M(X)| < ε. Since all terms are non-negative, the sum can only decrease. For definiteness, assume the excluded terms are the first k. Then

D(X) ≥ Σi>k (xi − M(X))²·pi ≥ ε²·Σi>k pi = ε²·P(|X − M(X)| ≥ ε).

Consequently, P(|X − M(X)| ≥ ε) ≤ D(X)/ε².

Chebyshev's inequality makes it possible to bound from above the probability that a random variable deviates from its mathematical expectation, using information about its variance alone. It is widely used, for example, in estimation theory.

Example 4. A coin is tossed 10,000 times. Estimate the probability that the frequency of heads differs from 0.5 by 0.01 or more.

Solution. Introduce independent random variables X1, …, Xn (n = 10,000), where Xi is a random variable with the distribution series: value 0, 1; probability 1/2, 1/2.

Then X = X1 + … + Xn is distributed according to the binomial law with p = q = 1/2. The frequency of heads is the random variable W = X/n, with M(W) = p = 0.5. Hence the variance of the frequency of heads is D(W) = pq/n = 0.25/10,000 = 0.000025. By Chebyshev's inequality,

P(|W − 0.5| ≥ 0.01) ≤ D(W)/0.01² = 0.000025/0.0001 = 0.25.

Thus, on average, in no more than a quarter of such series of 10,000 tosses will the frequency of heads differ from 0.5 by one hundredth or more.
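The Chebyshev bound of 0.25 is quite conservative here; a Monte Carlo sketch (200 simulated series, a count chosen only to keep the run fast) suggests the actual probability is far smaller:

```python
import random

N_TOSSES, N_SERIES, EPS = 10_000, 200, 0.01

big_deviations = 0
for _ in range(N_SERIES):
    heads = sum(random.random() < 0.5 for _ in range(N_TOSSES))
    if abs(heads / N_TOSSES - 0.5) >= EPS:
        big_deviations += 1

print(f"Chebyshev bound: 0.25, simulated: {big_deviations / N_SERIES:.3f}")
```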

Theorem 3 (Chebyshev). If X1, …, Xn are independent random variables whose variances are uniformly bounded (D(Xi) ≤ C for all i), then for any ε > 0

lim(n→∞) P(|(X1 + … + Xn)/n − (M(X1) + … + M(Xn))/n| < ε) = 1.

Proof. Since

D((X1 + … + Xn)/n) = (D(X1) + … + D(Xn))/n² ≤ C/n,

applying Chebyshev's inequality we obtain

P(|(X1 + … + Xn)/n − (M(X1) + … + M(Xn))/n| < ε) ≥ 1 − C/(nε²).

Since a probability cannot exceed 1, letting n → ∞ gives the desired result.

Corollary 1. If X1, …, Xn are independent random variables with uniformly bounded variances and the same mathematical expectation, equal to a, then

lim(n→∞) P(|(X1 + … + Xn)/n − a| < ε) = 1.    (1)

Equality (1) suggests that the random deviations of individual independent random variables from their common mean cancel each other out in the mass. Therefore, although the variables themselves are random, their average for large n is practically no longer random and is close to a. This means that if a is not known in advance, it can be computed via the arithmetic mean. This property of sequences of independent random variables is called the law of statistical stability. The law of statistical stability justifies the use of statistical analysis in making specific management decisions.

Theorem 4 (Bernoulli). If in each of n independent experiments the probability p of the occurrence of an event A is constant, then for any ε > 0

lim(n→∞) P(|m/n − p| < ε) = 1,

where m is the number of occurrences of the event A in these n trials.

Proof. Introduce independent random variables X1, …, Xn, where Xi is the number of occurrences of the event A in the i-th trial, with the distribution series: value 0, 1; probability q, p.

Then M(Xi) = p, D(Xi) = pq. Since pq ≤ 1/4, the variances D(Xi) are uniformly bounded. It follows from Chebyshev's theorem that

lim(n→∞) P(|(X1 + … + Xn)/n − p| < ε) = 1.

But X1 + X2 + … + Xn = m is the number of occurrences of the event A in the series of n trials, which completes the proof.

The meaning of Bernoulli's theorem is that as the number of identical independent experiments increases without bound, it can be asserted with practical certainty that the frequency of occurrence of an event will differ arbitrarily little from the probability of its occurrence in a single experiment (the statistical stability of the probability of an event). Bernoulli's theorem thus serves as a bridge from probability theory to its applications.


What is the secret of successful sellers? If you watch the best salespeople of any company, you will notice that they have one thing in common: each of them meets with more people and makes more presentations than the less successful salespeople. These people understand that sales is a numbers game: the more people they tell about their products or services, the more deals they close. That is all there is to it. They understand that if they talk not only to the few who will definitely say yes, but also to those whose interest in their offer is not as great, the law of averages will work in their favor.


Your earnings will depend on the number of sales, and at the same time they will be directly proportional to the number of presentations you make. Once you understand and begin to put into practice the law of averages, the anxiety associated with starting a new business or working in a new field will begin to decrease. As a result, a sense of control and confidence in your ability to earn will begin to grow. If you just make presentations and hone your skills in the process, the deals will come.

Rather than thinking about the number of deals, think about the number of presentations. It makes no sense to wake up in the morning or come home in the evening and start wondering who will buy your product. Instead, it is best to plan how many calls you need to make each day. And then, no matter what, make all of those calls! This approach will make your job easier, because it is a simple and specific goal. If you know that you have a very specific and achievable goal in front of you, it will be easier to make the planned number of calls. If you hear "yes" a couple of times along the way, so much the better!

And if "no", then in the evening you will feel that you honestly did everything you could, and you will not be tormented by thoughts about how much money you have earned, or how many partners you have acquired in a day.

Let's say that in your company or your business the average salesperson closes one deal for every four presentations. Now imagine that you are drawing cards from a deck. Every card of three of the suits (spades, diamonds, and clubs) is a presentation in which you professionally present a product, service, or opportunity. You do it as well as you can, but you still do not close the deal. And every heart card is a deal that brings you money or a new partner.

In such a situation, wouldn't you want to draw as many cards from the deck as possible? Suppose you are offered the chance to draw as many cards as you want, and you are paid, or gain a new partner, each time you draw a heart. You would begin drawing cards enthusiastically, barely noticing the suit of the card you have just pulled out.

You know that there are thirteen hearts in a deck of fifty-two cards, twenty-six in two decks, and so on. Will you be disappointed by drawing spades, diamonds, or clubs? Of course not! You will only think that each such "miss" brings you closer. To what? To a heart card!

But you know what? You have already been given this offer. You are in a unique position to earn as much as you want and to draw as many heart cards as you want in your life. And if you just "draw cards" conscientiously, improve your skills, and put up with a few spades, diamonds, and clubs along the way, you will become an excellent salesperson and succeed.

One of the things that makes selling so much fun is that every time you shuffle the deck, the cards come out differently. Sometimes all the hearts end up at the beginning of the deck, and after a lucky streak (when it already seems we will never lose!) a long run of cards of other suits awaits us. Other times, to get to the first heart you have to go through an endless number of spades, clubs, and diamonds. And sometimes cards of different suits fall out strictly in turn. But in any case, in every deck of fifty-two cards, in some order, there are always thirteen hearts. Just keep drawing cards until you find them.




Law of Large Numbers

The law of large numbers in probability theory states that the empirical mean (arithmetic mean) of a sufficiently large finite sample from a fixed distribution is close to the theoretical mean (mathematical expectation) of that distribution. Depending on the type of convergence, one distinguishes the weak law of large numbers, where convergence in probability holds, and the strong law of large numbers, where convergence almost everywhere holds.

There is always a number of trials such that, with any probability specified in advance, the relative frequency of occurrence of some event will differ from its probability by arbitrarily little.

The general meaning of the law of large numbers is that the joint action of a large number of random factors leads to a result that is almost independent of chance.

Methods of estimating probability from the analysis of a finite sample rest on this property. A good example is the prediction of election results based on polling a sample of voters.
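A sketch of the polling idea (the true support level of 0.52 and the sample sizes are invented for the illustration):

```python
import random

TRUE_SUPPORT = 0.52  # unknown in reality; assumed here for the demo

def poll(sample_size: int) -> float:
    """Fraction of polled voters who support the candidate."""
    return sum(random.random() < TRUE_SUPPORT
               for _ in range(sample_size)) / sample_size

for n in (100, 1_000, 10_000):
    print(f"poll of {n:>6} voters: estimated support = {poll(n):.3f}")
```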

Weak law of large numbers

Let there be an infinite sequence X1, X2, … of identically distributed and uncorrelated random variables defined on the same probability space. That is, cov(Xi, Xj) = 0 for i ≠ j. Let M(Xi) = μ. Denote the sample mean of the first n terms by

X̄n = (X1 + … + Xn)/n.

Then X̄n →P μ, i.e. for any ε > 0, P(|X̄n − μ| < ε) → 1 as n → ∞.

Strong law of large numbers

Let there be an infinite sequence X1, X2, … of independent identically distributed random variables defined on the same probability space. Let M(Xi) = μ. Denote the sample mean of the first n terms by

X̄n = (X1 + … + Xn)/n.

Then X̄n → μ almost surely, i.e. P(lim(n→∞) X̄n = μ) = 1.
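The strong law speaks about individual sample paths: along almost every realization, the running mean settles down to μ. A minimal sketch tracking one path (uniform variables, so μ = 0.5):

```python
import random

# One sample path of the running mean of uniform(0, 1) variables (mu = 0.5).
total = 0.0
for n in range(1, 1_000_001):
    total += random.random()
    if n in (10, 1_000, 100_000, 1_000_000):
        print(f"n = {n:>8}: running mean = {total / n:.5f}")
```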

