In probability theory, the coupon collector's problem describes "collect all coupons and win" contests. It asks the following question: given an urn of n different coupons, from which coupons are drawn uniformly at random with replacement, what is the probability that more than t trials are needed to collect all n coupons? An alternative statement is: given n coupons, how many coupons do you expect to draw with replacement before having drawn each coupon at least once? The mathematical analysis of the problem shows that the expected number of trials needed grows as Θ(n log n). For example, when n = 50 it takes about 225 trials on average to collect all 50 coupons.
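The n = 50 figure can be reproduced by direct simulation. The following is a minimal Python sketch (the function name, seed, and run count are illustrative choices, not part of the problem statement):

```python
import random

def trials_to_collect(n, rng):
    """Draw coupons uniformly with replacement until all n have been seen."""
    seen = set()
    draws = 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

rng = random.Random(0)
n, runs = 50, 2000
avg = sum(trials_to_collect(n, rng) for _ in range(runs)) / runs
print(f"average over {runs} runs: {avg:.1f}")  # close to n * H_n, about 225
```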


Solution

Calculating the expectation

Let T be the time to collect all n coupons, and let t_i be the time to collect the i-th new coupon after i − 1 coupons have been collected. Think of T and t_i as random variables. Given that i − 1 coupons are already collected, the probability of drawing a new coupon is p_i = (n − (i − 1))/n. Therefore, t_i has a geometric distribution with expectation 1/p_i. By linearity of expectation we have:

{\displaystyle {\begin{aligned}\operatorname {E} (T)&=\operatorname {E} (t_{1})+\operatorname {E} (t_{2})+\cdots +\operatorname {E} (t_{n})={\frac {1}{p_{1}}}+{\frac {1}{p_{2}}}+\cdots +{\frac {1}{p_{n}}}\\&={\frac {n}{n}}+{\frac {n}{n-1}}+\cdots +{\frac {n}{1}}\\&=n\cdot \left({\frac {1}{1}}+{\frac {1}{2}}+\cdots +{\frac {1}{n}}\right)\\&=n\cdot H_{n}.\end{aligned}}}

Here Hn is the n-th harmonic number. Using the asymptotics of the harmonic numbers, we obtain:

{\displaystyle \operatorname {E} (T)=n\cdot H_{n}=n\log n+\gamma n+{\frac {1}{2}}+O(1/n),}

where γ ≈ 0.5772156649 is the Euler–Mascheroni constant.
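The exact value n·H_n and the asymptotic formula can be compared numerically. A short Python sketch (the constant is truncated to the digits quoted above):

```python
import math

GAMMA = 0.5772156649  # Euler-Mascheroni constant, truncated

def expected_exact(n):
    """E(T) = n * H_n, with the harmonic number computed directly."""
    return n * sum(1 / k for k in range(1, n + 1))

def expected_asymptotic(n):
    """n log n + gamma*n + 1/2; the omitted term is O(1/n)."""
    return n * math.log(n) + GAMMA * n + 0.5

for n in (10, 50, 1000):
    print(n, round(expected_exact(n), 3), round(expected_asymptotic(n), 3))
```

Already at n = 50 the two values agree to about three decimal places, consistent with the O(1/n) error term.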

Now one can use the Markov inequality to bound the desired probability:

{\displaystyle \operatorname {P} (T\geq cnH_{n})\leq {\frac {1}{c}}.}
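For instance, with c = 2 the Markov bound says P(T ≥ 2nH_n) ≤ 1/2; simulation suggests the true tail probability is far smaller, i.e. the bound is quite loose. A hedged Python sketch (seed and run count are arbitrary):

```python
import random

def trials_to_collect(n, rng):
    """Draw coupons uniformly with replacement until all n have been seen."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

rng = random.Random(1)
n, c, runs = 50, 2.0, 2000
h_n = sum(1 / k for k in range(1, n + 1))
freq = sum(trials_to_collect(n, rng) >= c * n * h_n for _ in range(runs)) / runs
print(freq, "<=", 1 / c)  # empirical tail frequency vs. the Markov bound
```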

Calculating the variance

Using the independence of random variables ti, we obtain:

{\displaystyle {\begin{aligned}\operatorname {Var} (T)&=\operatorname {Var} (t_{1})+\operatorname {Var} (t_{2})+\cdots +\operatorname {Var} (t_{n})\\&={\frac {1-p_{1}}{p_{1}^{2}}}+{\frac {1-p_{2}}{p_{2}^{2}}}+\cdots +{\frac {1-p_{n}}{p_{n}^{2}}}\\&<\left({\frac {n^{2}}{n^{2}}}+{\frac {n^{2}}{(n-1)^{2}}}+\cdots +{\frac {n^{2}}{1^{2}}}\right)\\&=n^{2}\cdot \left({\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+\cdots +{\frac {1}{n^{2}}}\right)\\&<{\frac {\pi ^{2}}{6}}n^{2},\end{aligned}}}

since {\displaystyle {\frac {\pi ^{2}}{6}}={\frac {1}{1^{2}}}+{\frac {1}{2^{2}}}+\cdots +{\frac {1}{n^{2}}}+\cdots } (see Basel problem).

Now one can use the Chebyshev inequality to bound the desired probability:

{\displaystyle \operatorname {P} \left(|T-nH_{n}|\geq cn\right)\leq {\frac {\pi ^{2}}{6c^{2}}}.}
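The Chebyshev bound can be checked empirically in the same way (a Python sketch; seed and run count are arbitrary):

```python
import math
import random

def trials_to_collect(n, rng):
    """Draw coupons uniformly with replacement until all n have been seen."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

rng = random.Random(2)
n, c, runs = 50, 2.0, 3000
h_n = sum(1 / k for k in range(1, n + 1))
bound = math.pi ** 2 / (6 * c ** 2)
freq = sum(abs(trials_to_collect(n, rng) - n * h_n) >= c * n
           for _ in range(runs)) / runs
print(freq, "<=", round(bound, 3))  # deviation frequency vs. Chebyshev bound
```

As with Markov, the observed frequency sits well below the bound, reflecting the slack in the variance estimate.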

Tail estimates

A different upper bound can be derived from the following observation. Let Z_i^r denote the event that the i-th coupon was not picked in the first r trials. Then:

{\displaystyle P\left[{Z}_{i}^{r}\right]=\left(1-{\frac {1}{n}}\right)^{r}\leq e^{-r/n}.}

Thus, for r = βn log n, we have P[Z_i^r] ≤ e^{−(βn log n)/n} = n^{−β}. Taking the union bound over all n coupons then gives:

{\displaystyle P\left[T>\beta n\log n\right]=P\left[\bigcup _{i}{Z}_{i}^{\beta n\log n}\right]\leq n\cdot P\left[{Z}_{1}^{\beta n\log n}\right]\leq n^{-\beta +1}.}
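The chain of inequalities above is purely analytic and can be verified for concrete values of n and β (a Python sketch; the particular n and β are arbitrary):

```python
import math

n, beta = 50, 2.0
r = beta * n * math.log(n)

p_miss = (1 - 1 / n) ** r   # P[Z_i^r]: coupon i missed in the first r trials
union = n * p_miss          # union bound over all n coupons
print(p_miss, "<=", math.exp(-r / n))   # (1 - 1/n)^r <= e^{-r/n}
print(union, "<=", n ** (1 - beta))     # n * P[Z_1^r] <= n^{1-beta}
```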


Extensions and generalizations

  • Paul Erdős and Alfréd Rényi proved the limit theorem for the distribution of T, which refines the bounds above:
{\displaystyle \operatorname {P} (T<n\log n+cn)\to e^{-e^{-c}},\quad {\text{as }}n\to \infty .}
  • Donald J. Newman and Lawrence Shepp found a generalization of the coupon collector's problem when m copies of each coupon need to be collected. Let Tm be the first time m copies of each coupon are collected. They showed that the expectation in this case satisfies:
{\displaystyle \operatorname {E} (T_{m})=n\log n+(m-1)n\log \log n+O(n),\quad {\text{as }}n\to \infty .}
Here m is fixed. When m = 1 we get the earlier formula for the expectation.
  • A common generalization, also due to Erdős and Rényi:
{\displaystyle \operatorname {P} {\bigl (}T_{m}<n\log n+(m-1)n\log \log n+cn{\bigr )}\to e^{-e^{-c}/(m-1)!},\quad {\text{as }}n\to \infty .}
  • In the general case of a nonuniform probability distribution, where coupon i is drawn with probability p_i, the expected time is, according to Philippe Flajolet,
{\displaystyle E(T)=\int _{0}^{\infty }{\bigl (}1-\prod _{i=1}^{n}{\bigl (}1-e^{-p_{i}t}{\bigr )}{\bigr )}\,dt.}
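Flajolet's integral can be evaluated numerically; in the uniform case p_i = 1/n it recovers n·H_n. A Python sketch (the step size and truncation point are arbitrary accuracy choices):

```python
import math

def expected_time(p, dt=0.01, t_max=400.0):
    """Left Riemann sum for E(T) = integral of 1 - prod_i (1 - e^{-p_i t})."""
    total = 0.0
    for k in range(int(t_max / dt)):
        t = k * dt
        total += (1.0 - math.prod(1.0 - math.exp(-q * t) for q in p)) * dt
    return total

# Uniform case: recovers n * H_n = 5 * (1 + 1/2 + ... + 1/5) = 11.417
n = 5
uniform = expected_time([1 / n] * n)
print(uniform)

# With unequal probabilities the rare coupons dominate the waiting time
skewed = expected_time([0.5, 0.2, 0.15, 0.1, 0.05])
print(skewed)
```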



See also

  • Watterson estimator
  • Birthday problem





References

  • Blom, Gunnar; Holst, Lars; Sandell, Dennis (1994), "7.5 Coupon collecting I, 7.6 Coupon collecting II, and 15.4 Coupon collecting III", Problems and Snapshots from the World of Probability, New York: Springer-Verlag, pp. 85-87, 191, ISBN 0-387-94161-4, MR 1265713 .
  • Dawkins, Brian (1991), "Siobhan's problem: the coupon collector revisited", The American Statistician, 45 (1): 76-82, doi:10.2307/2685247, JSTOR 2685247 .
  • Erdős, Paul; Rényi, Alfréd (1961), "On a classical problem of probability theory" (PDF), Magyar Tudományos Akadémia Matematikai Kutató Intézetének Közleményei, 6: 215-220, MR 0150807 .
  • Newman, Donald J.; Shepp, Lawrence (1960), "The double dixie cup problem", American Mathematical Monthly, 67: 58-61, doi:10.2307/2308930, MR 0120672 .
  • Flajolet, Philippe; Gardy, Danièle; Thimonier, Loÿs (1992), "Birthday paradox, coupon collectors, caching algorithms and self-organizing search", Discrete Applied Mathematics, 39 (3): 207-229, doi:10.1016/0166-218X(92)90177-C, MR 1189469 .
  • Isaac, Richard (1995), "8.4 The coupon collector's problem solved", The Pleasures of Probability, Undergraduate Texts in Mathematics, New York: Springer-Verlag, pp. 80-82, ISBN 0-387-94415-X, MR 1329545 .
  • Motwani, Rajeev; Raghavan, Prabhakar (1995), "3.6. The Coupon Collector's Problem", Randomized algorithms, Cambridge: Cambridge University Press, pp. 57-63, MR 1344451 .



External links

  • "Coupon Collector Problem" by Ed Pegg, Jr., the Wolfram Demonstrations Project. Mathematica package.
  • How Many Singles, Doubles, Triples, Etc., Should The Coupon Collector Expect?, a short note by Doron Zeilberger.

Source of article: Wikipedia