
Sampling and calculation accuracy

A number of my colleagues face the problem that, in order to calculate some metric, for example a conversion rate, they have to query the entire database, or run a detailed analysis over every one of millions of customers. Such queries can run for quite a long time, even in data warehouses built specifically for this. It is not much fun to wait 5, 15 or 40 minutes while a simple metric is being computed, only to find out that you need to count something else or add another dimension.


Sampling is one solution to this problem: instead of computing our metric over the whole data set, we take a subset that representatively reflects the metrics we need. This sample can be 1,000 times smaller than the data set and still be good enough to give us the numbers we need.


In this article, I decided to demonstrate how sample size affects the error of the final metric.


Problem


The key question is: how well does the sample describe the “population”? Once we take a sample from the full data set, the metrics we obtain become random variables. Different samples will give us different metric values. Different does not mean arbitrary: probability theory tells us that the sampled metric values should be grouped around the true metric value (the one computed over the entire data set) within a certain error. At the same time, different problems can tolerate different levels of error. It is one thing to find out whether the conversion is 50% or 10%, and quite another to need the result with an accuracy of 50.01% vs 50.02%.


Interestingly, from the point of view of the theory, the conversion rate observed over the entire data set is itself a random variable, since the “theoretical” conversion rate could only be computed on a sample of infinite size. This means that even all the observations in our database actually give an estimate of the conversion with its own accuracy, although it seems to us that these figures are absolutely exact. It also leads to the conclusion that if today's conversion rate differs from yesterday's, it does not necessarily mean that anything has changed; it may only mean that today's sample (all observations in the database) from the general population (all possible observations of this day, those that happened and those that did not) gave a slightly different result than yesterday's. In any case, for any honest product manager or analyst, this should be the baseline hypothesis.


Task statement


Suppose we have 1,000,000 records in the database of the form 0/1, which tell us whether a conversion occurred on an event. Then the conversion rate is simply the sum of the 1s divided by 1 million.


Question: if we take a sample of size N, by how much and with what probability will the conversion rate differ from the one calculated over the entire data set?
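
As a minimal sketch of this setup in R (the data here is synthetic, generated with rbinom, and all names are illustrative):

# Generate a synthetic "database" of 0/1 conversions and compare the full
# conversion rate with the rate computed on one random sample.
len  <- 1000000                    # total number of records
prob <- 0.1                        # true conversion probability used for generation

conv <- rbinom(len, 1, prob)       # 1,000,000 records of 0/1
cr_full <- sum(conv) / len         # conversion rate over the whole table

N <- 1000                          # sample size
idx <- sample(len, N)              # draw N random rows
cr_sample <- sum(conv[idx]) / N    # conversion rate on the sample

print(c(full = cr_full, sampled = cr_sample))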


Theoretical reasoning


The task is to calculate the confidence interval of the conversion coefficient for a sample of a given size for the binomial distribution.


From theory, the standard deviation for the binomial distribution is:

S = sqrt(p * (1 - p) / N)

where
p is the conversion rate,
N is the sample size,
S is the standard deviation.
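
A quick illustration of the formula in R (the numbers are only examples):

# S = sqrt(p * (1 - p) / N): standard deviation of the sampled conversion rate
binom_sd <- function(p, N) sqrt(p * (1 - p) / N)

binom_sd(p = 0.1, N = 1000)    # ~0.0095, i.e. roughly +/- 1 percentage point
binom_sd(p = 0.1, N = 10000)   # ~0.0030: 10x more data gives only ~3.2x less error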


I will not derive the confidence interval directly from the theory: there is rather involved and confusing math that ultimately relates the standard deviation to the final estimate of the confidence interval.


Let's develop an "intuition" about the standard deviation formula:


  1. The larger the sample size, the smaller the error. The error decreases as the inverse square root of the sample size, i.e. increasing the sample 4 times improves the accuracy only 2 times. This means that at some point increasing the sample size gives no particular advantage, and it also means that fairly high accuracy can be achieved with a rather small sample.


  2. The error also depends on the conversion rate. The relative error (i.e., the ratio of the error to the conversion rate itself) has the “nasty” tendency to grow as the conversion rate gets lower:


  3. As we can see, the relative error shoots up for low conversion rates (see the short sketch after this list). This means that if you sample rare events, you need large sample sizes, otherwise you will get a conversion estimate with a very large error.
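
A short sketch of that dependence (a fixed sample size of 1,000 is assumed here purely for illustration):

# Relative error S / p for a fixed sample size: it blows up as p gets small.
N <- 1000
p <- c(0.0001, 0.001, 0.01, 0.1, 0.5)
rel_error <- sqrt(p * (1 - p) / N) / p
data.frame(p = p, relative_error = round(rel_error, 3))
# p = 0.5    -> ~0.03  (3% relative error)
# p = 0.01   -> ~0.31
# p = 0.0001 -> ~3.2   (the error is several times larger than the rate itself)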

Modeling


We can step away from the theoretical solution entirely and solve the problem head-on. Thanks to the R language, this is now very easy to do. To answer the question of what error we get when sampling, we can simply draw a thousand samples and see what error we actually observe.


The approach is as follows:


  1. We take different conversion rates (from 0.01% to 50%).
  2. For each, we draw 1,000 samples of 10, 100, 1,000, 10,000, 50,000, 100,000, 250,000 and 500,000 elements each.
  3. We compute the conversion rate for each group of samples (1,000 coefficients per group).
  4. We build a histogram for each group of samples and determine the interval within which 60%, 80% and 90% of the observed conversion rates lie.

R code that generates the data:


library(data.table)

sample.size <- c(10, 100, 1000, 10000, 50000, 100000, 250000, 500000)
bootstrap <- 1000        # number of samples drawn per sample size
Error <- NULL
len <- 1000000           # size of the full data set

for (prob in c(0.0001, 0.001, 0.01, 0.1, 0.5)) {
  CRsub <- data.table(sample_size = 0, CR = 0)
  v1 <- seq(1, len)
  v2 <- rbinom(len, 1, prob)                 # 0/1 conversions with the given rate
  set <- data.table(index = v1, conv = v2)
  print(paste('probability is: ', prob))
  for (j in 1:length(sample.size)) {
    for (i in 1:bootstrap) {
      ss <- sample.size[j]
      subset <- set[round(runif(ss, min = 1, max = len), 0), ]   # random rows
      CRsample <- sum(subset$conv) / dim(subset)[1]              # CR on this sample
      CRsub <- rbind(CRsub, data.table(sample_size = ss, CR = CRsample))
    }
    print(paste('sample size is:', sample.size[j]))
    q <- quantile(CRsub[sample_size == ss, CR],
                  probs = c(0.05, 0.1, 0.2, 0.8, 0.9, 0.95))
    Error <- rbind(Error, cbind(prob, ss, t(q)))
  }
}

As a result, we get the following table (graphs come later, but the details are easier to see in the table).


Conversion rate | Sample size | 5% | 10% | 20% | 80% | 90% | 95%
0.0001 | 10      | 0 | 0 | 0 | 0 | 0 | 0
0.0001 | 100     | 0 | 0 | 0 | 0 | 0 | 0
0.0001 | 1,000   | 0 | 0 | 0 | 0 | 0 | 0.001
0.0001 | 10,000  | 0 | 0 | 0 | 0.0002 | 0.0002 | 0.0003
0.0001 | 50,000  | 0.00004 | 0.00004 | 0.00006 | 0.00014 | 0.00016 | 0.00018
0.0001 | 100,000 | 0.00005 | 0.00006 | 0.00007 | 0.00013 | 0.00014 | 0.00016
0.0001 | 250,000 | 0.000072 | 0.0000796 | 0.000088 | 0.00012 | 0.000128 | 0.000136
0.0001 | 500,000 | 0.00008 | 0.000084 | 0.000092 | 0.000114 | 0.000122 | 0.000128
0.001 | 10      | 0 | 0 | 0 | 0 | 0 | 0
0.001 | 100     | 0 | 0 | 0 | 0 | 0 | 0.01
0.001 | 1,000   | 0 | 0 | 0 | 0.002 | 0.002 | 0.003
0.001 | 10,000  | 0.0005 | 0.0006 | 0.0007 | 0.0013 | 0.0014 | 0.0016
0.001 | 50,000  | 0.0008 | 0.000858 | 0.00092 | 0.00116 | 0.00122 | 0.00126
0.001 | 100,000 | 0.00087 | 0.00091 | 0.00095 | 0.00112 | 0.00116 | 0.0012105
0.001 | 250,000 | 0.00092 | 0.000948 | 0.000972 | 0.001084 | 0.001116 | 0.0011362
0.001 | 500,000 | 0.000952 | 0.0009698 | 0.000988 | 0.001066 | 0.001086 | 0.0011041
0.01 | 10      | 0 | 0 | 0 | 0 | 0 | 0.1
0.01 | 100     | 0 | 0 | 0 | 0.02 | 0.02 | 0.03
0.01 | 1,000   | 0.006 | 0.006 | 0.008 | 0.013 | 0.014 | 0.015
0.01 | 10,000  | 0.0086 | 0.0089 | 0.0092 | 0.0109 | 0.0114 | 0.0118
0.01 | 50,000  | 0.0093 | 0.0095 | 0.0097 | 0.0104 | 0.0106 | 0.0108
0.01 | 100,000 | 0.0095 | 0.0096 | 0.0098 | 0.0103 | 0.0104 | 0.0106
0.01 | 250,000 | 0.0097 | 0.0098 | 0.0099 | 0.0102 | 0.0103 | 0.0104
0.01 | 500,000 | 0.0098 | 0.0099 | 0.0099 | 0.0102 | 0.0102 | 0.0103
0.1 | 10      | 0 | 0 | 0 | 0.2 | 0.2 | 0.3
0.1 | 100     | 0.05 | 0.06 | 0.07 | 0.13 | 0.14 | 0.15
0.1 | 1,000   | 0.086 | 0.0889 | 0.093 | 0.108 | 0.1121 | 0.117
0.1 | 10,000  | 0.0954 | 0.0963 | 0.0979 | 0.1028 | 0.1041 | 0.1055
0.1 | 50,000  | 0.098 | 0.0986 | 0.0992 | 0.1014 | 0.1019 | 0.1024
0.1 | 100,000 | 0.0987 | 0.099 | 0.0994 | 0.1011 | 0.1014 | 0.1018
0.1 | 250,000 | 0.0993 | 0.0995 | 0.0998 | 0.1008 | 0.1011 | 0.1013
0.1 | 500,000 | 0.0996 | 0.0998 | 0.1 | 0.1007 | 0.1009 | 0.101
0.5 | 10      | 0.2 | 0.3 | 0.4 | 0.6 | 0.7 | 0.8
0.5 | 100     | 0.42 | 0.44 | 0.46 | 0.54 | 0.56 | 0.58
0.5 | 1,000   | 0.473 | 0.478 | 0.486 | 0.513 | 0.52 | 0.525
0.5 | 10,000  | 0.4922 | 0.4939 | 0.4959 | 0.5044 | 0.5061 | 0.5078
0.5 | 50,000  | 0.4962 | 0.4968 | 0.4978 | 0.5018 | 0.5028 | 0.5036
0.5 | 100,000 | 0.4974 | 0.4979 | 0.4986 | 0.5014 | 0.5021 | 0.5027
0.5 | 250,000 | 0.4984 | 0.4987 | 0.4992 | 0.5008 | 0.5013 | 0.5017
0.5 | 500,000 | 0.4988 | 0.4991 | 0.4994 | 0.5006 | 0.5009 | 0.5011

Let us look at the cases of 10% conversion and low 0.01% conversion, since they clearly show all the features of working with sampling.


At 10% conversion, the picture looks pretty simple:



The points are the edges of the 5%–95% confidence interval, i.e., when we draw a sample, in 90% of cases we will get a CR within this interval. The vertical axis is the sample size (logarithmic scale), the horizontal axis is the value of the conversion rate. The vertical bar marks the "true" CR.


Here we see the same thing as in the theoretical model: accuracy grows with the sample size, and the estimate "converges" fairly quickly, so the sample gives a result close to the "true" one. With a sample of just 1,000 we get 8.6%–11.7%, which is enough for a number of tasks, and with 10,000 we already get 9.5%–10.55%.
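
As a cross-check (a sketch using the normal approximation, not part of the author's simulation), the theoretical formula gives a similar 5%–95% range for this case:

# Normal-approximation 5%-95% interval for p = 0.1, N = 1000 (illustrative only).
p <- 0.1
N <- 1000
S <- sqrt(p * (1 - p) / N)
c(lower = p - qnorm(0.95) * S, upper = p + qnorm(0.95) * S)
# ~0.084 .. 0.116, close to the simulated 0.086 .. 0.117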


Things are worse with rare events and this is consistent with the theory:



A conversion rate as low as 0.01% is a problem even with the statistics of 1 million observations, and with sampling the situation gets even worse. The error becomes simply gigantic. For samples of up to 10,000 the metric is not valid at all. For example, for the sample size of 10, my generator got zero conversions in all 1,000 draws, so there is only one point on the chart. At 100,000 we have a range from 0.005% to 0.016%, that is, with this sample size we can be off by almost half the coefficient.


It is also worth noting that when you observe a conversion of such a small magnitude on 1 million trials, you simply have a large natural error. It follows that conclusions about the dynamics of such rare events should be drawn from really large samples, otherwise you are just chasing ghosts, random fluctuations in the data.
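
To get a feel for the scale, here is a sketch of how many observations a given conversion rate needs for a target relative error (normal approximation; the 10% target is a hypothetical example):

# From S / p <= rel_err it follows that N >= (1 - p) / (p * rel_err^2).
required_n <- function(p, rel_err = 0.1) ceiling((1 - p) / (p * rel_err^2))

required_n(0.1)     # ~900 observations for a 10% conversion rate
required_n(0.01)    # ~9,900
required_n(0.0001)  # ~1,000,000 for a 0.01% conversion rate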


Findings:


  1. Sampling is a working method for obtaining estimates.
  2. The accuracy of a sample estimate increases as the sample size grows and decreases as the conversion rate decreases.
  3. The accuracy of the estimate can be modeled for your specific task, so you can choose the optimal sample size for yourself.
  4. It is important to remember that rare events sample badly.
  5. In general, rare events are hard to analyze: they require large data sets with no sampling at all.


Source: https://habr.com/ru/post/458890/

