This will come as a surprise to many people, but in some cases it is possible to detect bias in a selection process without knowing anything about the pool of applicants. Which is exciting because, among other things, it means a third party can use this technique to detect bias whether or not those doing the selecting cooperate.
You can use this technique whenever (a) you have at least a random sample of the applicants who were selected, (b) their subsequent performance is measured, and (c) the groups of applicants you're comparing have roughly equal distributions of ability.
How does it work?
Think about what it means to be biased. For a selection process to be biased against applicants of type x means it's harder for them to get through. [1] Which means applicants of type x have to be better than the rest to be selected. Which in turn means that the applicants of type x who do make it through will outperform the other successful applicants. And if the performance of all the successful applicants is measured, you'll know whether they do.
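To make this concrete, here is a minimal simulation of the reasoning above: a sketch assuming ability is normally distributed and identically distributed in both groups (condition (c)), with the only difference being a higher selection bar for group x. The thresholds and sample size are made-up numbers for illustration.

```python
import random
import statistics

random.seed(0)

def selected_mean(bar, n=100_000):
    """Mean ability of applicants who clear a given selection bar.

    Ability is drawn from the same standard normal distribution for
    every group, so any gap between the survivors' means comes purely
    from the difference in bars, i.e. from bias in the selection.
    """
    survivors = [a for a in (random.gauss(0, 1) for _ in range(n)) if a > bar]
    return statistics.mean(survivors)

# A biased process: group x must clear a higher bar than everyone else.
mean_x = selected_mean(bar=1.5)     # group x
mean_rest = selected_mean(bar=1.0)  # other applicants

print(f"selected group x, mean ability: {mean_x:.2f}")
print(f"selected others, mean ability:  {mean_rest:.2f}")
# The group x survivors come out measurably stronger, which is exactly
# the signature the technique looks for.
```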
Of course, the test you use to measure performance must be a valid one. And in particular, it must not itself be invalidated by the bias you're trying to measure. But in domains where performance can be measured, detecting bias is straightforward. Want to know whether the selection process was biased against a certain type of applicant? Check whether the applicants of that type who were selected outperform the others. This is not just a heuristic for detecting bias. It's what bias means.
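In practice, once you have performance figures for a random sample of selected applicants tagged by group, the check itself is short. The sketch below uses invented scores and a simple permutation test for significance; both the numbers and the choice of test are illustrative assumptions, not anything from a real study.

```python
import random
import statistics

# Hypothetical performance scores for selected applicants, by group.
performance = {
    "x":      [1.8, 2.4, 3.1, 2.7, 2.2],
    "others": [1.1, 1.9, 1.4, 2.0, 1.3, 1.6],
}

gap = statistics.mean(performance["x"]) - statistics.mean(performance["others"])

# Permutation test: how often does randomly reshuffling the group
# labels produce a gap at least this large by chance?
pooled = performance["x"] + performance["others"]
n_x = len(performance["x"])
trials, hits = 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    if statistics.mean(pooled[:n_x]) - statistics.mean(pooled[n_x:]) >= gap:
        hits += 1

print(f"group x outperforms by {gap:.2f} on average (p ~ {hits / trials:.3f})")
```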
For example, many people suspect that venture capital firms are biased against female founders. This would be easy to detect: among their portfolio companies, do startups with female founders outperform the rest? A couple of months ago, one VC firm (almost certainly unintentionally) published a study showing bias of exactly this kind: First Round Capital found that among its portfolio companies, startups with a female founder outperformed the rest by 63%. [2]
The reason I began by saying this technique would come as a surprise to many people is that we so rarely see analyses of this type. I'm sure it will come as a surprise to First Round that they performed one. I doubt anyone there realized they were publishing data not about trends among startups, but about the biases in their own selection process. If they had understood what those numbers really meant, they would have presented them differently.
I'm sure this technique will be used more and more in the future. The information needed for such studies is increasingly available. It used to be hard to find out whom organizations were selecting for this or that; they guarded that information jealously. Now such data is often public, and anyone interested can easily get it.
Notes

[1] This technique wouldn't work if the selection process looked for different things in different types of applicants: for example, if an employer hired men based on their ability but women based on their appearance.
[2] Paul Buchheit points out that First Round excluded Uber, which became the fund's most successful investment, from the study. And while it can make sense to exclude outliers from some types of studies, studies of returns from startup investing, which are all about the outliers, are not one of them.
Thanks to Sam Altman, Jessica Livingston, and Geoff Ralston for reading drafts of this.