
Digital innovation and advances in artificial intelligence (AI) have produced many tools for finding and screening potential candidates. Many of these technologies promise to help organizations find the right person for a given role, and to filter out the wrong people, faster than ever before.
These tools give organizations unprecedented opportunities to make data-driven decisions about human capital. They also have the potential to democratize feedback: millions of candidates could receive an assessment of their strengths and areas for development, along with guidance toward a suitable career and organization. In particular, we are seeing rapid growth (and corresponding investment) in game-based assessments, bots that process social-media posts, linguistic analysis of candidates' writing, and video interviews that use algorithms to analyze speech content, tone of voice, emotional state, nonverbal behavior, and temperament.
By disrupting the foundations of hiring and candidate assessment, these tools raise open questions about their accuracy and about their privacy, ethical, and legal implications. This is especially apparent in comparison with time-tested psychometric instruments such as the NEO-PI-R, the Wonderlic test, Raven's Standard Progressive Matrices, or the Hogan Personality Inventory. All of these were developed scientifically and carefully validated in relevant workplaces, establishing a reliable correspondence between candidates' scores and their on-the-job performance (with the evidence published in credible, independent scientific journals). Recently, the US Senate has even raised concerns about whether new technologies (facial analysis in particular) might undermine equal treatment of candidates.
In this article, we focus on the potential consequences of new technologies for candidates' privacy, as well as on the protections candidates enjoy under the Americans with Disabilities Act and other federal and state laws. Employers understand that they may not ask candidates about marital status, political views, pregnancy, sexual orientation, physical or mental illness, alcohol or drug problems, or lack of sleep. New technologies, however, may be able to infer these factors indirectly, without the candidate's proper consent.
Before delving into the ambiguities of this brave new world of candidate assessment, it is worth looking at the past. Psychometric assessments have existed for more than 100 years; they came into wide use after the Army Alpha test, which the US military used to sort recruits into categories and estimate their likelihood of success in various roles. Traditionally, psychometrics falls into three broad categories: cognitive ability, or intelligence; personality, or temperament; and mental health, or clinical diagnoses.
After the Americans with Disabilities Act (ADA) was passed in 1990, employers were generally forbidden to inquire about people's physical disabilities, mental health, or clinical diagnoses as part of pre-employment screening, and companies that violated the law were sued and sanctioned. In essence, physical or mental disability is treated as "private" information that an employer may not probe during an interview, just as employers may not ask about other private matters or factor demographic information into hiring decisions.
Tests of cognitive ability and intelligence have been recognized as reliable predictors of job performance across a wide range of professions. However, such assessments can be discriminatory if they adversely impact particular groups of people defined, for example, by gender, race, age, or national origin. If an employer uses an assessment found to have an adverse impact, established by comparing selection rates across such groups, the employer must show that the assessment is job-related and predicts success in the specific role.
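To make this concrete, here is a minimal Python sketch of the EEOC's "four-fifths rule," one common heuristic for flagging adverse impact: a group whose selection rate falls below 80% of the highest group's rate warrants scrutiny. The group labels and counts below are hypothetical.

```python
# Minimal sketch of an adverse-impact check using the "four-fifths rule".
# Group names and counts are hypothetical illustrations.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who passed the assessment."""
    return selected / applicants

def adverse_impact_ratios(groups: dict) -> dict:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selection_rate(sel, app) for g, (sel, app) in groups.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical outcomes: (number selected, number of applicants) per group.
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}

for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```

A ratio below 0.8 does not by itself prove discrimination; it simply shifts the burden to the employer to demonstrate that the assessment is job-related.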
Personality assessments are less likely to expose employers to discrimination claims, since personality traits show almost no correlation with demographic characteristics. It is also worth noting that the relationship between personality and job performance depends on context (that is, on the type of work).
Unfortunately, far less evidence has accumulated about the new generation of screening tools increasingly used in pre-hire assessments. Many of them emerged as technological innovations rather than from scientifically derived methods or research programs. As a result, it is not always clear what exactly they measure, whether their underlying hypotheses are valid, or why they should be expected to predict a candidate's performance on the job. For example, the physical properties of speech and of the human voice, which have long been associated with personality traits, have been linked to differences in job performance. If a tool favors speech features such as modulation, tone, or a "friendly" voice that are not characteristic of any particular group of people, this poses no legal problem. But such tools may not have been scientifically validated, and therefore are not monitored for potential discrimination, meaning an employer could be held liable for blindly following their recommendations. Moreover, there is as yet no convincing hypothesis or evidence about whether it is ethical to screen people out based on their voice, a property determined by physiology that cannot be changed.
Similarly, social-media activity, such as Facebook or Twitter use, reflects a person's intelligence and personality traits, including their dark side. But is it ethical to mine this data for hiring purposes when users engage with these platforms for other reasons and have not consented to having conclusions drawn from analysis of their public posts?
In the hiring context, new technologies raise many new ethical and legal questions about privacy that we believe deserve public discussion, namely:
1) How might companies be tempted to intrude on candidates' privacy with respect to personal characteristics? As technology advances, big data and AI will identify proxies for personal characteristics with ever-greater accuracy. For example, Facebook likes can already be used to determine sexual orientation and race with considerable precision, and political preferences and religious beliefs are also easy to infer. Might companies be tempted to use such tools to screen candidates, reasoning that because decisions are not based on these characteristics directly, they are legal? An employer may not violate any law merely by inferring such information about a candidate, but it incurs legal risk if it bases hiring decisions on a candidate's membership in particular groups, by place of birth, race, or native language, or on private information it has no right to consider, such as physical illness or mental health conditions. It is not yet clear how courts will handle cases in which an employer relied on tools that use such indirect indicators; what is clear is that acting on certain protected or private characteristics is illegal, regardless of how they were identified.
This may also apply to facial-recognition software, as recent studies predict that facial-recognition AI will soon be able to accurately infer candidates' sexual and political orientation, as well as their "internal states," including mood and emotion. How might this change the application of disability law? Moreover, the Employee Polygraph Protection Act generally prohibits employers from using lie-detector tests in hiring, and the Genetic Information Nondiscrimination Act prohibits them from using genetic information in hiring decisions. But what if essentially the same information about truthfulness, deception, and genetic traits can be gathered with the tools described above?
2) How might companies be tempted to intrude on candidates' privacy with respect to their lifestyle and activities? Employers now have access to information such as a candidate's check-ins at a church every Saturday morning, the review they wrote of a dementia-care facility for an elderly parent, or the record of a third divorce filing. All of this, and much more, is easy to find in the digital age. Big data tracks us wherever we go online, collecting information that can be processed by tools we cannot yet imagine, tools that could, in principle, tell an employer whether we are suited to certain roles. And big data will only get bigger; according to experts, 90% of all the data in the world was created in just the last two years. As data expands, so does the potential for its unfair use, leading to discrimination, whether intentional or accidental.
Unlike the European Union, which has harmonized its approach to privacy under the General Data Protection Regulation (GDPR), the United States relies on a patchwork approach, largely made up of state laws. States began enacting social-media laws in 2012 to prohibit employers from demanding candidates' passwords to personal accounts as a condition of hiring, and more than twenty states have passed laws of this kind. However, there has been far less activity on general privacy protection around new technologies in the workplace. Notably, California has passed a law that could limit an employer's ability to use data about a candidate or employee. In general, state and federal courts have yet to adopt a unified framework for analyzing the protection of employee privacy against new technologies.

The bottom line is that, so far, the fate of employee privacy in the era of big data remains uncertain. This puts employers in a difficult position that demands caution. Emerging technologies can be extremely useful, but they give employers information that was previously considered private. Is it legal to use it in hiring? Is it ethical to examine it without the candidate's consent?
3) How might companies be tempted to intrude on candidates' privacy with respect to disability? The ADA covers both mental and physical conditions, and defines a person as disabled if a condition substantially limits a major life activity, if such a limitation is recorded in the person's history, or if others perceive the person as having such a limitation. About ten years ago, the US Equal Employment Opportunity Commission (EEOC) issued guidance indicating that the ever-expanding list of mental disorders described in the psychiatric literature should be treated as mental impairments, making it easier for people to fall within the ADA's scope. As a result, people who have significant difficulty interacting with others, concentrating, or regulating their behavior in social settings may fall into the category of people the law protects.
Beyond raising new questions about disability, technology presents new dilemmas concerning differences between people, demographic or otherwise. There are already documented cases in which such systems exhibited learned biases, especially with respect to race and gender. For example, Amazon developed an automated recruiting tool for screening résumés, and abandoned it upon realizing that it was not gender-neutral. To reduce such bias, developers balance the data used to train AI models so that all groups are represented appropriately. The more information the technology has to learn from, the better it can control for potential bias.
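As an illustration of what such balancing can look like in practice, here is a minimal Python sketch (not any vendor's actual pipeline) of one common approach: oversampling under-represented groups in a tabular training set so that each group contributes equally many examples. The column names and data are hypothetical.

```python
# Minimal sketch of balancing training data by group via oversampling.
# Column names and the toy dataset are hypothetical.
import pandas as pd

def balance_by_group(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Resample each group (with replacement) up to the size of the largest group."""
    target = df[group_col].value_counts().max()
    parts = [
        grp.sample(n=target, replace=True, random_state=seed)
        for _, grp in df.groupby(group_col)
    ]
    return pd.concat(parts).reset_index(drop=True)

# Hypothetical training set: 80 examples from one group, 20 from another.
data = pd.DataFrame({
    "group": ["a"] * 80 + ["b"] * 20,
    "score": range(100),
})
balanced = balance_by_group(data, "group")
print(balanced["group"].value_counts())  # both groups now contribute 80 rows
```

Oversampling is only one option; reweighting examples or collecting more representative data are alternatives, and balanced inputs alone do not guarantee unbiased outputs.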
In conclusion, technology can already cross the boundary between public and private attributes, traits, and states, and there is every reason to believe the line will blur further. Using AI, big data, social media, and machine learning, employers will gain ever-greater access to candidates' private lives, personal characteristics, challenges, and psychological states. Many of the new privacy questions we have raised above have no easy answers, but we believe all of them deserve public discussion.