According to R. D. Gatewood and H. S. Field, employee selection is the “process of collecting and evaluating information about an individual in order to extend an offer of employment.” It is a central part of an organisation's overall staffing process, which runs from human resource planning and recruitment through to retention. (Through human resource planning, the organisation projects its likely demand for personnel with particular knowledge, skills, and abilities (KSAs), and compares that demand to the anticipated availability of such personnel in the internal or external labour markets.)
Job Analysis: Pearn and Kandola (1993, p. 1) define “job analysis as a form of research and a systematic procedure for obtaining detailed and objective details about the job, task or role that will be performed in future or is currently being performed”. Job analysis, also known as occupational analysis, is the first step in the selection process: it examines a job in detail in order to determine and identify the duties and requirements of that particular job. It also provides the information needed to draw up the job description and the person specification.
http://www.hr-guide.com/data/G000.htm
People Resourcing, Stephen Taylor, 4th edition, p. 187
Job Description: A job description, also known as a job summary, explains a person's role and accountabilities. It gives a detailed description of the job and its expectations, indicates the training the job requires, and informs the determination of pay rates. It is a key tool in the recruitment and selection process.
http://www.businessballs.com/jobdescription.htm
http://www.businessbureau-uk.co.uk/personnel/recruitment/job_description.htm
People Resourcing, Stephen Taylor, 4th edition, p. 194
Person Specification: This describes the attributes a person needs in order to perform the job, and is therefore the basis for selecting a person who fits the job. A person specification also helps the organisation to review and redesign the job if required, and it sets out the skills the person should have to do the job well.
http://tutor2u.net/business/people/recruitment_personspecification.asp
People Resourcing, Stephen Taylor, 4th edition, p. 198
Recruitment: This is the process of attracting candidates for employment. It involves searching for candidates and identifying those who are suitable for the job, and moves through several steps: attracting, screening, assessing, shortlisting, interviewing, testing and final selection.
Selection: Selection is the stage at which the candidates most suitable for the job are chosen. Before making an appointment or offer, the organisation should carry out background checks on the candidate, covering qualifications, criminal record and previous experience.
Appointment: Appointment is the next stage, in which the selected candidates are offered the job and the tasks they are to perform. If the candidate needs training, the organisation should provide a training programme before placing him or her in the job.
(Reference: People Resourcing, Stephen Taylor, 4th edition, p. 187)
Selection:
Organisations are made of people, and in an age of increasingly complex technology organisations state that ‘employees are our greatest asset’ (Sue Newell and Viv Shackleton). Job requirements vary between organisations, and some people are better suited to particular roles and organisations than others. Therefore, as the CIPD notes, an effective recruitment and selection system is crucial to organisational performance, because it helps to select the right person, at the right time, in the right place.
There are several definitions of selection. F. W. Taylor (1911), one of the earliest management writers, stressed the importance of the ‘best man for the job’. He was of the opinion that people should be selected for their particular skills and abilities, which should be tested prior to the selection decision, rather than on the basis of who they knew or who was first in the queue. According to Hackett, selection is concerned more with “predicting which candidates will make the most appropriate contribution to the organisation now and in the future.” Gupta (2006) defines it as the process of choosing the most suitable persons from among all the candidates. For Roberts, G (2005), selection is the most important element in the organisation's management of people: where selection is faulty, the organisation wastes time and money, and appointing the wrong candidates leads to absenteeism and labour turnover.
Different selection methods:
Application Forms:
An application form collects information about the individual systematically and presents it in a consistent format, making it easier to assess the candidate's suitability for the job (CIPD). The use of application forms as a basis for employment decisions has risen: CIPD (2003) reports that 80% of the organisations surveyed use application forms. The form acts as a useful preliminary to interviews and decisions, and it also makes sorting and shortlisting of applications easier. According to Huczynski and Buchanan (2007), the application form provides background information but is also impersonal. Interviewers often use it as a basis for the interview, taking information from the form and building on it during the discussion.
The predictive validity of the application form is 0.2, which is fairly low (Roberts, 2005). According to Roberts, the low rating is more a reflection of the poor use of application forms than of the method itself. He goes on to say that if it is used to screen acquired competencies, the application form can be an effective technique, especially when combined with a clear rating system.
Interviews:
The individual interview is the most familiar and most commonly used method of selection. It allows a face-to-face discussion and provides the best opportunity for the organisation to establish rapport with the candidate (Armstrong, 2003). According to Pilbeam and Corbridge (2006), the interview is more than a selection method: it is also a forum in which information about the organisation and the job is given to the candidate. There is, however, more scope for bias when only one interviewer is used. Interviews are commonly classified as structured or unstructured; unstructured interviews have a predictive validity of about 0.2, whereas structured interviews reach about 0.4. Recent studies have also shown that behavioural interviews, which are based on past experience, are more effective predictors of success than situational interviews, which are based on hypothetical future scenarios (Roberts, 2005). Although research suggests that traditional interviews are poor predictors of performance, one of the reasons they remain popular is that they are cost effective (Taylor, 2002).
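To give a feel for what these validity coefficients mean in practice, the short simulation below is a rough, hypothetical sketch: it is not taken from any of the sources cited here, and all figures, function names and parameters are invented. It generates candidates whose interview scores correlate with later job performance at a chosen validity, "hires" the top fifth of scorers, and reports how many of those hired turn out to be above-average performers. With zero validity roughly half the hires would be above average by chance alone; the proportion rises as the validity coefficient increases.

    import numpy as np

    def selection_success(validity, n_candidates=100_000, top_fraction=0.2, seed=0):
        """Share of hires who are above-average performers when candidates are
        selected on a score whose correlation with performance equals `validity`."""
        rng = np.random.default_rng(seed)
        # Interview score and later job performance drawn from a bivariate normal
        # distribution whose correlation is the assumed predictive validity.
        cov = [[1.0, validity], [validity, 1.0]]
        score, performance = rng.multivariate_normal([0.0, 0.0], cov, n_candidates).T
        cutoff = np.quantile(score, 1.0 - top_fraction)   # hire the top scorers only
        hired_performance = performance[score >= cutoff]
        return (hired_performance > 0.0).mean()           # > 0 means above average

    for validity in (0.0, 0.2, 0.4):
        print(f"predictive validity {validity:.1f}: "
              f"{selection_success(validity):.0%} of hires above average")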
Personality Tests:
Personality tests are used mainly for management, professional and graduate jobs. CIPD (2005) states that personality tests ‘can be useful as an added dimension to decision-making but only when practitioners generally use those instruments that have been rigorously developed and for which thoroughly researched validation evidence and norms are available.’ Taylor (2002) states that, when used carefully and professionally, personality tests have a useful role to play in the selection process. According to a report by IRS (1997d: 13), personality testing remained a stable selection procedure in the 1990s, with approximately three-fifths of the organisations surveyed stating that it was used as part of the selection process for certain positions.
Biodata:
Biodata provides a highly structured method of sifting applications. It consists of demographic details such as age, sex and family circumstances, education and professional qualifications, and previous employment history.
Assessment Centre:
Assessment centres are used by organisations for various purposes, such as selecting candidates for jobs, making promotion decisions and judging the development potential of individuals over the long term (Fletcher, 1991). In the UK, many organisations use this method of selection, especially for the recruitment of graduates (Keenan, 1997). According to Robertson (1996), assessment centres not only ensure that organisations appoint, develop and promote only people who are effective in their jobs; they also benefit the individual in terms of greater job satisfaction, good career prospects and enhanced earnings. For the organisation, this tool allows the effectiveness of the job holder to be reflected in organisational effectiveness, and therefore in considerable financial gains.
PSYCHOMETRIC TESTS:
What are Psychometric Tests?
The British Psychological Society definition of a psychometric test:
‘a psychological test is any procedure on the basis of which inferences are made concerning a person's capacity, propensity or liability to act, react, experience, or to structure or order thought or behaviour in particular ways’.
Who uses psychometric tests?
Large, medium, and an increasing number of small firms use psychometric tests. Over 70% of larger companies currently use psychometric tests to gather vital information from potential and current employees. More and more companies are using psychometric tests for:
graduate recruitment
filtering out candidates when there are large numbers of applicants
They are also used to assess existing employees for:
training and staff development needs
promotion
What do psychometric tests measure?
Psychometric tests may measure aptitude, personality or interests:
Aptitude Tests – these measure how people differ in their ability to perform or carry out different tasks (these are the type you are most likely to meet at the first stage of a selection process).
Interest Tests – these measure how people vary in their motivation, in the direction and strength of their interests, and in their values and opinions (these are less likely to be used on new graduates, but sometimes are).
Personality Tests – these measure how people differ in their style or manner of doing things, and in the way they interact with their environment and other people.
Whereas aptitude tests measure your maximum performance capacity, the other tests examine typical or preferred behaviour.
Why are Tests Used?
If psychometric tests are to be useful as indicators of shifts in the demand for skills, then it is important that organisations' use of tests is linked to their wish to measure the skills of prospective employees. If tests are in use for other reasons, then this would undermine their usefulness as indicators of skill demands. Do organisations in the UK make use of tests in order to measure workforce skills, or have they adopted tests for some other reason, or set of reasons? Here we look at the rather limited evidence available on this question. There are a few surveys which have asked organisations why they make use of tests, and there is a more speculative literature dealing with change in test use over time. We take each of these in turn.
Some past results suggest that the perceived objectivity of tests, their predictive power and their ability to filter out unsuitable candidates were important reasons for test use by companies and local authorities. Quite similar results were obtained in the IRS (1997) survey. The data show that companies believe the tests are valid measures of something useful, although they give us no insight into what exactly the companies are, or think they are, measuring through the tests. Nor do they explain why there have been such sizeable changes in test use since the 1980s. In what follows, we divide the current literature on changes in test use into work which concentrates on changes in the labour market and work which focuses on other possible reasons for changes in the use of tests, or indeed in recruitment and selection practices more generally.
Why use psychometrics in an employment setting?
The main advantages of using psychometric tests are:
Objectivity – they dramatically reduce bias and personal perspective.
Clarity – they provide a robust framework and structure.
Equality and fairness for all individuals (tests are standardised so that all individuals receive the same treatment).
Increase the likelihood of being able to predict future job performance (they have a high level of ‘predictive validity’).
The identification of training needs.
Encourage employers to do thorough job analysis in order to identify appropriate skills and abilities. This helps to ensure that candidates for a position are assessed only on skills relevant to the job.
What are psychometric tests used for?
Some uses of psychometric tests are:
Selection of candidates to jobs
Personal development/identification of training needs/staff development
Careers guidance
Building and developing teams
Psychometric tests have been used since the early part of the 20th century and were originally developed for use in educational psychology. These days, outside of education, you are most likely to encounter psychometric testing as part of the recruitment or selection process. Tests of this sort are devised by occupational psychologists and their aim is to provide employers with a reliable method of selecting the most suitable job applicants or candidates for promotion.
Psychometric tests aim to measure attributes like intelligence, aptitude and personality. They provide a potential employer with an insight into how well you work with other people, how well you handle stress, and whether you will be able to cope with the intellectual demands of the job.
Most of the established psychometric tests used in recruitment and selection make no attempt to analyze your emotional or psychological stability and should not be confused with tests used in clinical psychology. However, in recent years there has been rapid growth (particularly in the US) of tests that claim to measure your integrity or honesty and your predisposition to anger. These tests have attracted a lot of controversy, because of questions about their validity, but their popularity with employers has continued to increase.
Psychometric testing is now used by over 80% of the Fortune 500 companies in the USA and by over 75% of the Times Top 100 companies in the UK. Information technology companies, financial institutions, management consultancies, local authorities, the civil service, police forces, fire services and the armed forces all make extensive use of psychometric testing.
As an indicator of your personality, preferences and abilities, psychometric tests can help prospective employers to find the best match of individual to occupation and working environment. As a recruitment and selection tool, these tests can be applied in a straightforward way at the early stages of selection to screen out candidates who are likely to be unsuitable for the job. They can also provide management with guidance on career progression for existing employees.
Because of their importance in making personnel decisions it is vital that the tests themselves are known to produce accurate results based on standardized methods and statistical principles.
A psychometric test must be:
Objective: The score must not be affected by the testers' beliefs or values
Standardized: It must be administered under controlled conditions
Reliable: It must minimize and quantify any intrinsic errors
Predictive: It must make an accurate prediction of performance
Non Discriminatory: It must not disadvantage any group on the basis of gender, culture, ethnicity, etc.
VALIDITY
Validity refers to the quality of a measure: a measure is valid when it actually assesses the construct it is intended to assess. In the selection context, validity refers to the appropriateness, meaningfulness, and usefulness of the inferences made about applicants during the selection process. It is concerned with whether applicants will actually perform the job as well as expected on the basis of the inferences made during selection. The closer the applicants' actual job performance matches their expected performance, the greater the validity of the selection process.
(http://www.referenceforbusiness.com/management/Em-Exp/Employee-Screening-and-Selection.html)
Face Validity
Face validity is concerned with how a measure or procedure appears. Does it seem like a reasonable way to gain the information the researchers are attempting to obtain? Does it seem well designed? Does it seem as though it will work reliably? Unlike content validity, face validity does not depend on established theories for support (Fink, 1995).
Criterion Related Validity
Criterion related validity, also referred to as instrumental validity, is used to demonstrate the accuracy of a measure or procedure by comparing it with another measure or procedure which has been demonstrated to be valid.
For example, imagine a hands-on driving test has been shown to be an accurate test of driving skills. A written driving test can then be validated using a criterion-related strategy: the scores on the written test are compared with the scores from the hands-on driving test, which serves as the criterion.
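A rough sketch of how such a comparison could be quantified is shown below; the ten pairs of scores are invented purely for illustration. The criterion-related evidence for the written test is simply its correlation with the already-validated hands-on test: the closer the coefficient is to 1, the stronger the evidence.

    import numpy as np

    # Hypothetical scores for ten learner drivers who took both tests.
    written  = np.array([62, 71, 55, 80, 90, 48, 77, 66, 85, 59])  # new written test
    hands_on = np.array([60, 75, 50, 82, 88, 52, 70, 68, 91, 57])  # validated hands-on test

    # Pearson correlation between the new measure and the established criterion.
    r = np.corrcoef(written, hands_on)[0, 1]
    print(f"criterion-related validity coefficient: r = {r:.2f}")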
Construct Validity
Construct validity seeks agreement between a theoretical concept and a specific measuring device or procedure. For example, a researcher inventing a new IQ test might spend a great deal of time attempting to “define” intelligence in order to reach an acceptable level of construct validity.
Construct validity can be broken down into two sub-categories: convergent validity and discriminant validity. Convergent validity is the actual general agreement among ratings, gathered independently of one another, where the measures should be theoretically related. Discriminant validity is the lack of a relationship among measures which theoretically should not be related.
To understand whether a piece of research has construct validity, three steps should be followed. First, the theoretical relationships must be specified. Second, the empirical relationships between the measures of the concepts must be examined. Third, the empirical evidence must be interpreted in terms of how it clarifies the construct validity of the particular measure being tested (Carmines & Zeller, p. 23).
Content Validity
Content Validity is based on the extent to which a measurement reflects the specific intended domain of content (Carmines & Zeller, 1991, p. 20).
Content validity is illustrated using the following examples: Researchers aim to study mathematical learning and create a survey to test for mathematical skill. If these researchers only tested for multiplication and then drew conclusions from that survey, their study would not show content validity because it excludes other mathematical functions. Although the establishment of content validity for placement-type exams seems relatively straight-forward, the process becomes more complex as it moves into the more abstract domain of socio-cultural studies. For example, a researcher needing to measure an attitude like self-esteem must decide what constitutes a relevant domain of content for that attitude. For socio-cultural studies, content validity forces the researchers to define the very domains they are attempting to study.
RELIABILITY
Reliability is the extent to which an experiment, test, or any measuring procedure yields the same result on repeated trials. Reliability is concerned with the accuracy of the actual measuring instrument or procedure.
(http://writing.colostate.edu/guides/research/relval/pop2a.cfm)
Equivalency Reliability
Equivalency reliability is the extent to which two items measure identical concepts at an identical level of difficulty. Equivalency reliability is determined by relating two sets of test scores to one another to highlight the degree of relationship or association. In quantitative studies and particularly in experimental studies, a correlation coefficient, statistically referred to as r, is used to show the strength of the correlation between a dependent variable (the subject under study), and one or more independent variables, which are manipulated to determine effects on the dependent variable. An important consideration is that equivalency reliability is concerned with correlational, not causal, relationships.
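For reference, the coefficient r referred to here is usually Pearson's product-moment correlation, which for n paired observations of the two measures can be written as:

    r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}
             {\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\;\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}

Values close to +1 or -1 indicate a strong association between the two sets of scores, while values near 0 indicate little or no linear relationship.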
For example, a researcher studying university English students happened to notice that when some students were studying for finals, their holiday shopping began. Intrigued by this, the researcher attempted to observe how often, or to what degree, these two behaviors co-occurred throughout the academic year. The researcher used the results of the observations to assess the correlation between studying throughout the academic year and shopping for gifts. The researcher concluded there was poor equivalency reliability between the two actions. In other words, studying was not a reliable predictor of shopping for gifts.
Stability Reliability
Stability reliability (sometimes called test-retest reliability) is the agreement of measuring instruments over time. To determine stability, a measure or test is repeated on the same subjects at a future date. The results are compared and correlated with the initial test to give a measure of stability.
An example of stability reliability would be the method of maintaining weights used by the U.S. Bureau of Standards. Platinum objects of fixed weight (one kilogram, one pound, etc.) are kept locked away. Once a year they are taken out and weighed, allowing scales to be reset so they are “weighing” accurately. Keeping track of how much the scales are off from year to year establishes a stability reliability for these instruments. In this instance, the platinum weights themselves are assumed to have a perfectly fixed stability reliability.
Internal Consistency
Internal consistency is the extent to which tests or procedures assess the same characteristic, skill or quality. It is a measure of the precision between the observers or of the measuring instruments used in a study. This type of reliability often helps researchers interpret data and predict the value of scores and the limits of the relationship among variables.
For example, a researcher designs a questionnaire to find out about college students’ dissatisfaction with a particular textbook. Analyzing the internal consistency of the survey items dealing with dissatisfaction will reveal the extent to which items on the questionnaire focus on the notion of dissatisfaction.
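One common statistic for this is Cronbach's alpha, sketched below with a small invented data set in the spirit of the questionnaire example above (the item scores and the 1-5 scale are hypothetical). By convention, values of roughly 0.7 or above are usually read as acceptable internal consistency.

    import numpy as np

    def cronbach_alpha(item_scores):
        """Cronbach's alpha for an (n_respondents x n_items) array of item scores."""
        item_scores = np.asarray(item_scores, dtype=float)
        k = item_scores.shape[1]                          # number of items
        item_var = item_scores.var(axis=0, ddof=1)        # variance of each item
        total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
        return (k / (k - 1)) * (1.0 - item_var.sum() / total_var)

    # Hypothetical data: six students rate four dissatisfaction items on a 1-5 scale.
    responses = np.array([
        [4, 5, 4, 5],
        [2, 2, 3, 2],
        [5, 4, 5, 4],
        [3, 3, 2, 3],
        [1, 2, 1, 2],
        [4, 4, 5, 4],
    ])
    print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")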
Interrater Reliability
Interrater reliability is the extent to which two or more individuals (coders or raters) agree. Interrater reliability addresses the consistency of the implementation of a rating system.
A test of interrater reliability would be the following scenario: two or more researchers are observing a high school classroom. The class is discussing a movie that they have just viewed as a group. The researchers have a sliding rating scale (1 being most positive, 5 being most negative) with which they are rating the students' oral responses. Interrater reliability assesses the consistency of how the rating system is implemented. For example, if one researcher gives a “1” to a student response while another researcher gives a “5”, the interrater reliability would obviously be inconsistent. Interrater reliability depends on the ability of two or more individuals to be consistent. Training, education and monitoring can enhance interrater reliability.
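Agreement of this kind can be quantified with Cohen's kappa, which corrects raw agreement for the agreement two raters would reach by chance (1 indicates perfect agreement, 0 chance-level agreement). The sketch below uses invented ratings on the 1-to-5 scale described above.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters assigning categorical ratings to the same items."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # raw agreement
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        categories = set(rater_a) | set(rater_b)
        # Agreement expected by chance, from each rater's own rating frequencies.
        expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
        return (observed - expected) / (1.0 - expected)

    # Hypothetical 1-5 ratings of ten student responses by two observers.
    rater_1 = [1, 2, 2, 3, 5, 4, 1, 2, 3, 5]
    rater_2 = [1, 2, 3, 3, 5, 4, 2, 2, 3, 4]
    print(f"Cohen's kappa = {cohens_kappa(rater_1, rater_2):.2f}")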
There can be validity without reliability if reliability is defined as consistency among independent measures. Reliability is an aspect of construct validity. As assessment becomes less standardized, distinctions between reliability and validity blur. (Moss, 1994)
The two most important and fundamental characteristics of any measurement procedure are reliability and validity (Michael J. Miller, Ph.D.; http://www.michaeljmillerphd.com/res500_lecturenotes/Reliability_and_Validity.pdf).
From the above explanations it is clear that validity and reliability are both important aspects of the selection process, and that they depend on one another. It is also possible to have reliability without validity: a measure may give consistent results and yet still fail to measure what it is intended to measure.
VALIDITY IN SELECTION METHODS:
The attainment of validity depends heavily on the appropriateness of the particular selection technique used. Validity concerns whether a test truly measures what it claims to measure, and some form of checking is needed to establish whether the selection process is valid. A firm should use a selection method that is reliable and accurate in measuring the qualifications the job requires. The reliability of a measure refers to its consistency: reliable evaluations are consistent across both people and time. Reliability is maximised when two people evaluating the same candidate provide the same ratings, and when the ratings of a candidate taken at two different times are the same. When selection scores are unreliable, their validity is diminished. Some of the factors affecting the reliability of selection measures are:
Emotional and physical state of the candidate: For example, if the candidate is tense and therefore unable to perform well in the interview, the score will not reflect his or her true ability.
Lack of rapport with the administrator of the measure: If the candidate and the administrator do not communicate well, the results will be less reliable.
Inadequate knowledge of how to respond to a measure: For example, the candidate may be unable to read the test or may not understand what the job or role requires.
Individual differences among respondents: Every individual is different, so applying exactly the same technique to every candidate will not necessarily yield comparable results.
Question difficulty: If the questions are unclear or pitched at the wrong level, the reliability of the process suffers.
Length of measure: A measure that is too long or too short can compromise the reliability and validity of the selection process.
The Validity of Tests
While the immediate causes of test use may include a variety of factors internal and external to the company, the adoption of formal tests for selection rests on the belief that they provide reliable and valid information about a variety of relevant characteristics. Do the tests predict job performance, i.e. do those who score well in psychometric tests go on to do well in the job? There is compelling evidence from the research literature that cognitive ability tests are successful in predicting performance. There is a long history of investigation of this topic amongst psychologists, and a great deal of evidence has accumulated on the predictive power of measures of general intelligence, for example in Ghiselli's (1966) well-known study. However, until about twenty-five or thirty years ago there was an apparent tendency for different measures to vary enormously in their predictive power, implying that the validity of a given measure was highly sector and indeed firm specific. This perception has now changed, due largely to the work of Schmidt and Hunter (1998), whose meta-analytic studies demonstrated the underlying consistency in this body of work. Schmidt and Hunter showed that the apparent variability was in fact largely the result of sampling error (deriving from small sample sizes) along with a number of other measurement artefacts. Cognitive tests were confirmed as good predictors of performance across a very broad range of jobs.

The predictive validity of personality testing is more controversial. There has been a good deal of debate about whether personality measures are valid predictors, with some commentators suggesting that reported correlations in this field could be of little value, or even entirely spurious (Blinkhorn and Johnson, 1990). Meta-analysis has given some support to the use of personality tests in recruitment and selection: Tett et al. (1991) conducted a meta-analytic review of 494 studies in this field and found significant correlations between personality scales and measures of job performance. Unlike the case of cognitive ability measures, however, there is no unifying ‘g’ factor for personality measures, so careful attention has to be paid to the characteristics relevant to each type of job. Indeed, Tett et al. found that studies which were ‘confirmatory’, i.e. had clear prior hypotheses about the traits likely to be relevant for particular occupations, obtained much higher validities than studies which were ‘exploratory’ or data-driven. Studies that made use of job analysis, so as to be clear about which characteristics were required for the job, also obtained higher validities than those which made no use of job analysis.
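Schmidt and Hunter's point about sampling error can be illustrated with a short simulation; this is only a sketch under assumed values, not a reproduction of their method. Even if the true validity of a test were a constant 0.4 in every firm, small individual validation studies would report coefficients scattered widely around that value, which looks exactly like the firm-to-firm variability reported in the earlier literature.

    import numpy as np

    rng = np.random.default_rng(42)
    true_validity = 0.4      # assumed 'true' correlation between test score and performance
    study_size = 40          # a typical small validation sample
    n_studies = 200          # number of independent small studies simulated

    cov = [[1.0, true_validity], [true_validity, 1.0]]
    observed = []
    for _ in range(n_studies):
        score, performance = rng.multivariate_normal([0.0, 0.0], cov, study_size).T
        observed.append(np.corrcoef(score, performance)[0, 1])

    observed = np.array(observed)
    print(f"observed validities range from {observed.min():.2f} to {observed.max():.2f} "
          f"around a true value of {true_validity}")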
Growth in test use seems to have taken off at some point in the 1980s. By the late 1980s and early 1990s, researchers were beginning to discern substantial shifts in companies' selection techniques. Shackleton and Newell (1991), comparing their survey results with those of Mabey five years previously, reported what they felt was an encouraging trend towards higher proportions of companies making use of more reliable and valid methods of selection. Since then, surveys have continued to suggest that more organisations have adopted psychological testing. In the main, it is large organisations which have chosen to use tests. Psychometric testing is not unknown in smaller organisations, but they tend to be deterred by the costs of the tests and the low numbers of vacancies which they have. There is now a wide range of tests on the market, and new products are being introduced all the time. These may be completely new products, or updates of well-established tests. Some tests measure broad skills, while others are more narrowly focused on particular occupations, whether managerial, technical, or manual. There are tests of cognitive ability, literacy and numeracy skills, as well as personality questionnaires designed to assess softer, people-oriented competencies.

The costs of tests are quite substantial, which suggests that employers who use them are likely to be drawing on them for a clear purpose, rather than just responding to some passing management fad. The rather limited survey evidence available on why tests are used does show that prediction of job performance is an important factor, as well as the perceived objectivity of the tests.
Because most surveys are relatively small-scale, and only make very broad distin