A cover article, ‘The Search for Talent’, in The Economist calls unsuccessful hiring the biggest problem facing businesses today. According to some estimates, 50% of hiring managers regret their recruitment decisions within six months of working with a new employee (Economist 2006). Hiring mistakes are clearly costly: they can set the organization back in its development, jeopardize its culture, and waste its time and money. This problem calls for closer scrutiny, especially of the assessment techniques used on the interview day. The present report discusses one such group assessment experiment, in which an assessment centre determined candidates’ qualifications through a group exercise and graded them on their communication and leadership abilities. The report also examines the relative strengths and weaknesses of the assessment centre by drawing on the available literature on the subject.
Our assessment centre consisted of four members, who were to evaluate the performance of four candidates (two men and two women) for a fictitious managerial position. As a selection procedure, we offered the candidates a choice of three group exercises, which we hoped would allow us to identify the managerial skills we were looking for. The candidates chose to create a logo for the Sam Adams restaurant in Oxford, using the preliminary information we provided. They had only 20 minutes to complete the task, and they could use only coins, props, pens, and notebooks in drafting the logo. We assessed them against our criteria of communication and leadership skills, which together make up a good manager. For example, we assessed the candidates’ presentation abilities and their aptitude for customer service excellence and for coordination among the various departments of a company. Upon completion of the exercise, we prepared and delivered feedback to the candidates in view of the competencies above. All of the candidates received positive feedback with minor remarks, e.g., “Good communication skills, but you need to be a bit softer while speaking”, “Good leadership qualities, but you need to present more ideas”, and “Good ideas, but you need to stick to them and execute”.
According to Petrides et al. (2011, p. 227), a good assessment meets three criteria: criterion validity, content validity, and construct-related validity. Content validity assumes that the exercises candidates undertake during their evaluation by an assessment centre correspond to the nature of their subsequent duties in a given job. Creating a logo is a common function of a company’s marketing department, which promotes the company’s or product’s image to customers. It is not, however, a typical duty of managers, who normally occupy supervisory positions and are not directly involved in designing logos, a task that is mainly the prerogative of brand designers. We should therefore conclude that our assessment centre’s work had only partial content validity.
Criterion validity concerns how accurately the assessment centre predicts the actual work performance of new hires. We had many opportunities to observe the candidates in action, and some of us may even have developed a “gut feeling” about certain candidates. We can therefore say with some certainty that the outcome of the exercise and each member’s individual contribution provide a plausible picture of their future performance in the workplace. Yet, as we did not have fuller background information on the candidates (e.g., past work history and psychometric tests), we should leave some room for uncertainty regarding the criterion validity of the assessment.
Finally, construct-related validity relates to how accurately the assessment centre’s indicators capture the essence of the variables being measured (Woehr & Arthur 2003). Our assessment centre measured applicants on a limited number of criteria, which, in our view, carry the most weight in identifying managerial competencies. However, there may be other, broader factors at play that influence future managers and that we might have considered as well, e.g., individual leadership style and job motivation (McGregor 2005).
On the positive side, our assessment centre managed to preserve gender equality in shortlisting the candidates. This runs contrary to the popular conception that the sexes of assessor and assessee can explain many hiring decisions (Walsh et al. 1987). With this in mind, we selected an equal number of men and women for evaluation and avoided gender bias by administering, and commenting on, the same exercise for all candidates. We also allowed adequate time for all phases of the assessment, from room setup to the actual feedback. While working on the exercise, the candidates could reveal hidden potential that they had not been aware of before (Cooper et al. 2012). Because our assessment panel consisted of various experts, we could give well-rounded feedback on each aspect of the candidates’ performance and suggest areas for improvement.
In retrospect, our assessment centre adequately evaluated the candidates’ managerial talents by factoring in the three validity constructs for such assessments and analysing the shortcomings associated with them. As with many other assessment centres nowadays, the selection procedures proved rather costly and time-consuming, which calls for revisiting the efficiency of future assessments (Cooper et al. 2012). At the same time, despite the time and money constraints, our assessment centre could consider adopting a broader evaluation framework for communication and leadership competencies, which would draw a more reliable picture of each candidate. Our assessment should also incorporate background information on the candidates’ past job performance, track their subsequent performance on the job, and adjust current assessment measures in response to new information.