Which job attributes provide the basis for evaluating the relative worth of jobs inside an organization?

Hardware-Accelerated Volume Rendering

HANSPETER PFISTER, in Visualization Handbook, 2005

11.3.4.2 Post-Classification

In post-classification, the mapping to RGBα values is applied to a continuous, interpolated scalar field. Post-classification is easier to implement than pre-classification and is the approach mostly used in the hardware-accelerated algorithms described below. Note that post-classification does not require the use of associated colors, although it is still possible to use them. In that case, the transfer functions (see below) are stored for associated colors.

As discussed by Engel et al. [25], pre-classification and post-classification produce different results because classification is in general a nonlinear operation. The nonlinearity of tri-linear interpolation may lead to artifacts if it is applied to pre-classified data. For post-classification, the evaluation of the nonlinear classification function in a linearly interpolated scalar field produces the correct result [6, 82]. However, as noted above, pre-classification remains a very important tool in medical imaging, and it is important that the hardware-accelerated volume rendering method be able to support both options.
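The difference between the two orderings can be illustrated in a few lines of Python; the step transfer function and the sample values below are hypothetical, chosen only to expose the nonlinearity:

```python
# Illustrative only: a hard opacity step stands in for the (nonlinear)
# classification transfer function, applied at two neighboring voxels.
def transfer_alpha(s):
    """Map a scalar in [0, 1] to opacity; the step makes classification nonlinear."""
    return 1.0 if s >= 0.5 else 0.0

s0, s1, t = 0.0, 1.0, 0.5  # voxel scalar values and interpolation weight

# Post-classification: interpolate the scalar field first, then classify.
post = transfer_alpha((1 - t) * s0 + t * s1)

# Pre-classification: classify at each voxel, then interpolate the opacities.
pre = (1 - t) * transfer_alpha(s0) + t * transfer_alpha(s1)

print(post, pre)  # 1.0 0.5: the two orders disagree
```

Because the step function is nonlinear, classifying before interpolating blurs the opacity across the interval, while classifying the interpolated scalar preserves the sharp material boundary.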


URL: https://www.sciencedirect.com/science/article/pii/B9780123875822500137

Job Design and Evaluation: Organizational Aspects

A.J. Arthurs, in International Encyclopedia of the Social & Behavioral Sciences, 2001

Job evaluation is a method that is used to produce a hierarchy of jobs in an organization as the basis for determining relative pay levels. It seeks to measure the relative value of jobs, not that of the job holders. The main aim of job evaluation is to provide an acceptable rationale for determining the pay of existing hierarchies of jobs and for slotting in new ones. It may be implemented unilaterally by management or with varying degrees of participation by the workforce. Acceptability, consensus, and the maintenance of traditional hierarchical structures are normally the goal of such schemes. Job evaluation schemes do not directly determine rates of pay. They are concerned with relationships, not absolutes. The rate for the job or the salary for a job grade is influenced by a number of factors outside the scope of most schemes. Normally pay is linked to external market rates, the relative bargaining strengths of the negotiating bodies, and traditional patterns of pay differentials between jobs. In recent years equal pay legislation has exposed job evaluation to sometimes rigorous analysis in the courtroom. Out of this are emerging principles which seek to ensure that job evaluation schemes are constructed and implemented transparently and without obvious bias.


URL: https://www.sciencedirect.com/science/article/pii/B0080430767042856

Job Design and Evaluation: Organizational Aspects

Alan J. Arthurs, in International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015

The Rise of Job Evaluation

Job evaluation grew up in the first half of the twentieth century, in an era of large, bureaucratic organizations and increasing unionization. There was a shift toward more ‘rational’ and ‘scientific’ methods of determining pay differentials (Ruona and Gibson, 2004). A major concern was to justify pay levels since many industrial disputes were concerned with wages. Collective bargaining, unilateral wage determination, or individual negotiations were seen as unsystematic compared to job evaluation. The rise of mass production and sometimes-militant industrial unionism was also important. As a result of new production methods introduced at the end of the 1930s and during World War Two, the number of varied yet unskilled occupations increased sharply. The clear line between mass production and traditional craft jobs had become blurred. Pay on the basis of job content, not personal attributes, complemented this system of industrial relations.


URL: https://www.sciencedirect.com/science/article/pii/B9780080970868730777

Comparable Worth in Gender Studies

R.J. Steinberg, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2 Job Evaluation and Gender Bias

Job evaluation is the institutional mechanism that perpetuates wage discrimination, especially in medium-sized and large organizations. Approximately two-thirds of all employers use some form of job evaluation. A minority of work organizations set wages without using these formal processes. They rely instead on local market wages, employer biases, and institutional inertia in setting wages. However, if large employers modified job evaluation practices to pay women fairly, smaller employers who do not use such procedures would be affected by these outcomes. There are many aspects of gender bias that remain in virtually every traditional job evaluation system currently available to employers (Steinberg and Haignere 1987, Acker 1989, Steinberg 1990). Four of the most common forms of bias are discussed below.

First, the prerequisites, tasks, and work context of jobs historically performed by women have been ignored or taken for granted. Working with mentally ill or dying patients or reporting to multiple supervisors are not considered to be stressful working conditions. Working with noisy machinery, however, is considered stressful. University secretaries who protect the confidentiality of the records they work with do not receive compensation for that responsibility. When examining the skills associated with clerical work, most job evaluation systems treat as invisible such requirements as knowledge of spelling and grammar, the ability to compose straightforward correspondence, knowledge of the substantive work of the office, and knowledge of the organizational shortcuts within a bureaucracy. The ability of a public health nurse to break down complex and technical material to present to nontechnical audiences is also not acknowledged as a communication skill in traditional job evaluation. If job content remains unacknowledged in the process of describing and evaluating jobs, it is not taken into account when determining a wage rate.

Second, evaluation systems confuse the content of the job with stereotypic ideas about the characteristics of the typical jobholder. Since authority is associated with masculinity, managers are perceived as running offices and departments. The work of the secretary in the actual daily running of an office remains invisible, especially if she performs her work competently. The technical skills and life-and-death responsibilities required of a Registered Nurse are often subordinated to her perceived responsibility for the emotional comfort of her patients. Social psychological experiments confirm that the value of an activity is lowered simply by its association with women (Shepala and Viviano 1984).

Third, the content of work historically performed by women is recognized in the evaluation hierarchy of complexity, but, by definition, is assumed to be less complex than the content of work historically performed by men. For example, both women's and men's jobs require perceptual skills. Male jobs are more likely to require spatial perceptual skills, female jobs visual skills (England 1992). In traditional job evaluation systems spatial skills are treated as more complex than visual skills. Similarly, women's and men's work both require human relations skills, but the human relations skills associated with men's work often involve power and control, while the human relations skills in women's work often involve taking care of others. Job evaluation systems define power and control over others as involving more complex skills than emotional labor. Virtually every off-the-shelf system of job evaluation defines skill, effort, responsibility, and working conditions in ways that treat the job content of male work as more complex than that of female work.

Fourth, some job evaluation systems negatively value job content associated with female jobs in a way that actually lowers wages. One study of a major university found that staff whose jobs involved working with students on a regular basis actually ‘lost’ pay for that aspect of their job net of other aspects of their job content. Another study found that working with difficult clients and dying patients actually lowered pay independent of other job content. The ironic logic of this evaluation outcome is that the more an incumbent is required to perform the undervalued content, the less the incumbent earns.


URL: https://www.sciencedirect.com/science/article/pii/B0080430767039310

Ergonomics and Work Assessments

Ev Innes, in Ergonomics for Therapists (Third Edition), 2008

FUNCTIONAL CAPACITY EVALUATION (JOB)

Functional capacity evaluations (job) (FCEJ) are “primarily conducted to determine the worker's suitability to return to work and develop an appropriate rehabilitation plan, either in the form of a return-to-work program or a clinic-based work conditioning/hardening program” (p. 57).52 In Australia FCEJs are often conducted in conjunction with a workplace assessment (WPA) in which the therapist does an on-site assessment of the worker's preinjury duties and potential suitable duties that may be included in a return-to-work plan. The WPA also includes assessment of the work environment, including any equipment or tools that may be used. In New South Wales, Australia, a return-to-work plan cannot be approved unless a WPA has been conducted by an occupational therapist or physiotherapist.

Therapists often design their own FCEs, especially if assessing a worker's ability to return to a specific job.18,55,72 The preferred type of FCEJ for many therapists is a battery of tests of the therapist's own design that may use elements of established FCEs, when the subtests are appropriate and relevant to the specific job to which the worker is returning.18,52,72 Many also use work simulation, such as setting up a keyboard task for a worker returning to computer-based duties.

Kim will use an FCEJ to determine Lucy's current abilities and how these relate to her specific work requirements in order to develop an appropriate return-to-work plan. Some components of standardized FCEs will be used, such as upper limb reaching components. Kim will also simulate some of Lucy's job demands by setting up a data-entry task on a computer workstation similar to that used by Lucy at work. Kim will make modifications and adjustments to the workstation to determine what is optimal for Lucy. Kim may use computer workstation checklists as well as observing and measuring Lucy's performance.


URL: https://www.sciencedirect.com/science/article/pii/B978032304853850007X

Employee rewards

Chris Rowley, Wes Harry, in Managing People Globally, 2011

3.5.4.2 Grading (job classification)

This approach to job evaluation is similar to ranking except that classes or grades are established first, and jobs are then placed into these pre-formed grades. Jobs are usually evaluated on the basis of the whole job using one factor, such as difficulty. Although grading is more systematic than ranking, there does remain a largely subjective dimension and, therefore, it can cause disgruntlement among employees. Yet, this is a common means of evaluating jobs in parts of Asia and is usually established using very subjective criteria.

Some of the developments in the area of rewards have an impact here. One such development that has attracted much attention in the West and among international firms in Asia is ‘broadbanding’. This is an attempt to retain the positive features of traditional pay scales while reducing their less desirable aspects, such as tendencies to focus on promotion over performance, an unwillingness to undertake duties associated with higher grades and the inability to offer higher salaries to new employees. Basically, broadbanding involves retaining some form of grading system, but with a reduced number of grades or salary bands and with pay variations within them based on performance rather than the nature of the job. However, there is a desire to retain a skeletal grading system as this gives order to the structure and helps to justify differentials.


URL: https://www.sciencedirect.com/science/article/pii/B9781843342236500031

Job Analysis, Design, and Evaluation

Robert Smither, in Encyclopedia of Applied Psychology, 2004

3 Job Evaluation

As suggested previously, job evaluation is the procedure used to determine the relative worth of jobs within an organization. Most jobs are evaluated in terms of education required, skills involved in job performance, level of responsibility, working conditions, and factors in the local economy. Information from the evaluation is used to determine salary levels for jobs.

There are several methods for job evaluation. The point method is the most widely used. Under this procedure, subject matter experts identify job dimensions that will be the basis for assigning salary level. These dimensions—known as compensable factors—are then described in terms of behaviors typical of different levels of skill and assigned a number that reflects differences in skill levels. Jobs can then be scored in terms of the number of points and compared to each other to determine salary level.
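A minimal sketch of point-method scoring; the factor names, levels, and point values below are invented for illustration, not taken from any particular system:

```python
# Hypothetical point-method table: each compensable factor has rated
# levels (1-3 here) mapped to point values.
FACTOR_POINTS = {
    "education":          {1: 20, 2: 40, 3: 60},
    "responsibility":     {1: 15, 2: 30, 3: 45},
    "working_conditions": {1: 5,  2: 10, 3: 15},
}

def job_value(ratings):
    """Sum the points for each rated compensable factor."""
    return sum(FACTOR_POINTS[factor][level] for factor, level in ratings.items())

# Two illustrative jobs rated by subject matter experts.
clerk    = {"education": 2, "responsibility": 1, "working_conditions": 1}
engineer = {"education": 3, "responsibility": 3, "working_conditions": 2}

print(job_value(clerk), job_value(engineer))  # 60 115
```

The resulting point totals only rank the jobs relative to one another; translating them into salaries still requires the external market comparison described below.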

Job ranking is the most straightforward method of job evaluation and is most often used in small organizations. In this procedure, a SME ranks all jobs in an organization or a department in terms of the importance of their contribution to the overall organizational goals. In contrast to the point method, job ranking is a subjective procedure that does not consider the magnitude of the difference between jobs.

Finally, the job grading method places jobs into levels or grades. Jobs requiring less skill, for example, would be placed together—irrespective of the work performed—in a lower grade, whereas jobs requiring more skill would be grouped in a higher grade. The U.S. government’s GS system is an example of job grading.

Regardless of the procedure used, job evaluation results in a hierarchical ranking of the worth of each job within the organization. The final step in determining salaries is external validation of pay scales. This is usually done through surveys of salaries paid for similar jobs in other organizations.


URL: https://www.sciencedirect.com/science/article/pii/B0126574103002920

Standard Transport Appraisal Methods

Marco Dean, in Advances in Transport Policy and Planning, 2020

1 Introduction

Ex ante appraisal and ex post evaluation have always been part of the planning and decision-making process. However, while until the beginning of the 20th century these steps were rather informal in character, since the 1930s, the progressive adoption of more rigorous planning procedures has increased the need for formal methods capable of ensuring more systematic (ex ante and ex post) assessments of both “soft” policies and “hard” plans and projects (Alexander, 2006). Hence, in the course of time, a number of different appraisal and evaluation methods, techniques and tools have been proposed in an attempt to ensure more informed decisions and, at the same time, embrace new planning paradigms and respond appropriately to sustainable development concerns and other emerging global challenges (Dean, 2018; Dimitriou et al., 2016; Goodman and Hastak, 2006; McAllister, 1982). The existing appraisal and evaluation methods can be classified in several ways (e.g., Faludi and Voogd, 1985; Oliveira and Pinho, 2010; Rogers and Duffy, 2012; Söderbaum, 1998). One of the simplest classification schemes is based on the number of objectives and decision criteria considered in the analysis. From this point of view, it is possible to distinguish between two families of methods, although, as highlighted further below, their boundaries are frequently blurred:

Mono-criterion methods, which assess a given plan against a single and specific objective. This family includes, for instance, cost-benefit analysis (CBA), which assesses a plan primarily against the objective of economic efficiency (as shown by the benefit-cost ratio or the net present value of the plan), by translating all impacts into discounted monetary terms.

Multi-criteria methods, which appraise or evaluate a plan by taking into account (more explicitly than mono-criterion methods) the various dimensions of interest, and the interplay between multiple, often contrasting, objectives, and different decision criteria and metrics.
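The contrast between the two families can be sketched numerically; the cash flows, discount rate, criteria, weights, and scores below are all hypothetical:

```python
# Mono-criterion vs. multi-criteria assessment of one illustrative plan.

def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Mono-criterion (CBA style): a single objective, economic efficiency,
# summarized by discounted monetary terms.
plan_cash_flows = [-100.0, 30.0, 40.0, 50.0]   # upfront cost, then benefits
print(round(npv(plan_cash_flows, 0.05), 2))

# Multi-criteria: the same plan scored against several weighted criteria
# measured on a common 0-1 scale (a simple weighted-sum MCA).
weights = {"economy": 0.5, "environment": 0.3, "equity": 0.2}
scores  = {"economy": 0.8, "environment": 0.4, "equity": 0.6}
mca_score = sum(weights[c] * scores[c] for c in weights)
print(round(mca_score, 2))  # 0.64
```

The weighted sum shown is only one of the many MCA techniques discussed below; the point is simply that several objectives enter the assessment explicitly rather than being collapsed into a single monetary metric.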

Hence, contrary to what is commonly believed, multi-criteria analysis (MCA) does not constitute a single specific method. Rather, it should be understood as an umbrella term for a number of different techniques and tools by which multiple objectives and decision criteria (or attributes) can be formally incorporated into the analysis of a problem. MCA is generally assumed to have originated in the fields of mathematics and operational research during the second half of the previous century, with the works of Kuhn and Tucker (1951) and Charnes and Cooper (e.g., Charnes et al., 1955; Charnes and Cooper, 1961) on goal programming commonly regarded as one of the major stimuli for the development of MCA methods. However, as pointed out by Köksalan et al. (2011), the real roots of this discipline are much more ancient and are deeply entwined with the studies of classical economists and mathematicians, which are also at the origin of CBA.

Over the decades, the evolution of MCA has been directly or indirectly influenced by research in different areas of study (e.g., utility and value theories, social choice theory, revealed preference theory, game theory, and fuzzy and rough set theories), so that, presently, the realm of MCA comprises many subfields and different schools of thought (Bana e Costa et al., 1997; Figueira et al., 2005a; Köksalan et al., 2011). Since the late 20th century, MCA methods have attracted growing interest among both researchers and practitioners working across a range of disciplines. This can be primarily attributed to an ever-greater awareness that many contemporary planning and policy problems facing society have a multi-dimensional nature and therefore require the careful examination of a variety of different, often conflicting, perspectives and aspects (Giampietro, 2003; Munda, 1995, 2008).

MCA has thus progressively gained importance as an appraisal and evaluation approach in a number of fields, including ecology, sustainability and environmental science (Herath and Prato, 2006; Huang et al., 2011; Wang et al., 2009), health care decision-making (Thokala et al., 2016), banking and finance (Aruldoss et al., 2013), urban and regional planning (Nijkamp and Rietveld, 1990; Voogd, 1983), and transport project appraisal and evaluation (Macharis and Bernardini, 2015). A significant contribution to the introduction and diffusion of MCA in the field of land use and transport planning was made, in particular, by Nathaniel Lichfield and Morris Hill, whose studies on appraisal and evaluation methods, conducted between the 1950s and 1970s, culminated in the introduction of the Planning Balance Sheet (Lichfield, 1956, 1960, 1966, 1969), later expanded and renamed Community Impact Evaluation (Lichfield, 1996), and the Goal-Achievement Matrix (Hill, 1966, 1968, 1973). These methods, which soon became regarded as the foremost challengers to the long-established CBA, can be considered to have served as a benchmark for many other MCA techniques and approaches proposed in this field over the course of time (Dean, 2018; Dimitriou et al., 2016).

Against this backdrop, this chapter, drawing on a comprehensive analysis of the academic and gray literature, has a threefold objective. Firstly, it seeks to bring order to this “methodological chaos” by providing a brief yet comprehensive overview of the different MCA methods available. Secondly, it aims to illustrate the current state of the art in the use and application of MCA in the transport sector, highlighting differences between theory and practice. Thirdly, it attempts to discuss the potential advantages and limitations of MCA by adopting, as far as possible, a balanced and neutral perspective so as to break down clichés and false beliefs on the subject.

The chapter consists of six further sections. Section 2 describes the main elements and features of MCA. Section 3 offers an examination of the key principles and theoretical foundations of the most widely known MCA methods, while Section 4 illustrates the MCA tools and techniques which are used by practitioners in transport project appraisal and evaluation. Section 5 analyses the strengths and weaknesses of MCA. Section 6 focuses on participatory MCA methods, which, especially over the past two decades, have been devised and promoted by many scholars with the view to producing more thorough, transparent and democratic assessments of transport policies and projects. Finally, Section 7 concludes the chapter by highlighting its key insights and outlining future research needs in this area.


URL: https://www.sciencedirect.com/science/article/pii/S2543000920300147

Compensation

Henk Thierry, in Encyclopedia of Applied Psychology, 2004

3 Job Evaluation

There are many different systems of job evaluation in use. Common to most systems is that key jobs are described according to their content and accompanying standard work conditions and are subsequently analyzed and appraised in terms of a job evaluation system's characteristics (e.g., required knowledge, problem solving, supervision). Frequently, a point rating procedure, in which numerical values are assigned to the main job features, is applied. The sum total of these values constitutes the “job value.” This approach is, to some extent, derived from the technique of job analysis, a well-known subject area within work psychology. Through job analysis, the work elements of a job (e.g., cutting, sewing) are carefully analyzed in terms of those worker characteristics that are required for successful job performance (e.g., abilities, skills, personality factors). The objective of any job analysis system is to grasp the essence of a job in terms of a variety of distinct “worker” and/or “work” qualifications.

Would the latter objective also hold for job evaluation? Research shows that these systems usually measure one common factor: extent of required education. In practice, several (professional) groups in organizations tend to stress the relevance of particular job qualities (e.g., leadership behavior, manual dexterity and sensitivity for specific tools in production work) that are believed to have been underrated in a particular job evaluation system. But factor-analytic studies show that even when a job evaluation system is “extended” to include such particular job qualities, second and/or third factors—if they can be identified at all—have much in common with the first (general education) factor. In other words: job evaluation measures the educational background required for adequate decision making in a job. Although the concepts and wording of job characteristics in various job evaluation systems might seem to differ from one another, they tap into the same construct. This is also exemplified through the custom, in several countries, of using a simple arithmetic formula to translate job value scores from one system of job evaluation into the scores of another system.
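The “simple arithmetic formula” mentioned above for translating scores between systems is typically a linear rescaling; a minimal sketch with invented coefficients:

```python
# Hypothetical linear conversion between two job evaluation systems'
# point scales; slope and intercept are invented for illustration and
# would in practice be fitted from jobs scored under both systems.
def convert_score(score_a, slope=1.8, intercept=25.0):
    """Translate a job value from system A's scale to system B's scale."""
    return slope * score_a + intercept

print(convert_score(100))  # 205.0
```

That a single linear formula suffices is consistent with the finding above that the different systems largely measure the same underlying factor.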

Obviously, these procedures for determining job value and base pay level do have important implications. First, the process of translating work descriptions or characteristics into worker attributes is quite vulnerable because it requires psychological expert knowledge to determine the variety of attributes (e.g., abilities, aptitudes, skills) relevant to adequately perform a particular job. Research has shown repeatedly that ratings may be biased in various respects due to the “halo effect,” gender discrimination, implicit personality theory, and the like. Implicit personality theory refers to a pattern among attributes that a particular rater believes exists. One example might be that somebody scoring high on emotional distance is thought to excel in analytical thinking, have a “helicopter view,” and engage in a structuring leadership style because the rater assumes such a pattern to be “logical.”

Second, regardless of an individual worker's personal conception of his or her job's main content, that job's value is determined mainly by the level and nature of the required educational background. Consequently, a worker may perceive the evaluation of his or her job as not representing the worker's ideas about the job's actual content but rather as being the outcome of a bureaucratic exercise. Related to this is a third issue: Job evaluation may be experienced as a “harness” (because it measures one factor nearly exclusively), especially when a fine-grained salary structure is tied to the job value structure. This characteristic often makes it difficult and time-consuming to adjust job value and base salary to changed conditions. Flexible procedures have been designed to accommodate these needs.

Fourth, the larger the base pay proportion, the more organization members may perceive their pay as meaningful to satisfying important personal needs, according to the reflection theory on compensation. This theory holds that pay acquires meaning as it conveys information relevant to the self-concept of a person. Four meanings are distinguished: motivational properties, relative position, control, and spending. However, meaningfulness often diminishes as employees reach the high end of their salary scale without having much expectation of improving their level of pay. This implies that although base pay may motivate employees initially to stay in their jobs and to maintain at least an acceptable level of performance, this motivational force may wear out the longer employees remain in their jobs without noticeable changes in their level of pay.


URL: https://www.sciencedirect.com/science/article/pii/B0126574103003299

Program Evaluation and Research Designs

John DiNardo, David S. Lee, in Handbook of Labor Economics, 2011

2.3 The “parameter of interest” in an ex ante (predictive) evaluation problem

We now consider a particular kind of predictive, or ex ante, evaluation problem: suppose the researcher is interested in predicting the effects of a program “out of sample”. For example, the impact of the Job Corps Training program on the earnings of youth in 1983 in the 10 largest metropolitan areas in the US may be the focus of an ex post evaluation, simply because the data at hand comes from such a setting. But it is natural to ask any one or a combination of the following questions: What would be the impact today (or some date in the future)? What would be the impact of an expanded version of the program in more cities (as opposed to the limited number of sites in the data)? What would be the impact on an older group of participants (as opposed to only the youth)? What would be the impact of a program that expanded eligibility for the program? These are examples of the questions that are in the domain of an ex ante evaluation problem.

Note that while the ex post evaluation problem has a descriptive motivation, the above questions implicitly have a prescriptive motivation. After all, there seems no other practical reason why knowing the impact of the program “today” would be any “better” than knowing the impact of the program 20 years ago, other than because such knowledge helps us make a particular policy decision today. Similarly, the only reason we would deem it “better” to know the impact for an older group of participants, or participants from less disadvantaged backgrounds, or participants in a broader group of cities is because we would like to evaluate whether actually targeting the program along any of these dimensions would be a good idea.

One can characterize an important distinction between the ex post and ex ante evaluation problems in terms of Eq. (4). In an ex post evaluation, the weights ψ(u) are dictated by the constraints of the available data, and what causal effects are most plausibly identified. It is simply accepted as a fact—however disappointing it may be to the researcher—that there are only a few different weighted average effects that can be plausibly identified, whatever weights ψ(u) they involve. By contrast, in an ex ante evaluation, the weights ψ(u) are chosen by the researcher, irrespective of the feasibility of attaining the implied weighted average “of interest”. These weights may reflect the researcher’s subjective judgement about what is an “interesting” population to study. Alternatively, they may be implied by a specific normative framework. A clear example of the latter is found in Heckman and Vytlacil (2005), who begin with a Benthamite social welfare function to define a “policy relevant treatment effect”, which is a weighted average treatment effect with a particular form for the weights ψ(u).
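Eq. (4) is not reproduced in this excerpt; a generic weighted-average form consistent with the surrounding discussion (the notation is assumed here, not quoted from the chapter) is:

```latex
% tau(u): causal effect for subpopulation u; psi(u): nonnegative weights
\tau_{\psi} \;=\; \int \tau(u)\,\psi(u)\,\mathrm{d}u,
\qquad \int \psi(u)\,\mathrm{d}u = 1 .
```

In an ex post evaluation the weights ψ(u) are fixed by the research design and data; in an ex ante evaluation they are chosen by the researcher or implied by a normative framework.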

One can thus view “external validity” to be the degree of similarity between the weights characterized in the ex post evaluation and the weights defined as being “of interest” in an ex ante evaluation. From this perspective, any claim about whether a particular causal inference is “externally valid” is necessarily imprecise without a clear definition of the desired weights and their theoretical justification. Again, the PRTE of Heckman and Vytlacil (2005) is a nice example where such a precise justification is given.

Overall, in contrast to the ex post evaluation, the goals of an ex ante evaluation are not necessarily tied to the specific context of or data collected on any particular program. In some cases, the researcher may be interested in the likely effects of a program on a population for which the program was already implemented; the goals of the ex post and ex ante evaluation would then be similar. But in other cases, the researcher may have reason to be interested in the likely effects of the program on different populations or in different “economic environments”; in these cases ex post and ex ante evaluations—even when they use the same data—would be expected to yield different results. It should be clear that however credible or reliable the ex post causal inferences are, ex ante evaluations using the same data will necessarily be more speculative and dependent on more assumptions, just as forecasting out of sample is a more speculative exercise than within-sample prediction.

2.3.1 Using ex post evaluations for ex ante predictions

In this chapter, we focus most of our attention on the goals of the ex post evaluation problem, that of achieving a high degree of internal validity. We recognize that the weighted average effects that are often identified in ex post evaluation research designs may not correspond to a potentially more intuitive “parameter of interest”, raising the issue of “external validity”. Accordingly—using well-known results in the econometric and evaluation literature—we sketch out a few approaches for extrapolating from the average effects obtained from the ex post analysis to effects that might be the focus of an ex ante evaluation.

Throughout the chapter, we limit ourselves to contexts in which a potential instrument is binary, because in the real-world examples where potential instruments have been explicitly or “naturally” randomized, the instrument is invariably binary. As is well understood in the evaluation literature, this creates a gap between what causal effects we can estimate and the potentially more “general” average effects of interest. It is intuitive that such a gap would diminish if one had access to an instrumental variable that is continuously distributed. Indeed, as Heckman and Vytlacil (2005) show, when the instrument Z is essentially randomized (and excluded from the outcome equation) and continuously distributed in such a way that Pr[D=1|Z] takes values throughout the unit interval, then the full set of what they define as Marginal Treatment Effects (MTE) can be used to construct various policy parameters of interest.


URL: https://www.sciencedirect.com/science/article/pii/S0169721811004114

What is the basis of job evaluation?

Four primary methods of job evaluation used to set compensation levels are point factor, factor comparison, job ranking and job classification.

How does job evaluation determine the relative worth of jobs within a firm?

Job Evaluation (JE) is the process of objectively determining the relative worth of jobs within an organization. It involves a systematic study and analysis of job duties and requirements. The evaluation is based on a number of compensable factors.

Which factors provide the basis for assessing and comparing the relative overall worth of different jobs?

Relative worth is determined mainly on the basis of the Job Description and Job Specification. Job Evaluation helps to determine wages and salary grades for all jobs. Employees need to be compensated depending on the grades of jobs they perform. Remuneration must be based on the relative worth of each job.

Is a job evaluation system used to determine the relative value of one job to another?

A job evaluation is a systematic way of determining the value/worth of a job in relation to other jobs in an organization. It tries to make a systematic comparison between jobs to assess their relative worth for the purpose of establishing a rational pay structure.