Article

Practical Guidelines for Implementing Preemployment Integrity Tests

Public Personnel Management
DOI: 10.1177/0091025013487049
ppm.sagepub.com

Saul Fine¹
Abstract
Integrity tests have been well researched in recent decades and have consistently
been found to be effective predictors of counterproductive behaviors in a variety
of occupational settings. In practice, however, the unique nature of integrity tests
and their constructs has made their integration into organizations' recruitment
processes somewhat challenging. In light of this situation, the present article outlines a
number of practical guidelines that organizations can follow to help ensure successful
integrity testing procedures. These guidelines are based on best practice standards
for preemployment testing and describe the fundamental need for carefully planned
and well-communicated implementation stages, which may include an initial audit
of the organization's counterproductive behaviors, setting realistic and measurable
objectives for the test's use, choosing the appropriate test, correctly positioning the
test within the recruitment process, training the organization's staff and piloting the
test, making accurate hiring decisions and providing appropriate candidate feedback,
and finally monitoring the test's performance and employees' behaviors over time.
Keywords
integrity testing, guidelines
Introduction
In public- and private-sector organizations around the world, job applicants are
screened and assessed by a number of different selection methods before they
are hired. These methods nearly always include some form of resume review and
one or more personal interviews, and may also be supplemented with the use of
¹Midot Ltd., Bnei Brak, Israel

Corresponding Author:
Saul Fine, Midot Ltd., 11 Ben Gurion St., Vita Towers, Bnei Brak, 51260, Israel.
Email: saul@midot.com
psychological assessment tools. While these psychological assessments can vary in
terms of the competencies they measure, ranging from mental abilities and skills to
personality traits, when organizations are particularly interested in the honesty of their
new employees, they will often choose to administer integrity tests as well (Miner &
Capps, 1996).
Integrity tests are designed to screen-out high-risk candidates as a means to mitigate
subsequent incidences of counterproductive work behaviors (CWBs) and occupational
offenses, such as theft, fraud, bribery, violence, and drug use (Murphy, 1993).
To do so, integrity tests may include items with direct questions to job applicants
regarding their attitudes toward CWBs in general and occupational offenses in particular
(Sackett, Burris, & Callahan, 1989). Accordingly, individuals who tend to identify
with counterproductive behaviors, believe that such behaviors are pervasive or justifiable,
are lenient toward their perpetrators, and/or have been involved in such behaviors
themselves are predicted to have greater propensities toward engaging in such
behaviors in the future (Wanek, 1999). A prototypical item from an integrity
test, for example, might be the statement "most employees will steal from their
employers at least once," whereby a candidate's agreement or disagreement with this
statement is essentially indicative of his or her perceived pervasiveness of employee
thefts.
Indeed, a vast amount of research and meta-analytic evidence over the past few
decades has shown integrity tests to be significant predictors of CWB in a variety of
settings (Ones, Viswesvaran, & Schmidt, 1993) and able to successfully reduce CWBs
when utilized in the selection process (Jones, 1991). However, for any tool to be oper-
ationally effective, it needs to be properly implemented into the organization’s overall
recruitment and selection process. Proper implementation may include issues such as
clarifying the test's objectives, the influence the test has on the hiring decision,
training the test's administrators, monitoring the test's performance, and so on.
Consequently, the unsuccessful implementation of one or more of these areas can
render even well-developed and validated assessments more or less ineffective.
With respect to integrity testing, implementations can be especially challenging for
at least two main reasons. First, there is often confusion as to how to integrate integrity
tests, which predict negative behaviors, into the overall selection and assessment pro-
cess, which is normally designed to predict positive performance. Second, adding to
this confusion is uncertainty regarding the proper interaction between human
resource specialists, who are typically in charge of the recruitment and assessment pro-
cess (i.e., selecting-in promising job candidates), and security personnel, who are more
often in charge of assessing personnel risk (i.e., screening-out high-risk candidates).
In light of these challenges, this article describes a number of practical guidelines
to help personnel specialists ensure a successful implementation of integrity tests in
their organizations. These guidelines are based on best practice standards for preem-
ployment testing in general (American Educational Research Association, American
Psychological Association, & National Council of Measurement in Education, 1999),
and related writings in particular (Association of Test Publishers, 2010; Werner & Joy,
1991), and are endorsed by the experience of developing and providing integrity tests
for public- and private-sector organizations around the world.
The Initial Audit
An important stage before implementing an integrity test is the assessment of the orga-
nization’s current situation in terms of the nature and frequency of occupational
offenses and other counterproductive behaviors that occur. This stage is highly advis-
able, as it can give the organization’s decision makers an accurate (and sometimes first
time) look at the actual behaviors of their employees, which is an essential prerequisite
to setting realistic objectives and making meaningful changes later on. An organiza-
tion’s current situation might be audited by summarizing issues such as the number of
disciplinary actions, the number and types of incidents, rates of voluntary and involuntary
turnover, and performance appraisal records. Such issues may then be cross-
referenced against specific branches, departments, jobs, tenure, or against industry
benchmarks in general, whereby the most behaviorally problematic segments of the
organization may be identified. Finally, these job segments should be ranked in order
of their perceived risk to the organization, in terms of their incidence as well as in
terms of their value to the organization to help prioritize intervention efforts. In many
cases, the results of this type of exercise can prove to be surprising. For example, the
highest incidence of sabotage, property theft, or corporate espionage in a given orga-
nization may not come from its agents or officers, but from its subcontracted mainte-
nance staff, who often work late unsupervised hours and who may have been hired via
an outside firm that did not go through the same thorough hiring process and security
checks as the organization’s permanent employees.
To carry out the audit, it is generally sufficient to summarize data that are readily
available from the organization’s personnel or corporate security database (e.g., over
the previous 12 months), although personal and group interviews and employee sur-
veys can be extremely insightful as well. In fact, interviews and surveys can help
provide a far better understanding of the causes behind certain behavioral incidents,
above and beyond the plain numbers themselves. For example, the behaviors of the
maintenance staff from the previous example may turn out to be related to the fact that
they are actually underpaid, untrained, and mistreated by their contractors. In this situ-
ation, therefore, handling the issue on an individual level would probably not solve the
behavioral problems in the long term.
In either case, the frequency of recorded incidents should be translated into the
estimated financial losses incurred by the organization over that period. These losses
may include the direct costs of the incidents themselves as well as many indirect costs
such as income loss, turnover costs, legal expenses, lost productivity, reputation damage,
and so on. Finally, a summary of these figures should be presented to the organization's
decision makers, with an outline of possible strategies for improvement.
Setting Realistic Objectives
Based on the results of the organizational audit (above), an organization may decide to
adopt a number of procedures to improve its current situation. One such decision, for
example, may be to start using an integrity test, while others may be to change work
policies, carry out background checks, or increase surveillance methods.
However, before integrating an integrity test (or any assessment tool for that matter),
it is important to clarify the intended objectives for the test and to set realistic expecta-
tions for its desired effects. In other words, an organization considering using an
integrity test should be clear about why it is doing so and what results it expects to gain
from it.
In many cases, the organization will have a clear and specific objective in mind,
such as reducing incidents of theft or fraud, which are believed to be preventable by a
more scrupulous assessment of the organization's job applicants. In other cases, the
objectives may be more general, such as when the organization sees integrity as a key
competency for the success of its employees and the business, and wants to take mea-
sures to hire honest employees as a result; or when the organization would like to
adhere to certain regulatory requirements (e.g., the Sarbanes-Oxley Act of 2002 in the
United States).
In all events, the organization should be aware of the realistic benefits the integrity
test can provide. Specifically, the organization should be cautious not to be misled
into believing that the integrity tool will resolve all damages caused by counterproductive
behaviors, such as eliminating incidents of theft and fraud altogether. Instead,
it should be understood that integrity tests, when properly implemented, can be
extremely efficient means for reducing such behaviors. However, as with any selection
tool, they will still erroneously miss some future offenders and erroneously reject
some honest others.
This latter issue, known as "false positives," has received much attention in the
literature and deserves consideration here. A primary source of the problem
of false positives is the difficulty of identifying behaviors with low base
rates, such as CWBs (Murphy, 1987). One of the surest ways around this issue is to
supplement integrity test scores with other tools, such as background checks, references,
work histories, and structured interviews. However, the problem itself
should also be put into perspective: False positives are a natural part of any selection
process; they refer to individual decisions, whereas personnel selection usually
focuses on group decisions; and the alternative of not using such a test will almost
certainly result in more false positives (and false negatives) than with the test
(Sackett & Wanek, 1996).
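The base-rate problem can be made concrete with simple arithmetic. In the sketch below, all figures are hypothetical: a 5% base rate of offenders, a test that flags 80% of true offenders, and a 15% false alarm rate among honest applicants. Even with these plausible-looking numbers, most flagged applicants turn out to be honest:

```python
# Worked example of the low-base-rate problem (all figures hypothetical).
base_rate = 0.05      # proportion of applicants who would offend
sensitivity = 0.80    # probability the test flags a true offender
false_alarm = 0.15    # probability the test flags an honest applicant

n = 10_000
offenders = n * base_rate                 # 500 would-be offenders
honest = n - offenders                    # 9,500 honest applicants

true_positives = offenders * sensitivity  # 400 offenders correctly flagged
false_positives = honest * false_alarm    # 1,425 honest applicants flagged

flagged = true_positives + false_positives
print(f"Flagged applicants: {flagged:.0f}")
print(f"Share of flagged who are actually honest: "
      f"{false_positives / flagged:.0%}")  # roughly 78%
```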
In addition, it should be recognized that an integrity test will be less effective in
some situations than others. For example, it is known that an employee's working
environment and other external factors may influence his or her potential involvement
in deviant behaviors, above and beyond those predicted by the integrity test alone
(Fine, Horowitz, Weigler, & Basis, 2010). Situational issues should therefore be
taken into account as well when setting expectations.
Finally, the organization should review its relevant job descriptions to
make sure that the integrity construct is officially recognized as a necessary job
requirement where it is intended to be used. To be sure, integrity as a job requirement
is ubiquitous, because it is universally considered to be an essential competency for a
wide variety of jobs as well as a key organizational value in the public and private sectors
alike (American Management Association, 2002; Kouzes & Posner, 2009).
Choosing the Right Test
Once the objectives and expectations have been properly outlined, it is important to
locate the right test. At the most basic level, the right test should be designed for the
intended purposes; have been used successfully in similar situations; have been known
to be appropriate for the target candidate population, culture, language level, and difficulty;
and have the relevant technical documentation to support its use.
Still, there are many well-developed integrity tests available commercially today,
and it can be fairly confusing to choose between them. In general, there are two main
types of integrity tests: overt tests and personality-based tests. The main distinction
between these two types is that overt tests directly measure opinions and admissions
toward counterproductive behaviors, whereas personality-based tests measure
personal character traits that are inferentially related to these behaviors (Sackett et al.,
1989). Research has found overt and personality-based integrity tests to be moderately
intercorrelated (Hogan & Brinkmeyer, 1997), with both having significant operational
validities for predicting overall CWBs (Ones et al., 1993; Van Iddekinge, Roth,
Raymark, & Odle-Dusseau, 2012).
Based on experience, it may be grossly generalized that security personnel tend to
prefer overt tests due to the direct and context-specific nature of their items, which can
also help them corroborate information from other sources and/or serve as a basis for
interviews or reference checks. Human resource specialists, however, may tend to
prefer personality-based tests, as these tests describe candidates in terms of traits and
behavioral tendencies, and provide summary scores and narratives that are similar in
form to those found in traditional personality inventories. Personality-based tests are
also perceived to be less prone to faking (Alliger & Dwight, 2000), which can some-
times be a deterrent for using overt tests, even though the overall effects of faking on
integrity test validities may actually be minimal (Ones & Viswesvaran, 1998a).
Beyond the type of test, basic logistical factors, such as whether the test is web-based,
multilingual, quick to administer, reasonably priced, customizable, and user friendly,
are all important to consider when choosing a test. Perhaps most important, however, is to
select an integrity test based on its professional qualities. These include development
method, reliability, validity, fairness, legal defensibility, fakability, and cultural adaptability.
Well-developed tests will always have comprehensive technical manuals and published
research reports that describe these issues, and some will have been professionally
reviewed as well. In addition, respected test suppliers will usually require the test's
administrators to be trained on the correct usage of their tests. Finally, it is important
to choose a test whose suppliers offer professional consulting services for piloting,
norming, and validating their tests in the organization later on. Organizations should
insist on receiving copies of these materials and discuss these issues well in advance
to ascertain (perhaps with the help of an independent consultant) the quality of the
test and its legal defensibility.
Positioning the Test
Once the right test has been chosen, and with the organization's objectives still in
mind, the next step is to strategically position the test within the recruitment process
for maximal effectiveness. One of the greatest challenges for integrating a new assessment
tool is to consider how it should influence the overall selection decision. This
question is directly related to the degree of incremental validity yielded by the test
above and beyond the other assessment tools. In general, integrity tests have been
found to provide a high degree of incremental validity to traditional tools (Schmidt &
Hunter, 1998), which is due in part to their low correlations with traditional cognitive-based
assessment tools and only moderate relationship with traditional personality
inventories (Wanek, 1999). Accordingly, most traditional assessment solutions are
unable to provide reliable measures of integrity on their own, or predict CWB to a
similar degree. And, due to their incremental validities, integrity tests can be effective
when placed at various points in the recruitment process, aggregated with other measures,
or used as a separate assessment stage.
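Incremental validity of this kind is conventionally demonstrated by comparing the variance in CWB explained with and without the integrity score. The sketch below illustrates the method on simulated data; the variable names and effect sizes are assumptions for illustration, not figures from the cited studies:

```python
# Illustrative check of incremental validity with simulated data.
# Effect sizes are invented; only the method (compare R^2 with and
# without the integrity score) reflects the text.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 2_000
cognitive = rng.normal(size=n)
integrity = rng.normal(size=n)  # roughly uncorrelated with cognitive ability
cwb = 0.15 * cognitive - 0.40 * integrity + rng.normal(size=n)

# Step 1: predict CWB from the existing (cognitive) tool alone
X_base = cognitive.reshape(-1, 1)
r2_base = LinearRegression().fit(X_base, cwb).score(X_base, cwb)

# Step 2: add the integrity score and note the gain in explained variance
X_full = np.column_stack([cognitive, integrity])
r2_full = LinearRegression().fit(X_full, cwb).score(X_full, cwb)

print(f"R^2 without integrity test: {r2_base:.3f}")
print(f"R^2 with integrity test:    {r2_full:.3f}")
print(f"Incremental validity (delta R^2): {r2_full - r2_base:.3f}")
```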
Because integrity tests are often used to screen-out high-risk candidates, rather than
to select-in high potentials, they are typically used as either initial screening tools or
later as one of the final screening tools. Accordingly, the integrity test will not necessarily
disrupt the organization's current process. Instead, it will more likely enhance
the process by either adding a single (yet critical) dimension not yet formally measured
or by building on other integrity measures already in place. Of course, before
positioning the test, it is important to understand the current recruitment stages and
tools, the constructs measured, and the candidates' and recruiters' current roles
throughout this process.
In terms of taking ownership of the process, some organizations prefer human
resources personnel to be in charge of integrity testing, especially when it is used as a
prescreening psychological assessment tool. Other organizations, however, prefer
security personnel to be in charge of testing, especially when it is used as a final personnel
risk screening tool. While both approaches are reasonable, it is most important
that the assessment information gathered be shared between these two groups to maximize
effectiveness. Specifically, when human resources uses integrity tests to prescreen
candidates, they should communicate the test results to the security personnel,
who can often make good use of the contents of the integrity reports in their interviews,
background checks, or reference checks. Similarly, when used as a final screening
tool, security personnel and human resource specialists should integrate all of the
information collected from one another in making hiring recommendations.
As initial screening tools, integrity tests are attractive for their ease and speed to
administer, and relatively low costs—aspects that may be especially advantageous when
the initial application process and the integrity test are completed online. As an initial
screening tool, the integrity test is less likely to be used together with many other risk
assessments. This may be due in part because other risk assessments are too expensive
(e.g., background or reference checks) to be administered to all applicants or because
they require the candidate's physical presence (e.g., interviews or assessment centers).
Therefore, organizations should consider the percentage of candidates that will be
rejected via prescreening against their overall recruitment needs. Accordingly, when
used as a prescreening tool, relatively low test cutoff scores are usually suggested to
minimize false positives.
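When choosing a prescreening cutoff, a quick check against recruitment needs is to compute the share of applicants each candidate cutoff would reject. A minimal sketch, assuming simulated scores in place of an organization's recent applicant data:

```python
# Hypothetical check of how many applicants various cutoffs would reject.
# Scores are simulated; a real analysis would use recent applicant scores.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(loc=60, scale=15, size=5_000).clip(0, 100)

for cutoff in (30, 40, 50):
    reject_rate = (scores < cutoff).mean()
    print(f"Cutoff {cutoff}: rejects {reject_rate:.1%} of applicants")
```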
When used as one of the final screening tools, it is advantageous to take a more
holistic approach, considering (or perhaps aggregating) scores from additional assessment
tools as well, especially those tools that measure or predict similar integrity-related
constructs. This latter approach will almost certainly lead to more reliable and
accurate hiring decisions than those attained by single measures alone. In addition, as
a final screening tool, it may be easier to facilitate a smooth "hand-off" of the process
from human resources to security personnel, whereby security personnel evaluate the
personnel risk of those already prescreened by human resources, as the final hurdle in
the selection process.
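One simple way to implement this aggregated approach is a weighted composite of standardized scores from each tool. The tool names and weights below are illustrative placeholders; in practice, weights would be informed by each tool's validity evidence:

```python
# A minimal sketch of aggregating final-stage assessments into one
# composite. Tool names and weights are illustrative placeholders.
def composite_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of standardized (z-score) assessment results."""
    total_weight = sum(weights[tool] for tool in scores)
    return sum(scores[tool] * weights[tool] for tool in scores) / total_weight

candidate = {"integrity_test": 0.6, "interview": 1.1, "reference_check": 0.3}
weights = {"integrity_test": 0.4, "interview": 0.4, "reference_check": 0.2}

print(f"Composite z-score: {composite_score(candidate, weights):+.2f}")
```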
Defining Success Factors
At this point, it is recommended to outline a set of success factors that will allow the
organization to systematically measure the integrity tool’s effectiveness once it is
eventually implemented and to align the organization’s expectations with these ends.
Test suppliers usually have experience in this area, and should therefore be consulted
regarding the appropriate method (i.e., how and when) to be used with their tools.
To be sure, these measures should be derived from the objectives and expectations
defined beforehand. Specifically, when the organization’s objectives for the test are of
a general nature, subjective gains may be of primary interest, such as increased super-
visor or peer ratings of the employees’ integrity over time. These aspects can be mea-
sured via interviews or surveys before and after the test is implemented, for example.
Where the objectives are more specific, such as reducing incidents of counterproductive
behaviors, the organization should measure these behaviors directly. Measuring
objective behaviors may be done in several ways. Some of the more popular methods
include contrasting the rate of reported incidents before and after the test's implementation,
or against other branches/departments where the test may not yet have been
implemented. The main advantage to these methods of "contrasted groups" is that they
are fairly straightforward to calculate and interpret. The main disadvantages of these
methods, however, are that other policies and procedures in the organization may have
changed as well over this period, and therefore behavioral differences may not be
directly attributable to the test itself. In addition, it may take several months before
those tested have been employed long enough to observe noticeable differences in their
behaviors. In light of these issues, other methods are available to measure the effectiveness
of testing, such as correlating test scores with future, concurrent, or past behaviors,
although these analyses can be complex and typically require the assistance of a
trained industrial psychologist.
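A minimal version of the contrasted-groups check compares incident rates before and after implementation (or between pilot and control branches), optionally with a significance test. The counts below are invented, and, as noted above, a significant difference may still be confounded by other organizational changes:

```python
# Illustrative "contrasted groups" comparison: incident rates before and
# after test implementation. Counts are invented for demonstration.
from scipy.stats import chi2_contingency

# [incidents, incident-free employee-months], hypothetical figures
before = [48, 952]   # 48 incidents in 1,000 employee-months pre-test
after = [27, 973]    # 27 incidents in 1,000 employee-months post-test

chi2, p_value, _, _ = chi2_contingency([before, after])
print(f"Incident rate before: {before[0] / sum(before):.1%}")
print(f"Incident rate after:  {after[0] / sum(after):.1%}")
print(f"Chi-square p-value:   {p_value:.3f}")  # other changes may confound
```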
In any event, it is important that the outcome of such analyses be translated and
communicated to the organization in terms of their potential financial savings. These
potential benefits are essential to help the organization's decision makers understand
the tangible returns the test can have on the organization's investment. In fact, it is
advisable to provide rough estimates of these calculations well in advance of the test's
implementation, based on reportedly similar cases found in the professional literature
and/or in consultation with the test supplier. Monetary benefits can be computed based
on fairly straightforward cost-benefit analyses that essentially include the expected
savings due to prevented thefts, frauds, reduced turnover, and so on, less the cost of
testing. Sturman and Sherwyn (2007) showed, for example, that screening job applicants
using an overt integrity test was able to reduce the average cost of worker compensation
claims by as much as 68% and yield a substantial return on investment for
the organization in the process.
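The underlying cost-benefit arithmetic can be sketched in a few lines. All figures below are hypothetical placeholders, not values from Sturman and Sherwyn's study:

```python
# Simple cost-benefit arithmetic for integrity testing (figures hypothetical).
annual_cwb_losses = 250_000     # estimated losses from the initial audit
expected_reduction = 0.30       # assumed proportional reduction from testing
candidates_per_year = 1_200
cost_per_test = 25

savings = annual_cwb_losses * expected_reduction
testing_cost = candidates_per_year * cost_per_test
net_benefit = savings - testing_cost

print(f"Expected savings:   ${savings:,.0f}")
print(f"Testing cost:       ${testing_cost:,.0f}")
print(f"Net annual benefit: ${net_benefit:,.0f} "
      f"(ROI: {net_benefit / testing_cost:.1f}x)")
```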
Piloting the Test in Your Organization
A smart way to try out an integrity test, before rolling it out to the entire organization,
is to carry out a controlled pilot study. Piloting can be especially important for very
large or public-sector organizations, wherein changing assessment and selection practices
often takes time and proven success. Ideally, this pilot should be designed to
measure the predefined success factors (above). In that way, decision makers
can most succinctly assess whether the pilot was successful and the test effective.
In addition, if not defined previously, the pilot should also assess candidate and
recruiter feedback regarding the perceived appropriateness, fairness, and validity of
the test. Finally, irrespective of the success factors, the pilot is a good opportunity to
highlight potential logistic or technical problems associated with using the test, which
can be corrected before the test is in wider use.
Intentionally narrow in scope, a pilot should usually focus on a specific department
or branch within the organization in which particular improvements are needed and
can be measured, or where the security risk to the organization is considered to be
particularly high. A time frame of 3 to 6 months using the test should be sufficient for
a pilot of this nature.
The pilot itself should be “championed” by a senior manager in the organization,
whose responsibility will be to ensure the test is properly used, to schedule and
coordinate the pilot’s milestones, and to report the results to the organization’s
decision makers. While this last point may seem obvious, it is not uncommon for
organizations to take on a new test with no particular plan for when or how to
evaluate it later on.
Finally, it is important to keep test suppliers involved in the pilot. They are the
experts on their own tests and can offer valuable advice in terms of designing an appropriate
method for administering, analyzing, and documenting the results, and for making
future recommendations.
Using the Test Operationally
Assuming the pilot is found to have been successful, the next step is to roll out the test
for its wider use in the organization. In doing so, the objectives, success factors, assessment
processes, and lessons learned from the pilot should be reviewed, updated, and
documented as necessary. Then, an official organizational policy should be written for
all relevant HR and security personnel summarizing these issues and stating the future
usage of the test. This paper should also outline the influence the test will have on the
hiring decision process and how all individuals are expected to adhere to this policy
after they have been professionally trained accordingly. A separate policy letter may
be appropriate for HR and security staff, depending on their required involvement in
the assessment process. Nevertheless, publishing this policy internally is important to
ensure that all relevant staff are completely synchronized in terms of the next operational
steps.
It is critical that all relevant HR and security personnel be properly trained by the
test's suppliers regarding important theoretical and operational issues such as the
rationale behind the test, the fairness and validity of the test, how to administer the test,
reading and interpreting the test's reports, integrating the results with other measures,
and making professional hiring decisions. Training sessions should also explain data
protection issues, whereby all reports are to be kept confidential and never accessible
by or transferred to unauthorized personnel.
Once trained, the test’s administrators should be left with training manuals and a
contact person at the test supplier’s company who can be reached for additional ques-
tions as necessary. Finally, it is important that as the organization’s staff change, train-
ing is treated as a necessary requirement for all relevant new employees.
Providing Feedback to Candidates
Perhaps one of the more sensitive issues regarding integrity testing is the mislabeling
of low scorers. As such, while training sessions will most likely cover this issue, it is
important that organizations set their own clear policies on this matter. Specifically, all
relevant staff should understand that low scorers are not dishonest people. Rather,
integrity tests are designed to provide an evaluated level of risk toward certain counterproductive
behaviors, such that when used consistently in the selection process,
hiring low-risk candidates and rejecting high-risk candidates will lead to fewer dishonest
behaviors overall.
Accordingly, low-scoring candidates should not be told that the test has found them
to be dishonest. Instead, where feedback is needed, candidates' results should be
described to them (and others) in terms of the negative work attitudes that were derived
from their responses to key questions, which are often related to subsequent behaviors,
but not in terms of "passing" or "failing" the test, and certainly not in terms of the
primary basis for being hired or rejected. Moreover, feedback should be given sensitively
and with cognizance of the common misconceptions and mislabeling of low
scorers.
Despite the above suggestions, it should be duly noted that giving specific feedback
regarding individual integrity test scores is usually unnecessary, as would be true for
any other type of assessment. Consider a case, for example, where a candidate did very
poorly on a certain group discussion exercise or personal interview. Clearly, the organization
would not readily inform the candidate that this one assessment was the
deciding reason for not hiring him or her. In most situations, therefore, it is sufficient
to inform low scorers that they were not found suitable in general over the whole
recruitment process after considering all of the relevant factors.
Monitoring and Following Up
While the test is being used operationally, it should be monitored periodically for per-
formance issues. Once or twice a year is usually sufficient for this, assuming that the
pilot stage was well monitored; otherwise, more frequent monitoring is recommended
during the 1st year. Some of the issues to look out for include test norms (i.e., the dis-
tributions of scores and their implications), effectiveness (i.e., the degree to which
incidents of counterproductive behaviors change based on test scores), fairness (i.e.,
the degree to which the test may adversely discriminate against protected minority
groups), and personnel feedback (i.e., the degree to which the test is perceived as being
an effective tool).
In terms of norms, it is reasonable that some candidates in certain organizational
and geographical cultures may respond systematically differently to the items in the
test, warranting an adjustment of the test's norms. This will help make sure the distri-
bution of scores is localized and will help avoid a situation of the test inadvertently
yielding too many high or low scores. Adjusting norms should always be done in
cooperation with the test supplier.
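Re-deriving local norms essentially means mapping raw scores onto percentiles of the local applicant distribution. A minimal sketch, assuming simulated scores in place of actual local applicant data:

```python
# Re-deriving local norms: map raw scores to percentiles of the local
# applicant distribution. Data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(2)
local_scores = rng.normal(loc=55, scale=12, size=3_000)  # recent local applicants

def local_percentile(raw_score: float, norm_sample: np.ndarray) -> float:
    """Percent of the local norm sample scoring at or below raw_score."""
    return (norm_sample <= raw_score).mean() * 100

for raw in (40, 55, 70):
    print(f"Raw {raw} -> {local_percentile(raw, local_scores):.0f}th "
          f"percentile locally")
```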
In terms of effectiveness, it is important to carry out periodic follow-up studies
regarding the counterproductive behaviors in the organization, and to report back to
the organization’s decision makers the monetary and behavioral benefits and overall
utility of the integrity test in the organization.
Regarding fairness and adverse impact, it is important to keep clear records of candidates'
demographics (i.e., age, gender, and race) to make sure the percentage of
those hired is proportionate to the percentage of candidates in each group. In general,
it should be noted that integrity tests are known to be typically fair and nondiscriminatory
in a variety of settings (Ones & Viswesvaran, 1998b). So, while this is not typically
an area of concern for integrity testing, it should be carefully monitored
nonetheless.
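A common way to operationalize this monitoring is the "four-fifths" rule from U.S. selection guidelines, which compares each group's selection rate with that of the group with the highest rate. The groups and counts below are hypothetical:

```python
# Adverse impact check via the four-fifths (80%) rule. All counts hypothetical.
hired = {"group_a": 120, "group_b": 45}
applied = {"group_a": 300, "group_b": 150}

rates = {g: hired[g] / applied[g] for g in hired}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({flag})")
```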
In terms of personnel feedback, it is important to monitor the opinions of administrators
regarding the perceived usefulness and fairness of the test, to address specific
issues, update users on objectively measured results from using the test, and retrain
them as necessary. Candidate reactions are also good to monitor, although it may be
surprising to learn that integrity tests do not usually elicit the negative reactions that
are sometimes suspected (Berry, Sackett, & Wiemann, 2007; Sackett & Wanek, 1996).
As a general rule, whenever issues in any of these areas arise, it is advised to con-
sult the test supplier for the appropriate solutions.
Concluding Remarks
These guidelines may provide public- and private-sector organizations with some
important practical issues to consider when implementing integrity tests into their
recruitment and selection processes. Among the issues raised here, proper planning
and awareness toward specific and measurable objectives are perhaps the most critical
elements. Accordingly, adopting at least some of the steps described here may facilitate
a more effective assessment process using integrity tests.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship,
and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of
this article.
References
Alliger, G. M., & Dwight, S. A. (2000). A meta-analytic investigation of the susceptibility of
integrity tests to response distortion. Educational and Psychological Measurement, 60,
59-72.
American Educational Research Association, American Psychological Association, & National
Council of Measurement in Education. (1999). Standards for educational and psychologi-
cal testing. Washington, DC: Author.
American Management Association. (2002). Corporate values survey. Retrieved from
http://www.amanet.org/training/whitepapers/2002-corporate-values-survey-35.aspx
Association of Test Publishers. (2010). Model guidelines for preemployment integrity testing
(3rd ed.). Washington DC: Author.
Berry, C. M., Sackett, P. R., & Wiemann, S. (2007). A review of recent developments in integrity
test research. Personnel Psychology, 60, 271-301.
Fine, S., Horowitz, I., Weigler, H., & Basis, L. (2010). Is good character enough? The effects
of situational variables on the relationship between integrity and counterproductive work
behaviors. Human Resource Management Review, 20, 73-84.
Hogan, J., & Brinkmeyer, K. (1997). Bridging the gap between overt and personality-based
integrity tests. Personnel Psychology, 50, 587-599.
Jones, J. W. (1991). Preemployment honesty testing: Current research and future directions.
New York, NY: Quorum.
Kouzes, J. M., & Posner, B. Z. (2009). To lead, create a shared vision. Harvard Business
Review, 87(1), 20-21.
Miner, J. B., & Capps, M. H. (1996). How honesty testing works. Westport, CT: Quorum.
Murphy, K. R. (1987). Detecting infrequent deception. Journal of Applied Psychology, 72, 611-614.
Murphy, K. R. (1993). Honesty in the workplace. Pacific Grove, CA: Brooks/Cole Publishing.
Ones, D. S., & Viswesvaran, C. (1998a). The effects of social desirability and faking on person-
ality and integrity assessment for personnel selection. Human Performance, 11, 245-269.
Ones, D. S., & Viswesvaran, C. (1998b). Gender, age and race differences on overt integrity
tests: Results across four large-scale job applicant datasets. Journal of Applied Psychology,
83, 35-42.
Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (1993). Comprehensive meta-analysis of integrity
test validities: Findings and implications for personnel selection and theories of job
performance [Monograph]. Journal of Applied Psychology, 78, 679-703.
Sackett, P. R., Burris, L. R., & Callahan, C. (1989). Integrity testing for personnel selection: An
update. Personnel Psychology, 42, 491-529.
Sackett, P. R., & Wanek, J. E. (1996). New developments in the use of measures of honesty,
integrity, conscientiousness, dependability, trustworthiness, and reliability for personnel
selection. Personnel Psychology, 49, 787-829.
Sarbanes-Oxley Act of 2002, Pub.L. No. 107-204, 116 Stat. 745, enacted July 30, 2002.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psy-
chology: Practical and theoretical implications of 85 years of research findings. Psychological
Bulletin, 124, 262-274.
Sturman, M. C., & Sherwyn, J. D. (2007). The truth about integrity tests: The validity and utility
of integrity testing for the hospitality industry. Ithaca, NY: The Center for Hospitality
Research, Cornell University.
Van Iddekinge, C. H., Roth, P. L., Raymark, P. H., & Odle-Dusseau, H. N. (2012). The
criterion-related validity of integrity tests: An updated meta-analysis. Journal of Applied
Psychology, 97, 499-530.
Wanek, J. E. (1999). Integrity and honesty testing: What do we know? How do we use it?
International Journal of Selection and Assessment, 7, 183-195.
Werner, S. H., & Joy, D. S. (1991). Incorporating an integrity assessment system into the per-
sonnel selection process: Some recommendations. In J. W. Jones (Ed.), Preemployment
honesty testing: Current research and future directions (pp. 223-228). New York, NY:
Quorum.
Author Biography
Saul Fine is an industrial psychologist specializing in personnel selection and assessment. He is
currently vice president of Research and Development at Midot Ltd., and an adjunct lecturer at
the University of Haifa, in Israel.
Psychological Testing in
Personnel Selection,
Part I: A Century of
Psychological Testing
By Wesley A. Scroggins, Ph.D., Steven L. Thomas, Ph.D., and Jerry A. Morris, Psy.D.
This article is the first in a three-part series that examines the development of
selection testing. Part I focuses on the historical development of personnel
selection testing from the late 19th century to the present, with particular attention
given to personality testing. Attention is given to the efforts of early industrial
psychologists that shaped and defined the role of testing in the scientific selection
of employees. Part II examines the development of methods and standards in
employment testing with particular emphasis on selection validity and utility.
Issues of selection fairness and discrimination in selection are explored as they
relate to psychological testing. Part III explores the development and application
of personality testing. The transient nature of models of personality is noted, and
current paradigms and the utility and fairness of personality testing for modern
organizations are discussed.
The application of psychological testing to human resource selection, particularly
the use of instruments designed to assess personality traits, has a long, colorful,
and somewhat contentious history. Personnel selection in general, and
the concomitant use of varied forms of psychological testing in particular, has its ori-
gins in the late 19th century. Much of the developmental work in the scientific meth-
ods of selection can be traced to the efforts of early industrial psychologists to support
the military through two world wars, as well as their contemporaneous marketing
efforts to have their craft applied to organizational problems. From the natural selec-
tion concepts that formed the foundations of Frederic Taylor’s scientific management,
through the informal techniques of early character analysis and to the modern appli-
cation of selection instruments based on statistical analyses of test reliability and valid-
ity, the use of tests and other techniques for the improvement of personnel selection
and performance has never been without controversy. Whether the tension was over
the proper role of testing professionals, the appropriate balancing of management
demands for efficiency and fairness to employees, or the usefulness of tests them-
selves, the unfolding history of psychological testing represents a microcosm of Amer-
ican business history.
Similarly, the study of personality has a rich and varied tradition within the field of
psychology. The controversy over the desirability of using personality testing to make
selection decisions has deep historical roots. Traditionally, many industrial
psychologists rejected the use of personality testing because they believed the practice
was unreliable and invalid. Indeed, one classic text in personnel testing devotes an
entire chapter to the special problems that exist in using personality testing in
selection.¹ Most of the early research on personality testing found low validity and
reliability coefficients, and literature reviews dating from the 1960s that reinforced the
shortcomings of personality testing² led to a move away from personality testing in
selection. Many HR practitioners, however, have continued to use personality testing
with an optimistic and enduring faith in its ability to discriminate between good and
poor job candidates.³
Contemporary researchers have pointed to many problems in personality testing
as explanations for its inability to predict job performance. Chief among these are that
there has never been a generally accepted definition of personality or an agreed-upon
set of personality traits. Theories and models of personality and personality traits have
ranged from Eysenck's 2 basic dimensions of personality to Cattell's 171 traits and have
included nearly everything in between.⁴ Only relatively recently has the
Big Five model of personality embraced the notion that a broad definition of
personality that collapses specific traits into more general personality dimensions can
be used to predict the broad set of behaviors that define job performance.⁵
This article explores the evolution of personnel selection testing in general, and
of personality testing in particular. We describe the historical development of
personality testing and the impact of the work of early industrial psychologists that has
shaped and defined the role of testing in the scientific selection of employees. We
highlight the transient nature of models of personality, the description of personality
traits, and the use of personality instruments while examining the evolution of
personality measures and the ways research has shaped the construct of personality
and its measures. Just as many of yesterday's models have lost their luster, today's
personality models and test instruments may be viewed very differently in the future.
This realization opens the door to exciting research and development possibilities, as
well as prospects for a renewed use for personality testing.
The Origins of Industrial Psychology
The roots of psychological testing lay in the origins of industrial psychology in the late
19th and early 20th centuries. The field represented the convergence of scholarship
and application from the disparate fields of psychology, engineering, and business. As
early as the 1880s, authors such as Henry R. Towne and Henry Metcalf had proposed
that business management, viewed as an art in the late 19th century, should be thought
of as a science and would benefit from engineering's professionalization because it had
foundations in and proclivity for science.⁶ Although schools of management science in
the engineering disciplines had been founded on the East Coast, late 19th-century
universities did not readily embrace either engineering or business curricula. However,
the Morrill Act of 1862 ushered in an era of change in higher education by promoting
the chartering of land-grant universities that moved away from offering a strictly liberal
arts education and toward technical education.⁷ Several well-known and prestigious
universities, including the Universities of Chicago, Pennsylvania, and California at
Berkeley, incorporated management and engineering programs in their curricula by
the beginning of the 20th century.⁸
Although psychologists, as practitioners of the traditionally scholarly discipline of
psychology, resisted the application of psychological models and theories to
managerial problems, individuals such as Walter Dill Scott and Hugo Munsterberg
founded the field of industrial psychology when they began to explore the serious
application of psychological principles to problems in education, law, marketing, and
management.⁹,¹⁰ The following years saw rapid growth in the application of industrial
psychology in the area of market psychology by practitioners who wanted to address
complex business problems. Among the tools those researchers deployed were
psychological tests aimed at addressing the growing problem of identifying individuals
who would be effective employees.
The Role of Scientific Management
The historical developments in management science and psychology that led to the
general acceptance and application of psychological testing are underexplored. One of
the most influential pioneers of the late 19th and early 20th centuries was Frederic W.
Taylor, who was an 1883 engineering graduate of the Stevens Institute of Technology
and an employee of Midvale Steel Company. Taylor's influence began with the
publication of his "A Piece-Rate System, Being a Step Toward Partial Solution of the
Labor Problem" in 1895.
The article was a prescriptive piece that addressed industrial efficiency problems
by scientifically analyzing work behaviors, establishing performance standards, and
selecting laborers using scientific methods.¹¹ Taylor's model of scientific management
allowed managers to use scientific principles to address the problem of soldiering (i.e.,
employees working at a contrived slow pace) and to establish job redesign and
incentive motivation systems.¹²
Importantly, Taylor also suggested that a rational justification for employment
policies was making wages contingent on meeting standards for job performance. The
standards Taylor proposed were based on time and motion studies of optimal job
performance. The idea was that tying payment to piece rate accelerated natural
selection and that individuals who were best suited to a task would earn the highest
wages while increasing productivity and lowering labor costs.¹³
Taylor thought that scientific management would usher in what he called the
"mental revolution," and he advocated scientific selection and training as the principle
for hiring, cooperation over individualism, and an equal division of work best suited to
management and employees.¹⁴,¹⁵ Taylor thought that efficiency started in the mind of
the worker. In Taylor's system of HR management, workers must be motivated by
incentives that are appropriately arranged to create drive and block soldiering. He held
that managers could establish contextual rewards that reach the internal mental state
of the worker and channel it into productivity. Thus, the roots of applying scientific
principles to the selection and other aspects of managing employees were established
in both the practice of management and the university training of HR professionals in
the early 1900s.¹⁶
After Taylor’s death in 1915, his successors, including Herrington Emerson
(founder of one of the first U.S. management consulting firms) and Frank Gilbreth
(famous refiner of motion studies related to bricklaying), carried on the scientific
management method and refined it by attempting to account for the mindset of
workers and the psychological aspects of the worker-manager relationship.^’^
Subsequently, the supporters of scientific management in industry and academia began
to have closer alliances with psychology. Lillian Gilbreth’s The Psychology of
Management was an early bridge between the disciplines of management engineering
and applied psychology. In 1919 Harlow Person was appointed managing director of
the Taylor Society, and he, as scientific management’s chief spokesperson, broadened
the group’s alliance with psychology to deal with the weaknesses of approaching the
human element in management strictly through quantitative methods. These events
stimulated a close relationship between HR selection and management and
psychology that led to numerous psychologists publishing industrial psychology
articles in the Taylor Society’s journal.
Concerns about balancing business and industry needs for efficiency with
workers’ needs were thrust to the forefront as proponents and practitioners of
scientific management began to discuss such constructs as “mental revolution,”
“natural selection,” and “optimal productivity.” The danger of seeking productivity and
efficiency at the expense of treating workers humanely loomed as the potential of
scientific management was increasingly applied to the workplace.
Scientific management became so popular in the early decades of the 20th
century that governments began to use its principles in the military.¹⁸ Opposition grew
to this HR strategy, however. By 1911 union opposition was so great that labor
denounced scientific management and called for strikes to combat it.¹⁹ The U.S.
Congress investigated the management system, and while laws limiting the application
of statistics to the hiring, retention, and promotion of employees were considered,
none were ever enacted.
The Roots of Psychological Testing
At the same time that this management revolution emphasizing the use of human
engineering within the business and engineering communities was occurring,
psychologists were applying scientific principles to business problems. And the first
marketable application in psychology was the psychological test.
In order to market themselves to businesses during the early 1900s, psychologists
began to describe themselves as "human engineers." Most specifically, psychologists
wanted to solicit support for the use of tests for the scientific selection and evaluation
of employees.²⁰
The use of psychological testing in law and business was promoted by
psychologists such as Hugo Munsterberg in the early 20th century.²¹ A German
immigrant who desired to make a positive impact upon American society, Munsterberg
used popular media to take psychological testing out of the research laboratory and to
the attention of industry and society.²² By 1916 Walter Dill Scott became the first
American academic to carry the title of professor of applied psychology, and students
could get a graduate degree in applied psychology with private business support at
Carnegie Technical Institute. Scott later headed the Committee on Classification of
Personnel for the Army and developed rating scales for officer promotion. He also
developed the U.S. Army's tests for skill assessment and established personnel
departments in all of the Army's divisions.²³
During 1916 the National Academy of Sciences created the National Research
Council (NRC) to organize scientific support for the impending U.S. war effort. The
NRC subcommittee, called the Committee of Psychology, was led by Robert Yerkes,
who was then the president of the American Psychological Association. In the spring of
1917 the United States entered World War I, and a prominent group of Harvard
University psychologists, including Yerkes and doctors Thorndike, Thurstone, and Otis,
postulated that the war effort could be helped by psychological methods to select,
categorize, and make assignment and training decisions for troops.²⁴ Walter Dill Scott
lobbied for the importance of placement testing for placing soldiers into jobs that
matched their abilities. Scott and his committee developed 112 tests to place people in
83 different jobs for the military, and they administered their tests to about 3.5 million
soldiers.
Although there was considerable reluctance by many in the military to accept the
legitimacy of testing, the fact that the government budgeted for testing and accepted
test results provided a degree of public validation of testing.²⁵ The wide use of
psychological testing for selection and classification, motivation, and training decisions
had begun by the end of World War I.
Psychological Testing After World War I
During the years between World War I and World War II, the business environment
continued to evolve, and organizational complexity increased at the same rate as
organizational size. The pressures organizations felt regarding competition and
increased labor regulation provided even more impetus for the development of
rational management systems and the application of scientific methods to improve
performance. As a result, a number of individuals referred to by Van De Water as
"entrepreneurial psychologists" attempted to address managers' and employees' needs
by expanding the boundaries of psychology through self-promotion and the
establishment of professional organizations, journals, and consulting services.²⁶
One movement responsible for the marketing of psychological testing and the
application of scientific and psychological principles to business problems originated in
1916, when G. Stanley Hall, John Wallace Baird, and Ludwig Reinhold Geissler founded
the Journal of Applied Psychology (JAP). During the first 12 years of that journal’s
publication, business leaders were invited to participate, and many prestigious
companies submitted material. By 1930, however, the practitioner content was largely
replaced by empirical articles that critically examined a number of vocations and
business practices, especially employee selection techniques. Several selection tools,
including employment interviewing, letters of reference, character analysis, and
photographs as employee selection instruments, were discredited by studies reported
in JAP. Psychological instruments were developed to address these problems.²⁷
As psychologists used experimental studies and the scientific method to discredit
competitors' instruments and to establish the value of their own instruments,
standards for test development and use emerged. Also, recommendations for the
training of industrial psychologists were developed, and test publishing companies like
the Psychological Corporation appeared. Increasingly, industrial psychologists drew a
clear distinction between industrial psychology and scientific management.²⁸
Psychologists emphasized the importance of individual human factors such as
personality and intelligence as determinants of work behavior, in contrast to scientific
management's focus on contextual factors such as incentive systems.²⁹ Time and
motion studies were discredited by industrial psychologists who saw scientific
management's failure to consider the human element in the workplace to be a critical
weakness. They saw job performance as related to individual differences in satisfaction,
personality, or intelligence, all of which could be measured by psychological
techniques.³⁰ With this shift in paradigms, psychologists attempted to seize the high
scientific ground of developing, evaluating, and validating employee selection and
placement techniques and instruments.
World War II and Formal Military and Industrial and
Organizational Psychology
At the start of World War II, the U.S. military, having considerable experience with
psychological testing during wartime selection and placement processes, set up the
personnel testing section of the National Guard’s Army Adjunct General’s Office. The
federal government also set up the NRC Emergency Committee on Psychology and its
subcommittee, the Committee on Service Personnel Selection and Training, as well as
the Army Air Force Aviation Psychology Program.̂ ^ During the war, military psychology
and psychological services were firmly established as essential to the nation’s defense
efforts. By the early 1940s, psychologists were able to assess and validate the
techniques of classification and training, and significant advances were made in the
analysis of the role of human factors in the design and operation of equipment, job
performance evaluation, testing, training technology, and adaptation to special
environments. In 1946 the American Psychological Association established the Division
of Military Psychology (Division 19) to create a forum for military research and to
advance psychology in the military.
The capacity of psychological tests to find and predict merit was well documented
by military psychologists in the United States and other countries by the 1940s.^32, 33
Because of the successes of psychology during World War II, Congress established the
Office of Naval Research to support scientific research.^34 The National Science
Foundation was established in 1950 to provide a continued federal research effort, and
the U.S. Air Force eventually merged several programs in 1954 into the Air Force
Personnel and Training Research Center, which became the Air Force Human
Resources Laboratory in 1968.^35 The Personnel Research Section of the Army Adjutant
General’s Office evolved into the current Army Research Institute for the Behavioral
Sciences in 1972.
The need to classify and select large numbers of recruits for military service led, in
1940, to the formation of the Committee on Classification of Military Personnel. The
committee was established to work with the adjutant general’s personnel testing
section.
The development and dissemination of the Army General Classification Test to
replace the U.S. Army’s system of Alpha and Beta tests developed in World War I was a major
development in personnel selection and classification testing. Psychologists developed
aptitude tests and tests of special skills, developed assessment center techniques, and
set the stage for the later development of the Armed Forces Qualification Test (AFQT)
and the Armed Services Vocational Aptitude Battery (ASVAB).^36 In what became the
nation’s largest personnel system—processing over 800,000 recruits annually—
psychological tests for justifying selection, placement, and training decisions became
institutionalized and accepted by the 1950s.
Personality Testing: A Field in Search of Respect
The disparate fields of psychology, engineering, and management eventually merged to
address the practical application of their respective fields to organizational problems.
However, the acceptance of psychological testing and its successful application to
organizational issues was not uniform across all areas. While some forms of
psychological testing gained wide acceptance and public support, other forms of
testing did not. For example, the utility of cognitive ability tests in selection is well
established, and one can make the case that the precise effectiveness of these tests,
and their ability to predict job skill acquisition and certain types of performance on an
individual and reference group basis, is one of the reasons for such careful regulation of
these tools. The economic value of using these selection instruments has been well
established, with research indicating that high selection cutoff scores on valid selection
tools can identify superior workers who produce outcomes as much as 48
percentage points higher than those of the average worker in the same category
on outcome measures for managerial or professional positions.^37
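The article does not show the computation behind such utility estimates. For illustration only, a standard formalization from the selection-utility literature (the Brogden-Cronbach-Gleser model, which underlies utility research of the kind cited in note 37) expresses the expected dollar gain from a selection program as

$$\Delta U = N_s \, T \, r_{xy} \, SD_y \, \bar{z}_x - N_a \, C,$$

where $N_s$ is the number of applicants hired, $T$ their average tenure, $r_{xy}$ the test’s criterion-related validity, $SD_y$ the standard deviation of job performance in dollar terms, $\bar{z}_x$ the mean standardized test score of those hired (which rises as the cutoff score is raised), $N_a$ the number of applicants tested, and $C$ the testing cost per applicant.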
Personality tests are a somewhat different story. The use of these tests in
employment selection is much more controversial. In contrast to cognitive tests, the
prevailing view of personality testing in personnel selection is that it lacks validity, that
the tests are easily faked, and that the tests are generally unsuitable for preemployment
screening. Blinkhorn and Johnson concluded that the generally low validities of
personality measures and the problem of faking make it difficult to recommend
personality measures as an alternative in employment selection.^38
Many of the problems in personality testing stem from historical controversies
over the essence of personality, its definition, the descriptions and measures of
personality traits, and how personality traits interact with behavior and with each other.
Prior to the development of the Big Five personality models, general agreement on the
dimensions of personality was lacking.^39 Indeed, the Handbook of Industrial and
Organizational Psychology, in its 1976 chapter on personality, describes a confusing set
of motivation models, trait theories, and personality instruments originating from
Hippocrates and continuing to the 1960s. While an examination of these models and
theories is far beyond the scope of this article, the ideas’ range and breadth serve to
underscore the problems in defining suitable personality measures for selection
purposes. Indeed, the textbook chapter provides a list of more than 30 personality
instruments, including brief and long self-report measures, measures of values,
vocational interest measures, and projective techniques.^40 The problem is that many of
these measures are clinical or developmental instruments inappropriately used in
personnel selection, while others have not demonstrated sufficient reliability or validity
to be adequate as selection measures.^41
Thus, the usefulness of personality testing in selection has traditionally been a
source of controversy subject to widely varying opinions.^42 While common sense tells
us that personality should influence performance, and studies show fairly
consistent agreement on the sets of personality traits commonly possessed by
successful managers, historical reviews of the research exploring the validity of
personality testing have generally concluded, pessimistically, that personality testing has
little utility.^43, 44 Recent research in personality testing has altered these conclusions,
and there seems to be considerably more optimism about the role of personality
testing in selection.^45
In Parts II and III of this series, the development of psychological testing and the
role of personality testing in selection are further explored. The second article will
describe refinements in the methods used to evaluate selection success and explore
the emerging post-Title VII issues related to selection fairness and testing-induced
discrimination in the form of adverse impact. The third article will focus on recent
developments in personality testing and its utility as a selection tool.
Notes
1. Guion, R. M. (1965). Personnel testing. New York: McGraw-Hill.
2. Guion, R. M., & Gottier, R. F. (1966). Validity of personality measures in personnel selection. Personnel Psychology, 18, 135-164.
3. Gatewood, R. D., & Field, H. S. (1998). Human resource selection (4th ed.). Fort Worth, TX: Dryden Press.
4. Dunnette, M. D. (Ed.). (1976). Handbook of industrial and organizational psychology. Chicago: Rand McNally.
5. Heneman, H. G., III, Judge, T. A., & Heneman, R. L. (2000). Staffing organizations (3rd ed.). Burr Ridge, IL: Irwin McGraw-Hill.
6. Van De Water, T. L. (1997). Psychology’s entrepreneurs and the marketing of industrial psychology. Journal of Applied Psychology, 82(4), 486-499.
7. Van De Water, T. L. (1997). Ibid.
8. Wren, D. A. (1994). The evolution of management thought (4th ed.). New York: Wiley.
9. Hearnshaw, L. S. (1987). The shaping of modern psychology. London: Routledge & Kegan Paul.
10. Mankin, D., Ames, R. E., Jr., & Grodski, M. A. (Eds.). (1980). Classics of industrial and organizational psychology. Oak Park, IL: Moore Publishing Company.
11. Taylor, F. W. (1895). A piece-rate system, being a step toward partial solution of the labor problem. Transactions of the American Society of Mechanical Engineers, 16, 865-883.
12. Moorhead, G., & Griffin, R. W. (1995). Organizational behavior: Managing people and organizations (4th ed.). New York: Houghton Mifflin Company.
13. Taylor, F. W. (1916). The principles of scientific management. Reprinted in D. Mankin, R. E. Ames, Jr., & M. A. Grodski (Eds.). (1980). Classics of industrial and organizational psychology (pp. 15-28). Oak Park, IL: Moore Publishing Company.
14. Van De Water, T. L. (1997). Op. cit.
15. Taylor, F. W. (1916). Op. cit.
16. Taylor, F. W. (1916). Op. cit.
17. Van De Water, T. L. (1997). Op. cit.
18. Peterson, P. (1990). Fighting for a better Navy: An attempt at scientific management (1905-1912). Journal of Management, 16(1), 151-166.
19. Nadworny, M. J. (1955). Scientific management and the unions: 1900-1932. Cambridge, MA: Harvard University Press.
20. Dewey, J. (1922). Education as engineering. The New Republic, 32, 89-91; Van De Water, T. L. (1997). Op. cit.; Dodge, R. (1919). Mental engineering during the war. American Review of Reviews, 59, 504-508.
21. Munsterberg, H. (1910). American problems. New York: Moffat, Yard and Co.; Mankin, D., Ames, R. E., Jr., & Grodski, M. A. (Eds.). (1980). Op. cit.
22. Van De Water, T. L. (1997). Op. cit.
23. Driskell, J. E., & Olmstead, B. (1989). Psychology and the military: Research applications and trends. American Psychologist, 44(1), 43-54.
24. Driskell, J. E., & Olmstead, B. (1989). Ibid.
25. Van De Water, T. L. (1997). Op. cit.
26. Van De Water, T. L. (1997). Op. cit.
27. Van De Water, T. L. (1997). Op. cit.
28. Van De Water, T. L. (1997). Op. cit.
29. Viteles, M. (1932). Industrial psychology. New York: W. W. Norton.
30. Bingham, W. V. (1924a). Intelligence scores and business success. Journal of Applied Psychology, 8, 1-22; Bingham, W. V. (1924b). What industrial psychology asks of management. Bulletin of the Taylor Society, 9, 243-248; Bingham, W. V. (1928). Industrial psychology: Its progress in the United States. Bulletin of the Taylor Society, 13, 187-198.
31. Driskell, J. E., & Olmstead, B. (1989). Op. cit.
32. Vernon, P. (1947). Research on personnel selection in the Royal Navy and British Army. American Psychologist, 2, 35-51.
33. Flanagan, J. (1947). Scientific development of the use of human resources: Progress in the Army Air Forces. Science, 105, 57-60.
34. Lubinski, D. (1996). Applied individual differences research and its quantitative methods. Psychology, Public Policy, and Law, 2(2), 187-203.
35. Driskell, J. E., & Olmstead, B. (1989). Op. cit.
36. Driskell, J. E., & Olmstead, B. (1989). Op. cit.
37. Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262-274.
38. Blinkhorn, S., & Johnson, C. (1990). The insignificance of personality testing. Nature, 348(6303), 671-672.
39. Heneman, H. G., III, Judge, T. A., & Heneman, R. L. (2000). Op. cit.
40. Hough, H. (1976). Personality and personality assessment. In M. D. Dunnette (Ed.), Handbook of industrial and organizational psychology (pp. 571-607). Chicago: Rand McNally.
41. Heneman, H. G., III, Judge, T. A., & Heneman, R. L. (2000). Op. cit.
42. Gatewood, R. D., & Field, H. S. (1998). Op. cit.
43. Grimsley, G., & Jarrett, H. (1975). The relation of past managerial achievement to test measures obtained in the employment situation: Methodology and results—II. Personnel Psychology, 28, 215-231; Bray, D. W., Campbell, R. J., & Grant, D. L. (1979). Formative years in business. Huntington, NY: Robert E. Krieger.
44. Guion, R. M., & Gottier, R. F. (1966). Op. cit.
45. Heneman, H. G., III, Judge, T. A., & Heneman, R. L. (2000). Op. cit.
Authors
Wesley A. Scroggins, Ph.D.
Assistant Professor of Management
Department of Management
Missouri State University
901 S. National
Springfield, MO 65897
(417) 836-5505
wesscroggins@missouristate.edu
Steven L. Thomas, Ph.D.
Professor of Management
Missouri State University
901 S. National
Springfield, MO 65897
(417) 836-5076
steventhomas@missouristate.edu
Jerry A. Morris, Psy.D.
Consultant
Morris & Morris, Inc.
815 S. Ash
Nevada, MO 64772
(417) 667-8352
morris19@ipa.net
Wesley A. Scroggins, Ph.D., is an assistant professor of management at Missouri State
University. He earned his Ph.D. from New Mexico State University and has published several
articles in the HR management areas of selection and compensation.
Steven L. Thomas, Ph.D., is a professor of management at Missouri State University.
He earned his Ph.D. from the University of Kansas and has published extensively in
labor relations and HR management.
Jerry A. Morris, Psy.D., has an M.B.A. from Missouri State University and is a licensed
psychologist, director of a mental health care center, author, and consultant.
Advancing Personality Assessment Terminology:
Time to Retire “Objective” and “Projective”
As Personality Test Descriptors
Gregory J. Meyer
Department of Psychology
University of Toledo
John E. Kurtz
Department of Psychology
Villanova University
For decades psychologists have classified personality tests
dichotomously as objective or projective. These terms appear
in scientific articles and textbooks and have become so en-
trenched that it is common to see separate courses in graduate
clinical programs using these labels in course titles (e.g.,
“Objective Assessment,” “Projectives”). In the interest of ad-
vancing the science of personality assessment, we believe it
is time to end this historical practice and retire these terms
from our formal lexicon and general discourse describing the
methods of personality assessment.
For personality tests, the term objective typically refers to
instruments in which the stimulus is an adjective, proposi-
tion, or question that is presented to a person who is required
to indicate how accurately it describes his or her personality
using a limited set of externally provided response options
(true vs. false, yes vs. no, Likert scale, etc.). What is objective
about such a procedure is that the psychologist administering
the test does not need to rely on judgment to classify or inter-
pret the test-taker’s response; the intended response is clearly
indicated and scored according to a pre-existing key. As a re-
sult, however, the necessity for judgment is passed on to the
test taker. She must interpret the question, consider her per-
sonal characteristics, evaluate herself relative to others as
best she can, decide the extent to which the characteristic fits
her personality, and then choose whether to honestly convey
this information in her response.
On the other hand, the term projective typically refers to
instruments in which the stimulus is a task or activity that is
presented to a person who is required to generate a response
with minimal external guidance or constraints imposed on
the nature of that response. What is projective in a test like
this is the requirement to generate a response in the face of
ambiguity; in so doing, the person projects or puts forward
elements of her personal characteristics.
Unfortunately, the terms objective and projective carry
multiple, often unclear, meanings, including some connota-
tions that are very misleading when applied to personality as-
sessment instruments and methods. For instance, the term
objective implies accuracy and precision that is impervious
to biasing influences. These are desirable and positive con-
notations. One problem is that these positive connotations
are not fully warranted for the inventories to which they typi-
cally refer. Scoring errors are certainly one potential concern
(e.g., Allard & Faust, 2000). More substantively, however, if
the kind of self-report scales that are classified as objective
actually were “objective” in a meaningful sense of that word,
then there would not be such a huge literature examining the
various response styles and biases that affect scores derived
from these instruments. In fact, the literature addressing the
topic of response styles, malingering, and test bias in these
measures appears larger than the literature on any other fo-
cused issue concerning their validity or application. Beyond
bias and frank distortion, Meehl (1945) pointed out more
than half a century ago that the processes influencing a test-
taker’s response include ambiguity inherent in the test items,
limitations in self-knowledge or self-perception, personal
dynamics, and even projections. Another serious issue that
results from applying the term objective to certain personal-
ity instruments is that those so labeled will tend to be viewed
positively simply by virtue of the term’s positive connota-
tions. Tests that are not so categorized will tend to be viewed
less positively, regardless of psychometric data, because they
are, after all, not objective. Accordingly, an unintended con-
sequence of this terminology is that it may encourage or per-
petuate prejudices regarding the many alternative methods of
assessment that do not carry the objective label.
At the same time, the connotations of the term projective
also do not always apply when considering the instruments
typically classified as projective. For instance, responses to
the Rorschach inkblots often have more to do with stimulus
classification and problem solving styles than to projection
in a classical Freudian sense of the term, where undesirable
personal feelings or impulses are seen as residing outside the
self (see Exner, 1989). Similar difficulties emerge when con-
sidering the expanded definition of the term projective as
Frank (1939) first defined it in reference to types of personal-
ity tests. Frank considered a projective test one that would
induce the individual to reveal his way of organizing experi-
ence by giving him a field (objects, materials, experiences)
with relatively little structure and cultural patterning so that
the personality can project upon that plastic field his way of
seeing life, his meanings, significances, patterns, and espe-
cially his feelings. Thus we elicit a projection of the individ-
ual personality’s private world because he has to organize the
field, interpret the material and react affectively to it. … The
important and determining process is the subject’s personal-
ity which operates upon the stimulus-situation as if it had a
wholly private significance for him alone or an entirely plas-
tic character which made it yield to the subject’s control.
(italics in the original; pp. 402–403)
This conceptualization of a projective test implies that
stimulus features or task requirements are essentially imma-
terial; personality characteristics will shine through with
force and clarity regardless of the medium. Although desir-
able, this view is clearly incorrect. For instance, it is well
documented that the largest source of variability in Ror-
schach scores is the number and complexity of responses
given (e.g., Meyer, 1993, 1997). The personality characteris-
tics associated with this style of responding are interpretively
quite important in their own right. However, the presence of
this response complexity confounds efforts to interpret the
test scores that psychologists are most interested in interpret-
ing (e.g., Exner, 2003).1 The situation is similar with the-
matic storytelling techniques, in which the number of words
given and the specific stimulus pictures selected for use exert
a powerful influence on the final scores obtained (e.g.,
Blankenship et al., 2006; Hibbard et al., 1994; Pang &
Schultheiss, 2005).
Thus, the old and familiar terminology of objective and pro-
jective personality tests has misleading connotations that will
not serve the field well as we seek to have a more differentiated
understanding of assessment methods. A relevant question
then becomes: What would be better alternative terminology?
It is fairly easy to identify reasonable alternatives to sup-
plant the term objective. Almost exclusively, this term has
been applied to structured questionnaires that are completed
by the target person him or herself. Consequently, a reason-
able alternative is to refer to these tests as “self-report inven-
tories” or “patient-rated questionnaires.” Moreover, to
advance the science of assessment, it is equally important to
differentiate self-report inventories from inventories com-
pleted by knowledgeable informants. Given that sources of
information in personality assessment are far from inter-
changeable (e.g., Achenbach, Krukowski, Dumenci, &
Ivanova, 2005; Achenbach, McConaughy, & Howell, 1987;
Costa & McCrae, 1992; De Los Reyes & Kazdin, 2005;
Kraemer, Measelle, Ablow, Essex, Boyce, & Kupfer, 2003;
Meyer, 2002; Meyer et al., 2001), it would be optimal to fur-
ther differentiate all questionnaire methods by specifying the
type of informant providing judgments. Thus, peer ratings
would be labeled as such and differentiated from spouse-
report scales, parent-rated questionnaires, and so forth.
It is not as easy to identify a single term or phrase that could
supplant the term projective. In fact, when discussing this is-
sue with colleagues, disagreements about a suitable substitute
appear to be one of the greatest obstacles to change. No single
term seems fully adequate. The instruments that are typically
subsumed under the projective label include the Rorschach
(1921/1942) and other inkblot tests (e.g., Holtzman, Thorpe,
Swartz, & Herron, 1961), Murray’s (1943) Thematic
Apperception Test and the subsequently developed Picture
Story Exercise stimuli (e.g., Smith, 1992), sentence comple-
tion measures, and various figure drawing tasks (e.g., Naglieri
& Pfeiffer, 1992). The wide differences among these tasks
make it challenging to find a suitable alternative term that ac-
commodates all of their diverse features. Some possibilities
include “performance tasks,” “behavioral tasks,” “construc-
tive methods,” “free response measures,” “expressive person-
ality tests,” “implicit methods,” or even “attributive tests.” It is
unlikely that any one of these labels would satisfy all experts.
However, it is the very difficulty of finding a suitable alterna-
tive that speaks to the inadvisability of using a global term to
characterize the essence of all these measures. In turn, this
highlights the need to drop the term projective from the assess-
ment method lexicon.
One of the initial steps to advance the scientific understand-
ing of any phenomenon is to name and classify its components
in a meaningful way. The unsuitable and primitive nature of
the term projective is revealed when trying to arrive at an um-
brella label to characterize tasks as diverse as drawing one’s
family, telling stories in response to pictures, and stating what
an inkblot looks like. Applying a global and undifferentiated
term to such a diverse array of assessment tasks seems akin to
physicians classifying medical tests as either “visual tests” or
“nonvisual tests,” with the visual category including tasks
ranging from observing reflexes to endoscopy to MRI, and the
nonvisual category including tasks ranging from palpation
methods (e.g., abdominal tenderness) to olfactory methods
(e.g., odors indicative of infection) to auditory methods (e.g.,
detecting wheezes with a stethoscope).

1. The confounding influence of this so-called “first factor” variance
is pervasive with other instruments as well. An excellent discussion
of the problem and of a sophisticated effort to mitigate its influence
on the MMPI–2 can be found in the recently published Special Issue
of the Journal of Personality Assessment (Meyer, 2006) dealing with
the MMPI–2 Restructured Clinical Scales (Tellegen et al., 2003).
Just as it would be regressive to apply such a simplistic cate-
gorization to medical tests, the field of personality assessment
will not advance by relying on crude terminology to globally
characterize all the tasks that are not self-report questionnaires
or informant rating scales. Thus, if one of the substitute terms
noted above does not seem suitable to replace projective, it
would be most optimal for clinicians, researchers, and teach-
ers to simply refer to assessment tasks by their specific name,
for example, the Rorschach Inkblot Method, Holtzman Ink-
blot Task, Murray’s TAT, Loevinger’s SCT. The Journal of
Personality Assessment will facilitate the transition to more
adequately differentiated assessment terminology by asking
authors to avoid referring to categories of personality tests as
objective or projective. We hope other assessment journals
will join this effort and adopt a similar position.
This editorial guideline is not meant to imply that the words
objective and projective cannot be used in the context of refer-
ring to specific data from personality instruments. It is cer-
tainly true that all personality tests can provide more or less
objective data. It is also the case that instruments like the Ror-
schach or TAT can capture projected personality characteris-
tics, whether defined narrowly as by Freud or more broadly as
by Frank, and this can also occur when patients complete self-
report inventories (Meehl, 1945). There is no problem if au-
thors carefully and deliberately choose these terms to further
scientific communication (e.g., when one is describing as-
pects of inkblot responses that are truly believed to indicate
projected dynamics). Rather, our objection is with the reflex-
ive use of historically ingrained terms that poorly describe the
complex and distinctive methods used to assess personality.
ACKNOWLEDGMENTS
Our thoughts on this topic have benefited from helpful and
insightful input from many people. Those who commented
on this document included Robert Bornstein, Anita Boss,
Virginia Brabender, Philip Caracena, Robert Erard, Barton
Evans, Leonard Handler, Radhika Krishnamurthy, Robert
McGrath, Joni Mihura, David Nichols, Bruce Smith, Donald
Viglione, Irving Weiner, and Jed Yalof.
REFERENCES
Achenbach, T. M., Krukowski, R. A., Dumenci, L., & Ivanova, M. Y. (2005).
Assessment of adult psychopathology: Meta-analyses and implications of
cross-informant correlations. Psychological Bulletin, 131, 361–382.
Achenbach, T. M., McConaughy, S. H., & Howell, C. T. (1987). Child/adoles-
cent behavioral and emotional problems: Implications of cross-informant
correlations for situational specificity. Psychological Bulletin, 101, 213–232.
Allard, G., & Faust, D. (2000). Errors in scoring objective personality tests.
Assessment, 7, 119–129.
Blankenship, V., Vega, C. M., Ramos, E., Romero, K., Warren, K., Keenan,
K., et al. (2006). Using the multifaceted Rasch model to improve the
TAT/PSE measure of need for achievement. Journal of Personality As-
sessment, 86, 100–114.
Costa, P. T., Jr., & McCrae, R. R. (1992). Revised NEO Personality Inventory
(NEO–PI–R) and NEO Five-Factor Inventory (NEO–FFI) professional
manual. Odessa, FL: Psychological Assessment Resources.
De Los Reyes, A., & Kazdin, A. E. (2005). Informant discrepancies in the
assessment of childhood psychopathology: A critical review, theoretical
framework, and recommendations for further study. Psychological Bulle-
tin, 131, 483–509.
Exner, J. E. (1989). Searching for projection in the Rorschach. Journal of
Personality Assessment, 53, 520–536.
Exner, J. E. (2003). The Rorschach: A Comprehensive System (4th ed.). New
York: Wiley.
Frank, L. K. (1939). Projective methods for the study of personality. Journal
of Psychology, 8, 389–413.
Hibbard, S., Farmer, L., Wells, C., Difillipo, E., Barry, W., Korman, R., &
Sloan, P. (1994). Validation of Cramer’s Defense Mechanism Manual for
the TAT. Journal of Personality Assessment, 63, 197–210.
Holtzman, W. H., Thorpe, J. S., Swartz, J. D., & Herron, E. W. (1961). Ink-
blot perception and personality. Austin: University of Texas.
Kraemer, H. C., Measelle, J. R., Ablow, J. C., Essex, M. J., Boyce, W. T., &
Kupfer, D. J. (2003). A new approach to integrating data from multiple infor-
mants in psychiatric assessment and research: Mixing and matching contexts
and perspectives. American Journal of Psychiatry, 160, 1566–1577.
Meehl, P. E. (1945). The dynamics of “structured” personality tests. Journal
of Clinical Psychology, 1, 296–303
Meyer, G. J. (1993). The impact of response frequency on the Rorschach
constellation indices and on their validity with diagnostic and MMPI–2
criteria. Journal of Personality Assessment, 60, 153–180.
Meyer, G. J. (1997). On the integration of personality assessment methods: The
Rorschach and MMPI. Journal of Personality Assessment, 68, 297–330.
Meyer, G. J. (2002). Implications of information-gathering methods for a re-
fined taxonomy of psychopathology. In L. E. Beutler & M. Malik (Eds.),
Rethinking the DSM: Psychological perspectives (pp. 69–105). Washing-
ton, DC: American Psychological Association.
Meyer, G. J. (Ed.). (2006). The MMPI–2 Restructured Clinical Scales [Spe-
cial Issue]. Journal of Personality Assessment, 87(2).
Meyer, G. J., Finn, S. E., Eyde, L., Kay, G. G., Moreland, K. L., Dies, R. R.,
et al. (2001). Psychological testing and psychological assessment: A re-
view of evidence and issues. American Psychologist, 56, 128–165.
Murray, H. A. (1943). Thematic Apperception Test manual. Cambridge,
MA: Harvard University Press.
Naglieri, J. A., & Pfeiffer, S. I. (1992). Validity of the Draw A Person:
Screening Procedure For Emotional Disturbance with a socially-
emotionally disturbed sample. Psychological Assessment, 4, 156–159.
Pang, J. S., & Schultheiss, O. C. (2005). Assessing implicit motives in U.S. col-
lege students: Effects of picture type and position, gender and ethnicity, and
cross-cultural comparisons. Journal of Personality Assessment, 85, 280–294.
Rorschach, H. (1942). Psychodiagnostics (5th ed.). Berne, Switzerland:
Verlag Hans Huber. (Original work published 1921)
Smith, C. P. (Ed.). (1992). Motivation and personality: Handbook of the-
matic content analysis. New York: Cambridge University Press.
Tellegen, A., Ben-Porath, Y. S., McNulty, J. L., Arbisi, P. A., Graham, J. R.,
& Kaemmer, B. (2003). The MMPI-2 Restructured Clinical Scales: De-
velopment, validation, and interpretation. Minneapolis: University of
Minnesota Press.
Gregory J. Meyer
Department of Psychology
Mail Stop 948
University of Toledo
2801 Bancroft Street
Toledo, OH 43606
Email: gmeyer@utnet.utoledo.edu
Measurement and Evaluation in
Counseling and Development
Volume 42 Number 1
April 2009 31-45
© 2009 The Author(s)
10.1177/0748175609333561
http://mec.sagepub.com
Counseling and Testing: What
Counselors Need to Know About
State Laws on Assessment and Testing
Kim A. Naugle
Eastern Kentucky University, Richmond
This article discusses testing in counseling, the history of psychology’s attempts to restrict
access to testing, and the potential impact on the public. Counselors are encouraged to obtain
appropriate training in assessment and to understand that testing is not only consistent with
fair testing policies but also essential for ethical practice.
Keywords: testing; assessment; state law; fairness; ethical
Counselors are often uncertain about the role that assessment—particularly
testing—should play in their practice. Regard-
less of the setting in which they work or
their specialization area, counselors need to
be aware of the ethical and legal role that
assessment plays in their professional prac-
tice. Counseling assessment, including
various forms of testing, has always been
interwoven within the counselor’s role. In The
Standards for Educational and Psychological
Testing, the American Educational Research
Association, American Psychological Asso-
ciation, and the National Council on Measure-
ment in Education (1999) define assessment as
“any method used to measure characteristics
of people, programs, or objects” (p. 2).
Counseling assessment includes several
types of measurement instruments and tools,
many of which have been in use for several
centuries. For instance, proficiency testing
was evident in China as early as 2200 bce
(Cohen & Swerdlik, 1999). The true launch
of testing, as viewed in modern times, came
in the 19th century with the work of Francis
Galton, who is credited with formulating such
assessment tools as questionnaires, rating
scales, and self-report inventories. Galton
strongly influenced American psychologist
James Cattell, among others; in fact, it was
Cattell who coined the term mental test.
Other contributors to counseling assess-
ment include Alfred Binet, who began work
with what has become the modern form of
intelligence tests, and L. M. Terman, who
translated Binet’s work into the Stanford–
Binet Intelligence Test. The Army Alpha and Army Beta group-administered
intelligence tests of the World War I era were also major contributions, as was the
first publication of the Mental Measurements Yearbook, in 1939, which marked the
beginning of a resource for identifying and
evaluating assessment instruments. Other
significant contributions include Hathaway
and McKinley’s Minnesota Multiphasic
Personality Inventory (developed in the
early 1940s), minimum competency testing
Author’s Note: Correspondence concerning this
article should be addressed to Kim A. Naugle, College
of Education, Eastern Kentucky University, 521
Lancaster Avenue, Combs Room 420, Richmond, KY
40475; e-mail: kim.naugle@eku.edu.
(in the 1970s), and the current use of
computer appraisals, as well as updated
revisions of past inventories.
Authentic testing is one of the assessment
movements of the 1990s that continues to
influence the field of educational assessment.
It has strongly influenced the testing practices
of teachers and school systems in particular
in assessing the academic progress of stud-
ents. Rather than rely on annual achievement
tests to assess students, the teachers compile
student portfolios to provide a more inclusive
source of measurement (Whiston, 2000).
This movement has been a major force in
assessment in Kentucky schools as part of
the Kentucky Education Reform Act—first with the Kentucky Instructional Results
Information System and now with the Commonwealth Accountability Testing System
(Kentucky Department of Education, 2000).
As testing practices and assessment tools
have developed and evolved throughout
the last century, so has the need for their
availability and the availability of trained
professionals to administer, score, and inter-
pret the results. Counselors and other mental
health professionals with the appropriate
training and competency in assessment have
become increasingly important in filling this
need. Assessment is not just used for gathering
data for doctoral studies and publications; it is
a tool through which counselors, clinicians,
and other mental health professionals are able
to measure such human constructs as emotion,
intelligence, personality, self-esteem, and
aptitude. These constructs cannot always be directly and conclusively evaluated by
observation and interview alone, so counselors and other qualified professionals can
perform other forms of assessment (e.g., tests). Whiston (2000) noted that behavior is
sampled in many ways, including how individuals speak and respond to questions, and
that a test, too, samples a person’s behavior at that instant only. By assessing these samples
of human behavior, the professional is better
equipped to evaluate, define, and diagnose the
client’s problem; develop and implement
effective treatment plans; and have a gauge to
rate the counseling process.
Test Publisher–Recommended
Qualifications of Test Users
Test publishers have designated different levels of responsibility for monitoring the
competencies of those who purchase and utilize assessment instruments. The
Psychological Corporation (n.d.), for example, has issued four levels of competency for
individuals, organizations, and agencies who are interested in purchasing tests—
specifically, Levels A, B, C, and Q. Level A tests currently require no qualifications
relating to purchase. Level B requirements include a master’s degree in either
psychology or education, appropriate training in assessment, or membership in a
professional association that requires assessment training. Level C qualifications
include a doctorate in psychology or education, the appropriate training in assessment,
or the validation of licensure/certification that requires the professional to have the
appropriate training and experience in counseling assessment. Level Q purchase
qualifications specify a background relative to the testing purchase, as well as training
in the ethical use, administration, and interpretation of tests.
This leveling process requires completion of a qualifications form for all levels, as well
as the professional’s attesting to having the training in assessment as mandated by the
guidelines listed in the Standards for Educational and Psychological Testing (American
Psychological Association, American Educational Research Association, & National
Council on Measurement in Education, 1999). These guidelines state that it is the
experience, training, and certification held that should be the deciding factors of
eligibility to use tests. However, the 1985 revision of the standards—jointly authored
by representatives of the American Educational Research Association, the American
Psychological Association, and the National Council on Measurement in Education—
does not recommend the use of classification levels (Moreland, Eyde, Robertson,
Primoff, & Most, 1995).
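As a purely illustrative sketch (not any publisher’s actual system), the purchaser-qualification levels described above can be read as a small rule table. The following Python fragment, with hypothetical names throughout, encodes one reading of those rules:

    # Hypothetical sketch of the A/B/C/Q purchaser-qualification levels
    # described above; the rule structure is one reading of the text,
    # not the Psychological Corporation's actual procedure.
    LEVEL_RULES = {
        "A": [],  # currently no purchase qualifications required
        "B": ["master's in psychology or education",
              "appropriate training in assessment",
              "membership in an association requiring assessment training"],
        "C": ["doctorate in psychology or education",
              "appropriate training in assessment",
              "licensure/certification requiring assessment training"],
        "Q": ["background relevant to the testing purchase",
              "training in ethical test use, administration, interpretation"],
    }

    def qualifies(credentials, level):
        """Levels B and C read as alternatives (any one suffices);
        Level Q reads as requiring both elements; Level A is open."""
        rules = LEVEL_RULES[level]
        if level == "Q":
            return all(r in credentials for r in rules)
        return not rules or any(r in credentials for r in rules)

    counselor = {"master's in psychology or education",
                 "appropriate training in assessment"}
    print(qualifies(counselor, "B"))  # True
    print(qualifies(counselor, "C"))  # True (assessment training suffices)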
As another example, Western Psychological Services (2001) is a founding member
of the Association of Test Publishers and claims to be “America’s leading publisher
of high quality assessment materials since 1948” (p. 258). Western Psychological
Services routes materials such as tests, books, software, and therapy materials to
psychologists, counselors, other mental health professionals, special education
coordinators, and human resources development/personnel specialists. This publisher
specifies an array of professionals who are capable of purchasing and administering
counseling assessment tools and psychological tests. However, the company notes
that to purchase these tests, one must be a “qualified professional.”
The process of determining who is qualified
to purchase which tests begins with an initial
request and completion of a qualification
questionnaire, which includes questions
about the individual’s assuming overall
responsibility for the interpretation and use
of the test. The form requires the purchaser
to provide general background, educational,
and professional information and then to
send it to Western Psychological Services
for review. A decision is then made to
determine if the applicant has the knowledge
base, training, and experience to qualify for
the purchase and use of the test requested
(Western Psychological Services, 2001).
Multi-Health Systems is a publishing
company that advertises a variety of
assessment tools available to psychologists,
psychiatrists, mental health professionals,
human resource professionals, and special
education coordinators/counselors. Multi-
Health Systems designates two levels of
qualifications for test purchasers. The mini-
mum eligibility described in Level B involves
having completed appropriate course work
in tests and measurements at a university (or
corresponding and documented training).
Level C tests add to the Level B requirements
in that the user must also have “training and/
or experience in the use of tests and must
have completed an advanced degree in an
appropriate profession” (Multi-Health Sys-
tems, 2001, p. 122). Surpassing both levels
are those restricted tests listed by Multi-
Health Systems in which the mental health
professional must complete a purchaser
qualification form similar to that requested
by Western Psychological Services.
Professional Organization–
Recommended Test User
Qualification
The American Psychological Association,
the American Counseling Association, and
mental health organizations in general have
accepted the above qualifications, including
those established and supported by the
Psychological Corporation. Table 1 sum-
marizes the recommendations of a number
of professional organizations, concerning their
ideas of the qualifications that a profes-
sional should have in order to administer an
assessment tool.
As previously stated, the levels of compe-
tency required for test purchase are sup-
ported in the Standards for Educational and
Psychological Testing (American Educa-
tional Research Association, American Psy-
chological Association, & National Council
on Measurement in Education, 1999), as initiated by studies conducted by the Joint
Committee on Testing Practices (2002a; APA, 2001), which resulted in the formation of
the Test User Qualifications Working Group. For financial and other reasons, including
the withdrawal of American Psychological Association support, the committee
disbanded in December 2007 (Kennedy, 2008). The
Joint Committee on Testing Practices was
truly a joint committee in that it consisted
of representatives from the American Coun-
seling Association, the American Educa-
tional Research Association, the American
Psychological Association, the American
Speech-Language-Hearing Association, the
National Association of School Psychol ogists,
the National Association of Test Directors,
and the National Council on Measurement
in Education. The bylaws of this committee
encouraged professional organizations and
test publishers to work together in the
improvement of assessment use, not the
promotion of test restriction. Moreland et al. (1995) state that “experience, training,
and certification should be considered in assessing competence to use tests . . .” and
that “educational efforts will ultimately be more effective in promoting good testing
practices than efforts to limit the use of tests” (pp. 14, 22). Anastasi (1992) supports this
statement by defining the specific knowledge
needed by all test users—namely, she or
he must possess skills and experience in
statistical techniques of psychometrics and
must have knowledge of pertinent facts
and characteristics of behavioral science:
“The ultimate responsibility for integrating
the information and using it in individual
assessment and decision making rests with
the counselor” (p. 611).
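To make “statistical techniques of psychometrics” concrete, here is a minimal, self-contained Python sketch (invented data, standard formula) of one statistic such training typically covers, Cronbach’s alpha for internal-consistency reliability:

    # Cronbach's alpha: a standard internal-consistency reliability
    # estimate. Data below are invented for illustration.
    def cronbach_alpha(items):
        """items: one list per test item, each holding every
        respondent's score on that item (equal lengths)."""
        def var(xs):  # sample variance
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
        k = len(items)
        totals = [sum(scores) for scores in zip(*items)]
        return (k / (k - 1)) * (1 - sum(var(i) for i in items) / var(totals))

    # Five respondents answering three Likert-type items
    scores = [[4, 3, 5, 2, 4],
              [5, 3, 4, 2, 5],
              [4, 2, 5, 3, 4]]
    print(round(cronbach_alpha(scores), 2))  # about 0.89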
The Test User Qualifications Working Group developed a model exemplifying
the knowledge and skills needed by coun-
selors and other professionals to prevent
test misuse; this model is included in the
Responsibilities of Users of Standardized
Tests (Association for Assessment in Coun-
seling, 1987). In fact, Elmore and Ekstrom
(1993) found that ethical and standardized
test use by counselors parallels measure-
ment training. As stated, the credentialing of counselors requires course work in the
competencies of individual and group
assessment. Familiarizing counselors with
Responsibilities of Users of Standardized
Tests promotes appropriate assessment quali-
fications for counselors.
These required qualifications are relatively agreed on within the field, and there is
apparent agreement within professional organizations that the goal is not to limit
professionals’ access to tests; however, discrepancies surrounding who is qualified to
administer, score, and interpret these assessment instruments exist and so challenge
the rights of many mental health professionals. Cohen and Swerdlik (1999) noted that
many professionals currently use psychological testing, including educators, other
mental health professionals, and other health care providers. Nevertheless, the
American Psychological Association and a number of state psychology boards have
made, and are continually making, moves to restrict the access to and use of the
majority of assessment instruments—namely, to psychologists who are licensed in a
given state.
These attempts at restriction are not made
without opposition. The Association of Test
Publishers (2002) has joined with more than
30 professional associations to form the
Fair Access Coalition on Testing (2009a).
This organization has grown, in part, as
a response to the need to monitor those
restriction attempts at both the national
level and the state level. The mission and
policy statement of the Association of Test
Publishers are founded on the same type of
requirements developed and put forth by the
Psychological Corporation. For example,
the association proposes that assessment
professionals have access to testing instru-
ments based on their levels of education,
training, and experience in administering,
scoring, and interpreting psychological or
other assessment instruments. Its position
is that trained mental health professionals,
not only trained psychologists, have the
capability and right to use assessments. This is exemplified in the association’s policy
statement on fair access to psychological tests:
It is the policy of [the Association of Test
Publishers] to oppose all efforts to restrict
use of assessment instruments exclusively
to psychologists licensed in a given state
or states, and that [the association] shall
monitor closely any attempts to restrict
use based on licensure as a psychologist,
and shall intervene where appropriate to
ensure open and equal access to the use of
assessment instruments for all qualified
professionals. (Association of Test Pub-
lishers, 2002, para. 9)
The Fair Access Coalition on Testing (2009b) has carried this position even further in
its stated mission of dedication to “the protection and support of public access to
professionals and organizations who have demonstrated competence in the
administration and interpretation of assessment instruments, including psychological
tests” (para. 1). FACT (2009c) identifies five goals addressing this mission, including its
second, which states that the organization “monitors state and national legislation and
regulatory actions to assure that all qualified professionals are permitted to administer
test instruments” (para. 1).
Counselors can demonstrate that they
meet these levels of education, training, and
experience in administering, scoring, and
interpreting assessment instruments in a
number of ways. For instance, understanding
the laws that govern the licensing and
accreditation of counselors and mental
health professionals could be one way
that counselors address this demonstration
of competence. For example, the 1994
American Counseling Association’s model
legislation for state licensure of professional
counselors was developed and revised by
organizations responsible for credentialing
professional counselors—these included the
American Association of State Counseling
Boards, the Council for Accreditation
of Counseling and Related Educational
Programs, the Council on Rehabilitation Education, the Commission for Rehabilitation
Counselor Certification, the National Board for Certified Counselors, and the National
Rehabilitation Counselors Association (Glosoff, Benshoff, Hosie, & Maki, 1995). This
model legislation comprises such guidelines as the need for counselors to conduct
assessments and diagnoses to create treatment plans and strategic interventions.
In preparation for these requirements under
this model, professionals who are seeking
licensure must have completed specific
course work in the assessment, appraisal, and
testing of individuals. This criterion certainly
correlates with the standards put forth in the
policy statement of the Association of Test Publishers (2002) on fair access to
psychological tests and the Psychological Corporation’s (n.d.) listed qualifications for
Levels A and B.
Counselors having graduated from
a program accredited by the Council for
Accreditation of Counseling and Related
Educational Programs (2001) or from a
master’s degree program (or above) that
follows the council’s standards will have at
least met the Psychological Corporation’s
Level A and B assessment requirements,
given that they are directly reflected in
the standards. These standards stipulate that
graduates from approved programs have
curricular experiences and demonstrated
knowledge in eight core areas, including
assessment. Furthermore, they must demonstrate curricular experiences and
knowledge in a variety of assessment-related
areas, including the basic concepts of stan-
dardized and nonstandardized testing, the
utilization of individual and group-based
test and inventory methods, and the appro-
priate strategies for selecting, administering,
and interpreting assessment and evaluation
instruments. Likewise, counselors seeking
licensure in a state following the American
Counseling Association’s model legislation
guidelines must have studied the following
areas in their course work: historical per-
spectives of assessment, basic concepts of
standardized and nonstandardized testing,
statistical concepts, reliability, validity,
assessment factors related to specific popu-
lations and nondiscriminatory evaluations,
strategies in selecting the test population,
diagnoses and mental/emotional status
understanding, and ethical/legal issues in
assessment (Glosoff et al., 1995). The
National Board for Certified Counselors
(2005) specifies in its requirements for
earning certification as a Nationally Certified
Counselor that the applicant must have had
course work in appraisal. It then specifies in
its code of ethics—specifically, “Section D:
Measurement and Evaluation”—the same
requirements for knowledge, training, and
experience before administering, scoring,
and interpreting any assessment instru-
ment put forth by the Association of Test
Publishers. Once again, the requirements
focus on the counselors’ having received
the appropriate levels of training for speci-
fic tests and a recognition of their levels of
competence before using any instrument.
Additional guidelines revolve around
such issues as knowledge of the tests, the
population to be measured, stereotypical
concerns, the welfare of the client, and
test security (Association of Test Publishers,
2002).
State-Defined Qualifications
for Test Users
Forty-nine states consider mental health
counseling and school counseling as licensed/
certified professions. These states pay addi-
tional attention to the role of counselors in
testing, through the legal statutes and regu-
lations that govern these professions. To
become a licensed clinical counselor in the
state of Kentucky, for example, one must
have completed 60 graduate hours in nine
core areas derived from the Council for
Accreditation of Counseling and Related
Educational Programs:
the helping relationship, including coun-
seling theory and practice; human growth
and development; lifestyle and career
development; group dynamics, process,
counseling and consultation; assessment,
appraisal and testing of individuals; social
and cultural foundations, including multi-
cultural issues; principles of etiology,
diagnosis, treatment planning and preven-
tion of mental and emotional disorders and
dysfunctional behavior; research and eval-
uation; and professional orientation and
ethics. (Kentucky Board of Certification
for Professional Counselors, 2002, p. 3)
Also, a licensed professional clinical
counselor must have a master’s degree or
above in counseling or a related field, obtain
a passing score on the National Counselor
Examination, and have had a minimum
of 4,000 hours of post-master’s supervised
experience in counseling, as approved by
the board. These requirements are found
in the Kentucky Revised Statutes, Section
335.525 (Kentucky Legislature, n.d.-c).
The Kentucky Administrative Regulations
(Kentucky Legislature, n.d.-d) support this
legislation—specifically, 201 KAR 36:060,
which relates to “qualifying experience
under supervision.” Within this regulation,
the practice of counseling is defined as
professional counseling services delivered
within the scope of Section 2 of this admin-
istrative regulation, which involves the appli-
cation of mental health and development
principles, methods, or procedures—including
the assessment, evaluation, diagnosis, and
treatment of emotional disorders or mental
illnesses—to assist individuals to achieve
more effective personal, social, educational,
or career development and adjustment. Table 2 specifies the current definitions and
trends concerning assessment in the 49 states with professional counselor laws.

Table 2. Assessment Legislation on a State-by-State Basis

Columns (left to right): State; (a) specifies assessment, appraisal, or testing in definition or scope of practice; (b) includes descriptions of accepted or normal assessment, appraisal, and testing practices in the body of licensure or regulation laws other than in the definition of scope of practice; (c) specifies certain types of tests that can be administered; (d) specifies certain types of tests that cannot be administered; (e) includes assessment as one core area of educational requirements for professional counselors.

Alabama × × × ×
Alaska × × × × (f)
Arizona × × ×
Arkansas × × × ×
California (g)
Colorado × × ×
Connecticut × ×
Delaware (h)
DC × ×
Florida × ×
Georgia × × ×
Hawaii ×
Idaho × × (f)
Illinois × (f)
Indiana × ×
Iowa ×
Kansas × × × ×
Kentucky × ×
Louisiana × × ×
Maine ×
Maryland (h) ×
Massachusetts × ×
Michigan × ×
Minnesota × ×
Mississippi × × ×
Missouri × × ×
Montana × ×
Nebraska × × × × (f)
Nevada × × ×
New Hampshire ×
New Jersey × ×
New Mexico × ×
New York × ×
North Carolina × ×
North Dakota ×
Ohio × ×
Oklahoma × ×
Oregon × ×
Pennsylvania × ×
Rhode Island ×
South Carolina × ×
South Dakota ×
Tennessee × × × ×
Texas × × × ×
Utah × ×
Vermont (h) ×
Virginia × ×
Washington (h)
West Virginia × ×
Wisconsin ×
Wyoming × ×

Note: All assessment information based on state laws and legislation as obtained through state Web sites. (f) Specifies appraisal or assessment as an optional area of requirement. (g) No licensure law at this time. (h) No licensure information regarding appraisal, assessment, or testing was found at the completion of this table.
The administrative regulations concern-
ing the provisional and standard certificates
of School Guidance Counselors in the state
of Kentucky are found under KAR 3:060,
and they relate to the statutory authority
of the Kentucky Revised Statutes—namely,
161.020, 161.028, and 161.030 (Kentucky
Administrative Regulations, n.d.-b). None
of this legislation specifies education, training, or experience associated with
assessment, evaluation, or testing. However, the
Kentucky Education Professional Standards
Board (2005) did adopt and publish standards
for new and experienced school counselors
(respectively, provisional certificate versus
standard certificate) that have assessment as
Standard 7 of the eight standards (Kentucky
Board of Certification for Professional
Counselors, 2002). This standard says that
the school counselor must understand the
school’s testing program and know how to
plan and evaluate it; assess, interpret, and
communicate learning results with respect
to aptitude, achievement, interests, tempera-
ments, and learning styles; collaborate with
staff on assessment; use assessment results
and other data in formulating career and
graduation plans; coordinate student records
to ensure confidentiality of assessment
results; and provide orientation for others on
the school assessment program (Kentucky
Board of Certification for Professional
Counselors, 2002). In addition, 16 KAR
3:070 describes the “endorsement for indi-
vidual intellectual assessment” to the provi-
sional and standard certificate in school
counseling (a somewhat unique certification
to the state of Kentucky); as such, it states
that an endorsement for individual intel-
lectual assessment shall be issued to an
applicant already holding certification as a
guidance counselor, who has completed 12
semester hours of graduate credit, including
course work in basic testing and measure-
ment concepts that relate directly to indi-
vidual intellectual assessment, as well as
a supervised practicum for administering,
scoring, and interpreting individual intellec-
tual assessments (Kentucky Administrative
Regulations, n.d.-a). Within the 49 states
that have counselor licensure laws, only 6
place restrictions on specific assessment
areas; in fact, Alabama, Arkansas, and Texas
all disallow testing and assessment involv-
ing projective techniques for the purpose
of assessing personality. Tennessee also
disallows the counselor’s use of projective
techniques, as well as tests and assessments used to diagnose or identify pathologies, and human intelligence tests
(Tennessee Board of Professional Coun-
selors, 2007). Alaska law prohibits the
use of projective techniques and intelli-
gence tests (State of Alaska, Department
of Commerce, Community, and Economic
Development, 2007). Nebraska disallows
the measuring of personality or intelligence
for the purpose of diagnosis and treatment
planning (Nebraska Health and Human
Services System, 2007).
These laws and administrative regula-
tions, even with the few restrictions noted
above, point to recognition by state governing
agencies that the role of counselors in all
settings includes the administration, scoring,
and interpretation of assessment instru-
ments; that is, such regulations demonstrate
the state’s recognition that counselors are valuable contributors in meeting the demand for assessment in today’s society.
Despite this recognition, increasing attempts at restriction are threatening the job welfare
Table 2 (continued)
State | Assessment, Appraisal, Testing (a) | Normal Assessment (b) | Tests Administered (c) | Tests Not Administered (d) | Assessment as One Core Area (e)
Vermont (h) ×
Virginia × ×
Washington (h)
West Virginia × ×
Wisconsin ×
Wyoming × ×
Note. All assessment information based on state laws and legislation as obtained through state Web sites.
a. Specifies assessment, appraisal, or testing in definition or scope of practice.
b. Includes descriptions of accepted or normal assessment, appraisal, and testing practices in body of licensure
or regulation laws other than in definition of scope of practice.
c. Specifies certain types of tests that can be administered.
d. Specifies certain types of tests that cannot be administered.
e. Includes assessment as one core area for educational requirements for professional counselors.
f. Specifies appraisal or assessment as an optional area of requirement.
g. No licensure law at this time.
h. No licensure information regarding appraisal, assessment, or testing was found at the completion of this
table.
of professional counselors, as well as other professionals such as school psychologists not licensed as psychologists. Hyman and Kaplinski (1994), for example, in an article
concerning school psychologists, report that
“assessment is the core contribution that
got us into the schools, has kept us there,
and allows us to expand into other roles”
(p. 570). School counselors are responsible
for six major job expectations: counseling
(individual and group), pupil assessment,
consultation, information officer, school
program facilitator, and research and evalu-
ation (Schafer, 1995). Of these six, pupil assessment, program evaluation, and basic research relate to assessment. Schafer (1995) found that the skills required by the Council for Accreditation of Counseling and Related Educational Programs, if applied appropriately and assessed effectively by the program delivering them, would prepare a beginning-level school counselor to meet these job expectations. Furthermore,
a division of the American Counseling
Association, the Association for Assessment
in Counseling (n.d.), has developed a list of
required competencies that school counsel-
ors must have in the areas of assessment and
evaluation:
1. School counselors are skilled in
choosing assessment strategies.
2. School counselors can identify, access,
and evaluate the most commonly used
assessment instruments.
3. School counselors are skilled in the
techniques of administration and meth-
ods of scoring assessment instruments.
4. School counselors are skilled in interpret-
ing and reporting assessment results.
5. School counselors are skilled in using
assessment results in decision making.
6. School counselors are skilled in pro-
ducing, interpreting, and presenting
statistical information about assess-
ment results.
7. School counselors are skilled in con-
ducting and interpreting evaluations
of school counseling programs and
counseling-related interventions.
8. School counselors are skilled in adapt-
ing and using questionnaires, surveys,
and other assessments to meet local
needs.
9. School counselors know how to engage in professionally responsible assessment and evaluation practice.

If school counselors are restricted from using both educational and counseling assessment devices, the detection of children with special needs will be delayed and possibly overlooked.
Fair Access for Counselors
and Their Clients
The qualifications of test users are of great concern and should be monitored, but they should not be limited or restricted by any one discipline. With the adoption of the Code of
Professional Responsibilities in Educational
Measurement by the National Council on
Measurement in Education (1996), an addi-
tional step was taken to uphold the ethical
standards of test use and to prevent test
misuse. The council’s purpose in developing
the code is to direct the conduct of its
members who are involved in educational
assessment. Professionals adhering to the
ethical responsibilities described in the code
will be acting on the criteria previously
established by the Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999) and the Code of Fair Testing Practices in Education (Joint Committee on Testing Practices, 2002b). These criteria include judging the technical adequacy of tests, as well as deciphering which test is best to
use and how to score the results. Vacc,
Juhnke, and Nilsen (2001) support this position
by stating that “an effective and constructive
way to address the misuse of tests and test
results is through professional organizations’
codes of ethics, which are used to regulate
members’ behaviors” (p. 217).
Despite the approved qualifications and
required standards for assessment purchase
and use, test restrictions are multiplying.
The American Psychological Association is
supporting legal interventions that would
restrict the use of assessment instruments to
licensed psychologists. As reported by Clawson (1997),
The position of the Fair Access Coalition
on Testing is that the American Psycho-
logical Association efforts will reduce
needed services to the general public, vio-
late existing professional policies of both
the American Counseling Association and
American Psychological Association, initi-
ate counterproductive turf wars, and
turn existing collaboration among profes-
sional organizations into time-consuming,
resource-devouring, nonproductive con-
flict. (p. 90)
By supporting these test restrictions, the American Psychological Association is violating its own model licensure act and its own draft revision by interfering with other professions. Both of these documents state that, as long as other professionals do not represent themselves as psychologists, no attempt will be made to prevent those trained professionals from practicing or offering their services (American Psychological Association, 2009).
Furthermore, a lack of clarity blurs the professional boundaries between psychologists and other mental health professionals. Psychologists agree that trained professionals have the capability to administer tests; however, the training requirements are quite vague: they include course work, experience, supervision, and exposure to the population that the tests are intended to measure. The majority of counseling professionals see the need for cooperation between disciplines to benefit clients. Even doctoral-level psychologists, along with other mental health professionals, must have
training and experience in the administra-
tion of a particular test. The requirements
are not intended to exclude on the basis of
degree area alone.
Be that as it may, professionals who are
not deemed psychologists or psychiatrists
but who possess the required skills are being
discriminated against. Eyde, Moreland,
Robertson, Primoff, and Most (1988) noted
that the American Psychological Association
has promoted the restriction of test use based
on title (psychologist) against its own model
legislation and against published documents
from its own science directorate on test user
qualifications, both nationally and in state branches. Attempts at test restriction have
occurred in such places as Florida, Indiana,
South Carolina, Iowa, Louisiana, Alaska,
and California. For example, the American
Psychological Association granted $14,500
to the Louisiana State Board of Examiners in
Psychology to enforce its psychologist law by filing suits against a licensed professional counselor and a national certified counselor
(Clawson, 1997). Both suits were initiated
to prevent professionals other than licensed
psychologists from using psychological tests
(i.e., even if the professionals demonstrated
test-specific qualifications)—tests such as the
Bender–Gestalt Test, the Achenbach Child
Behavior Checklist, the Woodcock–Johnson
Psycho-Educational Battery–Revised, and
the Kinetic Family Drawings.
In another example of an attempt to
restrict assessment, the California Board of
Psychology decided that to administer the
Myers-Briggs Type Indicator, one must be
a licensed psychologist. Luckily, counselors
in the California Fair Access Coalition on
Testing were able to secure a reversal of the decision, based on the following
arguments: First, the Myers-Briggs Type
Indicator is used not only for diagnosis but
also in business, group, religious, educa-
tional, and career areas; second, the Myers-
Briggs Type Indicator was not developed
by a psychologist; finally, public access to the
Myers-Briggs Type Indicator, via the Internet
and local bookstores, makes the restriction
of the test nearly impossible to enforce. A suit
was also filed in California against a doctoral-
level special education examiner. In this case,
the California Board of Psychology instructed
the examiner to stop the administration and
use of the Wechsler Adult Intelligence Scale–
Revised, the Woodcock-Johnson Psycho-
Educational Battery–Revised, and the Wide Range Achievement Test. This examiner
then had to hire a licensed psychologist to
administer the tests. Given that the doctoral-
level special education examiner was
better trained and more experienced in the
administration of these tests than the licensed
psychologist whom she hired, she had to train
the psychologist in their administration. The
suit that created this situation was dropped
in 1997, with the help and influence of the
California Senate and Assembly Committee
(Clawson, 1997).
As of January 1, 1999, counselors in
Alaska were granted the right to diagnose
and treat mental/emotional disorders under
that state’s licensed professional counselor
law. However, these professionals are
prohibited from using projective assess-
ments, as well as individually administered
intelligence tests. This restriction stands despite the
fact that the licensure law of Alaska requires
counselors to have a 48-hour master’s degree
in counseling or a related field, 12 additional
hours, a written exam, and 3,000 hours of
supervision (State of Alaska, Department of
Commerce, Community, and Economic
Development, 2007).
The Indiana State Board of Psychology
proposed Indiana Code 25-33-1-3.g, which,
according to Toner and Arnold (1998),
would enable the board to
establish, maintain, and update a list of
psychological instruments that, in the
words of the legislature, could create a
danger to the public because of their design
and complexity if improperly adminis-
tered and interpreted by individuals other
than those designated in the statute. (p. 1)
The professionals most affected by this
law, if enacted, would be social workers,
marriage and family therapists, and counselors licensed under the Mental Health Counselor Licensure Board.
Once again, the Fair Access Coalition on
Testing was contacted to aid the Association
of Test Publishers in lobbying against this
code, which would have restricted access to 318 psychological assessments. The coalition successfully lobbied to defeat these restrictions.
Recommendations and
Conclusions
Counselors have a responsibility to be
aware of the role of assessment in counseling
and to be sure that they effectively carry out
and protect this role in their practices. If the
right to use assessment tools is granted to
doctoral-level psychologists only, then services
available to the public will diminish (e.g.,
diagnosis and treatment). In short, counselors
who lose the right to assessment will lose
the ability to diagnose, and they will have a
valuable treatment tool eliminated from
their access. As reported by Whiston (2000),
“most managed health care organizations
require that a diagnosis be made before
they will reimburse practitioners. Therefore,
if counselors lose the right to assess in a
state, they will be eliminated from the private
practice market in the state” (p. 10).
This will also have a direct impact on
rural and economically deprived urban
populations. Test restriction would deprive these underserved populations of proper mental health care and attention. The majority of underserved populations are served by master’s-level mental health
professionals, not doctoral-level psycho-
logists. Test restriction would result in the
victimization of those needing care. “The
ultimate product of psychological test
restriction would likely be less available
service to clients” (Clawson, 1997, p. 93).
If counselors continue to seek the training
and experience needed to provide effective assessment, then restrictions imposed by outside disciplines should be neither pursued nor accepted. Counselors must be granted the
right to test use if they take the responsibility
to learn the laws and ethical guidelines
surrounding the use and administration of
assessment instruments. Furthermore, those
counselors having graduated from programs
that meet the standards of the Council for
Accreditation of Counseling and Related
Educational Programs are, by definition, in compliance with the assessment administration guidelines proposed by the American Psychological Association and the American Counseling Association. The unethical restriction of assessment rights among disciplines results in unethical treatment of the client.
References
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
American Psychological Association. (2001). Joint
committee on testing practices. Retrieved September
24, 2007, from http://www.apa.org/science/jctpweb.html
American Psychological Association. (2009). Model
act for state licensure: Public comment. Retrieved
March 9, 2009, from http://forms.apa.org/practice/
modelactlicensure/mla-review-2009
Anastasi, A. (1992). What counselors should know
about the use and interpretation of psychological
tests. Journal of Counseling & Development,
70(5), 610–616.
Association for Assessment in Counseling. (1987).
Responsibilities of users of standardized tests.
Retrieved October 16, 2007, from http://aac.ncat.edu/documents/rust.html
Association for Assessment in Counseling. (n.d.).
Competencies in assessment and evaluation for school
counselors. Retrieved September 24, 2007, from
http://aac.ncat.edu/documents/atsc_cmptncy.htm
Association of Test Publishers. (2002, June 7).
Association of Test Publishers policy statement
on fair access to psychological tests. Retrieved
September 24, 2007, from http://www.testpublishers.org/atpp.htm
Clawson, T. W. (1997). Control of psychological test-
ing: The threat and a response. Journal of
Counseling & Development, 76, 90–94.
Cohen, R. J., & Swerdlik, M. E. (1999). Psychological
testing and assessment: An introduction to tests
and measurement (4th ed.). Mountain View, CA:
Mayfield.
Council for Accreditation of Counseling and Related
Educational Programs. (2001). The 2001 standards. Retrieved October 2, 2007, from http://www.cacrep.org/2001Standards.html
Elmore, P. B., & Ekstrom, R. (1993). Counselors test
use practices: Indicators of the adequacy of mea-
surement training. Measurement and Evaluation in
Counseling and Development, 26(2), 116–125.
Eyde, L. D., Moreland, K. L., Robertson, F. J.,
Primoff, E. S., & Most, R. B. (1988). Test user
qualifications: A data-based approach to promoting good test use. Washington, DC: American
Psychological Association, Science Directorate.
Fair Access Coalition on Testing. (2009a). FACT mem-
bership, organizations and individuals. Retrieved
March 9, 2009, from http://www.fairaccess.org/
membership/memberlist
Fair Access Coalition on Testing. (2009b). FACTs
Mission. Retrieved March 9, 2009, from http://
www.fairaccess.org/hone
Fair Access Coalition on Testing. (2009c). About
FACT: FACT goals. Retrieved March 9, 2009,
from http://www.fairaccess.org/aboutfact/whatfactdoes
Glosoff, H. L., Benshoff, J. M., Hosie, T. W., & Maki,
D. R. (1995). The 1994 ACA model legislation for
licensed professional counselors. Journal of
Counseling & Development, 73, 209–220.
Hyman, I. A., & Kaplinski, K. (1994). Will the real
school psychologist please stand up: Is the past a
prologue for the future of school psychology?
School Psychology Review, 23(4), 564–584.
Joint Committee on Testing Practices. (2002a). About
the Joint Committee on Testing Practices. Retrieved
September 24, 2007, from http://www.apa.org/
science/jctpweb.html
Joint Committee on Testing Practices. (2002b). Code
of fair testing practices in education. Washington,
DC: Author.
Kennedy, A. (2008, February). Long-standing JCTP
calls it quits. Counseling Today, pp. 1, 27.
Kentucky Board of Certification for Professional
Counselors. (2002). An act relating to licensed
professional clinical counselors. Retrieved February
20, 2009, from http://lrc.ky.gov/recarch/00rs/HB290/bill
Kentucky Department of Education. (2000). The 10th
anniversary of education reform in Kentucky.
Retrieved October 16, 2007, from http://www.kde.state.ky.us/NR/rdonlyres/EF0A1C1D-F709-44D3-8CC2-74E113172B51/0/10thAnniversaryReport
Kentucky Education Professional Standards Board.
(2005). Standards for guidance counseling programs.
Retrieved October 16, 2007, from http://www.kyepsb.net/documents/EduPrep/STANDARDS%20FOR%20GUIDANCE%20COUNSELING%20PROGRAMS(3)
Kentucky Legislature. (n.d.-a). Endorsement for indi-
vidual assessment. In Kentucky administrative
regulations. Retrieved February 20, 2008, from
http://www.lrc.ky.gov/kar/016/003/070.htm
Kentucky Legislature. (n.d.-b). Guidance counselor,
provisional and standard certificates, all grades. In
Kentucky administrative regulations. Retrieved
February 20, 2008, from http://www.lrc.ky.gov/
kar/016/003/060.htm
Kentucky Legislature. (n.d.-c). 335.525: Licensing require-
ments—Fees. In Kentucky revised statutes. Retrieved
September 24, 2007, from http://162.114.4.13/KRS/335-00/525.PDF
Kentucky Legislature. (n.d.-d). Qualifying experience
under supervision. In Kentucky administrative
regulations. Retrieved February 20, 2008, from
http://www.lrc.ky.gov/kar/201/036/060.htm
Moreland, K. L., Eyde, L. D., Robertson, J. F.,
Primoff, E. S., & Most, R. B. (1995). Assessment
of test user qualifications: A research-based mea-
surement procedure. American Psychologist,
50(1), 14–23.
Multi-Health Systems. (2001). MHS catalog. North
Tonawanda, NY: Author.
National Board for Certified Counselors. (2005). NBCC code of ethics. Retrieved February 20, 2009, from http://www.nbcc.org/AssetManagerFiles/ethics/nbcc-codeofethics
National Council on Measurement in Education.
(1996). Code of professional responsibilities in
educational measurement. Assessment in Education:
Principles, Policy, & Practice, 3(3), 401–411.
Nebraska Health and Human Services System. (2007).
Mental health practice. Retrieved March 21, 2009,
from http://www.hhs.state.ne.us/crl/mhcs/mental/
mentalhealth.htm
Psychological Corporation. (n.d.). Ordering information. Retrieved October 16, 2007, from http://
harcourtassessment.com/haiweb/Cultures/en-US/
Harcourt/ProductsAndServices/HowToOrder/
Qualifications.htm
Schafer, W. D. (1995). Assessment skills for school counselors (Report No. EDO-CG-95-2). Greensboro:
University of North Carolina at Greensboro, School
of Education. (ERIC Document Reproduction
Service No. ED387709)
State of Alaska, Department of Commerce, Commu-
nity, and Economic Development. (2007). Statutes
and regulations: Professional counselors. Retrieved
October 2, 2007, from http://www.dced.state.ak.us/occ/pub/CounselorStatutes
Tennessee Board of Professional Counselors. (2007).
Chapter 0450-01. In Rules of the Tennessee Board of
Professional Counselors. Retrieved October 16, 2007,
from http://state.tn.us/sos/rules/040/0450-01
Toner, M. P., & Arnold, D. W. (1998). Fair access alert:
Indiana State Board of Psychology proposes list of
tests to be restricted. Retrieved October 16, 2007,
from http://www.testpublishers.org/f1998nl.htm
Vacc, N. A., Juhnke, G. A., & Nilsen, K. A. (2001).
Community mental health service provider’s code
of ethics and the standards for educational and
psychological testing. Journal of Counseling &
Development, 79(2), 217–222.
Western Psychological Services. (2001). WPS 2001
catalog. Los Angeles: Author.
Whiston, S. C. (2000). Principles of applications of
assessment in counseling. Stamford, CT: Brooks/
Cole.
Kim A. Naugle is a tenured professor in the
Counseling and Educational Psychology Department
at Eastern Kentucky University where he also serves
as the Associate Dean of the College of Education.
His research interests include effective assessment and evaluation (including outcome assessment), research on effective collaboration for
scholarship, and the effective use of technology, such as establishing teacher presence in web-based instruction. He is also currently working on an article and a book chapter on the school counselor’s role in transitions for special needs children.
For this week’s Assignment, you will create a hypothetical case assessment in your area of interest involving a client presenting with an issue that merits an adult (18–70 years) personality assessment.
Write a minimum 10-page essay incorporating the following elements:
· Include the following demographic and personal information: fictional name, age, marital status, number of children, educational level, income level, and relevant medical history. Describe clearly the nature of the client’s presenting problem (e.g., consideration of inpatient substance abuse treatment, a candidate for successful completion of a behavior analysis/application program, or a patient with a psychiatric diagnosis).
· Explain the initial problem in sufficient detail to make clear your decisions regarding assessment.
Based on the above, answer the following questions in essay format:
1. What would be the most appropriate instrument in personality assessment for evaluating the primary presenting problem?
2. What are the strengths of what this instrument can tell us about the client?
3. Why is this the most appropriate instrument? (Be sure to include appropriate reference to source materials.)
4. What are the limitations of what this instrument can tell us about the client?
5. Discuss some of the concerns or issues that might arise in a workplace setting with an individual who has this personality disorder. Elaborate on how this disorder could affect the workplace culture/climate with regard to behavior, interpersonal and group interactions, and productivity.
6. In your discussion of the assessment process, including administration and interpretation, consider professional competencies that reflect professional characteristics and the culture of a given work setting, and explain how these practices are essential to an effective multicultural competency environment.
Select an adult personality test; four websites for psychological test databases are provided on page 30 of your textbook. Discuss what type of data the test provides and how you might use this particular test to assess a client.
Along with the text and the four articles listed below, locate an additional two peer-reviewed journal articles (seven total references).
Note: Find a test measuring a construct in your area of interest; do not purchase a review of any test. You will use the websites to look up the name and brief description of a test and then conduct further research on your own about this test.
Read the following articles which are accessible through the following Library links:
Fine, S. (2013). Practical guidelines for implementing preemployment integrity tests. Public Personnel Management, 42(2), 281-292. Retrieved from
http://dx.doi.org.lib.kaplan.edu/10.1177/0091026013487049
Discuss the following in essay format: Address the purpose of preemployment integrity testing.
1. Describe the primary reason many organizations include integrity testing in the hiring process for new employees.
2. Describe the differences between overt integrity tests and personality-based integrity tests, including when an organization would choose one type over the other.
3. Discuss the guidance the author provides with regard to fairness and adverse impact from integrity tests.
Meyer, G. J., & Kurtz, J. E. (2006). Advancing personality assessment terminology: Time to retire objective and projective as personality test descriptors. Journal of Personality Assessment, 87(3), 223-225. Retrieved from
http://lib.kaplan.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=26774051&site=eds-live
Discuss the following in essay format: Address the historical use of the terms objective and projective to classify a personality test, as well as problems with such classification.
Naugle, K. (2009). Counseling and testing: What counselors need to know about state laws on assessment and testing. Measurement & Evaluation In Counseling & Development, 42(1), 31-45. Retrieved from
http://search.ebscohost.com.lib.kaplan.edu/login.aspx?direct=true&db=rzh&AN=105333289&site=eds-live
Discuss the following in essay format: Review the local laws on assessment and testing, using a comparison between licensed and nonlicensed professionals.
1. Discuss how psychological test publishing companies monitor the competencies of those who purchase and utilize assessment instruments they sell.
2. Using Table 2 – Assessment Legislation via a State-by-State Basis (pgs. 38-39), look up your home state and describe what types of assessment activities the legislation allows a counselor to engage in.
3. Describe why the author contends that professionals who possess the appropriate coursework, experience, and supervision, but are not licensed psychiatrists or psychologists, are being discriminated against when it comes to laws regarding psychological testing.
Scroggins, W., Thomas, S., & Morris, J. (2008). Psychological testing in personnel selection, part I: A century of psychological testing. Public Personnel Management, 37(1), 99-109. Retrieved from
http://search.ebscohost.com.lib.kaplan.edu/login.aspx?direct=true&db=rzh&AN=105729948&site=eds-live
Discuss the following in essay format: Review prior and local hiring practices and the challenges with using personality testing.
1. Why have many industrial psychologists traditionally rejected the use of personality testing, while many human resources managers maintain an optimistic and enduring faith in its ability to discriminate between good and poor job candidates?
2. Discuss how military psychology and psychological services came to be considered essential to the nation’s defense efforts during World War II.
3. Describe three ways in which the use of personality tests in employment selection is considered controversial.
The assignment should:
· Follow assignment directions (review grading rubric for best results).
· Use correct APA formatting per the APA Publication Manual, 6th Edition.
· Demonstrate college-level communication through the composition of original materials in Standard American English that is clear, specific, and error-free. If needed, be sure to use the Writing Center for help.
· Be a minimum of 10 pages (not including Title Page and Reference List).