Introduction to Psychological Assessment
A psychological assessment is the attempt of a skilled professional, usually a psychologist, to use the techniques and tools of psychology to learn either general or specific facts about another person, either to inform others of how they function now, or to predict their behavior and functioning in the future.
Maloney and Ward (1976) offer that assessment:
- Frequently uses tests
- Typically does not involve rigidly defined procedures or steps
- Contributes to some decision process on some problem, often by redefining the problem, breaking the problem down into smaller pieces, or highlighting some part(s) of the problem
- Requires the examiner to consider, evaluate, and integrate the data
- Produces results that cannot be evaluated solely on psychometric grounds
- Is less routine and inflexible, and more individualized
The point of assessment is often diagnosis or classification: the act of placing a person in a strictly or loosely defined category of people. This allows us to quickly understand what they are like in general, and to infer the presence of other relevant characteristics based upon people similar to them. There are several parts to assessment.
The Interview
Note that an interview can be conducted in many ways and for a variety of purposes. Below are several aspects from which to view an interview.
- verbal and face-to-face - what does the client tell you? How much information are they willing/able to provide?
- para-verbal - how does the client speak? At a normal pace, tone, volume, and inflection? What is their command of English, and how well do they choose their words? Do they pick up on non-verbal cues for speech and turn taking? How organized is their speech?
- situation - Is the client cooperative? Is their participation voluntary? For what purpose is the interview conducted? Where is the interview conducted?
- Structured - The SCID-R, the Structured Clinical Interview for the DSM-III-R, is, as the name implies, an example of a very structured interview. It is designed to provide a diagnosis for a client by detailed questioning in a "yes/no" or "definitely/somewhat/not at all" forced-choice format. It is broken up into different sections reflecting the diagnosis in question. Structured interviews often use closed questions, which require a simple, pre-determined answer. Examples of closed questions are "When did this problem begin?" and "Was there any particular stressor going on at that time?" Structured interviews are better suited for specific information gathering.
- Unstructured - Other interviews are less structured and allow the client more control over the topic and direction of the interview. Unstructured interviews often use open questions, which ask for more explanation and elaboration on the part of the client. Examples of open questions are "What was happening in your life when this problem started?", "How did you feel then?", and "How did this all start?" Unstructured interviews are better suited for general information gathering.
Interviews can be used for clinical purposes (such as the SCID-R) or for research purposes (such as to determine moral development or ego state).
Behavioral Observation
- How does the person act? Nervous, calm, smug? What do they do and not do? Do they make and maintain eye contact? How close to you do they sit? Often, behavioral observations are some of the most important information you can gather.
- Behavioral observations may be used clinically (such as to add to interview information or to assess the results of treatment) or in research settings (such as to see which treatment is more effective, or as a dependent variable).
Testing
There are basically seven types of tests:
- Group educational tests such as the California Achievement Test
- Ability and preference tests such as the Myers-Briggs
- LD and neuropsychology tests such as the Halstead Reitan Battery
- Individual intelligence tests such as the WAIS and WISC
- Readiness tests such as the Metropolitan Readiness Tests
- Objective personality tests such as the MMPI-2 or PAI
- Self-administered, scored, and interpreted tests, such as data base user qualification tests
There are generally three parties involved in testing according to the Standards for Educational and Psychological Testing, though a fourth is sometimes added:
- Test Developer - This may be a company, an individual, a school.... The Test Developer has certain responsibilities in developing, marketing, distributing tests and educating test users.
- Test User - This may be a counselor, a clinician, a personnel official.... The Test User has certain responsibilities in selecting, using, scoring, interpreting, and utilizing tests.
- Test Taker - This may be the client in many cases. The Test Taker has certain rights regarding tests, their use, and the information gained from them.
- Test Utilizer - This may be the test taker, but in other cases a business or organization may send a person to be tested. Thus, the organization also has certain rights regarding tests, their use, and the information gained from them.
The Test Developer should
- Construct a manual containing all relevant information, such as
- the development and purpose of the test
- information on standardized administration and scoring
- data on the collection and composition of the standardization sample
- information on the test reliability and validity
- adequate information for the educated consumer to determine the appropriate and inappropriate use of the test
- references to relevant published research regarding the test and its use
- information on correct interpretation and application and possible sources of misuse, as well as any bias in test construction or use
- Support the information provided with data.
- Adhere to all ethical guidelines regarding advertising, distributing, and marketing testing material.
The Test User should
- Be aware of the limits of tests with regard to reliability, validity, standard error of measurement, confidence intervals, and the appropriate interpretation and use of the instrument. If you have any questions about tests, consult the Mental Measurement Yearbook, Tests in Print, or the 1984 Joint Technical Standards for Educational and Psychological Testing.
- Read the manual and understand all relevant information
- Be responsible for
- assessing your own competence regarding use of a test or the competence of those you employ for that purpose
- adhering to the appropriate use of the test as stated in the manual
- being aware of any test bias or client characteristics that might decrease the validity of the test results or interpretation, and reporting these concerns in the testing report along with test selection, data, interpretation, and application
- Protect test security where such security is vital to test reliability and validity.
- Be aware of the dangers of automated testing services and realize that they are to be used only by professionals
- Inform the client to be tested as to the purpose and potential use and applicability of the testing materials and results, as well as who will potentially have access to the results. The test user has the responsibility to see that the results are made available and used only for and by those specified in the consent agreement. Obsolete information should be regularly purged from records.
Good test use
Good test use requires:
- Comprehensive assessment using history and test scores
- Acceptance of the responsibility for proper test use
- Consideration of the Standard Error of Measurement and other psychometric knowledge
- Maintaining integrity of test results (such as the correct use of cut-off scores)
- Accurate scoring
- Appropriate use of norms
- Willingness to provide interpretive feedback and guidance to test takers
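The Standard Error of Measurement mentioned above can be sketched numerically. A minimal illustration, assuming the standard formula SEM = SD·√(1 − r) and a hypothetical IQ-style test (SD = 15, reliability = .91 — these numbers are illustrative only):

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard Error of Measurement: SD * sqrt(1 - r)."""
    return sd * math.sqrt(1.0 - reliability)

def confidence_interval(score: float, sd: float, reliability: float,
                        z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence band around an observed score."""
    margin = z * sem(sd, reliability)
    return (score - margin, score + margin)

# Hypothetical test: SD = 15, test-retest reliability = .91
print(sem(15, 0.91))                       # ~4.5
print(confidence_interval(100, 15, 0.91))  # roughly (91.2, 108.8)
```

The band illustrates why a single observed score should not be reported as an exact value: the lower the reliability, the wider the range of true scores consistent with it.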
A good test is both reliable and valid, and has good norms.
- Reliability, briefly, refers to the consistency of the test results. For example, IQ is not presumed to vary much from week to week, and as such, test results from an IQ test should be highly reliable. On the other hand, transient mood states do not last long, and a measurement of such moods should not be very reliable over long periods of time. A measurement of transient mood state may still be shown reliable if it correlates well with other tests or behavior observations indicative of transient mood states.
- Validity, briefly, refers to how well a test measures what it says it does. In a simple way, validity tells you if the hammer is the right tool to fix a chair, and reliability tells you how good a hammer you have. A test of intelligence based on eye color (blue eyed people are more intelligent than brown eyed people) would certainly be reliable, because eye color doesn't change, but it would not be very valid, because IQ and eye color have little to do with each other.
- Norms are designed to tell you what the result of measurement (a number) means in relation to other results (numbers). The "normative sample" should be very representative of the population of people who will be given the test. Thus, if a test is to be used on the general population, the normative sample should be large, include people from ethnically and culturally diverse backgrounds, and include people from all levels of income and educational status.
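How norms turn a raw score into a meaningful standing can be sketched as follows. The normative mean and SD here are hypothetical (a T-score-like scale), and the percentile conversion assumes scores in the normative sample are roughly normally distributed:

```python
from statistics import NormalDist

def z_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    """Position of a raw score relative to the normative sample."""
    return (raw - norm_mean) / norm_sd

def percentile(raw: float, norm_mean: float, norm_sd: float) -> float:
    """Percent of the normative sample scoring at or below `raw`,
    assuming an approximately normal score distribution."""
    return 100 * NormalDist().cdf(z_score(raw, norm_mean, norm_sd))

# Hypothetical norms: mean 50, SD 10
print(z_score(65, 50, 10))            # 1.5
print(round(percentile(65, 50, 10)))  # ~93
```

The same raw score can mean very different things under different norms, which is why a normative sample that does not represent the test taker's population undermines interpretation.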
Test Taker Rights
The Test Taker has the right:
- to have the directions of testing as well as the results of an evaluation explained in language that they can understand.
- to have the confidentiality of that information maintained within the limits promised during informed consent.
- to have the results of the testing explained to them in a meaningful way, and in most cases to know to whom and how these results were shared.