The Framework, Concepts and Methods of the Competency Outcomes and Performance Assessment (COPA) Model

  • Carrie B. Lenburg, EdD, RN, FAAN

    Dr Lenburg, Loewenberg Chair of Excellence in the School of Nursing, University of Memphis from 1997-1999, worked with the nursing faculty to convert the BSN program to the competency outcomes and performance assessment model and methods. She is also a consultant to the nursing faculty of the University of Colorado Health Science Center, helping to integrate the model into its range of four degree programs (BSN, MSN, ND, and PhD) and into all UC-SON Internet courses. She is also an ongoing consultant to the newly developing BSN program at King College (Bristol, TN), which is implementing the COPA Model from the outset. From 1973-1991 she coordinated the development, implementation, and evaluation of the New York Regents College External Degree Nursing Program.

Abstract

A significant category of issues and problems related to promoting competence pertains to the limitations or absence of a cohesive conceptual framework that supports learning and assessment methods focused on practice competencies. Typically, teachers in academic and practice settings use traditional course objectives, lectures, and evaluation methods that often are characterized as teacher-focused, subjective, and inconsistent. These historical practices obscure the development of a specific delineation of practice competencies to be attained and documented. The basic problems center on changing these traditional methods and implementing others that are more outcomes oriented and consistent with contemporary practice needs, and doing so from the foundation of a defensible and cohesive conceptual framework. The purpose of this article is to describe the importance of such a framework and the integration of essential concepts in developing and implementing competency outcomes, interactive learning strategies, and psychometrically-sound performance assessment methods. The COPA Model is explored in detail to illustrate the integration of these concepts into an effective framework that supports competency outcomes and assessment required for contemporary practice. It presents an example to stimulate adaptation and application to meet the goals of diverse academic and practice entities. Although this article describes application of the COPA Model in the academic setting, the principles and criteria presented are equally applicable for educators in the service setting.

Key Words: assessment, competence, competency-based, evaluation, nursing education, performance examinations, testing concepts

Overview of Conceptual Framework

This article continues the exploration of issues related to competence initiated in the preceding article by focusing on the need for, and usefulness of, a relevant and cohesive conceptual framework. It describes an integrated outcomes-oriented system based on concepts related to creating practice competency categories, implementing interactive learning methods and key psychometric concepts that support performance assessment methods. Such a framework is useful to educators in both academic and service settings for promoting competence and accountability. The problems, context, and rationale for mandating initial and continuing competence in education and practice are explored in the preceding article.

Fundamental problems associated with developing and implementing competency-based programs can be linked to a lack of emphasis on them in teacher preparation, with resulting deficits in programs preparing nurses for general or specialized practice. The historical use of individual, subjective, and inconsistent methods, and the lack of established conceptual frameworks, perpetuates the problems in both education and practice settings. The reorganization of existing programs to integrate outcomes-oriented learning and performance assessment concepts, however, presents a broad array of issues and concerns for those involved in developing them or being evaluated by them. These range from determining what content to include or exclude, where and how it should be distributed, and which teaching methods to use, to the ever-worrisome question of how to evaluate achievement objectively and legally. The challenge has become even more problematic with the escalating and complex changes in demographic, socioeconomic, and political circumstances and the resulting domino effects on education and healthcare systems.

A number of issues relate to changing teaching methods to meet the performance expectations of academics and employers. Typically, instructors find it difficult to give up content and skills traditionally considered essential. It is easier to keep adding than to make the hard choices to deliberately reorganize content and methods to be more consistent with actual current practice needs. Their questions frequently include: What should we give up? How can we justify not "covering all the content"? How can we include the exploding volume of information available on the Internet and in encyclopedic textbooks? And how much more can we expect of students who already are overloaded with multiple other responsibilities? The situation is amplified by employers and practitioners who question what teachers are teaching and expecting of students, and why new graduates are not as competent as they need to be in the current work environment. They resent spending time and funds to "reteach" graduates and to provide extensive orientations before graduates can safely implement the skills required in professional positions (del Bueno, 1995; del Bueno & Beay, 1995). Graduates are caught in between, feeling that they carried tremendous workloads while in school and yet are under-prepared and lacking in confidence in practice.

The most troubling and insightful questions go to the heart of the issue: If instructors teach and evaluate everything possible, will learners be competent for practice? Is competence determined by the volume of what is taught and evaluated? Is it acceptable to continue to use traditional educational methods in spite of revolutionary changes in healthcare delivery? The reflective response is "No." Programs of learning and evaluation need to be redesigned, not patched up. A growing number of responsible leaders in many facets of the profession believe it is time to work collaboratively to create rational and comprehensive models to promote effective and efficient learning and validation of competencies essential for beginning and advanced practice. This author believes that such models have more potential for success, in spite of resistance, if they are grounded in a conceptual framework that is comprehensive, cohesive and consistent with the realistic needs of the practice community.

The COPA Model

A competency-based approach requires that educators (academic or non-academic) analyze the relevant current environment and needs, from which they determine the content and competencies to be achieved in the instructional program. This perspective, along with those related to adult education, interactive and student-focused learning strategies, and outcomes assessment of performance competencies, provides the foundation for the competency outcomes system described in this article.

During the past three decades, Lenburg (1990, 1992-1995, 1998, 1999) developed the Competency Outcomes and Performance Assessment (COPA) Model, based on extensive work with the New York Regents College Nursing Program (1973-91) and multiple other educational, service, and organizational entities, and on related research. It is a holistic but focused model that requires the integration of practice-based outcomes, interactive learning methods, and performance assessment of competencies. The essential components are reviewed briefly below to illustrate one approach that is applicable in education and service environments; other articles in this issue of OJIN and related citations provide additional helpful details. The works reported by Bondy (1984), Waltz and Strickland (1990), Anthony and del Bueno (1993), Greenwood (1994), Krichbaum, et al (1994), and Curley (1998) offer suggestions for other approaches.

The basic organizing framework for the COPA Model is simple but comprehensive. It requires the faculty, and/or others responsible for program (or course) development, to analyze and respond realistically and collaboratively to four essential questions. They are:

  1. What are the essential competencies and outcomes for contemporary practice?
  2. What are the indicators that define those competencies?
  3. What are the most effective ways to learn those competencies? And,
  4. What are the most effective ways to document that learners and/or practitioners have achieved the required competencies?

Specifying Competency Outcomes: Eight Core Practice Competencies

The first of the four guiding questions is: What are the essential competencies and outcomes for contemporary practice? Answering this question has two major components: identifying the required competencies, and wording them as practice-based competency outcomes rather than as traditional and obtuse objectives. Redirecting the focus to achieve actual competence for practice challenges leaders in the profession to come to consensus about the major competency categories and subskills essential for diverse segments of practice. A comprehensive, concept-oriented model that allows for flexible adaptation and corresponding assessment methods is logical and essential.

In the COPA Model, the eight core practice competencies form a constellation of categories under which a flexible array of specific skills can be clustered for particular levels, types, or foci of practice. These core competency categories collectively define practice and are applicable universally in education and practice environments. Although many of them are required simultaneously in actual practice, they are discrete skills that can be adapted to fit specific settings, clients, employees, and types and levels of students and practitioners. These essential core competencies are: assessment and intervention, communication, critical thinking, teaching, human caring relationships, management, leadership, and knowledge integration skills (Lenburg, 1992-1995, 1998, 1999; Luttrell et al, 1999). Essentially all specific subskills nurses perform can be listed under one of these competency categories, as illustrated in the examples in Figure 1. Each competency category can incorporate a flexible array of subskills that further specifies the required practice abilities for particular levels or types of employees or students in diverse settings.

Figure 1. Lenburg's Eight Core Practice Competencies with Subskill Examples

1. Assessment and Intervention Skills

  1. safety and protection
  2. assessment and monitoring
  3. therapeutic treatments and procedures

2. Communication Skills

  1. oral skills
    1. talking, listening, with individuals
    2. interviewing; history taking
    3. group discussion, interacting
    4. telling, showing, reporting
  2. writing skills
    1. clinical reports, care plans, charting
    2. agency reports, forms, memos
    3. articles, manuals
  3. computing skills (information processing; using computers)
    1. related to clients, agencies, other authorities
    2. related to information search and inquiry
    3. related to professional responsibilities

3. Critical Thinking Skills

  1. evaluation; integrating pertinent data from multiple sources
  2. problem solving; diagnostic reasoning; creating alternatives
  3. decision making; prioritizing
  4. scientific inquiry; research process

4. Human Caring and Relationship Skills

  1. morality, ethics, legality
  2. cultural respect; cooperative interpersonal relationships
  3. client advocacy

5. Management Skills

  1. administration, organization, coordination
  2. planning, delegation, supervision of others
  3. human and material resource utilization
  4. accountability and responsibility; performance appraisals and QI

6. Leadership Skills

  1. collaboration; assertiveness, risk taking
  2. creativity, vision to formulate alternatives
  3. planning, anticipating, supporting with evidence
  4. professional accountability, role behaviors, appearance

7. Teaching Skills

  1. individuals and groups; clients, coworkers, others
  2. health promotion; health restoration

8. Knowledge Integration Skills

  1. nursing, healthcare and related disciplines
  2. liberal arts, natural and social sciences, and related disciplines

This framework is attractive, in part, because of the universality of its competency classification and its applicability to education and practice in various circumstances and environments. Quite simply, these core competencies outline the array of abilities all nurses need to incorporate in fulfilling their roles, whether clinical, educational, administrative or otherwise; the related subskills and their implementation will vary with circumstances. The examples in Figure 1 are cited to stimulate ideas for adaptation to particular situations, such as case management systems as described by Lenburg (1999) in the Cohen and Cesta (1999) text. They also apply to other disciplines as illustrated in the conference proceedings of the American Association of Primary Care Physicians (Lenburg, 1994).
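Because the category names are fixed while the subskills vary with the setting, the taxonomy in Figure 1 can be pictured as a simple nested data structure. The sketch below is purely illustrative and not part of the COPA Model's published materials; the subskill strings are abbreviated examples drawn from Figure 1.

```python
from dataclasses import dataclass, field

@dataclass
class CompetencyCategory:
    """One of the eight core practice competencies with its local subskills."""
    name: str
    subskills: list[str] = field(default_factory=list)

# The eight fixed category names, seeded with a few example subskills:
core_competencies = [
    CompetencyCategory("Assessment and Intervention",
                       ["safety and protection", "assessment and monitoring"]),
    CompetencyCategory("Communication",
                       ["oral skills", "writing skills", "computing skills"]),
    CompetencyCategory("Critical Thinking",
                       ["problem solving", "decision making"]),
    CompetencyCategory("Human Caring and Relationship"),
    CompetencyCategory("Management"),
    CompetencyCategory("Leadership"),
    CompetencyCategory("Teaching"),
    CompetencyCategory("Knowledge Integration"),
]

# A program adapts the framework by clustering its own setting-specific
# subskills under the fixed categories (hypothetical example):
core_competencies[2].subskills.append("variance analysis for a DRG cohort")
```

The structure mirrors the point made in the text: the eight category names are universal, while each program supplies its own subskill lists.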

Specifying Competency Outcomes: Conversion of Objectives to Outcome Statements

A major difficulty instructors have, regardless of setting, is changing from the traditional perspective of writing and using behavioral objectives to more contemporary competency outcomes as the blueprint for basic and advanced learning and practice of the discipline. Outcomes are the results to be attained, the end product, the focus of all related activities; they require learners to engage in and become competent in skills used in practice. Objectives, on the other hand, as commonly used, focus on ways of learning and directions for learning the content. Most often they do not reflect practice-related abilities for which the content is to be learned. This section explores the guidelines and process of writing outcome statements and making this fundamental transition; some examples are cited for a variety of levels and competencies. Even competencies such as cultural competence, perceived by many to be more difficult and challenging, can be written as an outcome statement with critical elements for diverse types and levels of students, practitioners, and/or organizations, as seen in Figure 2 (Lenburg, et al, 1995).

Figure 2. Examples of Nursing Skills and Related Critical Elements

Administer Medications: The administration of parenteral and non-parenteral medications to clients between 2 and 10 years of age in designated clinical setting.

By the conclusion of this course (module), the student will be able to:

  1. Calculate fractional dosage consistent with standards of practice, using body surface area or child's weight to verify safe range of dosage

  2. Administer the prescribed dose of medication to designated patient, using prescribed route, within +/- 30 minutes of prescribed time

  3. Administer medications to children using technique consistent with patient-related data and medication standards

  4. Monitor patient response to medications within 15-20 minutes of administration

  5. Document administration of medication according to agency protocol (or, exam record forms, as pertinent)

Incorporate cultural competence skills in clinical practice: The integration of diverse aspects of cultural values and preferences for clients, related others, and peers.

By the end of this course, the student will be able to:

  1. Show respect for diverse values and preferences of other individuals and groups
  2. Take action to learn about cultural variations and values in others
  3. Integrate knowledge of cultural variation into professional practice
  4. Take action to change negative and prejudicial behaviors in self and others

Critical Thinking in Case Management:

Plan a study to investigate differences in variance for a designated protocol among patients with similar conditions (e.g., DRG classification)

At the designated time, the case manager will:

  1. Write the specific question to be explored, including related components
  2. Write the justification for investigating the specified problem
  3. Describe methods to be used, including subjects, timeline, and methods of data collection and analysis
  4. Develop a budget that is adequate and efficient for the purpose
  5. Outline potential consequences of positive and negative findings

Examples of converting objectives to outcomes.

The following example of unit objectives was extracted from an actual college-level course syllabus on Life Span Development. The stem reads: Upon completion of the units of the course, the student will be able to:

Unit I

  1. Discuss the relationship between individual and family development
  2. List and describe the 7 parameters needing consideration in conducting a systematic family assessment

Unit II

  1. Discuss the developmental tasks of the marital dyad

    These objectives are traditionally written but have little to do with actual practice or nursing competence. "List" and "discuss" may give direction for learning certain basic aspects of family assessment and development, but they do not correspond to competencies nurses actually use in practice. As worded, it is difficult for the learner to know what the result, or outcome, of the learning should be, what abilities they are aiming for, and how such learning activities relate to professional practice. In a competency-based system, objectives like these are converted to practice outcomes in response to the fundamental questions: What do nurses actually do in practice with the information studied in this course? How is knowledge used to help clients deal with real situations of illness or to promote health?

    In determining competency outcomes, realistic answers to some basic questions are required. What is the student expected to be able to do (in cognitive, affective, or psychomotor performance) as a result of the course? Will the student write about, or actually perform, some activity based on the knowledge? Is knowing the end outcome, or is engaging in nursing activities that use the knowledge the competence to be achieved? Verbs such as describe, discuss, identify, list, and explain usually require only knowledge as the outcome. They could be changed to specify active engagement, such as: apply, integrate, implement, differentiate, or formulate. It is essential to determine what performance or competence is expected at the end of the pre-assessment period, the extent to which it is expected, and under what conditions it is to be demonstrated. The outcome statements then are written accordingly.

    The unit objectives are rewritten below to illustrate evolving thinking and final wording as practice outcomes. Focus on the undergirding, but unstated, competence used in practice, and consider how the objectives could be rewritten as competency outcomes.

Unit I

  • Apply (or use) theories about family and family development as the basis for conducting a family assessment (Is the highest level of competence to apply theories? Or is it to:)

  • Conduct a systematic family assessment using theories about family and family development as the basic framework (Nurses conduct assessments based on theories, information, and protocols.)

Unit II & III

  • Explain the differences in developmental tasks for expectant mothers and fathers, neonates, and infants (Is "explain" the highest level of performance expected? Or, for a more interactive and performance-based outcome, do nurses:)

  • Plan interventions to promote positive parenting by incorporating knowledge of the expectant childbearing family and the developmental tasks of its members…. (Planning is an essential competence in practice and should be based on knowledge of various information data sets, standards, and protocols.)

The first suggested revisions (above) are examples of "cart-before-the-horse" thinking. Determine the skill to be emphasized at this level and put it first in the statement; this focuses content organization, learning strategies, and subsequent assessment methods. It also conveys to the student what competent practice actually includes and, therefore, what they must learn.

The following examples were extracted from other courses and illustrate the differences between traditionally phrased course objectives and their conversion to competency outcome statements (revisions in italics). Unlike process-oriented objectives, outcomes are specific, performance-oriented competencies nurses actually use in practice; they are the results, the abilities, to be attained (outcomes) as a result of the learning experiences. The choice of verbs depends on the level and type of ability expected and the nature of the course.

  1. Describe the importance of nursing's involvement in political action and shaping health care environments.

    Engage in a health related political activity that promotes the importance of the nursing profession in creating change in health care environments.

  2. Demonstrate critical thinking, judgment, and cultural competence in applying community health principles into practice.

    Integrate community health principles, critical thinking, reflective judgment, and cultural competence into community health practice.

  3. Demonstrate knowledge of wellness concepts as a basis for care provision.

    Provide (or: Plan) care to children that incorporates concepts of wellness.


Some Basic Criteria for Writing Competency Outcome Statements

In making the transition to a competency-based program, the perspective and language must be changed to practice-based outcome statements to be achieved by the learner as a result of the constellation of focused learning activities. The set of outcomes begins with the leading stem: "At the conclusion of the course (module, unit), the learner will be able to:". Each competency outcome statement is characterized as follows:

  1. It is worded as a learner-oriented, essential competence (psychomotor, cognitive, and/or affective) to be achieved by the end of the learning period. It is the highest level of competence expected at this level or for this module, unit, or course and subsumes lower level competencies.
  2. It is worded in clear, specific, unadorned, and concise language readily understood by the learner and teacher, and is measurable. It is action oriented and begins with the verb that most precisely describes the actual, preferred outcome behavior to be achieved.
  3. It is consistent with standards, practice and real world expectations for performance, i.e., what the practitioner (student) actually needs to be able to do.
  4. It contributes to the cluster of abilities needed by the student (graduate) to fulfill the expected overall performance outcomes of the agency or program.

Indicators of Competence Based on Essential Psychometric Concepts

The second of the four guiding questions requires that specific indicators be written to include only those behaviors (actions, responses) that are mandatory for actual practice of that competence. Collectively, these statements define the expected competence in specific, clear and unambiguous terms. In the COPA Model they are called critical elements and are applicable to all core practice competencies, whether assessment, intervention, critical thinking, communication, caring, management, or leadership. Critical elements are written for each skill (or subskill), whether it results in a plan, a budget, a formal paper, nursing interventions, or therapeutic procedures. The competencies and required indicators define specific expectations for practice as discussed elsewhere in this issue, and earlier by Lenburg (1979) and Lenburg & Mitchell (1991). A few examples and criteria for writing critical elements are stated here for clarification.

Critical elements are defined as the set of single, discrete, observable behaviors that are mandatory for the designated skill, at the targeted level of practice. They represent principles that are essential to ascribe competent performance to a given ability; they are not steps in a procedure, even though they are written in a logical sequence. Critical elements are the criterion-referenced performance equivalent to items on a written test. Criteria for writing them are similar to those for outcomes, although critical elements are more singular, specific and circumscribed. The following are four of the most basic criteria:

  1. Begin each critical element with the verb that most succinctly identifies the mandatory aspect of the skill to be demonstrated; on average, four to six critical elements are needed for each skill.
  2. Use language that is clear and unambiguous and has commonly accepted, uniform interpretation. Words such as appropriate, proper, recognize, and acknowledge have very divergent interpretations, cannot be objectively assessed, and should be avoided.
  3. Include behaviors (actions) at the higher end of the thinking and action spectrum expected at the particular level that subsume lower level action subsets. As used in competency assessment, critical elements are required skills expected of all nurses and therefore, 100% of them must be performed as stated.
  4. Include only actions (behaviors) that are essential for documenting competence. That which is essential (must perform) needs to be separated from what is considered ideal (nice to know) during performance examinations; the mandatory elements define the bottom line of actual required performance, while the "nice to know" is incorporated into learning experiences but not necessarily included in performance examinations. The baseline of acceptability is not less than sufficient; it is, by definition, "how good is good enough" to be called competent: all 100% of it.

Figure 2 illustrates different types of nursing competencies and related critical elements. Specific wording is adapted to correspond to the level of student or practitioner, the setting, and other conditions. Any ability required for practice that can be defined can be developed with associated mandatory critical elements and assessed objectively.
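The all-or-nothing scoring rule described above (100% of the mandatory critical elements must be performed as stated) can be sketched as a simple criterion-referenced check. This is an illustrative sketch only; the skill and element names below are hypothetical examples loosely based on Figure 2, not part of the COPA Model's published materials.

```python
def is_competent(critical_elements: list[str], observed: set[str]) -> bool:
    """Criterion-referenced check: competent only if every mandatory
    critical element was observed during the performance examination."""
    return all(element in observed for element in critical_elements)

# Hypothetical critical elements for a medication administration skill:
medication_elements = [
    "calculate fractional dosage within the safe range",
    "administer the prescribed dose by the prescribed route",
    "monitor patient response within 15-20 minutes",
    "document administration according to protocol",
]

# Performing all elements confirms competence; omitting even one does not:
print(is_competent(medication_elements, set(medication_elements)))       # True
print(is_competent(medication_elements, set(medication_elements[:-1])))  # False
```

There is no partial credit in this check, which mirrors the "how good is good enough" baseline: anything less than all mandatory elements is, by definition, not yet competent.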

Most Effective Ways of Learning Competencies

The third question in the framework is challenging as well, but for somewhat different reasons. Discovering and implementing the most effective ways for learners and practitioners to achieve the required competencies means changing some firmly held habits and myths about the roles of teachers and students, how learning is accomplished, and which content is essential for contemporary (not past) practice. Requirements for individual employment and advancement, as well as for professional and institutional accreditation, however, are forcing the issue (Joint Commission, 1996; NLN, 1992). For faculty, making the transition from lecturer to engaging facilitator who uses multiple student-focused interactive learning strategies may be unsettling or confusing, and usually engenders some resistance or conflict, as described in this volume by Bargagliotti, et al. It is not easy to change teaching behaviors as this requires reconceptualizing the purpose of learning, the focus of content and assignments, and performance assessment methods. It also requires a reassessment of the interface between education and real life professional practice as it is now and is likely to become in the near future.


In a competency-based system, focused outcomes, content, and interactive learning methods are the hand that fits into the glove of performance assessment. Competency assessment cannot be changed effectively without also changing the learning process. Everything is geared to prepare for professional and personal competence, in work and in life. Without active engagement (in academic or non-academic courses), it is patently impossible for learners to achieve the interactive process competencies cited earlier. Lecture and memorization, multiple-choice testing, and traditional assignments may lead to course completion, but they often are ineffectual in helping students to become competent and confident in skills such as critical thinking, communication, management, and leadership. Realistic strategies such as problem-based learning, case studies based on printed text or computer simulations, and team projects promote these competencies. This is increasingly important as NCLEX and other critical assessments of competence require more interactive, case-oriented, and computerized examinations.

Publications in the 1990s on interactive learning, collaborative learning, case studies, simulations and other useful strategies are too abundant to list here, but can be found easily via ASHE-ERIC Higher Education Reports (including Chaffee & Sherr, 1992; Davis & Murrell, 1993), Change, American Association of Higher Education, Jossey-Bass Publishers, as well as the Journal of Nursing Education, Nurse Educator, and other nursing journals. The publications on teaching, learning and assessing critical thinking written by Kurfiss (1988), Hutchins (1993), Bosworth and Hamilton (1994), and Facione, et al (1994) are extremely useful and explore different methods to promote this essential competence. Classroom Assessment Techniques (second edition), written by Angelo and Cross (1993) contains over 50 different learning and assessment strategies to promote critical thinking and communication skills and is an invaluable resource. The strategies are applicable to nursing education and practice and the book provides rationales and guidelines for using each exercise.

Creating Competency Performance Examinations and Assessments

The fourth question that guides the transition to competency-oriented learning and assessment programs focuses on objective performance assessment methods, based on established psychometric concepts. A systematic and comprehensive plan for outcomes assessment is essential for academic, continuing education, and staff development programs. The designated competencies establish the foundation for the kinds of actual or simulation assessments most likely to be effective to document achievement. Objective performance examinations and other forms of assessment are carefully created to ensure the learner's competence in the required skills, not merely knowledge about those skills. Knowledge is critical to evidence-based practice, but not just for its own sake; the seamless and continuous integration of a broad range of knowledge into practice is an essential competence that is confirmed through performance assessment.

Kirkwood (1981) observed: "Performance assessment has seldom been tried and found difficult; rather, it has been found difficult and seldom tried." The process of developing and implementing performance assessment methods, whether in academic or non-academic settings, or in didactic or clinical courses, is complex and has many essential components. Lenburg (1978, 1979, 1983, 1984, 1990, 1999), Lenburg & Mitchell (1991), and Luttrell, et al (1999) have described these components in detail; others have applied or adapted them. Some of the basic concepts are summarized below, but the scope of this article precludes elaboration.

Some Essential Basics

Briefly, the evaluation of learners can be divided into norm-referenced or criterion-referenced approaches, to indicate whether the learner is being compared to a group of others or to a set of standards or expectations. Evaluation also is divided into formative and summative types, to indicate whether the focus is on the ongoing learning period or on the conclusion of learning. Competency performance assessments or examinations, as described in the COPA Model, are criterion-referenced and summative methods, although formative examples can be used as well, similar to interim quizzes. The performance of learners (students, practitioners) is judged against a predetermined standard at the conclusion of a designated period of learning and practice. Moreover, performance examinations can be based on various types of simulations or actual situations likely to be encountered in practice. On a continuum from total simulation to total actual performance, various types of performance examinations can be constructed to correspond with the competencies to be assessed; other factors to consider include purpose, human and material resources, environment, persons being evaluated, and potential consequences of the outcomes. A combination of various types of simulation and/or actual performance examinations, along with written examinations, usually allows for comprehensive assessment of performance competencies, clinical or otherwise.

Another basic principle is the need to separate learning time from assessment time, which inherently means different roles and functions for instructors and learners during learning and testing. The learner uses the designated learning time to prepare for the specified time when verification of competence is required based on the established standard. This approach differs from the traditional practice of evaluating the learner during every practice session, resulting in a series of anecdotal notes that subsequently are summarized to determine the extent or quality of learning. In a competency-based system, the learner has a guided and un-judged time for learning, followed by a scheduled time to demonstrate and confirm competence. Evaluation focused on helping the learner become more competent during the learning time still is essential. This is the coaching, teaching, and consultation of expert to learner that promotes competence; it is not appropriate for determining the final grade or competence appraisal. Learners (of whatever type or level) need to know what time it is, and be held accountable accordingly. Is it learning time or documentation time?

Essential Psychometric Concepts

Whether the competency performance assessments are actual or simulated, or used in didactic or clinical settings, they are most effective when designed and implemented based on a series of essential psychometric concepts. Faculty have used such concepts for decades to construct norm-referenced paper and pencil tests, but they rarely have used them in clinical evaluation. Lenburg created a constellation of ten basic concepts and adapted them for developing and implementing objective performance examinations (Lenburg, 1979, 1983, 1990, 1999; Lenburg & Mitchell, 1991). Space precludes elaboration here, but a few comments will highlight the usefulness of these concepts to develop more accurate assessment instruments.


The concept of examination is foundational to all the others; the evaluation episode is constructed and implemented as an objective examination to determine competence, not to promote learning per se. The content of the examination is specified by the list of the dimensions of practice (i.e., the skills, competencies) to be included and their required critical elements, which determine the extent and conditions of competence. Objectivity of the assessment process depends on two components. First, the content (skills and critical elements) for the particular assessment is specified in writing, along with all pertinent logistics and policies. The second component is the consensual agreement of everyone directly involved in any aspect of the examination process; changing human behavior is the more challenging component. It is essential that examiners fully understand and agree with the purpose, the level of performance expectations, the process, and the consequences of deviation from them. When individual examiners begin to digress from the established standards and protocols, objectivity erodes back into subjectivity and inconsistency. This "regression to the mean" destroys the process and its purpose.

Sampling is essential, as it is in paper-and-pencil testing, and follows similar patterns of selecting the most frequently encountered, most essential, and most critical skills for the testing period. Determination of these competencies influences the type of examination, the timing, the process, and other logistics. The level of acceptability is established by the number and type of skills and by 100% performance of the specified critical elements, no more and no less. Protocols are established to ensure that each test episode for a given group is comparable in extent, difficulty, and requirements. To be fair, each person must have equivalent situations; not the same, but equivalent. Additional protocols are designed to ensure that the process is implemented consistently, regardless of who administers the examination or when it is conducted. This requires consensus of all those involved, which is based on some degree of involvement in the ongoing process and on very specific orientation, monitoring, and maintenance programs. When performance examinations are administered in actual clinical environments (not simulations), the concept of flexibility is essential, as each client (patient) is different. Moreover, protocols for using systematized conditions to determine what to do in case of unanticipated events promote objectivity, comparability, and consistency, and prevent the erosion of the process that occurs when assessors simply respond in whatever way they choose at the time.
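The all-or-none standard described above (100% performance of the specified critical elements, no more and no less) can be expressed as a simple check: a skill is confirmed only when every critical element is demonstrated, and the examination is passed only when every sampled skill is confirmed. The sketch below is an illustrative aid only; the skill names and critical elements are hypothetical examples, not part of the Model.

```python
# Sketch of criterion-referenced, all-or-none scoring: a skill passes only
# when 100% of its critical elements are demonstrated (no partial credit),
# and the examination passes only when every sampled skill passes.
# Skill and element names below are hypothetical illustrations.

def skill_passes(critical_elements: dict) -> bool:
    """True only if every critical element of the skill was demonstrated."""
    return all(critical_elements.values())

def exam_passes(skills: dict) -> bool:
    """True only if every sampled skill in the examination passes."""
    return all(skill_passes(elements) for elements in skills.values())

# Hypothetical examination: two sampled skills with their critical elements.
exam = {
    "medication administration": {
        "verifies client identity": True,
        "checks dose against order": True,
        "documents administration": True,
    },
    "sterile dressing change": {
        "maintains sterile field": True,
        "assesses wound site": False,  # one missed critical element
    },
}

print(exam_passes(exam))  # a single missed element fails the examination
```

The point of the sketch is the absence of a weighted or averaged score: there is no threshold to tune, because the predetermined standard itself is the threshold.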

Two Types of Performance Evaluations

These concepts apply to the development and implementation of various types of objective evaluations. Lenburg developed two variations of competency assessments to clarify levels and types of expected abilities, termed Competency Performance Assessments (CPAs) and Competency Performance Examinations (CPEs) (Lenburg, 1998, 1999; Luttrell et al., 1999). CPAs are designed for use in non-clinical, didactic, classroom-type situations and for related types of assignments, such as projects, poster presentations, analyses of reports or research articles, or writing manuals or reports. CPEs are used in clinical, client-related environments, and the corresponding critical elements are more exacting, as they take into account the legal, ethical, and professional components of responsible care of actual persons. Thus, the whole range of practice competencies can be objectively assessed using a similar set of psychometric concepts, protocols, and policies, regardless of where the skills are learned or the type of skills involved.

After any particular CPE or CPA is developed, it must be pilot tested to work out problems and to discover more effective and/or efficient methods, logistics, wording, instructions, or the like (Lenburg, 1979, 1998, 1999; Luttrell et al., 1999). Logistics and policies also must be developed consistent with the type of assessments, conditions, resources, purposes, and other factors. This essential work often is done while final versions of the assessment are completed and pilot testing is being implemented. In educational settings, the most troublesome and critical policies pertain to grading. Essential questions relate to determining how many exams to use, what grade to assign to the "bottom line," and how to use contract grading for students who want to excel. These and other questions influence progression in, or termination from, the course or employment position.

Once the initial competency performance assessments and/or examinations are developed, it is essential to establish ongoing responsibility for orientations, revisions, updating, and quality improvements; this responsibility usually rests with a standing committee related to curriculum or evaluation, to staff education or appraisals, or with an institutional committee assigned such activities. An ongoing program of evaluation of the COPA process, policies, consequences, and results needs to be implemented on a regular and systematic basis, with results used for quality improvements.

Summary

The COPA Model uses four guiding questions to create an organizing framework for making the transition to a competency outcomes and performance assessment system. It specifies a set of core competency categories that are fundamental to professional practice and recommends interactive learner-focused learning strategies to promote competence in all of them. It uses a constellation of ten psychometric concepts and related principles to develop and implement objective performance assessment procedures. CPAs are used for didactic and CPEs are used for clinical situations to promote accountability for competence in all of the core practice skills. The Model provides an example of a holistic, integrated, and flexible system to promote competent practice that is applicable to education and service purposes and diverse specialties and settings. Another patch-up revision is inadequate. A total redesign of learning and assessment is required to promote competence in today's complex practice environments.

The Author

Carrie B. Lenburg, EdD, RN, FAAN
E-mail: clenburg@earthlink.net

Dr Lenburg, Loewenberg Chair of Excellence in the School of Nursing, University of Memphis from 1997-1999, worked with the nursing faculty to convert the BSN program to the competency outcomes and performance assessment model and methods. She also is consultant to the nursing faculty of the University of Colorado Health Science Center to integrate the model into its range of four degree programs (BSN, MSN, ND, and PhD), and into all UC-SON Internet courses. She also is ongoing consultant to the newly developing BSN program at King College (Bristol TN), implementing the COPA Model from the outset. From 1973-1991 she coordinated the development, implementation and evaluation of the New York Regents College External Degree Nursing Program.


© 1999 Online Journal of Issues in Nursing
Article published September 30, 1999

References

Angelo, T.A. & Cross, K.P. (1993). Classroom assessment techniques: A handbook for faculty. University of Michigan, Ann Arbor, MI: National Center for Research to Improve Postsecondary Teaching and Learning.

Anthony, C.E., & del Bueno, D. (1993). A performance-based development system. Nursing Management 24(6), 32-34.

Bondy, K. (1984). Clinical evaluation of student performance: The effects of criteria on accuracy and reliability. Research in Nursing and Health, 7(1), 25-33.

Bosworth, K. & Hamilton, S. (Eds.) (1994). Collaborative learning: Underlying processes and effective techniques. (New Directions for Teaching and Learning, No. 59). San Francisco: Jossey-Bass.

Chaffee, E.E. and Sherr, L.A. (1992). Quality: Transforming postsecondary education. ASHE-ERIC Higher Education Report No. 3. Washington, DC: The George Washington University, School of Education and Human Development.

Cohen, E.E. and Cesta, T.G. (Eds.). (1999, in progress). Case management: From concept to evaluation, 3rd edition. St Louis: Mosby.

Curley, M.A.Q. (1998). Patient-nurse synergy: Optimizing patients' outcomes. American Journal of Critical Care 7 (1), 64-72.

Davis, T.M. and Murrell, P.H. (1993). Turning teaching into learning: The role of student responsibility in the collegiate experience. ASHE-ERIC Higher Education Report No. 8. Washington, DC: The George Washington University, School of Education and Human Development.

del Bueno, D. J. (1995). Why can't new grads think like nurses? Nurse Educator, 19(4), 9-11.

del Bueno, D. J. & Beay, P. J. (1995). Focus: Evaluation of preceptor competence and cost in an acute care hospital. Journal of Nursing Staff Development, 11(2), 108-111.

Facione, N.C., Facione, P.A., & Sanchez, C.A. (1994). Critical thinking disposition as a measure of competent clinical judgment: The development of the California Critical Thinking Disposition Inventory. Journal of Nursing Education, 33, 345-350.

Greenwood, A. (ed.). (1994). The national assessment of college student learning: Identification of the skills to be taught, learned, and assessed. A Report of the Proceedings of the Second Study Design Workshop November 1992. Washington, DC: U.S. Dept of Education, National Center for Education Statistics, Office of Educational Research and Improvement (NCES 94-286).

Hutchins, P. (1993). Using case studies to improve college teaching: A guide to more reflective practice. Washington, DC: American Association of Higher Education.

Joint Commission on Accreditation of Healthcare Organizations. (1996). Comprehensive accreditation manual for hospitals: The official handbook. Oakbrook, IL: Author.

Kirkwood, R. (1981). Process or outcome: A false dichotomy. In T.M. Stauffer (Ed). Quality: Higher education's principal challenge. Washington, DC: American Council on Education.

Krichbaum, K., Rowan, M., Duckett, L., Ryden, M. & Savik, K. (1994). The clinical evaluation tool: A measure of the quality of clinical performance of baccalaureate nursing students. Journal of Nursing Education, 33, 395-404.

Kurfiss, J.G. (1988). Critical Thinking: Theory, Research and Possibilities. ASHE-ERIC Higher Education Report No. 2. Washington, D.C.: American Association of Higher Education.

Lenburg, C.B. (1978). The New York Regents External Degrees assessment model. In M. Morgan (ed.), Evaluating clinical competence in the health professions. St Louis: Mosby.

Lenburg, C.B. (1979). The Clinical performance examination: Development and implementation. New York: Appleton-Century-Crofts.

Lenburg, C.B. (1983). Expanding the options through the external degree and regional assessment centers. In B. Bullough, V. Bullough, & M.C. Soukup, editors. Nursing issues and strategies for the eighties. New York: Springer.

Lenburg, C.B. (1984). An update on Regents External Degree Program. Nursing Outlook, 32, 250-254. (early development of very nontraditional program)

Lenburg, C.B. (1990). Do external degree programs really work? Nursing Outlook, 36, 234-238.

Lenburg, C.B. (1992-1995). Competency-based outcomes and performance assessment. Unpublished workshop materials for several institutions or organizations, such as Fairleigh Dickinson University, East TN State University, College of Mount St Joseph, the American Association of Critical Care Nurses, and others.

Lenburg, C.B. (1994). Transformation to a competency-based performance assessment system. Proceedings from the special conference on the assessment of primary care competencies, June 4, 1994, Kansas City, MO.

Lenburg, C.B. (1998). Competency-based outcomes and performance assessment: The COPA Model. Unpublished workshop materials: The University of Memphis, and University of Colorado, Health Science Center.

Lenburg, C.B. (1999, in process). The competency outcomes and performance assessment model applied to nursing case management systems. In E.L. Cohen & T.G. Cesta (Eds.), Case management: From concept to evaluation, 3rd edition. St Louis: Mosby.

Lenburg, C.B. & Mitchell, C.A. (1991). Assessment of outcomes: The design and use of real and simulation nursing performance examinations. Nursing and Health Care 12, 68-74.

Lenburg, C.B., Lipson, J., Demi, A., Blaney, D., Stern, P., Schultz, P., and Gage, L. (1995). Promoting cultural competence in nursing education. Washington DC: American Academy of Nursing.

Luttrell, M.F., Lenburg, C.B., Scherubel, J.C., Jacob, S.R., & Koch, R.W. (1999). Redesigning a BSN curriculum: Competency outcomes for learning and performance assessment. Nursing and Health Care Perspectives, 20, 134-141.

National League for Nursing. (1992). Criteria and guidelines for the evaluation of baccalaureate and higher degree programs in nursing. New York: NLN.

Waltz, C.F. & Strickland, O.L. (Editors). (1990). Measurement of nursing outcomes, volume three: Measuring clinical skills and professional development in education and practice. New York: Springer.

Citation: Lenburg, C. (Sept. 30, 1999): The Framework, Concepts and Methods of the Competency Outcomes and Performance Assessment (COPA) Model. Online Journal of Issues in Nursing. Vol 4, No. 2, Manuscript 2.