Quality Measure Selection Criteria

This appendix includes summaries of the selection criteria used in or proposed by the U.S. Department of Health and Human Services working group on the National Health Care Quality Report; Donabedian's quality assessment triad of structure, process, and outcomes; the Foundation for Accountability's Child and Adolescent Health Measurement Initiative; the National Committee for Quality Assurance; the President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry; Healthy People 2010; the National Health Service performance indicators (United Kingdom); the Leading Health Indicators for Healthy People 2010; and the National Research Council's Health Performance Measurement in the Public Sector. Many of the selection criteria correspond to conceptual frameworks outlined in Appendix A. As discussed in Chapter 3, the most common criteria are relevance, meaningfulness or applicability, health importance or improvement, evidence basis, reliability or reproducibility, validity, and feasibility.

U.S. DEPARTMENT OF HEALTH AND HUMAN SERVICES WORKING GROUP ON THE NATIONAL HEALTH CARE QUALITY REPORT

The U.S. Department of Health and Human Services (DHHS) has proposed the following criteria based on the Health Plan Employer Data and Information Set (HEDIS) list of desirable attributes for measures and the indicator selection criteria from America's Children: Key National Indicators of Well-Being (Federal Interagency Forum on Child and Family Statistics, 2000). (See individual summaries of each in this appendix.)

Essential Criterion. Measures must meet this criterion to be rated on the desirable criteria that are listed below.

1.

Objectively based on substantial research. The specific activity or intervention addressed by the measure must have a body of research showing effectiveness. The submitting organization should briefly describe the findings and give several key references.

Desirable Criteria. Measures are rated (“high,” “medium,” or “low”) based on the following criteria.

2.

Relevance. The measure should address features of health care systems that are relevant to the target audience of policy makers, health professionals, and consumers.

  • Meaningfulness. The measure should be meaningful to at least one of the audiences. Decision makers should be able to understand the clinical and economic significance of differences in how well systems perform on the measure. The meaningfulness of a measure is enhanced if benchmarks and targets are available.

  • Health importance. The measure should capture as much of the health care system's activities relating to quality as possible. Factors to be considered in evaluating the health importance of a measure include the type of measure (e.g., outcome versus process), the prevalence of the medical conditions to which the measure applies, and the seriousness of the health outcomes affected.

  • Strategic importance. The measure should encourage activities that deserve high priority in terms of using resources most efficiently to maximize the health of their members. In general, measures that are of high clinical importance, of high financial importance, and cost-effective will also have high priority.

  • Controllability. There should be actions that health care systems can take to improve their performance on a measure. If the measure is an outcome measure, there should exist one or more processes that can be controlled by the system that have important effects on the outcome. If the measure is a process measure, the process should be substantially under the control of the system, and there should be a strong link between the process and desired outcomes. If the measure is a structural measure, the structural feature should be open to modification by the system, and there should be a strong link between the structure and desired outcomes. The measure's time period should capture the events that have impact on clinical outcomes and reflect the time horizon over which the health care system had control.

  • Timeliness. The data must be sufficiently current to be relevant to the audience. The submitting organization must state the time from the event to the availability of the data.

3.

Scientific soundness

  • Clinical evidence. There should be evidence documenting the links between the interventions, clinical processes, and/or outcomes addressed by the measure.

  • Reproducibility. The measure should produce the same results when repeated in the same population and setting.

  • Validity. The measure should have face validity (i.e., it should make sense logically and clinically). It should correlate well with other measures of the same aspects of care (construct validity) and capture meaningful aspects of this care (content validity).

  • Accuracy. The measure should accurately measure what is actually happening.

4.

Richness of data. Data are available to report the measure by race or ethnicity, socioeconomic status, state, and/or other geographic region.

5.

National representativeness of data. The classification scheme attempts to order existing data sources under consideration in terms of their capacity to produce national estimates as well as their relevance. A measure's data sources should be classified as A, B, C, or D, with justification.

SOURCE: U.S. Department of Health and Human Services, 2000a.
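
To make the "richness of data" criterion above more concrete, the sketch below shows one way a measure rate can be broken out by the stratifiers the criterion names (race or ethnicity, socioeconomic status, and geography). It is an illustrative sketch only, not part of the DHHS proposal; the record fields and example values are hypothetical.

```python
from collections import defaultdict

# Hypothetical record-level data: each record notes whether the measured
# service was delivered, plus the stratifiers named in the criterion.
records = [
    {"race_ethnicity": "Hispanic", "income_group": "low",  "state": "TX", "met": True},
    {"race_ethnicity": "White",    "income_group": "high", "state": "TX", "met": False},
    {"race_ethnicity": "Black",    "income_group": "low",  "state": "NY", "met": True},
    {"race_ethnicity": "White",    "income_group": "low",  "state": "NY", "met": True},
]

def stratified_rates(records, stratifier):
    """Return the measure rate (met / eligible) within each level of a stratifier."""
    met = defaultdict(int)
    eligible = defaultdict(int)
    for r in records:
        eligible[r[stratifier]] += 1
        met[r[stratifier]] += int(r["met"])
    return {level: met[level] / eligible[level] for level in eligible}

for stratifier in ("race_ethnicity", "income_group", "state"):
    print(stratifier, stratified_rates(records, stratifier))
```

A data source would satisfy the criterion only if it actually carries these stratifiers, with enough observations in each cell to report stable subgroup rates.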

DONABEDIAN'S QUALITY ASSESSMENT TRIAD OF STRUCTURE, PROCESS, AND OUTCOMES

1.

Inclusivity or definitional range

  • Technical versus interpersonal care

  • Medical versus psychosocial need

  • Diagnostic, therapeutic, preventive, anticipatory, and rehabilitative care

  • Individual, familial, or social responsibility

  • Cost containment versus quality enhancement

  • Parsimoniousness

2.

Scientific validity

  • Causal validity

  • Scientific currency

3.

Measurement reliability and validity

4.

Relevance, pertinence

  • Differentiation, adaptation

  • Uniformity, generality, transferability

5.

Practicability, feasibility, implementability

  • Costliness of development, revision, and application

  • Timeliness, with regard to care

6.

Legitimacy, acceptability

  • “Political” factors (e.g., sponsorship, representativeness, degree of participation, consensuality)

  • Other factors (e.g., inclusivity, causal validity, measurement validity, and practicability)

  • Justification

SOURCE: Donabedian, 1982:371.

FOUNDATION FOR ACCOUNTABILITY'S CHILD AND ADOLESCENT HEALTH MEASUREMENT INITIATIVE

Following are the criteria and corresponding evidence of criteria for selecting domains and survey items for domains.

1.

Relevance. Known importance to families and children demonstrated through family interviews and focus groups, family surveys, and consensus panel recommendations.

2.

Parsimoniousness. Domains each provide distinct information; they may be related (e.g., correlated) but are conceptually distinct.

3.

Discrimination. Direction and magnitude of differences in performance scores for children with and without a chronic condition.

4.

Reliability. Internal consistency of a composite of items combined to create a content area or domain performance score.

5.

Feasibility. Taken as a whole, the number of survey items required to construct a performance score for each domain is within the acceptable range for the National Committee for Quality Assurance's HEDIS measures.

6.

Applicability. Survey domains and items will yield information valuable to purchasers and providers in addition to consumers.

SOURCE: Foundation for Accountability, 1999.
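
The reliability criterion above is defined as the internal consistency of a composite of survey items. A standard statistic for internal consistency is Cronbach's alpha; the sketch below computes it from made-up item responses. The Foundation for Accountability's actual scoring specifications are not reproduced here, so treat this purely as an illustration of the idea, not the Initiative's method.

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of items, each a list of respondent scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = len(item_scores)
    respondents = list(zip(*item_scores))      # one tuple of item scores per respondent
    totals = [sum(r) for r in respondents]     # composite (domain) score per respondent
    item_var_sum = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Hypothetical responses to four items in one survey domain (rows = items).
items = [
    [4, 3, 5, 4, 2, 4],
    [4, 2, 5, 3, 2, 4],
    [3, 3, 4, 4, 1, 5],
    [5, 3, 5, 4, 2, 4],
]
print(round(cronbach_alpha(items), 2))  # higher values indicate more consistent items
```

The discrimination criterion could be examined in a similar spirit by comparing mean domain scores for children with and without a chronic condition.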

NATIONAL COMMITTEE FOR QUALITY ASSURANCE (NCQA) HEALTH PLAN EMPLOYER DATA AND INFORMATION SET

Desirable attributes of HEDIS measures include the following:

1.

Relevance. The measure should address features of health care systems that are relevant to purchasers and/or consumers for making choices between systems, that are useful in negotiating with systems, or that will stimulate internal efforts at quality improvement by systems.

  • Meaningfulness. The measure should be meaningful to at least one of the audiences for HEDIS: individual consumers, purchasers, or health care systems. Decision makers should be able to understand the clinical and economic significance of differences in how well systems perform on the measure. The meaningfulness of a measure is enhanced if benchmarks and targets are available.

  • Health importance. The measure should capture as much of the health care system's activities relating to quality as possible. Factors to be considered in evaluating the health importance of a measure include the type of measure (e.g., outcome versus process), the prevalence of the medical conditions to which the measure applies, and the seriousness of the health outcomes affected.

  • Financial importance. The measure should be related to activities that have high financial costs to health care systems or to purchasers or consumers of health care.

  • Cost-effectiveness. The measure should encourage the use of cost-effective activities and/or discourage the use of activities that have low cost-effectiveness.

  • Strategic importance. The measure should encourage activities that deserve high priority in terms of using resources most efficiently to maximize the health of their members. In general, measures that are of high clinical importance, high financial importance, and cost-effective will also have high priority.

  • Controllability. There should be actions that health care systems can take to improve their performance on a measure. If the measure is an outcome measure, there should exist one or more processes that can be controlled by the system that have important effects on the outcome. If the measure is a process measure, the process should be substantially under the control of the system, and there should be a strong link between the process and desired outcomes. If the measure is a structural measure, the structural feature should be open to modification by the system, and there should be a strong link between the structure and desired outcomes. The measure's time period should capture the events that have impact on clinical outcomes and reflect the time horizon over which the health care system had control.

  • Variance among systems. If the primary purpose of the measure is to differentiate among health care systems, then there should be potentially wide variations across systems with respect to the measure.

  • Potential for improvement. If the primary purpose of the measure is to support negotiations between health care systems and purchasers, or to stimulate self-improvement by health care systems, there should be substantial room for systems to improve their performance with respect to the measure.

2.

Scientific soundness

  • Clinical evidence. There should be evidence documenting the links between the interventions, clinical processes, and/or outcomes addressed by the measure.

  • Reproducibility. The measure should produce the same results when repeated in the same population and setting.

  • Validity. The measure should have face validity (i.e., it should make sense logically, clinically, and if it focuses on a financially important aspect of care, financially). It should correlate well with other measures of the same aspects of care (construct validity) and capture meaningful aspects of this care (content validity).

  • Accuracy. The measure should accurately measure what is actually happening.

  • Case-mix adjustment or risk adjustment. Either the measure should not be appreciably affected by any variables that are beyond the health care system's control (“covariates”), or any extraneous factors should be known and measurable. If case-mix and/or risk adjustment are required, there should be well-described methods either for controlling through risk stratification or for using validated models to calculate an adjusted result that corrects for the effects of covariates.

  • Comparability of data sources. The accuracy, reproducibility, risk-adjustability, and validity of the measure should not be affected if different systems have to use different data sources for the measures.

3.

Feasibility

  • Precise specification. The measure should have clear operational definitions, specifications for data sources, and methods for data collection and reporting.

  • Reasonable cost. The measure should not impose an inappropriate burden on health care systems. Either the measures should be inexpensive to produce, or the cost of data collection and reporting should be justified by improvements in outcomes that result from the act of measurement.

  • Confidentiality. The collection of data for the measures should not violate any accepted standards of member confidentiality.

  • Logistical feasibility. The data required for the measure should be available to the health care system during the time allowed for data collection. The measure should not be susceptible to cultural or other barriers that might make data collection infeasible (e.g., in patient or physician surveys there may be cultural or personal barriers that lead to biased responses, and these would have to be addressed).

  • Auditability. The measure should be auditable (i.e., it should not be susceptible to manipulation or “gaming” that would be undetectable in an audit). Methods to verify retrospectively that reported results accurately portray delivered care should be suggested.

SOURCE: National Committee for Quality Assurance, 2000:10–11.
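
The case-mix or risk-adjustment attribute above allows either risk stratification or a validated adjustment model. As a hedged sketch of the simpler, stratification-based option, the fragment below computes an indirectly standardized observed-to-expected (O/E) ratio from age-stratum reference rates. The strata, rates, and counts are invented, and HEDIS specifies adjustment methods measure by measure, so this is not the NCQA procedure.

```python
# Indirect standardization: compare a plan's observed event count with the count
# expected if reference (e.g., all-plan) stratum rates applied to its own case mix.

# Hypothetical reference rates per age stratum (events per eligible member).
reference_rates = {"18-44": 0.02, "45-64": 0.05, "65+": 0.12}

# Hypothetical plan data: eligible members and observed events per stratum.
plan = {
    "18-44": {"eligible": 4000, "events": 70},
    "45-64": {"eligible": 2500, "events": 140},
    "65+":   {"eligible": 1000, "events": 100},
}

observed = sum(s["events"] for s in plan.values())
expected = sum(s["eligible"] * reference_rates[age] for age, s in plan.items())

oe_ratio = observed / expected
print(f"observed={observed}, expected={expected:.1f}, O/E ratio={oe_ratio:.2f}")
# An O/E ratio above 1 suggests more events than the plan's case mix alone would predict.
```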

PRESIDENT'S ADVISORY COMMISSION ON CONSUMER PROTECTION AND QUALITY IN THE HEALTH CARE INDUSTRY

Examples of criteria for evaluating individual measures include scientific soundness (i.e., reliable, valid, appropriately adjusted), importance of the quality concern, relevance to various users, potential to foster improvements in health status or well-being, evidence basis, interpretability, “actionability” (i.e., degree to which steps can be taken to address the concern), feasibility, and ease and cost-effectiveness of measurement.

Examples of criteria for evaluating measurement sets include addressing the full spectrum of health care, incorporating measures of multiple dimensions of quality (e.g., technical quality, accessibility, acceptability), including various types of measures (e.g., structure, process, outcome), representativeness, and measurement burden (i.e., concise, not redundant; measurement can be conducted with a minimal burden on providers and health care organizations).

SOURCE: Advisory Commission on Consumer Protection and Quality in the Health Care Industry, 1998:81.

U.S. DEPARTMENT OF HEALTH AND HUMAN SERVICES: HEALTHY PEOPLE 2010

Criteria for Healthy People 2010 objectives follow.

  • The result to be achieved should be important and understandable to a broad audience and relate to the two overarching Healthy People 2010 goals.

  • Objectives should be prevention-oriented and should address health improvements that can be achieved through population-based and health service interventions.

  • Objectives should drive action and suggest a set of interim steps that will achieve the proposed targets within the specified time frame.

  • Objectives should be useful and relevant. States, localities, and the private sector should be able to use the objectives to target efforts in schools, communities, work sites, health practices, and other settings.

  • Objectives should be measurable and include a range of measures—health outcomes, behavioral and health service interventions, and community capacity—directed toward improving health outcomes and quality of life. They should count assets and achievements and look to the positive.

  • Continuity and comparability are important. Whenever possible, objectives should build on Healthy People 2000 and those goals and performance measures already established.

  • Objectives must be supported by sound scientific evidence.

SOURCE: U.S. Department of Health and Human Services, 2000b:2–4.
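
The measurability criterion above presumes a baseline, a target, and a way to summarize progress. One simple summary, used in roughly this form in Healthy People progress reviews (the exact formula is not quoted in this appendix), is the percentage of the targeted change that has been achieved; the sketch below applies it to invented numbers.

```python
def percent_of_target_achieved(baseline, current, target):
    """Share of the intended baseline-to-target movement achieved so far (in percent)."""
    if target == baseline:
        raise ValueError("target must differ from baseline")
    return 100.0 * (current - baseline) / (target - baseline)

# Hypothetical objective: raise a screening rate from a 60% baseline toward a 90% target.
print(percent_of_target_achieved(baseline=60.0, current=72.0, target=90.0))  # 40.0
```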

NATIONAL HEALTH SERVICE (NHS) PERFORMANCE INDICATORS (UNITED KINGDOM)

Criteria for assessing individual indicators follow.

  • Attributable. Indicators should reflect health and social outcomes that are substantially attributable to the NHS through its roles as service provider, advocate for health, and interagency partner.

  • Important. Indicators should cover an outcome that is relevant and important to policy makers, health professionals, and managers (and resonates with the concerns of the public).

  • Avoids perverse incentives. An indicator should be presented in such a way that managers can act upon it without introducing perverse incentives. There should be no incentive to shift problems onto other organizations. Where this is the case, a counterbalancing indicator should be considered alongside.

  • Robust. Measurement of the indicator should be reliable, and coverage of the outcome measured should be high, although sampling may be appropriate for some indicators. In particular, data should be robust at the level at which performance monitoring is undertaken.

  • Responsive. An indicator should be responsive to change, and change should be measurable. It should not be an indicator in which change will be so small that monitoring trends becomes difficult. Consideration should be given to whether the rate at which change can be expected to occur makes the indicator relevant for performance-monitoring purposes.

  • Usable and timely. Data should be readily available within a reasonable time.

SOURCE: Department of Health, 1999: Appendix B.
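
The "responsive" criterion asks whether a realistic amount of change in an indicator would be measurable rather than lost in sampling noise. A rough check for a rate-based indicator is to compare the expected change against the standard error of the difference between two observed proportions; the sketch below does this with invented figures and a conventional 1.96 multiplier for an approximate 95 percent threshold. It illustrates the general statistical idea rather than any method prescribed by the NHS framework.

```python
from math import sqrt

def detectable(p1, n1, p2, n2, z=1.96):
    """Crude check: is the observed change in a proportion larger than ~2 standard errors?"""
    se_diff = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p2 - p1) > z * se_diff, se_diff

# Hypothetical indicator: rate moves from 8.0% to 7.4% between years, n ~ 5,000 each year.
changed, se = detectable(0.080, 5000, 0.074, 5000)
print(f"standard error of difference = {se:.4f}, change detectable: {changed}")
```

In this example a 0.6 percentage-point change at that sample size falls short of the threshold, which is precisely the situation the criterion warns about.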

LEADING HEALTH INDICATORS FOR HEALTHY PEOPLE 2010

The initial charge to the Institute of Medicine committee from the U.S. Department of Health and Human Services was to recommend at least two potential indicator sets that would “(1) elicit interest and awareness among the general population, (2) motivate diverse population groups to engage in activities that will exert a positive impact on specific indicators and in turn, improve the overall health of the nation, and (3) provide ongoing feedback concerning progress toward improving the status of specific indicators.” The committee was later directed that no more than 10 indicators should be included and that “any proposed indicator set should be supported by a conceptual framework around which the specific indicators could be organized.” The committee had accepted 14 criteria for indicator development, but later decided on a set of 6 simple criteria that would be understandable to the general public.

Criteria for Individual Measures

  • Worth measuring. The indicators represent an important and salient aspect of the public's health.

  • Can be measured for diverse populations. The indicators are valid and reliable for the general population and diverse population groups.

  • Understood by people who need to act. People who need to act on their own behalf or that of others should be able to readily comprehend the indicators and what can be done to improve the status of those indicators.

  • Information will galvanize action. The indicators are of such a nature that action can be taken at the national, state, local, and community levels by individuals as well as organized groups and public and private agencies.

  • Actions that can lead to improvement are known and feasible. There are proven actions (e.g., personal behaviors, implementation of new policies, etc.) that can alter the course of the indicators when widely applied.

  • Measurement over time will reflect results of action. If action is taken, tangible results will be seen indicating improvements in various aspects of the nation's health.

Note: An indicator was required to fulfill all six criteria before it was accepted as a potential indicator for inclusion in a set.

SOURCE: Institute of Medicine, 1999.
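
Because the note above states that an indicator had to fulfill all six criteria before acceptance, the selection rule is effectively a conjunction across criteria. The toy sketch below encodes that rule; the candidate names and ratings are hypothetical worksheet entries, not the committee's actual deliberations.

```python
CRITERIA = (
    "worth_measuring",
    "measurable_for_diverse_populations",
    "understood_by_people_who_need_to_act",
    "galvanizes_action",
    "feasible_actions_known",
    "reflects_results_of_action",
)

# Hypothetical screening worksheet: True means reviewers judged the criterion met.
candidates = {
    "physical activity": dict.fromkeys(CRITERIA, True),
    "hypothetical indicator X": {**dict.fromkeys(CRITERIA, True), "galvanizes_action": False},
}

accepted = [name for name, ratings in candidates.items()
            if all(ratings[c] for c in CRITERIA)]
print(accepted)  # only candidates meeting all six criteria survive
```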

INDICATORS FOR MEASURING HEALTH PERFORMANCE IN THE PUBLIC SECTOR, NATIONAL RESEARCH COUNCIL

The charge to the Panel on Performance Measures and Data for Public Health Performance Partnership Grants (PPGs) of the National Research Council was "to examine the state of the art in performance measurement for public health and to recommend measures that could be used to monitor the Performance Partnership Grant agreements to be negotiated between each state and the federal government." The panel used the following four guidelines to evaluate the proposed measures.

1.

Measures should be aimed at a specific objective and be result oriented. PPG measures must clearly specify a desired public health result, including identifying the population affected and the time frame involved. Process and capacity measures should clearly specify the health outcome, or long-term objective, to which they are thought to be related.

2.

Measures should be meaningful and understandable. Performance measures must be seen as important to both the general public and policy makers at all levels of government, and they should be stated in nontechnical terms.

3.

Data should be adequate to support the measure. Adequate data on the populations of interest must be available for the use of measures and must have the following characteristics:

  • data to track any objective must meet reasonable statistical standards for accuracy and completeness;

  • data to track any objective must be available in a timely fashion, at appropriate periodicity, and at a reasonable cost; and

  • data applied to a specific measure must be collected using similar methods and with a common definition throughout the population of interest.

4.

Measures should be valid, reliable, and responsive. Measures should, as much as possible, capture the essence of what they purport to measure (i.e., be unbiased and valid for their intended purpose), be reproducible (i.e., reliable), and be able to detect movement toward a desired objective (i.e., be responsive).

SOURCE: National Research Council, 1999:9.

REFERENCES

  • Advisory Commission on Consumer Protection and Quality in the Health Care Industry. 1998. Quality First: Better Health Care for All Americans. Washington, D.C.: U.S. Government Printing Office.

  • Department of Health. 1999. Quality and Performance in the NHS: High Level Performance Indicators. London: NHS Executive. Available at: http://www.doh.gov.uk/nhshlpi.htm.

  • Donabedian, Avedis. 1982. The Criteria and Standards of Quality. Ann Arbor, Mich.: Health Administration Press.

  • Federal Interagency Forum on Child and Family Statistics. 2000. America's Children: Key National Indicators of Well-Being. Washington, D.C.: U.S. Government Printing Office.

  • Foundation for Accountability. 1999. Key Questions and Decision Making Criteria, Child and Adolescent Health Measurement Initiative. Living with Illness Task Force Meeting, Portland, Ore.

  • Institute of Medicine. 1999. Leading Health Indicators for Healthy People 2010. Carole A. Chrvala and Roger J. Bulger, eds. Washington, D.C.: National Academy Press.

  • National Committee for Quality Assurance. 2000. HEDIS 2001, Vol. 1. Washington, D.C.: NCQA.

  • National Research Council. 1999. Health Performance Measurement in the Public Sector: Principles and Policies for Implementing an Information Network. Edward B. Perrin, Jane S. Durch, and Susan M. Skillman, eds. Washington, D.C.: National Academy Press.

  • U.S. Department of Health and Human Services. 2000a. Proposed Measure Evaluation and Selection Process Criteria for Evaluating Candidate Measures.

  • U.S. Department of Health and Human Services. 2000b. Healthy People 2010. Washington, D.C.: U.S. Government Printing Office.
