For example, PVP could be determined for the system's data fields, for each data source or combination of data sources (48), or for specific health-related events. PVP is important because a low value means that noncases might be investigated and that apparent outbreaks might be identified that are not true outbreaks but are instead artifacts of the public health surveillance system.

False-positive reports can lead to unnecessary interventions, and falsely detected outbreaks can lead to costly investigations and undue concern in the population under surveillance. A public health surveillance system with a high PVP will lead to fewer misdirected resources. The PVP reflects the sensitivity and specificity of the case definition (i.e., the criteria for identifying a case). The PVP can improve with increasing specificity of the case definition. In addition, good communication between the persons who report cases and the receiving agency can lead to an improved PVP.
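
As an illustration only (the numbers below are hypothetical, not from the guidelines), the following sketch computes PVP from an assumed sensitivity and specificity of the case definition and an assumed prevalence of the health-related event among persons screened by the system, using Bayes' theorem; it shows how a broader, less specific case definition raises sensitivity but lowers PVP.

    def predictive_value_positive(sensitivity, specificity, prevalence):
        # Bayes' theorem: probability that a reported (test-positive) case is a true case.
        true_positives = sensitivity * prevalence
        false_positives = (1.0 - specificity) * (1.0 - prevalence)
        return true_positives / (true_positives + false_positives)

    prevalence = 0.01  # hypothetical: 1% of screened persons truly have the event
    for sens, spec in [(0.80, 0.999), (0.95, 0.990), (0.99, 0.950)]:
        pvp = predictive_value_positive(sens, spec, prevalence)
        print(f"sensitivity={sens:.2f}  specificity={spec:.3f}  PVP={pvp:.2f}")

In this hypothetical example, loosening the case definition from 99.9% to 95% specificity raises sensitivity from 0.80 to 0.99 but drops the PVP from roughly 0.9 to below 0.2.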

A public health surveillance system that is representative accurately describes the occurrence of a health-related event over time and its distribution in the population by place and person. Representativeness is assessed by comparing the characteristics of reported events with those of all such events that actually occurred. Although the latter information is generally not known, some judgment of the representativeness of surveillance data is possible based on knowledge of characteristics of the population, including age, socioeconomic status, access to health care, and geographic location (60), as well as the clinical course of the disease or other health-related event. Representativeness can be examined through special studies that seek to identify a sample of all cases.

For example, the representativeness of a regional injury surveillance system was examined by using a systematic sample of injured persons. The study examined statistical measures of selected population variables. For many health-related events under surveillance, the proper analysis and interpretation of the data require the calculation of rates.

Updated Guidelines for Evaluating Public Health Surveillance Systems

The denominators for these rate calculations are often obtained from a completely separate data system maintained by another agency (e.g., a census agency). The choice of an appropriate denominator for the rate calculation should be given careful consideration to ensure an accurate representation of the health-related event over time and by place and person. For example, numerators and denominators must be defined comparably across the categories used in the analysis.

In addition, consideration should be given to the selection of the standard population used for the adjustment of rates. To generalize findings from surveillance data to the population at large, the data from a public health surveillance system should accurately reflect the characteristics of the health-related event under surveillance. These characteristics generally relate to time, place, and person.
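
As a minimal sketch of the rate adjustment mentioned above (all age groups, case counts, population denominators, and standard-population weights below are hypothetical), direct age adjustment weights the age-specific rates from the surveillance data by a chosen standard population; choosing a different standard population would yield a different adjusted rate.

    # Direct age adjustment of rates to a chosen standard population (hypothetical data).
    age_groups = ["0-19", "20-64", "65+"]
    cases      = [30, 120, 150]            # events observed under surveillance
    population = [50000, 200000, 50000]    # population denominators for the same area and period
    standard   = [0.27, 0.60, 0.13]        # standard-population weights (sum to 1)

    age_specific_rates = [c / p for c, p in zip(cases, population)]
    adjusted_rate = sum(w * r for w, r in zip(standard, age_specific_rates))
    crude_rate = sum(cases) / sum(population)

    print(f"crude rate per 100,000:    {crude_rate * 1e5:.1f}")
    print(f"adjusted rate per 100,000: {adjusted_rate * 1e5:.1f}")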

An important result of evaluating the representativeness of a surveillance system is the identification of population subgroups that might be systematically excluded from the reporting system through inadequate methods of monitoring them. This evaluation process enables appropriate modification of data collection procedures and more accurate projection of the incidence of the health-related event in the target population. For certain health-related events, the accurate description of the event over time involves targeting appropriate points in a broad spectrum of exposure and the resultant disease or condition.

In the surveillance of cardiovascular diseases, for example, it might be useful to distinguish among preexposure conditions, risk factor behaviors, and the resultant disease. Because surveillance data are used to identify groups at high risk and to target and evaluate interventions, being aware of the strengths and limitations of the system's data is important. Errors and bias can be introduced into the system at any stage. For example, case ascertainment or selection bias can result from changes in reporting practices over time or from differences in reporting practices by geographic location or by health-care providers.

Differential reporting among population subgroups can result in misleading conclusions about the health-related event under surveillance.

Timeliness reflects the speed between steps in a public health surveillance system. A simplified example of the steps in a public health surveillance system is included in this report (Figure 2). The time interval linking any two of these steps can be examined.

The interval usually considered first is the amount of time between the onset of a health-related event and the reporting of that event to the public health agency responsible for instituting control and prevention measures. Another aspect of timeliness is the time required for the identification of trends, outbreaks, or the effect of control and prevention measures. Factors that influence the identification process can include the severity and communicability of the health-related event, staffing of the responsible public health agency, and communication among involved health agencies and organizations.

The most relevant time interval might vary with the type of health-related event under surveillance. With acute or infectious diseases, for example, the interval from the onset of symptoms or the date of exposure might be used. With chronic diseases, it might be more useful to look at elapsed time from diagnosis rather than from the date of symptom onset.

The timeliness of a public health surveillance system should be evaluated in terms of the availability of information for control of a health-related event, including immediate control efforts, prevention of continued exposure, or program planning. The need for rapidity of response in a surveillance system depends on the nature of the health-related event under surveillance and the objectives of that system. A study of a public health surveillance system for Shigella infections, for example, indicated that the typical case of shigellosis was brought to the attention of health officials 11 days after onset of symptoms, a period sufficient for the occurrence of secondary and tertiary transmission.

This example indicates that the level of timeliness was not satisfactory for effective disease control. However, when a long period of latency occurs between exposure and appearance of disease, the rapid identification of cases of illness might not be as important as the rapid availability of exposure data to provide a basis for interrupting and preventing the exposures that lead to disease. For example, children with elevated blood lead levels and no clinically apparent illness are at risk for adverse health-related events.

CDC recommends that follow-up of asymptomatic children with elevated blood lead levels include educational activities regarding lead poisoning prevention as well as investigation and remediation of sources of lead exposure. In addition, surveillance data are being used by public health agencies to track progress toward national and state health objectives (38). The increasing use of electronic data collection from reporting sources can improve the timeliness of a public health surveillance system.

Stability refers to the reliability (i.e., the ability to collect, manage, and provide data properly without failure) and availability (i.e., the ability to be operational when needed) of the public health surveillance system. Measures of the system's stability can include the number of unscheduled outages and down times for the system's computer; the costs involved with any repair of the system's computer, including parts, service, and the amount of time required for the repair; the percentage of time the system is operating fully; the desired and actual amount of time required for the system to collect or receive data; the desired and actual amount of time required for the system to manage the data, including transfer, entry, editing, storage, and back-up of data; and the desired and actual amount of time required for the system to release data.
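
As a rough sketch of how a few of these stability measures might be computed from an outage log (the log entries and dates below are hypothetical), the number of unscheduled outages, the percentage of time the system is fully operating, and the average repair time can be derived as follows.

    from datetime import datetime

    # Hypothetical unscheduled outages for a surveillance information system.
    outages = [
        (datetime(2024, 1, 3, 9, 0), datetime(2024, 1, 3, 13, 0)),
        (datetime(2024, 2, 10, 22, 0), datetime(2024, 2, 11, 2, 30)),
    ]
    period_start = datetime(2024, 1, 1)
    period_end = datetime(2024, 3, 31)

    total_hours = (period_end - period_start).total_seconds() / 3600
    down_hours = sum((end - start).total_seconds() / 3600 for start, end in outages)

    print(f"number of unscheduled outages:   {len(outages)}")
    print(f"percent of time fully operating: {100 * (1 - down_hours / total_hours):.2f}%")
    print(f"mean hours to restore service:   {down_hours / len(outages):.1f}")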

A lack of dedicated resources might affect the stability of a public health surveillance system. For example, workforce shortages can threaten reliability and availability. Yet, regardless of the health-related event being monitored, a stable performance is crucial to the viability of the surveillance system.

Unreliable and unavailable surveillance systems can delay or prevent necessary public health action. A more formal assessment of the system's stability could be made through modeling procedures. However, a more useful approach might involve assessing stability based on the purpose and objectives of the system.

Conclusions from the evaluation can be justified through appropriate analysis, synthesis, interpretation, and judgment of the gathered evidence regarding the performance of the public health surveillance system (Task D). Because the stakeholders (Task A) must agree that the conclusions are justified before they will use findings from the evaluation with confidence, the gathered evidence should be linked to their relevant standards for assessing the system's performance (Task C). In addition, the conclusions should state whether the surveillance system is addressing an important public health problem (Task B).

Before recommending modifications to a system, the evaluation should consider the interdependence of the system's costs and attributes (Task B). Strengthening one system attribute could adversely affect another attribute of a higher priority.

Efforts to improve sensitivity, PVP, representativeness, timeliness, and stability can increase the cost of a surveillance system, although savings in efficiency from computer technology might offset some of these costs. However, as sensitivity increases, PVP might decrease. Efforts to increase sensitivity and PVP might also increase the complexity of a surveillance system, potentially decreasing its acceptability, timeliness, and flexibility. In a study comparing health-department-initiated active surveillance and provider-initiated passive surveillance, for example, the active surveillance did not improve timeliness despite increased sensitivity. In addition, the recommendations can address concerns about ethical obligations in operating the system. In some instances, conclusions from the evaluation indicate that the most appropriate recommendation is to discontinue the public health surveillance system; however, this type of recommendation should be considered carefully before it is issued.

The cost of renewing a system that has been discontinued could be substantially greater than the cost of maintaining it. The stakeholders in the evaluation should consider relevant public health and other consequences of discontinuing a surveillance system. Deliberate effort is needed to ensure that the findings from a public health surveillance system evaluation are used and disseminated appropriately.

When the evaluation design is focused (Task C), the stakeholders (Task A) can comment on decisions that might affect the likelihood of gathering credible evidence regarding the system's performance. During the implementation of the evaluation (Tasks D and E), it might be necessary to consider how potential findings, particularly negative findings, could affect decisions made about the surveillance system. When conclusions from the evaluation and recommendations are made (Task E), follow-up might be necessary to remind intended users of their planned uses and to prevent lessons learned from becoming lost or ignored.

Strategies for communicating the findings from the evaluation and recommendations should be tailored to relevant audiences, including persons who provided data used for the evaluation. In the public health community, for example, a formal written report or oral presentation might be an important, but not necessarily the only, means of communicating findings and recommendations from the evaluation to relevant audiences.

Several examples of formal written reports of surveillance evaluations have been published in peer-reviewed journals (51,53,57,59). However, these guidelines could also be applied to related systems, including health information systems used for public health action, surveillance systems that are being pilot tested, and information systems at individual hospitals or health-care centers.

Additional information can also be useful for planning, establishing, and efficiently and effectively monitoring a public health surveillance system. To promote the best use of public health resources, all public health surveillance systems should be evaluated periodically. No perfect system exists, however, and tradeoffs must always be made. Each system is unique and must balance benefit against the personnel, resources, and cost allocated to each of its components if the system is to achieve its intended purpose and objectives.

Evaluation of reporting timeliness of public health surveillance systems for infectious diseases

The appropriate evaluation of public health surveillance systems becomes paramount as these systems adapt to revised case definitions, new health-related events, new information technology (including standards for data collection and sharing), and current requirements for protecting patient privacy, data confidentiality, and system security. The goal of this report has been to make the evaluation process inclusive, explicit, and objective. Yet, this report has presented guidelines, not absolutes, for the evaluation of public health surveillance systems.

Progress in surveillance theory, technology, and practice continues to occur, and guidelines for evaluating a surveillance system will necessarily evolve.


In addition, studies describing the timeliness of syndromic surveillance systems were excluded. Information available for assessing NNDSS reporting timeliness includes the Morbidity and Mortality Weekly Report (MMWR) week number the state assigns to each case and the earliest known date associated with the case, drawn from a hierarchical list of date types. National reporting delay was calculated as the difference in days between the midpoint of the MMWR week and the earliest known date reported in association with the case.
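
A minimal sketch of this delay calculation, assuming the Sunday start date of the assigned MMWR week is already known (MMWR weeks run Sunday through Saturday, so the midpoint is taken here as the Wednesday; deriving MMWR week numbers from calendar dates is not shown), might look like the following.

    from datetime import date, timedelta

    def reporting_delay_days(mmwr_week_start: date, earliest_known_date: date) -> int:
        # Delay = midpoint of the assigned MMWR week (Wednesday of a Sunday-Saturday week)
        # minus the earliest known date associated with the case.
        mmwr_week_midpoint = mmwr_week_start + timedelta(days=3)
        return (mmwr_week_midpoint - earliest_known_date).days

    # Hypothetical case: onset on 2002-06-03, assigned to the MMWR week starting Sunday 2002-06-16.
    print(reporting_delay_days(date(2002, 6, 16), date(2002, 6, 3)))  # 16 days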

National median reporting timeliness was calculated overall for the years studied, for each disease in our study, by date type and state, and across all states. To assess whether analysis of NNDSS data could support the timely identification of multistate outbreaks at the national level, the percentage of NNDSS case reports received within one to two incubation periods was determined for each of the diseases.
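
A sketch of that percentage calculation, with hypothetical reporting delays and an assumed incubation period, might look like this.

    def percent_within(delays_days, incubation_days, multiple=1):
        # Share of case reports received within `multiple` incubation periods.
        threshold = multiple * incubation_days
        within = sum(1 for d in delays_days if d <= threshold)
        return 100.0 * within / len(delays_days)

    delays = [5, 9, 12, 15, 21, 30]  # hypothetical reporting delays, in days
    print(percent_within(delays, incubation_days=7, multiple=1))  # ~16.7%
    print(percent_within(delays, incubation_days=7, multiple=2))  # 50.0%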

Incubation periods were used as a surrogate measure for the period of communicability, which is critical to consider when implementing effective, disease-specific prevention and control measures. For this analysis, estimated incubation periods were used for the seven nationally notifiable infectious diseases selected for this study. These diseases were selected because they were confirmed on the basis of laboratory criteria; they have the potential to occur in epidemics; they were designated nationally notifiable five years or more before the study period began; and the magnitude of reported disease incidence supported this analysis.

Only finalized case-specific data reported from U.S. jurisdictions were included, and data were analyzed for three MMWR years. Eight papers were identified that met the inclusion criteria for this study (Table 2; see Additional file). Seven of the eight papers were identified through the literature review; an additional paper was identified from the review of reference lists of studies identified through the Medline search and of studies citing CDC's evaluation guidelines [3, 7]. Three of the eight papers in this study assessed national reporting timeliness; the remaining five papers focused on local or state reporting timeliness.

The studies of national reporting timeliness focused on nationally notifiable infectious diseases. The studies of local or state reporting timeliness analyzed data for AIDS [14, 15], tuberculosis [13], influenza-like illness [10], and meningococcal disease [12]. In seven of the eight papers, timeliness was calculated as the median reporting delay between the date of disease occurrence (e.g., date of onset or diagnosis) and the date the case report was received by the public health agency. In one study [10], epidemic curves were compared for two influenza surveillance systems, and timeliness was assessed as the time interval between the epidemic peaks noted in each system.

In addition, two studies described the factors associated with delayed reporting [ 13 , 15 ]. Seven of the eight studies addressed whether the calculated timeliness measure met the needs of the surveillance process being evaluated [ 10 , 12 - 17 ]. Measured timeliness was compared with recommended reporting timeliness in two papers — a national recommendation for local tuberculosis reporting timeliness [ 13 ] and a state mandate for reporting meningococcal disease cases to local public health [ 12 ].

The adequacy of the timeliness measure for the surveillance purpose was also assessed in other ways. The reporting timeliness of AIDS and bacterial meningitis (including meningococcal disease) surveillance systems was assessed more frequently than that of systems for other infectious diseases. The AIDS reporting timeliness studies indicate that local and national AIDS reporting timeliness meets the goals of the AIDS surveillance systems (monitoring trends, targeting prevention programs, estimating needs for medical and social services, and allocating resources) [14, 15, 17].

Evaluation of Tennessee's Neisseria meningitidis infection surveillance system indicated that the lengthy reporting interval limited the usefulness of the system for supporting rapid response for control and prevention [16]. In addition, on the basis of nationally notifiable infectious disease data, bacterial meningitis had the shortest median reporting delay (20 days) of the infectious diseases studied [11]. The definition of the reference dates used in the timeliness evaluations varied. The initial date associated with the case varied among date of disease onset, date of diagnosis, and date of positive culture result.

The ending date for the timeliness studies evaluated was the date the case report was received by the public health system, whether at the local, state, or national level. For national evaluations of timeliness, the time period assessed was either the sum of Intervals 1, 2, 3, and 4, or only Intervals 2, 3, and 4, with or without inclusion of Intervals 5, 6, 7, and 8, depending on state protocol.

Cases of Escherichia coli O157:H7 infection, acute hepatitis A virus infection, meningococcal disease, pertussis, salmonellosis, and shigellosis were reported to NNDSS during the study period. For cases reported with a disease onset date, the median reporting delay across all reporting states varied from 12 days for meningococcal disease to 40 days for pertussis.
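
A sketch of how such medians might be computed from line-listed case records is shown below; the column names and values are illustrative, not the NNDSS data structure.

    import pandas as pd

    # Hypothetical line-listed case records with precomputed reporting delays.
    cases = pd.DataFrame({
        "disease":    ["pertussis", "pertussis", "meningococcal disease", "shigellosis"],
        "state":      ["A", "B", "A", "B"],
        "date_type":  ["onset", "onset", "lab result", "lab result"],
        "delay_days": [35, 44, 9, 11],
    })

    # Median reporting delay by disease, and by disease, date type, and state.
    print(cases.groupby("disease")["delay_days"].median())
    print(cases.groupby(["disease", "date_type", "state"])["delay_days"].median())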

For cases reported with a laboratory result date, the median reporting delay varied from 10 days for both meningococcal disease and shigellosis to 19 days for pertussis. For example, for meningococcal disease cases reported with a laboratory result date, the state-specific median reporting delay varied from 2 days in one state to far longer in another. (Table: Timeliness of reporting of selected nationally notifiable infectious diseases, by date type, NNDSS; incubation periods per the Control of Communicable Diseases Manual, 17th Edition [8].)

Excluding cases with implausibly long delays as data entry errors, the maximum state-specific median reporting delay was 78 days.

In addition, the state-specific percentage of cases reported within one or two incubation periods varied for a given disease and date type (data not shown). The NNDSS meningococcal disease median reporting interval between date of disease onset and date of report to CDC in this study was 8 days shorter than that reported in a previous study of notifiable disease data for bacterial meningitis (median 20 days) [11], and the meningococcal disease median reporting delay in this study was 9 days shorter than in a previous study using Tennessee's data for Neisseria meningitidis infection (median 21 days) [16].

In addition, the median reporting delay between disease onset and the date of report to CDC was shorter in this study than in a previous study that used notifiable disease data: by 10 days for hepatitis A, 5 days for salmonellosis, and 8 days for shigellosis [11]. Few published studies evaluating surveillance systems presented timeliness measures. When timeliness was evaluated, standard methods were not used. Information collected by public health surveillance systems should support the quantitative assessment of timeliness at various steps in the public health surveillance process.

Public health programs should periodically assess timeliness of specific steps in the surveillance system process to ensure that the objectives of the surveillance system are being met. A more structured approach to describing timeliness studies should be considered. Published papers describing local or state surveillance system reporting timeliness generally do not explicitly describe the surveillance system processes contributing to the timeliness measure, such as processing and analyzing the data or implementing a public health action before data are reported from a state to CDC.

To facilitate future comparisons of reporting timeliness across jurisdictions, studies should include an explicit description of the public health surveillance reporting process and the surveillance process interval being measured. Additionally, surveillance information systems must support the collection of appropriate reference dates to allow the assessment of the timeliness of specific surveillance processes.

A more structured approach to describing timeliness studies could include a description of such characteristics as the reporting process evaluated and the surveillance interval being measured. No single timeliness measure will achieve the purpose of all evaluations or meet all the goals of the surveillance system. In addition, if the goal of the surveillance evaluation is to identify ways to improve timeliness, the analysis should identify factors associated with delayed reporting, such as the role of specific case ascertainment sources.
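
One way to make such descriptions structured and comparable across studies would be to capture them in a fixed set of fields; the fields below are illustrative suggestions, not a published standard.

    from dataclasses import dataclass

    @dataclass
    class TimelinessStudyDescription:
        # Illustrative fields for describing a reporting-timeliness evaluation.
        surveillance_system: str     # e.g., NNDSS or a state registry
        diseases: list               # conditions whose reporting was evaluated
        jurisdiction_level: str      # "local", "state", or "national"
        start_date_definition: str   # onset, diagnosis, or laboratory result date
        end_date_definition: str     # date received by local, state, or national agency
        interval_measured: str       # which step(s) of the reporting process are covered
        timeliness_statistic: str    # e.g., median delay in days
        comparison_standard: str     # recommendation, mandate, or objective compared against

    example = TimelinessStudyDescription(
        surveillance_system="NNDSS",
        diseases=["meningococcal disease"],
        jurisdiction_level="national",
        start_date_definition="earliest known date (onset, diagnosis, or lab result)",
        end_date_definition="date the case report was received at the national level",
        interval_measured="onset to national report",
        timeliness_statistic="median delay in days",
        comparison_standard="one to two incubation periods",
    )
    print(example)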

The national notifiable diseases data analyzed in this study were timely enough to support several surveillance objectives. If NNDSS data are to be used to support timely identification of and response to multistate outbreaks at the national level, the timeliness of reporting needs to be enhanced for all diseases, but especially for diseases with the shortest incubation periods (e.g., E. coli O157:H7 infection, meningococcal disease, salmonellosis, and shigellosis). Until reporting timeliness is enhanced, applying aberration detection analytic methods to NNDSS data, to identify changes in disease reporting that may indicate a multistate outbreak in time to alert states for disease control and prevention, may be of limited use.
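
For context, the sketch below shows one simple aberration-detection approach (a moving-baseline mean-plus-three-standard-deviations threshold, similar in spirit to the EARS C1 method) applied to hypothetical weekly counts; reporting delays of the magnitude described above would postpone the week in which such a flag could first be raised.

    from statistics import mean, stdev

    def flag_aberrations(weekly_counts, baseline_weeks=7, threshold_sd=3.0):
        # Flag weeks whose count exceeds the baseline mean plus threshold_sd baseline SDs.
        flags = []
        for i in range(baseline_weeks, len(weekly_counts)):
            baseline = weekly_counts[i - baseline_weeks:i]
            limit = mean(baseline) + threshold_sd * max(stdev(baseline), 1.0)
            flags.append((i, weekly_counts[i] > limit))
        return flags

    # Hypothetical weekly case counts with a jump in the final week.
    counts = [4, 5, 3, 6, 4, 5, 4, 5, 4, 18]
    print(flag_aberrations(counts))  # only the final week is flagged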

Future work to improve reporting timeliness will need to address the substantial variation across states. As states enhance their reporting mechanisms with the use of automated electronic laboratory reporting systems [18], there may be less variation in state-specific reporting timeliness, but this should be assessed. NNDSS timeliness improved compared with the timeliness of notifiable infectious disease reporting measured in previous reports [11, 16]; however, the methods and variables used in these analyses differed.

A few factors may have contributed to the improvements in timeliness seen in this study. States have for several years been routinely transmitting electronic case-specific records, a practice intended to improve reporting procedures and protocols. In addition, the use of automated electronic laboratory reporting to enhance infectious disease case reporting may have contributed to increased timeliness. Our study findings are subject to several limitations.