Methodology and Source

This website uses data collected by the New York State Department of Health, the Centers for Disease Control and Prevention, IPRO, and the Centers for Medicare & Medicaid Services. Because these sources collect and report data differently, some measures are presented differently than others.

  • Percent: Scores with a percent sign are simple percentages. For example, a score of 20% means 20 out of every 100 patients.
  • Rates: Scores marked as rates are the number of times something occurred for every 1,000 units of exposure. For example, a rate of 20 infections per 1,000 hours is equivalent to 0.02 infections per hour.
  • Ratio: Scores marked as ratios are compared to a national standard rate of 1.0. For example, a hospital scoring 0.8 falls below the national standard of 1.0.
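
For readers who prefer a concrete calculation, here is a minimal sketch of how the three score formats are derived from raw counts. All numbers are hypothetical and are not drawn from the data sources above.

```python
# Hypothetical counts illustrating the three score formats (not real data).

def percent(events, patients):
    """Simple percentage: events out of every 100 patients."""
    return 100.0 * events / patients

def rate_per_1000(events, exposure_units):
    """Rate: events for every 1,000 units of exposure (e.g., hours or days)."""
    return 1000.0 * events / exposure_units

def ratio(observed, expected):
    """Ratio: observed events relative to the number expected under the national standard of 1.0."""
    return observed / expected

print(percent(20, 100))          # 20.0  -> reported as "20%"
print(rate_per_1000(20, 1000))   # 20.0  -> reported as "20 per 1,000"
print(ratio(8, 10))              # 0.8   -> below the national standard of 1.0
```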

Where possible, we analyze the variation in these scores to determine whether any are significantly better or worse than the national or state average. If we find a significant variation, we mark it with a green symbol ("better than") or a red symbol ("worse than"). For some measures a higher score is better; for others a lower score is better. The better-performing scores on this website appear at the top of the list, and green always signifies better performance than the comparison rate.

If national data are available, a New York hospital's score is compared to all hospitals in the United States. If data are available only for New York, hospitals are compared only to other New York hospitals.

Complications of Care

  • Measure Steward: Agency for Healthcare Research and Quality
  • Measure Specifications: http://www.qualityindicators.ahrq.gov/modules/PSI_TechSpec.aspx
  • Data Source: Claims
  • Data Download: https://health.data.ny.gov/
  • Collection Period: 12 months
  • Update Frequency: Annually
  • Data Presentation: Ratio. If the ratio is greater than 1.0, then the implication is that the provider performed worse than the reference population for that particular indicator. If the ratio is less than 1.0, then the implication is that the provider performed better than the reference population.
  • Risk Adjustment: The measures of serious complications reported are risk adjusted to account for differences in hospital patients' characteristics. In addition, the rates reported are "smoothed" to reflect the fact that measures for small hospitals are measured less accurately (i.e., are less reliable) than for larger hospitals.
  • Performance Categories: To assign hospitals to performance categories, the hospital's interval estimate is compared to the U.S. national rate. If the 95% interval estimate includes the national observed rate for that measure, the hospital's performance is in the "No Significant Difference" category. If the entire 95% interval estimate is below the national rate for that measure, then the hospital is performing "Significantly Better than the National Rate." If the entire 95% interval estimate is above the national rate for that measure, its performance is "Significantly Worse than the National Rate." Hospitals with fewer than 25 eligible cases are placed into a separate category that indicates that the hospital did not have enough cases to reliably tell how well the hospital is performing.
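
The decision rules above can be summarized in a short sketch. The interval bounds, national rate, and case count below are hypothetical placeholders; the real estimates come from AHRQ's risk-adjusted and smoothed models.

```python
# Sketch of the performance-category assignment described above.
# All inputs are hypothetical; real intervals come from AHRQ's models.

def performance_category(ci_low, ci_high, national_rate, eligible_cases):
    if eligible_cases < 25:
        return "Too few cases to reliably assess performance"
    if ci_high < national_rate:         # entire 95% interval below the national rate
        return "Significantly Better than the National Rate"
    if ci_low > national_rate:          # entire 95% interval above the national rate
        return "Significantly Worse than the National Rate"
    return "No Significant Difference"  # interval includes the national rate

print(performance_category(0.6, 0.9, national_rate=1.0, eligible_cases=120))
# -> Significantly Better than the National Rate
```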

Deaths: Cardiac Surgery

  • Measure Steward: New York State Department of Health
  • Measure Specifications: http://www.health.ny.gov/statistics/diseases/cardiovascular/
  • Data Source: Medical Record
  • Data Download: https://health.data.ny.gov/
  • Collection Period: 36 months
  • Update Frequency: Annually
  • Data Presentation: Percentage. The lowest possible score is 0%, and the highest possible score is 100%.
  • Risk Adjustment: As part of the risk-adjustment process, NYS hospitals where cardiac surgery is performed provide information to the Department of Health for each patient undergoing that procedure. Cardiac surgery departments collect data concerning patients' demographic and clinical characteristics. Approximately 40 of these characteristics (called risk factors) are collected for each patient. Along with information about the procedure, physician, and the patient's status at discharge, these data are entered into a computer and sent to the Department of Health for analysis. Data are verified through review of unusual reporting frequencies, cross-matching of cardiac surgery data with other Department of Health databases, and a review of medical records for a selected sample of cases. These activities are very helpful in ensuring consistent interpretation of data elements across hospitals. Mortality rate is based on deaths occurring during the same hospital stay in which a patient underwent cardiac surgery and on deaths that occur after discharge but within 30 days of surgery.
  • Performance Categories: The risk-adjusted mortality rate (RAMR) represents the best estimate (based on the associated statistical model) of what the provider's mortality rate would have been if the provider had a mix of patients identical to the statewide mix. Thus, the RAMR has, to the extent possible, ironed out differences among providers in patient severity of illness, since it arrives at a mortality rate for each provider for an identical group of patients. If the RAMR is significantly lower than the statewide mortality rate, the provider has a significantly better performance than the state as a whole. If the RAMR is significantly higher than the statewide mortality rate, the provider has a significantly worse performance than the state as a whole.
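
The text above describes the RAMR conceptually but does not give its formula. A common construction for this kind of measure, presented here only as an assumption-laden sketch, is the hospital's observed-to-expected mortality ratio scaled by the statewide rate.

```python
# Hedged sketch of a typical risk-adjusted mortality rate (RAMR) calculation.
# The formula below (observed/expected ratio times the statewide rate) is a
# common construction and an assumption here, not a quotation of the NYS model.

def ramr(observed_deaths, expected_deaths, cases, statewide_rate_pct):
    observed_rate = 100.0 * observed_deaths / cases   # observed mortality, %
    expected_rate = 100.0 * expected_deaths / cases   # mortality expected by the risk model, %
    return (observed_rate / expected_rate) * statewide_rate_pct

# Hypothetical hospital: 6 deaths in 400 cases where the risk model expected 8,
# against a statewide rate of 1.8%.
print(round(ramr(6, 8.0, 400, statewide_rate_pct=1.8), 2))  # 1.35 -> below the statewide 1.8%
```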

Deaths: Other Conditions

  • Measure Steward: Agency for Healthcare Research and Quality
  • Measure Specifications: http://www.qualityindicators.ahrq.gov/Modules/IQI_TechSpec.aspx
  • Data Source: Claims
  • Data Download: https://health.data.ny.gov/
  • Collection Period: 12 months
  • Update Frequency: Annually
  • Data Presentation: Ratio. If the ratio is greater than 1.0, then the implication is that the provider performed worse than the reference population for that particular indicator. If the ratio is less than 1.0, then the implication is that the provider performed better than the reference population.
  • Risk Adjustment: The measures reported are risk adjusted to account for differences in hospital patients' characteristics. In addition, the rates reported are "smoothed" to reflect the fact that measures for small hospitals are measured less accurately (i.e., are less reliable) than for larger hospitals.
  • Performance Categories: To assign hospitals to performance categories, the hospital's interval estimate is compared to the U.S. national rate. If the 95% interval estimate includes the national observed rate for that measure, the hospital's performance is in the "No Significant Difference" category. If the entire 95% interval estimate is below the national rate for that measure, then the hospital is performing "Significantly Better than the National Average." If the entire 95% interval estimate is above the national rate for that measure, its performance is "Significantly Worse than the National Average." Hospitals with fewer than 25 eligible cases are placed into a separate category that indicates that the hospital did not have enough cases to reliably tell how well the hospital is performing.

Emergency Department Timeliness

  • Measure Steward: Centers for Medicare & Medicaid Services
  • Measure Specifications: http://www.qualitynet.org/dcs/ContentServer?c=Page&pagename=QnetPublic%2FPage%2FQnetTier2&cid=1141662756099
  • Data Source: Medical Record
  • Data Download: https://data.medicare.gov/data/hospital-compare
  • Collection Period: 12 months
  • Update Frequency: Quarterly
  • Data Presentation: Minutes. These measures calculate time elapsed in whole minutes.
  • Risk Adjustment: None
  • Performance Categories: The interquartile range is calculated by first arranging the data for each measure in ascending order according to value and then dividing it into 4 roughly equal parts or quartiles. The first quartile value (Q1) splits off the lowest 25% of the data values from the highest 75% of the data values. The third quartile value (Q3) splits off the highest 25% of the data values from the lowest 75%. The interquartile range (IQR) is the difference between Q3 and Q1 and shows how spread out the data values are. Outlier values are defined as those being either 1.5 x IQR below Q1 or 1.5 x IQR above Q3. High or low performers for a particular measure are defined as those hospitals having values lying outside the range defined by Q1 – (1.5 x IQR) and Q3 + (1.5 x IQR) for that measure.
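
The interquartile-range rule above translates directly into a few lines of code; the minute values below are hypothetical.

```python
# Sketch of the IQR outlier rule described above, applied to hypothetical
# emergency-department times (in minutes) for a group of hospitals.
import statistics

def iqr_outlier_bounds(values):
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartile cut points
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

minutes = [95, 110, 120, 125, 130, 140, 150, 155, 160, 310]
low, high = iqr_outlier_bounds(minutes)
outliers = [m for m in minutes if m < low or m > high]
print(outliers)  # values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] mark high or low performers
```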

Note: Additional composite scores are calculated by IPRO using methodology prescribed by the Joint Commission. Under this approach, the composite score is the number of times a hospital performed the appropriate action across all measures, divided by the number of opportunities the hospital had to provide appropriate care. A sketch of this calculation follows.
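
```python
# Sketch of the composite score described in the note above: appropriate
# actions performed, divided by opportunities, pooled across measures.
# The counts are hypothetical.

def composite_score(measures):
    """measures: list of (actions_performed, opportunities) pairs, one per measure."""
    performed = sum(p for p, _ in measures)
    opportunities = sum(o for _, o in measures)
    return 100.0 * performed / opportunities if opportunities else None

print(composite_score([(45, 50), (18, 20), (9, 10)]))  # 90.0 -> reported as 90%
```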

Hospital-Acquired Infections

  • Measure Steward: Centers for Disease Control and Prevention
  • Measure Specifications: http://www.cdc.gov/nhsn/acute-care-hospital/index.html
  • Data Source: Medical Record
  • Data Download: https://health.data.ny.gov/Health/Hospital-Acquired-Infections-Beginning-2008/utrt-zdsi
  • Collection Period: 12 months
  • Update Frequency: Annually
  • Data Presentation: Rates, Ratios. Individual measurements are expressed as rates (incidents per 1,000). To combine these different measures into a meaningful score, the overall ("composite") score is a standardized ratio. If the ratio is greater than 1.0, then the implication is that the provider performed worse than the reference population for that particular indicator. If the ratio is less than 1.0, then the implication is that the provider performed better than the reference population.
  • Risk Adjustment: Calculations for the measures adjust for differences in the characteristics of hospitals and patients using a Standardized Infection Ratio (SIR). The SIR is a summary measure that takes into account differences in the types of patients a hospital treats. The SIR may take into account the type of patient care location, number of patients admitted with MRSA or C. difficile, laboratory methods, hospital affiliation with a medical school, bed size of the hospital, patient age, and American Society of Anesthesiologists' (ASA) classification of physical health. It compares the actual number of HAIs in a facility to a national benchmark based on previous years of reported data and adjusts the data based on several factors.
  • Performance Categories: A confidence interval with a lower and upper limit is displayed around each SIR to indicate that there is a high degree of confidence that the true value of the SIR lies within that interval. An SIR with a lower limit that is greater than 1.0 means that there were more HAIs in a facility or state than were predicted, and the facility is classified as "Worse than the U.S. National Benchmark." If the SIR has an upper limit that is less than 1.0, then the facility had fewer HAIs than were predicted and is classified as "Better than the U.S. National Benchmark." If the confidence interval includes the value of 1.0, then there is no statistical difference between the actual number of HAIs and the number predicted, and the facility is classified as "No Different than U.S. National Benchmark." If the number of predicted infections is less than 1, the SIR and confidence interval cannot be calculated.
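
Taken together, the SIR and its classification rules can be sketched as follows; the counts and confidence limits are hypothetical, and the real predicted values come from the risk-adjusted national baseline.

```python
# Sketch of the SIR classification described above, with hypothetical counts.
# Real predicted infection counts come from the risk-adjusted national baseline.

def classify_sir(observed, predicted, ci_low, ci_high):
    if predicted < 1:
        return None, "SIR not calculated (fewer than 1 predicted infection)"
    sir = observed / predicted          # standardized infection ratio
    if ci_high < 1.0:
        return sir, "Better than the U.S. National Benchmark"
    if ci_low > 1.0:
        return sir, "Worse than the U.S. National Benchmark"
    return sir, "No Different than U.S. National Benchmark"

print(classify_sir(4, 6.2, ci_low=0.3, ci_high=0.9))
# -> (0.645..., 'Better than the U.S. National Benchmark')
```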

Patient Satisfaction

  • Measure Steward: Agency for Healthcare Research and Quality
  • Measure Specifications: http://www.qualitynet.org/dcs/ContentServer?c=Page&pagename=QnetPublic%2FPage%2FQnetTier2&cid=1141662756099
  • Data Source: Administrative clinical data, Patient/Individual survey
  • Data Download: https://data.medicare.gov/data/hospital-compare
  • Collection Period: 12 months
  • Update Frequency: Quarterly
  • Data Presentation: Percentage. The lowest possible score is 0%, and the highest possible score is 100%.
  • Risk Adjustment: Preparing the data for public reporting includes taking certain factors into account to ensure fair comparisons among hospitals. For example, the mix of patients can differ from one hospital to the next, and these differences in the patient mix can affect a hospital's HCAHPS results. Patient-mix adjustment takes these differences into account so that the survey results reported on this website are what would be expected for each hospital if all hospitals had a similar mix of patients.
  • Performance Categories: The interquartile range is calculated by first arranging the data for each measure in ascending order according to value and then dividing it into 4 roughly equal parts or quartiles. The first quartile value (Q1) splits off the lowest 25% of the data values from the highest 75% of the data values. The third quartile value (Q3) splits off the highest 25% of the data values from the lowest 75%. The interquartile range (IQR) is the difference between Q3 and Q1 and shows how spread out the data values are. Outlier values are defined as those being either 1.5 x IQR below Q1 or 1.5 x IQR above Q3. High or low performers for a particular measure are defined as those hospitals having values lying outside the range defined by Q1 – (1.5 x IQR) and Q3 + (1.5 x IQR) for that measure.

Readmissions within 30 Days

  • Measure Steward: Centers for Medicare & Medicaid Services
  • Measure Specifications: http://www.qualitynet.org/dcs/ContentServer?c=Page&pagename=QnetPublic%2FPage%2FQnetTier2&cid=1141662756099
  • Data Source: Claims
  • Data Download: https://data.medicare.gov/data/hospital-compare
  • Collection Period: 12 months
  • Update Frequency: Quarterly
  • Data Presentation: Percentage. The lowest possible score is 0%, and the highest possible score is 100%.
  • Risk Adjustment: To make comparisons of hospital performance fair, the 30-day unplanned readmission and mortality (death) measures adjust for patient characteristics that may make death or unplanned readmission more likely, regardless of the quality of care the hospital provided. These characteristics include the patient's age, past medical history, and other diseases or conditions (comorbidities) the patient had on admission that are known to increase the risk of dying or of having an unplanned readmission.
  • Performance Categories: To assign hospitals to performance categories, the hospital's interval estimate is compared to the U.S. national 30-day observed unplanned readmission rate or 30-day observed mortality (death) rate. If the 95% interval estimate includes the national observed rate for that measure, the hospital's performance is in the "No Different than U.S. National Rate" category. If the entire 95% interval estimate is below the national observed rate for that measure, then the hospital is performing "Better than U.S. National Rate." If the entire 95% interval estimate is above the national observed rate for that measure, its performance is "Worse than U.S. National Rate." Hospitals with fewer than 25 eligible cases are placed into a separate category that indicates that the hospital did not have enough cases to reliably tell how well the hospital is performing.

Timely and Effective Care

  • Measure Steward: Centers for Medicare & Medicaid Services
  • Measure Specifications: http://www.qualitynet.org/dcs/ContentServer?c=Page&pagename=QnetPublic%2FPage%2FQnetTier2&cid=1141662756099
  • Data Source: Medical Record
  • Data Download: https://data.medicare.gov/data/hospital-compare
  • Collection Period: 12 months
  • Update Frequency: Quarterly
  • Data Presentation: Percentage. The lowest possible score is 0%, and the highest possible score is 100%.
  • Risk Adjustment: None
  • Performance Categories: To assign hospitals to performance categories, the hospital's interval estimate is compared to the U.S. national rate. If the 95% interval estimate includes the national observed rate for that measure, the hospital's performance is in the "No Significant Difference" category. If the entire 95% interval estimate is below the national rate for that measure, then the hospital is performing "Significantly Better than the National Average." If the entire 95% interval estimate is above the national rate for that measure, its performance is "Significantly Worse than the National Average." Hospitals with fewer than 25 eligible cases are placed into a separate category that indicates that the hospital did not have enough cases to reliably tell how well the hospital is performing.

Note: Additional composite scores are calculated by IPRO using methodology prescribed by the Joint Commission. Under this approach, the composite score is the number of times a hospital performed the appropriate action across all measures, divided by the number of opportunities the hospital had to provide appropriate care.