San Gorgonio Memorial Hospital was recently awarded three out of five stars by U.S. News & World Report for “Patient Experience,” and earned a “High Performing” rating for knee replacements, though it was rated “Average” for hip replacement, heart failure and chronic obstructive pulmonary disease.
Denver-based Healthgrades Inc., which compiles user-friendly data on doctors, dentists and hospitals drawn from satisfaction surveys of more than 100 million users, shows patients giving San Gorgonio Memorial Hospital a 71 percent “patient experience” rating, with another 68 percent saying they “Would definitely recommend” the hospital; both figures are 2 percent higher than the national average, as defined by Healthgrades.
When it comes to hospital staffing, the Leapfrog Group gave San Gorgonio a “D” in its twice-yearly safety rankings released last fall, and also gave the hospital low grades for safe medication administration, communication about medicines, collapsed lungs, dangerous blood clots, and Clostridium difficile infections, a bacterial illness that can arise after certain antibiotics are administered.
Yet when it comes to overall patient experience and quality of care, our local hospital gets a three-star rating out of five from the Centers for Medicare & Medicaid Services Hospital Compare program.
So what is a potential patient to think about how the world views the quality and safety of a hospital, and how should they determine whether, insurance notwithstanding, they should go there for their needs when their life may depend on it?
These are just a few of the ratings organizations the Record Gazette has selected to focus on for this report; a variety of others, such as Vitals, Rate MDs and even Yelp, also provide data and satisfaction reviews.
Each of the three major ratings systems has its strengths and weaknesses, so it depends on which program one wishes to pay closest attention to.
Hospitals are not always excited to receive rankings, and sometimes avoid publicity about them altogether; conversely, they are quick to tout ratings that are exceptional.
When questioned about San Gorgonio Memorial Hospital’s latest D safety rating, CEO Steve Barron said that his hospital “does not participate in the Leapfrog survey,” claiming that “Fewer than 50 percent of the hospitals in the U.S. do. They still give us a grade, but it’s based on incomplete information.”
Barron prefers to pay attention to CMS’s Hospital Compare program instead, since “All hospitals participate in this, and the information — unlike Leapfrog — is actually audited and verified,” Barron said, pointing out that instead of a D rating, CMS, which awards stars, gave San G three out of five.
Bill Hobbs, a Banning resident who occasionally attends San Gorgonio’s hospital board meetings, has heard Barron’s response to Leapfrog ratings before and doesn’t buy it.
“I’ve done research on The Leapfrog Group and I’m impressed with the nonprofit’s reporting of safety issues among our hospitals,” Hobbs says. “San G did get an F several years ago, followed with two C ratings. I remember Steve Barron fall of last year defending the rating by stating that not all answers are submitted” at Leapfrog’s request. “Well, that is a poor excuse. If other hospitals can do it, why not San G? Redlands Community Hospital received an A, in addition to other small hospitals.”
Journal examines equity in the ratings processes
In August 2019 the New England Journal of Medicine Catalyst, a peer-reviewed bimonthly publication of the Massachusetts Medical Society, assessed the reporting parties in a study, “Rating the Raters: An Evaluation of Publicly Reported Hospital Quality Rating Systems.”
In its report, their article points out that “there are several issues” limiting each rating system, such as data limitation, “lack of robust data audits, composite measure development, measuring diverse hospital types together, and lack of formal peer review of their methods.”
Further, the journal concedes that there are conflicting ratings: “Hospitals rated highly on one publicly reported hospital quality system are often rated poorly on another.”
Since each of the ratings systems offers its own insights into care and outcomes at Redlands Community Hospital, Loma Linda University Medical Center and San Gorgonio Memorial Hospital, the Record Gazette has put together this limited report; there is a lot of data, broken down into various categories by each rating system, to analyze and compare between the institutions.
The journal outlined its methodology and criteria for evaluating the ratings systems, and received reviews of factual information from each rating system.
It offered its own ratings of the raters it analyzed, and noted that none received an A or an F. The highest grade the journal issued was a B, to U.S. News & World Report; the lowest was a D+, to Healthgrades.
Its six-member evaluation team remarked in its conclusions that “We qualitatively agreed that the U.S. News rating system had the least chance of misclassifying hospital performance.”
Further, the journal noted, “Each rating system had unique weaknesses that led to potential misclassification of hospital performance, ranging from inclusion of flawed measures, use of proprietary data that are not validated, and methodological decisions.”
Lumping all hospitals together can unfairly penalize smaller hospitals, the journal warns: they cannot adequately be identified as poor or good performers, their needs for improvement may go unreflected, and the resulting ratings can mislead patients.
In an article, the New England Journal of Medicine Catalyst outlined “Common Issues Across Most Rating Systems,” exploring deficiencies in the processes encountered when rating hospitals.
Administrative data collected for billing rather than clinical purposes have their shortcomings, the journal explained; sometimes data is “limited to those 65 and older who participate in the Medicare Fee-for-Service program” and does not necessarily allow for valid risk adjustment.
Rating systems are generally not peer-reviewed, unlike articles that would appear in the journal, and some are potentially biased because hospitals pay rating systems for the right to use ratings in marketing and advertising campaigns, the journal points out.
The journal notes that “Rating systems often have difficulty handling outcomes measurement at smaller hospitals, which have lower volumes and therefore less reliable performance estimates.”
Some rating systems’ methods were lauded for their even-handed assessment of readmissions and mortality rates.
CMS Hospital Compare’s star ratings have sway because “they are put forth by a federal agency and the largest payer in the country,” NEJM notes. CMS does not monetize its rating system, and its website is “commended for usability and facilitating comparisons between hospitals.”
On the other hand, “A major shortcoming in many current rating systems continues to be the lack of external peer review and validation of the methods,” and “there is likely a high rate of misclassifying hospital performance given the inclusion and comparison of a heterogeneous collection of hospitals into a single group.”
Also, according to NEJM, “the weighting appears fairly arbitrary,” and “there are few, if any, diagnosis- or procedure-specific measures for elective conditions.”
The field needs better data, the journal states: data that is not reliant on self-reported information and not gleaned heavily from Medicare claims, but rather drawn from “all-payer data.”
Ratings systems, the journal insisted, need “meaningful audits” and “external peer review.”
The ratings systems need to better leverage data to create “quality measures that are valid, valuable, and timely,” the journal urged, rather than relying on currently available measures that “fall far short in many domains and suffer from inadequate risk adjustment, questionable relationship to outcomes, and unacceptable lag times.”
Further, the journal concedes, there are areas of quality missing from the ratings systems, such as the collection and analysis of patient-reported outcomes, which are distinct from patient experience measures.
Also, the journal notes, there are no long-term measures included, such as cancer recurrence and survival.
The journal’s ratings for the raters

Healthgrades: D+
Healthgrades is praised for having “procedure- and condition-specific rankings that offer more granular information to patients in selecting a hospital, and to hospitals seeking to identify improvement targets,” the New England Journal of Medicine Catalyst reports.
Healthgrades, which portrays 71 percent of patients who have used San Gorgonio Memorial Hospital giving it a 9 or 10 out of 10 patient experience rating, gets kudos for insisting on using its own analyses instead of regurgitating data from others.
The journal had concerns about Healthgrades’ composite measure, pointing out that it only contains outcome measures and omits other components that measure quality; further, some data was only available for certain states, rather than nationally.
The journal claimed that Healthgrades’ “methods are not sufficiently described to allow replication and evaluation. An arbitrary 90 percent confidence interval is employed to identify outliers on individual measures,” and “Healthgrades also evaluates all hospital types together, leading to misclassification concerns.”
Healthgrades also had inconsistencies in the codes counted as “complications,” many of which would be unrelated to primary diagnoses or procedures, according to the journal.
For Healthgrades, the journal lists these “Pros”: procedure- and condition-specific rankings; data-based weighting of performance; and it notes that Healthgrades has become more transparent over time.
Cons for Healthgrades from the journal: less transparent than other systems; no public information on how they monetize their product, and inadequate information to judge validity and appropriateness of methodological decisions; evaluates all hospital types together; many measures lead to paradoxical misclassification; some complications are not related or relevant to the procedure being rated.
CMS Star ratings: C
The journal notes that the Centers for Medicare and Medicaid Services (CMS) Star Ratings “carry considerable weight as they are put forth by a federal agency and the largest payer in the country.”
CMS gave four stars across the board in almost every category from infections and patient safety to emergency department care and value of care for hip and knee replacements to Redlands Community Hospital, whereas Loma Linda University and San Gorgonio Memorial Hospital consistently received three stars in those same categories.
CMS sets a standard, the journal notes, as its ratings “have an important influence on other rating systems.”
The journal credits CMS’s code as accessible and easily replicated for others to analyze; CMS does not monetize its rating system, and its website is lauded for “usability and facilitating comparisons between hospitals.”
However, according to the journal, CMS has “several weaknesses” when it comes to its star ratings: “There is a likely high rate of misclassifying hospital performance,” “weighting appears fairly arbitrary,” and “there are few — if any — diagnosis- or procedure-specific measures for elective conditions,” noting that “many of the disease-specific measures are for non-elective admissions like myocardial infarction,” (essentially referring to heart attacks) which patients do not have the luxury of comparing hospitals for ahead of time.
Also, “CMS continues to use several measures that other rating systems have deemed not valid for comparing hospital quality and excluded from their rating system,” though CMS can face statutory limitations on what it may exclude.
For those reasons, the journal concludes, “There is considerable opportunity and need to improve multiple aspects of the highly visible and influential federal rating system from the largest rating system in the country.”
The journal’s “Pros” for CMS Hospital Compare Star Ratings: compiled by largest payer in U.S.; incorporates process, outcomes and patient experience measures; there are some data integrity checks in place to determine anomalous data; they have assembled multiple technical expert panels of diverse stakeholders; data and statistical code is made available for some ability to replicate analysis; extensive methodology description; and no monetization of the ratings.
“Cons” reported by the journal include: lack of transparency in how all measures are weighted; unclear rationale for some methodological decisions; concerns regarding incorporation of feedback from recent literature and from technical expert panels; measures lead to paradoxical misclassification; concerns over the way it utilizes National Healthcare Safety Network measures; attempts to measure diverse hospital types together; no inclusion of clinical registry measures; inclusion of “relatively unimportant imaging measures”; no robust data audit process; data lag can be up to three years from collection to release; few elective condition- or procedure-specific measures.
Leapfrog Group: C-
Leapfrog was lauded by the New England Journal of Medicine Catalyst for taking “a balanced measurement approach” to quality, incorporating structure, process, outcomes, and patient experience, and hospitals “receive a calculator to replicate or predict scores.”
No other rating system, the journal concedes, “includes an assessment of the culture of safety.”
For the period surveyed, July 1, 2016 to June 30, 2018, the Leapfrog Group assigned San Gorgonio Memorial Hospital a low score of 5 (the best-performing hospitals receive 100) for entering doctors’ medication orders by computer, safe medication administration, and access to specially trained doctors providing care in the intensive care unit.
San Gorgonio Memorial Hospital board member Georgia Sobiech, a retired nurse, reiterated her institution’s stance on the Leapfrog ratings: “We do not participate in the Leapfrog survey. They don’t conduct surveys as others do, and they don’t let the public know when a hospital (and there are several) doesn’t choose to participate.”
Leapfrog’s Safety Survey, used to designate its Safety Scores and Top Hospitals designations, is self-reported; “there is not a robust audit in place,” and the audits that have been done were performed on “very few hospitals,” NEJM Catalyst claims.
In the journal’s interviews with Leapfrog leadership, it learned that only five of 2,600 hospitals received a formal audit, and only 72 underwent an electronic audit.
“Concerns were also raised about the value of many of the items on the survey, as they may not truly reflect patient safety efforts, or be meaningful to stakeholders,” the journal reported.
Aspects of quality are excluded from Leapfrog’s ratings, such as mortality, as “their team believes it is ‘not a safety metric,’” the journal declared.
The journal took issue with Leapfrog lumping all hospital types together for the Safety Score, and noted that patient-reported outcomes, which the journal deems “important measures of successful healthcare management … are completely absent from all publicly available hospital quality rating systems.”
Another concern for the journal’s reviewers was that Leapfrog cannot adequately assess hospitals that do not respond to its Safety Survey; Leapfrog resorts to secondary sources to fill in those blanks.
According to the journal, approximately 50 percent of hospitals respond to Leapfrog’s surveys, “so a good deal of the rankings are based on missing or inconsistent data.”
The journal notes, “Leapfrog uses unadjusted, internally developed central line infection and urinary tract infection measures rather than other more standard measures. While flawed for hospital quality comparisons, the (National Healthcare Safety Network) measures are at least somewhat standardized and have some minimal risk adjustment.”
The journal also found it “antithetical” to include mortality as a component in Leapfrog’s Top Hospitals list, since it appeared to be a subjective component for an objective hospital quality rating.
The journal’s listing of “Pros” for Leapfrog Group: focus on safety; includes assessment of culture of safety; incorporates process, structural outcomes, and patient experience measures; scientifically rigorous composite methodology; impact score for weighting approach is available and equally applied based on expert evaluation of impact, opportunity, and evidence base of measure; calculator available to hospitals to replicate measures, scores and weights; Top Hospitals rating separates peer groups; some data integrity checks are in place to determine anomalous data; Safety Grade uses national expert panel; Top Hospitals uses technical and content experts for measure selection; public information available on website about how they monetize their product.
The journal’s cons for Leapfrog: concerns over lack of responsiveness to the issues raised by their expert panel, hospitals, scientific advancements and other stakeholders; concerns about handling of hospitals that do not respond to surveys, and the corresponding missing data; the inclusion of hospitals that do not respond to surveys; high potential for misclassification based on issues with self-reported Leapfrog survey, and some outcomes subject to surveillance bias and ascertainment issues; administrative data not rigorously audited; measures lead to paradoxical misclassifications; use of non-risk adjusted infection measures; Leapfrog’s audits “sample a very small number of hospitals annually”; and voluntary, self-reported survey data account for 100 percent of scores for Top Hospitals, but survey data are not rigorously validated or relevant.
U.S. News & World Report: B
The New England Journal of Medicine Catalyst had more faith in U.S. News & World Report’s Best Hospitals Specialty Rankings, believing that its system is “the most responsive to changes in measurement science and feedback from stakeholders.”
“They revise their rating system annually to address measurement issues that have come to their attention from experts, the literature, hospitals, or their internal investigations,” the journal says.
U.S. News & World Report gave San Gorgonio Memorial Hospital just two stars out of five for overall patient experience, but gave the hospital fantastic scores for its knee replacement program, particularly when it comes to preventing prolonged hospitalizations and blood transfusions.
The journal appreciated the fact that U.S. News & World Report “eliminated National Healthcare Safety Network measures and most patient safety indicators, weighting volume for proportion of Medicare Advantage patients, improving outcome measures with exclusion of external transfers, and adding risk adjustment for sociodemographic factors.”
The journal also lauds U.S. News & World Report’s inclusion of a reputation measure for comparing medical facilities; respondents can compare up to five hospitals simultaneously, including in ophthalmology, psychiatry, rehabilitation and rheumatology.
It includes volume as a quality measure, and specialty- and procedure-specific rankings, as well as high-acuity, high-complexity conditions and procedures, and incorporates Society of Thoracic Surgeons registry data into its rankings, as well as components that focus on cardiothoracic surgery, the journal notes.
However, the journal had concerns about “opportunities for bias and gaming of the survey,” and was concerned that U.S. News & World Report does not make its Reputation Survey data publicly available for analysis and outside validation.
The journal’s “Pros” for U.S. News & World Report include: overall, specialty and procedure/condition rankings are helpful; rating system is “useful and rigorous”; high-complexity and high-acuity measures where quality tends to vary, but also focuses on more common procedure area; measures generally have high face validity; incorporates structure, process, outcomes, reputation; reputation measure offers some information where there is a lack of more granular measures capturing the same concept; adjusts volume in each specialty to account for regional differences in Medicare Advantage enrollment; has been responsive generally to feedback from stakeholders and scientific advances; most responsive of the rating systems to changes in measurement science, recent literature and stakeholder feedback; ranking for specific specialty or condition will not be shown if data are missing; least likely of the major rating systems to misclassify hospital performance.
The journal’s list of cons for U.S. News & World Report: frequently releases changes without opportunity for public comment well in advance; some rankings are based on “reputation” only; concerns regarding adequacy of risk adjustment with administrative data; patient experience measures used only for procedures and conditions rankings; concerns in their “reputation” measurement methodology with respect to sampling and ranking of respondents’ own institutions; limited use of registry data, except to give credit for participation in certain registries (but missing other major registries); hospitals missing patient safety data are assigned the median patient safety score of all hospitals; some metrics developed in-house have not been scientifically vetted; risk adjustment is different between procedures and conditions; and with so much data for patients to sift through, hospital data is not shown in a user-friendly format.
From the raters
Like San Gorgonio, Loma Linda University Medical Center received a three-star patient rating from the latest U.S. News & World Report rankings, and received “High Performing” marks for gastroenterology and gastrointestinal specialization, and high performance ratings for procedures and conditions including aortic valve surgery, chronic obstructive pulmonary disease, heart failure and colon and heart bypass surgeries, and high marks for bariatric and weight control services.
Redlands Community Hospital was given a three-star patient experience rating by U.S. News & World Report, which gives Redlands kudos for its neonatal intensive care unit; and Redlands was given “High Performing” ratings in hip and knee replacements.
Redlands was rated “Average” for abdominal aortic aneurysm repairs, chronic obstructive pulmonary disease, colon and lung cancer surgeries, and heart failure.
According to U.S. News & World Report, Redlands and Loma Linda received four stars for patients’ overall satisfaction with their hospital experiences, while San Gorgonio Memorial Hospital received three.
The Centers for Medicare and Medicaid Services (CMS) Star ratings gave San Gorgonio and Loma Linda three-star ratings, while Redlands Community Hospital received four stars out of five across the board in areas such as sepsis care, cataract surgery outcomes, pregnancy and delivery care, and heart attack care.
Healthgrades gave Loma Linda and San Gorgonio a one out of five star clinical rating for gastrointestinal bleeds, cranial neurosurgery procedures and strokes.
All three were given three stars by Healthgrades for diabetic emergencies and pulmonary embolisms, though only Loma Linda scored three stars for respiratory failure; San Gorgonio and Redlands scored one.
The Leapfrog Group, despite working without input from San Gorgonio Memorial Hospital, gave San G low marks for communication with doctors and nurses and for responsiveness of hospital staff. It gave above-average scores for patient falls and injuries and for air or gas bubbles in blood; low marks for safe medication administration and communication about medicines; and high marks for communication about discharges. When it comes to surgeries at San Gorgonio, Leapfrog gave an average rating for surgical wounds splitting open, poor rankings for collapsed lungs and dangerous blood clots, and high marks for serious breathing problems and for accidental cuts and tears. Leapfrog also gave San Gorgonio a low mark for C. difficile infections and a high mark for treatment of urinary tract infections.
Dr. Lud Cibelli, a former San Gorgonio Memorial Hospital district board member who practiced at the hospital for 20 years as an emergency medicine practitioner before moving on to an emergency room in Lake Arrowhead, says, “Where I practice now, they do not publicize the rating systems to us physicians. Generally I think they were very important and need to be reckoned with. In the mountains, our hospital is a critical access hospital — the only one in town. We have a Measure each year on the ballot to continue tax funds,” which is at the moment the only real measure they have of their patients’ confidence in their hospital. “I think the hospital and the doctors should be paying much more attention.”
Bill Hobbs, a Banning resident who pays close attention to San Gorgonio Memorial Hospital board meetings, says “During my 16 years in Banning, the hospital has gone through turmoil in CEOs and in confidence by our residents.”
He takes issue with CEO Steve Barron’s dismissal of Leapfrog Group safety ratings.
“Barron can’t keep making excuses” about low safety ratings, regardless of the source, Hobbs says. “All property owners, me included, pay out for two San G bonds per residence, and I own two properties. We expect better ratings, and yes, I do want our residents to have more confidence in our local hospital.”
If other small hospitals can achieve an “A” safety rating from Leapfrog, “Why not San G?” Hobbs asks.
Staff Writer David James Heiss may be reached at email@example.com , or by calling (951) 849-4586 x114.