
Patient Satisfaction Information and Articles

Below you will find links to articles on patient satisfaction and how relying on patient satisfaction metrics can adversely affect a doctor’s judgment and patient care.

While many of the articles are about Press Ganey, the conclusions in the articles are equally applicable to HCAHPS and other satisfaction surveys. If you’d like to add articles, documents, stories about how your patients have been harmed by patient satisfaction-related incidents, or other related information, please send them to whitecoatrants at gmail [dot] com. It would help if you included a summary of the information when you send it. All submissions will remain confidential.

Patient Satisfaction Posts On This Blog:

Press Ganey CEO Patrick Ryan’s Hidden Relationships
Hospital CEOs Earn Tens of Thousands of Dollars From Patient Satisfaction
Press Ganey’s Latest Business Model: Eavesdropping
Why Patient “Satisfaction” Could Be Making You Sick
Press Ganey Mantra: Suck It Up
Press Ganey Flunks Own Rating Scale
Presidential Voting and Press Ganey
Presidential Voting and Press Ganey Part 2
My Secret Addiction

Other Patient Satisfaction and Press Ganey Information

Press Ganey handout titled “Answers to Common Physician Questions and Objections” (.pdf file) that gives hospitals scripted responses to use when physicians question the scientific validity of Press Ganey data. Summarized below:

  • For questions concerning sample size – “As long as our sampling is random … we don’t need large patient populations to draw meaningful and valid conclusions.”
  • For questions regarding legitimacy of the comparative scores in the database – “Was there a big difference in performance between the physician who ranked 8th in the class and the physician who ranked 28th? Probably not, but the rankings were still legitimate and provided you with a good gauge of performance and direction.” Of course, this is a straw man argument, since medical school rankings weren’t based upon 6 surveys or 6 tests (see the sketch after this list for how much a sample of six can swing an otherwise identical physician’s rank).
  • For questions regarding how scores on the surveys translate to a percentage and letter grade – “A “5,” or 100, doesn’t mean the experience was perfect. It simply means that you exceeded expectations. Likewise, a “4,” or 75, doesn’t mean that you’re failing your patients. It means that you’re meeting expectations. Nothing more, nothing less.” However, when Press Ganey sends out its reports, it marks scores below the “mean” – even if they are above “4” – as “red” or “failing.” For example, one sample Press Ganey report shows a score of “90.7” (more than 4.5 on Press Ganey’s scale) receiving a red failing grade of “19” out of 100 under Press Ganey’s statistical analysis.
  • For questions regarding how scores represent evaluations of physicians’ clinical skills – “It’s the service element, or bedside manner, that causes patients to consider going elsewhere or staying right where they are. So, this data really serves to highlight your strengths and weaknesses in terms of providing customer service.” In other words, customer service is more important than proper medical care.
  • For objections that a focus on service will hurt physician productivity – “If you want to attract more cash and privately-insured patients — then you need to focus on service.” Again, proper medical care takes a back seat to customer service.
  • Regarding “Other Statistical Questions” – “If physicians ask you about p-values, standard deviation or other statistical issues, the odds are good that they’re trying to sound intelligent and cast doubt on the validity of the data. A good response can be: ‘I’m more than happy to connect you with one of our Ph.D.s back in South Bend.’ Rarely does a physician actually pursue the matter further when offered the opportunity to speak with someone knowledgeable in statistics.” In other words, Press Ganey wants to try to perpetuate the notion that physicians are not that smart and will back down when challenged.
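As a rough illustration of the sample-size objection above, here is a minimal simulation – not Press Ganey’s actual methodology, and the 1-to-5 response distribution below is an assumption made up for illustration – showing how far apart otherwise identical physicians can land when each is judged on only six surveys.

```python
# Illustrative simulation (NOT Press Ganey's methodology): give every
# "physician" the same underlying rating distribution, draw only 6 surveys
# each, and see how far apart their mean scores -- and therefore their
# relative rankings -- land purely by chance.
import random

random.seed(0)
ratings = [3, 4, 4, 5, 5, 5]   # assumed true distribution of responses
n_physicians, n_surveys = 50, 6

means = sorted(
    sum(random.choice(ratings) for _ in range(n_surveys)) / n_surveys
    for _ in range(n_physicians)
)

print("lowest mean: ", means[0])
print("highest mean:", means[-1])
# Identical doctors still end up spread from "worst" to "best" in the ranking,
# because six surveys is far too few to pin down a mean precisely.
```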
Graph from the 2008 Press Ganey “Pulse Report” depicting changes in patient satisfaction between 2003 and 2007. Note that the worst quarterly score is 81.4 and the best quarterly score is 83.1. Press Ganey makes a big deal out of this. In reality, the total spread during those years is 1.7 points, which is statistically INSIGNIFICANT. Let’s enter the same data on a graph from 0 to 100, which is the spectrum upon which Press Ganey bases its surveys …

[Graph: the same data plotted on a scale of 0 to 100]

 

Data look a little different? Of course they do. There’s almost no visible change when the data are viewed on their proper scale.
Let’s increase the endpoints to a scale of 0 to 1000 …

[Graph: the same data plotted on a scale of 0 to 1000]

 

Now there is no visible change between the measured quarters whatsoever.
Wait. I think we should change the endpoints of the graph to better reflect the service that emergency physicians provide. Let’s make the scale from 0 to 83 …

[Graph: the same data plotted on a scale of 0 to 83]

Top notch emergency medical care every quarter for four years. Of course it is.

Keep in mind – all of these graphs were made using the exact same data. The only thing that changed was the scale of the graph.

See how Press Ganey actively tries to mislead its customers by changing the scales on the graphs of its data to make the data look significant when they really are not?
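To see how much the axis range alone drives the visual impression, here is a minimal matplotlib sketch; the quarterly values are illustrative placeholders drawn between the 81.4 and 83.1 endpoints quoted above, not actual Press Ganey data.

```python
# The same illustrative quarterly "scores" plotted three times, changing
# nothing but the y-axis limits. Values are made up within the 81.4-83.1
# range quoted above; they are NOT actual Press Ganey data.
import matplotlib.pyplot as plt

quarters = list(range(1, 21))   # 20 quarters, 2003-2007
scores = [81.4, 81.7, 81.9, 82.0, 82.2, 82.1, 82.4, 82.3, 82.5, 82.6,
          82.4, 82.7, 82.8, 82.9, 82.7, 83.0, 82.9, 83.1, 83.0, 83.1]

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, (low, high) in zip(axes, [(81, 83.5), (0, 100), (0, 1000)]):
    ax.plot(quarters, scores, marker="o", markersize=3)
    ax.set_ylim(low, high)      # the only thing that changes between panels
    ax.set_title(f"y-axis {low} to {high}")
    ax.set_xlabel("Quarter")
axes[0].set_ylabel("Mean satisfaction score")

plt.tight_layout()
plt.show()
```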

Think about it this way: divide the numbers in half and pretend they were gas mileage. Would you be so impressed with a car that got gas mileage of 41.5 miles per gallon over a car that got 40.7 miles per gallon? Or convert the values back to the actual number scale that patients use to score the hospitals: 1 to 5. Using Press Ganey’s own conversion described above (a “5” scores 100 and a “4” scores 75), a wacky score of 83.1 is equivalent to a patient rating of about 4.32, and a wacky score of 81.4 is equivalent to a patient rating of about 4.26. The difference is roughly 0.07 points. Less than a tenth of a rating point is what Press Ganey is using to con thousands of hospitals into purchasing its services for tens or hundreds of thousands of dollars per year.
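Here is that back-of-the-envelope conversion written out, using the mapping Press Ganey’s own handout describes above (a “5” scores 100 and a “4” scores 75, so each rating point spans 25 score points); the function name is just for illustration.

```python
# Convert a Press Ganey 0-100 mean score back to the underlying 1-5 patient
# rating, using the mapping quoted above ("5" = 100, "4" = 75, i.e. 25 score
# points per rating point). The function name is illustrative.
def score_to_rating(score: float) -> float:
    return 1 + score / 25

best, worst = 83.1, 81.4
print(score_to_rating(best))                            # ~4.32
print(score_to_rating(worst))                           # ~4.26
print(score_to_rating(best) - score_to_rating(worst))   # ~0.07 rating points
```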

This is a charade that borders on fraud, yet all of Press Ganey’s customers – including hospital CEOs and hospital board members – believe that these statistically insignificant numbers mean something. If you have fallen for this ridiculousness, consider yourself the emperor walking around the hospital naked while everyone is too afraid to tell you that your new suit isn’t all it’s cracked up to be. Rest assured, though, your employees are almost certainly mocking your incompetence behind your back.

December 2015 Study from British Journal of General Practice shows that “Antibiotic prescribing volume was a significant positive predictor of all ‘doctor satisfaction’ and ‘practice satisfaction’ scores in the GPPS, and was the strongest predictor of overall satisfaction out of 13 prescribing variables.”
This study looked at 33.7 million antibiotic prescriptions to 53.8 million patients. The data were taken from 983,000 satisfaction questionnaires.
Satisfaction scores clearly encourage inappropriate antibiotic prescriptions.

June 2015 Hastings Center Report on bioethics of Patient Satisfaction titled Patient-Satisfaction Surveys on a Scale of 0 to 10: Improving Health Care, or Leading It Astray?

“The pursuit of high patient-satisfaction scores may actually lead health professionals and institutions to practice bad medicine by honoring patient requests for unnecessary and even harmful treatments.”
“Some uses and consequences of these surveys may actively mislead health care.”

April 15, 2014 article in Huffington Post by Alexander Kjerulf titled “Top 5 Reasons Why ‘The Customer Is Always Right’ Is Wrong.”
Companies that exhibit this attitude create unhappy employees: “You can’t treat your employees like serfs. You have to value them … If they think that you won’t support them when a customer is out of line, even the smallest problem can cause resentment.”
The “customer is always right” sentiment also creates perverse incentives where “abusive people get better treatment and conditions than nice people.”
When companies enforce this culture, employees feel less valued, feel as if they have no right to respect, and gradually learn to provide “fake” good service where the courtesy is “on the surface only.” One expert noted that “when you put the employees first, they put the customers first.”
The article ends by noting

The fact is that some customers are just plain wrong, that businesses are better off without them, and that managers siding with unreasonable customers over employees is a very bad idea, that results in worse customer service.

June 2014 article in Triad Business Journal titled “Wake Forest Baptist to include doctor ratings, patient comments on website” (.pdf here)

June 23, 2014 letter from US Senators Chuck Grassley and Dianne Feinstein to CMS Administrator Marilyn Tavenner raising concerns about the effect of patient satisfaction surveys on the growing epidemic of prescription opioid abuse (.pdf) and requesting a written response as to what CMS is doing to address the impact of patient surveys on improper opioid prescriptions.

October 30, 2013 article in The Atlantic by Richard Gunderman titled “When Physicians’ Careers Suffer Because They Refuse to Prescribe Narcotics.”

January 2013 article in Forbes by Kai Falkenberg titled “Why Rating Your Doctor Is Bad For Your Health”

January 2013 letter to the editor of Forbes titled “Bitter Pill” (.pdf file) by Eugene Hill, a manager of the venture capital firm SV Life Sciences, wherein Mr. Hill criticized Kai Falkenberg for her article. What is interesting about Eugene Hill’s letter is that he failed to disclose his incestuous relationship with Press Ganey and with Press Ganey’s CEO. In other words, Mr. Hill’s letter seemed to come from a disinterested party when, in fact, Mr. Hill’s son was a manager at Press Ganey and Mr. Hill and Press Ganey CEO Patrick Ryan had several personal business ventures together.
See more about Mr. Hill’s letter and his relationships with Patrick Ryan and Press Ganey at this link.

November 26, 2012 article in American Medical News titled “Patient satisfaction: When a doctor’s judgment risks a poor rating.”

October 2012 article in Wall Street Journal titled “U.S. Ties Hospital Payments to Making Patients Happy”

2013 article in the journal Pain Medicine‘s Ethics Forum titled “Autonomy vs Paternalism in the Emergency Department: The Potential Deleterious Impact of Patient Satisfaction Surveys” (.pdf file). In this article, national experts discuss the pitfalls of applying such measures in pain care, and the potential unintended negative consequences to patients and providers alike. Conclusions from the experts include:

  • “Patient satisfaction surveys, such as Press Ganey are flawed metrics for the emergency department setting and also in broader pain medicine.”
  • Press Ganey surveys “tend to highlight complaints by those with chronic pain and possible aberrant drug-related behaviors who seek opioid prescriptions for nonmedical reasons.”
  • Satisfaction scoring may “have broader ill effects on population health due to the increase in prescription opioids available for misuse and abuse.”
  • When using Press Ganey surveys, “Health care providers who work in the ED are forced to choose between good medical practice and performance reports that could reflect on their pay and possibly their employment. This makes little sense for anyone.”
  • “Competent doctors practicing good medicine may receive poor Press Ganey satisfaction ratings because they are practicing good pain medicine. In the ED and often in broader pain medicine, Press Ganey ratings are virtually meaningless because the metric is flawed and inappropriately applied in these settings.”
  • “A physician’s most compassionate act may be gently yet firmly telling a patient that he/she cares too much about his/her well-being and safety to fill the opioid prescription that he/she is requesting. Such a decision may result in the physician receiving a scathing Press Ganey satisfaction score from the patient; yet this decision may also have saved the patient’s life or someone else’s.”
  • “Press Ganey and other similar satisfaction surveys are limited and potentially harmful to patients and providers alike.”

July 2012 article in JAMA by Joel Kupfer and Edward Bond titled “Patient Satisfaction and Patient-Centered Care – Necessary but Not Equal” which concludes that “If patient satisfaction is accepted as a valid outcome, then it should be held to the same standard as any other intervention or device. Namely, does it objectively and efficiently improve health care outcomes? As of today, patient satisfaction surveys do not have that credibility as supported by the medical literature.”

March 2012 article in JAMA titled “The Cost of Satisfaction: A National Study of Patient Satisfaction, Health Care Utilization, Expenditures, and Mortality” showing that patients who had the highest satisfaction with their medical care were more likely to be admitted to the hospital, spent more on health care, spent more on prescription drugs, and were 26% more likely to die than those who had the lowest satisfaction. When study authors excluded data for patients who rated their health as “poor” or who had a “substantial chronic disease burden,” they found that the risk of death for highly satisfied “healthy” patients was 46% higher than for those who had the lowest satisfaction (.pdf of study here).

Part of a 2012 handout from EMP (Emergency Medicine Physicians) management group showing how EMP physicians are paid $10 more per hour for high patient satisfaction scores.

2011 ACEP Information Paper on patient satisfaction

December 2010 article in EM News by multiple authors including Shari Welch, Ronald Hellstern, Kirk Jensen, John Lyman, Thom Mayer, Randy Pilgrim, and Timothy Seay titled “Can’t Get No Satisfaction? The Real Truth Behind Patient Satisfaction Surveys.” The article argues for the effectiveness of using satisfaction surveys in the ED.

December 2010 Point/Counterpoint in EP Monthly about whether Patient Satisfaction Scores are useful or not (.pdf).

December 2010 article in EP Monthly from Press Ganey titled “Patient Satisfaction Surveys are Here to Stay” defending use of statistically insignificant data in its surveys. “This is the ED’s data and it has a right to see it.”

October 2010 article in EP Monthly by William Sullivan and Joe DeLucia titled “Is Press Ganey Reliable?” The article interviewed statisticians from St. Louis University and showed how Press Ganey’s “small sample sizes created questionable results” and how between 180 and 556 responses would be required just to get a 10% margin of error in the results. Press Ganey claims its margin of error is 2%. The authors concluded that “using survey data to compare one hospital to another or to compare one provider to another is a misuse of survey data and is likely to create misleading and unreliable results.”
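For context on where figures like these come from, here is a generic margin-of-error sketch – not the St. Louis University statisticians’ actual calculation, and the standard deviations below are illustrative assumptions – showing how quickly the required number of returned surveys grows as the desired precision tightens.

```python
# Generic sample-size calculation for the margin of error of a mean score.
# NOT the calculation from the article; the standard deviations below are
# illustrative assumptions for a 0-100 score.
import math

def required_n(std_dev: float, margin: float, z: float = 1.96) -> int:
    """Responses needed so the 95% margin of error on a mean is <= `margin`."""
    return math.ceil((z * std_dev / margin) ** 2)

# Halving the margin of error roughly quadruples the required sample size.
for sd in (15, 20, 25):
    print(f"sd={sd}: +/-5 points needs {required_n(sd, 5)} responses, "
          f"+/-2.5 points needs {required_n(sd, 2.5)}")
```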

2010 CareChex Research Study titled “Is There A Correlation Between a State’s Quality of Care and Patient Satisfaction?” (.pdf file) This study showed that “no statistically significant positive relationship exists across states between their rank order on quality of care and their rank order on patient satisfaction” and noted that many states “experienced substantial variation in rankings (e.g., Ohio ranked 1st on quality of care and 34th on patient satisfaction).”

September 2010 article in EP Monthly by William Sullivan and Joe DeLucia titled “2+2=7? Seven things you may not know about Press Ganey Statistics” showing how Press Ganey’s grading scale creates a situation in which emergency physicians are encouraged to spend less time with critically ill patients, how the data are not random and therefore not reliable, how response errors dramatically affect survey results, and how a large number of survey respondents were aware of adverse patient outcomes specifically linked to pressures from patient satisfaction surveys.

Slide from a 2009 ACEP lecture by Thom Mayer regarding whether patient satisfaction is statistically significant, really measures satisfaction, measures quality of care, or may be subject to multiple biases. His advice: “GET OVER IT !!!!!!” Pay no attention to statistically insignificant, biased, and inapplicable theories that don’t measure what they purport to measure. Invalid scientific method? Patients more likely to die from satisfaction data? “GET OVER IT !!!!!!” These corporations need to make money, too, you know! At the time of his lecture, Dr. Mayer disclosed that he was Medical Director of the Studer Group – and the founder of another patient satisfaction-related company called “Best Practices.”

June 2008 article by O’Toole et al. in The Journal of Bone and Joint Surgery titled “Determinants of Patient Satisfaction After Severe Lower-Extremity Injuries” concluding that “Patient satisfaction after surgical treatment of lower-extremity injury is predicted more by function, pain, and the presence of depression at two years than by any underlying characteristic of the patient, injury, or treatment.” Outcomes are key determinants of patient satisfaction, not treatment received.

September 2007 Ron Elfenbein article in EP Monthly titled “Nothing satisfactory about patient sat surveys” (.pdf) showing how “patient sat scores are bringing the emergency department to its knees.”

3 comments

  1. Agree totally with your take on patient satisfaction scores having nothing to do with quality. Keep writing!!!!

    • In the case of electroshock, voluntary and involuntary brain damage (on healthy brain tissue) in the name of medical science, do you think the patient’s satisfaction is relevant?
      If the involuntary patient can remember afterwards that they did not want the treatment, do you think they will be satisfied?
      The sarcasm is to get the real medical doctors to kick the fake medical doctors out of medicine. This is the year 2015.
