How to spot medical grifters and fake experts - Understanding survey research

This is a bit of an advanced post, as I’ll do a deep dive into reliable versus unreliable survey research. It’ll help you tell the difference between the people who actually know what they’re talking about and those who don’t.

The reality is that survey research is nuanced and really tough to do well. Those conducting surveys rarely explain those nuances to others because it might make their research look unreliable. So while there are some people out there who know what they’re doing, they usually don’t go out of their way to explain surveys to you.

People interpret survey questions differently

One of the biggest issues with surveys is that people interpret questions differently. For example, chronic illness patients have very different ideas about what it means to be “recovered”. Some people feel that they are recovered even though they cannot work their previous job (their health continues to affect their employment). Others consider 90% improvement to be a recovery. And others may feel that only 100% improvement should count as a recovery. If you ask patients whether they are recovered, they will essentially be answering different surveys because their definition of “recovered” is not the same as everybody else’s. Similarly, people have different ideas about what brain fog is.

This makes surveys really hard. Most people think that they can write a decent survey. In reality, it takes experience and pilot testing to arrive at questions that minimize misinterpretation.

They can’t all be right

Two different groups have published data on the prevalence of diarrhea in Long COVID patients.

Their data is worlds apart. So what’s going on?

The PLRC survey question was likely misinterpreted. The PLRC researchers likely wanted to know about ABNORMAL diarrhea such as having diarrhea for 7 or more consecutive days. However, their survey simply says “Diarrhea”. So if somebody had only ONE instance of diarrhea in 5/6 months, then it would make sense for them to check one of the boxes. Their survey is shown below:


Source: Questionnaire to Characterize Long COVID: 200+ symptoms over 7 months

The problem is that people are reporting NORMAL diarrhea which the researchers are misinterpreting as ABNORMAL diarrhea.

Adverse selection

While the PLRC group did not perform high-quality survey research, their paper has received over 1700 citations.


While a lot of their paper should not have passed peer review, the science world is full of people who don’t really know what they’re doing. This has contributed to others citing the paper without fully understanding the issues with the research or with the paper describing it.

However, the PLRC group has been incredibly successful at raising research funding (which has given some of them an opportunity to earn an income at a time when their health makes that difficult). They’ve raised millions of dollars thanks to Vitalik Buterin (the Ethereum guy) funding Long COVID research. Other researchers also want research funding because their jobs depend on pulling in grants. So there are strong economic and political incentives not to explain why the PLRC group is inexperienced in survey research. To be fair, their paper is important because it describes the lived experiences of patients suffering from a very long list of symptoms. They’ve done some helpful things and they’ve done some unhelpful things.

Unfortunately, due to the politics involved, misguided ideas may take a long time to die. But that’s why it’s important for people to speak the truth- patients should not be misled into trying janky treatments based on unreliable science.

The emperor has no clothes

Unfortunately, there is a lot of bad behaviour in the science world- this includes “peer reviewed” papers published in high impact (top) scientific journals. The reality is that there are strong incentives for scientists to engage in misleading research practices so that their results are sexy. This lets them secure research grants so that they don’t lose their job. The system is fairly broken as most research dollars end up being wasted on pointless research.

Pharma companies spend vast sums of money on mis-educating doctors because they often need to push shoddy drugs or treatments that happen to generate a profit. They want a system where doctors hand out those profitable but shoddy treatments. While doctors also provide legitimate medical care to their patients, the problem is that they aren’t trained to tell the difference between shoddy care and quality care. They also don’t seem to receive training in communicating with patients in ways that prevent misinterpretation of what the doctor is saying- which is one of the cornerstones of quality survey research. So doctors can’t tell the difference between good and bad survey research.

A rule of thumb: do the researchers make it easy for you to find their survey?

One of the best ways to describe the methodology of a survey is to simply provide a copy of it. You need to see the actual survey to figure out whether there are subtle issues with the survey design. For example, if the survey takes a long time to fill out, some people will “speed run” it and try to finish the remaining questions as quickly as possible, so the quality of the data can fall for later questions. If you suspect that there is an issue with survey fatigue, you would want to know the length of the survey and the order of the questions. A simple way to do that is to look at the actual survey.
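
If you also have access to the raw response data, a crude complementary check is to see whether questions near the end of the survey are left blank more often than questions near the start. Here is a minimal sketch of that idea in Python; the data format and question names are made up for illustration.

    # Minimal sketch: estimate survey fatigue by checking how often each
    # question is left blank, in the order the questions were shown.
    # The data format (one dict per respondent) is hypothetical.
    def blank_rate_by_position(responses, question_order):
        """Fraction of respondents who left each question blank."""
        rates = []
        for qid in question_order:
            blanks = sum(1 for r in responses if not r.get(qid))
            rates.append((qid, blanks / len(responses)))
        return rates

    # Made-up data: blank rates creeping up near the end of the survey are
    # one hint that respondents are fatiguing or speed running.
    responses = [
        {"q1": "yes", "q2": "no", "q3": "yes", "q4": ""},
        {"q1": "no",  "q2": "no", "q3": "",    "q4": ""},
        {"q1": "yes", "q2": "",   "q3": "yes", "q4": "no"},
    ]
    for qid, rate in blank_rate_by_position(responses, ["q1", "q2", "q3", "q4"]):
        print(f"{qid}: {rate:.0%} blank")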

If you use this simple rule of thumb, you will realize that most of the scientific literature is questionable.

Medical commentators

Dr. Been seems like a nice guy and he’s probably one of the better people out there. However, you can get into a lot of trouble relying on him because he presents ‘scientific’ papers uncritically as if the results are reliable. For example, he has a breakdown of the PLRC survey where he states:

Explosive diarrhea is very common. Gas is very common. This is so debilitating that patient cannot actually go out or be in the social settings because they need to go to the restroom very commonly

Part of the problem is that most doctors don’t question the scientific literature. The other part of the problem is that it takes an incredible amount of time to understand all of the relevant scientific literature. For somebody like Dr. Been, there’s too much ground to cover. He can’t possibly have the time to read enough about every esoteric topic to actually understand what he’s talking about.

Get informed

If you suffer from chronic illness, you need to protect yourself. Don’t rely on grifters and fake experts. The system is broken and most people make money (or chase clout) by acting against your best interests. That’s just the way it is. I don’t trust patients either, because many of the most vocal and active ones contribute to the problem (e.g. PLRC consists mainly of people who suffer from Long COVID).

The good news is that people are recovering and we have data on what works.



Appendix: nitty-gritty details on surveys

Asking participants to do something that they can’t

Another mistake is to ask participants if they’ve had COVID in the past. At the beginning of the COVID pandemic, testing usually wasn’t available, so most people can only speculate as to whether or not they had COVID during that timeframe. (There are also people who self-diagnose as having caught COVID in 2019, at a time when very few people actually had COVID.)

Similarly, asking participants if they have myocarditis or an autoimmune disease is problematic. A lot of people didn’t get a full work-up because they had difficulty accessing healthcare. If you ask people if they’ve had myocarditis, you should also ask whether it was diagnosed by one of their doctors. If you don’t, you will have speculative answers mixed in with reliable answers. Data on autoimmune disease can be found in the Risk Factors Survey.

Asking about obesity

People tend to misreport obesity, possibly because their own definition of obesity differs from one based on BMI (Body Mass Index) and age. In theory, you can ask people about their weight. However, a lot of people don’t have a scale or don’t know their weight… so that runs into the problem of people not being able to provide a reliable answer.

Asking about income can have similar issues, as people may overstate their income.
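
For reference, the BMI-based definition mentioned above classifies adults with a BMI of 30 or higher as obese, where BMI is weight in kilograms divided by height in meters squared. Here is a minimal sketch of that classification in Python, using the standard adult cutoffs; children and teens are classified with age- and sex-specific percentiles instead, which the sketch ignores.

    # Minimal sketch of adult BMI classification (BMI = kg / m^2).
    # Standard adult cutoffs only; children/teens use age- and sex-specific
    # percentiles, which this sketch does not handle.
    def bmi(weight_kg, height_m):
        return weight_kg / (height_m ** 2)

    def adult_bmi_category(weight_kg, height_m):
        b = bmi(weight_kg, height_m)
        if b < 18.5:
            return "underweight"
        if b < 25:
            return "normal weight"
        if b < 30:
            return "overweight"
        return "obese"

    # A respondent who misremembers their weight by 10 kg can land in a
    # different category, which is part of why self-reported weight is shaky.
    print(adult_bmi_category(85, 1.70))   # ~29.4 -> overweight
    print(adult_bmi_category(95, 1.70))   # ~32.9 -> obese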

Other biases and issues with survey data

There are usually some differences between the people who fill out surveys and the ideal survey population (sampling bias). Those issues tend to be better known, and it’s not as hard to find information about them.

There are also a lot of esoteric issues that are difficult to get information on.

  • As people go through the survey, they may pick up on contextual clues that the survey designers did not anticipate. This means that the ordering of questions matters.
  • Some people take a ‘completionist’ approach to surveys, where they try to answer every single question. If there are multiple choices, they will pick something rather than leave the question unanswered. This causes problems if the survey design relies on people leaving inapplicable questions unanswered.
  • Length can be an issue, especially once you get beyond 10-15 minutes; the threshold is lower for people with cognitive difficulties due to their health. Some people will start ‘speed running’ through the survey, skimming questions and not reading the rest of a question once the first part is enough for them to provide an answer. The flip side is that some people will forget the first part of a question by the time they reach the end. Either way, part of the question gets ignored or forgotten. This is why surveys are hard- there are no perfect questions and sometimes you have to accept tradeoffs. Multi-part questions and stipulations help avoid ambiguity in how a question is interpreted, but they risk some people ignoring part of the question. (A rough way to flag likely speed runners is sketched after this list.)
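
If your survey platform records timestamps, one rough way to flag likely speed runners is to compare each respondent’s completion time against the typical completion time. The sketch below is purely illustrative; the data format and the 30%-of-median threshold are assumptions, not a validated rule.

    # Rough sketch: flag respondents whose completion time is implausibly short.
    # The 0.3 cutoff (30% of the median completion time) is arbitrary and
    # purely illustrative, not a validated rule.
    from statistics import median

    def flag_speed_runners(completion_seconds, fraction_of_median=0.3):
        """Return indices of respondents who finished much faster than typical."""
        cutoff = median(completion_seconds) * fraction_of_median
        return [i for i, t in enumerate(completion_seconds) if t < cutoff]

    times = [710, 640, 95, 820, 150, 705]   # made-up completion times in seconds
    print(flag_speed_runners(times))        # -> [2, 4]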

High quality survey work is not obvious

To ensure that all participants are on the same page, it is a good idea to use accessible language. You usually want to make sure that high school dropouts (or people with relatively little education) can understand the survey questions. So the survey will not look very sophisticated, because the reading level is below average.
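
One way to sanity-check reading level is a standard readability formula such as the Flesch-Kincaid grade level: 0.39 × (words per sentence) + 11.8 × (syllables per word) - 15.59. The sketch below uses a crude vowel-group heuristic to count syllables, so treat its output as a ballpark figure rather than an exact grade.

    # Rough readability check using the Flesch-Kincaid grade level formula.
    # The syllable counter is a crude vowel-group heuristic, so the result is
    # only a ballpark figure.
    import re

    def count_syllables(word):
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def fk_grade(text):
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        n_words = max(1, len(words))
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

    question = "In the past week, how many days did you feel too tired to work?"
    print(round(fk_grade(question), 1))  # lower means easier to read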

It’s also a good idea to avoid complicated logic in surveys, such as instructions to skip questions that don’t apply to the respondent. This results in instructions that are dumbed down. You really have to hold people’s hands to accommodate the people who are most easily confused (or most likely to misinterpret the logic).
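
When a survey runs online, skip logic can be enforced by the software so that respondents never see the branching at all; on paper, they have to follow the instructions themselves, which is where confusion creeps in. Here is a toy sketch of what that branching looks like; the questions and structure are invented for illustration.

    # Toy example of skip logic: each answer determines the next question shown.
    # Online survey software can apply this branching automatically; on paper,
    # the respondent has to follow the skip instruction themselves.
    survey = {
        "q1": {"text": "Have you had diarrhea in the past month?",
               "next": {"yes": "q2", "no": "q3"}},
        "q2": {"text": "Did it last 7 or more consecutive days?",
               "next": {"yes": "q3", "no": "q3"}},
        "q3": {"text": "Are you currently employed?",
               "next": {"yes": None, "no": None}},
    }

    def run(survey, answers, start="q1"):
        """Walk the survey using a dict of pre-filled answers (for illustration)."""
        qid = start
        while qid is not None:
            answer = answers[qid]
            print(f"{survey[qid]['text']} -> {answer}")
            qid = survey[qid]["next"][answer]

    run(survey, {"q1": "no", "q3": "yes"})   # q2 is skipped automatically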

Some problems with my survey work

  1. On the Treatment Outcomes Survey, there’s something wrong with the bedbound question. That’s why I don’t mention it in the Odysee videos (here, and the Nov 2022 one here).
  2. I didn’t have money for focus groups so I didn’t figure out why the bedbound question is bad.
  3. Very low recruitment of Long COVID and ME/CFS patients.
  4. Some of the data is from earlier versions of the survey. The survey has changed over time.