Survey research is one way to spot bad science

What you need to know about “science” in general is that much of it is political, and politics makes a lot of it unreliable. That’s part of the reason why people aren’t healing from chronic illness. Advocacy groups and clinicians told patients to get the COVID vaccine. Then the Long COVID and ME/CFS patients injured by it started showing up in the vaccine injury support groups.

Most people don’t understand survey research because they’ve never done it. Of the people who do understand it, many will put out bad research for political reasons or to chase research funding. So let me explain the things that other people are too afraid to explain…

They can’t all be right

Two different groups have published data on the prevalence of diarrhea in Long COVID patients.

Their data is worlds apart. So what’s going on?

The PLRC survey question was likely misinterpreted. The PLRC researchers likely wanted to know about ABNORMAL diarrhea, such as diarrhea lasting 7 or more consecutive days. However, their survey simply says “Diarrhea”. So if somebody had a single episode of diarrhea over 5–6 months, it would make sense for them to check one of the boxes. Their survey is shown below:

Source: Questionnaire to Characterize Long COVID: 200+ symptoms over 7 months

The problem is that people are reporting NORMAL diarrhea, which the researchers are misinterpreting as ABNORMAL diarrhea.

This is a common issue in survey design. There is usually a small minority of people who think very differently from everybody else and interpret the questions differently. Usually their interpretations are valid; they’re just not on the same page as everybody else, so they end up filling out a completely different survey.
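To put rough numbers on the two published rates: if the 2.6% figure reflects genuinely abnormal diarrhea, we can back-solve for how many respondents would have had to report ordinary, one-off episodes to produce the 59.7% figure. A minimal sketch (the reconciliation is my assumption, not something either paper measured):

```python
# Back-of-the-envelope reconciliation of the two published rates.
# Assumption: the "Diarrhea" checkbox captured anyone with >= 1 episode,
# normal or abnormal.
observed = 0.597   # rate reported by the PLRC survey
abnormal = 0.026   # rate reported by the other paper

# Implied share of respondents whose only diarrhea was "normal":
normal_only = (observed - abnormal) / (1 - abnormal)
print(f"{normal_only:.1%}")  # ~58.6%
```

In other words, under that assumption, nearly six in ten respondents would be reporting everyday diarrhea, which is plausible over a 5–6 month recall window.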

A rule of thumb: do the researchers make it easy for you to find their survey?

One of the best ways to describe the methodology of a survey is to simply provide a copy of it. You need to see the actual survey to figure out whether there are subtle issues with the survey design. For example, if the survey takes a long time to fill out, some people will “speed run” it and try to finish the remaining questions as quickly as possible, so the quality of the data can fall for later questions. If you suspect an issue with survey fatigue, you would want to know the length of the survey and the order of the questions. A simple way to do that is to look at the actual survey.
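One common check for “speed running” is to flag respondents who straight-line, i.e. give one identical answer to every question in the back half of a long survey. Here is a minimal sketch; the function name and response data are made up for illustration:

```python
# Hypothetical fatigue screen: flag respondents who give one identical
# answer to every question from position `start` onward.
def straightline_rate(responses, start):
    """Share of respondents who straight-lined the tail of the survey."""
    flagged = sum(1 for r in responses if len(set(r[start:])) == 1)
    return flagged / len(responses)

respondents = [
    [3, 1, 4, 2, 2, 2, 2, 2],  # varied early, straight-lined late
    [2, 3, 1, 4, 3, 1, 2, 4],  # varied throughout
    [1, 1, 2, 3, 3, 3, 3, 3],  # straight-lined late
]
print(straightline_rate(respondents, start=4))  # 2 of 3 respondents flagged
```

A high straight-lining rate in the tail relative to the head is one signal that later answers are lower quality, which is exactly why seeing the question order matters.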

Many peer-reviewed papers don’t provide a copy of the survey, because the peer reviewers often don’t know what they’re doing. That’s why we have one peer-reviewed paper saying 59.7% of patients have diarrhea while another says 2.6% do.


In science, a lot of people pay attention to the popularity of a paper. Popular papers are usually more impactful than other papers, and the best papers usually have a lot of other papers citing them. Popularity gets used as a rough proxy for merit for two reasons:

  • It’s more useful than other ways of trying to measure scientific merit.
  • It’s easy to measure.

The two most common popularity metrics are citation count (for a single paper) and h-index (for a researcher’s body of work). Higher is better. Use Semantic Scholar or Google Scholar to dig up that information.
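For reference, the h-index is the largest number h such that the researcher has h papers with at least h citations each. A minimal sketch of the computation:

```python
def h_index(citations):
    """h-index: largest h such that h papers each have >= h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank   # the top `rank` papers all have >= rank citations
        else:
            break
    return h

print(h_index([100, 9, 7, 5, 3, 1]))  # 4: four papers with >= 4 citations each
```

Note that one blockbuster paper barely moves the h-index; the citation list above yields 4 whether the top paper has 100 citations or 10,000.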

The PLRC paper mentioned earlier currently has over 1,500 citations. It’s an extremely popular paper. Unfortunately, the quality of that paper is… not so great, because the researchers were inexperienced at surveys. (The inexperience is to be expected when people first start doing surveys. We shouldn’t fault them for it.)

However, it does show that the science world goes in some silly directions. What’s popular isn’t necessarily good. Right now, the PLRC has been able to secure millions of dollars from Vitalik Buterin’s Balvi fund. Because of it, a lot of researchers are pandering to the PLRC’s ideas (some good and some bad). While these people can figure out that some of the science is janky (e.g. microclots), they don’t want to say anything because they’re too busy trying to get their piece of the research funding. That’s why smart people aren’t speaking up about the nonsense that is going on.

That’s also part of why the “advocacy” groups have continually pushed COVID vaccines even though patients were getting badly hurt by them. They had political reasons to push a tragically misguided idea. Heidi Ferrer committed suicide after she worsened from the vaccine (though other factors, like a controlling husband, may have played a role in her death).

Building a better chronic illness ecosystem

People want their lives back. But there are a lot of obstacles in the way. There are many patients, doctors, researchers, and “advocacy” groups that act in bad faith and intentionally put out unreliable information. And the bad actors often suppress reliable information, which is why there is a ridiculous amount of censorship in support groups.

So I’m working hard on cobbling together a solution.

  1. For a recap of what we know about treatment, see the Aug 2023 summary of what we know about treating post vax. Most people haven’t tried the top treatments. If you want to know why certain treatment modalities are nonsense, see this post. That’s so you don’t waste time and money on bad doctors and shoddy science.
  2. I will generate more reliable data through the Patient Experiences Survey. Please spend 5-10 minutes and fill it out or promote it if you haven’t already. Hopefully we start finding more treatments because not everybody responds to the top treatments.
  3. I’m building out better support group communities.

If you’re looking to get your life back from chronic illness, take advantage of these resources.