Wednesday, February 15, 2017

Medicinal herbs for respiratory tract infections are safe


my name is john powers, i'm a senior medical scientist and an employee of leidos biomedical research. i work in support of the collaborative clinical research branch within the division of clinical research at the national institute of allergy

and infectious diseases. i know this is one of the earliest talks you're getting in the principles and practice of clinical research course. we're going to cover topics that will be covered in greater depth and detail later. what we'd like to do is talk

about how to choose a research question and then match that research question to the design of a study. and by design i mean the hypothesis, the people -- the types of enrollment criteria -- and the outcomes, and then talk about how to design studies most efficiently.

how can you do that with the smallest effort and the fewest number of patients enrolled in the study? we have 84 slides to cover in an hour and a half. so i'm told -- last year when i did this, i was told there were a lot of i.t. connection issues.

hopefully this year, for those online, i'm assured we've fixed all those problems from last year. anybody here in the audience who wants to ask a question during the presentation, please just raise your hand or come up to a microphone.

i will try to remember to repeat the question so people online can hear it before i give an answer. so the first thing to do tonight is actually to make the distinction between what we mean by clinical research, which is what this course is about, and

clinical practice. clinical research designates an activity designed to test hypotheses and allow us to draw conclusions and develop or contribute to generalizable knowledge. we're not just treating an individual patient; we're trying

to develop general theories or make broader conclusions. in clinical practice, on the other hand, we're looking at interventions designed solely to enhance the wellbeing of an individual person. a couple of years ago, there was a patient that came in

whose mental status was not very good. this was a gentleman who had been out playing golf, doing great. he came in almost in a coma. he was really unresponsive. we did a number of experimental things for him.

he woke up and went home, and as far as i know he's playing golf and still having a good time now. do i know that the things we did for that individual person made him better? i don't. i was joking with him and his wife as he left:

i don't know if you got better because of us or in spite of us. in taking care of an individual patient, i was not testing a hypothesis. we were trying to make that one person better. often these two endeavors get mixed up.

and we do want to do research that informs clinical practice. and the best questions to do research on come from clinical practice. so they are directly related. but you're not testing a hypothesis in clinical practice. this distinction got so confused

that in 1979, the federal government put out a report called the belmont report on ethics and human experimentation, and this distinction is explained in the very first section of that document because of so much confusion between the two.

so that also relates to the issue of how science relates to ethics. you'll hear more about this later in the course as well. a study that cannot contribute to generalizable knowledge isn't ethical. you may be exposing patients to

harm without being able to draw conclusions at the end of the study. so putting patients at risk of harm, or even minor inconvenience, for no benefit to anyone is really not a useful endeavor for anybody. now, in the earliest phases of

when people are developing new interventions, they may be tested on healthy volunteers. nothing wrong with those volunteers. so they may not benefit themselves from taking this experimental intervention but there is knowledge gained from

doing that about early toxicities of the drug, et cetera. so you're still able to contribute to generalizable knowledge. the reason i'm bringing this up is that scientific validity is really not just nice to have but a requirement of all research.

so when people bring up issues related to study design, it's really not just to give you a hard time. it's really an integral part of the validity of the research to begin with. and that word validity gets used a lot.

it's kind of like the word clinical. i was in the grocery store walking down the aisle that had mouthwash. there was a mouthwash that said clinically tested. i don't know what that means. but the word clinical gets used loosely,

and validity gets used loosely too. validity means the ability of the study to correctly answer the research question that was posed. i'm also the chairperson of a scientific review board in the research network in which i

work. and we actually did an analysis of the last several protocols we reviewed over the last year. we found that one of the biggest issues, when we reviewed the protocols, was that we often couldn't understand what the research question was.

if that's the major issue, it means those designing studies should be more clear about what the research question is at the very beginning. so this sort of relates to the whole purpose of science to begin with, and how we make conclusions in science.

a philosopher said it's not what the man of science believes that distinguishes him, but how and why he believes it. his beliefs are tentative, not dogmatic, based on evidence, not on authority or intuition.

so although we talk about the rise of evidence based medicine over 20 or 30 years, the whole purpose of science is to gather this kind of evidence. so if you're going to come up with a research question, this is a general way to try to think about it.

and we'll march through these steps. one, come up with a general research topic or idea that you have to begin with. usually those ideas are very broad and you need to focus them down. once you focus down the question you need to come up with a

specific hypothesis or a description of what you're actually going to analyze. and then finally, come up with specific aims and objectives. as you're focusing down, it gets more and more detailed as you go through each of these steps.

so when you submit a research protocol, if all it has is a good idea that's not a research question. you haven't completed the process. as we said earlier, these ideas come from real life experiences that people encounter in

clinical practice, and these are exactly the kinds of questions that patients ask all the time. for instance, a patient might say: i don't feel well, what do i have? that's a diagnostic question. once you tell them that they have a certain disease, they'll

ask, how bad is it? they want to know the prognosis. if it's something like what i do, infectious diseases, they want to know, can i give it to my family? that's natural history: is it transmissible? they want to know the etiology:

how did i get this? and then finally, when it comes to administering treatments they want to know is this going to help me? and i should put as a caveat, is it going to hurt me? even if it's beneficial for them, does it have side effects

they're interested in as well? clinical research often focuses on biology, but it can focus on other, non-biological questions as well, related to behaviors or social issues. so for instance, you may want to do a study that asks why people come to the doctor when

they have disease x, or what behaviors influence outcomes in disease y? there was a very interesting cardiology study done about 25 years ago where they compared people in the placebo group who adhered to their medication to people in the placebo group who didn't.

so neither of these groups got an active agent, but it showed that if you adhered to the placebo, you were more likely to live longer. that clearly cannot be due to the intervention, because both were inert. it means there was a behavior or something that those people were

doing, or some characteristic of those kinds of patients, which helped them to live longer. so there are things that influence outcomes even on what people would call hard outcomes, like mortality. so developing better tools to evaluate and describe a disease

is often a good place to start if that hasn't been done in the research area you're interested in. the next step is to characterize the natural history of the disease in order to study it. if people don't know what the

disease does, it's difficult to look at how to treat it. the other thing to look at is, are there valid measures to assess outcomes? and that actually gets to the whole issue of measurement theory. some outcomes are very easy to

measure, such as is the person alive or dead? on the other hand, there is a whole process that people go through to develop other kinds of outcome measures, like patient reported outcomes. finally, you might want to describe what the risk factors are for

having certain outcomes. for instance, who is more likely to die from a certain disease, who is more likely to live, who is more likely to have complications or disability in their life, based on patient characteristics? so some examples of measurement

tools are, for instance, coming up with a severity scale. a severity scale might take baseline risk factors for an outcome. so for instance, if you look at people that have pneumococcal pneumonia, being older is a risk factor for death in

pneumococcal pneumonia. younger people have lower mortality rates than older people even if everything else is the same. another example is developing outcome measures. so for instance, one of the projects i'm involved in now is

developing a patient reported outcomes scale for symptoms of influenza. so influenza is a pretty common disease. everybody has that type of illness at some point in their life. most people do not die when they

get the flu, but you feel pretty lousy. the question is, how do you measure lousy? so we went through interviews of patients and developed a valid scale to be able to track the symptoms over time and describe how they're getting

better. that gets to an issue fundamental to the quantitative study of any phenomenon, and that's classification. have you ever had this thing where you drive into a parking lot and it says these parking spots are for subcompact cars

only? my car is not that small, but not huge either. not like the gigantic minivan i used to have to drive around in when i was a kid. do i qualify as a subcompact or don't i? there is a whole issue there: you have to know how to

classify things to study them. so classification is recognized as the basis for all scientific generalization and an essential element of it. uniform definitions and systems of classification are prerequisites in the advancement

of study and knowledge. in classifying causes of death, this is essential. so what's interesting now is that we have these classifications of various forms of injury and causes of death. the icd-10, the 10th version of a classification system that goes back many decades, was just implemented a couple of weeks ago.

guess what that's used for these days? it's used for billing purposes. it is not used for actually characterizing accurately whether someone has a disease or not. so again this gets to the issue of appropriately characterizing

these things in order to be able to describe them. if a milliliter for me in my lab is not the same as a milliliter for you in your lab, we can't replicate the experiment together. this is just a graphic that looks at the different kinds of

questions that you could ask that are all related to each other and that may be something other than biology. you can ask environmental, social, and cultural questions, or cognitive questions: how do people think about a specific problem?

and then also behavioral questions as well, and how they all might impact outcomes. it's interesting that we think about only biology as affecting outcomes and write these other things off as being soft, when in fact they do impact even things like

mortality. they are useful areas for study. so let's start on our 4 steps. first you have to choose a broad topic. one of the things you want to do is choose something that's really interesting to you. and that is because you're going

to have to do a lot of work to go through this process. if you're not interested from the beginning, you're going to lose steam and it's not something you're going to complete. you also want to choose a topic that's timely and relevant

because other people are going to ask you the question, so what? what's the big deal? why do we have to study this in the first place? that's a question you should answer before somebody else asks you.

you also want a question that is answerable, keeping in mind time and resource constraints. so the book designing clinical research came up with the acronym finer: something feasible, interesting, novel, ethical, and relevant. and i want to put in an aside

about the feasible piece. unfortunately, to fit into the acronym finer, feasible comes first when it should come last. there are many things you could do that are feasible. you could study 5 patients with a given disease. but that wouldn't really give

you valid answers. we talked about earlier that in the absence of scientific validity, the research loses its ethical component. feasibility is important, but if you can't do the study the right way with the question you picked, pick a different question.

don't answer the question in a way that's not going to get you useful information. it's also very helpful to have a mentor who can help with all the facets of this project, who has done this before, to help you think through these questions. once you've picked a general

topic, you need to focus that question down. and what that entails, again, we're back to the hard work piece. you need to evaluate the medical literature and other sources to evaluate the current knowledge in that research area.

maybe the question you're interested in has already been answered by somebody else. in many areas, though, much of what we think we know is less than what is generally believed. and one of the places to start with this is review articles or treatment guidelines but really,

what these do is represent a synthesis of others' views on the data. and much of what's in treatment guidelines is graded according to the strength of the evidence. there was a study done about 4 years ago which actually looked at the recommendations in

both cardiology and infectious disease treatment guidelines. only 11% of what was in them had been generated from randomized controlled trials, and almost half of the recommendations were based on clinician opinion. so that's nice guys sitting around a table saying, i think

you should do this. that's not evidence. that's not science. so many times what you can get out of guidelines is actually some of the questions that still need to be answered, because even though they're in the treatment guidelines, based on expert opinion,

they haven't been answered in a valid way. as this essay said, in many fields, claimed research findings may be accurate measures of the prevailing biases. and what the author meant is that if you don't design a study

properly, you may get an answer with a p value around it, but a p value doesn't tell you anything about clinical meaning, nor is it a measure of bias. so a poorly designed study can get you an answer with a p value of .001, but all you've done is measure what that bias was.

so you've got an answer that was accurate, but just more accurately wrong. that's the point he makes in that particular essay. you're looking for things we don't know already. how do you define the problem you're interested in looking at,

what are its clinical features? is it known how to diagnose the problem? if you come up with a diagnostic test, but it doesn't mean anything to the patient because they're 95 years old -- for instance, prostate specific antigen testing, where

a large study through nih showed that using psa to screen men for prostate cancer didn't change outcomes for them. many times you find prostate cancer in its early stages, and it's not going to harm the person at all, and they're going to die somewhere down the line of

something else. i told my doctor after that study came out, if he puts psa on the lab order i'm crossing it off. a friend of mine said the psa should be called prostate specific anxiety instead of

antigen, because if you tell somebody about the test they're going to worry about it. is this a really common problem, so the number of people affected is very high? or is this something that's rare but, even if it's rare, has a huge impact on the people that have

it? and that's the 4th question, looking at the impact in terms of morbidity and mortality. this is often assumed; people will say disease x is prevalent. but if only the more minor stages of that disease are prevalent, it's

not as big an issue. in fact, i wrote a paper in 2008 where we looked at what the impact of antibiotics was in the 1930s and 40s when penicillin was first introduced. in young, healthy people penicillin reduced mortality, but only by 5%. if you're older and sicker it

reduced mortality by 40%. that's huge. a 40% absolute reduction in mortality is huge. what was interesting was to show that 95% of young healthy people with pneumonia in the pre-antibiotic era got better on nothing.

so that gets to the issue of, what are the risk factors for getting the problem in the first place? and risk factors are not necessarily causal. so you could say older people are more likely to get pneumonia.

older people are also more likely to have gray hair. gray hair does not cause you to get pneumonia. but gray hair could be a marker for getting pneumonia. so risk factors don't necessarily need to be causal. the next question is, what's the

prognosis in terms of morbidity and mortality, and what factors might modify that prognosis? if you were doing a clinical study to describe the natural history of the illness, those things may be impacted by who you study.

when you're studying treatments, those things are referred to as confounders. in other words, those things may impact the outcome independent of the treatment you're trying to evaluate. the next question is, what interventions can mitigate the problem? this is much of the research

that gets done, usually but not always, in randomized controlled trials, evaluating interventions like drugs, devices, biologics, and also different behaviors, like hand washing. and then finally there are the questions that arise when there are effective treatments

already available there are still questions to answer about how you use them properly, how they compare to each other, and do certain interventions work better in some people than others? so that's the issue of effect modification.

so for instance, we know that beta blockers do not work as well in african americans as they do in caucasians. there is a big push to do what they call comparative effectiveness research. in cardiology this has looked at more expensive medications compared

to things like diuretics; the newer drugs may be more expensive, but they don't help any more than the older drugs do. so one of the things that you run up against when you're evaluating the medical literature in the field you're interested in is that people

believe a lot of things that may not be supported by evidence. when i was a fellow, there was a statement that any person who had a kidney transplant, if they had fungus in the urine, had to be treated, because they would get a fungus ball in their

urine that could knock out the new kidney. i found that in a textbook. i looked at the reference; it referenced another textbook, which referenced a paper that was an opinion piece, without a reference. do people really get fungus

balls? when i tried to trace that information, it turned out to be one person's opinion in an editorial, not anything based on even a natural history study of people that had kidney transplants. daniel boorstin, who was the librarian of congress

for a number of years, wrote that the greatest obstacle to discovery is not ignorance, it's the illusion of knowledge. for you, when you do your research, that's an issue because you have to convince people that what they think they know hasn't really been established, in terms

of why you want to answer the question you do. that gets to the issue of what we know from previous research on these questions. i took a logic course in college. one of the fallacies is called

[indiscernible]. that's argument from authority. well, i'm a big wig in this field, i say so, that makes it true. we run up against this all the time. television is full of this stuff. all the talking heads on tv who

say it's got to be true because i say it's true. but the real question for you as a scientist is, what is the quality of the data? the validity, reliability, and precision; are there biases in the previous data? so one of the good things about

the rise of meta-analysis since the 1970s is that the quality of the data and potential biases in the data are taken into account when people do the data synthesis for a meta-analysis. has the evidence been independently confirmed?

is there one study that shows the treatment is effective and ten studies that show it's not? that's one of the basic principles of the scientific method: make an observation, develop a hypothesis, test the hypothesis, and what's step 4?

confirmation. because you might just get lucky, or get it wrong, in a single study. one of the basic tenets -- when i was a fellow we wrote things down in lab books. we don't do that anymore. but the reason we wrote the experiments down was so somebody

else could replicate the same experiment. is the evidence consistent across different populations? effect modification means it might be effective in one group, not another. or the side effects might be different in one group than

another. and for interventions we might want to look at different doses, durations of therapy or combinations of therapy. so after you get through that, and that's a big process going through all that medical literature, you need to focus on

one part of that knowledge or research gap or one piece of the puzzle. you also need to think about not just your study, but how is your study going to fit into the bigger research picture? so the best kind of studies, the joke goes, ask more questions

than they answer, meaning once you answer one piece of this puzzle, it leads to asking all these other things that may relate to it. so for instance, you do a research study and you show drug x is effective in treating disease y.

but does it work in a lower dose or in a higher dose or in combination with something else? those questions might come up later so you have to think about how does this fit into the overall research plan. the best questions are the ones where even a so-called negative

study still gives useful results. the issue here is that a single study cannot possibly answer all the questions about a topic. one of the major flaws in research is people trying to answer too many things in a single study. it's like trying to put 25 anchors on a boat.

you'll sink the boat if you try to answer too many things at one time. so you really want to see how your research fits into the overall model or theory of the problem. once you get to this issue of feasibility, you have to

address what kinds of information are you going to need to be able to do your study? that relates to what kinds of people you'll study, what populations. if it's a comparative study, you need to have a test group and

control group to compare to each other. what exposures are you going to analyze -- are you testing this drug versus that one, or an exposure like people who smoke versus those who don't, or some environmental exposure -- and what outcomes are you going to

measure. is it going to be death, a measure of disability, et cetera. what kinds of information are available to you? do you work in a hospital that's got enough information or a system that's got enough

information to do this? we tried to do a study here where we looked at all the positive blood cultures at the nih clinical center to look at the impact of antibiotic resistance. the clinical center is a fairly small hospital, so doing this

only here did not give us enough information to answer the question. now we're trying to collaborate with other hospitals to pool our data with theirs to address that question. so what resources do you need to obtain the information?

is this stuff available in medical charts, or do you have to collect that information? and even if the information exists, do you have access to it, or can you get access to it? and again, just to reinforce: feasibility doesn't mean using invalid methods because, well,

that's all i can get around to doing. you still need to have scientific validity to justify the research in the first place. now that you've focused that question, you need to turn it into a hypothesis or a description. so people talk about

hypothesis driven research. if you're just doing a description, like how many people in the hospital get disease x, that is not a hypothesis; you're just describing something. and that is actually usually hypothesis generating,

so you may want to use that information later. i did a scientific review of a study that wanted to compare outcomes in patients before and after they had instituted a new guideline for testing people for tuberculosis in their system.

and they wanted to see what impact implementing that guideline actually had. but the guideline had only gone into effect literally a year ago. so one of our questions was, do you know how many people are going to have tested positive for tb

in the last year? if it's only a handful of people, you're not going to have much to analyze in that timeframe. so they needed to do that descriptive piece first, before they could go test their hypothesis. so what is a hypothesis? a hypothesis is not a

question, it's a statement. it's a statement about what the investigators believe to be true about nature, and the relationships of two or more variables to each other. a hypothesis doesn't say i'm going to find out whether drug x is better than drug y.

a hypothesis is drug x is better than drug y. and then you formulate a question around that. a hypothesis almost always entails a comparison. but all research is not comparative, as i said. so you need to differentiate

qualitative from quantitative research, and descriptive from analytical research. descriptive is what i just described, where you're going to describe the number of people in your institution who have tested positive for tb by a skin test. that's descriptive.

analytical means you're comparing one group to another in some way. it may be comparing people who have the disease versus those who don't, to look for risk factors for getting it, or it may be comparing people who got drug x to those who got drug y, to see if drug x improves

outcomes compared to drug y. qualitative and quantitative are two distinct endeavors. and there is more and more information coming out, or more realization, about the impact of qualitative research. qualitative research is often used to generate the hypotheses

that are then evaluated in a quantitative way in quantitative research. so the aim of qualitative research is a complete, detailed description in words. so for instance, when we developed the outcome symptom scale for influenza, the first

step was actually to interview patients who tested positive for influenza, to see what symptoms they had. the output was words. it was the patients' descriptions of what they felt when they had influenza. the second step was to quantify

how many people had which symptoms. that's a quantitative question. so the qualitative research develops observations for further testing, whereas the quantitative research constructs statistical methods to explain the observations.

in qualitative research, you only roughly know in advance what you're looking for; you're groping around for what you're going to test quantitatively later. on the other hand, quantitative research is not a data dredging exercise. you want to clearly state in advance what you're looking for.

so the qualitative part comes early in the phases of a research project, whereas the quantitative part comes later. and the design of a qualitative study may emerge as the study unfolds. on the other hand, a quantitative study carefully

lays out in advance all aspects of how the research is going to be designed and how the data are going to be collected. the actual instrument in a qualitative study is the researcher themselves. so if you're doing patient interviews, what's the tool?

the interviewer. the interviewer is asking the person questions, and the words are the output. on the other hand, in a quantitative research study the researcher uses other tools -- like, say, the questionnaire that was developed in the

qualitative part, or equipment i might have developed, or laboratory tests -- to collect numerical data. the output is numerical data rather than descriptive words. we went through this: descriptive research describes the occurrence of events,

for instance, a case report. i was on service in july and we saw someone who was neutropenic after receiving cancer therapy. that happens all the time when people get cancer therapy. this person developed epiglottitis -- it's what people think killed george

washington, but it's usually a childhood disease. it's when the epiglottis in your throat swells to the point where it closes your windpipe so you can't breathe. that hardly ever happens in people that are neutropenic. their white blood cell count is

low, and what do you need to cause that inflammation? white cells. so the medical student that was with us did an excellent job of looking back through the literature to find all the cases that had ever occurred in people

who were neutropenic. we put that together in a case series and described the kinds of people that get this disease. very useful. if anybody runs into the same problem we did, they can look back and say, here are the kinds of people that get this disease.

does it say that what we did for the patient is the best thing to do? no. that we couldn't answer by the use of a case series. that needs analytical research, testing one or more hypotheses in a quantitative fashion.

the distinction is not always as clear, as descriptive research often contains comparisons, but you can't assess causality. you can look at the case series and say it appears there are more cases now than there used to be. that might be because people are

more aware and more likely to report it. it doesn't mean it's more common. i can't make that assessment. the push for hypothesis driven research tends to make descriptions sound less valuable.

but these descriptions are actually necessary in order for you to be able to form the hypotheses that you were going to test in the future. in other words, research is really a step-wise approach. you have to develop the question before you can answer the

question. if you skip over those earlier stages or assume that it's already been done, your research may be on shaky ground to begin with. so once you come up with that kind of a question, then you have to match the kind of study to the kind of question that you

actually want to answer. i made this graphic once on an airplane, and i think the woman next to me wanted to jump out the window. putting all these lines on powerpoint is not my forte. what i tried to do is separate

this out into the different types of research: descriptive versus analytical. notice i don't have anything under descriptive. you don't need to do anything special; you count up the number of people that have a certain disease and describe what you

saw. on the other hand, analytical research entails a comparison, and there are several ways to make that comparison. the first is to divide things up into experimental versus non-experimental research. non-experimental is often called

observational research. so the distinction between these is that in experimental research, it's the investigator who decides who gets what treatment, or who gets what exposure. so randomized trials assign the patients to drug x or drug y or placebo.

that's done by the investigator through randomization. in an observational study, the investigator doesn't have a hand in who gets what. they're just watching what happens. the clinicians give what they think is best, and the

investigator counts up the results. there are 3 types of observational studies. one is called a cohort study. one is a case control. and one is cross sectional. so a cohort is one that moves forward in time.

it starts with the exposure and then looks forward to the outcome. so i start with exposure and go to the outcome. you can do that retrospectively, meaning that the data were gathered before the hypothesis was made. so a retrospective cohort would

be me going back and looking at past data -- which is essentially what we wanted to do, looking at blood cultures in the clinical center. i want to compare people that had resistant bacteria in their blood to people that didn't, and then see how

many people died in each group. that's a retrospective cohort. i'm starting with the exposure -- who had the bacteria in their blood -- and the outcome is death. i'm looking forward in time, even though the data were collected previously, making it retrospective.

a case control study does the opposite. it starts with the outcome and then looks backward in time. these are always retrospective. you're always looking backward. this is how the first studies were done where they looked at lung cancer and smoking.

so they looked at people who had lung cancer and looked back to see that people who had lung cancer were more likely to have been smokers than the people that didn't have lung cancer. then there is the cross-sectional study, which has no time component; it looks across the data

at one point in time. experimental studies can be divided into two types: randomized and non-randomized. randomization is a process by which patients are assigned to the given interventions randomly.

you may generate a table of random numbers or even use a coin that you can flip to decide which group people go into. but it is not a systematic process of assigning folks to the groups. non-randomized trials don't do that.
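just to illustrate what that table of random numbers might look like in practice -- this is a minimal sketch added here, not something from the lecture, and the arm labels, seed, and function name are invented for the example -- here is one way a simple 1:1 randomization list could be generated:

```python
import random

def make_randomization_list(n_participants, arms=("drug x", "drug y"), seed=2015):
    """sketch of a simple (non-stratified) 1:1 randomization list.

    each participant is assigned to an arm by chance alone, like a coin flip,
    rather than by any systematic rule the investigator controls.
    """
    rng = random.Random(seed)  # fixed seed only so the example list is reproducible
    return [rng.choice(arms) for _ in range(n_participants)]

if __name__ == "__main__":
    for i, arm in enumerate(make_randomization_list(12), start=1):
        print(f"participant {i:02d}: {arm}")
```

note that simple randomization like this only balances the groups on average; real trials often use block or stratified randomization to keep the group sizes and key baseline factors closer to even.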

in a non-randomized trial i may use a historical control: compare people who got the new treatment to people in the past who didn't. so for instance, in the hospital, studies on hand washing are often done this way. what happens is, a hospital implements a new protocol for

hand washing, counts up the people who get infections, and compares that to what happened in the past when they didn't do the hand washing. the problem is many things can change over time between the period when you implemented the hand washing and the period when you didn't. so you can't really

say that the outcome is due to what you thought it was. the process of randomization allows you to assign causality to the outcomes. so if you randomize people to drug x or y, everyone in the study, if randomization works, has an equal chance of dying

from the disease. so nobody is different in the study at the beginning of the trial, and then you follow them forward to the outcome. if the outcomes differ, what's the only thing different between the groups? this group got the intervention,

that group didn't. they were all equally likely to have a specific outcome at the beginning of the study. so what randomization does not do is account for biases that happen after the study begins. i never saw this movie, but there is a movie called the

dictator, one of those sacha baron cohen movies. in this movie he's running in the olympics in his country -- he's the dictator of the country. he's running in the race. he starts off and fires the starting gun for the race. and he starts running and turns

around and shoots all the other people in the race. so did they all start at the same point? absolutely. that's randomization. what happened after the study started? he shoots everybody. that's missing data.

so randomization does not account for stuff that happens after the race has been started. there are 4 different types of randomized trials. one is when you compare an intervention to placebo, or you compare it to no specific treatment.

the difference between those is that i can blind a placebo controlled trial. placebo means that you're giving people something that looks exactly like the thing that you're testing, so nobody knows what you're going to get. on the other hand, if you

compare treatment to no treatment, people know who got what. because you know who got no treatment, you know who got the treatment. the other types are the dose-response trial, where you compare higher doses of the intervention to

lower doses, and lastly, the active controlled trial, where you compare drug x to drug y, which has its own special issues. you can do those studies to evaluate whether the new intervention is better than the

old one, but in what i do in infectious diseases, many of these studies are not used to evaluate whether the new intervention is better; they want to rule out that the new intervention is worse by some amount. and you'll hear more about that kind of specific trial design

later on in the course. so let's get down to developing hypotheses. the more specific you are, the better, because people can understand what you're trying to evaluate. so for instance, let's take the hypothesis that antibiotics are effective in ear infections in

kids. that doesn't tell me what antibiotics, what kind of kids, or what kind of ear infections. so saying amoxicillin is effective in children between ages 2 and 6 -- now at least i'm describing the kids. amoxicillin is effective

compared to placebo -- now i know what i'm comparing it to. in reducing pain -- okay, now i've specified the outcome. in children ages 2 to 6 with initial episodes of acute otitis media -- now i've described the disease. that's the clearest way to state

it. the more specific you are about your research hypothesis, the more people can understand what you're looking at. so that gets down to developing specific aims and objectives for your study. so once you've chosen an overall

research question, that gives you the why. it's the rationale for doing the study. then you need to answer other questions. who are you going to study? that defines the population. where are you going to do the study -- hospitalized patients, outpatients, both?

when are you going to do it? what's the timeframe? and is the study prospective, meaning that the hypothesis comes first and now you're going to collect the data, or is it retrospective, meaning i come up with my hypothesis but the data has been collected in the past

that i want to look at. what variables are you going to measure, what interventions, and what are the outcomes that you're interested in looking at? those things all fall under content validity. and then how are you going to do this -- what tools are you

going to use to make the measurements? oftentimes what happens is we're stuck with what we're stuck with. in other words, if you're doing a retrospective chart review, you want to look at how many people had disease x.

you may look through the chart and it says this patient had pneumonia, and you go to look for some confirmation of the person having pneumonia. you can't find it. it's not in the chart. you can't find an x-ray or any reason why this person was said to have had pneumonia.

you are stuck with what's already in the collected data. on the other hand, in a prospective study you can define ahead of time what you want to measure. for both situations, though, planning is really key. and there is a saying in the efficiency literature:

failing to plan is planning to fail. if you don't think through these ahead of time you're going to get stuck in a place you don't want to be. you'll run into a problem you didn't anticipate. so my favorite is when i review

protocols and i run into this vague language. back in the 1990s when bill clinton was having his issues with monica lewinsky, he uttered the famous line, it depends what the definition of is, is. that's not the position you want to be in when you design your study.

how is success defined or failure defined? you're no more informed than you were before you read that sentence. this is one of my favorite shows to watch on saturday morning. it's been on for like 35 years now, this old house.

i'm very jealous of these guys. they have every power tool known to mankind, which my wife will never let me buy. i'm stuck doing it by hand. these guys have the fancy tools to do it with. they have the right tools for the job.

and that's actually what you have to think about in clinical research: you need to apply the right tools to what you're trying to do. before we talk about efficiency in clinical trials, let's talk about the common pitfalls that people run into when they come

up with the research question. they let feasibility issues become paramount. it's better to change the question than it is to develop a research study that's scientifically invalid just because you don't have the resources or whatever else you need to answer that

question properly. another one is taking on too many questions, and because you do that you don't answer any of them. you have too much information stuck into a single study, or it becomes so difficult to do that you can't accomplish it.

another is lack of clarity on the hypothesis, or not choosing a study design that matches the question. one of the things i've noticed in infectious diseases, what i do, is we say there is an unmet medical need for this drug because there is resistance to the old drug, and people are

dying. and then you look at the study design, and it's not designed to show the new drug is better than the old one you were just told is so bad. it doesn't match the research design to what the stated problem is to begin with.

and then finally, the issue we talked about: vague specific aims and variables, and unclear measurement properties of the tools you're using. one of the things to be very wary of is the two words you'll see in a lot of protocols: clinical judgment.

my judgment is not my colleague's judgment. when you put that in there, you're inherently putting vagueness into the study. sometimes that's okay. maybe what you want to evaluate is clinician judgment: why do clinicians make these

judgments? but on the other hand, if you're evaluating the effects of interventions and the outcome is clinician judgment, you're not sure what you actually measured, because many things go into clinician judgment. so, any questions about that in

terms of how to select and develop a research question? so we just spent 47 minutes talking about how to develop a research question. the reason we spent so much time on it is it's a process. it takes a while to do it. you won't sit down in a half

hour and develop a research question out of thin air. it takes a while to go through all that information, find out what's out there, and work through that kind of a process. but if you don't, you're treading down a pathway where you could run into trouble

later, especially when it comes to getting funding for research. you have to defend all those things we talked about, anyway. so it makes better sense to think ahead. now you have your question. how can we design this study most efficiently?

>> sorry, i didn't see -- >> [inaudible question] >> the question is, is there funding available for qualitative research as much as there is for quantitative research? that depends on who you ask. there is some funding available. for instance, when i was using

the example of patient reported outcomes, the first step is the qualitative research piece. there are organizations -- for instance, the affordable care act created a group called the patient-centered outcomes research institute -- which have provided

funding for that kind of research. so there are places now, specifically, where you might be able to gather funding. like we talked about with the tools, you have to go to the right place with the right question. if you go to a place

that's not interested in qualitative research questions and ask them to fund your qualitative study, you may not be that successful. a lot of that is because -- and this is my personal experience -- people think we've already answered the question when we haven't.

so you've got to show people the literature on why that question really needs to be answered. any other questions? this light is in my eyes so it's hard to see everybody. okay? all right.

so first of all, we're going to talk about efficiency of clinical trials. let's define the terms. a clinical trial is a controlled prospective trial enrolling human subjects, often used to evaluate the effectiveness and/or harms of interventions.

so that's why i was joking: when i see mouthwash labeled clinically tested, i doubt they did a clinical trial comparing this type of mouthwash to that type. so i don't know what they mean by clinically tested. on the other hand, efficiency, at least in physics, means the

ratio of useful work to the energy supplied to it. here it means getting valid, reliable answers to important questions with the least amount of resources. you don't want to put patients at risk by doing a study that's not valid,

but you also don't want to have too few people to answer the question. so to be able to do a study efficiently actually means more planning, not less work. so if you want to do a study with fewer patients, that translates inversely into more work.

so it's an inverse relationship: fewer patients, more work, because you have to think about it ahead of time. back in 2001, the institute of medicine put out this monograph, which i'm sure you'll want to read. it's 300 pages long.

it's called small clinical trials: issues and challenges. it discusses clinical trials with small numbers of participants ... in other words, it can be summarized in two words: more work. but that gets to the issue of what's a large or small

trial. a large trial is one that has an adequate sample size, or number of patients enrolled, to answer the primary research question. so large means large enough. even the first randomized trial that was ever done, with a relatively modest number of participants, may have adequate

power, if the effect of the drug that you're testing is very large, like that one was, where you had a 20% decrease in mortality. so this means balancing exposing research subjects to the potential harms of the experimental intervention against

obtaining valid answers. remember, if you don't obtain valid answers, there wasn't a point in doing the study to begin with. so this relates to the ethics of the study to start with. and there have been a number of

editorials in the literature complaining about this issue: that trials from their inception are too small to begin with, so that you can't really answer the question that you set out to address. this is one editorial which actually argued that it's

unethical to start those studies to begin with. the other side of the argument is that, well, if i do a small study, somebody, some day, will put it together in a meta-analysis and answer the question for real. i have to say, personally, i find

that to be a very weak response. you don't know what's going to happen in the future. for those that are familiar with yogi berra, the famous new york yankee that just passed away, he said prognostication is very difficult, especially when it concerns the future.

so you don't know what somebody else is going to do. that doesn't have anything to say about the validity of your study. it doesn't explain why you should do the study now if you expect somebody else to do better in the future. maybe somebody else should do it.

a proposed study that cannot answer the question being asked because the necessary sample size cannot be attained ... that's pretty clear. so especially in what i do, there have been discussions about studies in patients who are infected with resistant

pathogens, talking about the use of smaller data sets. that would seem to violate a basic ethical premise to begin with -- why you're doing the research to start with. the other thing that the iom says is that doing small trials is a last resort.

and it says that they should only be done when there are no alternatives. this cannot be overemphasized. the committee is not encouraging the use of small clinical trials but rather providing advice on strategies to consider in the design and analysis of these

trials. actually, it even says for some trials it might be impossible to answer a research question with a high degree of confidence. in those cases the best that one can do is assess the next set of questions to be asked.

so again, they're saying if you can't definitively answer the question, maybe you need to move on to a different question. so what's the issue with small numbers in a study? why is that a problem? the problem is that the smaller the number of people in a

study, the greater the variability of the results that you get and the greater the play of chance on the results that you might actually obtain. one of the examples i like to use -- i'm a football fan. my wife would love it if fantasy football disappeared from the

face of the earth. she says my mood on sundays goes up and down depending on how my team is doing. so if you actually look at the coin flip in the super bowl, the nfc team, up until 2 years ago, won the coin flip 14 years in a row.

what does that tell you? is the ref cheating? no, because there is no correlation at all between who wins the coin flip and who wins the game. it's not a big deal. so what do you know is going to happen over the next 50 years

of super bowls? it will even out. the afc team will have a long run of winning the coin flip, and it will end up 50-50 over the next hundred years. but me only looking at the last 14 years of coin flips is too small a sample.
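as a quick aside that is my own addition rather than one of the speaker's slides, a small simulation makes the same point: a fair coin flipped only 14 times routinely drifts far from 50-50, while a much larger sample settles close to it.

```python
import random

def heads_fraction(n_flips, rng):
    """fraction of heads in n_flips of a fair coin."""
    return sum(rng.random() < 0.5 for _ in range(n_flips)) / n_flips

rng = random.Random(14)
for n in (14, 100, 10_000):
    # repeat each experiment five times to show how much small samples wander
    runs = [round(heads_fraction(n, rng), 2) for _ in range(5)]
    print(f"{n:6d} flips: {runs}")
```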

and what i'm looking at is random chance; that we happened to get the nfc team winning 14 times in a row is nothing more than chance. drawing a conclusion from that is wrong -- wrong because i didn't take a random sample and i didn't look at a sample big enough. that's why p values do not have

anything to do with bias at all. and it also means that even though you reach statistically significant results, those results may not be generalizable or may not even be true. the other issue is that too many variables are present in a small study to assess cause and

effect. if you're looking at really sick patients, there is a lot going on with those people, and you may not be able to balance all those variables between the test and control groups.

you may only be able to discern gross effects, like whether people died, with very limited ability to analyze the baseline factors which may result in death. so if you only do a study with 20 people, you're not going to be able to analyze whether older people die more often than younger people;

there aren't enough people to answer the question. and you may be incapable of identifying adverse effects, especially if you study 25 people and the side effect happens in one of every 50 -- you may see nothing. that gets to the other issue:

absence of evidence is not evidence of absence. the fact that you do a study and don't see side effects of a drug doesn't mean it doesn't have side effects. so what are the situations where smaller clinical trials may be justifiable?

in rare diseases, where there aren't many people that have the disease to begin with. astronauts in space -- they're not talking about people with a disease, they're talking about really rare situations. also individually tailored

therapies, and environments that might be isolated, like somebody flying to mars -- i saw that movie interstellar last night. did anybody like that movie? i had to wake up at the end. also emergency situations

and public health emergencies are the places where we might want to do these things. while smaller clinical trials are a last resort, efficient clinical trials are always justifiable and really what we should want to be doing. what we're going to do is look

at different methods to approach the efficiency of a study, which are useful or not depending on what situation you're in, what research question you're trying to answer, and what setting you're doing the study in. these kinds of things are always applicable whether you're doing a larger study or a

smaller study. so first of all, we need to talk about what are the basic components of a clinical research study to begin with? if we're going to talk about altering these things to try to make the study more efficient, we have to know what the basic

building blocks are to begin with. the first thing is what we spent the first half of this talk on: you have to have a clear objective for the study. if it's a comparative -- what i called analytical -- study, you need to actually have a quantitative comparison between

the test and control group. that does not apply if it's descriptive. you need to select patients for inclusion in the study. and if it's a comparative study, you need to make sure that there is baseline comparability between the test group and the

control group. that applies if you're testing an intervention; if you're looking at risk factors for an outcome, you don't need baseline comparability, because that's what you're trying to analyze: what are the differences between the people at baseline who get one disease versus another?

you want to minimize the biases in the study. you want to have well-defined and reliable outcome measures that actually mean something to patients. many outcome measures in clinical research studies are based on what the doctor chooses

to do. in what i do, many of the studies in the past have been based on the doctor's judgment that the person doesn't need more antibiotics. but it's not clear why the doctors were making those judgments.

we've been trying to move away from that to get to more patient-centered outcomes that measure, is the patient living longer or better? if they're alive -- which is easy to measure -- can we have patient-centered measures that measure their ability to function in

their daily lives, rather than just judging whether the doctor needs to give more medicine or not? nobody comes to the doctor saying my goal is to get more medicine. they want to feel better in their lives.

finally, appropriate statistical analysis. that is the summary of this course in a nutshell. you're going to hear about all these things in greater detail marching through the course but keep that slide in mind. those are the things that you're

going to hear about for the rest of the time. so a lot of this has to do with this issue: remember, small and efficient clinical trials revolve around what's called sample size. sample size is essentially the number of research participants

or volunteers that you need to participate in your study. i use that terminology because, first of all, it's not right to refer to them as patients. if you're doing a survey of totally healthy people, they're not patients. remember, the word patient comes from

the latin word for to suffer. those people aren't suffering. i may want to interview people as to why they get a flu vaccine or not. and the second thing is, for these research studies,

you have to rely on the folks that sign up to volunteer to be participants. keep in mind these people are research volunteers; they're helping you out by being in the study. so, many variables have what's called a normal distribution, the famous bell-shaped curve we all

learned about in grade school. what that means is that many kinds of data fall onto this kind of bell-shaped curve. 95% of the data is in the middle, and 2.5% of the data is on each of the tails of this curve. so if i took, for instance, the

height of people in this room, there would be an average height and then there would be some distribution. so if danny devito walks in and shaquille o'neal walks in, what will happen is this curve will get broader. we'll have a person on one end and

another person way far out on the other end. keep that in mind when we talk about homogeneity in the data. that middle part of the curve covers 2 standard deviations on either side of the mean. the mean is right down the

middle, at the top of that bell. so what i've shown here is a descriptive study; i'm only looking at one curve. what we want to do in a clinical trial that's analytical, comparing one group to another, is this: we want to compare one group to

another group. so in a clinical trial, we compare average effects in a group of research participants who are administered the intervention to another group of people that don't get the intervention. let's look at this example. suppose we have two groups.

and if you look down at the bottom, the one group is centered around 50. and the other group is centered around 0. down at the bottom of that. if i can show this. right here. this is the one mean.

and here is the center of the other bell-shaped curve, at 50. so there are only 12 people per group here, and the standard deviation, which is the width of the bottom of the curve, is 60. the reason this is shaped this

way is because we have only 12 people in the group. now, what does statistically significant actually mean? statistically significant, at

a p value of .05, means we want the mean of one group to fall outside the other group's curve, excluding all of that curve except for the little 2.5% red tail on that end. so how do we get there? the first thing i want to show

you: look at this big overlap between these curves. getting a p value of .05, which means there is 2-and-a-half percent on this end and on this end, doesn't mean these two curves don't overlap. they overlap a lot. it just means they don't overlap so much that anything more than

this little tiny tail overlaps the mean of the other group. so what happens if we go from 12 patients and increase the number of people to 24? can you see the difference? the curves are taller and narrower, because the standard deviation

is now smaller. so by increasing the number of people in the study, the variability in these measures goes down. this only accounts for random chance and random variability. if i were measuring height but i used a ruler for

this group that was off by two inches but used a completely accurate ruler for the other group, that is a systematic bias in the study. increasing the sample size will tell me nothing about that. this is only a measure of random variability, not a measure of bias.

random variability means it's occurring equally in both groups. so again, notice there is a good deal of overlap here. we're just excluding this tail down at the bottom. so finally, if we increase this to 48 people per group, there is

hardly any overlap at all. notice the means don't budge: still 50 and still 0, centered around the same places. but by increasing the sample size, you have decreased the variability, the standard deviation, of the two curves.
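here is a rough python sketch of what's happening; it reuses the illustrative means of 0 and 50 and the standard deviation of 60 from the example, and everything else is just simulation.

import numpy as np

rng = np.random.default_rng(1)
sd = 60          # assumed within-group standard deviation from the example
means = (0, 50)  # assumed group means from the example

for n in (12, 24, 48):
    # standard error of each group mean: how wide the curves around 0 and 50 are
    se = sd / np.sqrt(n)
    # simulate one trial at this sample size just to see typical observed means
    group_a = rng.normal(means[0], sd, n).mean()
    group_b = rng.normal(means[1], sd, n).mean()
    print(f"n per group = {n:2d}: standard error = {se:5.1f}, "
          f"observed means ~ {group_a:5.1f} and {group_b:5.1f}")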

you haven't changed anything else. just increased the sample size. but this is what people don't want to do. they want to do a smaller study. so how do we, then, decrease this variability without doing this?

without putting more people into the study? that requires some understanding of what goes into sample size. do not sweat this. you're going to get whole lectures on this stuff. this is just toe-in-the-water time to try to get some basic

understanding of this to understand efficiency of trial design. so a sample size is based on 4 parameters or ingredients. and there are two types of error in clinical trials. one is called type one error, a fancy word for saying you

conclude something works when it doesn't: a false positive result. a type two error is you concluding that something doesn't work when it does: a false negative result. the term power, which is overused tremendously, is actually

one minus the type two error. type one error is usually that .05, which is where the p value of .05 comes from: you make a mistake and conclude a new drug was effective five times out of a 100, or one time out of 20.

the type two error is usually set at 10 or 20%. since power is the ability to detect a difference when a difference exists, power is one minus the type two error. so a trial with a type two error of 10 or 20% has a power of 90 or 80%.
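a minimal simulation sketch of those two error rates and power, reusing the illustrative numbers from the curves (a true difference of 50, a standard deviation of 60, 48 per group); the number of simulated trials is arbitrary.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, sd, true_diff, n_sims = 48, 60, 50, 2000

false_pos = 0  # type one errors: "significant" when there is no real difference
true_pos = 0   # power: "significant" when the difference is really there

for _ in range(n_sims):
    a = rng.normal(0, sd, n)
    b_null = rng.normal(0, sd, n)          # no true difference
    b_alt = rng.normal(true_diff, sd, n)   # true difference of 50
    false_pos += stats.ttest_ind(a, b_null).pvalue < 0.05
    true_pos += stats.ttest_ind(a, b_alt).pvalue < 0.05

print(f"type one error ~ {false_pos / n_sims:.3f}")  # should hover near 0.05
print(f"power ~ {true_pos / n_sims:.3f}")            # chance of detecting the real difference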

by the way, as an aside, you'll hear more about this. there is no role for talking about these after a study is over. so these are only probabilities that occur before the study is done. so for instance, you do a study

and you expect to find that drug a is 10% better than drug b. the results show drug a is 2% better, but you can't show a difference because that difference is so small. so what people will say after the study is, that study was

underpowered. no, it wasn't. you got the effect size wrong; that's the problem. it was not underpowered. so it's kind of like the ravens being, what, 1-5 this year. they've only won one game?

that was not the prediction. they were supposed to do well, and now they've only won one game out of 6. you can talk about what would have happened if this guy hadn't gotten hurt, or if this one play hadn't gone south. i love it when sportscasters do this.

but that's why you play the games. those things did happen, and they did lose. that's the way it is. talking about stuff that could have happened after a study is over is irrelevant. there is literature on the overuse of the term

underpowered. what people mean is, if only i had done a bigger study, i would have detected a difference. for reasons you'll hear about later, that is false. and that is because these numbers don't stay where they are.

so there is something i showed you in these graphs that never happens in real life. when i increased the sample size, the 50 stayed at 50 and the 0 stayed at 0. that doesn't happen. anybody want to know what phenomenon that is?

you guys know this. it's called regression to the mean. and what happens is, as the sample size gets bigger, these numbers are going to move closer to their true values. so, for instance, i think the average male height in the

united states is 5'9", and i think the average female height is 5'4", if i'm correct. so suppose we took the average in this room and it came out to be 6 feet for men and 5'2" for women. the more men and women we put into the room, what's going to happen?

it's going to move toward the population mean. it is not going to stay at 6 feet, and it's not going to stay at 5'2". it's going to move toward where the true average actually is. so when people say their study is underpowered, they think that

somebody came by and poured cement on this 50 and on this 0, and they can't move. the curves squeezing in around each other does happen in real life, but this, folks, is a fictional example. i was trying to show you the issue with standard deviation without moving too many other things on the graph; in reality, the means don't stay put, because of regression to the mean.
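a quick sketch of that drift, assuming a true average height of 69 inches and a first handful of unusually tall people; all of the numbers are made up for illustration.

import numpy as np

rng = np.random.default_rng(3)
true_mean, sd = 69.0, 3.0   # assumed true average height in inches and spread

# start with a small, unrepresentative room whose average happens to be high
sample = list(rng.normal(true_mean + 4, sd, 10))   # early average near 73 inches (~6'1")

for extra in (10, 40, 200, 1000):
    sample.extend(rng.normal(true_mean, sd, extra))  # keep adding typical people
    print(f"n = {len(sample):4d}: running average = {np.mean(sample):.1f} inches")
# the running average drifts from about 73 back toward the true 69 as n grows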

the other two components of sample size are the variability in the data, which is called the

standard deviation, and the treatment difference, the difference between the test group and the control group. the bigger that difference is, the fewer people you need to actually show it. that's sometimes also called the effect size, or it can be called

the delta as well. so here are the 2 things you're looking at: variability in the data is the width of this curve, and the treatment effect size is the difference between the means of the two curves. by making these further apart,

increasing the effect size, you can have a smaller sample size. by making the base of the curve narrower, decreasing variability, you can have a smaller sample size.
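putting the four ingredients together, here is a sketch of the usual normal-approximation sample size formula for comparing two means; the 50 and 60 are the illustrative effect size and standard deviation from the curves, and the alpha and power values are conventional defaults.

import math
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.90):
    # approximate per-group sample size for comparing two means:
    # two-sided type one error alpha, power = 1 - type two error
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return math.ceil(2 * ((z_alpha + z_power) * sd / delta) ** 2)

print(n_per_group(delta=50, sd=60))   # about 31 per group with the example numbers
print(n_per_group(delta=25, sd=60))   # halve the effect size: roughly 4x the people (~122)
print(n_per_group(delta=50, sd=30))   # a more homogeneous population (smaller sd): ~8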

so now that we have that background, how are we going to make these trials more efficient? there are four ways to do this, one at a time. the first: have a more focused, relevant research question. the research question we talked about needs to be one worth answering.

but focusing it on a more homogeneous population actually allows you to have less variability in the data. so if i only do a study in people over 65, the data is more homogeneous than in a study that goes from neonates all the way to 95-year-olds.

the other issue is that it's really important to calculate a sample size after you come up with a research question, not start off with the sample size and back-calculate the question. because what happens when people do that is you usually end up overestimating the effect size,

coming up with a sample size that's too small, and you can't answer the question to begin with. the second thing you could do is change those ingredients of the sample size. remember, i said there are type one and type two errors. i'm not suggesting that you do this, but if you allow more error in the

study, you will get away with a smaller sample size. but when you think about it, a p value of .05, or a 5% type one error, means you're wrong 1 in 20 times. if you don't think that's a lot, go on to clinicaltrials.gov and see how many thousands and thousands and thousands of studies there are out there.

1 in 20 is a lot. if we were wrong that many times, we'd really get things wrong. the other thing, i joke, if you don't think patients care about an error rate that high, hang a sign on your office door that says i'm wrong 1 in 20 times and

watch people run out of the waiting room. that is a pretty high level of error. it's why people ask for confirmation of results: if you get confirmation of results, you're less likely to be wrong. and if you increase the type 2 error rate, you're more likely to

discard things that are beneficial. neither one of these is a good idea. the next thing to do: enhance effect sizes. you can't make a drug more

effective than it is, or hand washing more effective than it is. so what can you do that would allow you to legitimately increase effect size? the first thing, as we talked about, is to come up with more homogeneous populations,

because that will decrease the variability. you can also try to look for what's called effect modification: is there some group in whom i think this intervention is more likely to be effective, and then study it in that population. so in other words, like i said, penicillin was way more

effective in older people who were sicker than in younger, healthier people. you could get away with a much smaller study if you studied the effect in older, sicker people. and by the way, you'd be answering more clinically relevant questions, because the young, healthy people

will get better on their own most of the time anyway. what that does is affect the generalizability of your results: you can only draw conclusions about the population that you studied. but this requires what we talked about at the very beginning,

understanding the natural history of the disease and evidence from prior trials. so one of the things that hurts you is enrolling participants in whom the effect of the intervention is expected to be 0. you do a pneumonia trial

that has lousy diagnostics and enroll people with a cold; they don't have the disease you were interested in. luckily, oncology and cancer drugs have been going the other way, developing agents that are more specific to specific groups of patients. for instance, the monoclonal

antibody herceptin has been used for breast cancer. it is only effective in people who are her2 positive and have that genetic alteration, which is only 20 or 30% of patients. in the other 70 or 80% it is potentially harmful, and it

doesn't help them. you want to specifically use this in people who are her2 positive to begin with. so that is studying a more homogeneous population; you can use a smaller sample size and, more importantly, avoid harming people who can't

benefit to begin with. another issue: optimizing exposure. some preclinical information or data from healthy volunteers may help you pick the right dose that you want to study in a future trial going forward. or, if you're doing a study

with a drug that may have high variability in the population, you may want to check drug concentrations in the blood. now, that needs to be done by an unblinded third party to maintain blinding of the study. you'll hear more about these

various interventions in studies when we get further into the course. i'm again just giving you a flavor of some of the things that you're going to hear about later. the next kind of thing you may want to do is use continuous

instead of dichotomous outcomes. what do we mean? a continuous outcome is one where you collect data continuously over time, as opposed to measuring at one point in time. suppose, like what i'm doing in the influenza studies, you wanted to look at the time it

takes people to get over their symptoms of influenza. that is a continuous measure. on the other hand, if i look at how many people are over their symptoms on day 10, when do i measure that? >> one time. on day ten.

so there is not as much information being captured, and therefore i don't have as much ability to detect differences. now, what that does do is require more frequent data capture. what we have been doing is using electronic patient diaries

that patients fill out at home; they don't have to be in the clinic to do this. here is a good example, published about a decade ago, comparing two drugs in cholera. notice that people are getting better out to 120 hours.

if i went beyond this, everybody is better, in both drugs. so would i be able to detect the difference between the drugs if i looked at how many people were better on day 10? no, i'd completely miss it. this time-to-event analysis,

which is the continuous measure, was able to detect differences that you would not have found if you used a dichotomous yes/no at a specific point in time out there.
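a sketch of that information loss with made-up recovery times; the exponential distributions, the 36- and 60-hour averages, and the sample size are assumptions, not numbers from the cholera slide.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 60
# assumed mean recovery times: drug a resolves symptoms faster than drug b
time_a = rng.exponential(scale=36, size=n)   # hours to recovery, drug a
time_b = rng.exponential(scale=60, size=n)   # hours to recovery, drug b

# continuous / time-to-event style comparison of the actual recovery times
print("comparing times:", stats.mannwhitneyu(time_a, time_b).pvalue)

# dichotomous comparison: recovered yes/no by day 10 (240 hours)
rec_a = np.sum(time_a <= 240)
rec_b = np.sum(time_b <= 240)
table = [[rec_a, n - rec_a], [rec_b, n - rec_b]]
print("comparing day-10 yes/no:", stats.fisher_exact(table)[1])
# nearly everyone has recovered by day 10 on both drugs, so the yes/no comparison
# usually shows nothing, while the comparison of times usually shows the difference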

so the next thing: selection of outcomes. picking outcomes that are more common actually allows you to detect differences. the problem is that the more common outcomes may not be what you're interested in; there is a trade-off between those. so sometimes people do what's called composite outcomes: they look at multiple things

that might happen to somebody. the problem is that this results in issues with interpretation if what you're combining together is not equally important to patients. so for instance, the cardiologists get this right. they use a composite outcome of death, stroke, and heart attack.

those are all bad things that can happen to you. on the other hand, in oncology, they have an outcome measure called progression-free survival. that means you're alive or dead, and your tumor shrinks or it doesn't. those are not the same thing.

and if i gave you the choice of, do you want your tumor to shrink or do you want to be alive, you'd choose the latter. so the problem is these two things are very different for people. one is a laboratory or radiologic finding; the other is a direct benefit for patients. that is

not like death, stroke, and heart attack in cardiology, and it doesn't imply benefit on all parts of the outcome. you could shrink the tumor but nobody lives longer, and most of what drives the outcome is the tumor shrinkage. here is what happens when

you combine things. so this was a hypothetical, an article on composite outcome measures. they came up with a hypothetical trial where the outcome was patients died or were hospitalized. it looks like only 5 people in

the test group died or were hospitalized, versus 15 people in the control group. that's great: 3 times as many people had bad outcomes on the control. so this new stuff is great, right? wait a minute, we didn't count the people who died because they

never got to the hospital and dropped dead at home. that cuts the opposite way. so when we add it all up, there is no difference between the test and the control group. so how you formulate these composite outcomes is really important.

you can't leave things out that may be relevant to the patient. getting hospitalized and dying, that's important. if you died at home because the disease was so bad it killed you fast, that's not good either. we need to think about those pieces when we build composite outcomes.
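a tiny worked version of that hypothetical; the 5 and 15 counts are from the example, and the home-death counts are assumed numbers chosen so the totals come out even, which is the article's point.

# hypothetical counts in the spirit of the example; the 20 vs 10 home deaths
# are assumed numbers, chosen so the totals illustrate the trap
died_or_hospitalized = {"test": 5, "control": 15}   # looks 3x better on test
died_at_home         = {"test": 20, "control": 10}  # never reached the hospital

for arm in ("test", "control"):
    total = died_or_hospitalized[arm] + died_at_home[arm]
    print(f"{arm}: counted = {died_or_hospitalized[arm]}, all bad outcomes = {total}")
# the counted outcome alone favors the test arm; adding the uncounted home
# deaths, both arms have 25 bad outcomes, so there is really no difference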

so what are some ways to decrease variability? we talked about effect size, moving those curves further apart. decreasing variability, on the other hand, is taking the width of that curve and making it narrower. that's the other way we could

decrease the sample size of a study. so the first thing is to come up with better measurement. when you do a measurement, there is the true value and there is always some error associated with that measurement. one way to make that better is to decrease the error.

and there are two types of error, which we already talked about: random error, by chance alone, and systematic bias. so by coming up with better measurements, you can decrease some of that error, which will decrease the variability in your study.
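a small sketch of that distinction, echoing the ruler example from earlier; the true height of 69 inches and the two-inch offset are assumed numbers.

import numpy as np

rng = np.random.default_rng(6)
true_height = 69.0   # assumed true value, in inches

for n in (10, 100, 10_000):
    random_only = rng.normal(true_height, 2.0, n)    # random error only, sd of 2 inches
    biased = rng.normal(true_height, 2.0, n) + 2.0   # same noise plus a ruler off by 2 inches
    print(f"n = {n:6d}: random-error mean = {random_only.mean():.2f}, "
          f"biased mean = {biased.mean():.2f}")
# the random-error average converges to the true 69 as n grows;
# the biased average converges to 71; more data never fixes systematic bias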

so remember what i said earlier about the problem when you allow clinician judgment? this is a really cool study that archie cochrane, whom the cochrane collaboration is named after, did in the 1950s. he took a bunch of coal miners in south wales and randomized them to 4 groups of doctors.

he told the doctors to ask the people about their symptoms: cough, chest tightness, pain, and shortness of breath. so these are all the same coal miners. what we ought to see is that the number of symptoms should be flat across doctors a, b, c, and d.

right? same people, should be the same amount of symptoms. look at this variability. these doctors got all sorts of different answers from the same patients about whether they had cough, sputum, and so on. and it also may reflect how much the particular

doctor thinks that symptom matters. so doctor a must love cough but doesn't think too much about sputum, whereas dr. c, i have it the opposite way: dr. c is concentrating on cough, less on sputum.

this is the same measurement, the same patients, different doctors. because it's vague and left up to judgment, this is what you don't want in your study, because just that kind of variability is going to result in a bigger sample size

going forward. you're not more accurately measuring what's happening to the patients; you're measuring it more inaccurately, and it results in you needing to do a bigger study. coming up with a better outcome measure helps. that's why we're using patient-reported

outcome measures, asking patients about their cough. that will decrease the variability, by asking patients those questions in a standardized way. here is another great example. you're thinking, that's just subjective;

if we use more objective findings we won't have this problem, right? chuck moertel did this study in the 70s. he took marbles and put them between two sheets of styrofoam, the kind of squishy stuff that covers this microphone,

and he had the doctors measure the size of the marbles. what the doctors didn't know was that some of the marbles were identical, so he was able to measure the variability. when he showed them marbles of the same size, the doctors reported up to a 25% reduction

in size based on the measurements. he was doing this because he was an oncologist; he wanted to show that judgments about tumor size were variable. so what it showed was that the same marble of the same size got

25% smaller when you left it up to the doctor to judge. that defies the laws of physics. i need these guys to do my taxes: reduce my tax bill and make it 25% smaller. the interesting thing is that that reduction is way bigger than the

average tumor shrinkage in a real trial. that much noise, from measurement error alone, is really going to sink your trial. so the reason i put this in here is to show you the percentage of drug approvals that have used patient-reported

outcomes. this data is old, a decade old now. what it showed was that places like pain drugs have been using patient-reported outcomes for a long time. gastrointestinal drugs, for a long time. here is my part, in infectious

diseases: 0. i guess when you have the flu you don't have symptoms, right? of course, you feel fine when you have the flu. so it's kind of crazy that in antivirals and antibacterials we haven't been

using any patient-reported outcomes. there is a reason why: when penicillin came along, it was at a time when people died from infections. but we need to ask the question, maybe drug x is better than drug y at decreasing symptoms,

even though it's no different in preventing death. we need to ask these questions. so, one of the outcome measures often used: surrogate endpoints. these are laboratory tests or physical signs used as a replacement for how the patient actually feels, functions, or

survives. there are very good examples of this, like viral load in hiv studies and cholesterol as a substitute for lowering heart attacks. what you never hear about are the surrogate endpoints that flop because they don't represent

what happens to the patient. two weeks ago there was a study in the "new england journal of medicine" that looked at people with a parasitic infection you get when you're bitten by an insect. it infects your heart, can give you arrhythmias, and can kill

you from the heart problems. they took people, gave them a drug, and looked at whether it decreased arrhythmias and death. it didn't do anything. but the surrogate outcome was a pcr test for whether you had the parasite in your blood or not. on that, there was a huge difference:

the drug cleared the parasite from your blood. but did that do you any good? no, you're still sick; it didn't help you. so what's interesting is you hear people say, that's okay, all the drug was supposed to do was kill the bug.

one of my friends, a statistician, says that if we threw the patient into a volcano, that would kill the bug too. the problem with this thinking is that we assume the intervention works only through that one mechanism, when there may be unmeasured benefits or

unmeasured harms, or other pathways of disease that are not measured. so killing off the bug didn't do anything in terms of decreasing the heart problem. and maybe that's because, at that point, the damage is already done and not related to the bug

anymore. it's related to the damage that has already happened in the heart, and you need another treatment unrelated to killing the bug. so lastly, the issue of ensuring follow-up of the enrolled participants in the study. this is a key, key issue.

you spend all this time getting people into the study, but then you don't follow them up. and when you have a lot of missing data, it can really sink your study, because then you don't know what happened to those people at the end of the study.

and finally, there are some particular types of study designs, like crossover studies, n-of-1 studies, and sequential trial designs. what you're doing here is using the patient as their own control. so in other words, they would

get intervention a, have a washout period, and then get intervention b. you compare what happened in that same patient on a versus b.
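a sketch of why using patients as their own controls shrinks the variability; the symptom-score numbers and the size of the improvement are assumptions, and the point is the comparison between the paired and unpaired analyses of the same data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 15
# each patient has their own baseline severity; assumed values for illustration
baseline = rng.normal(50, 15, n)
on_a = baseline + rng.normal(0, 5, n)        # symptom score on intervention a
on_b = baseline - 8 + rng.normal(0, 5, n)    # intervention b improves scores by ~8

# crossover-style analysis: within-patient differences, so between-patient noise cancels
print("paired:  ", stats.ttest_rel(on_a, on_b).pvalue)
# parallel-group-style analysis of the same numbers ignores that pairing
print("unpaired:", stats.ttest_ind(on_a, on_b).pvalue)
# the paired p value is typically far smaller for the same 15 patients,
# which is why crossover designs can get away with fewer participants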

now, there are only certain places where these trial designs can be used. usually, they are most useful in chronic illnesses with a stable course of disease. so if you tried to do this in pneumonia, it isn't going to work, right? in pneumonia you get sick, and you get better or you don't, in the space of a couple weeks to a month.

so you can't cross over to some other drug; you're going to be better or not within that short period of time. also, some diseases are always worse at certain times of the year. these are called period effects, and they can affect these kinds of

crossover trials as well. it's most useful in diseases where the treatment effect is rapid in onset and there is rapid cessation of the effect when the intervention is stopped. so you have arthritis: you get put on a drug and get better in a couple of days;

they stop it, and you get sick again in a couple of days. those are the kinds of situations where this is useful. i'm going to skip over this; we're running out of time. the last thing you'll hear about in this course is an issue called adaptive trial design.

and really, adaptive trial designs, i joke, remind me of saying i like food. what kind of food? in other words, there are many, many ways to adapt a trial. one of the ways used in the past: you study multiple doses of a drug, and

at some point you drop one of the doses because it's very clear that that particular dose is not effective. those kinds of studies have been done for a long time. on the other hand, suppose you're moving along doing a study on a particular outcome measure and

you look at the data and say, it doesn't look like i'll have an effect on this outcome, so i'm going to switch horses and go to a different outcome. that's a lot different. you'll hear about this in terms of advantages and disadvantages. i'll make a caveat on this final

slide. almost everything i have talked about today in terms of modifying studies to make them more efficient is in the setting of trying to show one intervention is better than another one. that's a superiority hypothesis.

there is this crazy thing i mentioned earlier called non-inferiority, which is a misnomer. to you and me, non-inferior means not worse; that's what i was taught in english class. but this trial design is actually designed to rule out that the

new intervention is worse by more than some clinically acceptable amount. now, there are places where this makes sense. the reason for these kinds of trials is that there are situations where patients may accept somewhat less effectiveness for

another benefit, like decreased side effects. for instance, uncomplicated urinary tract infection in young, healthy women: suppose there is a new drug that causes less nausea. but when the outcome measure is death, like somebody on a

ventilator with pneumonia in the icu, how much of a decrease in effectiveness against death is clinically acceptable? the answer is 0. nobody wants to die, and you would accept a drug with more side effects if it kept you alive.
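a sketch of what that decision rule looks like for a cure-rate outcome; the cure counts and the 10% margin are made-up numbers for illustration, using a plain normal-approximation confidence interval rather than anything trial-grade.

import numpy as np

def noninferior(cured_new, n_new, cured_old, n_old, margin=0.10):
    # sketch of the non-inferiority check: can we rule out that the new drug's
    # cure rate is worse than the old drug's by more than the pre-specified margin?
    p_new, p_old = cured_new / n_new, cured_old / n_old
    diff = p_new - p_old
    se = np.sqrt(p_new * (1 - p_new) / n_new + p_old * (1 - p_old) / n_old)
    lower_95 = diff - 1.96 * se
    return lower_95 > -margin, round(lower_95, 3)

print(noninferior(cured_new=170, n_new=200, cured_old=175, n_old=200))
# lower bound about -0.092, inside the -0.10 margin: non-inferior
print(noninferior(cured_new=150, n_new=200, cured_old=175, n_old=200))
# lower bound about -0.201, beyond the margin: cannot claim non-inferiority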

so there are places where this kind of hypothesis makes sense and places where it does not. you'll hear more about the issues in designing these kinds of studies. some people have held that in certain situations these trials are unethical because they

disregard patient interests: one, because they shouldn't be used in life-threatening illness, and two, because the other benefit, decreased side effects or improved convenience, is never spelled out for the patients. the study is designed to rule out that the new drug is unacceptably less effective, but it never

evaluates, okay, what's the benefit on the other side, like decreased side effects? if that's not evaluated, all you've done is put patients at increased risk of harm without telling them they're getting something else in return. so in conclusion, developing an

efficient trial starts with planning, a good research question, and then more planning. and more planning. so i want to get across to you that the real key is thinking about this ahead of time. the question comes first; the sample size comes second,

not the other way around. and there are various methods to increase effect sizes and decrease variability that, when applied in the correct setting, can provide valid, reliable answers to your research question. and for some diseases, developing the tools is a good way to

start: for instance, developing better outcome measurements and better data on natural history before you get to the point where you want to do clinical trials. any questions? are there online questions?

or people can just e-mail their questions after, right? okay. thank you very much, and sorry for keeping you from your dinner.
