GRAND CAYMAN, CAYMAN ISLANDS -- "Read clinical research studies carefully and thoughtfully, and don't rely on anyone to think for you," Dr. Lee Zane advised at the Caribbean Dermatology Symposium.
An important skill to master is the ability to summarize a study in a single sentence that describes the study design, primary predictors, primary outcomes, and study population, said Dr. Zane, a dermatologist at the University of California, San Francisco.
Readers who adroitly summarize a study can communicate its essential elements to others and create a framework for evaluating the study's results. To that end, Dr. Zane offered several points to consider when reading a study.
All studies have limitations, such as bias or confounders, that may compromise the interpretation of the results. Such factors do not generally invalidate the results, but they invite readers to consider the results in the context of the limitations, Dr. Zane explained. "In fact, sometimes conclusions may be strengthened by the presence of a confounder," he said.
Randomized clinical trials are considered to provide some of the strongest clinical evidence, but even they have vulnerabilities that can compromise their interpretation.
Randomization itself is one such limitation. If not done properly, randomization can introduce bias into a study. For example, randomization by whether a patient comes in on Monday or Wednesday versus Tuesday or Thursday is not true randomization, Dr. Zane said. In addition, traits such as age and sex can confound the results if they aren't distributed equally among randomized groups.
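The distinction Dr. Zane draws can be sketched in a few lines of code. This is a minimal illustration (not from the talk, with hypothetical patient IDs): true randomization assigns arms by chance alone, while day-of-week assignment is deterministic, so any systematic difference between Monday and Tuesday patients flows straight into the comparison.

```python
import random

# True randomization: each patient's arm depends only on chance,
# not on any characteristic of the patient or their visit.
def randomize(patient_ids, seed=None):
    rng = random.Random(seed)
    return {pid: rng.choice(["treatment", "control"]) for pid in patient_ids}

# Pseudo-randomization by visit day: the assignment is fully determined
# by when the patient shows up, so it can encode systematic differences
# (work schedules, clinic staffing, referral patterns) into the arms.
def assign_by_day(visits):
    # visits: {patient_id: weekday_name}
    return {
        pid: "treatment" if day in ("Monday", "Wednesday") else "control"
        for pid, day in visits.items()
    }
```

The second function always produces the same split for the same visit schedule, which is exactly why it is not randomization.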
"Always scrutinize Table 1," he advised. Table 1 shows the features of the randomized groups. "If there are differences among groups, you have to decide whether they may have had a significant effect on the outcome."
Don't confuse clinical significance with statistical significance. "Just because a result has a low P value doesn't mean it is an important or useful clinical finding," he said at the meeting.
"There is a general overreliance on P values in our literature," Dr. Zane said. He cited the historical origin of P greater than .05 as an indicator of statistical significance. The value was arbitrarily chosen by statistician Ronald Fisher in 1926 in a paper assessing the effectiveness of manure on crop growth.
"Investigators should report the actual P value rather than simply saying whether it is greater or less than .05," he said. "Knowing whether a P value is .06 or .98 provides much more information about how likely the result may have been simply due to chance."
Confidence intervals may be a preferable alternative to P values, Dr. Zane said. These intervals are a measure of precision, not the result of a statistical test, and they provide a range of values around an estimate that may be considered statistically similar to that estimate.
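As a concrete sketch of the point (hypothetical data, not from any study cited in the talk), a confidence interval reports both the estimate and its precision, which a bare "P < .05" cannot. The snippet below computes an approximate 95% interval for a sample mean using the normal approximation.

```python
import math
import statistics

def mean_ci(values, z=1.96):
    """Approximate 95% confidence interval for a sample mean.

    z = 1.96 gives ~95% coverage under the normal approximation;
    for small samples a t-based critical value would be wider.
    """
    m = statistics.mean(values)
    se = statistics.stdev(values) / math.sqrt(len(values))
    return m, (m - z * se, m + z * se)
```

A wide interval around an estimate signals imprecision even when the result clears a significance threshold, which is the information a reader loses when only "P < .05" is reported.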
Don't forget to consider such methodologic factors as the size and composition of the sample population, as well as the level of blinding, when reading and evaluating a study, he said. "Are the subjects in the study similar to those that you see in your clinic? Could the lack of blinding in an open-label study have contributed to the observed results?"
All studies provide evidence of some sort. "The key is being able to determine the strength of that evidence," he said.