SCIENTIFIC RELIABILITY AND PEER REVIEW

"You can fool some of the people all of the time, some of the people some of the time, but you can't fool all of the people all of the time." Abraham Lincoln

Peer review means much what the term implies. Scientific journals assign "readers" (reviewers) to evaluate the papers submitted for publication. The reviewers read and critique the document, and it is hoped that any gross errors will be uncovered in the process. The document may be returned to the author a number of times before everyone is satisfied with the product. The reviewers are not usually given the raw data used by the author. Ideally, both the author and the reviewers have expertise in the area under review.

Often, when a scientific paper is published, two dates are listed on it, usually at the beginning or the end of the document. The first is the date the document was submitted to the journal publisher; the second is the date of acceptance, which may be months or even years later. This is a quick way to tell whether a document has been peer reviewed. However, dates of submission and acceptance are not always printed on the document. An editorial statement concerning peer review may appear in the journal itself, usually in the December or January issue. Even when a journal is peer reviewed, it may publish supplements which are not; for example, a journal supplement may contain conference papers, proceedings, discussion group reports, and the like that have not been peer reviewed. It is usually best to check the editorial policy of a journal to know whether a particular document is truly "peer reviewed."

"Good Science, The Minimum Standard of Credibility for Any Research, Requires Peer Review and Publication,"

"[Peer review] focuses on the methodology, analysis and results recited in an article in an attempt to weed out error (e.g., from observer bias, improper statistical analysis, insensitive methods, carelessness and failure to recognize and control for confounding variables) and to assure that the text of an article is complete so that readers may judge the work for themselves...

...Unlike the biases inherent in litigation--which may skew an expert's opinion or results to benefit one party over another--good science is not primarily concerned with what the results may turn out to be, but rather with the accuracy and reliability of the experimental methodology and data by which they are reached...

...[Peer review] and publication do not end the scientific inquiry. Indeed, the mere act of publication frequently provokes scientific critique of the article by readers, who often submit Letters to the Editor commenting on the strengths or weaknesses of the work and its implications...

Because [peer review] does not, and cannot, inquire into whether authors actually did what they report, the reviewers cannot assure the readers of the truth of the ultimate conclusion contained in an article...

...[Peer review] does not guarantee the results are error free...

The goal of publication and [peer review], as part of good science, is to prevent premature reliance on certain propositions by the scientific community before the supporting evidence has been reviewed by a panel of expert peers and published for all to see; the goal of the rules of evidence, as part of due process, is to prevent premature reliance on certain propositions by a fact finder before it has been found probative, relevant and reliable. Neither guarantees the accuracy of the information or assesses its weight in finding the truth. But both establish a threshold level of reliability. In either situation it would be imprudent to rely upon a proposition before, at a minimum, it has been passed upon for plausibility.

The only mechanism available to assess the credibility of a scientific opinion crucial to the question of liability, independent of the biases of litigation, is to confirm that the opinion was reached by good science, e.g., based upon published evidence that has been [peer reviewed], and then subject to replication and verification. Accordingly, were this Court to repudiate [peer review]--an action which Amici strongly oppose--there would be no objective standard for a court to apply in evaluating the credibility of a scientific opinion. [Peer review] and publication of scientific data and conclusions, simply put, are the only non-biased checks on scientific opinion available to the courts and should, therefore, be employed to the extent feasible." From Foster, K. R.; Huber, P. W.; Judging science: scientific knowledge and the federal courts, MIT Press, Cambridge, MA, c1997, page 183.

The Office of Research Integrity (ORI) promotes integrity in biomedical and behavioral research supported by the Public Health Service (PHS) at about 4,000 institutions worldwide. ORI monitors institutional investigations of research misconduct and facilitates the responsible conduct of research through educational, preventive, and regulatory activities. Organizationally, ORI is located in the Office of Public Health and Science (OPHS) within the Office of the Secretary of Health and Human Services (OS). http://ori.dhhs.gov/

Articles:
"On Being a Scientist: Responsible Conduct in Research--Publication and Openness" http://www.nap.edu/openbook.php?record_id=4917

"Conduct and Misconduct in Science" by David Goodstein
"Reproducibility: In reality, experiments are seldom repeated by others in science. When a wrong result is found out, it is almost always because new work based on the wrong result doesn't proceed as expected. Nevertheless, the belief that someone else can repeat an experiment and get the same result can be a powerful deterrent to cheating. This appears to be the chief difference between biology and the other sciences. Biological variability -- the fact that the same procedure, performed on two organisms as nearly identical as possible is not expected to give exactly the same result -- may provide some apparent cover for a biologist who is tempted to cheat. This last point, I think, explains why scientific fraud is found mainly in the biomedical area." http://www.its.caltech.edu/~dg/conduct_art.html

The basics are verifiability and reliability. Who agrees with the author? Have the same experimental conditions been tried by others?

See also: the discussion of junk science in Judging science and in The Toxicologist as expert witness (both cited in the bibliography below).

CSPI--Center for Science in the Public Interest

Committee on Science, Engineering, and Public Policy, Panel on Scientific Responsibility and the Conduct of Research; Responsible science: ensuring the integrity of the research process, National Academy of Engineering, Institute of Medicine, c1992.

MEDHUNT Medical Document Finder
HONcode (Health on the Net) Site-Checker: a very short interactive questionnaire that will help you determine whether a Web site follows the ethical principles highlighted by the HONcode.
WRAPIN: automatically determines the reliability of medical documents.
http://www.hon.ch/

Bibliography:

Edwards, G.; "Ethics in alcohol research and publishing," ALCOHOL AND ALCOHOLISM (1996), 31 (1): 7-9.

Furst, Arthur; The Toxicologist as expert witness: a hint book for courtroom procedures, Taylor and Francis, c1997, 106 p.

Finnigan, F.; Hammersley, R.; "The Effects of alcohol on performance," In: Smith, A. P.; Jones, D. M., eds.; Handbook of human performance, New York, NY, Academic Press, c1992, p. 73-126. This chapter discusses the different types of research, their purposes, minimum standards for the number of subjects (18), scientific method, and much more.

Foster, K. R.; Huber, P. W.; Judging science: scientific knowledge and the federal courts, MIT Press, Cambridge, MA, c1997. What is "scientific knowledge"? When is it reliable? These deceptively simple questions have been sources of endless controversy. In 1993, the U.S. Supreme Court handed down a landmark ruling on the use of scientific evidence in federal courts: federal judges may admit expert scientific evidence only if it merits the label "scientific knowledge," and testimony must be scientifically "reliable" and "valid." In the book--organized around the criteria set out in the 1993 Daubert ruling--Foster and Huber consider such issues as "fit" (whether a plausible theory relates specific facts to the larger factual issues in contention), the falsifiability of scientific claims, scientific error and reliability in science (particularly in epidemiology and toxicology), the meaning of "scientific validity," peer review, the problem of boundary setting, and the risks of confusion and prejudice when scientific material is presented to a jury.

Jones, A. W.; "Some thoughts and reflections on authorship," ALCOHOL AND ALCOHOLISM (1996), 31 (1): 11-15. Jones explains the peer review process and some errors in peer reviewed articles he has encountered.

updated 12/26/16