Category Archives: Journal Articles

Scientific racism?

At one of the many journal clubs I regularly attend, the article we were discussing included race in the demographics. I have been mulling over race in medical articles for some time now, and my belief, informed by my cultural values, is that this is ingrained racism. I asked members of the journal club what they thought and whether they considered it a useful metric to record. The responses were mostly that the article was from the US and that it reflects how most of the US population think of themselves (there was some awkwardness there). Earlier in the week, I emailed a well-respected epidemiologist and asked him what he thought, and he said that it is almost impossible to separate socio-economic and genetic factors.

Race is a sociocultural concept used to classify humans by skin colour. It has been used to justify the superiority of one group over another. Ethnicity expresses belonging to a social group with similar cultural or national traditions. An ancestral group is the genetic link from an ancestor to descendants. Socio-economic grouping uses a person’s position in society and is mainly based on occupation, industry or other professional activity.

So what is a useful metric in medical research? Socio-economic grouping can indicate what food, health care, housing, education and other services a person has access to. Ancestral group can indicate who will suffer disease or other negative health outcomes due to genetic susceptibility. Race – what does that indicate? I suspect the use of race is a result of social conditioning and not intentional racism. It’s a curly question.

Sauer NJ. Forensic anthropology and the concept of race: if races don’t exist, why are forensic anthropologists so good at identifying them? Soc Sci Med. 1992;34(2):107-111.

Templeton AR. Biological races in humans. Stud Hist Philos Biol Biomed Sci. 2013 Sep;44(3):262-71. doi: 10.1016/j.shpsc.2013.04.010. Epub 2013 May 16.

Fujimura JH, Rajagopalan R. Different differences: The use of ‘genetic ancestry’ versus race in biomedical human genetic research. Soc Stud Sci. 2011;41(1):5-30.

Ousley S, Jantz R, Freid D. Understanding race and human variation: why forensic anthropologists are good at identifying race. Am J Phys Anthropol. 2009 May;139(1):68-76. doi: 10.1002/ajpa.21006.

 

Sicily statement on classification and development of evidence-based practice learning assessment tools

Julie K Tilson, Sandra L Kaplan, Janet L Harris, Andy Hutchinson, Dragan Ilic, Richard Niederman, Jarmila Potomkova and Sandra E Zwolsman. BMC Medical Education 2011, 11:78. doi:10.1186/1472-6920-11-78. http://www.biomedcentral.com/1472-6920/11/78

Abstract

Background

Teaching the steps of evidence-based practice (EBP) has become standard curriculum for health professions at both student and professional levels. Determining the best methods for evaluating EBP learning is hampered by a dearth of valid and practical assessment tools and by the absence of guidelines for classifying the purpose of those that exist. Conceived and developed by delegates of the Fifth International Conference of Evidence-Based Health Care Teachers and Developers, the aim of this statement is to provide guidance for purposeful classification and development of tools to assess EBP learning.

Discussion

This paper identifies key principles for designing EBP learning assessment tools, recommends a common taxonomy for new and existing tools, and presents the Classification Rubric for EBP Assessment Tools in Education (CREATE) framework for classifying such tools. Recommendations are provided for developers of EBP learning assessments and priorities are suggested for the types of assessments that are needed. Examples place existing EBP assessments into the CREATE framework to demonstrate how a common taxonomy might facilitate purposeful development and use of EBP learning assessment tools.

Summary

The widespread adoption of EBP into professional education requires valid and reliable measures of learning. Limited tools exist with established psychometrics. This international consensus statement strives to provide direction for developers of new EBP learning assessment tools and a framework for classifying the purposes of such tools.

Instruments for Evaluating Education in Evidence-Based Practice

Shaneyfelt et al. Instruments for Evaluating Education in Evidence-Based Practice: A Systematic Review. JAMA. 2006;296(9):1116-1127.

Abstract

Context  Evidence-based practice (EBP) is the integration of the best research evidence with patients’ values and clinical circumstances in clinical decision making. Teaching of EBP should be evaluated and guided by evidence of its own effectiveness.

Objective  To appraise, summarize, and describe currently available EBP teaching evaluation instruments.

Data Sources and Study Selection  We searched the MEDLINE, EMBASE, CINAHL, HAPI, and ERIC databases; reference lists of retrieved articles; EBP Internet sites; and 8 education journals from 1980 through April 2006. For inclusion, studies had to report an instrument evaluating EBP, contain sufficient description to permit analysis, and present quantitative results of administering the instrument.

Data Extraction  Two raters independently abstracted information on the development, format, learner levels, evaluation domains, feasibility, reliability, and validity of the EBP evaluation instruments from each article. We defined 3 levels of instruments based on the type, extent, methods, and results of psychometric testing and suitability for different evaluation purposes.

Data Synthesis  Of 347 articles identified, 115 were included, representing 104 unique instruments. The instruments were most commonly administered to medical students and postgraduate trainees and evaluated EBP skills. Among EBP skills, acquiring evidence and appraising evidence were most commonly evaluated, but newer instruments evaluated asking answerable questions and applying evidence to individual patients. Most behavior instruments measured the performance of EBP steps in practice but newer instruments documented the performance of evidence-based clinical maneuvers or patient-level outcomes. At least 1 type of validity evidence was demonstrated for 53% of instruments, but 3 or more types of validity evidence were established for only 10%. High-quality instruments were identified for evaluating the EBP competence of individual trainees, determining the effectiveness of EBP curricula, and assessing EBP behaviors with objective outcome measures.

Conclusions  Instruments with reasonable validity are available for evaluating some domains of EBP and may be targeted to different evaluation needs. Further development and testing is required to evaluate EBP attitudes, behaviors, and more recently articulated EBP skills.

Evaluation tools: EBM Fresno Test | Berlin Questionnaire