Category Archives: Search Strategies

The death of a research subject

In 2001, a clinical trial participant at Johns Hopkins (JH) died as a result of inadequately researched safety information. Ellen Roche was a 24-year-old lab assistant at the JH Asthma and Allergy Center and is thought to have enrolled in the study with altruistic intent (as most clinical trial participants do). Immediately following Roche’s death, research funding was suspended and JH was left scrambling to address the controversy. Medical librarians in the US went into overdrive over the incident. It appeared that the lead researcher had done only a rudimentary search of the literature: he searched PubMed (in 2001, Medline records were available from 1966 onwards; to discover older records you would have had to search print indexes, and hexamethonium was used extensively in the 1950s), Google, Yahoo! and LookSmart. Librarians and researchers said that the literature search was lazy and foolhardy. Not only were the researchers at fault, but the JH Institutional Review Board (IRB) was also at fault for not providing proper oversight. Librarians talked about literature search standards and what should constitute a reasonable search. JH has since included literature search standards in its IRB application form, and many other IRBs across the US have followed.

JH has well-resourced libraries, so why didn’t the researcher contact them for assistance? Was this a failing of the library as well? The Welch Medical Library began liaison services in 2000 for a few departments (Asthma and Allergy and the IRB were not among them). What does this mean for medical librarians now? Perhaps librarians could work more closely with their research offices to provide some sort of literature search service: approving search strategies and advising researchers where they can be improved. It could be framed as part of the quality control process of clinical trial management. There also needs to be better reporting of adverse effects. Apparently, in a hexamethonium trial in the ’70s, adverse effects were not reported. Adverse effects reporting is still a problem today, and it is also a problem for librarians searching for adverse effects literature, due in part to inadequate indexing.

When I first read reports about the death of Ellen Roche and the swirl of commentary about it on the library e-lists, it touched me, as it did others. As health professionals, we have a duty to get involved in research activities in order to prevent another incident like this from happening.

Further reading:

The hexamethonium asthma study and the death of a normal volunteer in research

Johns Hopkins’ Tragedy: Could Librarians Have Prevented a Death?


HTAi15 IRG Advanced Searching Workshop: Understanding and Using Text Analysis Tools

The final session of the IRG Advanced Searching Workshop was divided into four parts and was rather intensive. Well, the entire workshop was intense! A lot was covered. Julie Glanville from York Health Economics Consortium spoke briefly about text analysis tools (covered in more detail at the HLA event earlier this year), and Carol Lefebvre talked about the project, begun in 2002, to test the Cochrane Highly Sensitive Search Strategy for RCTs (developed in 1993–94) to determine whether it still performed well with as few terms as possible. Since the development of the Medline filter, a few changes had occurred: the addition of the Controlled Clinical Trial publication type in MeSH (for quasi-randomised trials), better trial reporting due to the CONSORT guidelines, and the development of text analysis tools. Testing with a gold-standard set of RCTs and a set of non-trial records using WordStat showed that the best identifier term was the Controlled Clinical Trial publication type. But due to indexing changes (the cessation of double-indexing RCTs with the CCT publication type), reassessment found that the best identifiers (those terms locating the most RCTs) were RCT MeSH OR CCT PT. This was one of the issues with the filter that Carol mentioned (always be aware of indexing policies and changes!); the other was the text-word terms needed to catch non-indexed records. Siw Waffenschmidt and Elke Hausner from the Institute for Quality and Efficiency in Health Care (IQWiG) discussed generating test sets for search strategy development and data analysis, and guidance for identifying subject terms and text-words. The guidance comes from the recently published EUnetHTA Process of information retrieval for systematic reviews and health technology assessments on clinical effectiveness (still at second draft stage).
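For context, the kind of brevity being tested here is visible in the published 2008 sensitivity-maximizing version of the Cochrane Highly Sensitive Search Strategy for PubMed. I am quoting it from memory, so check the current Cochrane Handbook for the authoritative wording:

```
#1  randomized controlled trial [pt]
#2  controlled clinical trial [pt]
#3  randomized [tiab]
#4  placebo [tiab]
#5  drug therapy [sh]
#6  randomly [tiab]
#7  trial [tiab]
#8  groups [tiab]
#9  #1 OR #2 OR #3 OR #4 OR #5 OR #6 OR #7 OR #8
#10 animals [mh] NOT humans [mh]
#11 #9 NOT #10
```

Eight retrieval terms plus an animal-studies exclusion is remarkably compact for a filter expected to catch most RCTs in Medline.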
Hausner spoke about the work she and other information specialist researchers did comparing the conceptual and objective approaches to search strategy development, which is elaborated in this journal article: Development of search strategies for systematic reviews: validation showed the noninferiority of the objective approach. Basically, the research showed that a conceptual strategy developed by Cochrane with more synonyms was not superior to an objective search strategy on the same topic developed by IQWiG. However, the objective approach is not faster than the conceptual one. Time saved is not the issue here, though; it is the quality of the search. IQWiG demonstrated with their projects that the objective approach can produce high-quality strategies that are stable over time and more suited to complex questions. Take-home points: text analysis tools are here to stay! It will take time to learn this approach, but the plus side is that it produces strategies of equal quality to those developed using the conceptual approach, as well as data to demonstrate strategy development and decision-making.
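The core of the objective approach can be sketched in a few lines of code. The following is my own toy illustration (not IQWiG's actual tooling): rank words by how much more often they appear in a gold-standard set of relevant records than in a comparison set of non-relevant records, and treat the top-ranked words as candidate search terms.

```python
from collections import Counter

def candidate_terms(relevant, comparison, min_gap=0.5):
    """Rank words by the gap between their per-record frequency in a
    gold-standard (relevant) set and a comparison (non-relevant) set."""
    def freqs(records):
        counts = Counter()
        for text in records:
            counts.update(set(text.lower().split()))  # presence per record
        return {word: counts[word] / len(records) for word in counts}

    rel, comp = freqs(relevant), freqs(comparison)
    scored = {word: rel[word] - comp.get(word, 0.0) for word in rel}
    # keep only words that are markedly more frequent in the relevant set
    return sorted((w for w, s in scored.items() if s >= min_gap),
                  key=lambda w: -scored[w])

# Tiny invented test set, just to show the mechanics.
relevant = [
    "randomized controlled trial of inhaled steroid",
    "a randomized double blind placebo controlled study",
    "randomized crossover trial in adults",
]
comparison = [
    "case report of adverse reaction",
    "narrative review of asthma management",
    "cohort study of steroid use",
]

print(candidate_terms(relevant, comparison))  # "randomized" ranks first
```

Real test sets are of course far larger, and the frequency-ranked candidates still need human judgment (a stoplist, truncation, field tags) before they become a strategy, but this is the basic idea behind tools like WordStat.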

HTAi15 IRG Advanced Searching Workshop: Qualitative Evidence

When I found out that Andrew Booth from ScHARR was going to be presenting at the IRG Advanced Searching Workshop, that was a decision-maker for me. My education plan includes teaching methods for finding qualitative research, and since I know Booth is an excellent teacher, I thought it a brilliant opportunity. I also know from previous experience that Booth loves acronyms, so I had to laugh when I saw the opening slide of his presentation (above). Systematic reviews of qualitative research are increasing; according to Booth, around 12 a month are hitting the databases. (The search strategy used to determine this interested me: Topic=(“qualitative systematic review” OR “qualitative evidence synthesis” OR “qualitative research synthesis”) OR Topic=(metastudy OR metasynthesis OR “meta synthesis” OR “meta ethnography” OR “meta ethnographic” OR “metaethnography” OR “metaethnographic”) OR Topic=(“systematic review of qualitative”).) The main difference between quantitative and qualitative SRs is that while the former seek to pool numerical results, the latter seek to find themes or constructs: it is an interpretive exercise which aims to gather insights. Qualitative SRs answer questions such as how and why interventions work or don’t work, what outcomes matter to patients, and what patient experiences of disease are, to name a few. The hardest thing about qualitative evidence is appraisal, so I was glad to find out about a method which Booth says is similar to GRADE. CERQual aims to assess how much confidence can be placed in the evidence in qualitative reviews. There are four components to this: methodological limitations of the included studies; coherence of the review; relevance of the studies to the research question; and adequacy of the data.
Creating a search strategy can use these components as a guide: use methodological filters to find high-quality studies; use sampling to retrieve papers that provide a fair representation of the phenomenon; use question mnemonics to guide search strategy formation; and use alternative search strategies to locate similar studies. There are a few question mnemonics to choose from: SPICE, ProPheT, ECLIPSE and SPIDER. It was fun to test these tools with a qualitative research question; this was one of the really useful workshop activities. Now onto something tricky: searching! The next activity was assessing whether an article was qualitative research or not and, if not, why it came up in the retrieval set. I really liked this activity and will copy it in my own sessions; it is one you can learn a lot from. And of course, there was another acronym to help with identifying qualitative research! ESCAPADE asks what methods, approaches and data were used. Now for qualitative filters. Filters are search strategies that identify papers using specific study designs or publication types, built from the subject terms and free-text terms used in high-quality studies. Some of them are one-liners, such as the MeSH term Qualitative Research (though it has some limitations, as this heading was only added in 2003). Many of these filters are available via the ISSG Filter Resource. Booth mentioned the health services research filter available through the PubMed Topic-Specific Queries (which I have to say I haven’t looked at often), which includes a qualitative research option. The third activity was an interesting and useful exercise too, and one worth reusing: what weaknesses can you spot in a filter, and what is one instance in which a filter could be useful? Next up was the sampling search method. I have a hard time with this because one of the mantras with searching for SRs is comprehensiveness. But this is impossible to achieve in reality for many reasons.
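To make the filter idea concrete, a bare-bones qualitative filter for PubMed might combine the MeSH heading with a few free-text method terms. This is an illustration I put together, not a validated filter; use the tested ones from the ISSG resource for real reviews:

```
"qualitative research"[mh] OR "focus groups"[mh] OR qualitative[tiab]
OR interview*[tiab] OR ethnograph*[tiab] OR themes[tiab]
```

Even a sketch like this shows the trade-off Booth highlighted: the MeSH lines are precise but miss pre-2003 and unindexed records, while the text-word lines widen sensitivity at the cost of noise.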
SRs address specific questions and have strict inclusion and exclusion criteria, so there are set boundaries; the key is to be exhaustive within these. In qualitative research, the boundaries are more fluid. Is sampling the answer, and what is it anyway? Sampling is used to reach data saturation (searching until information repeats itself and nothing new is gleaned). So instead of gathering more of the same information, only retain information that explains a concept, confirms interconnections or assists in argument formation. Where can studies be found? Don’t rely on databases alone: use expansive searches that include footnotes (with the risk of confirmation bias), citation tracking and snowballing, theses, books and other grey literature. A concept that Booth mentioned was sibling studies: different studies in the same context (I might not have got this right). Clustering is a search strategy technique used for identifying sibling studies. This is a difficult concept for me even though I use some of the techniques in isolation. I guess it is a matter of being a little more systematic (!) in my approach. Phew, it was an intense session which could have gone on all day. Well worth attending, and I encourage you to attend any sessions run by Andrew Booth; they are really good.

And did I do the acronym challenge? I did, but stopped halfway through the session because there were so many and I was losing track!