Category Archives: Critical Appraisal

TEBM15 – what I learnt

It has been a while since I returned from the UK and I have been meaning to write about the experience and what I learnt for a while now. But every time I settled down to write, something cropped up that I had to deal with.

There was only one other librarian there. Most attendees were university lecturers, clinical educators, and hospital consultants or researchers who had teaching as part of their work duties. They came from all over the world, and I wasn’t the only Australian there: there was someone from Monash Health and someone from the University of Melbourne. In the group I was in (small group teaching is a common method used in medical education), there was a university lecturer of undergraduates from South Korea, a university research fellow from Norway, a clinical educator from Plymouth Primary Care, two from Nuffield Primary Health Care (which CEBM is a part of), and two from NIHR Applied Health Research and Care. The picture below is all of us after the course ended.

[Photo: TEBM Group 3]

My last post expressed a bit of my nervousness about the timetable. It looked very intensive! And it was intensive, but very stimulating and inspiring. There was some prep to do before the course, and one of the tasks was to prepare a teaching session that you would normally deliver. On the first day, during small group work in the afternoon, people delivered their presentations. I did mine without PowerPoint (the only person to do so!) and it was mostly off the top of my head after reviewing my notes (I had run out of time to prepare and send one – bad me!). The comments from the facilitators were that it needed a bit more structure, and could I do it again? Sure. I had only brought an iPad, which didn’t have the Microsoft Office applications. So before going out to dinner at Jamie Oliver’s the following evening, I downloaded PowerPoint, then after dinner worked on my presentation, emailed it to the facilitator at 10.30pm and presented again the next day. The mini presentations were commented on by the facilitators and the group, and it was a great learning experience.

The final group work on the last day was to prepare and deliver a stats presentation, teaming up with another participant. Now this was challenging! I teamed up with a clinician and said ‘let’s do likelihood ratios’. There was a statistician in the room who helped people with various problems. The clinician and I used the formula in the workbook provided but we couldn’t figure it out. We asked for advice … and it turned out the formula in the workbook was incorrect! What happened next was a mini class for the group about how to calculate likelihood ratios.
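For my own notes (and for anyone else who wants to check their workbook), the standard formulas for the positive and negative likelihood ratios of a diagnostic test are:

$$
LR^{+} = \frac{\text{sensitivity}}{1 - \text{specificity}}, \qquad
LR^{-} = \frac{1 - \text{sensitivity}}{\text{specificity}}
$$

So, to use made-up numbers, a test with 90% sensitivity and 80% specificity has LR+ = 0.9 / 0.2 = 4.5 and LR− = 0.1 / 0.8 = 0.125.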

The plenary sessions were delivered by a mix of professionals – clinical educators giving their tips and tricks for specific areas (eg RCTs), curriculum designers and assessors, a high school science teacher on lesson planning, and a medical librarian. I thought I wouldn’t learn anything from the searching session but I was wrong! Her tips were: make sure access to electronic resources is available beforehand, have a plan for what to do if access isn’t available, and make searching sessions relevant by using clinically relevant examples (if you have a defined group) or health news reported in the media (if you don’t). I liked the random allocation of the resource to use. Many tips were repeated: create a safe learning place (what does that actually mean?), use humour, use stories (many presenters used personal stories to illustrate a clinical example), and use a mix of media – voice, PowerPoint, videos, images. During each plenary, everyone – including the presenters and facilitators – attended, and in some sessions there was lively debate amongst them.

Rod Jackson’s session about the GATE Frame was interesting, but what he brought to the session was infectious enthusiasm, which really made an impact. The high school teacher talked about session planning, which was really good and something I really wanted to know about. She gave out a handout called Bloom’s Taxonomy – Teacher Planning Kit, which is a great tool for working out what sort of questions and words to use in education sessions. And I was really impressed when she told me at one lunch break that she runs a journal club for her final year students – cool!! Both the diagnostics and the RCT sessions underlined keeping things simple and not throwing too many concepts at people, and the teaching stats plenary included a fun activity (head measurement) to demonstrate an inter-rater reliability concept. And I was chuffed when the SR presenter used a concept from my presentation in his!

On the final day we had presentations from all the groups, and a wrap-up from the Chair and Director of CEBM, Carl Heneghan (with some genial threats that he was going to follow up on what we had done in three months’ time … ).

All the PowerPoint slides are available on the TEBM15 website and accounts of this year’s course are mentioned on the CEBM blog.

Search filters with Julie Glanville

What are search filters and why do we use them? How can we use them effectively? These questions convey the main content of the second workshop given by Julie Glanville from the York Health Economics Consortium.

So, what are search filters then? Search filters are strategies to find a particular kind of research, type of population (age groups, inpatients), geographic area, and so on. In a 2014 paper (Beale S, Duffy S, Glanville J et al. Choosing and using methodological search filters: searchers’ views. Health Info Libr J. 2014 Jun;31(2):133-47.), users were surveyed about their use of filters. The main reasons were: to refine systematic review searches, to assist in answering short questions (eg PubMed Clinical Queries) or to ascertain the size of the literature when doing a scoping search. Why did users choose the filters they did? The most common answers were performance during validation and sensitivity/specificity information. What about appraisal? Can you critically appraise a search filter? There is a tool for that, and it is available at the ISSG Search Filter website. Julie talked about the main issues that you want to know about a filter (these are considered in the tool): what is it designed to find, and how were the terms identified, prepared, tested and validated?

When appraising a filter, there are a number of issues within each question to consider. For the first question, two of the obvious ones are: does the filter use a database I can use too? And does the filter have the same idea about the topic that I do? The second question involves questions like how were the MeSH terms selected, and were the terms selected in an organic or highly structured way? How the filter was prepared relates to the questions above: how were the terms combined for testing?

Here we come to a tricky bit – sensitivity and specificity. And precision – where does that fit in? I’ll be honest here, these concepts still trip me up and I have to take time thinking about them before I can take action. I’m working from the handouts Julie prepared and from memory. Sensitivity: how many relevant records are found out of all the relevant records available? Specificity: how many irrelevant records are not retrieved out of all the irrelevant records? Precision: the number of relevant records retrieved out of the total retrieved by the search strategy. The more common a research design is, the more likely a retrieved record using that design will be relevant. So when you want to use a filter, perhaps you should look at the precision reached during testing. Or if you want to be as sensitive as possible, perhaps looking for sensitivity percentages is the way to go. What do you think/do?
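To keep these straight in my head, here are the definitions written out as formulas (my own summary, not Julie’s wording):

$$
\text{Sensitivity (recall)} = \frac{\text{relevant records retrieved}}{\text{all relevant records available}}, \qquad
\text{Precision} = \frac{\text{relevant records retrieved}}{\text{all records retrieved}}
$$

$$
\text{Specificity} = \frac{\text{irrelevant records not retrieved}}{\text{all irrelevant records available}}
$$

For example (made-up numbers): if a filter retrieves 200 records, 50 of which are relevant, from a set containing 60 relevant records in total, its sensitivity is 50/60 ≈ 83% and its precision is 50/200 = 25%.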

Ok – now for validation! First things first – validation is not testing (but it should be part of the development phase). If you test the filter on the same reference set used in the initial testing, that is not validation. Validation requires the filter to be tested in a different reference set, or in a database (the real world). This determines whether the filter maintains the performance levels shown in initial testing. Julie demonstrated with a test filter (Devillé WLJM, Bezemer PD, Bouter LM. Publications on diagnostic test evaluation in family medicine journals: an optimal search strategy. J Clin Epidemiol. 2000 Jan;53(1):65-69.) that validation performance fell from 89.3% to 61%. The Haynes filters also dipped, from 73.3% to 45%. That is quite a lot! Julie surmised that it could be something to do with the reference set, or that neither filter performs consistently.
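As I understand it (my interpretation, not a quote from the workshop), the percentages here are sensitivity measured against a reference set:

$$
\text{Sensitivity in validation} = \frac{\text{relevant records in the new reference set retrieved by the filter}}{\text{all relevant records in the new reference set}}
$$

Read that way, a drop from 89.3% to 61% means a filter that found roughly 89 of every 100 known relevant records in its original test set found only about 61 of every 100 in the new one.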

Sometimes a filter isn’t quite what you are after – nearly there but not quite. So you edit it. Warning! This isn’t the same filter (which was developed through research). It is a filter you have made based on another. If you do this (and I admit that I have) it is very important to record the change and why you made it, plus acknowledge the original researchers. Also, be prepared for unexpected consequences from your modification.

The workshops were excellent – everyone learnt new things and I came away inspired.  Julie is a great trainer and presents difficult concepts in a very accessible way. If you ever have the chance to attend a workshop or training given by Julie, grab it! It is well worth it.

Assessing qualitative research for systematic reviews

Do you search for studies for potential inclusion in qualitative systematic reviews? If you do, you might be interested in this quality assessment tool developed by Dr Christopher Carroll and Dr Andrew Booth. QuaRT aims to assist reviewers with decisions about inclusion and exclusion of qualitative studies by asking questions in four domains: question and study design, selection of participants, method of data collection and method of data analysis. This is still a new tool and, as it was developed during the writing of a health technology assessment, it has not yet gone through formal methodological development, testing and evaluation. If you plan to use this tool, Drs Carroll and Booth would be interested to hear from you. http://quart.pbworks.com