Are beauty products placebos?

The first episode of season 3 of ABC’s The Checkout did a fun investigation into the claims of the beauty industry. Reduces the appearance of fine lines by 90%! 95% smoother skin! Oh really? How often have we heard these sorts of claims? Well, surprise surprise, the ‘clinical trials’ this industry carries out are not based in science. The Checkout team approached some beauty industry companies and asked for the results of these trials. All but one said it was a commercial secret (so of course, you wouldn’t expect to find them publishing in dermatology journals). One company did reveal some information. The company in question (like most of the beauty industry) hires companies such as Cutest Systems to run ‘trials’ and tailor the results to the beauty company’s marketing goals.

I don’t believe in any of the claims and laugh at the funny ‘scientific’ diagrams and product names. And yet, I use beauty products. Why? I think it is all down to the placebo effect. The placebo effect is a very powerful psychological reaction that can make people feel better when all they have taken is a fake treatment or an inert substance masquerading as an active one. Marketing has a lot to do with how beauty products are perceived, and there are social pressures around beauty ideals too. So, if I use a cream that I know is no better than sorbolene but that makes me feel good and gives me the belief that I am looking after myself, do you think (to paraphrase) it’s worth it?

The Federal Health Budget

I don’t want to get into politics too much, but the latest media twitterings about the federal health budget and trimming Medicare (and perhaps the Pharmaceutical Benefits Scheme as well) have me a little annoyed. The thing is that the mechanisms for trimming are already in place. What I am referring to is the federal health technology assessment process and the role of the Medical Services Advisory Committee (MSAC), the Pharmaceutical Benefits Advisory Committee (PBAC) and the Prostheses List Advisory Committee (PLAC) (replacing the Prostheses and Devices Committee (PDC)). At one of the Health Technology Assessment International (HTAi) conferences I attended, I learnt that the original remit of MSAC was to review all technologies funded by Medicare, but MSAC’s funding was only able to stretch to reviewing new technologies for approval. I think this is still the case. This means there are probably a large number of ‘legacy’ technologies (the word I heard used to describe items in Medicare before the advent of MSAC) still being reimbursed by the federal government that are ineffective and costly. So, why doesn’t the federal government increase funding to MSAC, PBAC and PLAC (and modify their remits) so that all technologies reimbursed by government can be reviewed, and ineffective, costly and perhaps unsafe technologies removed? It would be an increase in spending in the short term but, in the long term, could save money.

I am not an expert in this area (i.e. definitely not a health economist!) and am just reporting what I know and have learnt. If readers have any comments, I would love to read them!

Search filters with Julie Glanville

What are search filters and why do we use them? How can we use them effectively? These questions convey the main content of the second workshop given by Julie Glanville from the York Health Economics Consortium. So, what are search filters then? Search filters are search strategies designed to retrieve a particular kind of research, type of population (age groups, inpatients), geographic area and so on. In a 2014 paper (Beale S, Duffy S, Glanville J, et al. Choosing and using methodological search filters: searchers’ views. Health Info Libr J. 2014 Jun;31(2):133-47.), users were surveyed about their use of filters. The main reasons given were: to refine systematic review searches, to assist in answering short questions (PubMed Clinical Queries, for example) or to ascertain the size of the literature when doing a scoping search. Why did users choose the filters they did? The most common answers were performance during validation and sensitivity/specificity information. What about appraisal? Can you critically appraise a search filter? There is a tool for that, and it is available at the ISSG Search Filter website. Julie talked about the main issues that you want to know about a filter (these are considered in the tool): what is it designed to find, and how were the terms identified, prepared, tested and validated?
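To make this less abstract, here is roughly what a published filter looks like. This fragment is along the lines of the well-known Cochrane highly sensitive search strategy for finding randomised controlled trials in PubMed, but it is my loose recollection of it, so check the ISSG website or the Cochrane Handbook for the current validated wording before relying on it:

(randomized controlled trial[pt] OR controlled clinical trial[pt] OR randomized[tiab] OR placebo[tiab] OR randomly[tiab] OR trial[tiab]) NOT (animals[mh] NOT humans[mh])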

When appraising a filter, there are a number of issues within each question to consider. For the first question, two of the obvious ones are: was the filter designed for a database I can use too? And does the filter have the same idea about the topic that I do? The second question involves things like how the MeSH terms were selected, and whether the terms were selected in an organic or a highly structured way. How the filter was prepared relates to the questions above: how were the terms combined for testing?

Here we come to a tricky bit – sensitivity and specificity. And precision – where does that fit in? I’ll be honest here: these concepts still trip me up, and I have to take time thinking about them before I can take action. (I’m working from the handouts Julie prepared and from memory.) Sensitivity: how many relevant records are found out of all the relevant records available? Specificity: how many irrelevant records are not retrieved? Precision: how many of the records retrieved by the search strategy are relevant? The more common a research design is in the database, the more likely it is that a record retrieved by a filter for that design will be relevant. So when you want to use a filter to keep results manageable, perhaps you should look at the precision reached during testing. Or if you want to be as sensitive as possible, perhaps looking for sensitivity percentages is the way to go. What do you think/do?
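Because I always have to think these definitions through, here is a minimal sketch of the arithmetic in Python. All of the numbers are made up purely for illustration:

# A hypothetical reference set: 100 relevant and 900 irrelevant records.
# Suppose a filter retrieves 90 of the relevant records and 150 of the irrelevant ones.

def sensitivity(relevant_retrieved, relevant_total):
    # Of all the relevant records available, how many did the filter find?
    return relevant_retrieved / relevant_total

def specificity(irrelevant_rejected, irrelevant_total):
    # Of all the irrelevant records, how many did the filter correctly NOT retrieve?
    return irrelevant_rejected / irrelevant_total

def precision(relevant_retrieved, total_retrieved):
    # Of everything the filter retrieved, how much was actually relevant?
    return relevant_retrieved / total_retrieved

print(sensitivity(90, 100))   # 0.9   -> 90% of relevant records found
print(specificity(750, 900))  # ~0.83 -> 750 of the 900 irrelevant records avoided (900 - 150)
print(precision(90, 240))     # 0.375 -> 90 relevant out of 240 retrieved in total (90 + 150)

The trade-off shows up straight away: this hypothetical filter finds most of what matters, but nearly two thirds of what it retrieves is noise.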

Ok – now for validation! First things first: validation is not testing (but it should be part of the development phase). If you test the filter on the same reference set used in the initial testing, that is not validation. Validation requires the filter to be tested on a different reference set, or in a database (the real world). This shows whether the filter maintains the performance levels seen in initial testing. Julie demonstrated with a test filter (Devillé WL, Bezemer PD, Bouter LM. Publications on diagnostic test evaluation in family medicine journals: an optimal search strategy. J Clin Epidemiol. 2000 Jan;53(1):65-9.) that performance fell from 89.3% in testing to 61% in validation. Haynes filters also dipped, from 73.3% to 45%. That is quite a lot! Julie surmised that it could be something to do with the reference set, or that neither filter performs consistently.
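In code terms, the testing/validation distinction might look something like this sketch. The toy filter and both reference sets are invented for illustration:

# A 'filter' here is just a predicate over a record's text.
def toy_rct_filter(record):
    # Toy stand-in for a real validated filter: match a couple of design terms.
    text = record.lower()
    return "randomized" in text or "randomised" in text

def sensitivity_on(reference_set, search_filter):
    # reference_set is a list of (record_text, is_relevant) pairs.
    relevant = [text for text, is_relevant in reference_set if is_relevant]
    found = [text for text in relevant if search_filter(text)]
    return len(found) / len(relevant)

# Testing: the reference set used while developing the filter.
development_set = [
    ("A randomized trial of drug X", True),
    ("A cohort study of exposure Y", False),
    ("Randomised comparison of Z versus placebo", True),
]

# Validation: a different reference set the filter has never seen.
validation_set = [
    ("Pragmatic trial of intervention W", True),
    ("Case report of condition V", False),
]

print(sensitivity_on(development_set, toy_rct_filter))  # 1.0 on the development set
print(sensitivity_on(validation_set, toy_rct_filter))   # 0.0 here - performance can collapse on new data

The drop here is exaggerated, of course, but it makes the point: a filter tuned to one reference set can look far better there than it ever will out in the wild.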

Sometimes a filter isn’t quite what you are after – nearly there, but not quite. So you edit it. Warning! This is no longer the same filter (which was developed through research); it is a filter you have made, based on another. If you do this (and I admit that I have), it is very important to record what you changed and why, and to acknowledge the original researchers. Also, be prepared for unexpected consequences from your modification.

The workshops were excellent – everyone learnt new things and I came away inspired. Julie is a great trainer and presents difficult concepts in a very accessible way. If you ever have the chance to attend a workshop or training session given by Julie, grab it! It is well worth it.