University of Edinburgh
I’ve just read Isla Kuhn’s posts about EAHIL2015 and I have to agree with her – it was a great event and I’m glad I attended. Like Isla, I was attending my first EAHIL conference, and perhaps it won’t be my last. I attended many great workshops. Some people have already blogged about the conference and I feel a little tardy (I started writing this post a month ago – oops)! However, I hope my posts will add to what has been written rather than offer more of the same. It is worth reading other reports anyway, because you can’t be in two places at once – there were so many great workshops on offer that it was hard to choose which to attend.
The conference dinner at the National Museum of Scotland was a treat (the waiters coordinated their serving – almost like a dance) and the ceilidh itself was massive fun. I met some great people – some I had communicated with via email and Twitter only, so it was nice meeting them face to face at last. And wouldn’t you know it, I found out who the other Australian attendee was – from Eastern Health, just a few suburbs away from the Royal Melbourne Hospital!
The plenaries were interesting, and sitting in the main lecture theatre made me feel like an undergrad again. I did my major in Sociology, so the sense of deja vu was very strong (the plenary presentations also had this effect on me). Professor Hazel Hall talked about the DREaM project, which aimed to create a network of researchers in the library and information field and, along the way, encourage the application of research in practice. The project finished in 2012, but the resources and the networks are still available. Dr Joanna R Eckerdal talked about her dissertation, which examined how health literacy informed contraceptive method choice in young women, and Dr Liz Grant spoke about the Global Health Academy Project, which, amongst other goals, addresses sustainability and equity issues. You can read about member activities on the Academy blog.
EAHIL 2016 will be hosted in Seville, Spain and if you are considering a holiday in Europe next year (or fitting a holiday around an overseas conference like I do), I encourage you to think about attending.
Well, it has been a while without posting – apologies everyone! So why did I choose to take this workshop? Well, I have a basic grasp of what content/thematic analysis is – I think I do, anyway – but I thought I should learn more about it instead of relying on presumption. I also had the recent experience of collating feedback surveys from a course we run at work (I don’t usually do this job) and realised that when it came to the free comment section, I didn’t know what to do with the information. With some basic knowledge of this type of analysis, perhaps I could extract meaningful information from feedback. So what is thematic analysis all about? It is a qualitative research method that identifies recurring patterns, ideas or themes in recorded data.

Professor Ina Fourie from the University of Pretoria led the session and, before the workshop, she asked people to write to her about why they wanted to take it. I responded, along with some others, and these responses were bundled together for us to analyse. I found this a really interesting exercise to do, and also hard. You have to look at your data more than a few times, at different times and perhaps in different moods. You also have to ask yourself about your assumptions and biases and take these into account – no one is without bias. Each of us then reported our findings to the class, and this exercise demonstrated that different people see different things in the same data.

Fourie said that she aimed to give us an idea of what is involved in thematic analysis, as it is quite an involved process and requires many goes before you get it right. And there are no right or wrong answers to data – the act of thematic analysis is part of the research act itself. The process of extracting recurring ideas or themes is called coding, and it is a cyclical process in which the codes are refined and ideas reorganised along the way.
Some codes can be built before analysis, based on the interview structure and the research question. When it comes to reporting, the way the codes and themes were developed has to be explicitly described, along with the reasoning behind them. So, could this method be used to extract information from free commentary in feedback surveys? I am a little uncertain about this. It certainly was an interesting process to go through. Perhaps I need a few more goes, and more knowledge of the theory behind it, before I can see whether it would work.
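To make the idea of coding a little more concrete, here is a minimal sketch in Python of a first coding pass. The responses and the provisional codebook are entirely invented for illustration, and real thematic analysis involves far more judgement and iteration than simple keyword matching:

```python
from collections import Counter

# Hypothetical free-text survey responses (invented for illustration)
responses = [
    "I want to learn how to analyse free comments from our feedback surveys",
    "I collate feedback at work and struggle with the free comment section",
    "Curious about qualitative methods; I rely on presumption too much",
]

# A provisional codebook: code name -> keywords that signal it.
# In real thematic analysis, codes are refined over several passes.
codebook = {
    "feedback_analysis": ["feedback", "comments", "comment"],
    "skills_gap": ["learn", "struggle", "presumption"],
}

def code_responses(responses, codebook):
    """Tally how many responses each provisional code appears in."""
    tally = Counter()
    for text in responses:
        words = text.lower().split()
        for code, keywords in codebook.items():
            if any(kw in words for kw in keywords):
                tally[code] += 1
    return tally

print(code_responses(responses, codebook))
```

The cyclical part of coding would happen around this: after each pass you would reorganise the codebook and re-run the tally, noting why each change was made.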
This session, given by Dr Ian Handel from the Roslin Institute in the UK, was great fun. Handel asked us what we wanted to cover in the session and wrote our suggestions on a whiteboard, crossing them off as we went. We played some fun games with dice and chickpeas which, funnily enough, didn’t work out the way he wanted first off, so we started again – resulting in ‘proper’ results. Tee hee! It was a fast-paced session, which might have made some game players count wrongly (me amongst them).

So, did I learn anything new? I have some rudimentary knowledge of statistics but thought I should attend this session because I don’t use the knowledge often and tend to forget it. Statistics can be dry and, for people without a firm mathematical footing, confusing. Handel was very enthusiastic, which made the session fun, and the really good thing was that he started at the beginning – with mean, median and mode. Once you get these down, the rest becomes a little easier to follow. The standard deviation measures how spread out your sample is around the mean (related to, but not the same as, the range).

The null hypothesis (something I didn’t get in my rushed introduction to statistics) underpins Type I and Type II errors and is the basis for thinking about p values. The null hypothesis is that there is nothing going on – that there is no effect. It is the opposite of your hypothesis. I found a good video recently that describes Type I and Type II errors. The significance level (often 5%) is set before the study is done, and the p value calculated from the data is then compared against it. And here is a handy mnemonic – if the p is low, the null must go! Big samples better pin down the differences between groups, but the groups have to be similar – you have to compare like with like. Big samples mean smaller confidence intervals (interval estimates of population parameters), while smaller samples have larger confidence intervals. We didn’t go into great detail about parameters, so I might have to go over these again later on.
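As a quick refresher on the descriptive statistics covered in the session, here is a sketch using Python’s standard library. The sample numbers are made up for illustration, and the confidence interval uses a rough normal approximation rather than anything Handel specifically showed us:

```python
import math
import statistics as stats

# A made-up sample of ten measurements, purely for illustration
sample = [4, 8, 6, 5, 3, 7, 8, 9, 5, 8]

mean = stats.mean(sample)      # central tendency: 6.3
median = stats.median(sample)  # middle value of the sorted sample: 6.5
mode = stats.mode(sample)      # most frequent value: 8
sd = stats.stdev(sample)       # sample standard deviation: spread around the mean

# A rough 95% confidence interval for the mean (normal approximation).
# The standard error shrinks with sqrt(n), so bigger samples give
# narrower intervals around the estimate.
se = sd / math.sqrt(len(sample))
ci = (mean - 1.96 * se, mean + 1.96 * se)

# Quadrupling the sample size (same spread) halves the standard error:
se_big = sd / math.sqrt(4 * len(sample))

print(mean, median, mode)
print(ci)
```

This is the “big samples equal smaller confidence intervals” point in miniature: the interval width depends on sd divided by the square root of the sample size.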
Handel discussed how visual depictions of statistics can be misleading, using a bar chart created by a US government department with unevenly spaced values (something like 100 | 1000 | 10000 | 15000) on the Y axis. This made the items measured look equal, but when you reconfigured the Y axis to a more reasonable, evenly spaced scale (100 | 500 | 1000 | 1500), the apparent equality became serious inequality. He recommends three books: The Tiger That Isn’t: Seeing Through a World of Numbers; The Visual Display of Quantitative Information; and Dicing with Death: Chance, Risk and Health. He also recommends two websites – one about spurious correlations (e.g. the number of drownings by falling into pools correlates with the number of films Nicolas Cage starred in), and BBC Radio’s statistics programme, More or Less.
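A rough numeric sketch of why that kind of uneven (log-style) axis spacing hides differences – the tick values are my reconstruction of the example, so treat this purely as an illustration:

```python
import math

# Two hypothetical bar values that are genuinely unequal
a, b = 10_000, 15_000

linear_ratio = b / a  # 1.5: b is 50% bigger than a

# With log-style tick spacing (100, 1000, 10000, ...), a bar's apparent
# height is roughly proportional to log10 of its value:
apparent_ratio = math.log10(b) / math.log10(a)

print(linear_ratio, round(apparent_ratio, 2))  # the bars look almost equal
```

A 50% real difference shrinks to about a 4% apparent difference in bar height, which is why the evenly spaced axis tells a much more honest story.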