Monthly Archives: November 2015

Dealing with unknowns.

Science is all about dealing with unknowns.

There are the big unknowns: ‘Can we eradicate cancer?’, ‘Why do we forget things as we get older?’, ‘Can we grow replacement organs?’. Then there are the day-to-day niggling unknowns. These are the ones that tend to cause the most anxiety, perhaps because we never expect to completely answer the big questions and are simply looking to add to the body of knowledge.

Pretty much all of the day-to-day problems I deal with relate to ‘how’ we are going to test a particular hypothesis. Once you have data in hand, it is not uncommon for technicalities or oversights to emerge. We have to accept that the perfect study design is often unobtainable; instead we strive to control for as many external factors that may influence the result as possible. Where you couldn’t do so in the way the experiment was conducted, you have a second chance at the analysis stage. This is limited by two things: 1) knowing what all of the possible confounders are, and 2) actually having a measure or proxy for each confounder.

There are two routes to dealing with confounders: you can perform the initial analysis and then see if it changes with the addition of extra covariates, or you can include all the variables from the outset. Personally I don’t see the point of doing an analysis if you are subsequently going to discount any of the results you later find to be a product of some other factor. Of course this view may reflect my ‘omics background, where, given the large number of features tested in every experiment, spurious results are par for the course and the quicker you can discount them the better.
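To make this concrete, here is a minimal sketch (Python with numpy, entirely synthetic data, made-up variable names) of why including a confounder as a covariate matters: an apparent exposure effect can vanish once the confounder is in the model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic data: 'age' drives both the exposure and the outcome,
# so there is NO direct exposure-outcome effect to find.
age = rng.normal(50, 10, n)
exposure = 0.5 * age + rng.normal(0, 5, n)
outcome = 0.3 * age + rng.normal(0, 5, n)

def ols_coefs(y, X):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Unadjusted model: exposure looks strongly associated with the outcome...
b_unadj = ols_coefs(outcome, exposure)[1]

# ...but the association largely disappears once age is included.
b_adj = ols_coefs(outcome, np.column_stack([exposure, age]))[1]

print(f"unadjusted slope: {b_unadj:.3f}, adjusted slope: {b_adj:.3f}")
```

This also illustrates the limitation mentioned above: the adjustment is only possible because ‘age’ was actually measured.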

Recently I have been working with some data for which we are aware of many possible confounders. Some of these were obvious at the start, and we have the relevant information to include in the analysis. For some of the unknowns, we have calculated estimates from our data using a commonly accepted methodology. However, we are unsure how accurate these estimates are, as there is little empirical evidence to truly assess them, and whether they are capturing everything they should.

An alternative in high-dimensional data (that is, when you have lots of data points for each sample) is to use methods that create surrogate variables. These capture the variation in your dataset presumed to reflect the confounders we are concerned about (and those we perhaps haven’t thought of yet). I have always been cautious of such an approach, as I don’t like the idea of not understanding exactly what you are putting into your model. What’s more, there is a possibility that you are removing some of the true effects you are interested in. However, there is the opposing argument: ‘What does it matter? If it prevents false positive results then that’s the whole point.’

At present it is somewhat of an open question which way we should proceed. It is good practice to question your approach and test it until it breaks. Having tried a few ways of doing something, all of which produce slightly different numbers, how do we decide which is the correct one? Part of the problem is that we don’t know what the right answer is. We can keep trying new things, but how do we know when to stop? Unlike school, we can’t just turn the textbook upside-down and flick to the back pages to mark our effort as right or wrong. Instead we have to think outside the box to come up with additional ways to check the robustness of our result. But this is par for the course: research is all about unknowns. These are the challenges we relish, and maybe eventually we will start to convince ourselves that our result might be true!

Often the gold standard is replication, that is repeating the analysis and finding the same result in a completely independent sample. Sometimes you might have a second cohort already lined up, so this validation can be internal and give you confidence in what you are doing. Or you may face a nervous wait to collect more data or for another group to follow up your work.

Sometimes though, you just have to go with what you have got. Sharing your work with the research community is a great opportunity for feedback and may prompt a long overdue conversation about the issues at hand. Ultimately, as long as you are clear about exactly what has been done, your findings can be interpreted appropriately.

 

Questions and more questions

Within my direct team I am the only mathematician. The rest of my colleagues have come from biological backgrounds and have spent a lot of their careers in the lab generating data. While they have lots of experience in analysing these data, they have fully embraced the addition of a statistical mind to expand their skill set. One of the benefits of this is that I get to work across the group and am involved in a wide variety of projects.

The level of my involvement varies from getting stuck in and doing some of the analysis to explaining particular methodologies, making suggestions or providing a sounding board for other people’s ideas. Our office is a very open, social environment where we can discuss problems and ask questions as and when they occur.

Having the confidence to ask questions is very important. If you work in academia it is presumed that you are very intelligent and therefore know everything about everything. It can therefore be a daunting environment for a student, as you feel that any question you ask may inadvertently expose your weaknesses. However, not asking for help when you need it is a weakness in itself and will only hold you back.

The breakthrough for me came when biologists started asking me maths questions. It made me realise that we all have different skill sets and, most importantly, that we are here to learn from each other; in return, the maths student can ask the biologist biology questions! What you start to realise is that everyone has gaps in their knowledge; it may just be hidden behind a good poker face.

I really enjoy sharing my knowledge and the challenge of trying to explain a concept clearly. It also gives me confidence that I do know what I’m doing if someone goes away understanding something that boggled them previously. However, as the resident statistician, I initially felt a lot of responsibility to answer every question about statistics completely, correctly and succinctly. What’s more, I felt that I should be able to answer any question posed. But as with asking questions, you shouldn’t be ashamed to admit you don’t know something when answering them either. Many of my answers are prefixed with ‘I am not an expert in this, but if it was me I would …’. Sometimes my offering is that I know where to find the answer (using every bioinformatician’s best friend, the internet) and can then help explain what it means.

It can be reassuring when someone else acknowledges that they are not 100% sure about something, as it helps remove any unrealistic expectations of perfection. On top of that – and as I have to constantly remind myself – it wouldn’t be science, and we wouldn’t be here doing this job if we knew all the answers…

Bioinfo what?

So I thought I’d spend some time explaining a little more about the field of Bioinformatics.

If you look it up on Wikipedia you will discover that it is where statistics, programming and biology meet. You may then be wondering: when would this happen? Although maths has been relevant to biology ever since its first outing, the need for mathematicians or programmers is much more recent.

It has mainly arisen as technology has improved to produce ever-increasing amounts of data, and more complex data at that. Sequencing of the first human genome started in 1990 and finished in 2003; these days, the same data can be generated and analysed in around a day. What’s more, we can now generate data on not just the genome, but the epigenome, transcriptome, metabolome and proteome, collectively known as the ‘omics. What they all have in common is lots of data points (generally hundreds of thousands), each representing different parts of your DNA or the resulting chemical molecules.

It would be completely implausible to try to analyse each data point one by one with pen and paper. Therefore, some knowledge of programming is needed to manage the data and implement analyses efficiently. The role of statistics is to ensure that the data are analysed appropriately and that results are not chance findings, while the biologist is needed to run the experiment and do the interpretation. This is a simplification of how these skills come together, and there is huge variety in the types of projects requiring a bioinformatician.
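The ‘chance findings’ point is easy to demonstrate. A short simulation (Python, p-values drawn directly from the null, nothing domain-specific) shows why testing hundreds of thousands of features demands a multiple-testing correction; Bonferroni is used here purely as the simplest example.

```python
import numpy as np

rng = np.random.default_rng(2)
n_tests = 100_000

# p-values for 100,000 features where the null hypothesis is true for ALL
# of them; under the null, p-values are uniform on [0, 1].
pvals = rng.uniform(size=n_tests)

# At the usual 0.05 threshold we still expect ~5,000 'hits', purely by chance...
naive_hits = int(np.sum(pvals < 0.05))

# ...whereas a Bonferroni-corrected threshold (0.05 / number of tests)
# controls the chance of even one false positive across the whole experiment.
bonferroni_hits = int(np.sum(pvals < 0.05 / n_tests))

print(f"naive hits: {naive_hits}, Bonferroni hits: {bonferroni_hits}")
```

In practice less conservative procedures (such as false discovery rate control) are often preferred, but the principle is the same.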

Bioinformatics is now a field in its own right. I am not aware of any undergraduate courses in the UK, although many bioscience departments are starting to offer modules in it. Often the first chance you have to study it is at masters level. Courses will accept biology, mathematics/statistics or computer science graduates, but my experience is that the vast majority of the intake have studied biology or related disciplines. I mainly put this down to no-one telling mathematics or computer science undergraduates that this is an option, as these courses are predominantly based in bioscience or medical schools. It may also be that biologists realise that to remain competitive in the jobs market you need some of these skills (particularly if you want a career in genetics). I have seen non-biologists initially really struggle on these courses, as it is a steep learning curve from GCSE or A level, with lots of new vocabulary, concepts and mechanisms to get your head around. It can be demoralising and seem like a daunting task, but when it comes to the analysis side you will find the tables turned, with everyone looking at you wishing they could do what you can. So it is worth being patient and sticking with it.

So if this appeals, you are probably wondering which of these subjects you should study at undergraduate level. Well, as all of them can lead to the same outcome, it has to be a choice based on where you think your strengths lie and what you will enjoy, and remain motivated to study, for three or more years. What I would say is: look for opportunities to broaden your skill set across the three domains. Can you do a computer science module, or learn some programming as part of your final-year project in your maths degree? Does your department offer a bioinformatics or mathematical modelling module in your biology degree? Can you develop some software to help a field biologist collect and store their data? My break came when I spent 10 weeks doing a computational biology project in Edinburgh during the summer of my maths degree. This was my first chance to learn to program and to learn about the data biologists were working with.

The reality is that most bioinformaticians have a particular strength and positions call for different combinations of skills. You may not need to be the whizziest programmer but have a good analytical mind to decide which statistical approaches should be used. You may not know a huge amount about what the data is but you do know how to store data in an efficient and secure manner, or how to set up and manage high-powered computing systems.

Whichever way you approach it, you will have many opportunities to work on different projects with different teams all over the world. Almost all industries are increasingly reliant on data and informaticians to stay ahead, so if you decide that biology isn’t for you, there are many other opportunities out there for these skill sets. It’s worth thinking about!

Taking maths beyond the classroom.

In this blog post I want to share my thoughts on the differences between the maths taught in the classroom and that used in professional life. What often appeals about maths is the routine of applying a clear set of instructions. Regardless of the context, this remains a large part of any mathematician’s career. The big difference once you have left the classroom is what comes before and after.

Generally in the classroom you are taught a particular statistical test: its assumptions, how to apply it and how to interpret the output. Then when it comes to the end-of-year exam, the question specifically asks you to perform said test, often on data generated (by a computer) to give a particular answer.

Now, once you are employed as a statistician (or in any role where statistics forms part of the job description), the question no longer guides you in exactly what to do. More likely you will be given a dataset and some premise of what you are required to extract from it. The level of detail in your task is highly variable and likely depends on the statistical ability of the person asking. The less they know, the more vague or far-fetched the question; a fellow statistician is likely to set out a clear hypothesis, having already worked through much of the thought process you would have gone through.

Before you can actually start any number crunching, you need to deduce what the hypothesis is. Then you need to decide whether it is actually testable with the data you have. If you think the data can’t answer the question at hand, you may have to adjust the question and present your superior with what you can establish, sometimes leading to a protracted negotiation until you are both happy. Once you have finalised the hypothesis, you can then think about which statistical test to use and how.
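As a toy example of that last step (Python, with invented numbers): suppose the vague question ‘do treated plants grow taller?’ has been refined into the testable hypothesis ‘mean height differs between treated and control groups’. A permutation test is one simple choice, picked here because it makes few assumptions about the data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented measurements: 30 heights per group (treated grow taller on average).
control = rng.normal(10.0, 2.0, 30)
treated = rng.normal(13.0, 2.0, 30)

observed = treated.mean() - control.mean()

# Permutation test: under the null hypothesis the group labels are
# arbitrary, so shuffling them builds the null distribution of the
# difference in means.
pooled = np.concatenate([control, treated])
perm_diffs = []
for _ in range(5000):
    rng.shuffle(pooled)
    perm_diffs.append(pooled[30:].mean() - pooled[:30].mean())

# Two-sided p-value: how often a shuffled difference is at least as extreme.
p_value = np.mean(np.abs(perm_diffs) >= abs(observed))
print(f"observed difference: {observed:.2f}, p-value: {p_value:.4f}")
```

The statistics here are the easy part; the work described above is getting from the original woolly question to the line that defines `observed`.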

What I think is missing in the classroom is this thought process of deciding what procedure to use and when. In my experience, I was always explicitly told what I was going to need to do. As a result, I remember stressful interactions at university when friends on other courses would ask for advice on what statistics to use in their dissertations, and I would grapple through what I knew to try to advise them. Since my degree, I have had to learn how to answer these questions for my own work, but also to help colleagues with their projects. It can be challenging to convert their biological question into the underlying hypothesis and the mathematical concept it may be represented by.

Experience is key, but communication is what is going to get you through. Being able to decompose their question into the relevant parts (if they are asking you for help, they have probably overcomplicated it) allows you to start thinking in terms more familiar to you. Keep asking them questions with the aim of getting them to refine their question into a testable hypothesis. While there may be moments of utter confusion or complete miscommunication, these interactions are good for both parties and can lead to novel ideas neither party would have come to on their own.

The other main difference I want to discuss is that the way maths is taught can be quite limited: not only in what has been decided should be on the curriculum (which is true of all subjects), but also in the structured style of exams. This means you can only do what you are directly asked to, and once you get to the required answer, that’s the end. There is no opportunity to show off additional skills or explore further, the way you can with an English or History assignment. Within my role, I am given a lot of freedom to explore datasets beyond the primary purpose. I can generate and test additional hypotheses and try out more advanced or new routines. This creativity really helps improve my skill set and gives me confidence in adopting new areas of statistics I have not encountered before.

The reality is, I have probably learnt more about how maths really works outside of my initial education and training. You can never discount the value of experience. I would advocate, therefore, that more maths assignments and assessments take on a flexible framework. We should give students a chance to follow a project through from design to completion, rewarding the thought process as much as the ability to compute the answer. The skill employers most value from maths is problem solving, and how can we really teach that if, when we set the question, we tell them how we want it answered too?