Category Archives: Bioinformatics

You may never have heard of a bioinformatician, but there is a big demand for them.

Today, while scanning Twitter, I came across two posts relating to the demand for data science/analytical/programming jobs in both academia and industry.

The first tweet was from the Royal Society and Royal Statistical Society promoting a report on the need for STEM (science, technology, engineering and maths) skills in the workforce. It is the product of a conference which brought together academics, the Government’s Chief Scientific Adviser and senior representatives from BT, Amazon and Jaguar Land Rover, all united by their need for computer- and numeracy-literate graduates. They estimate that up to 58,000 data science jobs are created each year, and that a large number of these positions go unfilled because of a lack of suitable candidates. In industry there is demand to model data to make predictions and decisions about what trends to follow, and demand to visualize this data in a way that allows those without such strong numerical skills to make sense of it. Employers want people who can communicate effectively what they are doing and think creatively about what further information they can get out of the data to improve the commercial side of the business. It is a worthwhile read for anyone wondering where their mathematics or computer science training might take them.

The second was a tweet from Barefoot Computing stating that in the UK we are losing £63bn from GDP because we don’t have the appropriate technical or digital skills. I don’t know where the statistic comes from, but Barefoot are using it to encourage confidence and enthusiasm in the teaching of the primary school computer science curriculum, which is their underlying ethos.

Both of these reiterated to me the demand there is, and will be, in a wide variety of disciplines for individuals with strong mathematical or computer science skill sets. So if you are considering your career options, or know someone who is, a mathematics or computer science degree is well worth pursuing, as this demand will just keep on growing.


Bringing together statistics, genetics and genealogy

In this post I want to highlight a genetic study published this week in Nature Communications, which uses genetic data to characterize the current population of the US and, with the help of databases of family history, to understand how it came to be.

Their starting point was a very large genetic data set of 774,516 people currently residing in the US, the majority of whom were also born there, with measurements at 709,358 different genetic positions.

They compared the genetic profiles of all pairs of individuals to identify regions of the genome (of a certain size) shared by both individuals, consistent with those two individuals having a common ancestor. It is important to note that this is very unlikely to be the case for two randomly selected, or even two distantly related, individuals. This study was therefore only possible because they had accumulated such a large genetic data set, meaning they had enough pairs of individuals sharing such a genomic region to make any inferences. Based on this information they produced a plot of US states in which the distance between points represents the similarity in common ancestry between individuals born in those states, and which closely resembles a geographical map of the US. What this means is that, in general, the closer together two individuals live, the closer their ancestry is likely to be. This isn’t hard to believe, and has been shown before: similar studies in European populations have produced comparable figures in the past.

The aim of the study was to divide the sample up into groups, referred to as clusters, of individuals whose genetic data implied common ancestry and which represented the substructure of the US population. What is perhaps novel to this study is the inclusion of information from participants about when and where their relatives were born, used to interpret the origins and migratory patterns of each cluster. All of this is then discussed in the context of known immigration and migration patterns in recent times (roughly the last 500 years).

A few things struck me about this article. Firstly, the data came from a genetic genealogy service, AncestryDNA, who use a saliva sample to genetically profile customers and generate statistics on their ancestry. Their analytical sample of 774,516 individuals of US origin who provided consent for their data to be included in genetics research demonstrates just how interested the general population is in the information their genome holds. What’s more, these individuals are also keen for it to be used to improve our understanding of how genetics influences health and disease.

Secondly, the authors used network analysis to identify their clusters of individuals with common ancestry. The article is littered with mathematical terminology, “principal components”, “weight function”, “hierarchical clustering”, “spectral dimensionality reduction technique”, demonstrating not only the utility of statistics in genetics but also its wider application in supplementing our knowledge of modern history.
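
To give a flavour of a couple of those terms, here is a minimal R sketch on simulated genotype data (entirely made up, and nothing like the paper’s actual pipeline): principal components summarise the major axes of genetic variation, and hierarchical clustering groups individuals by genetic similarity.

```r
# Toy sketch, not the paper's method: simulated genotypes for 100 individuals
# at 500 variants, each coded as 0, 1 or 2 copies of an allele.
set.seed(42)
geno <- matrix(rbinom(100 * 500, size = 2, prob = 0.3), nrow = 100)

# Principal components summarise the major axes of genetic variation
pcs <- prcomp(geno, scale. = TRUE)
plot(pcs$x[, 1], pcs$x[, 2], xlab = "PC1", ylab = "PC2")

# Hierarchical clustering of pairwise distances groups similar individuals
d <- dist(geno)
clusters <- cutree(hclust(d), k = 5)
table(clusters)
```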

Thirdly, they make use of a range of large data sets (multiple genetic data sets and genealogy databases). This is increasingly necessary in genetics research in order to interpret findings and draw conclusions, making this a nice demonstration of how to think about incorporating additional sources of information (like a historian would) in order to contextualize your results.

Finally, if nothing else, this research serves as a timely reminder of the broad roots and origins of the current residents of the USA and how they came to be there.

Let’s test all the genes!

In this blog post (and others to follow) I want to give some examples of how statistics and statisticians have helped advance genetics research.

Most genetic studies these days consider and measure variation across all 20,000 human genes simultaneously. This is a great advance, as it means we can forget all the old biological theories we had based previous research around but had as yet found no concrete support for. This is the basis of a genome-wide association study, often shortened to GWAS. GWAS are often referred to as a hypothesis-free approach. Technically, they are not completely hypothesis-free, as to do any statistics we need a hypothesis to test. They work on the hypothesis that the disease of interest has genetic risk factors; however, we don’t need to have any idea which gene or genes may be involved before we start. This means we may find a completely new gene or novel biological process which could revolutionize our understanding of a particular disease. Hence, they brought great promise, and new insight, to contemporary genetics research.

So when it comes to doing the statistical analysis for our GWAS, we are essentially performing the same mathematical routine over and over again for each genetic variant in turn. This procedure is automated by computer programmes designed to do it efficiently. At the end we have a vast table of summary statistics to draw our conclusions from (as a gene will have multiple genetic variants across it, this can contain hundreds of thousands or even millions of rows). One particularly important number for each variant is the p-value from its statistical test, which we can use to rank our table of results. There is no plausible way we can apply the standard checks of an individual statistical test that a mathematician is typically taught to do (i.e. do the data meet the assumptions?) to every single genetic variant we have tested. Instead we often look at the distribution of p-values across all the tests, generally using a Q-Q plot to compare the observed quantiles to those expected under the null, to decide whether there is major bias or any confounders affecting the results. Once happy in general, we can look at which genetic variants are significantly associated with our disease of interest.
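
To make that concrete, here is a minimal R sketch on simulated data (the genotype matrix, phenotype and sample sizes are all invented, with no true associations): one regression per variant, followed by a Q-Q plot of the resulting p-values.

```r
# Simulated GWAS-style data: 1,000 individuals, 5,000 variants, no true signal
set.seed(1)
n_ind <- 1000
n_snp <- 5000
geno  <- matrix(rbinom(n_ind * n_snp, size = 2, prob = 0.3), nrow = n_ind)
pheno <- rnorm(n_ind)

# The same routine repeated for every variant: regress phenotype on genotype
# and keep the p-value for the genotype term
pvals <- apply(geno, 2, function(g) {
  summary(lm(pheno ~ g))$coefficients["g", "Pr(>|t|)"]
})

# Q-Q plot: observed -log10 p-values against those expected under the null
observed <- -log10(sort(pvals))
expected <- -log10(ppoints(length(pvals)))
plot(expected, observed,
     xlab = "Expected -log10(p)", ylab = "Observed -log10(p)")
abline(0, 1)  # points should hug this line if there is no bias or inflation
```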

With a number of computer software tools it can be fairly straightforward to plug in the numbers and perform the required statistical test. The challenge is often the interpretation, or drawing conclusions, in particular when it comes to the p-value. This is made harder by the fact that most statistical training courses make the rather unrealistic assumption that you will only ever do one statistical test at a time, and teach you how to apply a significance threshold in that scenario. This knowledge is then taken forward and merrily applied in exactly the same manner to every statistical test performed from that point onwards.

However, there is a potential trap.

When you perform multiple tests, you increase your chances of getting a significant finding even if there are no true associations. For example, let’s assume that there is no association between eating fruit and time spent watching TV. But to be 100% sure, we find a group of people and ask about their TV watching habits and how many apples, bananas, oranges, strawberries, kiwis, melons, pears, blueberries, mangoes and plums they eat each week, then we test each of these ten fruits individually. At a 10% significance level (i.e. p-value < 0.1) we would expect 0.1 x 10 = 1 test to give a significant result, which would be a false positive. The more things we test, the more we increase our chances of finding a significant association, even where none exists. This is called ‘multiple testing’, or ‘multiple comparisons’.
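
A quick simulation in R illustrates the point; all the numbers here are made up, and no real association is built into the data.

```r
# Ten fruit intakes and TV hours, all generated independently: any
# 'significant' association at p < 0.1 is a false positive by construction.
set.seed(123)
n_people <- 200
tv_hours <- rnorm(n_people, mean = 15, sd = 5)

fruits <- replicate(10, rpois(n_people, lambda = 3))
colnames(fruits) <- c("apple", "banana", "orange", "strawberry", "kiwi",
                      "melon", "pear", "blueberry", "mango", "plum")

pvals <- apply(fruits, 2, function(x) cor.test(x, tv_hours)$p.value)
pvals
sum(pvals < 0.1)  # on average about 1 of the 10 tests comes out 'significant'
```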

This knowledge is crucial for correctly interpreting the results of a GWAS. Say we have tested 500,000 genetic variants: even if none of them were truly associated, at a significance threshold of P < 0.05 we would expect 500,000 x 0.05 = 25,000 associations! That is (potentially) a rather hefty number of false positives (associations you report as true but which are in fact false). To prevent this, we need to adjust our significance threshold to account for the number of tests we have performed, minimizing our chances of incorrectly reporting a false positive. There are multiple methodologies proposed to resolve this issue, and this is one example of where statistics plays an important role in genetic research.
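
As a rough sketch of what that adjustment looks like in R (using simulated null p-values, not real GWAS output):

```r
# With 500,000 tests and no true associations, a threshold of 0.05 still
# yields ~25,000 'significant' results, so the threshold must be made stricter.
n_tests <- 500000
n_tests * 0.05            # expected false positives at p < 0.05: 25,000

# Bonferroni-style correction: divide the threshold by the number of tests
0.05 / n_tests            # 1e-07

# R can also adjust the p-values themselves
pvals <- runif(n_tests)   # simulated p-values under the null
sum(pvals < 0.05)                                   # roughly 25,000
sum(p.adjust(pvals, method = "bonferroni") < 0.05)  # roughly 0
sum(p.adjust(pvals, method = "BH") < 0.05)          # false discovery rate control
```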

What’s more, given the high probability of chance findings in GWAS, there is a common consensus that all findings, even if they withstand the appropriate correction for the number of genetic variants tested, must be replicated before they are ‘believed’ or taken seriously. Replication means repeating the whole GWAS process in a completely separate sample. So that’s more work for the statisticians then!

If you are interested in this topic you may enjoy this cartoon, which offers an alternative (comical) solution.

Getting started with pRogramming

With the new term fast approaching we will soon have a number of new students.

Whilst they will come expecting to improve their wet lab skills by spending time in the lab learning new techniques and performing experiments, they may not be aware of the amount of time they will spend in front of the computer, programming to analyse the data they generate. Fortunately many biosciences undergraduate courses teach some statistics and, increasingly, a little bioinformatics, so the students don’t arrive completely unprepared.

With this in mind, I thought I’d run through some of the ways you can go about learning to program, specifically focusing on the statistical language R.

I always like to reiterate that all my programming skills are self-taught, and I highly doubt that I am the only one. Practically, it is a very hard skill to teach in a classroom setting, and the best way to learn is to get stuck in. Trial and error through experience is how you will progress, so it can be very attractive to employers or course providers if you have already had a go. It shows enthusiasm, a go-getting attitude and forward thinking, particularly if it is off the back of your own initiative.

When I started, I downloaded the software (freely available) and worked through the ‘Introduction to R’ manual. This is a very dry way to go about it – and I will acknowledge I did not make it to the end of the document. However, it helped me understand some of the basic principles about variables and functions. From there I was able (with the help of Google) to develop code to achieve the statistical tasks I needed to.
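
For anyone curious, the sort of basics covered early on looks something like this (a tiny, self-contained example):

```r
# Variables, vectors and functions: the first ideas in the 'Introduction to R' manual
x <- c(2, 4, 6, 8, 10)   # assign a vector of numbers to a variable
mean(x)                  # built-in functions: returns 6
length(x)                # returns 5

# Writing a function of your own
double_it <- function(n) {
  n * 2
}
double_it(x)             # returns 4 8 12 16 20
```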

Since then I have discovered a number of online tutorials, which provide an interactive environment with hints and tips to make the process more successful and hopefully more enjoyable. In particular, DataCamp (again free) has been highly praised by colleagues starting out on their programming journey. It is designed for beginners, so is appropriate for any age, stage of education, or purpose.

I have recently tried it out with some work experience students, who really enjoyed the experience. Programming can seem intimidating: not knowing where to start, fearing you’ll break the computer or delete something important, being unsure what exactly it can be used to do. These online aids remove many of these worries, and are a great option if you think you may be interested in a career involving programming but don’t know how you’ll get on.

In fact I’d encourage everyone to have a go; it is more accessible than you think. You never know what you are capable of until you try, and it may even help you decide what career path you wish to follow.

Over the next academic year you may be faced with decisions about what to do next: which subjects to study at GCSE or A-level, whether to look for a job or continue with your studies, which universities to apply to and what courses to do, what job or career path to follow. Or perhaps you just fancy learning a new skill that may lead to a new direction. Try out some programming – it may open doors you didn’t know existed, just like it did for me!

Good luck!


You can’t do that!

I have previously discussed what I feel is the disconnect between taught statistics and the reality of being a statistician. Part of this is that the hard and fast rules are not always obeyed by the data you are working with. This can lead to a state of paralysis, either through confusion about what to do next or a refusal to use any of the standard approaches.


Unfortunately though, I am paid to do data analysis. I am expected to present results, not a list of reasons why none of the tests I know were appropriate. Now, I am not advocating that all the assumptions are there to be ignored, but sometimes you just have to give something a go, get stuck in and see how far you can bend the rules. For something like a regression model, some of the assumptions relate to the fitted model itself. For example, you can’t check whether the residuals are normally distributed until you have fit the model. Therefore you have to do the calculations and generate the results before you know if the model is appropriate.
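
Here is a minimal R sketch of that order of operations, with made-up variable names: fit the model first, then inspect the residuals.

```r
# Simulated example: you can only check the residuals once the model is fitted
set.seed(7)
predictor <- rnorm(100)
outcome   <- 2 * predictor + rnorm(100)

fit <- lm(outcome ~ predictor)

# Now the diagnostics: are the residuals roughly normal, any pattern left over?
qqnorm(resid(fit))
qqline(resid(fit))
plot(fitted(fit), resid(fit), xlab = "Fitted values", ylab = "Residuals")
```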


A big part of statistical analysis is ensuring the robustness of your results. In other words, are they a fluke? Is there another variable you haven’t factored in? I find visualization helpful here: can you see any outliers that change the result if you were to take them out? Is there one particular group of samples driving your association? Is your sample size large enough that you can pick up subtle but noisy relationships? Does the result hold true in males and females? Old and young? Essentially you are trying to break your finding.
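
A few of those checks, sketched in R with a hypothetical data frame (the column names outcome, predictor, sex and age are invented):

```r
# Simulated data standing in for a real analysis
set.seed(11)
dat <- data.frame(outcome   = rnorm(200),
                  predictor = rnorm(200),
                  sex       = sample(c("M", "F"), 200, replace = TRUE),
                  age       = sample(20:80, 200, replace = TRUE))

# Visualise first: do any outliers drive the relationship?
plot(dat$predictor, dat$outcome)

# Does the association hold in males and females separately?
coef(summary(lm(outcome ~ predictor, data = dat[dat$sex == "M", ])))
coef(summary(lm(outcome ~ predictor, data = dat[dat$sex == "F", ])))

# And in the young versus the old?
coef(summary(lm(outcome ~ predictor, data = dat[dat$age < 50, ])))
coef(summary(lm(outcome ~ predictor, data = dat[dat$age >= 50, ])))
```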


In genomics the large datasets with measurements at thousands of markers for hundreds or thousands of individuals often mean repeating the same test for each marker. Doing statistics at this intensity makes it implausible to check the robustness of every single test. To prevent serious violations, fairly stringent filtering is applied to each of the markers prior to analysis. But the main approach to avoid false positives is to try to replicate your findings in an independent dataset.


Often performing the analysis is quite quick: it’s checking and believing that it’s true that takes the time.

Dealing with unknowns.

Science is all about dealing with unknowns.

There are the big unknowns: ‘Can we eradicate cancer?‘, ‘Why do we forget things as we get older?‘, ‘Can we grow replacement organs?‘. Then there are the day-to-day niggling unknowns. These are the ones that tend to cause the most anxiety, perhaps because we never expect to completely answer the big questions and are simply looking to add to the body of knowledge.

Pretty much all of the day-to-day problems I deal with relate to how we are going to test a particular hypothesis. Once you have data in hand, it is not uncommon for some technicalities or oversights to emerge. We have to accept that the perfect study design is often unobtainable, and instead strive to control for as many external factors that may influence the result as possible. Where you couldn’t do so in the way the experiment was conducted, you have a second chance at the analysis stage. This is limited by two things: 1) knowing what all of these possible confounders are, and 2) actually having a measure or proxy for each confounder.

There are two routes taken when dealing with confounders: one option is to perform the initial analysis and then see if it changes with the addition of further covariates; alternatively, you include all the variables from the outset. Personally I don’t see the point of doing an analysis if you are subsequently going to discount any of the results which you later find to be a product of some other factor. Of course this view may reflect my ‘omics background, where, given the large number of features tested in every experiment, spurious results are expected as par for the course and the quicker you can discount them the better.
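
To illustrate the two routes, here is a small R sketch with invented variables, where both the exposure and the outcome are partly driven by age (the confounder):

```r
set.seed(21)
n        <- 300
age      <- sample(20:80, n, replace = TRUE)
sex      <- sample(c("M", "F"), n, replace = TRUE)
exposure <- rnorm(n) + 0.02 * age   # exposure partly driven by age
outcome  <- rnorm(n) + 0.03 * age   # outcome also driven by age

# Route 1: unadjusted analysis first, then add the covariates and compare
coef(summary(lm(outcome ~ exposure)))["exposure", ]
coef(summary(lm(outcome ~ exposure + age + sex)))["exposure", ]

# Route 2: include all the variables from the outset and interpret only that model
summary(lm(outcome ~ exposure + age + sex))
```

In this simulation the unadjusted model suggests an association purely because age drives both variables; once age is included, the apparent effect largely disappears.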

Recently I have been working with some data for which we are aware of many possible confounders. Some of these were obvious at the start and we have the relevant information to include in the analysis. For some of the unknowns, we have calculated estimates from our data using a commonly accepted methodology; however, we are unsure how accurate these estimates are, or whether they are capturing everything they should, as there is little empirical evidence with which to truly assess them.

An alternative with high dimensional data (that is, when you have lots of data points for each sample) is to use methods that create surrogate variables. These capture the variation present in your dataset that is presumed to reflect the confounders we are concerned about (and those we perhaps haven’t thought of yet). I have always been cautious of such an approach, as I don’t like the idea of not understanding exactly what you are putting into your model. What’s more, there is a possibility that you are removing some of the true effects you are interested in. However, there is the opposing argument of, ‘What does it matter? If it prevents false positive results then that’s the whole point.’
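
A crude way to illustrate the idea in R (this is not a real surrogate variable method such as the Bioconductor sva package, just the intuition): take the top principal components of the data matrix and include them as covariates standing in for unmeasured confounders.

```r
# Simulated 'omics-style data: 100 samples x 1,000 features, plus a phenotype
set.seed(31)
expr  <- matrix(rnorm(100 * 1000), nrow = 100)
pheno <- rnorm(100)

# Top principal components of the data matrix as crude surrogate variables
surrogates <- prcomp(expr, scale. = TRUE)$x[, 1:5]

# Include the surrogates as covariates when testing each feature
pvals <- apply(expr, 2, function(feature) {
  summary(lm(pheno ~ feature + surrogates))$coefficients["feature", "Pr(>|t|)"]
})
```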

At present it is somewhat of an open question which way we should proceed. It is good practice to question your approach and test it until it breaks. Having tried a few ways of doing something, all of which produce slightly different numbers, how do we decide which is the correct one? Part of the problem is that we don’t know what the right answer is. We can keep trying new things, but how do we know when to stop? Unlike school, we can’t just turn the textbook upside-down and flick to the back pages to mark our effort as right or wrong. Instead we have to think outside the box to come up with additional ways to check the robustness of our result. But this is par for the course; research is all about unknowns. These are the challenges we relish, and maybe eventually we will start to convince ourselves that our result might be true!

Often the gold standard is replication; that is, repeating the analysis and finding the same result in a completely independent sample. Sometimes you might have a second cohort already lined up, so this validation can be internal and give you confidence in what you are doing. Or you may face a nervous wait to collect more data or for another group to follow up your work.

Sometimes though, you just have to go with what you have got. Sharing your work with the research community is a great opportunity for feedback and may prompt a long overdue conversation about the issues at hand. Ultimately, as long as you are clear about exactly what has been done, your findings can be interpreted appropriately.


Bioinfo what?

So I thought I’d spend some time explaining a little more about the field of Bioinformatics.

If you Wikipedia it, you will discover that it is where statistics, programming and biology meet. You may then be wondering when this would happen. Although maths has been relevant to biology ever since the subject’s first outing, the need for mathematicians or programmers is much more recent.

It has mainly arisen as technology has improved to produce ever-increasing amounts of data, and more complex data at that. Sequencing of the first human genome started in 1990 and finished in 2003; these days, the data can be generated and analysed in around a day. What’s more, we can now generate data on not just the genome, but the epigenome, transcriptome, metabolome and proteome, collectively known as the ‘omics. What they all have in common is lots of data points (generally hundreds of thousands), each representing different parts of your DNA or the resulting chemical molecules.

It would be completely implausible to try to analyse each data point one by one with pen and paper. Therefore, some knowledge of programming is needed to manage the data and implement analyses efficiently. The role of statistics is to ensure that the data are analysed appropriately and the results are not chance findings, while the biologist is needed to run the experiment and do the interpretation. This is a simplification of how these skills come together, but there is huge variety in the types of projects requiring a bioinformatician.

Bioinformatics is now a field in its own right. I am not aware of any undergraduate courses in the UK, although many bioscience departments are starting to offer modules in it. Often the first chance you would have to study it is at masters level. Courses will accept biology, mathematical/statistical or computer science graduates, but my experience is that the vast majority of the intake have studied biology or related disciplines. I mainly put this down to no-one telling mathematics or computer science undergraduates that this is an option, as these courses are predominantly based in bioscience or medical schools. It may also be that biologists realise that to remain competitive in the jobs market you need to have some of these skills (particularly if you want a career in genetics). I have seen non-biologists initially really struggle on these courses, as it is a steep learning curve from GCSE or A-level, with lots of new vocabulary, concepts and mechanisms to get your head around. It can be demoralising and seem like a daunting task, but when it comes to the analysis side you will find the tables turned, with everyone looking at you wishing they could do what you can, so it is worth being patient and sticking with it.

So if this appeals, you are probably wondering which of these subjects you should study at undergraduate level. Well, as all of them can lead to the same outcome, it has to be a choice based on where you think your strengths lie, what you will enjoy, and what you will remain motivated to study for three or more years. What I would say is look for opportunities to broaden your skill set across the three domains. Can you do a computer science module, or learn some programming as part of your final year project in your Maths degree? Does your department offer a bioinformatics or mathematical modelling module in your biology degree? Can you develop some software to help a field biologist collect and store their data? My break came when I spent 10 weeks doing a computational biology project in Edinburgh during the summer of my Maths degree. This was my first chance to learn to program and to learn about the data biologists were working with.

The reality is that most bioinformaticians have a particular strength and positions call for different combinations of skills. You may not need to be the whizziest programmer but have a good analytical mind to decide which statistical approaches should be used. You may not know a huge amount about what the data is but you do know how to store data in an efficient and secure manner, or how to set up and manage high-powered computing systems.

Whichever way you approach it, you will have many opportunities to work on different projects with different teams all over the world. Almost all industries are increasingly reliant on data and informaticians to stay ahead, so if you decide that biology isn’t for you there are many other opportunities out there for these skill sets, so it’s worth thinking about!

Simplicity is the key.

So for all the budding mathematicians out there I want to share with you more details of which statistical tools I use day-to-day.

The first thing to say is that the pen and squared paper are long gone. Here, showing your working involves creating a document with the series of commands or functions you have run in your statistical computer package of choice. Generally I am interested in seeing whether there is a relationship between two measures. I would say that the most common methodology I use is regression, perhaps more familiar to you as fitting a line to data. We routinely have to deal with a range of confounders (that is, additional factors such as gender or age that may induce an association between the two variables of interest), and linear models have the flexibility to handle this.
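
As a small sketch of what that looks like in R (all variable names and data are invented):

```r
# Regression of one measure on another, adjusting for gender and age
set.seed(5)
n        <- 250
age      <- sample(20:80, n, replace = TRUE)
gender   <- sample(c("M", "F"), n, replace = TRUE)
measure1 <- rnorm(n)
measure2 <- 0.4 * measure1 + 0.02 * age + rnorm(n)

fit <- lm(measure2 ~ measure1 + gender + age)
summary(fit)   # the measure1 row gives the association adjusted for gender and age
confint(fit)   # and its confidence interval
```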

There is often an expectation that, as we deal with complex data, we must use super complicated mathematical formulae to cope. This isn’t always necessary (at least not initially), so why make it harder than it needs to be? Keeping the analysis simple helps make the interpretation easier. Implementing a more advanced test (likely accompanied by an impressive name such as ‘dynamic time warping’) may give you a great sense of achievement. But this is often short-lived, lasting until someone asks, “So what does this mean?” and you try to translate the underlying hypothesis into a biological concept.

The most important thing is to keep in mind what scenarios the statistical test is designed for, and to understand or recognise its limitations. Unfortunately, there are many biological measures (genetics being a particularly good example) that flout common statistical assumptions. This can be the biggest challenge, as often an appropriate test does not exist, so you have to get creative and see how far you can stretch the one you are using. However, I think this is where statisticians need to think a little more like other scientists, who routinely accept that no approach is perfect and every experiment has limitations; the key is to acknowledge them.


It’s all out there.

My job is essentially an office job; I spend most of every day sat in front of my computer. The reality is that I do most of my maths with the help of statistical programming packages, though that is not to say you won’t find scraps of paper with handwritten algebraic derivations littered around my desk – it just helps me think!

Predominantly, I work with one called R, which is free to download. Programming is an important part of my job and is a natural progression for anyone mathematically minded, as it is essentially based on logic, and you get the same sense of satisfaction from creating a working computer programme as you do from solving an equation. I would strongly encourage anyone interested in a career in statistics to take a look at the tools out there, as it may put you one step ahead in the jobs market.

I and most of my colleagues are self-taught programmers. Initially, small things can be incredibly frustrating; what really flummoxed me early on was working out how to read my data from an Excel spreadsheet into my R session. But this should not deter you, as your ability accumulates quickly once you have made the initial breakthrough. Furthermore, these skills are so transferable (once you understand the principles of programming in one language, picking up a second, third, fourth etc. is much easier) and so valued by employers that it’s worth the early pain, as it can open up so many alternative careers.
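
For anyone hitting the same wall, here are two common approaches (the file names are hypothetical):

```r
# 1) Save the sheet as a CSV from Excel, then use base R:
dat <- read.csv("my_data.csv")

# 2) Or read the .xlsx file directly with the readxl package:
# install.packages("readxl")
library(readxl)
dat <- read_excel("my_data.xlsx", sheet = 1)

# Always have a quick look at what you've read in
head(dat)
str(dat)
```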

There is so much advice and there are many tutorials online. One I would recommend is https://www.datacamp.com/, which is a great starting point for beginners; there is no reason why anyone can’t give it a go, as all the material is accessible and FREE. Google is an essential resource for any programmer: it’s often quicker than looking up functions or commands in reference books and can save you a lot of time in debugging errors. ‘Have you Googled it?’ is a common retort when presented with a previously unseen error message. The challenge is sometimes knowing what to search for, as the terminology may not be obvious, particularly if you don’t have any formal training, but you will pick it up. It can also be helpful to know others are struggling as well. Stumbling across forums where people are publicly declaring that they have hit the same wall as you reaffirms that you are not completely inept and are on the right track. Remember, we learn more from the mistakes we make than from our successes – which is a good thing, as you will get lots of errors in your programming career.