
DNA medicine faces reality


It may be the ultimate medical profile, but you might have to wait a while for your genomic analysis.

UK health secretary Matt Hancock is a big fan of applying genetics to healthcare. Just over a year ago he spat into a tube so the DNA in his saliva could be passed over a chip packed with an array of chemical probes that look for a selection of genetic markers. He then announced in a speech at the Royal Academy that the information showed he was at higher risk than average of prostate cancer and had decided to book a blood test.

Criticism from the medical community came swiftly. A number of specialists pointed out that all he had gained was needless anxiety over what seemed, based on the scores quoted, to be a modest risk, plus a blood test that would be unnecessary in the absence of any symptoms of the condition.

Hancock is keen to see genetic testing used more widely in the UK’s health service, to the point of offering sequencing of patients’ complete genomes to a much larger share of the population than has been covered so far, mostly through the 100,000 Genomes Project, which started in 2014. The latest plan is to fund the sequencing of up to 20,000 babies to look for signs of rare and inherited diseases.

Such sequencing is far more comprehensive than the tests offered by consumer-level kits, which generally home in on a select thousand or so genetic markers out of the many millions in the entire genome. The view that extensive genetic analysis will make healthcare more efficient and effective seems a natural one. After the Human Genome Project at the beginning of the millennium, medical and biological-science researchers were enthusiastic that the same technology could be applied across the population. But there are questions over whether the results, at least those achievable this decade, justify what could simply be an expensive experiment in mass data collection.

The multi-billion-dollar effort of the Human Genome Project delivered two things that could drive mass sequencing. It provided a template against which the genomes of specific individuals could subsequently be assembled far more cheaply, and it demonstrated how a counterintuitive but highly effective approach to sequencing could drive costs down over time. ‘Shotgun’ sequencing fragments the long chains of DNA into short segments that are comparatively easy to match chemically. The output of the shotgun sequencer is a huge pile of short nucleic-acid base-pair sequences presented in an entirely random order that powerful computers then have to piece together. Counterintuitive as it may seem, most of the information can be pieced back into the right order, especially when there is a template to match the fragments against.
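
The principle can be shown with a toy reference-guided reassembly in a few lines of Python. This is only a sketch of the idea, not a real assembler: the short sequence, the read length and the brute-force alignment are simplified assumptions, whereas production pipelines do the same job probabilistically across billions of reads.

```python
# Toy reference-guided 'shotgun' reassembly: a sketch of the idea only,
# not a real assembler.
import random

random.seed(1)
reference = "ATGGCGTACGTTAGCCTAGGCATCGATCGGATCCAGTTAGCGTACCGTA"
read_len = 8

# 1. 'Shotgun' step: chop the molecule into short, overlapping fragments and
#    throw away all ordering information.
reads = [reference[s:s + read_len] for s in range(len(reference) - read_len + 1)]
reads += [reference[s:s + read_len]
          for s in (random.randrange(len(reference) - read_len + 1) for _ in range(150))]
random.shuffle(reads)

# 2. Reassembly step: place each read at the template position where it
#    disagrees least, and tally the bases it contributes there.
coverage = [{} for _ in reference]
for read in reads:
    best = min(range(len(reference) - read_len + 1),
               key=lambda p: sum(a != b for a, b in zip(read, reference[p:p + read_len])))
    for i, base in enumerate(read):
        coverage[best + i][base] = coverage[best + i].get(base, 0) + 1

# 3. Call the consensus base at every position to recover the original order.
consensus = "".join(max(counts, key=counts.get) for counts in coverage)
print(consensus == reference)   # True: the unordered fragments are pieced back together
```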

Next-generation sequencing took this a step further by trading the analysis of even shorter fragments for greater computing effort. The data from a sequencing run conducted on this equipment can easily consume 100Gbyte of disk space, even when compressed, and on a small cluster can take more than a day to process. However, some teams have found that by using parallel servers in the cloud they can bring that time down to less than half an hour. A group in China reported in 2017 that their cost for running whole-genome sequencing on Amazon’s cloud machines could be as little as $16.50.
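
The back-of-envelope arithmetic behind a figure like that is straightforward. The sketch below assumes an illustrative node count and hourly price; the actual configuration the Chinese group used is not given here.

```python
# Back-of-envelope arithmetic for a parallel cloud sequencing run. The roughly
# 30-minute runtime and the $16.50 figure come from the article; the node count
# and hourly price below are illustrative assumptions.
nodes = 32                 # assumed number of cloud instances working in parallel
price_per_node_hour = 1.0  # assumed price in US dollars per instance-hour
runtime_hours = 0.5        # roughly half an hour, as quoted above

compute_cost = nodes * price_per_node_hour * runtime_hours
print(f"Compute cost: ${compute_cost:.2f} for {nodes} nodes running {runtime_hours} h")
# With these assumptions the bill lands around $16, in line with the reported
# $16.50; storing and moving the ~100Gbyte of compressed reads costs extra.
```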

Thanks to this extensive use of computer analysis, there are claims of a Moore’s Law of genome sequencing that promises the $100 sequence within a reasonable timeframe. These claims have been partly borne out by figures from the US National Human Genome Research Institute (NHGRI), which show the cost of sequencing dropped sharply after 2007, once next-generation sequencing took hold, from around $10m per genome to less than $10,000 in 2011. The sudden decline was accompanied by optimistic projections of the sub-$1,000 or even sub-$100 test. But evidence for this is harder to find. Although the advantage provided by next-generation sequencing far exceeded that of Moore’s Law, the chipmaking industry’s favourite exponential curve is catching up again as DNA sequencing costs flatten out. The NHGRI found the costs, which the institute estimated at a couple of thousand dollars in 2015, barely moved between then and the beginning of 2019. Since then, core costs have fallen to just under $1,000 before rising slightly.
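
Those NHGRI figures make the comparison with Moore’s Law easy to put into numbers. The sketch below uses only the costs quoted above and is indicative rather than exact.

```python
# Comparing the post-2007 fall in sequencing costs with a Moore's-Law-style
# halving every two years, using only the NHGRI figures quoted above
# (about $10m per genome around 2007, under $10,000 by 2011).
import math

cost_2007, cost_2011, years = 10_000_000, 10_000, 4

halvings = math.log2(cost_2007 / cost_2011)      # roughly 10 halvings
months_per_halving = years * 12 / halvings       # roughly 5 months per halving
print(f"{halvings:.1f} halvings in {years} years: one every {months_per_halving:.1f} months")

# Moore's Law, at a halving every ~24 months, would have managed far less:
moores_factor = 2 ** (years * 12 / 24)
print(f"Moore's Law over the same period cuts costs about {moores_factor:.0f}x, "
      f"against the ~{cost_2007 // cost_2011}x seen in sequencing")
```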

A 2018 study led by the Health Economics Research Centre at the University of Oxford similarly found for users of genetic analysis services “only limited evidence that the cost of whole genome sequencing was decreasing”.

Costs could come down further through a combination of improving technology and trade-offs in accuracy (see ‘Shredding the costs’). But the most practical trade-off is to concentrate on the patients who are most likely to benefit, which narrows the effort to a relatively small proportion of the population.

For the UK’s 100,000 Genomes Project, the government-funded company Genomics England had two classes of patient in mind. The largest group was people suffering from rare diseases suspected to have a genetic cause. As most rare diseases have genetic causes, it made sense to focus on them for a mass-sequencing project: two-thirds of those recruited were rare-disease sufferers. The normal practice is one genome per sufferer, though both biological parents are also sampled.

The other group consisted of 35,000 cancer patients. Their genomes had to be tested twice: once for their native DNA and again for the DNA from a tumour, which would most likely be different, with key genes missing or rendered dormant.

Mark Caulfield, chief scientist at Genomics England and professor of clinical pharmacology at Queen Mary University of London, argues the treatments for some rare diseases can be surprisingly simple but could only be properly identified with the help of genome sequencing. At the UKPGx conference for genomics in stratified medicine last year, he cited the case of a girl with a debilitating condition that, once traced to a missing gene, was treated using dietary supplements. “Before the treatment, she had become increasingly locked-in. The dietary change means she is a new child now. This is not going to happen to everybody but even for a few that is a big transformation,” he said, adding that early interventions can lead to large cost savings for the NHS and other support services.

Because genetic alterations can be linked to potential drug treatments, the main destination for data from projects such as 100,000 Genomes has been pharmaceutical companies. A major problem for these companies lies in the cost of running trials, many of which go nowhere because too few patients see a benefit: their cancer or disease has a genetic profile that does not fit the treatment. This is where organisations such as Genomics England see an expanding database of genomes helping the NHS. The hope is that drug companies will “co-develop therapies and link that to a much-reduced price for that therapy in the NHS targeted to the people who will benefit the most”.

Expanding whole-genome sequencing to a wider range of the population would, in principle, provide much more data. Here the genetics get far more complex than tracking down a single troublesome genetic change. Genome-wide association studies (GWAS) try to identify combinations of genes that appear to make individuals more susceptible to certain diseases or make treatment more difficult. Some of the associations are reasonably clear, such as those identified with some breast cancers, but most are not.
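
In practice, the output of a GWAS is a list of variants and their effect sizes, which can be combined into a polygenic risk score for an individual. The sketch below is a minimal illustration of that idea; the variant names, weights and genotype are invented purely for illustration, not taken from any real study.

```python
# A minimal sketch of how GWAS output feeds a polygenic risk score: each risk
# variant contributes its effect size (the log odds ratio from the study)
# multiplied by the number of copies the individual carries.
import math

# Hypothetical GWAS output: variant id -> log odds ratio per risk allele.
gwas_weights = {"rs0000001": 0.12, "rs0000002": 0.05, "rs0000003": 0.30, "rs0000004": -0.08}

# Hypothetical genotype: number of risk-allele copies (0, 1 or 2) reported by
# a genotyping chip of the kind used in consumer tests.
genotype = {"rs0000001": 2, "rs0000002": 0, "rs0000003": 1, "rs0000004": 1}

score = sum(gwas_weights[variant] * copies for variant, copies in genotype.items())
print(f"Polygenic score: {score:.2f} "
      f"(odds ratio versus a zero-risk-allele baseline: {math.exp(score):.2f})")
```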

In a 2019 article for Genetics in Medicine, Nicholas Wald and Robert Old of the Wolfson Institute of Preventive Medicine argued that practically no genome-wide risk score from a GWAS analysis obtained up to that point met the statistical requirements for treating a disease marker as actionable. Although a risk might be elevated, it might not warrant any preventive treatment. “It is not well recognised that estimates of the relative risk between a disease marker and a disease have to be extremely high for the risk factor to merit consideration as a worthwhile screening test,” they concluded.
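
A rough calculation shows why modest relative risks make for poor screening tests. The sketch below assumes a simple binary marker and a rare disease; the prevalences and relative risks are illustrative, not figures from Wald and Old’s paper.

```python
# Why a modest relative risk makes a poor screening test. For a binary marker
# carried by a fraction p of the population, with relative risk rr for carriers
# and a rare disease, the detection rate (share of future cases flagged) and
# the false-positive rate (share of everyone else flagged) work out as below.
def screening_performance(p, rr):
    detection_rate = p * rr / (p * rr + (1 - p))  # share of affected who carry the marker
    false_positive_rate = p                       # for a rare disease, roughly the share of unaffected flagged
    return detection_rate, false_positive_rate

for rr in (1.5, 2, 5, 20):
    dr, fpr = screening_performance(p=0.10, rr=rr)
    print(f"relative risk {rr:>4}: detects {dr:.0%} of cases, flags {fpr:.0%} of non-cases")

# Even a relative risk of 5 detects only about a third of future cases while
# flagging one person in ten, which is why only very large relative risks
# qualify a marker as a worthwhile screening test.
```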

The difficulty of obtaining actionable information for patients who do not fall into the groups already identified by the 100,000 Genomes Project makes the cost of whole-genome sequencing a critical issue. It is why, despite government willingness, medical practitioners remain cautious, including about Hancock’s plan announced in 2018 to widen genetic sequencing to five million people. Caulfield told the UKPGx conference: “Five million genomes at the whole-genome level is unaffordable for a public health system.” He said routine sequencing could expand well beyond the current level of just over 100,000 genomes, but the total might still fall short of that five-million target because the benefits do not justify the cost.

A cheaper and more scalable option may be to perform a much simpler genetic test, using the same kind of genotyping panel as the one used by Hancock, but in a more targeted way. One problem that follows any drug-based treatment is its variable efficacy. Some patients react well to a course of drugs; others see little to no benefit. And a very unlucky few suffer severe side effects. One example revolves around a common treatment for epilepsy: the drug carbamazepine.

In patients with a certain genetic pattern – known as the HLA-B*15:02 allele – the body responds to carbamazepine with an immune reaction. Although most carriers will experience only mild side effects, extreme cases lead to skin sores and cell damage severe enough to be fatal. Epilepsy sufferers might therefore seem prime candidates for testing for that allele. But even then, the numbers may not justify it.

One problem lies in the statistics of false positives. At the UKPGx conference, Dyfrig Hughes, professor in pharmacoeconomics at Bangor University, described a study in Malaysia which found that more than 200 patients would need to be screened to avoid a single case of the necrolysis reaction – and that is in a country where the HLA-B*15:02 allele is far more common than in the UK. Even then, the test is not conclusive: only a quarter of those identified would suffer the extreme side effects, with the others being prescribed a less effective alternative that does not deal adequately with their symptoms.

The results indicated there might be three patients who suffer uncontrolled epilepsy because of the altered prescription for each one who avoids the potentially fatal reaction. In the UK, where far fewer patients carry the allele and are therefore at risk of the adverse reaction, the cost of screening per case avoided is higher still.
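
The arithmetic can be put into rough numbers. The sketch below builds on the figures quoted from the Malaysian study; the carrier frequencies and the test price are illustrative assumptions, not values reported by Hughes.

```python
# Rough arithmetic behind the carbamazepine example, built from the figures
# quoted above: roughly 200 patients screened per severe reaction avoided,
# about a quarter of identified carriers going on to react, and around three
# patients moved to a less effective drug per reaction prevented.
def screening_arithmetic(carrier_freq, p_reaction_if_carrier=0.25, test_cost=50):
    """Patients genotyped, total test cost and patients switched without
    benefit, per severe reaction prevented."""
    needed = 1 / (carrier_freq * p_reaction_if_carrier)
    switched_without_benefit = (1 - p_reaction_if_carrier) / p_reaction_if_carrier
    return needed, needed * test_cost, switched_without_benefit

for label, freq in [("Malaysia (assumed ~2% carriers)", 0.02),
                    ("UK (assumed ~0.2% carriers)", 0.002)]:
    needed, cost, switched = screening_arithmetic(freq)
    print(f"{label}: genotype ~{needed:.0f} patients (~£{cost:,.0f} in tests) and leave "
          f"~{switched:.0f} on a less effective drug, per severe reaction prevented")
```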

Because of the low probability of extreme side effects, the cost of testing quickly begins to weigh on genotyping, though with some expensive drugs it easily makes sense. It is relatively easy to trade off a drug that costs tens of thousands of pounds a year against a test that costs hundreds – if a genetic association is known and the population of sufferers is small. But, for the most part, single-allele tests are unlikely to be cost-effective.

“However, if you move to a multi-gene panel, where you analyse the results of 50 or more genes, then suddenly it becomes much more worthwhile. The more you include the more cost-effective it becomes,” Hughes claims. At some point, the number of genes you include is so large the practice might as well move to whole-genome analysis.
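
The economics of that panel argument are easy to sketch: the laboratory cost of a run is largely fixed, so spreading it across many drug-gene pairs cuts the cost per useful answer. All prices and probabilities below are illustrative assumptions, not figures from Hughes’ talk.

```python
# A sketch of the multi-gene panel economics: a largely fixed cost per run,
# shared across many drug-gene pairs, cuts the cost per actionable result.
def cost_per_actionable_result(panel_price, genes_on_panel, p_actionable_per_gene=0.02):
    expected_actionable_findings = genes_on_panel * p_actionable_per_gene
    return panel_price / expected_actionable_findings

for genes in (1, 10, 50, 100):
    # assume a £100 fixed cost per run plus £1 of marginal cost per extra gene
    price = 100 + genes * 1
    print(f"{genes:>3}-gene panel: ~£{cost_per_actionable_result(price, genes):,.0f} "
          f"per actionable finding")
```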

The problem is working out which 50 genes should be tested, and on whom. Individual patients are unlikely to show positives for more than a few items on the panel, and the specialist ordering the test will most likely only do so if a drug they want to prescribe appears on a ready-made genotyping panel. It will be up to health authorities to work out which mixture of genetic tests goes onto those standard panels, presumably using some form of statistical analysis of which drugs are commonly prescribed together. Heart-condition treatments based around warfarin are cheap to produce but can lead to serious side effects; these might make good options to put alongside genetic tests for more specific and expensive drugs.

The genotyping panel is a long way from whole-genome sequencing at the population level but it may go some way to making genetic analysis far more widespread than it is today. People like Hancock, who have had their own limited-scale tests conducted through consumer-level genotyping, may be the vanguard in terms of the kind of technology that will be rolled out. But the aim of those tests and the DNA information they focus on is likely to be quite different.

Genome sequencing: shredding the costs

Genome sequencing is far from being an exact science. Although touted as whole genome sequencing, the method in use can miss as much as 5 per cent of the DNA in the genome.

This is because there are large chunks of chromosomes that contain repeated sequences that current forms of sequencing cannot easily handle. Repeated genes and seemingly nonsense sequences such as repeated pairs of bases – the TATATATA pattern is a common example – are often harmless.

Sometimes these repeats are less benign and lead to serious inherited conditions: the nonsense sequences can turn up inside a protein-coding gene and disrupt its function entirely. In other cases, extra copies of a gene can disrupt its function, and such duplications can be difficult to spot with methods that rely on shredding the DNA before analysis.

One way to overcome the problem of dealing with repeats and other DNA ‘junk’ is redundancy: generate enough short DNA segments to be able to hit each section multiple times and take advantage of small offsets in each to give the reassembly software more to work with. For a coverage level of 30x, which is a typical minimum in human-genome sequencing, the machines need to crunch through more than 100 billion nucleic-acid bases. You can cut the coverage level but your chance of missing something increases.
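
The arithmetic behind that figure is simple: a human genome of roughly 3.2 billion bases read at a nominal 30x coverage. The genome size below is an approximation, and real runs typically sequence somewhat above the nominal target to guarantee a minimum depth everywhere, which is what pushes the total past 100 billion bases.

```python
# Coverage arithmetic for whole-genome sequencing at 30x nominal depth.
genome_size_bases = 3.2e9   # approximate size of the human genome
coverage = 30               # typical minimum depth for whole-genome sequencing

total_bases = genome_size_bases * coverage
print(f"{total_bases:.1e} bases, i.e. roughly {total_bases / 1e9:.0f} billion")
# Dropping to 10x coverage cuts that to about 32 billion bases: cheaper, but
# with a greater chance of missing repeats and rarer variants.
```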

There are possible technological alternatives. Rather than shred the DNA and expect the computer to piece the results back together from scratch, one option is to do the shredding in two stages with the help of microfluidic technology. Microfluidics uses the same micro-machining processes that produce the gyroscopes, microphones and accelerometers inside mobile phones, but applies them to create tiny reaction chambers.

As one example, researchers from the University of California San Diego demonstrated a technique in 2017 that shredded the DNA in two stages. The first stage used a strong alkali broth to break the chromosomes down into chunks of DNA of around half a million bases. Each chunk was then separated and sequenced, giving the computer a head start on reassembling the jigsaw puzzle. Such techniques could translate into lower coverage and cheaper sequencing.

Another technique is to try to increase the lengths of DNA that sequencing machines handle to try to capture those difficult repeating segments. But it represents another trade-off. The chemical processes needed to preserve and amplify those longer segments tend to be more expensive and call for more specialised machinery. You might be able to cut coverage to 10x and reduce costs there but the fundamental cost of processing may be higher. For the moment, because computer time is cheap, the emphasis is on high coverage and throughput.
