PACE Continuous Innovation Indicators: a new tool to measure progress in cancer research

Published: 7 Jan 2015
Prof Gordon McVie - ecancer and European Institute of Oncology, Milan, Italy

From the “War on Cancer” to sponsored walkathons and races, society constantly aims to move cancer research forward - but can that progress be quantified? Prof McVie tells ecancertv about a novel set of tools to measure research innovation, described in a new research paper published in ecancermedicalscience.

The PACE Continuous Innovation Indicators™ allow users to measure progress against cancer over time, and generate graphical outputs that may help them identify gaps and needs.



Prof Gordon McVie - ecancer and European Institute of Oncology, Milan, Italy


PACE stands for Patient Access to Cancer Care Excellence. It describes a group of people who have come together under Lilly’s auspices; in other words, it is a drug-company-convened grouping of patient advocates, scientists and clinicians. For a couple of years now they have been discussing how to measure the available evidence that would tell us whether or not we are actually making progress against particular types of illness, particularly cancer.

PACE starts from a feeling that, for all the mega-bucks that have been thrown at cancer research, and the drug companies are the first to claim that it takes many millions of dollars to take a drug to market because of the intense research involved, there is relatively little to show for it. A stack of drugs approved over the last couple of years by the FDA and the EMA offer two or three months of improvement in survival, sometimes four or five. The question is whether that is actually value for money. I see this partly as a way of finding evidence that small incremental improvements in outcomes for patients add up, maybe, to more significant progress than we’ve been led to believe.

The group of people that Lilly invited to discuss how we might put evidence together believe that there is a scale from very reliable to not-so-reliable evidence. The very reliable evidence would be a meta-analysis of drug A versus drug A plus B, or drug B versus drug X. Then there would be, let’s say, a Cochrane analysis of a particular area of research; it need not always be drugs, of course, it could be a different way of doing surgery with a robot, or giving radiotherapy with a CyberKnife or Gamma Knife instead of conventional radiation schedules. And it goes down to the lower levels of evidence, such as a case report. A good illustration is the one in the November issue of ecancer: a patient with an inoperable intrahepatic cholangiocarcinoma, resistant to chemotherapy and radiotherapy, with metastatic disease, who had a remarkable response to two targeted agents given in combination against BRAF and MEK. A single case report with an excellent response has to be seen as evidence of something working, but it doesn’t stand up to a meta-analysis.
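To make that idea of a reliability scale concrete, here is a minimal sketch in Python. The tier names and numerical weights are purely hypothetical illustrations and are not the evidence ratings actually defined for the PACE Continuous Innovation Indicators.

```python
# Illustrative only: hypothetical evidence tiers and weights, not the
# actual evidence ratings used by the PACE Continuous Innovation Indicators.
from dataclasses import dataclass

# Stronger study designs carry more weight (values are invented).
EVIDENCE_WEIGHTS = {
    "meta_analysis": 1.0,      # e.g. pooled randomised trials of drug A vs drug A plus B
    "systematic_review": 0.8,  # e.g. a Cochrane-style analysis of an area of research
    "randomised_trial": 0.6,
    "observational_study": 0.3,
    "case_report": 0.1,        # e.g. a single remarkable responder
}

@dataclass
class Evidence:
    description: str
    tier: str

    @property
    def weight(self) -> float:
        return EVIDENCE_WEIGHTS[self.tier]

examples = [
    Evidence("Meta-analysis of drug A versus drug A plus B", "meta_analysis"),
    Evidence("Case report: cholangiocarcinoma responding to BRAF plus MEK inhibition",
             "case_report"),
]
for e in examples:
    print(f"{e.description}: weight {e.weight}")
```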

So that’s the thinking behind putting together some of these indicators, and the hope is that, with a layering technique and some really interesting modern and useful algorithms, there will be a way of measuring how particular kinds of treatment are improving outcomes for patients, and at what pace.

One of the authors of the PACE paper has compared it to the cancer genome. The cancer genome was published and anybody can go in, pull out a gene or gene signature of interest and do further research with it. The hope is that this set of indicators, layered by importance, put together with new algorithms and continuously re-evaluated as patient trials mature, will be available as small pieces of evidence in the same way, allowing researchers to go in and find evidence to prove or support a hypothesis, to help choose a new branch of research, or to help underpin support for a newly developed form of treatment.

Clinical trials are looked at and analysed by a group of experts who apply a series of weights according to how rigorous the trial design has been and how mature the data are: whether the trial is reporting on, say, progression-free survival because it isn’t finished yet, or because the final endpoint, which might be overall survival, has not been reached. Progression-free survival is very legitimate and very important data, particularly to patients; people often miss that point. The data can then be added to and updated with a separate piece of evidence later, when the trial is mature enough to show survival differences, if they exist. It’s also possible to pick up other surrogate or additional markers from a clinical trial which, as I say, might matter more to the patient, such as quality of life, restoration of function, or relief of psychological symptoms such as depression and anxiety. So an expert group looks at each piece of evidence and weights it, both for scientific quality and for value to the patient.
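As a rough sketch of how such weighting and later updating might work, the following Python snippet assumes a 0-to-1 rigour rating and invented patient-value weights per endpoint; it is an illustration under those assumptions, not the expert scheme described in the paper.

```python
# Hypothetical sketch: combine an expert rating of trial rigour with
# patient-value weights for each reported endpoint, and update the score
# as the trial matures. Scales and numbers are invented for illustration.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class TrialEvidence:
    name: str
    rigour: float                                                # 0-1 expert rating of design quality
    endpoints: Dict[str, float] = field(default_factory=dict)    # endpoint -> patient-value weight

    def score(self) -> float:
        # Each endpoint counts according to its value to patients,
        # scaled by how rigorous the trial design is.
        return self.rigour * sum(self.endpoints.values())

# An immature trial reporting progression-free survival and quality of life...
trial = TrialEvidence("hypothetical renal cancer trial", rigour=0.8,
                      endpoints={"progression_free_survival": 0.6,
                                 "quality_of_life": 0.4})
print(f"interim score: {trial.score():.2f}")

# ...updated later, once overall survival data are mature.
trial.endpoints["overall_survival"] = 1.0
print(f"updated score: {trial.score():.2f}")
```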

It’s hoped that this approach to assembling data, mapping out endpoint data and pinning values onto them, will be of great use to subsequent researchers, who can pick pieces of evidence and build a story or question a new hypothesis suggested by the data. It should also help public health colleagues who want to see whether there really has been a change in the outcome of treatment for, say, kidney cancer. If you looked at one trial of one angiogenesis inhibitor in kidney cancer, you might not think the results were great. On the other hand, if you look at all the results of the five or six drugs that have now been approved by the FDA for kidney cancer, you will see that the whole way of managing kidney cancer has been turned upside down. First there were no drugs, then there were some biologicals, and then suddenly there were active drugs: mostly angiogenesis inhibitors in the first instance, followed by further antibodies and small molecules. Combining these drugs in kidney cancer mostly just added toxicity, but used sequentially they have trebled the survival of kidney cancer patients; we’re up to four, five, six, seven years now, because once a patient responds to one drug and it fails, another takes over and gives a response, and then another, and then another. That kind of sequence cannot be measured at the moment except with this sort of design for gathering information and piecing it together.
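The kidney cancer point, that small per-drug gains can stack up across sequential lines of therapy, can be illustrated with a trivial sum; all figures below are invented for the purpose of the sketch and are not taken from the trials or from the paper.

```python
# Invented numbers, for illustration only: modest incremental gains from
# drugs used one after another add up to a much larger cumulative benefit
# than any single trial would suggest.
sequential_gains_months = {
    "first-line angiogenesis inhibitor": 5,
    "second-line small molecule": 4,
    "third-line antibody": 3,
}

baseline_months = 12  # hypothetical median survival before these drugs existed
total_months = baseline_months + sum(sequential_gains_months.values())
print(f"Baseline {baseline_months} months -> {total_months} months "
      f"after sequential therapy (each drug alone adds only 3-5 months).")
```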

Another important trend, which clinical researchers are certainly looking at, is in the haemato-oncology field, with the leukaemias and the myelomas, where we’re finding that, with the advent of really selective antibodies and biologicals, the days of treating some of these haematological malignancies with cytotoxic drugs may be numbered. If the trends, which you can quite easily map out with this kind of indicator technology, continue, then in five years’ time we might not be using cytotoxic drugs at all for the treatment of some leukaemias, and maybe some lymphomas too. Even five years ago it was unthinkable that we would be using biologicals and antibodies; everybody knows that biologicals and antibodies are far, far kinder to patients and have far fewer side effects than cytotoxics.

So I think there are a number of possible benefits from carrying on with this approach to research. As a paper it should certainly stimulate other researchers to go and see whether it’s reproducible, helpful and accurate, and then maybe in a couple of years’ time, with three or four more publications testing this methodology, we’ll know exactly how much improvement it’s giving.