I’ve been very fortunate over the last couple of years, at a time when the National Cancer Institute and the NIH as a whole have received substantial increases in our appropriations from Congress, and a substantial fraction of those funds has been directed towards studies of precision oncology. So over the past two years we’ve used those monies in a variety of ways to address problems in the conduct and development of the whole field of precision oncology.
Tell us about your mouse models.
Over the last several years we’ve been developing a repository of patient-derived tumour models that grow in special, highly immunocompromised mice, which allow over 50% of the samples we receive from around the United States to grow in those models. We now have between 400 and 500 such models that we’re distributing at very modest cost to the investigator community. Those samples have all been evaluated and characterised by next-generation sequencing, by whole exome sequencing and by RNA sequencing, and some of them have therapeutic data as well as the patient’s prior treatment history. We’ve had a large response from the investigative community; we think these models will be very useful for the preclinical development of both novel single agents and combinations.
The specific new project we’re doing is to develop canine versions of human immunotherapies, because we hope that will allow us to do much more intensive biomarker development with those reagents and antibodies. The second piece, which has been lacking and which we hope to improve, is the development of our early phase clinical trials network. In about a week or two we are rolling out a new set of pharmacodynamics and next-generation sequencing core facilities and core laboratories to help all of the investigators in that network, which studies single agents and combinations of investigational agents for which the NCI holds the IND from the FDA, to standardise and to perform assays of target validation for the patients enrolled in those trials. Developing those assays is a fairly expensive endeavour, so we have spent several years developing whole suites of assays across many different pathways that we are now going to roll out and make available for these clinical trials.
The third piece of what I presented involves the question of how we better share the data that has accumulated over many years from clinical trials, and the samples that have been developed at the NCI. For the former, about a year and a half ago we opened what we call the NCI Data Archive, which contains 22 recently completed NCI-supported phase III trials, including data from both the investigational arms and the control arms. With the appropriate permissions, all of those data are available to the entire world to download and analyse. What we just initiated about six weeks ago is something we call the NCI Navigator, an online resource that allows individuals anywhere in the world to locate tumour specimens, blood and other samples from clinical trials that have been conducted and supported by the NCI over many decades. There are over 800,000 samples in this repository, and this gives investigators an online way to evaluate whether or not samples still exist from the clinical trials in which they were taken. There is then a mechanism involving peer review of requests for the use of those tissues. This does not require additional resources, but it does have to go through peer review because the samples are really very precious.
Is there a large collaborative aspect to your work?
One thing I should point out is that by legislative fiat all of the data we generate on our clinical trials, and all of these specific model resources, are publicly available. So we’re very interested in collaborating with others, providing information to others and, in particular, having the data downloaded and evaluated as individuals around the world define their own needs and their own clinical trials. It’s really important that US Government resources are used in the most productive way to advance the science worldwide.
What is the APOLLO moonshot project?
The APOLLO moonshot project is a collaboration between the United States Department of Defense, the United States Veterans Administration and the NCI. It has been a very interesting exercise for us because, although patient compliance is often a very difficult issue in clinical trials, in this case all of the patients involved are in the United States military system and have outstanding electronic health records. The idea, therefore, is to prospectively collect tissues and blood to understand the proteomic profiles of those patients in a system in which the clinical information can be extremely well annotated.
Several portions of the APOLLO project have already been kicked off and data are being generated. The proteomic data are being generated at sites supported by the National Cancer Institute; the genomic information from these same patients is being developed at sites supported by the Department of Defense; and the clinical information is being collated on a website that we share with them.
So basically our notion is that we want to set up a series of networks that interact with each other. Having a large biobank would be of only minor use unless it were there to inform and supply tissues for investigators in our network studying drug sensitivity and resistance. In the same way, collecting tissues from patients undergoing immunotherapy would not be of great use unless they were being used to develop new biomarkers for immunotherapeutic response. Overall, these activities involve the expenditure of literally tens of millions of dollars a year, and they are really only made possible by the moonshot funding that we received from Congress about a year and a half ago.