Translational research in breast cancer

Published: 26 Jul 2017
Dr Mark Pegram - Stanford Medical School, Stanford, USA

Dr Pegram talks with ecancer at NOSCM 2017 about translational research in breast cancer, with a focus on diagnostics and the challenges faced in the diagnosis and treatment of breast cancer. He discusses a recent paper from Christina Curtis at Stanford Medical School investigating the evolutionary dynamics of outgrowth in different sub-populations of mutant tumour cells in solid tumours, and goes on to cover a number of other papers in his talk at the meeting.

My assigned topic for discussion in the breast cancer session today was really a challenge given to me by the program chairs, Dr Raez and Dr Santos. They gave me the title of ‘Update on translational research in breast cancer’, which is a very broad topic but gave me the freedom to consider some possibilities that might not cross a routine clinician’s mind on a busy day.

So I focused on the area of diagnostics and the challenges that we now face in the diagnosis and treatment of breast cancer. To this end I discussed a paper that was just published by one of my colleagues at Stanford, Dr Christina Curtis, who looked at the evolutionary dynamics of outgrowth of different subpopulations of mutant tumour cells in solid tumours, breast cancer included. She explored how these dynamics come into play under various selection pressures, like the selection pressure from our therapeutics. With this model framework you can superimpose actual data from solid tumours and deduce what the types of selection pressure might be, whether neutral, positive, negative or some combination thereof. This is very important because tumours are collections of highly heterogeneous groups of tumour cells with different types of mutations within the tumour itself. So it is probably important to take multiple samples of a tumour, because as these cells grow out, spatial heterogeneity evolves as well.
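A toy simulation can make the idea of selection pressure concrete. The sketch below is invented purely for illustration (it is not the framework from the Curtis paper): a mutant subclone grows alongside a background population, and a fitness parameter above, below, or at 1 stands in for positive, negative, or neutral selection.

```python
import random

def simulate_outgrowth(n_generations=20, fitness=1.0, seed=0):
    """Toy model of a mutant subclone growing alongside a background
    population. fitness > 1 models positive selection, < 1 negative,
    == 1 neutral. Returns the mutant fraction at each generation.
    Illustrative only -- not the model from the Curtis paper."""
    rng = random.Random(seed)
    background, mutant = 1000.0, 10.0
    fractions = []
    for _ in range(n_generations):
        background *= 1.05                 # baseline growth per generation
        mutant *= 1.05 * fitness           # selection scales the growth rate
        mutant *= rng.uniform(0.98, 1.02)  # small stochastic variation
        fractions.append(mutant / (mutant + background))
    return fractions

neutral = simulate_outgrowth(fitness=1.0)   # mutant fraction stays near 1%
positive = simulate_outgrowth(fitness=1.1)  # mutant fraction expands
```

Running this with a few fitness values shows why a measured subclone frequency trajectory, superimposed on such a model, lets one infer which kind of selection is at work.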

The other topic that I discussed was Dr Haruka Itakura’s paper in Science Translational Medicine on integrating digital image datasets with molecular expression array datasets. It’s now possible through bioinformatics tools to integrate digital image data with, say, gene expression array data, yielding more interesting and complex tumour phenotypes that tend to cluster together, each cluster with its own particular suite of altered genes. That opens the possibility of new diagnostic modalities, and perhaps new therapeutic approaches, built on a more comprehensive tool that integrates all of the available clinical data.

I moved from there to a consideration of how all of the multigene assays have evolved over the past five to ten years. Most of them started with gene expression arrays, that is, looking at all the messenger RNAs that are upregulated or downregulated in a given tumour sample. Those patterns inform the intrinsic phenotypes of breast cancer, which has therapeutic implications: the steroid receptor positive phenotypes are treated with hormone therapy, the HER2 enriched phenotypes are treated with HER2 targeting agents, and so on. What was interesting was a comparison of the commercially available multigene diagnostic assays in human breast cancer. A British group recently published a paper from the OPTIMA trial in which they compared about five different commercial assays, ranging from Oncotype DX to MammaPrint to the PAM50 assay to IHC4 and IHC4-AQUA. They compared the results of all those tests in a panel of about 313 newly diagnosed breast cancer patients. Since all of those diagnostic tests have some ability to separate low risk populations from high risk populations, you would expect them all to identify roughly the same patients no matter which assay you used. In fact, a paper published in The New England Journal of Medicine by Fan et al back in 2006 suggested that most of these tests cull out a low risk group that probably would not benefit from chemotherapy, and therefore it wouldn’t matter which test you ordered; perhaps they’re identifying the same patients. Well, in the OPTIMA trial the opposite was observed: when they measured all five assays in all 300-odd tumours they found marked discordance from one assay to the next. For instance, the Oncotype DX score classified the highest percentage of patients as low risk compared to any of the other assays.
So that calls into question the ability of each of these assays to truly identify all of the patients who are low risk.
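To see what this kind of discordance looks like, here is a small worked example with invented risk calls for two hypothetical assays (the values are not from the OPTIMA trial). Each assay splits patients into low and high risk, yet they disagree on which individual patients are low risk.

```python
# Hypothetical low ("L") / high ("H") risk calls from two multigene
# assays on the same ten tumours. The data are invented purely to
# illustrate discordance, not taken from the OPTIMA trial.
assay_a = ["L", "L", "L", "H", "L", "H", "L", "L", "H", "L"]
assay_b = ["L", "H", "L", "H", "H", "H", "L", "H", "H", "L"]

def concordance(calls_a, calls_b):
    """Fraction of tumours on which two assays give the same call."""
    agree = sum(a == b for a, b in zip(calls_a, calls_b))
    return agree / len(calls_a)

low_a = assay_a.count("L") / len(assay_a)  # assay A calls 70% low risk
low_b = assay_b.count("L") / len(assay_b)  # assay B calls only 40% low risk
overall = concordance(assay_a, assay_b)    # they agree on 70% of cases
```

Even though both assays produce a low-risk group, the groups overlap only partially, which is the pattern that makes it impossible to say which assay is "right" from concordance data alone.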

Based on that data it was impossible for the authors of the paper to conclude which test was best. We don’t know whether the Oncotype score perhaps over-calls low risk or whether the others under-call low risk, for example. As it happens, a number of these tests have also been examined retrospectively for their ability to predict response to chemotherapy, not just the prognosis that the assays were built upon. That introduces another problem: a number of gene expression profiles that predict response to specific types of chemotherapy have now been published in the literature, yet none of those are integrated into our current armamentarium. All of the commercial assays were developed on prognostic signatures, not response signatures.

So I offered a possible solution. One solution to this conundrum and the discordance in the OPTIMA trial would be to take a step back and re-examine all of the bioinformatics platforms and information that were used to develop the current generation of assays. This was done by the METABRIC investigators and published just a couple of years ago. They looked at a set of 2,000 primary breast cancers divided into two cohorts of 1,000, a test set and a validation cohort. They used a new tool which integrates both the messenger RNA data and the copy number variation data. It turns out that in human breast cancer the copy number variation data harbour a lot of the information on drivers of oncogenesis, particularly in breast and ovarian cancers, whereas in other tumour types it’s more collections of single point mutations that offer that same capability. Consequently, by integrating the expression array data with the copy number variation data you get a higher resolution picture of the phenotypes of breast cancer. These are called integrated clusters; ten subtypes fall out of that type of algorithm, and they correlate with prognosis and with response to neoadjuvant chemotherapy. There’s much greater granularity in their ability to predict chemotherapy response compared to any of the assays that are currently commercially available. So in future generations it’s very likely that we’ll look at these more highly integrated datasets that contain more of the molecular information, not just messenger RNA information. The investigators have also been able to show that gene expression profiles correlate quite well with the copy-number-informed clusters, which sets the stage for future commercialisation of that type of technology: it would be cheaper and easier to use the array data alone rather than the fully integrated data, and it looks like they have the capability to do that.
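The core mechanical idea, putting expression and copy-number measurements on a comparable scale and clustering them jointly per tumour, can be sketched in a few lines. This is a deliberately simplified illustration with invented numbers; the actual METABRIC integrated-cluster method is far more sophisticated.

```python
import statistics

def zscore(values):
    """Standardise one feature across samples so expression and
    copy-number features end up on a comparable scale."""
    mu = statistics.mean(values)
    sd = statistics.pstdev(values) or 1.0  # guard against zero variance
    return [(v - mu) / sd for v in values]

def standardise(matrix):
    """z-score each column (feature) of a samples-by-features matrix."""
    cols = [zscore(col) for col in zip(*matrix)]
    return [list(row) for row in zip(*cols)]

# rows = tumours; columns = features (toy, invented values)
expression = [[5.1, 2.0], [4.9, 2.2], [1.0, 8.0], [1.2, 7.8]]
copy_number = [[3, 2], [3, 2], [6, 1], [6, 1]]

# concatenate the standardised feature blocks per tumour
integrated = [e + c for e, c in
              zip(standardise(expression), standardise(copy_number))]

def dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# With these toy values, tumours 0/1 and 2/3 fall into two clear groups
# in the integrated feature space.
within = dist(integrated[0], integrated[1])
between = dist(integrated[0], integrated[2])
```

Any standard clustering algorithm run on the `integrated` vectors would then recover joint expression-plus-copy-number subtypes; that is the sense in which the integrated data carry more information than either platform alone.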

So that’s quite exciting; it’s a look forward into the future development of breast cancer diagnostics. There’s potential for a more complete understanding of molecular phenotypes in breast cancer, and consequently that will give better resolution in both prognosis and treatment outcomes for our patients.