FQN digests 2016 and injects topics of 2017

Bruker reveals focus for applied markets

By Joseph James Whitworth




To understand the food safety and quality landscape, we sent a Q+A to several companies in the industry to discuss highlights of 2016 and to predict what this year could bring.

Here is the second, with Bob Galvin, VP Applied Markets at Bruker Daltonics. If you missed the first, with Ashley Sage, senior manager - applied markets development EMEAI at SCIEX, go here.

FQN: What was the highlight of 2016 for your company and why?

Galvin: The focus in applied markets is on making things faster: fewer false positives, screening for more compounds, and higher throughput through speed. One of the key guidelines we have to comply with is SANTE/11945/2015, which dictates everything we do; our software is developed to follow it.

A lot of our customers are the European reference labs, where screening and quantitation in a single run is becoming the norm. Historically, it is fair to say triple quadrupoles have had a very strong place in food analysis; it is targeted analysis and they have dominated.

The Q-TOF orthogonal time-of-flight instruments have made quite a significant improvement in performance over the last few years, especially on the quantitation side, and we are seeing a move in the industry towards screening and quantitating via one of these types of systems. The reason being that, one, you screen faster and, two, you can go back and retrospectively analyse your data for something which wasn't targeted.

With the triple quadrupole mass spectrometry systems which have dominated, you have to tell the instrument what to look for. In tomatoes, for instance, there is a list of 20-25 pesticides that have to pass, and you set your triple quadrupole up just to look for those 20-25. If there is anything else in there which you are not looking for, the system is blind.

Whereas the orthogonal TOF systems see everything that is there and report exactly what they find. So you are wanting to report the 20-25 standard compounds, but if anything else is present, caused, say, by pollution in the water used to water the tomatoes, that is also picked up, screened for and reported.

So the change from the large labs doing thousands and thousands of samples every day is relatively slow, but the reference labs are now starting to pick these screening methods up and move towards them.

The first move is towards targeted analysis, but with the orthogonal TOF. When you target on the orthogonal TOF you report what you look for, but you don't throw anything else away. You collect data on everything, so if in a month's time you want to go back, re-analyse that data and look for another two or three pesticides or pollutants, you can do that because the data exists. When you interrogate that data, you typically interrogate it for the list of pesticides associated with the fruit or vegetable.

You can also do untargeted screening, where you literally report everything out; that is a move we are starting to see a little in food but more in environmental analysis. More of the move is towards targeted screening of fruits via the orthogonal TOF systems.
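The retrospective workflow described here can be sketched in a few lines: full-scan data is archived, and any future target list can be matched against it within a mass-accuracy window. All names, peak values and tolerances below are invented for illustration; a production workflow would use vendor software and SANTE-compliant identification criteria.

```python
PPM_TOL = 5.0  # illustrative mass-accuracy window in parts per million


def matches(observed_mz: float, target_mz: float, ppm_tol: float = PPM_TOL) -> bool:
    """True if the observed m/z falls within ppm_tol of the target m/z."""
    return abs(observed_mz - target_mz) / target_mz * 1e6 <= ppm_tol


def screen(peaks, targets):
    """Report every target compound found in an archived full-scan peak list."""
    hits = []
    for name, target_mz in targets.items():
        for mz, intensity in peaks:
            if matches(mz, target_mz):
                hits.append((name, mz, intensity))
    return hits


# Archived full-scan peaks (m/z, intensity) kept from the original run
archived_peaks = [(202.0863, 1.2e5), (290.1182, 3.4e4), (331.0644, 8.9e3)]

# A month later: interrogate the same data for two newly added targets
new_targets = {"compound_A": 202.0861, "compound_B": 331.0641}
print(screen(archived_peaks, new_targets))
```

The point of the sketch is the data flow, not the chemistry: because the acquisition was untargeted, `screen` can be re-run with any future target list without re-injecting the sample.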

The workflows used in the untargeted approach are very similar to those established in metabolomics, for example, where you are working out what the pathways are and what is around. The software to do untargeted screening already exists; we are just moving it from a biological type of sample to a food or environmental one. There are not really any new developments there; it is the adoption by the industry, moving in this direction.

FQN: What are the focus areas for 2017?

Galvin: We will enhance the screener packages: more automation, bigger databases, a better chance of retrospective interrogation, and more volatile and semi-volatile compounds. Historically, most of this analysis is done by LC-MS, with the volatiles done by gas chromatography instruments. Our orthogonal time-of-flight (OTOF) systems can take a gas chromatograph trace, so we have a collaboration with a European group who will be adding about 150-200 volatile and semi-volatile compounds, typically analysed by GC, into the database; the system then becomes more flexible with higher throughput.

We are linking more into the LIMS: stronger links for taking large sample lists in and reporting the findings back out to the LIMS system. We are moving into faster chromatography again, so a screen could come down to 10 minutes; we are planning to get below 10 minutes, and of course throughput goes up. A lot of these large laboratories basically see themselves as a money-printing outfit, sample in equals dollars out, so the more throughput we give them the more money they make.

We are starting to move towards parallel processing while acquisition is taking place, which will speed things up: when the run is eventually finished and we have all the calibrations, blanks and QC samples, we aim to have results out within a few minutes and back into the LIMS system. That is work in progress for 2017.

We are also in the process of launching a series of Bruker HPLC systems configured for a variety of food markets, whether that is normal throughput, high throughput or even online extraction, so it will be a total package, including HPLC columns, where we have partnered with a column manufacturer.

FQN: Is food fraud/authenticity getting as much attention as food safety, or is it second in line?

Galvin: It is one we are working on quite a lot. I mentioned the microbiology side, which uses a very different type of mass spectrometer, a MALDI-TOF system. These systems have a protein fingerprint type of application where they screen against databases, and a number of them are used in the food industry. They are also now being used for taking a fingerprint from meats or fish; the authenticity of the reference sample is initially verified by DNA sequencing, so you know what is in your library is correct, and then you can screen your unknowns against your database entries.

We are working with people on ongoing projects. Some of the whitefish, such as halibut, Dover sole and cod, are expensive, and once the skin has been taken away from the fish it is very difficult to actually prove what it is. So we are seeing our users work on that. The same for meat: a lot of the more exotic meats, game meats, are quite often fakes, so we are seeing databases on that too.

Obviously with fish it is a lot easier to buy a complete fish, which helps in the visual identification as well; as soon as that fish is cut up, any sample you are given is far more difficult to visually inspect. Those libraries are being generated.

We provide the technology and instrumentation to do this. The benefit of MALDI is that it is a very fast acquisition technology: a MALDI spectrum can be acquired in less than a minute. It is the sample preparation that takes the most time, whereas with LC-MS it is usually the analysis that is more time-consuming than the sample preparation.

When you get to meats it is very difficult to get a visual identification, as you very rarely see the complete piece, so you make sure it is the right sample through genetic testing and comparing your results to the DNA database. We do have sequences in the database, verified by genetic analysis; that then becomes your reference material, from which you take your protein spectra. Those go into your database and your unknowns are screened against them.

Using MALDI for this is fairly new, but it is quick and there is a big benefit to doing it that way. Historically, most mass spectrometers in the food industry are LC-based, so there are some LC-based methodologies out there, but they are time-consuming, lengthy analyses, and you have to target specific compounds within each species to prove they are either present or absent, whereas a MALDI spectrum is more like taking a picture of all the proteins and comparing that against the database.
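The "picture of all the proteins" idea can be illustrated with a minimal sketch: bin each spectrum into coarse m/z bins and score an unknown against library entries by cosine similarity. Real MALDI biotyping software uses much richer peak matching and validated libraries; the species, peak lists and bin width below are all invented.

```python
import math


def binned(peaks, bin_width=10.0):
    """Collapse (m/z, intensity) peaks into a sparse dict keyed by m/z bin."""
    out = {}
    for mz, inten in peaks:
        b = int(mz // bin_width)
        out[b] = out.get(b, 0.0) + inten
    return out


def cosine(a, b):
    """Cosine similarity between two sparse binned spectra (0..1)."""
    dot = sum(a[k] * b.get(k, 0.0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def identify(unknown, library):
    """Return the best-matching library species and all similarity scores."""
    scored = {sp: cosine(binned(unknown), binned(ref)) for sp, ref in library.items()}
    return max(scored, key=scored.get), scored


# Invented protein-fingerprint libraries for two whitefish species
library = {
    "cod":     [(4280, 1.0), (5120, 0.6), (6890, 0.3)],
    "halibut": [(4310, 0.9), (5500, 0.8), (7020, 0.4)],
}
unknown = [(4282, 0.95), (5121, 0.55), (6892, 0.35)]
best, scores = identify(unknown, library)
print(best)
```

The unknown's peaks fall into the same bins as the cod reference, so `identify` returns "cod" with a much higher score than "halibut".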

The same goes for olive oil adulteration: if you adulterate with something that is not olive oil, it will have different components which show up in different parts of the spectrum. So again it is a comparison to a database, and we use principal component analysis quite a lot in this type of work, which shows how pure it is or what the adulterant is.

Again, you have to tell the mass spectrometer: this is olive oil and these are the peaks that come from olive oil analysis; this is rapeseed oil or soya bean oil and these are the peaks that come from those other oils. Then, when the adulterated sample is analysed, it will recognise that some of the peaks are olive oil and some are from another oil and start to group them. You have to teach it, and give it a mixture; it stores these and compares everything against its reference spectra.

It would analyse this by principal component analysis and then let the analyst know where on the PCA plot the unknown sample sits: does it sit in the area where olive oil resides, in an area where we have a mixture, or somewhere far away from olive oil? We have been working with some groups in Spain and in China on this application. Spain is an obvious one, as a lot of olive oil is produced in that part of the Mediterranean, and a lot of fake foods are coming out of China, so there is some work going on over there as well.
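The PCA approach described above can be sketched as follows: fit principal components on reference "oil spectra", project an unknown into the same score space, and see which class centroid it lands nearest. The four-feature intensity vectors are invented toy data, not real oil spectra.

```python
import numpy as np


def fit_pca(X, n_components=2):
    """Return the mean and top principal components of row-vector data X."""
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)           # eigendecomposition of covariance
    order = np.argsort(vals)[::-1]             # sort by descending variance
    return mean, vecs[:, order[:n_components]]


def project(X, mean, components):
    """Project samples into PCA score space."""
    return (X - mean) @ components


# Invented reference spectra: pure olive oil vs pure rapeseed oil
olive = np.array([[9.0, 1.0, 0.5, 0.2], [8.8, 1.1, 0.6, 0.3], [9.2, 0.9, 0.4, 0.2]])
rapeseed = np.array([[2.0, 6.0, 3.0, 1.5], [2.2, 5.8, 3.1, 1.4], [1.9, 6.1, 2.9, 1.6]])
X = np.vstack([olive, rapeseed])
mean, comps = fit_pca(X)

scores = project(X, mean, comps)
olive_centroid = scores[:3].mean(axis=0)
rapeseed_centroid = scores[3:].mean(axis=0)

# An unknown that is mostly olive with some rapeseed character
unknown = project(np.array([[7.5, 2.2, 1.0, 0.5]]), mean, comps)[0]
d_olive = np.linalg.norm(unknown - olive_centroid)
d_rape = np.linalg.norm(unknown - rapeseed_centroid)
print("closer to olive" if d_olive < d_rape else "closer to rapeseed")
```

An adulterated sample would sit between the two clusters on the score plot, which is exactly the visual cue the analyst reads off.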

Everything for the food industry really has to be as fast, as cheap and as robust as possible in comparison to what you do in the pharmaceutical business. For a lot of it we can use the lower end of our mass spectrometers; lower end does not imply they are cheap, they are still $150-200,000 plus, but we don't typically target the food industry with the half-million or million-dollar boxes that often go into the prime research groups in pharmaceutical companies or into academic research institutes.

We are also starting to see databases coming up on insects. This is not for fraud but for proteins: in third world countries, insects are being looked at as a source of protein. All of this is at a relatively early stage; it is not as developed as the pesticide or mycotoxin screening in food.

FQN: What issues and regulations could affect 2017?

Galvin: On the LC-MS side, the challenge we have had in the past was with acetonitrile, though it has come and gone a little. There was a shortage which was hampering sample throughput and in some cases might have forced companies to move to ethanol as a solvent in reverse phase LC-MS separation.

I did a development-type stint at SCIEX in 2003-04 when changes were happening and we were moving mass spectrometers sideways from high throughput in the pharmaceutical business to the food industry; we did see a lot of change, and the guidelines for Europe were the most stringent. Japan then came out with some strict guidelines, but the US guidelines have historically not been as tough as Europe's or Japan's. We keep an eye on the Food Safety Modernization Act, which we are starting to see some implications from.

FQN: Are there any areas where you think 2017 will be the 'breakthrough' year?

Galvin: There is one piece of new technology we are highlighting, called trapped ion mobility mass spectrometry (TIMS), which we launched last year at ASMS. It gives us an extra separation dimension: as well as time, mass and sensitivity, you get cross-sectional area. Its main push is in trying to find more biomarkers; human fluids are very complex. We believe that in a year or two the extra separation power this technology gives you will allow you to speed up analysis even further. This is research thinking: new technology that goes into the pharmaceutical, biotech and medical world is usually at a different price point than the food industry would like to see. I think the second generation, when we have mastered the extra resolving power that trapped ion mobility gives you, will then break through to providing faster analysis in the food industry. History has shown a normal slide of high-end products in the pharmaceutical business becoming cheaper to manufacture over time and sliding into food or environmental analysis.

There is also quite a big push towards fewer false positives and false negatives and increasing accuracy considerably. The more information you can take from the attributes of the compound, the more accurate you are going to be.

That is another reason the screeners, the OTOF products, are starting to find favour against the triple quadrupole: you have the retention time from the separation, you have the benefits of a Q-TOF in that you are looking at accurate mass, for both the molecular ion and a number of fragments, and you can also make use of the isotope pattern, which is quite characteristic. Making use of all of that confirms that what you are screening for and reporting is the correct compound and not something else with a similar mass.
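Combining several attributes before calling a hit, as described above, can be sketched as a simple gate: retention time, accurate mass and the M+1 isotope ratio must all agree with the reference. The tolerances and compound values are invented for illustration, not SANTE criteria.

```python
def confirmed(obs, ref, rt_tol=0.2, ppm_tol=5.0, iso_tol=0.2):
    """Require retention time, accurate mass and isotope ratio to all match."""
    rt_ok = abs(obs["rt"] - ref["rt"]) <= rt_tol
    ppm = abs(obs["mz"] - ref["mz"]) / ref["mz"] * 1e6
    mass_ok = ppm <= ppm_tol
    # Relative tolerance on the observed M+1/M intensity ratio
    iso_ok = abs(obs["m1_ratio"] - ref["m1_ratio"]) <= iso_tol * ref["m1_ratio"]
    return rt_ok and mass_ok and iso_ok


# Invented reference entry: retention time (min), m/z, M+1/M isotope ratio
reference = {"rt": 6.40, "mz": 303.1441, "m1_ratio": 0.18}

hit = {"rt": 6.45, "mz": 303.1439, "m1_ratio": 0.17}     # all attributes agree
isobar = {"rt": 6.42, "mz": 303.1444, "m1_ratio": 0.35}  # similar mass, wrong isotope pattern
print(confirmed(hit, reference), confirmed(isobar, reference))
```

The second candidate passes on retention time and accurate mass alone; it is the isotope pattern that rejects it, which is the extra confidence the OTOF screeners provide over a mass-only match.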
