By Suzanne Osborne
The Food and Drug Administration’s (FDA) Food Safety Modernization Act (FSMA) holds the promise of food safety from “farm-to-fork” using science-based standards to detect and respond to food contamination at each step along the supply chain. Yet, when we look at pathogen detection methods as a whole, it becomes clear that “farm-to-fork” surveillance is a myth.
Pathogens are traditionally identified in labs by growing samples on selective media. This slow approach is increasingly unrealistic for today’s food production systems, and scientific and technological progress has led to the development of nucleic acid-, biosensor-, and immunoassay-based methods of pathogen detection.
Each of these methods offers advantages in sensitivity, specificity, or ease of use. But most of these modern detection strategies still require about 24 hours to produce results, as well as some degree of technical expertise, costly equipment, and sample enrichment prior to testing. In short, the majority of detection methods are not feasible for use by farmers or consumers. How can we call it food safety from “farm-to-fork” when the “farm” and the “fork” components are missing? Let’s look a little closer at this issue.