Microbiological risk assessment in food processing
Edited by
Martyn Brown and Mike Stringer
Published by Woodhead Publishing Limited
Abington Hall, Abington
Cambridge CB1 6AH
England
www.woodhead-publishing.com
Published in North America by CRC Press LLC
2000 Corporate Blvd, NW
Boca Raton FL 33431
USA
First published 2002, Woodhead Publishing Limited and CRC Press LLC
© 2002, Woodhead Publishing Limited
The authors have asserted their moral rights.
This book contains information obtained from authentic and highly regarded sources.
Reprinted material is quoted with permission, and sources are indicated. Reasonable
efforts have been made to publish reliable data and information, but the authors and
the publishers cannot assume responsibility for the validity of all materials. Neither the
authors nor the publishers, nor anyone else associated with this publication, shall be
liable for any loss, damage or liability directly or indirectly caused or alleged to be
caused by this book.
Neither this book nor any part may be reproduced or transmitted in any form or by
any means, electronic or mechanical, including photocopying, microfilming and
recording, or by any information storage or retrieval system, without permission in
writing from the publishers.
The consent of Woodhead Publishing Limited and CRC Press LLC does not extend
to copying for general distribution, for promotion, for creating new works, or for
resale. Specific permission must be obtained in writing from Woodhead Publishing
Limited or CRC Press LLC for such copying.
Trademark notice: Product or corporate names may be trademarks or registered
trademarks, and are used only for identification and explanation, without intent to
infringe.
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library.
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress.
Woodhead Publishing Limited ISBN 1 85573 585 7 (book); 1 85573 668 3 (e-book)
CRC Press ISBN 0-8493-1537-9
CRC Press order number: WP1537
Cover design by Martin Tacchi
Project managed by Macfarlane Production Services, Markyate, Hertfordshire
(e-mail: macfarl@aol.com)
Typeset by MHL Typesetting Limited, Coventry, Warwickshire
Printed by TJ International, Padstow, Cornwall, England
Contents

List of contributors xi
Preface xv

1 Introduction 1
M. Brown, Unilever Research, Sharnbrook and M. Stringer, Campden and Chorleywood Food Research Association, Chipping Campden
1.1 References 4

2 The evolution of microbiological risk assessment 5
S. Notermans and A. W. Barendsz, TNO Nutrition and Food Research Institute, Zeist and F. Rombouts, Wageningen Universiteit
2.1 Introduction 5
2.2 Historical aspects of safe food production 6
2.3 The evolution of food safety systems 7
2.4 International food safety standards 23
2.5 Present and future uses of microbiological risk assessment 29
2.6 List of abbreviations 38
2.7 References 39

Part I The methodology of microbiological risk assessment 45

3 Microbiological risk assessment (MRA): an introduction 47
J. L. Jouve, Ecole Nationale Vétérinaire de Nantes
3.1 Introduction 47
3.2 Key steps in MRA 49
3.3 Hazard identification 53
3.4 Hazard characterisation/dose–response assessment 54
3.5 Exposure assessment 57
3.6 Risk characterisation 60
3.7 References 63

4 Hazard identification 64
M. Brown, Unilever Research, Sharnbrook
4.1 Introduction: the importance of correct hazard identification 64
4.2 What is hazard identification? 64
4.3 What hazard identification should cover and produce as an output 65
4.4 What to do in hazard identification 66
4.5 Key information in hazard identification 67
4.6 Tools in hazard identification 69
4.7 Microbial hazards 70
4.8 Identifying the origin and distribution of microbial hazards 72
4.9 Changes in microbial hazards 73
4.10 Other biological hazards 75
4.11 References 75

5 Hazard characterization/dose–response assessment 77
S. B. Dennis, M. D. Miliotis and R. L. Buchanan, US FDA, College Park
5.1 Introduction: key issues in hazard characterization 77
5.2 Types of dose–response data 83
5.3 Modeling dose–response relationships 86
5.4 Problems in hazard characterization 90
5.5 Future trends 94
5.6 Sources of further information and advice 96
5.7 References 97

6 Exposure assessment 100
M. Brown, Unilever Research, Sharnbrook
6.1 Introduction 100
6.2 The role of exposure assessments in microbiological risk assessment 101
6.3 What's in an exposure assessment? 105
6.4 Who should do an exposure assessment and when? 109
6.5 Building up supply chain data for an exposure assessment 109
6.6 Sources of information 111
6.7 Types of data used in an exposure assessment 114
6.8 The output of an exposure assessment 117
6.9 References 123

7 Risk characterisation 127
P. Voysey, K. Jewell and M. Stringer, Campden and Chorleywood Food Research Association, Chipping Campden
7.1 Introduction: key issues in risk characterisation 127
7.2 Risk characterisation requirements 129
7.3 Risk characterisation methods 135
7.4 Quantitative and qualitative outputs 142
7.5 Risk characterisation in practice: some examples 147
7.6 Current problems and future trends 151
7.7 References 153

8 Risk communication 155
R. Mitchell, Public Health Laboratory Service, London
8.1 Introduction 155
8.2 The concept of risk 156
8.3 Risk perception 158
8.4 The concept of communication 163
8.5 Risk communication 166
8.6 The future of risk communication 169
8.7 References 170

Part II Implementing microbiological risk assessments 173

9 Implementing the results of a microbiological risk assessment: pathogen risk assessment 175
M. Van Schothorst, Wageningen University
9.1 Introduction 175
9.2 Establishing food safety objectives 177
9.3 Developing food safety management strategies 181
9.4 Establishing microbiological criteria 185
9.5 Problems in implementation 190
9.6 Future trends 191
9.7 References 191
9.8 Acknowledgement 192

10 Tools for microbiological risk assessment 193
T. Wijtzes, Wijtzes Food Consultancy, Gorinchem
10.1 Introduction 193
10.2 Qualitative tools for risk assessment 195
10.3 Predictive modelling 196
10.4 Tools for modelling, prediction and validation 203
10.5 Future trends 209
10.6 Sources of further information and advice 210
10.7 References 211

11 Microbiological criteria and microbiological risk assessment 214
T. Ross, University of Tasmania, and C. Chan, Safe Food Production NSW, Sydney
11.1 Introduction 214
11.2 Types of criteria 215
11.3 Key issues in the use of microbiological criteria 217
11.4 Dealing with variability, uncertainty and hazard severity: sampling plans 221
11.5 Microbiological criteria and food safety assurance: food safety objectives 226
11.6 Using microbiological risk assessments to set microbiological criteria 228
11.7 Using microbiological risk assessments to develop performance and process criteria 231
11.8 Using microbiological risk assessments to prioritise risk management actions 236
11.9 Using criteria in risk assessments 237
11.10 Future trends 239
11.11 Further reading 240
11.12 References 241
Appendix 246

12 HACCP systems and microbiological risk assessment 248
R. Gaze, R. Betts and M. Stringer, Campden and Chorleywood Food Research Association, Chipping Campden
12.1 Introduction 248
12.2 Legal requirements for HACCP systems 249
12.3 International guidance on HACCP implementation 250
12.4 Problems in HACCP implementation 256
12.5 The interaction between HACCP systems and microbiological risk assessment (MRA) 258
12.6 The future relationship of HACCP systems and MRA 261
12.7 References 263

13 The future of microbiological risk assessment 266
M. Brown, Unilever Research, Sharnbrook and M. Stringer, Campden and Chorleywood Food Research Association, Chipping Campden
13.1 Introduction 266
13.2 Information needs for risk assessment 269
13.3 How should risk assessment processes develop? 278
13.4 Key steps in risk assessment 280
13.5 Risk acceptance 284
13.6 The outputs of risk assessment: risk management and communication 289
13.7 Conclusion 292
13.8 References 292

Index 293
Contributors

Chapters 1 and 13
Professor Martyn Brown
Unilever Research
Colworth House
Sharnbrook
Bedford MK44 1LQ
England
Tel: +44 (0) 1234 222351
E-mail: martyn.brown@unilever.com
Dr Mike Stringer
Campden & Chorleywood Food
Research Association Group
Chipping Campden
Gloucestershire GL55 6LD
England
Tel: +44 (0) 1386 842003
Fax: +44 (0) 1386 842030
E-mail: m.stringer@campden.co.uk
Chapter 2
Dr Ir Servé Notermans and A. W.
Barendsz
TNO Nutrition and Food Research
PO Box 360
3700 AJ
Zeist
The Netherlands
Tel: +31 30 6944943
Fax: +31 30 6944901
E-mail: notermans@voeding.tno.nl
Professor Dr Ir F. Rombouts
Wageningen Agricultural University
Bode 117
Postbus 8129
6700 EV Wageningen
The Netherlands
Tel: +31 317 4 82233
Fax: +31 317 4 84893
E-mail: Frans.rombouts@micro.fdsci.wau.nl
Chapter 3
Professor Jean-Louis Jouve
Ecole Nationale Veterinaire de Nantes
Atlanpole-La Chanterie
BP 40706
44307 Nantes Cedex 03
France
Fax: 02 40 68 77 78
E-mail: Jeanlouis.jouve@fao.org
Chapters 4 and 6
Professor Martyn Brown
Unilever Research
Colworth House
Sharnbrook
Bedford MK44 1LQ
England
Tel: +44 (0) 1234 222351
E-mail: martyn.brown@unilever.com
Chapter 5
Dr Robert L. Buchanan, Dr Sherri
Dennis and Dr Marianna Miliotis
Food and Drug Administration
Center for Food Safety and Applied
Nutrition
Office of Science, HFS-06
5100 Paint Branch Parkway
College Park
Maryland 20740-3835
USA
Tel: 301 436 1903
Fax: 301 436 2641
E-mail: sdennis@cfsan.fda.gov
Chapter 7
Dr P. Voysey, Mr K. Jewell and
Dr Mike Stringer
Campden & Chorleywood Food
Research Association Group
Chipping Campden
Gloucestershire GL55 6LD
England
Tel: +44 (0) 1386 842069
Fax: +44 (0) 1386 842100
E-mail: p.voysey@campden.co.uk
k.jewell@campden.co.uk
m.stringer@campden.co.uk
Chapter 8
Dr R. T. Mitchell
Head, Environmental Surveillance
Unit
Communicable Disease Surveillance
Centre
61 Colindale Avenue
London NW9 5EQ
England
Tel: +44 (0) 20 8200 6868
Fax: +44 (0) 20 8905 9907
E-mail: rmitchel@phls.org.uk
Chapter 9
Dr M. van Schothorst
P.O. Box 8129
Wageningen, 6700 EV
The Netherlands
Tel: +41 21 944 2755
Fax: +41 21 944 2792
E-mail: michiel.van-schothorst@micro.fdsi.wau.nl
mvanschot@bluewin.ch
Chapter 10
Dr Ir Taco Wijtzes
Wijtzes Food Consultancy
Dr Schöyerstraat 52
4205 KZ Gorinchem
The Netherlands
Tel: +31 (0)183 614334
Fax: +31 (0)183 617414
E-mail: Wijtzes@foodconsult.nl
Chapter 11
Dr Tom Ross
School of Agricultural Science
University of Tasmania
GPO Box 252-54
Hobart
Tasmania 7001
Australia
Tel: +61 (0) 3 62 26 1831
Fax: +61 (0) 3 62 26 2642
E-mail: tom.ross@utas.edu.au
Mr Chris Chan
Safe Food Production NSW
PO Box A 2613
Sydney South
NSW 1235
Australia
Tel: +61 (0) 2 9295 5777
Fax: +61 (0) 2 9261 2434
E-mail:
chris.chan@safefood.nsw.gov.au
Chapter 12
R. Gaze, R. Betts and Dr Mike
Stringer
Campden & Chorleywood Food
Research Association Group
Chipping Campden GL55 6LD
Gloucestershire
England
Tel: +44 (0) 1386 842000
Fax: +44 (0) 1386 842100
E-mail: r.gaze@campden.co.uk
r.betts@campden.co.uk
m.stringer@campden.co.uk
Preface

Complete elimination of risk from food manufacture and consumption is an
impossible goal, but risk reduction is an essential part of every food producer's
responsibility to protect both its customers and its business. Risk reduction is
necessary because the term risk is never applied to good events. This book
presents microbiological risk analysis (MRA) concepts, principles and
techniques to help the reader understand and use them for managing food safety.
Theoretical studies and research work in the area of risk have provided
powerful analytical tools for dealing with microbiological and epidemiological
information, reaching and communicating decisions and then taking preventa-
tive actions that are appropriate to the hazard, consumers and the intended use of
the product. The use of a range of indirect assessment tools as explained in the
chapters of this book is necessary because risk cannot be directly measured; it
can only be calculated, based on data indicating probability and type of hazard.
Many of the techniques available have their roots in risk assessment in other
fields; the chapter authors have refined their application to microbiological risk
assessment. Similarly many international organisations (e.g. World Health
Organisation, Codex Alimentarius, International Life Sciences Institute and the
International Commission on Microbiological Specifications for Foods) are also
focussing these tools on microbiological hazards, whilst also trying to maintain
uniformity in risk-based approaches to protecting all aspects of public health, so
that it is evident to consumers that chemical and microbiological hazards are
assessed in the same way.
The performance of MRA will always be limited by the availability of data.
In spite of this, informal risk assessments leading to management actions are
taken every day based on expert opinion and assumptions that cannot be
validated. Part of the function of this book is to highlight the benefits of formal
risk analysis systems and encourage risk managers to ensure that transparent and
unbiased risk assessment processes and the best available data are used for
decision making. As part of this process, uncertainties and variability or
imperfections in data should be clearly identified and taken into account in
decisions. Methods for doing this are explained. Potential users of MRA rightly
expect the technique to produce results that they can use to suggest controls and
improvements at costs that their products can bear and their consumers accept.
Use of formal MRA systems will increase the chances of achieving this; but
performance will always be limited by the availability of information for the risk
assessment. It should be within the capabilities of anyone undertaking a risk
assessment to identify hazards and generate reliable supply chain data for the
exposure assessment; the weak link may lie in providing hazard characterisations
appropriate to consumers. As foods and drinks are increasingly 'tailored'
for specific consumer groups it is important that we have a better knowledge of
the relationship between exposure to a particular hazard and the severity of any
associated adverse health effects.
Governmental management of food safety is changing on a global basis to
meet the challenge of changing patterns in the food trade, such as globalisation
of the food supply. In the near future, there is the challenge of how MRA will
lead to the establishment of food safety objectives, providing an appropriate
level of protection and how this will impinge on industrial food safety
management practices such as HACCP. Independent, consumer focussed
national (e.g. UK Food Standards Agency) and regional authorities (e.g.
European Food Safety Authority) have been established and they are seeking
common ways of working. Never before has the requirement for absolute
transparency in the decision-making process behind food safety management
policy been so necessary. The three components of risk analysis suggested by
Codex Alimentarius – assessment, communication and management – are now
accepted tools for reaching supportable decisions on public health policy, risk
management strategies and preventative measures. To be credible, such decision
making has to be based on the available science and take account of political,
economic and other factors that may alter local perception of a risk. Additionally,
the techniques have far wider application at the food production level, for
communicating and managing risks at the plant level by providing material for
improved hazard analyses and for providing information to help trading partners
and consumers make informed choices.
The topic of risk is frequently examined by the media, who often dwell on
knowledge of a new hazard or a change in risk to the extent that many
consumers feel that they face more risks from food today than in the past and
producers feel that changes in the trading environment and consumer preference
introduce new hazards and increased levels of risk. This book only deals with
the technical analysis of risk; the topics of risk perception and acceptable risk
are not covered because expertise in MRA does not yet exist to address these
issues. The best that can be produced by risk assessors is a technically justifiable
evaluation of real (rather than perceived) risks, with uncertainty and variability
clearly explained. Microbiologists cannot handle perceived risks because every
consumer is different and each one perceives hazards and risks differently,
making it difficult to reach valid conclusions. Risk assessment can however
improve the performance of risk management tools, such as HACCP, and this is
well illustrated by the chapters.
Risk analysis is focused on microbiological safety because injuries and deaths
caused by food have the most serious implications possible for a country or
business (e.g. BSE). It is very difficult to quantify fully the costs of consumer
compensation or liability in defining the implications of safety management
failures. Often these costs are borne centrally, whereas risk management
decisions are made at the local level. Closer commercial links between suppliers
and retailers and users of products have resulted in a need for much closer control
and management of food hygiene and safety. This book is aimed at helping risk
managers take a wider view of systems and techniques, so that they can directly
contribute to the success of their business or agency. Different products have
different hazards, and different consumers different sensitivities, and this must be
considered by the risk assessor and risk manager in developing cost-effective risk
management procedures.
The most serious long-term consequence of a safety failure for a food
business is a decision by consumers to change their eating habits; risk
modification is an action everybody takes to some extent. An unpredictable
change in eating habits, because of unidentified or uncontrolled hazards, can
destroy the value of a business. Therefore decisions between the courses of action
available to a risk manager, including providing information to consumers, must
be guided by the best information (risk assessment) and the most transparent
procedures available – risk management and communication. A comprehensive
MRA provides a framework from which the potential effectiveness of different
intervention or mitigation strategies for risk management can be assessed, thus
enabling more scientifically robust decision making. Users of this book need to
remember that risk analysis deals with real events and real consequences and
that people (e.g. consumers and plant staff and management) are an integral part
of the process of assessment and management. MRA should be used for decision
making or trade-offs, limited by acceptable risk on one hand and consumer
preference on the other. No magic words or infallible techniques exist in this
area; this book only provides access to the existing tools and sources of
information.
Martyn Brown and Mike Stringer
1
Introduction

M. Brown, Unilever Research, Sharnbrook and M. Stringer, Campden and Chorleywood Food Research Association, Chipping Campden

Attempts to assess the nature of the risks posed by foodborne pathogens to
consumers have long been undertaken by the food industry as a means of
ensuring safe food. However, the 1990s in particular have seen growing
government and industry commitment towards developing an internationally-
accepted methodology for assessing the importance of microbiological risks. A
number of factors have driven this process. Serious and well-publicised
outbreaks of foodborne disease in the US and Europe have highlighted the need
to improve the identification of new hazards, the assessment and management of
existing microbiological food safety risks, and the need for dialogue with
consumers about microbiological safety (Pennington, 1997; Tuttle et al., 1999).
At the same time, developments in risk assessment methodology, better
microbiological data and greater computing power have made it possible to
develop more sophisticated and meaningful risk assessments (Tennant, 1997;
Benford, 2001; Morgan, 1993).
Further impetus has been provided by the continued globalisation of the food
supply, and renewed attempts to harmonise food safety principles and practice in
international trade. In 1993 the Uruguay Round of the General Agreement on
Tariffs and Trade (GATT) resolved that barriers to international trade in food,
including those designed to protect public health, could only be science-based.
In response, member countries of the World Trade Organisation (WTO)
concluded the sanitary and phytosanitary (SPS) agreement (Anon., 1995a and b).
The SPS agreement proposed the key requirements necessary to demonstrate
equivalent levels of safety in foods originating in different nations, produced by
different manufacturing systems and complying with differing regulatory
requirements. The agreement requires that food safety measures taken by
individual countries are:
• applied only to the extent required to protect human health
• based on scientific principles
• not maintained without scientific evidence
• based on an assessment of the risk to health that is appropriate to the circumstance
The WTO turned for guidance on defining suitable criteria to the Codex
Committee on Food Hygiene of the Codex Alimentarius Commission (CAC), the
body set up by the World Health Organisation (WHO) and the Food and Agriculture Organisation (FAO) of the
United Nations to develop benchmark standards and procedures for international
food safety management. The CAC subsequently developed principles and
guidelines to define the nature of, and provide a methodology for, assessing the
risks to human health from pathogens in foods (Anon., 1996; Anon., 1999). These
guidelines provide the foundation for a common methodology for microbiological
risk assessment and management by WTO countries.
The CAC defines microbiological risk assessment as a scientifically-based
process involving four key steps which are designed to produce a risk estimate:
1. hazard identification
2. hazard characterisation
3. exposure assessment
4. risk characterisation
Hazard identification establishes the causal relationship between a pathogenic
agent, an illness and a food acting as a vector of that illness. Hazard
characterisation, or dose-response characterisation as this stage is also known,
attempts to relate the probability and severity of illness to the dose of the
pathogen (or toxin) ingested by the consumer. Exposure assessment seeks to
estimate the scale of exposure by assessing how much and how often consumers
are exposed to a hazardous agent in food as a result of contamination levels and
the effects of processing, distribution and consumer use. Finally, risk
characterisation synthesises the output of the previous stages to provide an
estimate (qualitative or quantitative) of the level of risk for a defined group of
consumers from the identified pathogen in a particular food product.
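Taken together, the four steps amount to a simple calculation chain. The Python sketch below is a deliberately simplified illustration, not an assessment discussed in this book: the prevalence, contamination level, serving size and dose–response parameter are hypothetical values chosen only to show how an exposure estimate and a dose–response model combine into a per-serving risk estimate.

```python
import math

# Exposure assessment (all inputs hypothetical, for illustration only)
prevalence = 0.02                # fraction of servings contaminated
concentration_cfu_per_g = 5.0    # mean level in a contaminated serving
serving_size_g = 100.0           # amount eaten per serving

dose = concentration_cfu_per_g * serving_size_g  # cfu ingested from a contaminated serving

# Hazard characterisation: exponential dose-response model,
# P(illness | dose) = 1 - exp(-r * dose), with r an organism-specific parameter
r = 1e-4                         # hypothetical per-organism probability of illness

p_ill_if_contaminated = 1.0 - math.exp(-r * dose)

# Risk characterisation: combine prevalence with the conditional probability of illness
risk_per_serving = prevalence * p_ill_if_contaminated
print(f"Estimated risk of illness per serving: {risk_per_serving:.2e}")
```

A real assessment would replace these point values with distributions reflecting variability and uncertainty, but the structure of the calculation is the same.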
This introduction shows that microbiological risk assessment is still a
relatively new and emerging discipline. Relatively few formal microbiological
risk assessments have been completed, in part because of the resources required
and the relative paucity of information in some areas. In particular, few formal
assessments have been undertaken by the food industry to form the basis for risk
management decisions. As a result, much remains to be discovered in the light
of practical experience (some of these completed assessments are discussed in
Chapter 7). A number of immediate challenges have been identified by
individual formal assessments. These challenges include (Anon., 2000a and b;
Ross and McMeekin, 2002):
• problems in the quantity and quality of suitable and relevant data
• issues in the handling of variability and uncertainty
• the limited availability of trained personnel
• debates over methodology, for example how best to model the inputs of the hazard to the supply chain and the resulting outputs with the product, and how to model dose–response data
• how to express the output of a risk assessment in a way that is both accurate and meaningful to food safety managers and consumers
This book is the first comprehensive review not only of the methodology of
microbiological risk assessment in the light of experience, but also of the range
of problems encountered in practice and how these might be addressed. Two
initial chapters set the scene. The first (Chapter 2) puts microbiological risk
assessment in the context of the broader development of international food
safety standards, whilst Chapter 3 introduces basic microbiological risk assess-
ment methodology. These chapters are followed by authoritative coverage of the
four key stages in microbiological risk assessment (Chapters 4 to 7), explaining
and reviewing the individual steps which underpin each stage, the problems
involved in a practical study and how they might be overcome or, at least, their
effects minimised. A subsequent chapter (Chapter 10) reviews the range of
qualitative, quantitative and computational tools (such as predictive modelling)
available to support each of these stages in an assessment.
The CAC has placed risk assessment as the first step within a broader
framework of risk analysis consisting of:
• risk assessment
• risk communication
• risk management
As its name suggests, risk assessment provides a formal, validated and
transparent estimate of the level of risk which can be communicated to key
groups, such as policy and decision makers, QA professionals and consumers.
Such assessments provide a basis for making decisions, setting priorities and
adopting appropriate procedures for food safety management. The book
therefore includes a chapter on the challenge of risk communication (Chapter
8). There is also a detailed introduction to the issues involved in using risk
assessment as a basis for the effective management of pathogen risks related to
food production (Chapter 9). Chapter 11 discusses how such assessments can be
used to establish microbiological criteria (for specifications) and food safety
objectives (FSOs). The chapter shows how these can be used as inputs into food
safety management tools such as hazard analysis and critical control point
(HACCP) systems, or as benchmarks for establishing equivalence between food
safety management or regulatory regimes. Chapter 12 considers in detail the
critical relationship between microbiological risk assessment and food safety
management systems such as HACCP systems. The concluding chapter looks at
the future of microbiological risk assessment, including developments in
methodology, risk communication and management, and the acceptance of risk
by consumers.
1.1 References
ANON. (1995a) Results of the Uruguay round of the multilateral trade
negotiations 1993: agreement on application of sanitary and phytosanitary
measures. World Trade Organisation, Geneva.
ANON. (1995b) Results of the Uruguay round of the multilateral trade
negotiations 1993: agreement on technical barriers to trade. World Trade
Organisation, Geneva.
ANON. (1996) Principles and guidelines for the application of microbiological
risk assessment. Alinorm 96/10 Codex Alimentarius Commission, Rome.
ANON. (1999) Principles and guidelines for the conduct of microbiological risk
assessment. Alinorm 99/13A Codex Alimentarius Commission, Rome.
ANON. (2000a) Guidelines on hazard characterisation for pathogens in food and
water (preliminary document). World Health Organisation/Food and
Agriculture Organisation of the United Nations, Rome.
ANON. (2000b) Report of the joint FAO/WHO expert consultation on risk
assessment of microbiological hazards in foods: 17–21st July, 2000.
World Health Organisation/Food and Agriculture Organisation of the
United Nations, Rome.
BENFORD D (2001) Principles of risk assessment of food and drinking water
related to human health. International Life Sciences Institute (ILSI),
Brussels.
MORGAN M G (1993) Risk analysis and management, Scientific American 269
(1): 32–41.
PENNINGTON T H (1997) The Pennington Group Report on the circumstances
leading to the 1996 outbreak of infection with E. coli O157 in Central
Scotland, the implications for food safety and the lessons to be learned.
The Stationery Office, Edinburgh.
ROSS T and MCMEEKIN T (2002) Risk assessment and pathogen management, in
Blackburn, C de W and McClure, P (eds), Foodborne pathogens: hazards,
risk analysis and control. Woodhead Publishing Ltd, Cambridge.
TENNANT D R (1997) Integrated food chemical risk analysis, in Tennant, D R
(ed), Food chemical risk analysis. Blackie Academic and Professional,
London.
TUTTLE J, GOMEZ T, DOYLE M, WELLS J, ZHAO T, TAUXE R and GRIFFIN P (1999)
Lessons from a large outbreak of E. coli O157 infections, Epidemiol Infect
122: 195–92.
2
The evolution of microbiological risk assessment

S. Notermans and A. W. Barendsz, TNO Nutrition and Food Research Institute, Zeist and F. Rombouts, Wageningen Universiteit

2.1 Introduction
Microbial risk assessment (MRA) is a relatively new tool in the quest for a
better means of ensuring the production of safe food. As stated in Chapter 1,
MRA comprises four successive key steps: (i) hazard identification, (ii)
hazard characterisation, (iii) exposure assessment and (iv) risk characterisa-
tion. The use of risk assessment ensures that control of food safety is based on
a logical and scientific approach to the problems involved. In practice,
elements of MRA have been utilised for many years, although, in earlier
times, they were not formally recognised as such. Hazard identification, for
example, began at the end of the nineteenth century when the work of van
Ermengem served to clarify the etiology of botulism in humans (Van
Ermengem, 1896). Later milestones in this category included the recognition
of Clostridium perfringens as a foodborne pathogen in 1943 (McClane, 1979)
and Bacillus cereus in the 1950s (Granum, 1997). Human infections with
Listeria monocytogenes were well known by the 1940s and foodborne
transmission was suspected (Rocourt and Cossart, 1997), but it was not until
the occurrence of an outbreak in Canada in 1981 that conclusive evidence was
obtained. In this case, illness followed the consumption of contaminated
coleslaw (Farber and Peterkin, 2000). Since then, numerous foodborne
outbreaks have been reported in different countries, and prevention of
listeriosis has become a major challenge for the food industry.
Regarding hazard characterisation, data have been obtained from the analysis
of many incidents of foodborne disease. Although such information is not
sufficient to establish dose–response relationships, some outbreaks have yielded
useful data on attack rates and exposure levels for particular pathogens.
Even in the distant past, there was evidence of a rational approach to the
control of food safety. Therefore, the evaluation of MRA in this chapter begins
with some historical aspects of safe food production, followed by discussion of
food control systems that have been developed and applied in the past, with
special reference to MRA principles. Section 2.4 deals with the establishment of
international food safety standards based on the use of risk assessment. In
Section 2.5, consideration is given to the ways in which MRA is becoming
integrated in food industry practices and some examples of beneficial
applications are included. Finally, current issues in MRA are discussed.
2.2 Historical aspects of safe food production
The need to produce safe food has a long history. Problems with foodborne
diseases must have been a continuous preoccupation of early humans once they
began their hunting and food-gathering activities, and domestic production of
food animals and crops. Although the exact timing is uncertain, organised food
production probably started between 18 300 and 17 000 years ago, when barley
production is said to have flourished in the Egyptian Nile Valley (Wendorf et
al., 1979). During that time, there was a need to preserve the grain and keeping it
in a dry condition was an obvious precaution. Attempts to preserve other foods
were based mainly on experience gained in associating the spoilage of a food
with the manner in which it had been prepared and stored. The same would be
true for keeping food safe. Increasingly, it became clear that a safe condition
could only be maintained if the product was kept dry and away from contact
with air. Some foods were treated with honey and later with olive oil (Toussaint-
Samat, 1992). This led to the development of additional preservative measures,
such as heating and salting. Once salt had been found to have a preservative
capability, its value increased, since it was not available in sufficient quantity to
meet the demand. According to Toussaint-Samat (1992), the large amount of salt
in the Dead Sea was one of the reasons for the interest of the Romans in
Palestine.
Over many millennia, humans have learned how to select edible plant and
animal species, and how to produce, harvest and prepare them for food purposes.
This was mostly done on the basis of trial and error and from long experience.
Many of the lessons learned, especially those relating to adverse effects on
human health, are reflected in various religious taboos, which include a ban on
eating specific items, such as pork, in the Jewish and Muslim religions
(Tannahill, 1973). Other taboos showed a more general appreciation of food
hygiene. In India, for example, religious laws prohibited the consumption of
certain 'unclean' foods, such as meat cut with a sword, or sniffed by a dog or cat,
and meat obtained from carnivorous animals (Tannahill, 1973). Most of these
food safety requirements were established thousands of years ago when religious
laws were likely to have been the only ones in existence. The introduction of
control measures in civil law came much later.
Because the underlying causes of foodborne illness were unknown, microbial
food poisoning was recurrent. However, the situation changed after 1795, when the
French government, driven by war, offered a substantial reward for anyone
developing a new method of preserving food. It was Nicholas Appert, a Parisian
confectioner, who accepted the challenge and developed a wide-mouth glass bottle
that was filled with food, corked and heated in boiling water for about six hours. In
1810, Durand in England patented the use of tin cans for thermal processing of
foods, but neither Appert nor Durand understood why thermally processed foods
did not spoil (Hartman, 1997), despite the fact that in 1677 van Leeuwenhoek had
discovered 'his little heat-sensitive animalcules' (Dobell, 1960).
Louis Pasteur provided the scientific basis for heat preservation in the period
1854¨C1864. During this time, he showed that certain bacteria were either
associated with food spoilage or caused specific diseases. Based on Pasteur¡¯s
findings, commercial heat treatment of wine was first introduced in 1867 to
destroy any undesirable microorganisms, and the process was described as
'pasteurisation'. Another important development occurred in Germany, when
Robert Koch introduced a method of growing microorganisms in pure culture
and, with colleagues, first isolated the Vibrio cholerae bacterium in 1884, during
a worldwide pandemic (Chung et al., 1995). Over the next 100 years or more,
laboratory isolation and study of pure cultures of microbes remained among the
predominant activities of food microbiologists (Hartman, 1997).
2.3 The evolution of food safety systems
When it was accepted that people could contract disease from contaminated
food, hygiene control laws were introduced and examples can be seen in old
legal records. Table 2.1 gives an overview of the more important milestones in
developing food safety systems. In the absence of knowledge about the causes of
serious foodborne diseases and their etiology, use was made of the 'prohibition'
principle. This means that it was prohibited to produce and/or to consume
certain types of food after it was realised that the foods could be a cause of high
mortality. The principle was used particularly to protect special groups of
individuals within society, such as soldiers. After the recognition at the end of
the nineteenth century that microbial agents were often responsible for
foodborne illness, systems for controlling the safety of the food supply began
to be introduced.
First, use was made of microbiological testing of foods and this became
widely accepted as a means of assessing food safety during the early part of the
twentieth century. Eventually, statutory microbiological requirements relating to
food safety were established in many parts of the world. Further progress
occurred when Esty and Meyer (1922) developed the concept of setting process
performance criteria for heat treatment of low-acid canned food products to
reduce the risk of botulism. Later, many other foods processed in this way were
controlled in a similar manner. An outstanding example is the work of Enright et
al. (1956, 1957) who established performance criteria for the pasteurisation of
milk that provided an appropriate level of protection against Coxiella burnetii,
the causative agent of Q fever. Studies for tuberculosis had been carried out
earlier. This work is an early example of the use of risk assessment principles in
deriving process criteria.
With greater knowledge of the more important foodborne diseases and the
establishment of risk factors from analyses of outbreaks came the development
of more comprehensive means of controlling food safety in production. These
included the elaboration of good manufacturing practice (GMP), which helps to
minimise microbial contamination of food from personnel and the production
environment, and, ultimately, the hazard analysis critical control point (HACCP)
system (Department of Health, Education and Welfare, 1972), in which GMP
plays an important part.
The ability of different bacteria to multiply in foods is influenced by several
key factors, including pH, water activity and storage temperature. The effects of
these factors, both singly and in combination, have been studied extensively
in laboratory media and model food systems, and this has led to the development
of mathematical models for predicting bacterial growth in commercial food
products. Although not a food safety system on its own, predictive modelling is
a valuable tool, which has helped to make possible the introduction of quantitative
risk assessment (QRA). The latter has been used for many years in other
disciplines and its use in food microbiology has been stimulated by the decision
of the World Trade Organisation (WTO) to promote free trade in safe food
(Anon., 1995). It has been emphasised, however, that control of food safety in
this context must be based on the application of sound scientific principles, and
risk analysis is seen as the basis for ensuring that the requirement is met.
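The kind of predictive model referred to above can be as simple as the widely used Ratkowsky 'square-root' relationship between temperature and growth rate. The Python sketch below is a minimal illustration with made-up parameter values; it is not fitted to any particular organism and is not a model prescribed by this chapter.

```python
def square_root_growth_rate(temp_c: float, b: float = 0.03, t_min_c: float = -2.0) -> float:
    """Ratkowsky square-root model: sqrt(mu) = b * (T - Tmin).

    Returns a predicted specific growth rate (per hour) at temp_c.
    The coefficients b and t_min_c are illustrative, not fitted values.
    """
    if temp_c <= t_min_c:
        return 0.0  # no growth predicted at or below the notional minimum temperature
    return (b * (temp_c - t_min_c)) ** 2

# Compare predicted growth rates under chill and ambient storage
for temp in (4.0, 10.0, 25.0):
    print(f"{temp:4.1f} degC -> predicted growth rate {square_root_growth_rate(temp):.3f} per hour")
```

Models of this kind, applied step by step along the supply chain, are what make the quantitative exposure estimates used in QRA feasible.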
The next sub-section gives more detailed information on the above-
mentioned food safety initiatives, with special reference to risk assessment
procedures.
Table 2.1 Important milestones in the development of food safety systems

Time             Activity
Distant past     Use of 'prohibition' principle to protect special groups within society against foodborne illnesses
1900 to present  Microbiological examination of food
1922             Introduction of process performance criteria by Esty & Meyer for canned, low-acid food products
1930–1960        Use of risk assessment (for different pathogenic organisms) in setting process performance criteria for heat pasteurisation of milk
1960             Introduction of good manufacturing practices
1971             Introduction of formal hazard analysis critical control point system
ca 1978          Start of predictive modelling of bacterial growth in food
1995             Introduction of formal quantitative risk analysis
2.3.1 The 'prohibition' principle
As their trade in food increased, the Romans paid greater attention to the
question of preventing spoilage, and a new rule emerged: it was prohibited to
sell spoiled food of any kind. The aedilis (churchwarden) inspected and
controlled food markets, and was charged with confiscating any food that had
become spoiled. Over the last 2000 years, the 'prohibition' principle has
continued to be applied in many societies to protect consumers from both
spoiled food and that likely to contain deadly disease agents. Some examples are
given below.
Consumption of blood products
In an historical account of food safety measures, Baird-Parker (2000) describes
the action taken by Emperor Leo VI of Byzantium (AD 886–911). The Emperor
introduced an outright ban on the consumption of blood products as a means of
reducing the high incidence of poisoning associated with sausages among his
people. The law applied particularly to blood sausages and carried a high penalty
if it was disregarded, which indicates the seriousness of the problem. It was
stated: 'A person found to have blood prepared as food, whether he buys or sells
it, shall have all his property confiscated and, after having been severely
scourged and disgracefully shaved, shall be exiled for life.' From the data
available and current expert knowledge, it is clear that the 'blood disease' was
actually botulism.
Selling of contagious flesh
In a document entitled 'A history of government regulation of adulteration and
misbranding of food,' Hutt and Hutt (1984) refer to the English Statute of Pillory
and Thumbrell (1266/67). This required the following: 'If any butcher do sell
contagious flesh or which has died from the murrain (rinderpest), he must be
punished.' The customary punishment was to be placed in the stocks with the
offending meat buried underneath.
Unsold fish
In 1319, the municipal authorities in Zurich, Switzerland, issued an ordinance
prohibiting the sale of any fish that had been left over from the day before. A
similar rule also operated in the city of Basel. However, such fish could still be
sold to strangers (Kampelmacher, 1971).
Eating of pufferfish
In Japan, a dish known as 'fugu' is, historically, one of the most favoured and
heraldic forms of fish eating. Nevertheless, consumption of this food has resulted
in many deaths, and the problem continues to this day. Consumption of the
delicacy was banned in 1550 by the Emperor, after a group of soldiers had died,
but the ban was abolished in 1888 when the Japanese Prime Minister tasted a
small sample of fugu and survived. This disease is known as blowfish or
pufferfish poisoning and is due to the neurotoxic effects of tetrodotoxin,
which occurs in various species of pufferfish. The dish is now prepared only by
chefs who have been specially trained and certified by the Japanese government
and can be relied upon to free the flesh of the toxic liver, gonads and skin.
Despite these precautions, many cases of tetrodotoxin poisoning are reported
each year in people consuming fugu (Source: Medical Journal, 12 June, 2001,
vol. 2, no. 6).
Sale of bongkrek
In the Regency of Banyumas and surrounding areas of Central Java, Indonesia,
tempe bongkrek and other coconut-based products are prepared from partly
defatted coconut. The raw material for tempe bongkrek is sometimes mixed with
the residue obtained from the manufacture of tofu (soybean curd) and allowed to
ferment with the mould Rhizopus oligosporus. Under certain conditions, a
contaminating bacterium, Burkholderia cocovenenans, is able to grow and produce
two distinct toxins: colourless bongkrek acid and yellow-coloured toxoflavin.
Bongkrek food poisoning usually has a latency period of 4–6 hours. Typical
symptoms include malaise, abdominal pain, dizziness and extensive sweating.
The victim becomes fatigued and drowsy and eventually passes into a coma.
Death occurs 1–20 hours after the onset of the initial symptoms (Steinkraus,
1996). Because many Banyumas people have died as a result of eating tempe
bongkrek, sale of the product is now prohibited.
2.3.2 The 'precautionary' principle
Once proper scientific data became available, the principle of prohibition began
to be largely replaced by food safety regulations, which included process
performance criteria, product specifications and specified storage conditions.
The risk of botulism from blood sausages was minimised by introducing both
product specifications and requirements for storage. As mentioned previously,
the safety of fugu was improved by giving more attention to the training of chefs
and ensuring that toxic organs were properly removed from the fish.
Despite these advances, another principle, the 'precautionary' principle, is
still relevant in some situations, although its application is mainly restricted to
certain vulnerable groups of the population, where absolute safety cannot be
guaranteed with respect to some foods. For example, senior citizens in the USA
are advised not to eat the following types of food (see www.foodsafety.gov/~fsg/
sr2.html):
• Raw fin fish and shellfish, including oysters, clams, mussels and scallops.
• Raw or unpasteurised milk or cheese.
• Soft cheese, such as feta, brie, camembert, blue-veined and Mexican-style cheese.
• Raw or lightly cooked eggs or egg products, including salad dressings, cookie or cake batter, sauces and beverages, such as egg nog. (Foods made from commercially pasteurised egg are safe.)
• Raw meat or poultry.
• Raw sprouts (alfalfa, clover and radish).
• Non-pasteurised or untreated fruit or vegetable juice. (These juices will carry a warning label.)
The reason for giving such advice to the elderly is that they are more likely to
be affected by any harmful bacteria that are present in the above foods. Once
illness occurs, older people face the risk of more serious health problems, even
death. With increasing age, natural defences, such as the immune system and
production of stomach acid, become weaker. Also, underlying conditions,
including diabetes and kidney disease, as well as some cancer treatments, may
increase the risk of an individual succumbing to foodborne illness and suffering
serious consequences. Other groups within the population may also show greater
susceptibility to foodborne illness. These include pregnant women, neonates and
patients given immunosuppressive drugs for treatment of diseases such as cancer
and rheumatoid arthritis. Here, examples of appropriate precautionary advice
include a recommendation to avoid feeding honey to infants below one year of
age, because of the risk of botulism (described in detail by Lund and Peck,
2000). Also, advice is given in several countries to pregnant women to stop
eating certain pâtés and soft cheeses due to the risk of contracting listeriosis (see
Fig. 2.1).

Fig. 2.1 Health advice given to pregnant women by the Health Department of Western Australia, 1995.
Unfortunately, the warning of vulnerable groups against these particular
hazards varies considerably between countries and in some cases is non-existent.
At the other end of the 'precautionary' scale is its use in risk management
when there is a lack of proper scientific evidence or possible legal difficulties. A
recent example from the UK was the ban on butchers selling beef-on-the-bone,
because of the perceived risk of transmitting to humans the agent of bovine
spongiform encephalopathy from bone marrow. Although the risk was
considered extremely small, sale of the product was nevertheless prohibited
by law.
2.3.3 Establishing process criteria
At the start of the twentieth century, it had already been recognised that
protection of the public against foodborne hazards required proper control of
heat treatments used commercially in food production. Two examples are
presented here: (i) the performance criteria for destroying spores of Clost.
botulinum in low-acid, canned foods (Esty and Meyer, 1922) and (ii) the process
criteria for Cox. burnetii in milk pasteurisation, as determined by Enright et al.
(1957).
Setting of process performance criteria for heat treatment of low-acid canned
foods
The first mathematical evaluation of the heat sterilisation process for canned
foods was made by Bigelow et al. (1920) and later developed by Ball (1923) to
derive methods for calculating the times necessary to process canned foods at
appropriate temperatures. For commercial sterilisation, the goal of thermal
processing was to reduce the probability of survival and growth of
microorganisms in a particular canned food to an acceptably low level. The
starting point for the rationale of what is now termed 'an appropriate level of
protection' (ALOP) was the work of Esty and Meyer (1922). They derived
process performance criteria for the destruction of spores of proteolytic strains
of Clost. botulinum in low-acid canned foods. It was proposed that
requirements for sterilisation should be based on (i) the response to heating
of the most heat-resistant spores found among strains of Clost. botulinum and
(ii) a reduction in the spore population by a factor of 10^11–10^12 to ensure the
desired level of product safety. For this purpose, heat inactivation trials were
carried out on 109 different strains of the test species. The resultant
performance criteria, based on the approach outlined above, have been applied
over many years and have proved to be sound, with an adequate margin of
safety (Pflug and Gould, 2000).
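The reasoning behind such a criterion can be written as a short log-reduction calculation: the holding time needed at the reference temperature is the decimal reduction time (D-value) multiplied by the number of decimal reductions required. The Python sketch below is illustrative only; the D-value of about 0.21 min at 121.1 °C for proteolytic Clost. botulinum spores is a commonly quoted literature figure, not a number taken from Esty and Meyer's paper.

```python
def process_time_minutes(d_value_min: float, log_reductions: float) -> float:
    """Holding time at the reference temperature for a given number of decimal reductions.

    Uses F = D * (log N0 - log N), i.e. time = D-value * number of log reductions.
    """
    return d_value_min * log_reductions

d_121 = 0.21  # assumed D-value (min) at 121.1 degC for proteolytic Clost. botulinum spores
for n in (11, 12):
    print(f"A {n}-log reduction needs about {process_time_minutes(d_121, n):.1f} min at 121.1 degC")
```

This is the arithmetic behind the minimum 'botulinum cook' of roughly 2.5–3 minutes at 121.1 °C applied to low-acid canned foods.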
Process performance criteria for heat pasteurisation of milk
The work of Enright et al. (1957) led to the development of process standards
for controlling Cox. burnetii in milk. The heat treatments used initially for milk
were designed to inactivate any tubercle bacilli present and these were
considered to be the most heat-resistant of the nonsporing pathogenic bacteria
likely to occur in the product. The treatments were based on information from
many studies on the heat-resistance of both human and bovine strains
(Mycobacterium tuberculosis and Myc. bovis respectively). In the USA, the
heating regime adopted in 1924 for the conventional process was 142 °F
(61.1 °C) to 145 °F (62.8 °C) for 30 min. In 1933 a heating regime was introduced
for the high-temperature, short-time (HTST) process: 161 °F (71.7 °C) for 15 s.
In practice, Cox. burnetii appears to be slightly more heat-resistant than the
tubercle bacilli and, following recognition that the organism, which causes Q
fever in humans, could be transmitted by raw milk, it was necessary to check
on the adequacy of existing pasteurisation processes for inactivating the
organism. The work undertaken by Enright and colleagues (1956, 1957)
fulfilled this requirement and, although no formal MRA was employed,
elements of the MRA approach were implicit in their studies. These aspects
are discussed below.
• The organism and the disease it causes: Cox. burnetii is a small, Gram-
negative bacterium, originally classified as a rickettsia, that cannot be grown
in axenic culture but can now be cultivated in vitro in various cell lines
(Maurin and Raoult, 1999). Q fever is characterised by fever, chills and
muscle pain, with occasional long-term complications. It was first described
by Derrick (1937) and is known to occur worldwide. The organism infects
many wild and domestic animals, which often remain asymptomatic.
Domestic animals, such as cattle, sheep and goats, are considered the main
sources of infection for humans (Maurin and Raoult, 1999) and, when shed in
milk, Cox. burnetii is often present in relatively high numbers.
• Hazard identification: contact with infected animals was known to result in
transmission of Cox. burnetii to people, with subsequent development of
illness, and the likelihood of the organism contaminating raw milk was
recognised. Early on, there was a lack of epidemiological evidence for
transmission via milk, but this was suspected in several outbreaks and there
was strong supporting evidence from a UK outbreak in 1967 (Brown et al.,
1968). Thus, the hazard was the presence of Cox. burnetii in milk intended
for human consumption.
? Dose-response: there was no information on the dose-response relationship in humans,
since challenge trials had not been carried out and epidemiological data were
lacking.
? Exposure assessment: information relevant to this step in MRA was obtained
by injecting guinea pigs to determine the presence and titre of Cox. burnetii in
milk. The organism was found in 33% of 376 samples of raw milk from
California, USA. 'The maximum number of Cox. burnetii demonstrated in the
milk of an infected dairy cow was the number of organisms contained in 10 000
infective guinea pig doses of Cox. burnetii per millilitre' (Enright et al., 1957).
Similar titres were found in milk that had been frozen and thawed. However, the
study did not involve testing of all breeds of dairy cattle, and it is possible that
even higher levels of shedding may have occurred in some breeds that were not
examined. Nevertheless, it was concluded that the maximum level of consumer
exposure would be represented by the highest infective dose demonstrated in
this study and that the pasteurisation process should bring about thermal
inactivation of such a number (Enright et al., 1957).
? Risk characterisation: the risk involved in consuming raw milk could not be
estimated because of the absence of dose-response data. The data for the
prevalence of contaminated milk, the maximum level of contamination and
the fact that milk would have been consumed regularly by the majority of the
population were probably implicit factors in an assumption that the risks
associated with inadequate heat treatment were high.
The studies of Enright et al. (1956, 1957) led to the conclusion that heating at
'143 °F for 30 min was wholly inadequate to eliminate viable Cox. burnetii from
whole, raw milk, while heating at 145 °F ensures elimination of these organisms
with a high level of confidence' (Enright et al., 1957). This led to the adoption of
the higher temperature for vat pasteurisation in the USA. The work on the HTST
process indicated that the recommended standard of 161 °F for 15 s was
sufficient for total elimination.
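Although Enright and colleagues worked long before formal MRA, vat and HTST treatments can be compared on a log-reduction basis with the same simple D- and z-value arithmetic used for canning. The sketch below is purely illustrative: the D- and z-values are assumed for the sake of the example and are not data from this chapter.

# Minimal sketch: compare time-temperature combinations on a log-reduction
# basis. d_ref_s and z_c are assumed, illustrative parameters only.

def log_reductions(time_s, temp_c, d_ref_s, t_ref_c, z_c):
    # D-value at temp_c from the z-value relation, then reductions = time / D.
    d_at_temp = d_ref_s * 10 ** ((t_ref_c - temp_c) / z_c)
    return time_s / d_at_temp

d_ref_s, t_ref_c, z_c = 12.0, 65.0, 5.0   # assumed: D65 = 12 s, z = 5 degC
print(log_reductions(30 * 60, 62.8, d_ref_s, t_ref_c, z_c))  # vat: 62.8 degC, 30 min
print(log_reductions(15, 71.7, d_ref_s, t_ref_c, z_c))       # HTST: 71.7 degC, 15 s

Any real evaluation would, of course, use experimentally determined D- and z-values for Cox. burnetii in milk.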
2.3.4 Microbiological examination of food
Microbiological testing, as a means of assessing whether a food product is
hazardous due to the presence of pathogens, is of relatively recent origin. It
became the vogue only after Robert Koch developed a method for growing
microorganisms in pure culture and foodborne organisms capable of causing
spoilage or disease were recognised and could be enumerated (Hartman, 1997).
Over the last 80 years or so, many different methods have been devised for
detecting pathogenic organisms and/or their toxins. Even from the beginning of
that period, statutory microbiological requirements relating to food safety were
established in many countries and were based on the testing of prepared foods
for the organisms or toxins of concern.
A disadvantage was that routine examination of foods for a multiplicity of
pathogens and toxins was impractical in most laboratories and an alternative
approach was needed. This led to widespread use of microbial groups or species
that were more readily detectable in foods and considered to be indicative of
conditions in which the food had been exposed to contamination with pathogens,
or been under-processed. Enumeration of the organisms was even used as a
measure of the possible growth of pathogens in a food, should these have been
present. The bacteria in question were termed 'indicator organisms' and they
have value for indirect assessment of both microbiological safety and quality of
foods. The use of indicator organisms flourished, especially in the period 1960-
1980. During that time, numerous procedures for enumerating bacterial
indicators were described (e.g. American Public Health Association, 1966;
United States Food and Drug Administration, 1972). Clearly, the main objective
of their use was to reveal conditions of food handling that implied a potential
hazard. Furthermore, some indicators were proposed as a possible index rather
than a mere indication of faecal contamination in food (Mossel, 1982).
Setting criteria
The traditional approach to controlling food safety has been based on education
and training of personnel, inspection of production facilities and operations, and
microbiological testing of the finished product. Testing of the product is usually
an integral part of the overall control programme, and the perceived risk of
foodborne illness from the presence of a particular pathogen is reflected in the
limit values that are set for the organism in a specific type of food. Where
possible, these criteria are based on epidemiological data and are a reflection of
the minimum dose expected to cause illness. Table 2.2 gives some values that
are essentially derived from analyses of outbreaks of foodborne disease. The
data show a clear parallel between limit values and the minimum dose
associated with human disease. In general, infective organisms such as
Salmonella should be absent from food because very low numbers are known to
be capable of causing illness (D'Aoust, 1989). On the other hand, toxigenic
bacteria, such as Staphylococcus aureus, may be acceptable at levels that are
well below those causing food to become hazardous. With foodborne
intoxications caused by Staph. aureus, the numbers present in the food usually
exceed 10^7 cfu (colony-forming units) per g (Bergdoll, 1989).
Shortcomings of microbiological testing
Leaving aside questions regarding the accuracy and reproducibility of the methods
used, it is clear that microbiological testing of food is of limited value without a
sound sampling plan. To overcome the problem, a book on food sampling was
produced by the International Commission on Microbiological Specifications for
Foods (ICMSF, 1974). The book gives details of statistically based sampling plans
for the microbiological examination of different types of food.
Although the book gives an excellent account of the various sampling plans,
it also reveals the limitation of testing for pathogenic organisms that may be
infrequent, low in number and unevenly distributed throughout the test batch,
especially when complete absence is the only acceptable result. Thus, testing to
ensure that the target pathogen is absent from the batch requires uneconomically
large numbers of samples, with no guarantee that absence of the organism can be
established.
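The sampling problem can be made concrete with a simple binomial calculation: if only a small fraction of units in a batch is contaminated, even a large number of negative sample units gives limited assurance. A minimal sketch follows; the prevalence figures and sample numbers are arbitrary illustrations, not values from this chapter.

# Minimal sketch, assuming random sampling from a large batch in which a known
# fraction of units is contaminated (figures below are illustrative only).

def prob_all_samples_negative(contaminated_fraction, n_samples):
    # Probability that every one of n sample units misses the contamination.
    return (1.0 - contaminated_fraction) ** n_samples

print(prob_all_samples_negative(0.01, 60))    # 1% contaminated, 60 samples -> ~0.55
print(prob_all_samples_negative(0.001, 60))   # 0.1% contaminated, 60 samples -> ~0.94

In other words, a wholly negative result remains quite likely even when the batch is contaminated, which is precisely the limitation described above.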
2.3.5 Introduction of GMP and HACCP
GMP
One of the first quality assurance systems developed by the food industry was
that involving the application of GMP, as a supplement to end-product testing.
GMP has been used for many years to ensure the microbiological safety and
quality of food, and it provides a framework for hygienic food production. The
establishment of GMP is the outcome of long practical experience and it
includes attention to environmental conditions in the food plant, e.g.
requirements for plant layout, hygienic design of equipment and control of
operational procedures. The GMP concept is largely subjective and qualitative
in its benefits. It has no direct relationship with the safety status of the product.
For these reasons, the concept has been extended by introducing the HACCP
system, which seeks, among other things, to avoid reliance on microbiological
testing of the end-product as a means of controlling food safety. Such testing
may fail to distinguish between safe and unsafe batches of food and is both time-
consuming and relatively costly.
Table 2.2  Correlation between minimum dose considered to cause disease and criteria set for end-products
Pathogenic organism | Minimum dose considered to cause disease (a) | Probability of infection from exposure to 1 organism (b) | General end-product criteria used (c)
Infectious organisms
Shigella | 1 | 1.0 x 10^-3 | Absence/25 g
Salmonella | 1 | 2.3 x 10^-3 | Absence/25 g
Campylobacter | 1-10 | 7.0 x 10^-3 | Absence/25 g
Listeria monocytogenes | >10^3 |  | <100/g
Vibrio parahaemolyticus | >10^4 |  | <10^3/g
Toxico-infectious organisms
Clostridium perfringens | >10^6 |  | <10^5-10^6/g
Bacillus cereus | >10^6 |  | <10^5-10^6/g
Organisms causing intoxication
Staphylococcus aureus | >10^6 |  | <10^5-10^6/g
(a) Based on analysis of foodborne disease outbreaks (presented in Doyle, 1989).
(b) Rose and Gerba (1991).
(c) Criteria for pathogenic organisms are not yet well established and may differ from country to country. The criteria apply mostly from the end of production up to the time of consumption.
HACCP
The HACCP concept is a systematic approach to the identification, assessment
and control of hazards in a particular food operation. It aims to identify problems
before they occur and establish measures for their control at stages in production
that are critical to ensuring the safety of food. Control is proactive, since
remedial action is taken in advance of problems occurring.
In a review of the historical background, Barendsz (1995) and Untermann et
al. (1996) described the development of the HACCP approach, which began in
the 1960s. The concept arose from a collaboration between the Pillsbury
Company, the US Army Natick Research and Development Laboratories and the
US National Aeronautics and Space Administration. The original purpose was to
establish a system of safe food production for use in human space travel. At that
time, the limitations of end-product testing were already appreciated and
therefore more attention was given to controlling the processes involved in food
production and handling. When first introduced at a meeting on food protection
(Department of Health, Education and Welfare, 1972), the concept involved
three principles: (i) hazard identification and characterisation; (ii) identification
of critical control points (CCPs) and (iii) monitoring of the CCPs.
Many large food companies started to apply HACCP principles on a voluntary
basis, and in 1985 the US National Academy of Science recommended that the
system should be used. Further support came from the ICMSF (1988), which
extended the concept to six principles. They added specification of criteria,
corrective actions and verification (see Table 2.3). In 1989, the US National
Advisory Committee on Microbiological Criteria for Foods (NACMCF) added a
further principle: the establishment of documentation concerning all procedures
and records appropriate to the principles and their application. Use of the HACCP
system was given an international dimension by the Codex Alimentarius
Commission (CAC) which published details of the principles involved in 1991
and their practical application (CAC, Committee on Food Hygiene, 1991). In 1997,
the CAC laid down the 'final' set of principles and clarified the precise meaning of
the different terms (CAC, Committee on Food Hygiene, 1997):
? General principles of food hygiene (Alinorm 97/13, Appendix II).
? HACCP system and guidelines for its application (Alinorm 97/13A,
Appendix II).
? Principles for the establishment and application of microbiological criteria
for foods (Alinorm 97/13A, Appendix III).
The full HACCP system, as described in Alinorm 97/13, is shown in Table
2.3. The document also gives guidelines for practical application of the HACCP
system. By 1973, the FDA had made the use of HACCP principles mandatory
for the production of low-acid canned foods (FDA, 1973) and, in 1993, the
system became a legal requirement for all food products in the European Union
(Directive 93/43).
Despite widespread usage, the present HACCP concept still has some weak
points. One of them is the definition of a hazard. This is not defined as 'an agent
with the potential to cause an adverse health effect', as in risk assessment, but as
'an unacceptable contamination, growth and/or survival by microorganisms of
concern' (ICMSF, 1988), which is more restrictive and does not cover all
possible hazards. Another weakness arises from the definition of a CCP. It is
stated that a CCP is a location, practice, etc. where hazards can be minimised
(ICMSF, 1988; International Association of Milk, Food and Environmental
Sanitarians (IAMFES), 1991) or reduced to an acceptable level (Bryan, 1992;
Alinorm 97/13). In both cases, these are qualitative objectives and may lead to
differing interpretations. It was Notermans et al. (1995) who first made a plea to
use the principles of quantitative risk assessment for setting critical limits at the
CCPs (process performance, product and storage criteria). It was their opinion
that only when the critical limits are defined in quantitative terms can the level
of control at the CCPs be expressed realistically. At the International
Association of Food Protection (IAFP) meeting in 2001, Buchanan et al.
(2001) also favoured the use of these principles and suggested that food safety
objectives should encompass end-product criteria, which are related to the
criteria used in processing.
Table 2.3  The seven principles of the HACCP system (CAC, Committee on Food Hygiene, 1997)
Principle 1: Conduct a hazard analysis. List all potential hazards associated with each step, conduct a hazard analysis, and consider any measures to control identified hazards.
Principle 2: Determine the critical control points (CCPs).
Principle 3: Establish critical limit(s) for each CCP.
Principle 4: Establish a system to monitor control of each CCP.
Principle 5: Establish the corrective actions to be taken when monitoring indicates that a particular CCP is not under control.
Principle 6: Establish procedures for verification to confirm that the HACCP system is working effectively.
Principle 7: Establish documentation concerning all procedures and records appropriate to these principles and their application.
2.3.6 Predictive modelling
Modelling in food microbiology began about 1920, when methods were
developed for calculating thermal death times. These models revolutionised the
canning industry (Pflug and Gould, 2000). Later, Monod (1949, 1950)
developed a model that described the continuous, steady-state culture of
microorganisms and became the basis for continuous fermentation processes. In
principle, the model was analogous to that used for chemical processes. The
recent resurgence of predictive modelling in relation to microbial growth in food
originated in the 1960s and has been reviewed by Ross and MacMeekin (1993).
In contrast to the situation studied by Monod, the identities and concentrations
of nutrients involved are unknown and the organisms of interest are initially
present in low numbers, with growth conditions often being less than optimal.
For these reasons, initial attempts at mathematical modelling in food
microbiology have been more empirical than was the case for fermentation
processes, focusing on batch rather than continuous-culture kinetics. As shown
by Whiting and Buchanan (1997), growth data are fitted to equations using
iterative least-squares computer algorithms. Assumptions about randomness,
normal distribution and stochastic specifications are the same as they would be
for any statistical application of regression (Ratkowsky, 1993). All models are
actually simplifications that represent the complex biochemical processes
controlling microbial growth and are limited to the most important input
parameters, such as temperature, time, water activity and pH. One of the reasons
for simplifying the approach is that knowledge of the complex biochemical
processes involved is far from complete. The major advantage is that the current
models are easy to handle; however, the outcome should always be used with
caution and verification may be necessary in some cases.
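As a concrete illustration of the kind of empirical fitting referred to above, the sketch below fits a simple square-root (Ratkowsky-type) secondary model to synthetic growth-rate data by least squares; the data and the choice of model are assumptions made for the example, not results from this chapter.

import numpy as np

# Synthetic (assumed) observations: specific growth rate (per h) at several
# storage temperatures (degC).
temps = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
rates = np.array([0.02, 0.09, 0.22, 0.40, 0.63, 0.92])

# Square-root model: sqrt(rate) = b * (T - Tmin). Taking square roots makes the
# model linear in T, so ordinary least squares recovers b and Tmin.
slope, intercept = np.polyfit(temps, np.sqrt(rates), 1)
b, t_min = slope, -intercept / slope
print(f"b = {b:.3f}, Tmin = {t_min:.1f} degC")

def predicted_rate(temp_c):
    # Predicted growth rate at a new temperature (zero at or below Tmin).
    return (b * (temp_c - t_min)) ** 2 if temp_c > t_min else 0.0

print(predicted_rate(12.0))

The fitted model can then be used, with due caution, to predict growth rates at storage temperatures that were not tested directly.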
Primarily, the development of predictive modelling was driven by the
proliferation of refrigerated and limited shelf-life food products. It was recognised
that (i) even so-called 'rapid' microbiological methods were too slow to show,
within an acceptable period of time, whether microbes in the product grew or died
(Spencer and Bains, 1964); (ii) testing of factors in a food product that affect
microbial growth and toxin production, whether singly or in combination, is
laborious and time-consuming and (iii) work had been done in Canada to draw
together the results of numerous growth experiments on Clost. botulinum
(Hauschild, 1982). The mathematical and statistical tools already existed prior to
the expansion in modelling activity and the process was favoured by the introduction
of powerful personal computers and the availability of user-friendly software.
In the review of Ross and MacMeekin (1993), the main reasons for
developing predictive models were summarised as follows:
? To permit predictions of product shelf-life and safety, and the consequences
of changes in product formulation or composition; to facilitate a rational
design for new processes, etc.; to meet or to obtain an insight into
requirements for product safety or shelf-life.
? To allow objective evaluations to be made of processing operations and, from
this, an empowering of the HACCP approach.
? To provide an objective evaluation of the consequences of any lapses in
process control and subsequent storage of the end-product.
Now that MRA has become established in food microbiology, it is clear that
the use of predictive models is essential in risk assessment. This is especially
true for exposure assessment. In many foods, particularly those that are fresh and
have a short shelf-life, rapid changes in microbial populations can occur and the
models are needed to determine, for example, the necessary storage conditions.
The models can also provide information about risk factors in handling the
product, which have a considerable influence on human exposure to particular
pathogens. They may also help to clarify the effects of different control options.
Thus, the modelling approach facilitates control of the most important factors
that affect food safety. Without the use of predictive models, a quantitative
MRA for assessing food safety would be virtually impossible.
2.3.7 Introduction of QRA
Systematic risk analysis approaches have been used by the Food and Agriculture
Organisation of the United Nations (FAO) and the World Health Organisation
(WHO) since 1955, when the evaluation of food additives at the international
level was initiated as a result of a joint FAO/WHO conference on food additives.
The conference recommended to the Directors-General of FAO and WHO that
one or more expert committees should be convened to address the technical and
administrative aspects of chemical additives and their safety in food. This
recommendation provided the basis for setting up the Joint FAO/WHO Expert
Committee on Food Additives (JECFA). The JECFA started its meetings in
1956, initially to evaluate the safety of food additives.
Risk assessment has also evolved over the last decade within the CAC. The
Commission, which was established in 1962 under the parentage of the FAO and
WHO, is an intergovernmental organisation with the responsibility for
developing international standards, guidelines or other recommendations for
food in order to protect the health of consumers and facilitate international trade.
In the course of time, the CAC has enlarged its activities and, in addition to risk
evaluation for food additives, chemical contaminants, pesticide residues and
veterinary drug residues in foods, the issue of biological hazards in foods is now
also being addressed. However, no clear MRA activities were undertaken prior
to 1995.
The development of MRA was strongly stimulated when in 1995, at the
GATT Uruguay Round, the WTO was established and a free trade in safe food
was agreed. The WTO Agreement on the Application of Sanitary and
Phytosanitary Measures, the so-called SPS Agreement (Anon., 1995), requires
that countries signatory to the agreement base their laws concerned with
protecting human, animal and plant health on a risk analytical basis. Thus, the
SPS Agreement requires food safety legislation to be scientifically based and the
process of risk assessment to be applied, for example, when introducing
microbiological criteria for controlling imported foods. In the pursuance of
harmonisation and to avoid the need for all countries and all food producers to
carry out a risk assessment on each of their products, the WTO SPS Agreement
has chosen the scientifically based international standards, guidelines and
recommendations of three organisations, one of which is the CAC, as the
preferred measures for adoption by WTO members. In addition, the SPS
Agreement states that countries should take into account the risk assessment
technique developed by the relevant international organisations, when
undertaking a risk assessment. As a result of this, the FAO and WHO began
to organise expert consultations dealing with food safety risk assessment, with
the purpose of providing member countries with principles and guidelines for
undertaking such an assessment. An overview of the key documents produced is
given in Table 2.4.
Table 2.4  FAO/WHO documents dealing with food-related risk analysis
Year | Risk analysis document | Reference
1995 | Application of risk analysis to food standards issues | FAO/WHO (1995)
1997 | Risk management and food safety | FAO/WHO (1997)
1998 | The application of risk communication to food standards and safety matters | FAO/WHO (1998)
1999 | Risk assessment of microbiological hazards in foods | FAO/WHO (2000a)
2000 | The interaction between assessors and managers of microbiological hazards in food | FAO/WHO (2000b)
The first expert consultation was devoted to the application of risk analysis to
food safety standards issues. The consultation was convened at the request of the
Forty-first Session of the CAC Executive Committee, with the aim of promoting
consistency in the use of risk analysis for food safety purposes. The main
objective was to provide the FAO, WHO and CAC, as well as member countries,
with advice on practical approaches for the application of risk analysis to food
standards issues. At that meeting, food safety risk analysis terms were defined. A
model for risk assessment was also agreed upon. This comprises the four
components: (i) hazard identification, (ii) hazard characterisation, (iii) exposure
assessment and (iv) risk characterisation. At that consultation, the estimation of
risk from biological agents was considered in detail, since it was the general
view of the experts that such risks are in many ways a much larger and more
immediate problem to human health than risks associated with chemical
contaminants in food.
At an expert consultation in 1997, a risk management framework was set up
and general principles of food safety risk management were elaborated. In
addition, key risk management terms were defined. The main elements of risk
management were identified as (i) risk evaluation, (ii) assessment of risk
management options, (iii) implementation of management decisions and (iv)
monitoring and review. As far as the general principles are concerned, it was
stated that risk management decisions should be transparent, primarily aimed at
the protection of human health and should ensure that the scientific integrity of
the risk assessment process is maintained.
As a logical continuation, a third expert consultation dealt with the
application of risk communication. The main issues addressed at this meeting
were the principles of risk communication and barriers to, and strategies for,
making the process effective. It is generally accepted that risk communication
is essential throughout the risk analysis process. For successful risk
communication, it is important that (i) all interested parties are involved,
(ii) use is made of individuals trained in risk communication, (iii) risk
communication is received and understood, and (iv) transparency is fostered
during the whole process. The nature of the risk and the benefits and
uncertainty in risk assessment and assessment of risk management options are
regarded as the main elements for effective risk communication. The main
barriers in risk communication include differences in perception and
receptivity, lack of understanding of the scientific process, and media and
social characteristics.
The fourth expert consultation, convened in 1999, was directed specifically at
risk assessment of microbiological hazards in foods. The main outcome of this
consultation was an outline strategy and mechanism for addressing MRA at the
international level. The expert consultation made recommendations regarding
the activities required to support MRA and how to improve the necessary
capabilities. In addition, it recommended that outcomes of risk assessments
should be immediately integrated into HACCP plans, that additional expert
meetings should be held and that collaborative studies should be conducted
between developing and developed countries.
In 2000, a second specific expert consultation was organised, dealing with the
interaction between assessors and managers of microbiological hazards in foods.
At this consultation, the linkage between risk assessment and risk management
was discussed in more detail, with the aim of providing guidance on how both
processes could be improved. Issues addressed ranged from the identification of
a food safety problem and the establishment of risk profiles to assessment of the
effectiveness of management decisions. The former is of interest in relation to
collecting as much information as possible for both risk assessment purposes
and effective risk management.
Current microbiological risk assessment activities
In 1999, following the request of the CAC and in order to address the needs of
their member countries, the FAO and WHO initiated a series of joint expert
consultations to assess risks associated with specific microbiological
contaminants in foods. This followed the adoption by the CAC of the Principles
and Guidelines for the Conduct of Microbiological Risk Assessment (MRA),
elaborated by the Codex Commission for Food Hygiene (CCFH) (CAC/GL 30,
1999).
The aims of these joint expert consultations were to provide a transparent
review of scientific data on the state of the art of MRA, and to develop the
means of achieving sound quantitative risk assessments for specific pathogen¨C
commodity associations. The work included an evaluation of existing risk
assessments, a review of the available data and risk assessment methodologies,
highlighting their strengths and weaknesses and how they might be applied,
provision of examples and identification of information needs/gaps. A further
aim of these consultations was the development of guidelines relating to the
different steps in risk assessment, such as hazard characterisation and exposure
assessment. The purpose of such guidelines would be to help the risk assessor,
the risk manager and other interested parties to understand the principles and
science behind the risk assessment steps.
Three such consultations have already been convened. Two of these, one in
July 2000 and one in May 2001, have dealt with the risk assessment of
Salmonella spp. in broilers, Salmonella enteritidis in eggs and Listeria
monocytogenes in ready-to-eat foods. These assessments are currently near
completion. In July 2001 another expert consultation addressed risk assessment
of Campylobacter spp. in broiler chickens, and Vibrio spp. in seafood. Work on
these will continue for another year. The work plan and priorities programme for
work on MRA are established by FAO and WHO, taking into consideration the
needs of the CCFH, as well as the member countries.
2.4 International food safety standards
2.4.1 Setting of current international standards
Based on the SPS Agreement, food safety standards need to be based on sound
science and risk assessment. Figure 2.2 shows how these standards are set. The
starting point is the relevant food safety policy. By using risk analysis, this
policy is transformed into food safety objectives, which equate with an agreed
level of consumer protection.
Fig. 2.2 The use of risk analysis to convert a food safety policy into food safety
objectives.
Currently, the FAO and WHO are the organisations concerned with food
safety at the international level. As far as international food safety standards are
concerned, these are established under the Joint FAO/WHO Food Standards
Programme by the CAC. This organisation has delegated the development of
standards, guidelines and other recommendations to its subsidiary bodies, which
are guided by the CAC. Normally, the general subject Codex committees
(described as 'horizontal' Codex committees) are more routinely involved in
risk management. These include the Codex committees on Food Additives and
Contaminants, Pesticide Residues, Residues of Veterinary Drugs in Food, Food
Hygiene, General Principles, Food Labelling, and Nutrition and Food for Special
Dietary Uses. The tasks of these intergovernmental bodies are to prepare draft
standards, guidelines and recommendations for consideration by the CAC.
The process of setting international food safety standards is expressed in Fig. 2.3.
Fig. 2.3 Visualisation of the process of setting international food safety standards.
Initiating the process of standard setting
The risk analysis procedure is usually initiated by one of the respective Codex
committees, when it proposes setting standards for additives, contaminants,
microbiological agents, etc. This process may also be triggered by direct
requests to FAO/WHO from member countries. The initiation of the evaluation
procedure serves as the hazard identification step.
Risk assessment
The first step in the process of risk analysis is risk assessment, which is carried
out by independent expert committees or groups that advise the respective
Codex committees. At present, there are two long-standing expert groups that
provide advice to Codex, governments and industry. They are the JECFA and
the Joint FAO/WHO Meeting on Pesticide Residues (JMPR). In addition, FAO
and WHO convene ad hoc expert consultations, as required, to address specific
issues not covered by JECFA or JMPR. In recent years, several expert
consultations have been held on microbiological hazards in food, the risk
assessment of foods derived from biotechnology and on animal feeding and food
safety. Recently, the CAC, at its Twenty-fourth Session, held in Geneva,
Switzerland from 2 to 7 July 2001, requested the FAO and WHO to further
strengthen scientific support for science-based decision making. The FAO and
WHO, conscious of the importance of this issue, are currently studying the
possibility of harmonising the risk assessment procedures used by the various
scientific advisory groups and are looking for ways to improve the quality,
quantity and timeliness of scientific advice.
The work of the JECFA now includes the evaluation of contaminants,
naturally occurring toxicants and residues of veterinary drugs in food. For food
additives, the JECFA normally establishes so-called acceptable daily intakes
(ADIs) on the basis of available toxicological and other relevant information.
Specifications for identity and purity are also developed for food additives,
which help to ensure that the product in commerce is of appropriate quality, can
be manufactured consistently, and is equivalent to the material that was
subjected to toxicological testing. For contaminants and naturally occurring
toxicants, levels corresponding to 'tolerable' intakes, such as the provisional
maximum tolerable daily intake (PMTDI) or provisional tolerable weekly intake
(PTWI), are normally established when there is an identifiable no-observed-effect
level. If such a level cannot be identified, the Committee may provide
other advice depending on the circumstances. In the case of veterinary drugs,
data on good practice are evaluated and corresponding maximum residue levels
(MRLs) in animal tissues, milk or eggs are recommended. Such MRLs are
intended to provide assurance that when the drug has been used properly, the
intake of any residues of the drug in food is unlikely to exceed the ADI.
The JMPR comprises the Joint Meeting of the FAO Panel of Experts on
Pesticide Residues in Food and in the Environment and the WHO Core
Assessment Group. The JMPR carries out toxicological evaluation of pesticide
residues, normally resulting in an estimate of the ADI. In addition, the JMPR
proposes MRLs for individual pesticides in or on specific commodities. These
MRLs are primarily based on the residue levels estimated in supervised field
trials, when the pesticide is used according to good agricultural practices (GAP).
In cases where initial estimates indicate that the ADI may be exceeded, more
refined intake calculations are performed, using national food consumption data
and information from pesticide residue monitoring programmes.
Both the JECFA and JMPR establish chemical safety standards that are based
on a review of toxicological studies in the most sensitive test-animal species.
They allow for an adequate level of safety, use risk assessment procedures,
consider use and consumption patterns and define specifications for the identity
and purity of food grade chemicals to be used.
For microbiological hazards, there is currently no JECFA or JMPR-like body.
For food safety risk assessment activities, ad hoc expert consultations are set up
and independent and appropriately qualified experts are invited. A procedure for
this process, adopted and in use since 2000, enhances the principles of
transparency, equal opportunity, excellence and independence, and seeks to
harmonise the working procedures between different expert bodies and between
FAO and WHO. Briefly, the procedure involves the following steps:
? An open call for experts is made 6 months prior to each expert meeting.
? Review of candidates by a four-member selection panel.
? Completion by candidates of a 'Declaration of Interests' form, indicating
institutional affiliation.
? Secretariat selects appropriate individuals.
? Secretariat notifies governments of the selected experts to obtain their
consent.
? Secretariat invites the experts.
Risk management
The risk management activities are carried out by the respective Codex
committees, comprising participants from all member countries, including
representatives of industry, consumers and governmental bodies. These
representatives carry out the risk management part of the standard-setting
procedure. Draft standards, guidelines and recommendations are elaborated via
an eight-step process (or in some cases a five-step, accelerated process) by the
committees. The final decision regarding their adoption is made by the CAC.
2.4.2 International criteria: future trends
The generic frameworks of current food safety systems used for chemicals and
microbiological agents show some similarities, but also important differences.
The frameworks for both entities are presented in Fig. 2.4, which is based on the
paper given by Hathaway at the IAFP congress, held in Minneapolis, 2001
(Hathaway, 2001).
Fig. 2.4 Generic framework of current food safety systems as developed for chemicals (additives, pesticides, etc.) and for microbiological agents (based on Hathaway, 2001).
For the setting of international criteria for chemicals, an international food
safety policy has been developed. The policy comprises certain general rules.
Examples of these are that carcinogens should be absent from food and the aim
should be to follow the ALARA principle, which means that, for extraneous
chemicals, levels 'as low as reasonably achievable' are required. Also, an
appropriate level of protection has been agreed. For most chemicals, levels
below the no-effect level, including an uncertainty factor, are considered to
provide an appropriate level of protection (ALOP). The risk assessment process
is primarily directed at assessing the characteristics of potentially hazardous
agents and exposure assessment. The risk management process, which is carried
out by the relevant Codex Commission, results in the final food safety
objectives. These must be incorporated as the required process, product and
storage criteria in the HACCP system.
With the passage of time, the system used for chemicals has proved to be
very effective in preventing foodborne illness from this source. Actually, with
some exceptions, chemical contaminants and residues do not cause overt health
problems, and in that respect they are quite different from microbiological
agents. Almost all reported foodborne illness is caused by pathogenic organisms
present in food.
As far as microbiological agents are concerned, there is, at present, no food
safety policy associated with the setting of international criteria. Also, unlike
chemicals, there is no concept of any levels of product contamination with
specific pathogens that would provide an ALOP. The approach to
microbiological food safety can be summarised as follows. The system begins
with a quantitative risk analysis. Depending on the outcome, appropriate levels
of protection are agreed and food safety objectives set. These objectives then
need to be reflected in process, product and storage criteria for incorporation
into the HACCP system.
There is some debate about whether a unified approach should be developed
for both chemicals and microbiological agents, and essential differences in the
risks that they pose to human health need to be understood and taken into
account. Other important factors are given in the following:
? Stability. While concentrations of most chemicals remain relatively stable in
foods during storage, microbial contaminants may die off or even multiply,
depending on the conditions.
? Behaviour. The storage behaviour of microorganisms in foods is affected by
various intrinsic and extrinsic factors, and can vary considerably from food to
food and from one organism to another.
? Origin. Chemical contamination of foods with residues of veterinary drugs,
pesticides, etc. comes from extraneous sources, but many microorganisms
occur naturally, especially in raw foods, and their presence cannot be
avoided.
? End-product criteria. Although clearly useful for chemicals, such criteria
are of less value for microorganisms. This is largely due to changes in
microbial counts with time and the difficulty of detecting low numbers of
specific pathogens, which, if present, are often distributed unevenly in the
food. Therefore, a negative result is no guarantee that the target organism is
entirely absent from the test batch.
? Exposure assessment. Because of the above-mentioned changes in microbial
populations during storage, the value of any counts obtained for the purposes
of exposure assessment will depend upon the timing of the tests and the
subsequent storage history of the food.
? Assessment of dose-response relationship. The necessary information for
microbial pathogens cannot be obtained from animal experiments and must
be taken from feeding trials involving human volunteers or be based on count
data from foods associated with specific and well-documented outbreaks.
It is clear that the risks to consumers from chemicals in foods are very
different from those presented by microbial pathogens, and a unified approach to
their regulation may not be feasible, as far as the setting of criteria for the end-
product is concerned. The problem is compounded by the practical difficulties
that arise when considering the dynamic nature of microbial populations in
foods and the uncertainties surrounding the detection of pathogens. Therefore,
the systems used in each case to ensure the required level of food safety are
likely to remain separate for the foreseeable future.
2.5 Present and future uses of microbiological risk assessment
2.5.1 Trends in food safety control
Traditionally, food safety is assessed retrospectively through microbiological
testing of randomly selected food samples. This is done by both the food
producer and the appropriate regulatory body. The approach may confirm that
the food meets certain statutory criteria at the point of sampling, but takes no
account of the likely changes in microbial populations during subsequent
handling and storage of the product up to the point of consumption. In practice,
there is usually no information on whether such control criteria are effective in
protecting consumers. Because of these shortcomings, food safety control is
increasingly dependent on a more prospective approach, involving the
application of GMP and HACCP principles. For this purpose, the use of
predictive microbiology has proved to be as valuable as it was previously in
developing processes for, for example, heat inactivation of microorganisms and
their spores. Recent progress in predictive modelling has facilitated exposure
assessment at each stage of the food chain and has permitted the introduction of
risk analysis, which has provided a new milestone in the production of safe food.
Thus, acceptably safe food can be produced almost entirely in a prospective and
predictable manner, and it is possible to predict that any necessary criteria can
be met at the time the food is consumed. The modern approach to safe food
production, including the role of GMP, HACCP and risk assessment, is shown
schematically in Fig. 2.5. The first step requires a quantitative risk assessment to
identify the hazards.
These are then characterised, mostly in terms of dose-response relationships
and the severity of the illness caused, followed by exposure assessment and risk
characterisation. Finally, risk management requirements are established, using
Codex Alimentarius standards, guidelines and recommendations. These involve
all interested parties, such as food producers, regulatory authorities, consumer
organisations and scientists (the so-called stakeholders). However, any resultant
microbiological standards are of limited value, for the reasons discussed
previously, and only useful for microbiologically stable food products.
Therefore, participants at the IAFP congress in Minneapolis, USA, in August
2001, including members of the ICMSF, proposed a change from control based
on food standards to a system involving an ALOP at the time of consumption of
the food.
Fig. 2.5 Schematic presentation of the manner in which microbiologically safe food is produced and the role of risk assessment and risk management. For further explanation see text.
An ALOP results from the outcome of a risk assessment, taking account of
the costs involved in any control action. Such an analysis is made by the
stakeholders, with the knowledge that reducing the risk of a hazard occurring
will increase the food production cost, but is hardly likely to reduce the risk to
zero. The nature of the ALOP depends very much on the severity of the hazard
and the type of food in question. For canned foods that are purchased by large
numbers of consumers, the ALOP for toxigenic Clost. botulinum implies that the
occurrence of botulism is reduced to a negligible level. In the canning of low-acid
foods, it is generally agreed that the ALOP requires the use of a process
giving a (theoretical) 10^11-10^12-fold reduction in the level of Clost. botulinum.
For freshly cut vegetables that are eaten raw, the ALOP may require a 50%
reduction in foodborne disease over a 10-year period. A similar kind of target
has been set in the USA by the FDA for raw poultry meat (Buchanan et al.,
2001). Clearly the targets must be expressed in terms of food safety objectives.
In relation to poultry meat, a 50% reduction in disease over 10 years can follow
only from a corresponding decrease in pathogen contamination of poultry
carcasses. The relevant calculation can now be made from a proper risk
assessment. From the producer's viewpoint, the meeting of food safety
objectives is just one consideration. Account must also be taken of any specific
customer (retailer) requirements, as well as the producer's own profitability.
In producing safe food, there are various aspects, which can be grouped in
three main categories:
1. The type of process used, which may include heat treatment, irradiation,
high-pressure technology, etc.
2. Product composition, including addition of, for example, salt, acids or other
preservatives.
3. Storage conditions, involving storage temperature and time, gas packaging,
etc.
Effective management of these aspects allows all food safety requirements to
be met. In doing so, it is necessary to define criteria for process performance,
product composition and storage conditions. The setting of the criteria is the task
of the risk manager, and use of the HACCP concept is the managerial tool to
ensure that the criteria will be met in practice. Finally, a verification step is
needed to demonstrate that the ALOP, the customer requirements and the
producer's own objectives are being met. If, for any reason, it is impossible to
meet the ALOP, then production of the food in question must cease.
2.5.2 Some examples
Setting storage criteria for pasteurised milk
The presence of Bacillus cereus in pasteurised milk should be considered
hazardous, because the organism is potentially pathogenic and can multiply in
this product. The organism is also associated with foodborne illness resulting
from the consumption of dairy products. In most European countries, a limit
value of 10^4 organisms per ml or gram at the time of consumption has been set
for dairy products and other foods. Some countries, including the Netherlands,
accept the presence of 10^5 per ml in milk and, to meet this limit, the storage
criteria for pasteurised milk are 7 °C for a maximum of 7 days. Human exposure
to Bac. cereus from milk consumption was studied by Notermans et al. (1997).
Exposure was assessed by (i) enquiring about storage conditions (temperature
and time) for pasteurised milk that were used by households in the Netherlands
and (ii) carrying out storage trials at 6-12 °C. The temperatures studied were
those observed in a survey of Dutch domestic refrigerators. The probability of
exposure to different doses of Bac. cereus is given in Table 2.5.
Table 2.5  Exposure to Bacillus cereus after consumption of pasteurised milk, based on model experiments of Notermans et al. (1997)
Exposure dose (organisms/ml) | Probability (%)
>10 | 99
>10^2 | 21
>10^3 | 14
>10^4 | 11
>10^5 | 7
>10^6 | 4
>10^7 | <1
The results demonstrated that 7% of the milk contained >10^5 Bac. cereus per
ml at the time of consumption. It was also shown that storing milk according to
the producer's recommendations would prevent the limit value of 10^5 per ml
from being exceeded.
Risk management options
It is clear that managerial action is required to ensure that the official criterion is
met. In order to take such action, it is necessary to assess the predominant
factors that determine the final level of Bac. cereus when the milk is consumed.
These are:
? The initial level of contamination with the organism (N0), which is influenced
by factors such as the grazing period for the cows and control of hygiene
during milking.
? The storage time (t) for the pasteurised milk.
? The storage temperature (T) of the milk.
Zwietering et al. (1996) derived an equation for calculating the effects of each of
the above variables on the numbers of Bac. cereus finally present (N):
N = N0 e^(0.013 T^2 t)
From the equation, it can be observed that storage temperature has the largest
effect on the level of Bac. cereus at the time the milk is consumed. This is
followed by storage time, while initial count has only a minor effect. The effects
are illustrated by the data presented in Table 2.6.
Table 2.6  Storage times (days) for pasteurised milk giving a final count of Bacillus cereus of 10^5/ml: effects of initial number and storage temperature (Notermans et al., 1997)
Initial number per ml | 6 °C | 8 °C | 10 °C | 12 °C
0.001 | 13.4 | 7.6 | 4.8 | 3.4
0.01 | 11.3 | 6.4 | 4.1 | 2.8
0.1 | 9.2 | 5.2 | 3.3 | 2.3
1 | 7.0 | 4.0 | 2.5 | 1.8
10 | 4.9 | 2.8 | 1.8 | 1.2
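The equation can also be inverted to estimate how long milk may be stored before a given limit is reached. The sketch below does this for the 10^5 per ml limit; the constant 0.013 and the form of the equation are taken as printed above, but the time units and any lag phase are not specified in the chapter, so only the relative effects of temperature and initial count should be read from the output.

import math

# Minimal sketch: invert N = N0 * exp(k * T^2 * t) to find the storage time t at
# which the limit is reached. k and the implied time units follow the equation
# as printed above and are used purely for illustration.

def time_to_reach_limit(n0_per_ml, temp_c, limit_per_ml=1e5, k=0.013):
    return math.log(limit_per_ml / n0_per_ml) / (k * temp_c ** 2)

baseline = time_to_reach_limit(1.0, 6.0)
print(time_to_reach_limit(1.0, 12.0) / baseline)   # doubling T: time cut ~4-fold
print(time_to_reach_limit(10.0, 6.0) / baseline)   # 10x higher N0: only ~20% shorter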
Selection of new control options
The simplest option for the milk producer would be to do nothing, since the
storage conditions prescribed on the label are quite adequate. However, the
situation is different if consumer complaints start to increase and the producer
loses business. In this case, a lower storage temperature could be recommended
on the label, although it is by no means certain that consumers would respond by
reducing the temperature in their refrigerators. In addition, many countries have
stipulated temperatures for storing chilled foods, e.g. 7 °C in the Netherlands,
and any decrease would involve negotiations with trading authorities, retailers
and consumer bodies. The last option would be to reduce the maximum storage
time, but this would raise other considerations. Retailers, for example, may well
favour such a step, for the simple reason that consumers would need to purchase
milk more frequently. While reducing the temperature would possibly be more
costly to the retailer, a shorter storage time would necessitate more frequent
deliveries and therefore be an additional cost to the supplier.
Because of progress in predictive modelling, the risk assessor is able to
determine the effect on product safety of different storage conditions. It is,
however, the risk manager who has to make the final decision on the action to be
taken, and this involves consideration of all the relevant aspects of the problem.
From targets to HACCP criteria
In many countries, poultry meat products contribute significantly to foodborne
disease, especially that caused by Salmonella and Campylobacter spp. Although
various attempts have been made to improve the situation, little progress has
been made until recently. One of the reasons may be the continuing deadlock in
accepting responsibility. Consumers expect pathogen-free products, which
cannot be achieved at the present time, while producers refer to the unhygienic
practices of consumers when food is prepared in the kitchen. In order to change
this situation in the USA, the FDA has set a target, whereby foodborne disease
from poultry meat will be reduced by 50% over a ten-year period (Buchanan et
al., 2001) and producers are held responsible for meeting the target.
For operational purposes the target, which is an ALOP, needs to be translated
into appropriate process, product and storage criteria. To set the criteria, it is
necessary to calculate the requisite reduction in contamination of poultry meat
with the key pathogens (food safety objectives). The following steps are
required:
? Assessment of the prevalence of Salmonella and Campylobacter spp. in
commercial broiler flocks.
? Quantitative assessment of product contamination with the pathogens at the
end of processing.
? Determination of the effect of storage on pathogen contamination.
? Assessment of the effects of food preparation by consumers on the survival
and spread of the pathogens.
? Collection of consumption data.
The information thus provided will allow an assessment to be made of human
exposure to the pathogens at the time the food is consumed. Existing dose-response
relationships can be used to determine the likely number of disease
incidents or the probability of disease. These figures now need to be reduced by
50% and the new target for exposure can be determined, again from the dose-response
relationships.
There are several ways in which the new food safety objectives can be met.
One approach is to set process performance criteria, which might include low
environmental temperature, minimum processing time, spraying carcasses with
lactic acid, washing in chlorinated water etc. Also, product storage conditions
(low temperature, short time) may be important to minimise any risk of growth
of pathogens. The processing procedures and the conditions of processing and
storage provide the CCPs in the HACCP system. If information is available,
critical limits can be based on published data, although this is not always
possible for specific parts of the process or storage conditions. Until more
sophisticated models have been developed, the necessary calculations must be
based on simple D-values and modelling of growth parameters.
2.5.3 Current issues in microbiological risk assessment
As with all risk assessment procedures, MRA comprises hazard identification,
hazard characterisation, exposure assessment and risk characterisation as the four
basic elements. It is a relatively new discipline in relation to the production of
microbiologically safe food. However, the principles embodied in the approach
have been applied for many years, especially to heat processes developed for
low-acid canned foods and treatment of milk in the 1930s and 1940s respectively.
The resultant heating regimes have proved to be very successful in controlling
any foodborne diseases that might be associated with these sources.
Although MRA was first introduced as a food safety measure in 1995, its use
has been limited and the approach has yet to lead to internationally recognised
microbiological criteria. It may take longer than anticipated for the concept to be
universally accepted and applied, but initiatives taken by the WHO and FAO to
organise meetings of experts (see Section 2.4) could help to stimulate interest. It
should be recognised, however, that the application of MRA in the production of
safe food will be hampered by the present lack of any comprehensive,
microbiological food safety policy. The problem is compounded by the large
diversity of available food products that vary from fully processed up to almost
unprocessed ready-to-eat products. Within these categories, there are differences
in processing methods, product composition and storage conditions. In addition,
microbial contamination may be introduced into the food chain, sometimes from
the raw materials used in product manufacture. In other instances such
contamination may come from organisms that are endemic in the processing
environment or through human handling of the food, etc. Despite the use of
various processes for reducing the microbial load on food, consumer safety
cannot always be guaranteed because of the potential for recontamination.
As well as the above-mentioned product diversity, there are significant
differences between products in the types of microorganisms that may be
present. Among the variety of possible foodborne pathogens are rickettsiae,
viruses, bacteria, moulds and parasites. Each of these groups contains organisms
with particular growth characteristics, ecological behaviour and disease
potential. With so much variation between products and the nature of the
contaminants present, it is hardly surprising that separate risk assessments for
each product and pathogen combination are not really feasible. This is
especially so if end-product criteria based on MRA
are required in each case. Such an approach has been attempted recently by the
ICMSF, but without real success. For example, in relation to potential growth of
Listeria monocytogenes in food and the varying sensitivity to this pathogen
among different human groups, the ICMSF proposed 15 separate categories of
food (ICMSF, 1994), each with its own food safety objective (FSO).
Among other weak points in current attempts to use MRA are difficulties in
(i) exposure assessment, (ii) assessment of dose-response relationships and,
consequently, (iii) the uncertain outcome of risk characterisation. Another aspect
to be considered is human perception, which has no direct relationship to health
problems, but carries significant implications for consumer confidence in the
safety of the food supply. Finally, it should be noted that mistakes are sometimes
made in attributing human illnesses to the consumption of contaminated food
and a misleading impression may result.
Exposure assessment
Microbiological models are an important tool for this exercise. Suitable models
are necessary because it is impracticable to test individual food products for this
purpose (see Section 2.4). The value of end-product testing is mainly in relation
to verification procedures, which are discussed below.
For the purposes of exposure assessment, the Monte Carlo type of model is
particularly relevant and is based on the distributions of all appropriate variables
in the food production process. These will include product composition and
storage conditions, consumption patterns, etc. The approach involves taking
random values for each of the distributions to assess the final exposure
distribution. A weakness is that the distributions of variables must be
independent of each other and often this is not the case. For example, storage
time and temperature for pasteurised products are usually inter-dependent.
Nevertheless, the Monte Carlo approach provides much more realistic data,
when compared with a worst-case scenario, as used in the past. Furthermore, it
provides information that takes account of the uncertainty or variability in
human exposure to microorganisms. Experimental data on human exposure to
pathogenic organisms via beef hamburgers (Cassin et al., 1998) and Salmonella
enteritidis from pasteurised liquid egg (Whiting and Buchanan, 1997) show that
exposure levels can vary widely, although no allowance was made for error.
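A minimal sketch of such a Monte Carlo calculation is given below; the distributions, growth model and parameter values are purely illustrative assumptions, and, as noted above, sampling storage time and temperature independently is itself a simplification.

import random

# Minimal sketch of a Monte Carlo exposure assessment: sample illustrative
# distributions for initial contamination, storage conditions and serving size,
# grow the population with a simple linear (log10) model, and summarise the
# resulting dose per serving. All parameter values are assumptions.

def one_iteration():
    log_n0 = random.gauss(-1.0, 0.5)          # log10 CFU/g at packing
    temp = random.uniform(4.0, 10.0)          # storage temperature (degC)
    time_h = random.triangular(24, 240, 72)   # storage time (h)
    # Note: time and temperature are sampled independently here, although in
    # practice they are often inter-dependent (a weakness flagged in the text).
    mu = max(0.0, 0.012 * (temp - 2.0))       # growth rate, log10 CFU/g per h
    log_n = min(log_n0 + mu * time_h, 8.0)    # cap at an assumed maximum density
    serving_g = max(random.gauss(100, 20), 10)  # serving size (g)
    return (10 ** log_n) * serving_g          # dose per serving (CFU)

doses = sorted(one_iteration() for _ in range(10_000))
print(f"median dose:     {doses[len(doses) // 2]:.1f} CFU")
print(f"95th percentile: {doses[int(0.95 * len(doses))]:.1f} CFU")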
Dose–response relationship
In risk assessment, much attention is given to dose–response relationships,
which are considered essential in risk assessments for toxins, food additives,
drug residues, etc. After MRA became a legal requirement in 1995, attention
was also focused on microorganisms in this respect. Although it is clear that
experimental use of animal models has provided important information on
mechanisms of pathogenicity in organisms such as L. monocytogenes
(Notermans et al., 1998), the data obtained cannot be used to derive dose–response
relationships for humans. Instead, such relationships are mainly based
on data from human volunteer studies or the analysis of foodborne disease
outbreaks involving microorganisms. These data show considerable variation,
even between the serotypes of Salmonella (Kothary and Babu, 2001). Currently,
however, reliable information on microbiological dose–response relationships is
still very scarce. Among the difficulties is the fact that challenge studies on
volunteers can only be carried out with the less dangerous pathogens and, of
course, the volunteers will usually be healthy adults. In practice, foodborne
infections are commonly seen in the more vulnerable groups within the general
population (infants, the elderly, people undergoing treatment with
immunosuppressive drugs, people with AIDS). These individuals may constitute
about 20% of the whole population.
A further aspect, which must be taken into account, is the physiological
condition of the disease agent. This will affect virulence and, in turn, the
dose–response relationship (Abee and Wouters, 1999). The situation is complicated by
the ability of some microorganisms to protect themselves against external stress
factors that might arise in minimally processed food products (Abee and
Wouters, 1999; Hecker and Völker, 1998). Protection may also be afforded
against the acid conditions of the stomach, during passage of the contaminated
food following ingestion (Abee and Wouters, 2002). On the other hand,
virulence may be adversely affected by the nature of the food matrix, within
which the organism is contained, so various factors must be considered.
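For illustration, the sketch below shows two dose–response forms commonly used in this context, an exponential and an approximate Beta-Poisson model; the parameter values are invented and are not fitted to any particular pathogen.

import math

# Minimal sketch of two dose-response forms used in MRA; parameters are
# purely illustrative.

def exponential_model(dose, r):
    """P(response) = 1 - exp(-r * dose); each ingested cell acts independently."""
    return 1.0 - math.exp(-r * dose)

def beta_poisson_model(dose, alpha, beta):
    """Approximate Beta-Poisson: P = 1 - (1 + dose/beta) ** -alpha."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

for dose in (1, 10, 100, 1000):
    print(dose,
          round(exponential_model(dose, r=0.005), 4),
          round(beta_poisson_model(dose, alpha=0.3, beta=50.0), 4))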
Risk characterisation
This is the outcome of exposure assessment and establishment of the dose–response
relationship, taking account of the severity of illness caused by a
particular pathogen. However, it suffers from the fact that both exposure
assessment and dose–response analysis are not yet clearly established in MRA.
Only time will tell whether the present approaches in exposure assessment and
dose–response analysis will result in widespread acceptance and application of
MRA. There is some debate about the possible use of epidemiological data on
microbial foodborne illness as an alternative for the purposes of risk
characterisation. Because relevant information is lacking in nutritional risk
assessment, use of epidemiological data has become common and is applied
successfully. In relation to microbial foodborne illness, data collected in
countries such as the USA, the UK and the Netherlands could be used to
determine, for example, the incidence rate for human salmonellosis caused by
egg consumption, eating of poultry meat, etc. The uncertainty of the outcome of
that kind of calculation is relatively well defined and very much less than that
from data based on exposure assessment and dose–response modelling. Also, the
use of epidemiological methods, such as case-control and cohort studies, allows
the most important risk factors to be identified.
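The arithmetic of such an incidence-based calculation is shown in the minimal sketch below; every figure in it (case counts, under-reporting factor, attribution fraction, population, servings) is an invented placeholder, not real surveillance data.

# Minimal sketch of an incidence-based risk characterisation.
# All figures are invented placeholders used only to show the arithmetic.

reported_cases = 12_000          # laboratory-confirmed salmonellosis cases per year
under_reporting_factor = 3.2     # assumed multiplier for unreported illness
fraction_due_to_eggs = 0.25      # assumed attribution to egg consumption
population = 16_000_000
egg_servings_per_person_year = 150

true_cases = reported_cases * under_reporting_factor
egg_cases = true_cases * fraction_due_to_eggs
incidence_per_100k = 100_000 * egg_cases / population
risk_per_serving = egg_cases / (population * egg_servings_per_person_year)

print(f"egg-attributed cases/year: {egg_cases:.0f}")
print(f"incidence per 100,000:     {incidence_per_100k:.1f}")
print(f"risk per serving:          {risk_per_serving:.2e}")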
Where to go from here?
The introduction of MRA is essential in order to assess the risk and severity of a
microbial foodborne disease. For the management of an unacceptable risk, FSOs
would need to be formulated. These should not be simply microbiological
criteria, such as a specified number of cells of a particular pathogen that can be
present in a food at the time of consumption. A better approach is that used
recently in the USA, where targets have been established as FSOs for raw
poultry and red meat products (Buchanan et al., 2001). The target for foodborne
disease caused by poultry consumption is to reduce the present level by 50%
over a period of 10 years (see Section 2.5.2). Setting a target for raw products of
this kind is an attractive proposition, but its success depends largely on the
availability of an appropriate means of reducing microbial contamination of the
product and a reliable system for collecting data on foodborne disease.
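For orientation, the constant year-on-year improvement implied by such a target can be computed as below; the 50% reduction over 10 years is taken from the text, while the assumption of a constant annual rate is mine.

# Minimal sketch: constant annual reduction implied by a 50% reduction over 10 years.
target_fraction_remaining = 0.5
years = 10
annual_reduction = 1 - target_fraction_remaining ** (1 / years)
print(f"required reduction per year: {annual_reduction:.1%}")   # about 6.7%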
Verification
Currently, an important issue in microbiological risk analysis is the process of
verification, which is a means of determining whether the analysis, including
MRA, has been carried out correctly and that an acceptable level of protection
has been obtained. The process of verification is presented schematically in Fig.
2.6. Verification comprises several elements: (i) an evaluation to determine
whether the risk analysis resulted in FSOs and, when introduced, whether these
met the expectations of the stakeholders, i.e. all those involved in the process
and, if not, (ii) adaptation of the FSOs, or (iii) re-evaluation of the MRA. The
last step is also relevant when new scientific information becomes available that
questions the value of the MRA. Adaptation of FSOs may also be necessary as a
result of epidemiological data on the frequency of foodborne diseases, data from
microbiological monitoring of the food product or any new information, such as
that involving changes in risk factors. This last point illustrates the dynamic
nature of the circumstances involved in the production of microbiologically safe
food.

Fig. 2.6 The process of verification of microbial risk assessment.
2.6 List of abbreviations
ADI Acceptable Daily Intake
ALARA As Low As Reasonably Achievable
ALOP Appropriate Level Of Protection
CAC Codex Alimentarius Commission
CCFAC Codex Committee on Food Additives and Contaminants
CCFH Codex Committee on Food Hygiene
CCP Critical Control Point
CCPR Codex Committee on Pesticide Residues
CCRVDR Codex Committee on Residues of Veterinary Drugs in Food
FAO Food and Agriculture Organisation of the United Nations
FDA Food and Drug Administration
FSO Food Safety Objective
GAP Good Agricultural Practice
GMP Good Manufacturing Practice
HACCP Hazard Analysis Critical Control Points
HTST High-Temperature, Short-Time
IAMFES International Association of Milk, Food and Environmental
Sanitarians
IAFP International Association for Food Protection
ICMSF International Commission on Microbiological Specifications for
Foods
JECFA Joint FAO/WHO Expert Committee on Food Additives
JMPR Joint FAO/WHO Meeting on Pesticide Residues
MRA Microbiological Risk Assessment
MRL Maximal Residue Level
NACMCF US National Advisory Committee on Microbiological Criteria for
Foods
PMTDI Provisional Maximal Tolerable Daily Intake
PTWI Provisional Tolerable Weekly Intake
QRA Quantitative Risk Analysis
SPS Agreement WTO Agreement on the Application of Sanitary and
Phytosanitary Measures
WHO World Health Organisation
WTO World Trade Organisation
2.7 References
ABEE T and WOUTERS J A (1999), 'Microbial stress response in minimal
processing', Int. J. Food Microbiol. 50 65–91.
ABEE T and WOUTERS J A (2002), 'Stress response and food safety', in
Symposium Proceedings Frontiers in Fermentation and Preservation,
Joint Meeting Society for Applied Microbiology, UK/The Netherlands
Society for Microbiology, Wageningen.
AMERICAN PUBLIC HEALTH ASSOCIATION, SUBCOMMITTEE ON METHODS FOR THE
MICROBIOLOGICAL EXAMINATION OF FOODS (1966), Recommended
Methods for the Microbiological Examination of Foods, 2nd ed., American
Public Health Association Inc, New York.
ANON. (1995), Trading into the Future, World Trade Organisation, Geneva.
BAIRD-PARKER T (2000), 'The production of microbiologically safe and stable
food', in Lund B M, Baird-Parker T C and Gould G W, The
Microbiological Safety and Quality of Food, Aspen Publishers,
Gaithersburg, 3–18.
BALL C O (1923), 'Thermal process time for canned food', Bull. Natl. Res.
Council No. 37 Vol 7, Part 1, Natl. Res. Council, Washington, DC.
BARENDSZ A W (1995), 'Kwaliteitsmanagement: HACCP de ontbrekende
schakel', in HACCP, A Practical Manual, Keesing Noordervliet, Houten.
BERGDOLL M S (1989), 'Staphylococcus aureus', in Doyle M P, Foodborne
Bacterial Pathogens, Marcel Dekker Inc, New York, 464–524.
BIGELOW W D, BOHART G S, RICHARDSON A C and BALL C O (1920), Heat
Penetration in Processing Canned Foods, Bull. No. 16-L, Res. Labs. Natl.
Canners Assoc., Washington, DC.
BROWN G L, COLWELL D C and HOOPER W L (1968), 'An outbreak of Q fever in
Staffordshire', J. Hygiene, Cambridge 66 649–655.
BRYAN F L (1992), Hazard Analysis Critical Control Point Evaluations: A Guide
to Identifying Hazards and Assessing Risks Associated with Food
Preparation and Storage, WHO, Geneva.
BUCHANAN R L et al. (2001), 'Moving beyond HACCP – Risk management and
food safety objectives', in Symposium Abstracts, IAFP 88th Annual
Meeting, Minneapolis.
CAC/GL30 (1999), Principles and Guidelines for the Conduct of Microbiological
Risk Assessment, Codex Alimentarius Commission, Food and Agriculture
Organisation, World Health Organisation, Rome.
CAC, COMMITTEE ON FOOD HYGIENE (1991), Draft Principles and Applications of
the Hazard Analysis Critical Control Point (HACCP) System, Alinorm 93/
13, Appendix VI, Food and Agriculture Organisation, World Health
Organisation, Rome.
CAC, COMMITTEE ON FOOD HYGIENE (1997), Hazard Analysis Critical Control
Point (HACCP) and Guidelines for its Application, Alinorm 97/13, Food
and Agriculture Organisation, World Health Organisation, Rome.
CASSIN M H, LAMMERDING A M, TODD E C D, ROSS W and MCCOLL R S (1998),
'Quantitative risk assessment of Escherichia coli O157:H7 in ground beef
hamburgers', Int. Journal of Food Protection 41 21–44.
CHUNG K-T, STEVENS S F and FERIS D H (1995), 'A chronology of events and
pioneers of microbiology', SIM News 45 3–13.
D'AOUST J-Y (1989), 'Salmonella', in Doyle M P, Foodborne Bacterial
Pathogens, Marcel Dekker Inc., New York, 328–446.
DEPARTMENT OF HEALTH, EDUCATION AND WELFARE PROCEEDINGS (1972),
'National Conference on Food Protection', US Governmental Printing
Office, Washington, DC.
DERRICK E H (1937), '"Q" fever, a new fever entity: clinical features, diagnosis,
and laboratory investigation', Med J Australia, 2 281–299.
DOBELL C (1960), Antony van Leeuwenhoek and his 'Little Animals', Dover
Publications, New York.
DOYLE M P (1989), Foodborne Bacterial Pathogens, Marcel Dekker Inc., New
York, 328–446.
ENRIGHT J B, SADLER W W and THOMAS R C (1956), 'Observations on the thermal
inactivation of the organism of Q fever in milk', J Milk Food Technol.
10 313–318.
ENRIGHT J B, SADLER W W and THOMAS R C (1957), Thermal Inactivation of
Coxiella burnetii and its Relation to Pasteurisation of Milk, Public Health
Service Publication No. 517, US Government Printing Office,
Washington, DC.
ESTY J R and MEYER K F (1922), 'The heat resistance of spores of Bacillus
botulinus and allied anaerobes. XI', J. Inf. Dis., 31 650–663.
FAO/WHO (1995), Application of Risk Analysis to Food Standards Issues, Report
of a joint FAO/WHO Expert Consultation, World Health Organisation,
Geneva, Switzerland.
FAO/WHO (1997), Risk Management and Food Safety, Report of a joint FAO/
WHO Expert Consultation, Rome, Italy.
FAO/WHO (1998), The Application of Risk Communication to Food Standards
and Safety Matters, Report of a joint FAO/WHO Expert Consultation,
Rome, Italy.
FAO/WHO (2000a), Risk Assessment of Microbiological Hazards in Foods,
Report of a joint FAO/WHO Expert Consultation, Rome, Italy.
FAO/WHO (2000b), The Interaction Between Assessors and Managers of
Microbiological Hazards in Food, Report of a joint WHO Expert
Consultation, Kiel, Germany.
FARBER J M and PETERKIN P I (2000), 'Listeria monocytogenes', in Lund B M,
Baird-Parker T C and Gould G W, The Microbiological Safety and Quality
of Food, Aspen Publishers Inc., Gaithersburg, 1178–1232.
FDA (1973), Acidified Foods and Low Acid Foods in Hermetically Sealed
Containers, in Code of US Federal Regulations, Title 21, Parts 113 and
114 (renumbered since 1973), FDA, Washington, DC.
GRANUM P E (1997), 'Bacillus cereus', in Doyle M P, Beuchat L R and Montville
T J, Food Microbiology: Fundamentals and Frontiers, ASM Press,
Washington, 327–336.
HARTMAN P A (1997), 'The evolution of food microbiology', in Doyle M P,
Beuchat L R and Montville T J, Food Microbiology: Fundamentals and
Frontiers, ASM Press, Washington, 3–13.
HATHAWAY S C (2001), 'An international perspective on food safety objectives
– risk management and food safety objectives', in Symposium Abstracts,
IAFP 88th Annual Meeting, Minneapolis.
HAUSCHILD A H W (1982), 'Assessment of botulism hazards from cured meat
products', Food Technol. 36 95–104.
HECKER M and VÖLKER U (1998), 'Non-specific, general and multiple stress
resistances of growth-restricted Bacillus subtilis cells by the expression of
the σB regulon', Mol. Microbiol. 29 1129–1136.
HUTT P B and HUTT P B II (1984), 'A history of government regulation of
adulteration and misbranding of food', Food Drug Cosm. Law 39 2–73.
IAMFES (INTERNATIONAL ASSOCIATION OF MILK, FOOD AND ENVIRONMENTAL
SANITARIANS, INC.) (1991), Procedures to Implement the Hazard Analysis
Critical Control Point System, IAMFES document 502, Ames.
ICMSF (THE INTERNATIONAL COMMISSION ON MICROBIOLOGICAL SPECIFICATIONS
OF FOODS) (1974), Micro-organisms in Foods 2. Sampling for
Microbiological Analysis: Principles and Specific Applications, University of
Toronto Press, Toronto.
ICMSF (THE INTERNATIONAL COMMISSION ON MICROBIOLOGICAL SPECIFICATIONS
OF FOODS) (1988), Micro-organisms in Foods. Application of the Hazard
Analysis Critical Control Point (HACCP) System to Ensure
Microbiological Safety and Quality, Blackwell Scientific Publications,
Oxford.
ICMSF (1994), 'Choice of sampling plan and criteria for Listeria monocytogenes',
Int. J. Food Microbiol. 22 89–96.
KAMPELMACHER E H (1971), Since Eve Ate Apples, Inaugural address, 11
November, Wageningen University.
KOTHARY M H and BABU U S (2001), 'Infective dose in volunteers: a review', J.
Food Safety 21 49–73.
LUND B M and PECK M W (2000), 'Clostridium botulinum', in Lund B M, Baird-
Parker T C and Gould G W, The Microbiological Safety and Quality of
Food, Aspen Publishers Inc., Gaithersburg, 1057–1109.
MAURIN M and RAOULT D (1999), 'Q Fever', Clin. Microbiol. Rev. 12 518–553.
MCCLANE B A (1979), 'Clostridium perfringens', in Doyle M P, Beuchat L R and
Montville T J, Food Microbiology: Fundamentals and Frontiers, ASM
Press, Washington, 305–326.
MONOD J (1949), 'The growth of bacterial cultures', Ann. Rev. Microbiol. 3 371–
394.
MONOD J (1950), 'La technique de culture continue. Théorie et application', Ann.
Inst. Pasteur, 79 390–407.
MOSSEL D A A (1982), Microbiology of Foods, Utrecht, University of Utrecht.
NOTERMANS S, GALLHOFF G, ZWIETERING M H and MEAD G C (1995), 'Identification
of critical control points in the HACCP system with a quantitative effect on
the safety of food products', Food Microbiol. 12 93–98.
NOTERMANS S, DUFRENNE J, TEUNIS P, BEUMER R, TE GIFFEL M and PEETERS
WEEM P (1997), 'A risk assessment study of Bacillus cereus present in
pasteurised milk', Food Microbiol. 30 157–173.
NOTERMANS S, DUFRENNE J, TEUNIS P and CHACKRABORTY T (1998), 'Studies on
the risk assessment of Listeria monocytogenes', J. Food Protection, 61
244–248.
PFLUG I J and GOULD G W (2000), 'Heat treatment', in Lund B M, Baird-Parker T
C and Gould G W, The Microbiological Safety and Quality of Food,
Aspen Publishers Inc., Gaithersburg, 37–63.
RATKOWSKY D A (1993), 'Principles of nonlinear regression modelling', J. Gen.
Microbiol. 12 245–249.
ROCOURT J and COSSART P (1997), 'Listeria monocytogenes', in Doyle M P,
Beuchat L R and Montville T J, Food Microbiology: Fundamentals and
Frontiers, ASM Press, Washington, 337–352.
ROSE J B and GERBA C P (1991), 'Use of risk assessment for development of
microbial standards', Water Sci. Technol. 24 29–38.
ROSS T and MCMEEKIN T A (1993), 'Predictive microbiology', Int. J. of Food
Protection 23 241–264.
SPENCER R and BAINES C R (1964), 'The effect of temperature on the spoilage of
wet fish. I. Storage at constant temperatures between 1 °C and 25 °C',
Food Technol., 18 769–772.
STEINKRAUS K H (1996), Handbook of Indigenous Fermented Foods, Marcel
Dekker Inc, New York.
TANNAHILL R (1973), Food in History, Stein and Day Publishers, New York.
TOUSSAINT-SAMAT M (1992), History of Food, Blackwell Publishers,
Cambridge.
UNITED STATES FOOD AND DRUG ADMINISTRATION, DIVISION OF MICROBIOLOGY
(1972), Bacteriological Analytical Manual, 3rd ed., Washington, DC.
UNTERMANN F, JAKOB P and STEPHAN R (1996), '35 Jahre HACCP-System. Von
NASA-Konzept bis zu den Definitionen des Codex Alimentarius',
Fleischwirtschaft 76 589–594.
VAN ERMENGEM E (1896), 'Ueber einen neuen anaeroben Bacillus und seine
Beziehungen zum Botulismus', Z. Hyg. Infectionskrankh 26 1–56. English
translation (1979), Rev. Infect. Dis. 1 701–719.
WENDORF F R, SCHILD R, EL HADIDI N, CLOSE A E, KOBUSIEWICZ H, WIECKOWSKA H,
ISSAWI B and HAAS H (1979), 'Use of barley in the Egyptian late
paleolithic', Science 205 1341–1348.
WHITING R C and BUCHANAN R L (1997), 'Predictive modelling', in Doyle M P,
Beuchat L R and Montville T J, Food Microbiology: Fundamentals and
Frontiers, ASM Press, Washington, 728–739.
ZWIETERING M, NOTERMANS S and DE WIT J (1996), 'The application of
predictive microbiology to estimate the number of Bacillus cereus in
pasteurised milk at the point of consumption', Int. J. Food Microbiol. 30
55–70.
Part I
The methodology of microbiological risk
assessment
3
Microbiological risk assessment (MRA):
an introduction
J.-L. Jouve, Ecole Nationale Vétérinaire de Nantes

3.1 Introduction
New or re-emerging microbiological food safety problems have established the
need to ensure that microbiological hazards are managed against the background
of a sound scientific process. Essentially, this process consists of gathering and
analysing scientific information and data with the objective of identifying what
pathogens and/or their toxins or metabolites, foods or situations may lead to
foodborne illness, and then of determining the magnitude of the impact these
may have on human health, together with an identification of the factors that
influence it. This scientific process is known as 'microbiological risk
assessment' (MRA). The present requirement is that MRA should be conducted
according to a structured format, and the development of quantitative,
probabilistic approaches is encouraged.
There are a number of areas where risk assessment may inform public or
private decision making. Where public organisations are concerned, these
encompass:
• Policy determination (determination of an 'appropriate level of protection',
identification of risk mitigation strategies and establishment of priorities for
action).
• Control activities (analysis and evaluation of the impact of production
systems on food safety, identification of the best points at which to
implement control, comparison of control options/mitigation measures).
• Design and implementation of monitoring and surveillance programmes and
inspection systems.
• Apportionment of resources (how much public money is it desirable to spend
for various purposes and on what?).
• Guidance for food safety and microbial research (acquisition of information
and data that are lacking in the actual knowledge base).
• Education (advice to private organisations on how to manage food safety
risks and to individuals on their food choices and related behaviour).
Similarly, in addition to complying with their statutory duty, private
organisations involved in the production/manufacture of foods should ensure
that they manage food safety risks in a way that is consistent with the
expectations and requirements of society. They should, in particular, determine
the relative importance of factors and parameters in the production/manufacture/
handling systems they operate, their possible variations, and their impact on the
safety of the food. They should equally design, or consider altering, their
products, processes and/or control measures to meet the level of protection
required. In doing so, food-producing companies should ensure that their
specific policy choices and market constraints do not compromise the intangible
requirement for food safety.
A formal, and preferably quantitative, microbiological risk assessment
(MRA) is a useful tool in carrying out the above functions (Lammerding, 1996):
• It provides a structured and explicit approach to examining the nature and
characteristics of the hazard(s) under consideration, the production to
consumption pathways and how they impact on the fate of hazards and level
of human exposure, the potential and severity of adverse consequences and
the factors involved and the subsequent risk incurred.
• It improves an understanding of the key issues and assists the efforts to foster
resources and interventions where they are most necessary and/or useful. In
particular, it provides input for adoption of a goal-setting approach to
legislation and standards into the food safety control and assurance
programmes of the food industry.
• The MRA report serves as a source of information and a database for
informed decision making. It also provides an aid to identify where gaps in
knowledge exist and thus, where additional information is needed. It
therefore helps to identify research needs, to establish research priorities and
to design commissioned studies.
• It increases consistency and transparency of the analytical process. A formal
MRA provides explicit data that are amenable to review. It describes
shortfalls, such as the nature and extent of uncertainties attached to the data.
It makes explicit, and focuses attention on, the structure of models used, and
the assumptions made, and discusses how these impact on the risk estimates.
• It facilitates communication between the scientific and technical experts, the
decision makers and other interested parties. It makes the risk and its
determinants more transparent.
• It assists the appraisal of the health impact of risk management options by
allowing model simulations of control measures before they are
implemented. This also allows for more rigorous application of other tools
utilised in decision making.
However, some inherent limitations of MRA have been discussed in many
documents. Suffice it to say that MRA is, and will probably continue to be, an
imprecise discipline. It utilises the information that is currently available:
therefore the results of an MRA can only be as good as the information and
models utilised. Uncertainties permeate the whole process, caused in particular
by the incompleteness of data, the imperfect understanding of biological
processes and the methodology adopted to design and operate models. Also,
much criticism has been raised because value judgements and policy choices
may be incorporated in the process. MRA often operates in a decision-making
context that may impose pressures on the content of the assessment itself, unless
appropriate safeguards are established. The usefulness of MRA depends on the
decision context. Finally, one should be aware of the warning by Ralph Nader
(1993) who perceived risk assessment as 'a massive overcomplication and
overabstraction' that attempts to make precise something that by nature cannot
be precise.
With regard to these limitations, the value of MRA should not be
exaggerated. MRA will never provide simple solutions to complex problems:
at its best, it can only be a credible, science-based input into the
multidimensional, value-laden considerations that contribute to shape decisions
regarding food safety. Risk assessment will never replace sound judgement and
considered governance with regard to risk issues. It is the responsibility of
scientists and risk assessors, combined with risk managers, to increase the
reasonableness, consistency, transparency and credibility of MRA. This being
ensured, it can be expected that, in many cases, MRA will contribute to
improving decisions and be an essential aid to promoting understanding and
confidence in resultant actions.
3.2 Key steps in MRA
In the food sector, microbiological risk assessments have been conducted for
many years in one form or another by the scientific community, the food
industry and regulatory bodies. Recently, however, the need to adopt more
formal approaches and principles led to the development of framework(s) for
microbiological risk assessment for foods.
In 1999, the Codex Alimentarius Commission adopted, on the proposal of the
Codex Committee on Food Hygiene, a document entitled Principles and
Guidelines for the Conduct of Microbiological Risk Assessment (Alinorm 99/13
A). This document defines microbiological risk assessment as
A scientifically based process consisting of the following steps:
(i) hazard identification,
the identification of biological agents capable of causing adverse health
effects and which may be present in a particular food or group of
foods,
(ii) hazard characterisation,
the qualitative and/or quantitative evaluation of the nature of the
adverse health effects associated with the hazard. [A desirable feature
of hazard characterisation is establishing a dose–response relationship,
i.e. the determination of the relationship between the magnitude of
exposure (dose) to a biological agent and the severity and/or frequency
of associated adverse health effects (response)],
(iii) exposure assessment,
the qualitative and/or quantitative evaluation of the likely intake of
biological agents via food as well as exposures from other sources if
relevant,
and (iv) risk characterisation,
the process of determining the qualitative and/or quantitative estimation,
including attendant uncertainties, of the probability of occurrence and
severity of known or potential adverse health effects in a given
population, based on hazard identification, hazard characterisation and
exposure assessment.
This document provides an outline of the elements of a microbiological risk
assessment, indicating the types of decision that need to be considered at each
step. The Codex guidelines are interesting for several reasons. Although they
bear similarities with the paradigms utilised in other fields of activities and thus
ensure commonalties of approaches, they allow features unique to the attributes
and concerns of microbiological food safety to be incorporated. They are
flexible enough to handle a variety of applications, they may be used for
planning and conducting qualitative or quantitative MRAs of varying com-
plexity and they have been applied successfully to a variety of MRAs in
different food safety contexts. As a consequence, they can be considered as an
internationally recognised framework of primary interest to governments and
other organisations, companies and other interested parties that need to prepare a
microbiological risk assessment for foods.
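As a simple illustration of how the four Codex steps might be organised in practice, the sketch below represents them as a small data structure; the class and field names are my own assumptions for illustration and are not part of the Codex document.

from dataclasses import dataclass, field
from typing import List, Optional

# Minimal sketch: the four Codex steps as a data structure (names are assumptions).

@dataclass
class HazardIdentification:
    pathogen: str
    foods: List[str]

@dataclass
class HazardCharacterisation:
    health_effects: str
    dose_response_model: Optional[str] = None   # e.g. 'exponential', 'Beta-Poisson'

@dataclass
class ExposureAssessment:
    route: str
    doses_cfu: List[float] = field(default_factory=list)

@dataclass
class RiskCharacterisation:
    risk_per_serving: float
    uncertainty_note: str

@dataclass
class MicrobiologicalRiskAssessment:
    hazard_identification: HazardIdentification
    hazard_characterisation: HazardCharacterisation
    exposure_assessment: ExposureAssessment
    risk_characterisation: RiskCharacterisation

mra = MicrobiologicalRiskAssessment(
    HazardIdentification('Listeria monocytogenes', ['ready-to-eat foods']),
    HazardCharacterisation('listeriosis in susceptible groups', 'exponential'),
    ExposureAssessment('consumption of contaminated servings'),
    RiskCharacterisation(risk_per_serving=1e-8,
                         uncertainty_note='dose-response extrapolation dominates'),
)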
In addition to Codex, several groups have developed guidelines of more
general application, which have also been proposed for use in microbiological
risk assessment for foods. These documents are generally centred on the same
logic as the Codex guidelines and include all essential steps. However, they
differ in the denomination and grouping of stages and, at times, in the extent
to which risk management and risk communication are integrated in the risk
assessment framework. For instance, the framework developed by the ILSI
Risk Science Institute (ILSI, 2000) places a specific emphasis on the need for
a dialogue among the risk manager, the risk assessor and the stakeholders to
utilise resources to produce scientifically sound risk assessments relevant to
management decisions and public concerns. Therefore, the initial step in the
ILSI framework is 'problem formulation', a systematic planning step that
identifies the goals, breadth and focus of the microbiological risk assessment,
the regulatory and policy context of the assessment and the major factors that
need to be addressed for the assessment. The risk assessment itself is defined
by the 'analytical phase', which is the technical examination of data
concerning potential pathogen exposure and associated human health effects.
Elements of the process are 'characterisation of exposure', which includes
pathogen characterisation, pathogen occurrence and exposure analysis, and results
in an exposure profile, and 'characterisation of human health effects', which
includes host characterisation, evaluation of human health effects and
quantification of the dose–response relationship; the result is a host–pathogen
profile. Risk characterisation is the final phase, combining the information of the
exposure profile and the host–pathogen profile. In another context, the
framework for import risk analysis developed by the Office International des
Epizooties (OIE, 1998) recognises that hazard identification is the necessary
first stage, but places it outside the risk assessment process. The risk
assessment itself includes 'release assessment', a description of the biological
pathway(s) necessary for a risk source to introduce biological agents into a
particular environment, and a qualitative or quantitative estimate of the
complete process occurring; 'exposure assessment'; 'consequence assessment';
and concludes with 'risk estimation'.
Although differences in approach may create difficulties in communication
and understanding, differences between frameworks are only a minor problem,
provided that all the essential components described in the Codex document are
included, and that the approach is adapted to its specific purpose, is internally
consistent, and fulfils a number of essential principles. In particular,
• An MRA should clearly state the purpose of the exercise including the
form of the estimate that will be the output: the purpose and objective of
the MRA should be clearly identified, as well as the questions that the risk
assessment should answer. This requires an appropriate dialogue between
assessors and managers, without influencing the necessary independence and
integrity of the risk assessment.
• The MRA should be transparent: methods, assumptions and judgements
should be clearly stated and understandable to the intended audience, who
should also be provided with the information necessary to evaluate the nature
and adequacy of the data and methods utilised.
• Data should be of sufficient quality and precision: data and data collection
systems should be of demonstrable quality, whereas the best available
information and expertise should be applied in order to reduce uncertainty
and increase reliability of the risk estimate.
• The risk estimate should contain a description of uncertainty and where
the uncertainty arose during the risk assessment process: there should be
a clear understanding and description of any limitations in the data, methods
or models utilised in the risk assessment and of how these limitations
influence the risk estimate.
• Where appropriate, the MRA should consider the fate of the
microbiological hazard(s) in food and the disease process following
infection: the dynamics of microbial growth, survival or death should be
explicitly considered (and also, where applicable, the dynamics of toxin
formation and destruction). The interactions between humans and the
pathogenic agent following consumption and infection as well as potential for
further spread should be part of the assessment.
• Risk estimates, where possible, should be reassessed over time against
independent human illness data and when new data become available.
Based on the Codex framework, qualitative or quantitative microbiological
risk assessments may be undertaken (Lammerding and Fazil, 2000).
Qualitative risk assessments provide a descriptive treatment of information,
based principally on collation and review of scientific literature and data. Most
traditional microbiological risk assessments in the food sector have been, and
still are, mainly qualitative. Qualitative risk assessments remain the only option
when data, time or other resources are limited. Alternatively, they may be
undertaken as a first evaluation of a food safety issue and/or to determine
whether a more sophisticated, quantitative approach is necessary. Qualitative
MRAs should follow the systematic approach delineated in the Codex
framework and include sections dealing with hazard identification, hazard
characterisation (including, where available, review of dose–response
information), exposure assessment and risk characterisation.
Quantitative risk assessments are mathematical analyses of numerical data,
based on mathematical (and probabilistic) models. The development of
quantitative approaches to microbiological risk assessment is currently
encouraged, based on the assumption that these would increase transparency,
provide a better insight into the microbiological risk, while allowing for
comparisons such as between processes or between the effectiveness of
different control options. By developing mathematical models, risk assessors
are forced to carefully consider and characterise the scientific basis for their
estimates, including an explicit statement of all the assumptions made. The
models utilised are, in themselves, important scientific tools: they provide a
structured framework for analysing the information available, they aid in
identifying data gaps and assist in optimising the collection of data where they
are most needed, they provide a context for discussing the biological
processes involved and for improving their understanding, and they help in
identifying and focusing on critical issues. However, the expectations placed
on quantitative MRA should not be exaggerated. The results should be
interpreted carefully and are valid only as far as the data and assumptions are
valid. Quantitative estimates are by no means exact values, but rather an
indication of the order of probability of an adverse event occurring. Also, in
developing models, mathematics and statistics at an advanced and increasingly
sophisticated level are used, making their review and use by non-specialists
difficult. Significant efforts must be made to present the models and results of
a quantitative MRA in a format accessible to the different groups that would
make use of the outputs.
In the following sections, the Codex framework will be utilised as a basis to
illustrate and briefly discuss the different elements that need to be considered at
each stage.
3.3 Hazard identification
Hazard identification is conventionally the first step in risk assessment. With
regard to food microbiology, the purpose of hazard identification is to identify
the microorganism(s) of concern that may be present in food.
For most of the formal microbiological risk assessments undertaken so far in
the food sector, the approach to hazard identification has been quite
straightforward and involved usually the a priori definition of a pathogen/
product or process combination, e.g. risk assessment of Listeria monocytogenes
in ready-to-eat foods. Most microorganisms considered so far are established
foodborne pathogens. The situation(s) to be assessed are identified by the risk
managers that commission the risk assessment. In such circumstances, the
association of a pathogen and a particular food is already well documented and
the requirements for formal information are minimal. In fact, in qualitative risk
assessments, hazard identification concentrates on gathering and collating
existing information on the characteristics of the pathogen that affect its ability
to be transmitted by the product and to cause disease in the host. In
quantitative risk assessment, this information serves as an input to the
development of the model utilised for further analysis.
This situation, however, is unique. It results from the fact that most formal
risk assessments publicly available have been commissioned by public
authorities with the aim of identifying, in relation to a given and well-
established pathogen, the foods or groups of foods that require action, of
determining appropriate preventive/control measures, and establishing
numerical limits and standards. In other circumstances, hazard identification
requires different approaches.
Hazard identification may be developed in relation to the assessment of the
risk potentially associated with a given product. In that case, hazard
identification is a categorisation activity, identifying which microbiological
agents may be transmitted by the food and be potential hazards in a given set of
situations. In that case, reliable hazard identification is dependent on the
availability of microbiological and epidemiological data to determine which
pathogens have been, or could be, associated with the product, and on the
availability of human (or animal) health data on the occurrence and levels of
pathogens in the product of concern. Recently, expert systems have been
developed to assist in this approach (Van Gerween et al., 2000).
Hazard identification may also be the first step in understanding a new, or
emerging, food safety problem. In such circumstances, microbiological hazard
identification is quite similar to hazard identification for toxic chemicals.
Specific emphasis would be given to evaluating the weight of the scientific
evidence for adverse effects in humans or animals (i.e. in determining, or
confirming, the strength of an association), in ascertaining the ways in which the
adverse effects may be expressed and the major sources of exposure.
3.4 Hazard characterisation/dose–response assessment
In the Codex framework, whereas the first stage of a microbiological risk
assessment consists primarily of identifying the microorganism(s) of concern in
a food, hazard characterisation is centred on providing a qualitative and/or
quantitative evaluation of the nature of the adverse effects. This would
preferably include a dose–response assessment, so that the dose–response
relationship identified at this stage can be combined subsequently with the
potential for exposure, to provide an estimate of the probability of adverse
effects which may occur.
In this context, it has to be borne in mind that the response of a human
population to a pathogen is highly variable. The frequency, severity and duration
of a microbiological disease is dependent on a variety of interacting factors
related to the pathogen, the host and the environment including the food vehicle.
These have been referred to as the 'infectious disease triangle' (Coleman and
Marks, 1998) and the hazard characterisation stage of the risk assessment should
provide information on their characteristics and on their interaction in
determining adverse health effects. Therefore, microbiological hazard
characterisation involves considering three elements: the review of the basic
characteristics of the pathogen, the host and the matrix; the description and
evaluation of the human health effects and the dose–response analysis. In
qualitative risk assessments, the approach is mainly discursive and these three
elements and the factors to be considered can be organised in the form of
structured questions that would govern the collection and analysis of
information and data. The approach may be the same in quantitative risk
assessments, but in that case, the information and data collected are collated to
serve principally as a basis for the elaboration of dose–response models.
3.4.1 Review of the characteristics of the pathogen, the host and the
environment
When not already done during hazard identification, this stage involves
determination and review of the characteristics of the pathogen that affect its
ability to be transmitted to and cause disease in the host. Specific consideration
should be given to the intrinsic properties of the pathogen that influence
infectivity, virulence and pathogenicity, to the factors that may affect or alter
these characteristics and to the variability in the microbiological population (e.g.
strain variation). Additional consideration should be given to the tolerance to
adverse conditions and resistance to control or treatment processes and to the
potential for secondary spread.
The factors related to the host refer to the characteristics of the potentially
exposed population that may influence its susceptibility to a given pathogen.
Specific consideration should be given to the host¡¯s intrinsic or acquired traits
that modify the likelihood of infection and the probability and/or severity of
illness. Many host-related factors may be considered, such as age, immune
status, genetic factors, concurrent or recent infections, use of medication,
pregnancy, breakdown of physiological barriers, nutritional status, and social
and/or behavioural traits. Not all of these factors would be relevant in a specific
risk assessment. What is important is that hazard characterisation provides
information of who is at risk, and on the stratification of the exposed population
with regard to the relevant factors that influence susceptibility and severity.
With regard to foodborne pathogens, the factors related to the environment
are principally those that influence the survival of the pathogen through the
hostile environment of the stomach. These may include the conditions of
ingestion, the composition and structure of the food and the processing
conditions including the potential for microbial competition, etc.
3.4.2 Description and evaluation of adverse health effects
At this stage, the whole spectrum of possible effects should be considered,
including asymptomatic infections and clinical manifestation whether acute,
sub-acute or chronic (i.e. long-term sequelae). In all cases, the characterisation
should include a definition of what is an 'infection' and what constitutes a
clinical 'case'. An important element of the analysis of the clinical
manifestations is an evaluation of the severity of their possible outcomes.
Several indicators may be used (ILSI, 2000). For example, for mild gastro-
intestinal illnesses, consideration should be given to the duration of the disease,
or to the proportion of the population affected. Where medical or hospital care is
required, severity may be expressed in terms of costs (e.g. cost of treatment,
value of workdays lost). Where pathogens are associated with a certain degree of
mortality, an indicator would be the mortality rate. Recently, quality of life
indicators have been proposed for use in the evaluation of the human health
effects (Havelaar et al., 2000). These include the number of years lost and the
number of years lived with disability, integrated in one single indicator of the
global health burden, the Disability Adjusted Life Years. A definition of the
severity scale should be provided, specifying what is the indicator chosen and
how it can be measured.
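A minimal sketch of such a Disability Adjusted Life Years calculation is shown below; the outcome tree, case numbers, durations, disability weights and life-years lost are all invented for illustration.

# Minimal sketch of a DALY calculation: DALY = YLL + YLD.
# All numerical values are invented placeholders.

outcomes = [
    # (outcome, cases per year, duration in years, disability weight)
    ("gastroenteritis",  25_000, 5 / 365, 0.10),
    ("chronic sequelae",    300, 1.0,     0.25),
]
deaths_per_year = 10
life_years_lost_per_death = 20

yld = sum(cases * duration * weight for _, cases, duration, weight in outcomes)
yll = deaths_per_year * life_years_lost_per_death
print(f"YLD: {yld:.0f}  YLL: {yll:.0f}  DALY: {yld + yll:.0f} per year")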
Information on the adverse health effects should also include consideration of
the epidemiological pattern of the disease. The frequency, incidence and
prevalence of the disease and/or its clinical forms should be addressed, together
with their evolution with time and seasonal variations. The description should
include a division of the clinical forms according to specific subpopulations.
Specific consideration would also be given to the extent and amount of
asymptomatic carriers and potential for secondary transmission. The description
should include information on uncertainties and their sources. Wherever
possible, the characterisation should incorporate information on the
physiopathology of the disease, i.e. on the biological mechanisms involved.
3.4.3 Dose–response analysis
Dose–response analysis consists of integrating a number of considerations to
determine the relationship between the magnitude of exposure (the dose) and the
manifestation, frequency and/or severity of associated adverse health effects (the
response) in an exposed population. Elements to be considered may include the
characteristics of the pathogen, the host and the matrix, the route and level of
exposure, and the adverse effect considered. Where appropriate information is
available, it also involves a discussion of the biological mechanisms involved.
Dose–response analysis is strongly influenced by the quantity and quality of
data available. These may result from experimental studies (human volunteer
feeding studies, animal models, in vitro studies) or from observational studies
(epidemiological investigations, routinely collected data, specific studies). Each
approach has advantages and disadvantages (Buchanan et al., 2000). It is
therefore critical to acknowledge the strengths and weaknesses of the methods of
collection, and the quality of the data utilised, and to express any uncertainty
that exists.
Several problems arise when determining a dose–response relationship for
foodborne pathogens. In particular, there is a need to express clearly what
constitutes the actual dose (number of pathogens enumerated per food unit,
number ingested, number that survive through the stomach) and what is the
response (infection, clinical case, specific outcome or indicator). A specific
difficulty refers to the lack of data to characterise infection: the translation of
infection into illness and of illness into different outcomes. In many cases, the
available data may only allow the description of a relationship between a dose
and clinical illness. Other difficulties arise from the numerous sources of
variability that should be taken into account (e.g. in virulence and pathogenicity
of the pathogens, in attack rates and in host susceptibility). Therefore, it is
essential that the dose–response analysis clearly identifies which information
has been utilised and how it has been obtained. The elements and extent of
variability should be clearly acknowledged. The uncertainties and their sources
should be thoroughly described.
In the traditional approach to microbiological risk assessment, the analysis of
the dose–response relationship is based on the collation of the available clinical
or epidemiological information. At present, there is a tendency to develop
mathematical models. Mathematical models have been used for many years in
the field of toxicology. With regard to microbiological risk assessment, it is
expected that they would provide assistance in dose–response analysis when
extrapolation to low doses is necessary, and that they would provide useful
information when accounting for variability and uncertainty. An extensive
discussion of dose–response modelling goes far beyond the scope of this
introduction and may be found in Chapter 5. Suffice it to say that these models
and their use should be carefully considered. In particular, one has to be aware
that the data utilised and the mathematical models selected become the major
variables in the final risk estimate when extrapolating to low doses (European
Commission, 2000). Fitting different models to the same dataset, or using
different datasets with the same model, can give risk-specific doses (the dose
associated with a given probability of infection or risk, e.g. 1 in 10⁶) that differ by
several orders of magnitude. In this context, there cannot be one single
dose–response model, and the analyst can only make the best possible choice.
This implies that subjective choices have to be made. Such choices and the
dose–response relationship will have a profound influence on the final risk
characterisation. Considering their impact on the decisions to be drawn from the
risk assessment (e.g. degree of conservatism), these choices are often not
divorced from policy considerations. They should be discussed and agreed with
the risk managers that commission the risk assessment. For the sake of
transparency, it is a requirement that the basis for dose–response analysis and for
the selection of the mathematical model (as determining the slope of the
dose–response curve) should be stated and justified, and their implications on the final
risk estimate and its potential use clearly outlined.
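To make this sensitivity to model choice concrete, the minimal sketch below constrains an exponential and a Beta-Poisson model to the same (illustrative) median infectious dose and then asks each for the dose corresponding to a 1-in-10⁶ probability of infection; all parameter values are assumptions, not fitted to any pathogen.

import math

# Minimal sketch: two dose-response models with the same ID50 can imply very
# different "risk-specific doses" at low risk levels. Parameters are assumptions.

ID50 = 1000.0                            # dose giving 50% probability (illustrative)
r = math.log(2) / ID50                   # exponential parameter matching that ID50
alpha = 0.05                             # Beta-Poisson shape (illustrative)
beta = ID50 / (2 ** (1 / alpha) - 1)     # Beta-Poisson scale matching the ID50

p = 1e-6
dose_exponential = -math.log(1 - p) / r
dose_beta_poisson = beta * ((1 - p) ** (-1 / alpha) - 1)

print(f"dose at risk 1e-6, exponential : {dose_exponential:.2e}")
print(f"dose at risk 1e-6, Beta-Poisson: {dose_beta_poisson:.2e}")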
3.5 Exposure assessment
Taking into account the Codex definition, the goal of exposure assessment is to
evaluate the level of microorganisms or microbial toxins in a food at the time of
consumption. However, the description of exposure may be more broadly
understood, and include the characterisation of the nature and size of the
population (or subpopulations) exposed, and of the route(s), magnitude,
frequency and/or duration of that exposure. For microbial pathogens, it has to
be realised that the description of exposure cannot include only the probability
of presence or absence of the pathogen or its occurrence, based on concentration.
It has also to consider the prevalence or distribution of microorganisms in space
and over time. Microbial populations may evolve under particular processing/
storage/product or vehicle/use/in vivo conditions. Because the nature and
probability of adverse effects may vary with the levels of microorganisms,
microbiological exposure assessment is faced with the need to develop a
dynamic approach, so as to account for the numerical changes in the microbial
population. This should include an evaluation of the role and impact of the
intrinsic and extrinsic factors that may influence these changes. Microbiological
exposure assessment should therefore involve the interactive characterisation of
the source(s), the route(s) of exposure, and the pathogen prevalence/occurrence,
to culminate in the evaluation of the magnitude, frequency and pattern of
exposure to a pathogen.
All approaches to exposure assessment, whether qualitative or quantitative,
are case-by-case exercises, and interpretative or evaluative scenarios are used. In
the available MRAs commissioned by national public authorities, or for
international purposes, these scenarios are not intended to reflect one specific,
local situation, but aim to be representative of mean, typical or most sensitive
situations in a region (or even throughout the world). It has to be acknowledged
that when a standard scenario is used, it is currently difficult to determine its
applicability. In addition, scenarios should reflect the effects of the variety of
factors that impact on pathogen levels and distribution and account for
variability and uncertainty in the parameters involved. Such scenarios are built
on available datasets. They may also be utilised as a basis for data collection.
Therefore attention should be given to the nature and compatibility of data
collected in different contexts, and on how they can effectively contribute to
characterise a valid, or credible, 'reference' situation.
3.5.1 Characterisation of the source(s), route(s) of exposure and pathogen
occurrence
Where the primary goal of a risk assessment is to evaluate the risk to a
population from a given pathogen/product combination, the exposure
assessment should utilise information and data as closely related as possible
to the final exposure. In many circumstances, however, risk assessment is
intended to provide information useful for policy making, and should therefore
provide insight into the factors responsible for increasing the risk and, more
importantly, ways to reduce it. This being the case, determination of the
pathogen occurrence should incorporate information on the various factors that
may influence the level or concentration of the pathogen before the product
reaches the consumer, and their relative influence on the occurrence and
prevalence or distribution of the pathogen. Exposure assessment then requires
integration of different types of information.
The first relates to the characterisation of the source(s) of the pathogen and
the route(s) of exposure. When the risk assessment is developed with reference
to a specified product/pathogen combination, the characterisation of the source
of exposure is straightforward. Nevertheless, pathogens that are mainly
foodborne may also be transmitted by a variety of media, such as other foods,
drinking water, household products or the general environment and relevant
vehicle(s) should be identified. The associated units of exposure should be
determined (e.g. the number and size of food servings). The production to
consumption pathway(s) should be characterised, with their possible variability.
The size and demographics of the population (or subpopulations) exposed
should be determined. Consideration of the temporal nature (e.g. single or
multiple exposure) or duration of exposure may be important as well as
consideration of potential for secondary transmission. Exposure pathways and
transmission potential may in turn be influenced by the behaviour of the
potentially exposed population.
The second relates to the occurrence and levels of the microorganism or toxin
in the food of concern and to their dynamics over time and at the different stages
of the farm-to-fork chain. Information of interest includes that on the microbial
ecology of the food and on the presence of the pathogen in the raw materials and
levels of contamination. The analysis then involves characterising the effects of
the production, processing, handling and distribution steps on the level and
distribution of the pathogen. In this regard, control processes (e.g. thermal
inactivation) have significant effects on pathogen occurrence and should be
considered. The variability, the reliability (level of process control) and the
interdependence of multiple control processes should be analysed. The potential
for (re)contamination (e.g. cross-contamination from other foods, recon-
tamination after a killing treatment) as well as the methods or conditions of
packaging, distribution and storage of the food should also be considered.
The third focuses on consumption/use patterns and on consumer practices
that may affect microbial levels and intake. Elements that may be considered,
according to the scope of the assessment, include socio-economic and ethnic
background, consumer preferences and behaviour as they influence the choice
and the amount of food intake, average serving size and distribution of sizes;
amount of food consumed over a year, considering seasonality and regional
differences, food preparation practices (e.g. cooking habits, cooking time and
temperature, extent of home storage and conditions, including abuse), con-
sumption by specific groups (such as infants, children, pregnant women, elderly
or immuno-compromised populations) and distribution of microorganisms in the
food (e.g. clustering, micro-colonies).
3.5.2 Use of models
The present tendency is to construct exposure assessment models that permit the
description and analysis of the interaction of the above-mentioned factors. The
structure, comprehensiveness and level of detail of the model depend on the
purpose and scope of the assessment, based on the risk management questions
and end-points of the assessment. During the development of the model, the
assessor is forced to structure the problem and to identify the key processes to be
modelled and the information needed. The result can be summarised in a
graphical outline of the model structure which should be presented to the
stakeholders and to the risk managers with the underlying assumptions and
uncertainties.
For qualitative (or semi-quantitative) assessments, simple models that
describe the pathways of exposure can be developed. More complex
representations may involve, for instance, event tree or fault tree analyses,
which provide a framework to identify events that could occur and analyse their
likelihood. These may incorporate a semi-quantitative expression of certain
parameters and probabilities (European Commission, 2000).
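As an illustrative sketch only, a simple fault tree of this kind can be quantified by combining basic-event probabilities through AND and OR gates; the event structure and probability values below are hypothetical assumptions, not figures from this chapter.

```python
# Minimal fault-tree sketch (hypothetical events and probabilities).
# Top event: pathogen present in the final product.
# OR gate : contamination from the raw material OR recontamination after processing.
# AND gate: recontamination requires both an environmental source AND a transfer event.

p_raw_material = 0.05          # assumed prevalence in incoming raw material
p_environmental_source = 0.10  # assumed probability the environment harbours the pathogen
p_transfer_event = 0.02        # assumed probability of transfer to the product

# AND gate: both conditions must occur (independence assumed)
p_recontamination = p_environmental_source * p_transfer_event

# OR gate: either route produces the top event (independence assumed)
p_top_event = 1 - (1 - p_raw_material) * (1 - p_recontamination)

print(f"P(recontamination)            = {p_recontamination:.4f}")
print(f"P(pathogen in final product)  = {p_top_event:.4f}")
```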
In quantitative exposure assessments, the relationships between the
determinants of the exposure are modelled mathematically. The model describes
the pathways and processes leading to exposure and may be divided into discrete
model units that can be linked to each other (Lammerding and Fazil, 2000). A
significant feature of quantitative exposure assessment for microbial pathogens
is the use of predictive microbial approaches and models, within the larger
exposure model. Predictive models use mathematical expressions to characterise
the changes in the pathogen numbers under various intrinsic and extrinsic
conditions. Significant advances have been made in this field in recent years. A
comprehensive analysis of the use of predictive microbiology in exposure and
risk assessment may be found in Chapters 6 and 10.
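As a minimal illustration of the kind of expression such models use (a square-root-type model for the maximum growth rate, followed by simple exponential growth), the sketch below uses assumed parameter values rather than fitted estimates for any particular pathogen.

```python
import numpy as np

# Square-root (Ratkowsky-type) model for maximum specific growth rate, then
# exponential growth over a storage period (all parameter values are assumptions).
b = 0.03       # assumed slope of sqrt(mu_max) versus temperature
T_min = -1.0   # assumed notional minimum growth temperature, degrees C

def mu_max_per_hour(temp_c: float) -> float:
    """Square-root model: sqrt(mu_max) = b * (T - T_min), valid above T_min."""
    if temp_c <= T_min:
        return 0.0
    return (b * (temp_c - T_min)) ** 2

initial_log10 = 2.0   # assumed initial level, log10 CFU/g
hours = 72.0          # assumed storage time

for temp in (4.0, 8.0, 12.0):
    mu = mu_max_per_hour(temp)
    log10_increase = mu * hours / np.log(10)   # convert natural-log growth to log10
    print(f"{temp:4.1f} degC: +{log10_increase:.2f} log10 in {hours:.0f} h "
          f"(final ~{initial_log10 + log10_increase:.2f} log10 CFU/g)")
```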
The same model structure may be the basis for a deterministic or probabilistic
approach. In a deterministic approach, a quantitative assessment of the exposure
is conducted based on a single point estimate of the model parameters.
Deterministic models have several limitations, and, in particular, tend to ignore
variability and uncertainty. In the probabilistic approach, variability and
uncertainty are taken into account by using probability distributions instead of
point estimate values. Probability distributions of the model parameters are
assigned based on experimental data or may be derived from expert elicitation.
A number of techniques may be used to calculate the distribution of the output
of interest. Today, the tendency is to use stochastic simulation techniques, such
as Monte Carlo simulation. This technique involves the random sampling of
each probability distribution in the model to produce a large number of scenarios
(iterations or trials). The result represents a distribution for the output of interest,
based on the combined ranges and frequency of the input parameters. Several
applications of Monte Carlo simulation have documented the merit of the
method, and commercial software is available. In spite of its relative complexity,
the probabilistic approach is now becoming the preferred approach to
quantitative microbiological exposure assessment.
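A minimal sketch of such a simulation is given below; the distributions for initial contamination, process lethality, growth during distribution and serving size are assumptions chosen for illustration, and a real assessment would derive them from data or expert elicitation.

```python
import numpy as np

# Monte Carlo exposure sketch (illustrative assumptions throughout).
rng = np.random.default_rng(seed=1)
n_iterations = 100_000

log_conc_raw = rng.normal(loc=1.0, scale=0.5, size=n_iterations)    # log10 CFU/g in raw material
log_reduction = rng.normal(loc=5.0, scale=0.8, size=n_iterations)   # log10 reduction by processing
log_growth = rng.uniform(low=0.0, high=2.0, size=n_iterations)      # log10 growth in distribution/storage
serving_g = rng.triangular(left=50, mode=150, right=300, size=n_iterations)  # serving size, g

# Dose ingested per serving (CFU), combining the sampled inputs
dose = 10 ** (log_conc_raw - log_reduction + log_growth) * serving_g

print(f"median dose      : {np.median(dose):.3g} CFU/serving")
print(f"95th percentile  : {np.percentile(dose, 95):.3g} CFU/serving")
```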
3.6 Risk characterisation
Risk characterisation summarises the information from hazard identification,
hazard characterisation and exposure assessment in a deliberate process to bring
key aspects of the microbiological risk assessment into an integrated picture. It
provides an estimate (the 'risk estimate') of the probability of the occurrence
and severity of adverse effects in a given population, which should include the
attendant uncertainties.
It is important to realise that risk characterisation will bridge the risk
assessment process with risk management and decision-making. Therefore, it is
essential that the overall conclusion of the risk assessment is complete,
informative and useful for decision-makers and managers. From this point of
view, risk characterisation should encompass two components:
1. An estimation of the risk that is objective, realistic, credible and
scientifically balanced.
2. A description explaining the degree of confidence in the risk assessment by
clearly delineating the uncertainties and their sources and the assumptions,
along with their impact on the overall assessment. This description should
include a discussion on the strengths and limitations of the assessment and
of whether the risk assessment adequately addresses the questions
formulated at the outset of the exercise.
The estimation of the risk can be qualitative or quantitative, depending on the
data and methods utilised. It involves a description of the nature, severity and
consequences of effects anticipated from exposure to a given pathogen, together
with an estimation of the probability of a given population being subjected to
whatever adverse effect is being considered. The final result is usually expressed
as an individual risk estimate (e.g. one in a million probability of illness) or as a
population risk estimate (e.g. 10 illnesses per year in a certain region). In
quantitative, probabilistic risk assessments, the output of risk characterisation is
a distribution of the risk.
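As a simple worked illustration of the two forms of output (the figures are hypothetical), an individual risk estimate can be converted into a population risk estimate from the consumption frequency and the size of the exposed population:

```python
# Converting an individual risk estimate into a population risk estimate
# (all numbers are hypothetical assumptions).
p_illness_per_serving = 1e-6      # assumed probability of illness per serving
servings_per_person_year = 20     # assumed annual number of servings per consumer
population = 5_000_000            # assumed size of the exposed population

expected_illnesses_per_year = p_illness_per_serving * servings_per_person_year * population
print(f"Expected illnesses per year: {expected_illnesses_per_year:.0f}")   # -> 100
```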
A specific aspect of quantitative microbiological risk assessments is that a
sensitivity analysis of the result of probabilistic modelling should be performed.
This refers to the evaluation of the input variables with regard to their effect on
the final risk estimate, to provide knowledge of how changes in the inputs or in
the mathematical approach impact the results of the risk estimate
(Vose, 2000). A sensitivity analysis may have two objectives. The first is to
identify the elements or factors that have most impact on the magnitude of the
risk. The second is to determine the robustness of the model toward the existing
uncertainties and assumptions.
Dealing with the first aspect involves carrying out a sensitivity analysis for the
parameters. This can be done in several ways, using well-defined techniques
(Saltelli et al., 2000), e.g. relative sensitivity analysis (where a small change in an
input parameter is compared with the percentage change in the output) or rank
order correlation techniques. Dealing with the second aspect is less defined, and
may involve, for instance, investigating the effect of changing a distribution, or any
other assumption, on the risk estimate. Similarly, scenario analysis can be used to
determine which input parameter(s) contributes most significantly to a given
outcome (e.g. an exceptionally high risk, an exposure below a certain value).
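The sketch below illustrates the rank order correlation approach on a toy exposure model; the input distributions are assumptions for demonstration only, and SciPy is used simply to compute the Spearman correlations.

```python
import numpy as np
from scipy.stats import spearmanr

# Rank-order correlation sensitivity sketch on a toy exposure model
# (all input distributions are illustrative assumptions).
rng = np.random.default_rng(seed=2)
n = 50_000

inputs = {
    "log_conc_raw":  rng.normal(1.0, 0.5, n),    # assumed initial contamination, log10 CFU/g
    "log_reduction": rng.normal(5.0, 0.8, n),    # assumed process lethality, log10
    "log_growth":    rng.uniform(0.0, 2.0, n),   # assumed growth in distribution, log10
}
log_dose = inputs["log_conc_raw"] - inputs["log_reduction"] + inputs["log_growth"]

for name, values in inputs.items():
    rho, _ = spearmanr(values, log_dose)
    print(f"{name:13s}  Spearman rho = {rho:+.2f}")
```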
Variability and uncertainty, the two components that describe the degree of
reliability of the risk estimate, should be clearly and distinctly described.
Variability is a function of the system and inherent to any biological population
or parameter. Uncertainty is related to the lack of knowledge and may include,
e.g. in quantitative risk assessments, parameter uncertainty, model uncertainty
and scenario uncertainty. Estimating variability and uncertainty separately will
provide useful information for decisions that could follow from risk assessment
(European Commission, 2000). For instance, if uncertainty is large, the
reliability of future risk assessments may be improved by additional
experimental measurements or by focused research. Where variability
predominates (large heterogeneity in the system) managers could consider
improving the reliability of future risk assessments, e.g. by reducing the number
of possible scenarios, or reducing the heterogeneity of the system considered,
e.g. by managing for better control of the manufacturing process. In
quantitative microbiological risk assessments, it should be clearly stated whether
the probability distribution of the risk represents variability, uncertainty, or both.
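One common way of keeping the two components separate is a second-order (two-dimensional) Monte Carlo simulation, sketched below with assumed distributions: the outer loop samples the uncertain parameter, the inner loop samples serving-to-serving variability given that parameter.

```python
import numpy as np

# Second-order Monte Carlo sketch separating uncertainty from variability
# (all distributions and values are illustrative assumptions).
rng = np.random.default_rng(seed=3)
n_uncertainty = 200     # outer loop: alternative values of the uncertain parameter
n_variability = 2_000   # inner loop: variability between servings

p95_log_doses = []
for _ in range(n_uncertainty):
    mean_log_conc = rng.normal(loc=1.0, scale=0.3)                           # uncertain mean, log10
    log_dose = rng.normal(loc=mean_log_conc, scale=0.7, size=n_variability)  # variability
    p95_log_doses.append(np.percentile(log_dose, 95))

p95_log_doses = np.array(p95_log_doses)
print("95th percentile of the variability distribution (log10 dose):")
print(f"  uncertainty interval: {np.percentile(p95_log_doses, 2.5):.2f} "
      f"to {np.percentile(p95_log_doses, 97.5):.2f}")
```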
In relation to the above, a critical element of microbiological risk
characterisation is an assessment of the assumptions that are made during the
analysis, the sources of uncertainty and their impact on the risk estimate. In
many assessments, relevant data may not be available for all aspects of the
analysis, or may not be of adequate quality. Incomplete theory and gaps in
knowledge may also exist. Therefore many choices and assumptions have to be
made, each with varying degree of uncertainty. All choices and assumptions
should be fully acknowledged. Uncertainties and their sources should be
carefully identified and analysed. Choices, assumptions and uncertainties should
be evaluated with regard to their impact on the risk estimate and, perhaps more
importantly, on how it should be used.
It is important to consider an expression of the confidence in the risk
assessment and the resulting estimate; this includes verification and validation.
Verification refers to the technical approach taken; it is mainly the responsibility
of the assessors and may involve quality control procedures and specialist
review. Validation refers to the scientific acceptability of the assessment; it may
involve investigating the effects of plausible choices,
assumptions or scenarios. A specific aspect of quantitative and probabilistic
microbiological risk assessment is the comparison of the estimate with observed
data from epidemiological studies (e.g. cross-sectional surveys, cohort studies,
case-control studies, intervention studies). In this regard, there is a crucial need
to conduct high-quality, targeted and more searching epidemiological studies to
validate the models and to improve the estimations.
Finally, and with specific regard to the current development of quantitative
microbiological risk assessment, it has to be emphasised that risk
characterisation should encompass both mathematical estimates of the risk
and qualitative information (narratives). Simplified numerical presentation of
food safety risks is always incomplete and often misleading. Qualitative
information is particularly useful and offers a number of benefits. It explains the
nature of the adverse effects and the variability in population exposures. It
identifies and explains assumptions, choices, and value judgements and their
impact. It describes uncertainties and explains their impact. It provides
information on the strength and consistency of the scientific evidence that
supports the assessment and provides information on the sets of data available,
the sets of data chosen, the incompleteness of databases where appropriate, and
how these impact on the characterisation of the risk. It provides information and
guidance regarding additional or follow-up research. Additionally, the assessors
may be able to identify the potential or availability of counter-assessments from
different groups and explain the supporting analyses and their relative strengths
and weaknesses. All such qualitative information will ensure that the non-
scientists who will use the MRA get a clear message on the nature, likelihood
and severity of the risk, together with an understanding of the plausibility,
strengths and limitations of the assessment process itself.
3.7 References
BUCHANAN, R. L., SMITH, J. L. and LONG, W. (2000) Microbial risk assessment: dose–response relations and risk characterization. International Journal of Food Microbiology, 58, 159–172.
COLEMAN, M. and MARKS, H. (1998) Topics in dose–response modelling. Journal of Food Protection, 61(11), 1550–1559.
EUROPEAN COMMISSION (2000) First Report on the Harmonisation of Risk Assessment Procedures. Part 1: Report of the Scientific Steering Committee's Working Group on Harmonisation of Risk Assessment Procedures. Internet edition: <http://europa.eu.int/comm/food/fs/sc/ssc/out83_eu.pdf>; Part 2: Appendices: Internet edition: <http://europa.eu.int/comm/food/fs/sc/ssc/out84_eu.pdf>
HAVELAAR, A. H., DE HOLLANDER, A. E. M., TEUNIS, P. F. M., EVERS, E. G., VAN KRANEN, H. J., VERSTEEGH, J. F. M., VAN KOTEN, J. E. M. and SLOB, W. (2000) Balancing the risks and benefits of drinking water disinfection: Disability Adjusted Life-Years on the scale. Environmental Health Perspectives, 108, 315–321.
INTERNATIONAL LIFE SCIENCE INSTITUTE (ILSI) (2000) Revised Framework for Microbiological Risk Assessment. An ILSI Risk Science Institute Workshop Report, ILSI Press, Washington, DC.
LAMMERDING, A. M. (1996) Microbiological food safety risk assessment: principles and practice. Proceedings, Xth International Conference in Food Safety, ASEPT, Laval.
LAMMERDING, A. M. and FAZIL, A. (2000) Hazard identification and exposure assessment for microbial food safety risk assessment. International Journal of Food Microbiology, 58, 147–157.
NADER, R. (1993) Containing violence by containing risk. In T. Burke et al. (Eds) Regulating Risk. The Science and Policy of Risk, ILSI Press, Washington, DC.
OFFICE INTERNATIONAL DES EPIZOOTIES (OIE) (1998) Import Risk Analysis. OIE International Animal Health Code, section 1.4, OIE, Paris.
SALTELLI, A., CHAN, K. and SCOTT, E. M. (2000) Sensitivity Analysis. John Wiley & Sons Ltd, Chichester.
VAN GERWEN, S. J., TE GIFFEL, M. C., VAN'T RIET, K., BEUMER, R. R. and ZWIETERING, M. H. (2000) Stepwise quantitative risk assessment as a tool for characterization of microbiological food safety. Journal of Applied Microbiology, 88, 938–951.
VOSE, D. J. (2000) Risk Analysis – A Quantitative Guide. 2nd Edition, John Wiley & Sons Ltd, Chichester.
4
Hazard identification
M. Brown, Unilever Research, Sharnbrook
4.1 Introduction: the importance of correct hazard identification
The ability of a risk assessment to indicate requirements for food safety
depends completely on correct hazard identification indicating the relevance
of the hazard to the raw materials, the factory or the finished product. Hazard
identification should provide an estimate of variability in behaviour or
responses between types of the same pathogen (e.g. salmonella), so that the
subsequent exposure assessment can take account of variations in behaviour
caused by processing (e.g. prolonged chilled storage or freezing). These
variations in behaviour may affect factors, such as toxin production, growth
range, thermo-resistance and survival, and provide a more certain basis for
estimating the effectiveness of controls, and hence risks to consumers. If
hazard identification misses or excludes an important hazard, then the
exposure assessment will not consider the impact of the supply chain and its
controls on the hazard level in the final product. Both experience and
analytical data are important means of identifying realistic hazards
(Lammerding and Fazil, 2000). Generally there is good 'process' information
on factory and final preparation steps, but more limited information on
microbial levels during primary production and between major stages in the
supply chain (e.g. manufacturing and retailing).
4.2 What is hazard identification?
The Codex Alimentarius (Anon., 1996) defines a hazard as:
A biological, chemical or physical agent in or property of food that may
have adverse health effects.
Hazard identification is defined as:
The identification of known or potential health effects associated with a
particular agent.
Hazards may pose current, emerging or potential risks to health, and they may
vary in their likely scale and severity. Their effects may be limited to individuals
or small groups, or may have epidemic or even pandemic potential. An
alternative definition is:
A visualisation of the range of likely pathways (inputs and outputs)
affecting the safety of the food product. This may include consideration
of processing, inspection, storage, distribution and consumer practices.
Hazard identification can be looked at from two perspectives. From a
product developer's perspective, the role of hazard identification is to
identify potential hazards that need to be eliminated (for example by
formulation, processing or guidance on usage) in order to provide a product
that is safe for a target group of consumers to use. Hazard characterisation
provides an analysis of the adverse effects associated with the hazards (for
example through a dose¨Cresponse assessment), while exposure assessment
provides a 'what if' analysis of the possible level of exposure to hazards
through intake of a food product by consumers. On the other hand, hazard
analysis can also be retrospective, examining the epidemiological and other
data after a food safety incident, to characterise adverse health effects
among affected consumers, identifying the foods implicated in causing these
adverse health effects, and isolating where possible the causative agent
responsible.
4.3 What hazard identification should cover and produce as
an output
Hazard identification should identify and characterise the microbiological
hazards to be examined by the subsequent stages of the risk assessment.
Identification of hazards should be based on both inputs and outcomes. It should
cover inputs to the supply chain such as microorganisms or toxins from the raw
materials and ingredients used in the product, and likely sources of
contamination and growth during processing and storage. It should also cover
outcomes such as the following:
• The effect of processing on levels of a hazard (a pathogen for example),
defined by the characteristics and resistance of the hazard (for example to
heat treatment) and the effectiveness of the process in, for example,
delivering a required heat treatment.
• Control of the survival and growth of a hazard by the preservation properties
of the final product during storage and distribution, taking into account the
effectiveness of storage and distribution conditions (such as chilled storage
and transport).
• The intended use of the product and subsequent processing by the consumer
(for example ready-to-eat or for cooking).
• The likely sensitivity of consumers to hazards.
This emphasis on both inputs and outcomes makes it more likely that all relevant
hazards will be considered.
4.3.1 Scope
Hazard identification should show which hazards are realistic for a product, so
that risk characterisation can take account of their probable occurrence and
severity. Compiling an exhaustive list of all possible hazards, whether or not
they are likely to affect consumers of a particular product in practice, can be as
counter-productive as failing to list all relevant hazards for a product. Both may
compromise subsequent stages in risk assessment, either by making the process
over-complex and unmanageable or by missing key hazards. Hazard
identification should therefore concentrate on those likely to be present in a
particular food product and to cause foodborne illness. Physiological
characteristics of hazards should be described in sufficient detail to allow
predictions of likely responses to product composition (e.g. pH and water
activity), processing operations (such as heating) and subsequent storage
conditions (for example modified atmosphere packaging or chilled storage) at
each step in the supply chain up to the point of consumption. The identification
process may be extended to cover how each process stage influences microbial
physiology or virulence and thus the likely level of risk to consumers. Where
process controls may be weak, for example because of particularly contaminated
raw materials or possible temperature abuse during distribution, it may be
necessary to extend the list of realistic hazards.
4.4 What to do in hazard identification
To be considered realistic, hazards must be identified as the causative agents of
waterborne or foodborne disease. This may require experimental work to
demonstrate the causal relationship between a particular strain of an agent and a
disease. Differing microbial strains (for example of Escherichia coli) may be
pathogenic or non-pathogenic. Some strains may be associated with epidemic
disease and serious illness while others may be associated with mild symptoms
and small-scale, sporadic outbreaks. Hazard identification should therefore
include an assessment of the impact of the hazard on human health and an
analysis of when, where and how it achieves such an impact. Challenge study
data may be used to assess likely levels of a hazard which can then be compared
with epidemiological studies. Where direct epidemiological data are missing, it
may be possible to assess probable risks from studies of related products in
similar environments.
Selected hazards should be compiled into a descriptive list of the bacteria or
toxins (with details of species or types) associated with:
• Raw materials.
• Methods of production.
• The use of the food.
To be useful for hazard characterisation and exposure assessment, this list
should indicate the specific routes of transmission for each hazard, including
potential events (for example variations in the quality of raw materials or
variations in storage conditions) that may affect levels in the food. Hazard
identification must be done so that overlap with exposure assessment is
minimised, but it must provide enough information to allow assessment of the
final level of the hazard in the product at consumption. Information should allow
the effect of changes in product formulation or processing, such as reduced
cooling times or storage temperatures, to be assessed. It may be possible to
analyse such changes through Monte Carlo simulations, for example, providing
the correct kinetic data for the hazard are available.
4.5 Key information in hazard identification
Four basic types of information on hazards relevant to the product being studied
should be assembled for hazard identification.
4.5.1 Microbial agent information
This information needs to characterise the pathogens of importance to the
product and process. It should include the following:
• Estimates or measurements of the overall numbers and prevalence of the
hazard in the raw materials and process equipment used. The microbial
ecology of the product and raw materials may need to be described, so that
factors affecting the characteristics and pathogenicity of the hazard can be
accounted for.
• Information on resistance to the types of treatment being applied, especially
growth, survival and heat resistance characteristics. These will allow
estimation of growth or inactivation rates during processing and storage
and establish the relevance of predictive modelling. Description of pathogens
should provide the means for outlining pathogen response to intrinsic factors
(e.g. pH, moisture content, nutrient content or antimicrobial constituents) and
extrinsic factors (e.g. heating or storage temperature, relative humidity,
packaging atmosphere and the presence of other microorganisms) that affect
behaviour within the product or process range.
• Production of disease may involve a variety of virulence attributes and host
susceptibility factors. Not all strains will be equally virulent, but on the other
hand not all people are equally susceptible to disease. Information should
include a description of strain-specific pathogenicity attributes, such as the
production of extracellular materials, e.g. toxins or proteases, phospholipases
or bound cellular materials, e.g. antigens or polysaccharide capsules. The
relevance of these factors for target consumers needs to be addressed. This
may have to be obtained from literature searches and consultation with
experts or generated by laboratory studies. Wherever it is found, confidence
limits must be placed on its validity and applicability.
4.5.2 Consumer information
Information on consumers needs to include:
• Consumers affected or target consumers.
• Susceptibility to potential hazards.
• Patterns of food consumption.
• Scale of outbreaks.
• Incidence and severity of illness among those affected.
Microbiological data from clinical, epidemiological and food surveillance
studies should be included. Probable consumers should be identified and their
sensitivity to likely hazards noted, together with the severity of illness arising
from exposure to a hazard. If necessary, reference should be made to the
different sensitivities of young or old individuals, or those with pre-existing
disease, chronic illnesses or immune-deficiency. Foods for these consumers
should be identified as high risk. Healthy individuals may be at only occasional
risk from relatively mild disease, and may therefore be placed outside the scope
of an assessment. It is important to make a sensible assessment as to whether the
risks faced by consumers are significant or negligible.
4.5.3 Food and process linked information
Process-related information should include the following:
• Prevalence and level of the hazard in raw materials and ingredients.
• Effects of processing and storage on the levels of the hazard at various stages
up to consumption.
• Consumer use instructions and the risks of product mishandling.
This information links the physiological characteristics of the hazard to the
properties of the food, its processing and consumer use. It acts as a check that
the relevant characteristics of the pathogen have been covered. The origin (e.g.
tropical or temperate) of raw materials may be important because of its impact
on types, prevalence and levels and the possible presence of other micro-
organisms (e.g. lactic acid bacteria) that may interact with the pathogen and alter
its characteristics. Sufficient detail must be provided for the exposure assess-
ment to model the food production process and show the potential for growth,
survival, elimination or alteration of virulence of the hazard up to consumption
under different conditions of use (e.g. in home, restaurants and hotels).
4.5.4 Information quality
The risk assessment team should be aware of the quality of data available and
any assumptions on which it is based. An analysis of information quality should
outline the origins and sources of data and its relevance to the products and lines
under study.
4.6 Tools in hazard identification
The examination of foods for the presence, types and numbers of pathogens
(and/or their metabolites) is of major importance. A variety of routine or
conventional methods are available, along with newer developments that are
more accurate and rapid. Routine tests for identification or tracing are mainly
based on growth characteristics of a pathogen on a selective medium, general
features such as colony colour, form or smell, biochemical properties, detection
of microbial antigens by antibodies or antibiotic susceptibility. More recent
techniques are based on molecular methods allowing the classification and
identification of any isolate based on phenotypic and chemotaxonomic analyses.
Some of these techniques rely on the availability of large databases to ensure
reliable results but, in some cases, these databases have been established for
'non-food' purposes and may provide misleading identities.
There are many web-based tools for assisting in identifying foodborne
pathogens, for example:
• The 'bad bug book' issued by the US Food and Drug Administration (FDA)
(http://vm.cfsan.fda.gov/~mow/intro.html)
• The CDC's mortality and morbidity reports and guidelines for clinicians
(http://www.cdc.gov/mmwr)
• The US Government's Food Safety Information Center, USDA/FDA
(http://www.fsis.usda.gov)
• Pathogen behaviour modelling program
(http://www.arserrc.gov/mfs/pathogen.htm)
The introduction of molecular biological techniques for DNA-based typing
can discriminate between isolates of a single species. This information can be
used for epidemiological purposes and may provide insight into the fate or
persistence of pathogens in food or processing. The detection and tracing of
pathogens can be based on:
• Examination of the genotype, e.g. using phage typing, rRNA sequence
analysis or technologies based on the polymerase chain reaction (PCR).
• Analysis of the phenotype by serotyping, API or electrophoresis of cellular
enzymes or metabolic products, for example.
The range of such techniques is covered, for example, in Betts (2002).
The performance of microbiological analytical and detection methods for
pathogens is important for hazard identification. Thresholds of detection (often
1–10 cells/g) and the statistical significance of the sample size used (e.g. to
estimate average levels and variability) will determine the chances of detecting
pathogens, and hence whether they are considered realistic (Betts, 2002).
Attention should also be paid to process conditions, especially the reliability of
process controls and the accuracy of records or measurements at the major risk
determining steps (e.g. sterilisation, cooking, acidification, chilled or warm
storage or poor hygiene). Critical control points (CCPs) in HACCP plans will
always include process stages and therefore an assessment of equipment or
supplier performance, on a day-to-day basis, may be obtained from quality
assurance data. Confidence limits should be placed on limited or variable data
(e.g. point estimates or use of a range), or the data may be incorporated into scenarios
(e.g. best, average or worst).
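As a hedged illustration of how sample size limits the chance of detecting a pathogen present at low prevalence (the figures are assumed, and the calculation ignores within-unit concentration, clustering and test sensitivity):

```python
# Probability of at least one positive among n tested units,
# given an assumed prevalence of contaminated units.
prevalence = 0.02   # assumed fraction of contaminated sample units
for n_samples in (5, 10, 30, 60):
    p_detect = 1 - (1 - prevalence) ** n_samples
    print(f"n = {n_samples:2d}  P(at least one positive) = {p_detect:.2f}")
```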
4.7 Microbial hazards
To manage food safety risks at least cost and with the lowest restrictions on
marketing opportunities or the product range, it is important to identify which
pathogens, foods or situations can realistically lead to foodborne illness
(ICMSF, 1996). The microbial hazards typically associated with foods are listed
in Table 4.1.

Table 4.1 Microbial hazards typically associated with foods
Bacillus cereus
Brucella abortus
Campylobacter jejuni
Clostridium botulinum
Clostridium perfringens
Coxiella burnetii
Escherichia coli
Enteropathogenic E. coli
Listeria monocytogenes
Mycobacterium bovis
Salmonella spp.
Shigellae
Staphylococcus aureus
Vibrio parahaemolyticus
Vibrio vulnificus
Yersinia enterocolitica
Toxin-producing mould
Potential microbiological hazards in food include bacteria, toxins,
viruses, protozoa and parasites. Of these, the most
important are bacteria, which cause a large proportion (approximately 90%) of
all foodborne illnesses. To fix the scope of realistic hazards for any product or
group of consumers, it may be useful to consider the sensitivity of the consumers
of the food to specific hazards (microorganisms or toxins) and the robustness of
the preservation system and any microbicidal treatment the food is likely to
receive prior to consumption. The severity of differing types of hazard is
described in Table 4.2.

Table 4.2 The severity of differing types of microbiological hazards
Severe hazards: Clostridium botulinum types A, B, E and F, Shigella dysenteriae, Salmonella typhi; paratyphi A, B, E. coli O157:H7, Brucella abortus, Brucella suis, Vibrio cholerae, Vibrio vulnificus, (Taenia solium), (Trichinella spiralis).
Moderate hazards – extensive spread from infection: Listeria monocytogenes, Salmonella spp., Shigella spp., enterovirulent E. coli (EEC), (Streptococcus pyogenes), rotavirus, Norwalk virus, Entamoeba histolytica, Ascaris lumbricoides, Cryptosporidium parvum.
Moderate hazards – limited spread: Bacillus cereus, potentially toxigenic bacilli, Campylobacter jejuni, Clostridium perfringens, Staphylococcus aureus, Vibrio cholerae (non-O1), Vibrio parahaemolyticus, Yersinia enterocolitica, Giardia lamblia, Taenia saginata.
Other microbiological hazards: These include naturally occurring toxicants such as mycotoxins (e.g. aflatoxin), scombrotoxin (e.g. histamine), toxic mushrooms and varieties of shellfish poisoning.
Toxins: Toxins of most concern are produced by Clostridium botulinum, Clostridium perfringens, Bacillus cereus and Staphylococcus aureus. All are the result of the growth of bacteria and production of toxins in foods that often have been mishandled. These bacteria are common in the environment but proper cooking, fermentation, cooling and storage of food can prevent their growth and, more importantly, the production of their toxins. However, cooking may not destroy several of these toxins once they are formed in food.
The widest range of realistic hazards will be associated with food intended
for consumption by at-risk consumers, e.g. the very young or old,
immunocompromised or those unusually susceptible to microbiological hazards.
Products may also require identification of a wider range of hazards if they
contain contaminated ingredients or those from unusual or unreliable origins.
Microbial survival/recontamination levels will be important if processing does
not include steps (e.g. pasteurisation) to eliminate hazards before consumption
(e.g. raw milk or ready-to-eat foods that typically do not require re-heating).
These hazards may be compounded if there is potential for product abuse during
distribution or consumer handling.
The highest risk products for consumers are ready-to-eat, or able to support
the growth of pathogens; hence the identity of a pathogen and product use will
suggest which steps in the supply chain are risk determining. It is essential to
collect information on these. For example, if the hazard is an infectious
pathogen, then data on initial contamination, heating steps and recontamination
will be essential. Where heating is a key part of the safety system, conditions
must be quantified because, according to heat resistance, combinations of time
and temperature will have a defined ability to reduce pathogen numbers. Growth
range (e.g. temperature, pH, water activity – A_W) may also be important
(Tienungoon et al., 2000), if products are stored or distributed and illness is
related to the level or dose of the pathogen or toxin, and especially if
temperature control is an integral part of the safety system. Enteric pathogens
(salmonella and pathogenic E. coli) are unlikely to grow at chill temperatures,
but Listeria monocytogenes will continue to grow almost down to freezing point,
although chill temperatures may inactivate campylobacter. If it is assumed that
the survival of very low levels of infectious pathogens constitutes a hazard, then
analytical data on incidence must be considered very critically, to ensure that
safety is not assumed, based on limits of detection, or failure to recover injured
cells.
If the pathogen is toxin-producing, then growth and toxin production, and the
concentration and persistence of any pre-formed toxin both need to be
considered. Heating may inactivate the cells, but pre-formed toxin may remain,
as heating will not usually destroy it. On the other hand, growth under conditions
that do not allow toxin synthesis (e.g. at low pH) is not likely to lead to a
hazard, unless changes in temperature or food composition (e.g. rehydration of a
dried food followed by storage) allow toxin production.
4.8 Identifying the origin and distribution of microbial
hazards
The assessment must identify hazard inputs at the beginning of the supply chain
(e.g. raw materials) and during processing (e.g. cross-contamination). Factors
influencing the input of pathogens (such as harvest or growth conditions) and the
extent of detail required will be determined by the scope of the study. Primary
raw material and pathogen origins may be included or a study may start at the
factory gate. If origins are covered, then the prevalence and persistence of
pathogens in areas where raw materials are grown or harvested and routes for
contamination (e.g. irrigation water or harvesting machinery) should be
considered. If the assessment starts at the factory gate, then contamination
levels and their variability (e.g. regional (temperate v. tropical) or seasonal
(summer to winter) differences) in delivered raw materials should be known.
Pathogens in raw materials are often present at low levels and non-uniformly
distributed; this determines the level of risk they pose. Distributions may be
relatively uniform in liquid foods and more variable in solid or particulate foods.
At the most extreme, heterogeneous distributions can lead to many
uncontaminated portions and a few (highly) contaminated ones. The latter
provide the greatest challenge to process conditions and a potential hazard to
consumers: this should be covered whatever detail is provided in the exposure
assessment. Therefore, because of variability, historical data on average
pathogen levels in raw materials and finished products can only provide rough
guidance on risks to consumers.
Levels and variability of contamination or process conditions may be
represented as single values, or as the lowest, average and highest levels or as
frequency distributions. The incidence (% contaminated) and distribution of an
agent (e.g. log normal, defined by mean log and standard deviation) may be used
to refine the exposure assessment and assign probabilities to levels of
contamination at particular stages. Estimated distributions of pathogens (e.g.
log normal) may be used to bridge data gaps. Such distributions are often
positively skewed, and this is consistent with the observation that
microbiological populations in foods are log-normally distributed. Variability
in the lethality of cooking treatments or the heat resistance of the target
microorganism or the product's thermo-physical properties (e.g. portion
thickness) may have a large influence on risk, influencing the chances of
pathogen survival after in-factory or in-home heating (Brown et al., 1998). The
quantity likely to be consumed (portion size) will play a less important role than
overall pathogen concentration in determining risk.
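A minimal sketch of representing raw-material contamination as a prevalence combined with a log-normal concentration is shown below; the parameter values are assumptions for illustration, and the heavily skewed mean it produces shows why average levels alone are only rough guidance.

```python
import numpy as np

# Prevalence plus log-normal concentration sketch (illustrative assumptions).
rng = np.random.default_rng(seed=4)
n_portions = 100_000

prevalence = 0.05     # assumed fraction of contaminated portions
mean_log10 = 0.5      # assumed mean log10 CFU/g in contaminated portions
sd_log10 = 1.0        # assumed standard deviation of log10 CFU/g

contaminated = rng.random(n_portions) < prevalence
log10_conc = rng.normal(mean_log10, sd_log10, size=n_portions)
conc = np.where(contaminated, 10 ** log10_conc, 0.0)   # CFU/g, zero if not contaminated

print(f"fraction contaminated      : {contaminated.mean():.3f}")
print(f"mean concentration (CFU/g) : {conc.mean():.2f}")
print(f"99th percentile (CFU/g)    : {np.percentile(conc, 99):.2f}")
```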
4.9 Changes in microbial hazards
Process conditions and the product environment may cause one or more of the
effects below on microbial pathogens.
4.9.1 Growth
Within any food, microorganisms present, including pathogens, may grow,
and growth of pathogens will increase risk. They can also interact with each other and
with intrinsic and extrinsic factors, leading to differences in metabolism,
multiplication, survival or death. Growth and death may be predicted by models
(Giffel et al., 1999; Rasmussen et al., 2001; Stewart et al., 2001; Tienungoon et
al., 2000) covering the conditions in the food (e.g. pH, A_W) and various storage
and process temperatures. Many models are limited to the behaviour of isolated
species growing under laboratory conditions. In reality, an exposure assessment
may need to go further and consider interactions between species and the food,
limiting the predictive accuracy of models (Zwietering and van Gerwen, 2000).
Microbial or environmental interactions may inhibit or promote growth of
pathogens such as Listeria monocytogenes, Salmonella sp. or prevent toxigenesis
by Staphylococcus aureus.
4.9.2 Death
Single factors (e.g. heat) or combinations of factors or 'hurdles' (e.g. acid and
low A_W) are used to control pathogen death or survival (Ahvenainen et al.,
2002). How many steps are used to kill or inhibit pathogens will determine the
complexity of the exposure assessment (EA) study. To make sense of the impact
of conditions at the risk-determining steps, the EA team should know the
kinetics of inactivation because heating and, for example, acidification have
different rates of destruction for different pathogens (van Gerwen and
Zwietering, 1998). As conditions are increased above the maximum for growth,
injury and then death will occur. Generally higher numbers of cells will take
longer to kill and their practically logarithmic rate of death makes it possible to
predict numbers of surviving cells from knowledge of the process conditions
(time and temperature) involved and the initial number of target cells. Heat
sensitivity is often expressed as a D-value to indicate the time at a particular
temperature required for a 10-fold reduction in numbers of a particular
pathogen. Vegetative cells will have very much lower heat resistances than
bacterial spores. To allow the prediction of the killing effect of different
temperatures, the linking concept of 'z' is used to express the number of °C that
process conditions need to shift to alter the rate of killing 10-fold. Typically this
is 7–12 °C and the risk assessor should consider its impact when variability is
noted at heating or cooling stages.
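The sketch below shows how D- and z-values combine to estimate the log reductions delivered by a constant-temperature hold; the reference D-value, reference temperature and z-value are hypothetical figures, not data for any specific pathogen or product.

```python
# D/z-value sketch (hypothetical parameter values).
D_ref = 0.3    # assumed D-value, minutes, at the reference temperature
T_ref = 70.0   # reference temperature, degrees C
z = 8.0        # assumed z-value, degrees C

def log_reductions(time_min: float, temp_c: float) -> float:
    """Log10 reductions for a constant-temperature hold, first-order kinetics."""
    D_at_temp = D_ref * 10 ** ((T_ref - temp_c) / z)
    return time_min / D_at_temp

for temp in (65.0, 70.0, 75.0):
    print(f"2 min at {temp:.0f} degC -> {log_reductions(2.0, temp):.1f} log10 reductions")
```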
Different microorganisms grown under different conditions may respond
differently to treatments. The hazard identification should provide
sufficient detail for the correct values or characteristics to be used. Process and
food-linked factors may also alter the rate of killing by heat, e.g. acid pH
(decreased heat resistance) or A_W (increased heat resistance). If the prior history of
the cells (e.g. starvation, heat or cold shock) is likely to induce a stress response
and increase resistance, then this should be taken into account. Estimates of the
changes in resistance caused by stress may be found from the literature or by
challenge testing; where it is unknown, it should be noted as an uncertainty.
4.9.3 Survival
Under process conditions outside their growth range (e.g. chilled, frozen or dry
storage) microbial cells may remain alive (survive) but unable to grow
until the damage is repaired or favourable conditions return. Such conditions may
alter sensitivity to treatments (e.g. increased resistance to exposure to acid) or
capability of causing harm. Exposure assessments should clearly identify process
stages where conditions may prevent growth, but allow survival, especially if they
are likely to increase resistance to downstream treatments. If the extent of the
effects on resistance is not known, it can be noted as an uncertainty, but may be
clarified by reanalysis and comparison with analytical or historical data.
4.9.4 Toxigenesis
Toxin-producing species do not produce toxin over their whole growth range
and production of toxin will depend on the fate of the producing microorganism
at each step of the food chain. It is usually prevented, or inhibited, under adverse
conditions (e.g. low temperatures or high numbers of a competing flora) and
these must be identified (Stewart et al., 2001) to understand the effects of supply
chain conditions. If this is unknown, it should be noted as an uncertainty. Food
handlers may also act as a source of contamination of unpacked products by
Staphylococcus aureus. The impact of subsequent times/temperatures on growth
and toxin production should be linked to stages where it is likely to occur. Table
4.3 gives some indications of the relationship between the level of toxin and the
fate of the toxin producing pathogen.

Table 4.3 The relationship between the level of toxin and the fate of the toxin producing pathogen
• Toxin-producing micro-organism present from previous steps, toxin present: the level of toxin may increase, stay the same, or decrease (denaturation).
• Micro-organism present, toxin absent: the level of toxin may increase or stay the same.
• Micro-organism absent, toxin present: the level of toxin may stay the same or decrease (denaturation).
• Micro-organism absent, toxin absent: the level of toxin stays the same.
4.10 Other biological hazards
Even though viral and parasitic agents may not grow in foods, different supply-
chain factors can influence their impact on safety. Understanding of their
importance is often limited by lack of data on the characteristics of the agents
themselves. Points to consider include routes and sources of contamination (raw
materials, environment, equipment or personnel), the level and distribution of
the agent in the food and the effectiveness of any decontamination or
inactivation steps. The influence of harvest and process steps on pathogenicity
and survival, especially the presence of resistant, infective forms (such as cysts)
should not be overlooked.
4.11 References
AHVENAINEN, R., ALOKOMI, I., HELANDER, I., SKYTTA, E. and SIPILAINEN-MALM, T.
(2002), The hurdle concept. In Ohlsson, T. and Bengtsson, N. (eds),
Minimal Processing Technologies in the Food Industry. Woodhead
Publishing Ltd, Cambridge.
ANON. (1996), Principles and guidelines for the application of microbiological
risk assessment. Alinorm 96/10 Codex Alimentarius Commission, Rome.
BETTS, R. (2002), Detecting pathogens in food. In Blackburn, C. de W. and
McClure, P. J. (eds), Foodborne Pathogens: Hazards, Risk Analysis and
Control. Woodhead Publishing Ltd, Cambridge.
BROWN, M.H., DAVIES, K.W., BILLON, C.M., ADAIR, C. and MCCLURE, P.J. (1998) Quantitative microbiological risk assessment: principles applied to determining the comparative risk of salmonellosis from chicken products. Journal of Food Protection 61 (11) 1446–1453.
GIFFEL, M.C., JONG, P. DE and ZWIETERING, M.H. (1999) Application of predictive models as a tool in the food industry. New Food 2 (2) 38, 40–41.
ICMSF (1996). Micro-organisms in food Volume 5. Microbiological specifications of food pathogens. International Commission on Microbiological Specifications for Foods, Blackie Academic and Professional, London.
LAMMERDING, A.M. and FAZIL, A. (2000) Hazard identification and exposure assessment for microbial food safety risk assessment. International Journal of Food Microbiology 58 (3) 147–157.
RASMUSSEN, B., BORCH, K. and STARK, K.D.C. (2001) Functional modelling as basis for studying individual and organisational factors – application to risk analysis of salmonella in pork. Food Control 12 (3) 157–164.
STEWART, C.M., COLE, M.B., LEGAN, J.D., SLADE, L., VANDEVEN, M.H. and SCHAFFNER, D.W. (2001) Modeling the growth boundary of Staphylococcus aureus for risk assessment purposes. Journal of Food Protection 64 (1) 51–57.
TIENUNGOON, S., RATKOWSKY, D.A., MCMEEKIN, T.A. and ROSS, T. (2000) Growth limits of Listeria monocytogenes as a function of temperature, pH, NaCl, and lactic acid. Applied and Environmental Microbiology 66 (11) 4979–4987.
VAN GERWEN, S.J. and ZWIETERING, M.H. (1998) Growth and inactivation models to be used in quantitative risk assessments. Journal of Food Protection 61 (11) 1541–1549.
ZWIETERING, M.H. and GERWEN, S.J.C. VAN (2000) Sensitivity analysis in quantitative microbial risk assessment. International Journal of Food Microbiology 58 (3) 213–221.
5
Hazard characterization/dose–response assessment
S. B. Dennis, M. D. Miliotis, and R. L. Buchanan, United States Food and Drug Administration, College Park
5.1 Introduction: key issues in hazard characterization
Hazard characterization is a description of the relationship between levels of a
pathogen consumed (dose) and the probability of subsequent development and
severity of illness or other adverse health outcome (response). This process is
often referred to as the dose–response assessment; however, the term hazard
characterization was coined to better describe the broader scope of the analysis,
which typically includes a severity assessment and a consideration of sequelae.
A dose–response relationship can be expressed mathematically, using models
in combination with observational data such as those from human trials, small
animal studies, or outbreak investigations. In the final step of a food safety risk
assessment, risk characterization, the dose–response assessment is integrated
with the exposure assessment to estimate the likelihood and magnitude of a
hazard, such as the probability of illness from a foodborne pathogen.
Key issues explored in this chapter include the impacts of variability within a
population, differences in strains of a pathogen, and the interaction of food matrix
effects on the interpretation of dose–response data. Also presented are the
strengths and weaknesses of using various biological studies for dose–response
modeling. Modeling difficulties commonly encountered, such as the need to
extrapolate from high to low dose when fitting observed data to models and
coping with uncertainty and variability in the data and model estimates, are
addressed. Lastly, research needs and future trends in dose–response modeling
are discussed.
5.1.1 Hazard characterization v. dose–response
The guideline of the Codex Alimentarius Committee for the conduct of
microbiological risk assessments is the conceptual framework most widely used
to date (Codex Alimentarius Committee, 1999). It divides the risk assessment
process into four components:
1. hazard identification
2. exposure assessment
3. hazard characterization
4. risk characterization.
Within that framework, there is ongoing discussion of what information should
be provided in the hazard characterization and hazard identification components.
In general, the microbiological risk assessments that have been undertaken have
used the hazard identification to describe the basic epidemiology and etiology of
a disease. Conversely, the hazard characterization has focused on detailed
descriptions of the factors contributing to the disease process that could
influence the dose–response relationship or the severity of the disease (e.g.
virulence determinants, subpopulations with increased susceptibility, food
matrix effects).
The World Health Organization (WHO) and the Food and Agriculture
Organization (FAO) of the United Nations have sponsored expert committees to
conduct risk assessments and develop guidelines for their conduct and use at the
international level. An ad hoc Joint FAO/WHO Expert Meeting on Microbial
Risk Assessment (JEMRA) developed guidelines for the conduct of hazard
characterizations (WHO/FAO, 2001). The process begins with an initiation
phase wherein the scope and purpose of the hazard characterization are
described and the assessment is planned. Next, data are collected, evaluated, and
a descriptive characterization is developed. With this information the data are
analyzed and the dose–response model(s) prepared. Prior to presenting or
publishing the results, JEMRA recommends that a peer review be conducted.
The conduct of hazard characterization should be an iterative process, in which
information learned at each step is used to refine the hazard characterization
(WHO/FAO, 2001). The hazard characterization phase of a microbial food
safety risk assessment should provide a thorough description of the adverse
effects of the pathogen on the host. The technical report should include a
complete description of host, pathogen and food matrix factors that impact the
likelihood of the disease or other public health outcome and the data and model
used to describe the dose–response relationship. Sufficient information should
be provided to allow an analyst to reproduce the dose–response model, including
sources of data, assumptions used, goodness of fit of the distribution, and
uncertainty and variability (WHO/FAO, 2001).
While hazard characterization is typically used in combination with an
exposure assessment to evaluate a risk, it may be conducted and reported as a
stand-alone analysis initially and later used with exposure assessments
developed for specific geographical regions, consumer groups, or product
categories. Although the hazard characterization may be a quantitative or
qualitative evaluation, a dose–response model quantitatively describes the
frequency and magnitude of the adverse event as a function of the exposure to
the pathogen.
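For orientation, two model forms commonly used for this purpose are sketched below: the exponential (single-hit) model and the approximate beta-Poisson model. The parameter values are illustrative assumptions, not estimates for any specific pathogen.

```python
import numpy as np

# Exponential and approximate beta-Poisson dose-response forms
# (parameter values are illustrative assumptions).
def exponential_model(dose, r):
    """Single-hit exponential model: P(ill) = 1 - exp(-r * dose)."""
    return 1.0 - np.exp(-r * dose)

def beta_poisson_model(dose, alpha, beta):
    """Approximate beta-Poisson model: P(ill) = 1 - (1 + dose/beta) ** -alpha."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

print("dose      exponential  beta-Poisson")
for d in (1e1, 1e3, 1e5):
    p_exp = exponential_model(d, r=1e-4)                # assumed r
    p_bp = beta_poisson_model(d, alpha=0.2, beta=1e4)   # assumed alpha, beta
    print(f"{d:8.0f}  {p_exp:11.4f}  {p_bp:12.4f}")
```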
The specific organism of concern and its mode of causing a disease must be
considered in designing and interpreting dose–response data for use in modeling.
For infectious microorganisms (e.g., Salmonella Enteritidis) to cause disease,
viable cells must be ingested, attach to the epithelial cells in the gastro-intestinal
(GI) tract, and then invade the epithelium to cause gastroenteritis or the body to
cause septicemia. Toxico-infectious microorganisms (e.g., Escherichia coli
O157:H7) are similar except that they do not invade the body, but instead produce or
release toxins after colonizing the surface of the GI tract. A third class of
pathogens, toxigenic bacteria (e.g., Clostridium botulinum), produce their toxins
in the food before it is ingested. These differences in mechanisms of
pathogenicity will influence the dose–response model selected and its
underlying assumptions.
5.1.2 The disease triangle
The likelihood that an individual becomes ill from ingesting a microorganism is
dependent on the complex interaction among the host, pathogen, and food
substrate. These three factors and their interactions are known as the infectious
disease triangle or epidemiologic triad. While exposure (number of
microorganisms or amount of their toxins ingested) is generally thought of as
the administered dose, for dose–response studies the true or infectious dose is
the portion of the administered dose that actually reaches the site of infection
and causes the end-point of concern such as infection, illness, or death.
Depending on the microorganism, health and age of the consumer, and type of
food consumed, the infectious dose may range from one microbe to hundreds of
millions (Jaykus, 1996). The nature and vast array of possible combinations of
these factors makes the nature of dose¨Cresponse relationships one in which there
is a high degree of variability inherent in the estimate. Additionally, detailed
knowledge of these factors and their interactions is typically lacking, leading to
substantial uncertainty related to the estimate. This should not necessarily
indicate that the dose¨Cresponse relationship is poorly understood, but instead
reflects the highly variable nature of the three biological systems being
described, i.e., the exposed population, the pathogen, and the food.
Host factors
The human population is highly diverse in its vulnerability and response to
microbial pathogens. The immune status, especially associated with those
individuals who are immunocompromised due to disease or medical treatments
such as immune-suppressive drugs, can influence occurrence and/or severity of
foodborne diseases. A number of intrinsic factors such as age, sex, and genetics
further influence the immune system, and thus the susceptibility of the
individual to disease. Additional factors such as the general health of the
population, presence of underlying disease, or nutritional and physical stresses
on members of the population influence individuals' responses. Different
dose–response curves or even models may be needed to describe the relationship
between exposure and illness for distinctly different subpopulations such as the
general population v. high-risk subpopulations. Alternatively, highly susceptible
or highly resistant subpopulations may be viewed as 'tails' of the general
population being described by a dose–response curve.
Pathogen factors
Evaluation of the dose–response relationship requires knowledge of the
virulence mechanisms and physical distribution of the microbial pathogen in
the food environment. Virulence factors such as adherence, invasiveness, ability
to evade host defenses, and release of potential toxic factors, are among the
microbial characteristics that can influence the ability of a microorganism to
cause disease. Other characteristics of the pathogen that can influence infection
and its outcome include the dynamic evolution of virulence through interaction
with the environment and host, microbial variability in response to
environmental factors, and microbial tolerance to adverse conditions that may
allow person-to-person spread.
The physiological state of a microorganism can also influence its ability to
cause disease. For example, recent studies have suggested that virulence factors
may or may not be expressed as a result of 'quorum sensing', i.e., chemical
communication between microorganisms when there are sufficient numbers of
cells present within an environment. Likewise, the behavior and characteristics
of a microorganism can differ substantially when present as planktonic cells
versus biofilms or microcolonies.
Considering the array of factors described above, it is not surprising that a
dose–response curve developed for a specific pathogen strain cultured under one
set of conditions may not be applicable for another strain or even the same strain
cultured under different conditions.
Food matrix factors
In recent years it has become apparent that the food matrix in which the
pathogen is transmitted can have a significant impact on the likelihood of
disease. Food matrix factors such as fat levels, acidity, salt levels and other
characteristics of the food should be considered in an evaluation of the ability of
a pathogen to cause disease (Foegeding, 1997). The consumption of highly
buffered foods or antacids may decrease the number of microorganisms needed
to cause illness because of the foods' modulating effect on gastric pH. For
example, studies with V. cholerae O1 indicate that cooked rice, which provides
buffering capacity, may have a substantive impact on the measured dose–response
relationship (Levine et al., 1981). Similarly, achlorhydria, the decrease or
cessation of acid production in the stomach, would be expected to impact the
effective dose (Buchanan et al., 2000). Additional dietary factors that impact the
physiological response of the gastro-intestinal tract, particularly the stomach,
may alter the dose needed to produce infection. For example, gastric bactericidal
lipids, which protect against Listeria infection, accumulated in rats fed a high
milk fat diet (Sprong et al., 1999). Gastrin, the hormone that is the most potent
stimulant of gastric acid secretion, is released after eating a protein-rich meal
(West, 1985). Because most enteric pathogens are sensitive to acids, the
increased production of gastric acid following a protein-rich meal such as
oysters would provide greater protection against infection, thus increasing the
infectious dose.
The physical distribution of the pathogen in the food environment can also
impact the passage of pathogenic bacteria through the intestinal tract (WHO/
FAO, 2001). Pathogen clumping, aggregation, and intimate association (e.g.,
coating, absorption) with food particles may increase their survivability and
allow intact passage through the stomach to the intestinal epithelium. This may,
in part, account for the decrease in effective dose observed with certain enteric
pathogens suspended in foods with high lipid contents. To date, food safety risk
assessments have not been developed to a level of sophistication that permits a
direct accounting of food matrix effects.
5.1.3 Theories of infection
Two hypotheses related to the initiation of infection, minimum infectious dose
(threshold model) versus single-cell (non-threshold model), have been used to
describe the dose–response relationship. Threshold models assume that there is
some level of the pathogen that particular individuals can tolerate without
becoming infected. Conversely, non-threshold models assume that a single
microbial cell is capable of causing illness.
When dose (e.g., log number of microorganisms ingested) is plotted against
response (percentage of population infected), the shape of the curve is often
sigmoidal, which has been interpreted as indicating that there is a minimum
infectious dose below which a pathogen will not cause disease (Buchanan et al.,
1998). However, efforts to measure such a threshold in humans have not been
successful for infectious and toxico-infectious microorganisms, and the data have
instead been interpreted as indicating that each bacterial cell has the potential,
albeit small, to multiply in the host and cause disease.
Fitting dose–response data using a non-threshold model and then displaying it
by plotting log (dose) versus percent response gives a sigmoidal curve.
However, in this instance if the graph is replotted using the log (percent
response) versus the log (dose), the non-threshold nature of the model becomes
apparent, as the graph becomes linear at low doses (Buchanan et al., 2000).
Figure 5.1 illustrates this using the exponential distribution function.
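To make this concrete, the short Python sketch below plots the exponential non-threshold model P = 1 - exp(-r * dose) in the two ways described above; the value of the single parameter r is an assumption chosen only for illustration. On the log(dose) versus response axes the curve appears sigmoidal, whereas on log(dose) versus log(response) axes it becomes linear at low doses.

import numpy as np
import matplotlib.pyplot as plt

# Exponential (non-threshold) dose-response model: P(response) = 1 - exp(-r * dose).
# The value of r below is purely illustrative, not taken from any study cited in the text.
r = 1e-4
dose = np.logspace(0, 7, 200)               # 1 to 10^7 organisms ingested
p_response = 1.0 - np.exp(-r * dose)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

# log(dose) v. response: the familiar sigmoid that can suggest a threshold
ax1.semilogx(dose, 100 * p_response)
ax1.set_xlabel('dose (organisms ingested)')
ax1.set_ylabel('% responding')

# log(dose) v. log(response): linear at low doses, revealing the non-threshold behaviour
ax2.loglog(dose, 100 * p_response)
ax2.set_xlabel('dose (organisms ingested)')
ax2.set_ylabel('% responding (log scale)')

plt.tight_layout()
plt.show()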
The hypothesis that a single ingested cell could cause infection is
increasingly accepted as a default assumption for dose–response modeling of
infectious foodborne pathogens. The reasons for this are two-fold. First, non-
threshold models appear consistent with reports of a number of large outbreaks
where the number of infectious bacteria was very low. Second, since the lower
limit of infectivity would be virtually impossible to prove, models that are linear
or log-linear at low dose provide a conservative but biologically defensible
default assumption. Toxigenic microorganisms represent a different case since
the response is based on the levels of a pre-formed toxin and not the number of
organisms ingested directly. Threshold-based models appear to be a better
choice for toxigenic microorganisms, such as Staphylococcus aureus and C.
botulinum, that produce acute toxins (Buchanan et al., 1997).
For both threshold and non-threshold-based models, the probability of
infection or morbidity increases as the dose increases. However, a 50% increase
in dose does not necessarily yield a 50% increase in the probability of illness
(Cassin et al., 1998a). The possible occurrence of multiple infection sites in the
host may account for observed increases in infection with higher doses. Having a
greater number of cells may increase the probability of an infection by
increasing the probability that one or more cells survive the stomach and come
in contact with an appropriate binding site within the intestine. Having multiple
binding sites would increase the probability of infection if a percentage of
infection loci remain asymptomatic, or if a sufficient number of pathogens must be
present before the body's defenses are overwhelmed and the disease becomes
symptomatic. One of the factors that is well established is that increasing the
dose decreases the 'incubation time' between when the cells are ingested and
when overt symptoms appear. For gastrointestinal diseases, infection is
generally defined as the presence of organisms in the GI tract, but it may not
necessarily lead to illness. That is, given infection the probability of illness
might increase, decrease, or be independent of dose (Teunis et al., 1999).
Fig. 5.1 Example of a dose–response curve using an exponential distribution plotted as
either log(dose) versus response (shown as a dashed line) or as log(dose) versus
log(response) (shown as a solid line).
5.2 Types of dose–response data
The development of dose¨Cresponse models depends on the availability of data
that quantitatively describe the relationship between the levels of the microbe
ingested and the frequency and severity of illness. Data necessary to develop a
dose–response model can be obtained from clinical trials, epidemiological
investigations, and small animal studies, as well as in vitro studies. Dose–
response models have also been developed using a combination of
epidemiological and food survey data. For example, annual estimates of
incidence of listeriosis were combined with data on levels of Listeria
monocytogenes in smoked fish to develop a purposefully conservative dose–
response relationship using the exponential model (Buchanan et al., 1997). The
various types of data used in dose–response modeling are described below and
the strengths and limitations of each are summarized in Table 5.1.
5.2.1 Clinical trials
The primary source of data for dose–response modeling has been clinical trials,
which are also referred to as human volunteer feeding studies. In these studies, a
known dose of a pathogen is administered. The results of these studies provide a
relationship between the number of organisms that cause illness and the severity
of the illness in volunteers administered a specific dose or level of a specific
pathogen. Data obtained from these studies show that the infectious dose and the
dose–response relationship are dependent not only on the strains used, but also
on the age and physiological condition of the volunteers. For example, the range
of infectious doses of Salmonella spp. and E. coli strains can vary from 1
microorganism to 1 × 10^9 organisms among the different serovars (Kothary and
Babu, 2001). However, only a limited number of volunteer feeding studies have
been conducted. For foodborne microorganisms that cause potentially fatal
diseases, such as enterohemorrhagic E. coli (EHEC) or L. monocytogenes, there
are no existing feeding studies and it is highly unlikely that any studies will be
conducted for ethical reasons. This necessitates alternative approaches, such as
using epidemiological data and animal studies, to estimate dose–response
relationships.
The majority of the available studies were conducted in conjunction with
vaccine trials wherein the pathogen was suspended in a saline solution,
administered concurrently with antacids. Very rarely was the pathogen
administered in a food matrix, which would provide better estimates of the
infective dose. Epidemiological data have shown that for several pathogens, the
infective dose obtained from feeding studies is much higher than that observed
with foodborne illness, which is likely due to the protective effect of the food
vehicle (Kothary and Babu, 2001). A major drawback associated with volunteer
studies is the routine use of healthy young men and women. It is known that age
and physiological condition, as well as the immune response of an individual,
affect the outcome of infection. This makes it difficult to apply results from
these studies to the very young, the aged, or immune-compromised
subpopulations (Kothary and Babu, 2001).
5.2.2 Epidemiological data
Epidemiological studies provide information on various factors influencing the
occurrence, distribution, prevention, and control of disease in a defined human
population. Analysis of epidemiological data can be used to establish the
relationship between human exposure to a hazard and the biological response.
Table 5.1 Summary of types of data used in dose–response modeling

Clinical studies
  Benefit or advantage: humans are the test subjects, so no need to extrapolate data
  to other species; known dose.
  Limitation or other considerations: volunteers are not representative of the whole
  population; small number of participants per trial; high doses used; organism
  often administered in buffered solution, not food matrix.

Epidemiological studies
  Benefit or advantage: human test subjects, no extrapolation to other species
  needed; more representative of whole populations.
  Limitation or other considerations: ingested dose is seldom determined; attack
  rate typically not determined; number of individuals that consumed implicated
  food but did not become ill often lacking.

Animal studies
  Benefit or advantage: simpler and cheaper than human studies; allow screening
  for effect of virulence factors on dose–response; relatively large number of test
  subjects can be used.
  Limitation or other considerations: interspecies extrapolation required.

In vitro studies
  Benefit or advantage: rapid testing of parameters.
  Limitation or other considerations: need to correlate response to in vivo responses.

Biomarkers
  Benefit or advantage: evaluation of end-points other than illness or death.
  Limitation or other considerations: currently, no quantification or relation to
  specific exposure levels in humans.

Expert opinion
  Benefit or advantage: available in absence of observational data.
  Limitation or other considerations: information may be biased.
Outbreak investigations of foodborne diseases associated with ready-to-eat
foods that support growth of microbes have provided unique settings to increase
our knowledge of dose–response relationships in human infections with
foodborne pathogens. With knowledge of the attack rate (i.e. the proportion of
exposed people who become ill) associated with the consumption of different
amounts of the implicated food, and the concentration of the pathogen in the
food, we can estimate the dose–response relationship, albeit with qualifications.
An example of how epidemiological data can be used to determine the infective
dose is the analysis of a multistate outbreak of Salmonella enteritidis associated
with ice cream consumption (Vought and Tatini, 1998). In this investigation, the
infective dose was calculated as no more than 28 cells based on consumption of
a single sundae cone, which caused severe illness in an 8-year-old boy and only
moderate to mild illness in the adult parents.
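As a rough illustration of how such outbreak data can be turned into a dose–response parameter, the sketch below inverts the exponential model at the observed attack rate. All the numbers used are hypothetical and are not those of the ice cream outbreak or any other cited investigation.

import math

# Hypothetical outbreak data (illustrative only).
concentration_cfu_per_g = 0.5      # pathogen level measured in the implicated food
serving_size_g = 65.0              # typical amount of the food consumed
n_exposed = 200                    # people who ate the implicated food
n_ill = 30                         # people who became ill

dose = concentration_cfu_per_g * serving_size_g   # mean ingested dose (cfu)
attack_rate = n_ill / n_exposed                   # proportion of exposed people who became ill

# For the exponential model P(ill) = 1 - exp(-r * dose), a point estimate of r
# follows by inverting the model at the observed attack rate.
r = -math.log(1.0 - attack_rate) / dose

print(f'dose = {dose:.1f} cfu, attack rate = {attack_rate:.1%}, r = {r:.4g}')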
Another innovative approach is the use of attack rates from multiple
outbreaks to evaluate published dose–response models (FAO/WHO, 2000).
Since 1997, Japan has advised large food service establishments to freeze and
retain subsamples of raw foods and cooked dishes for possible examination in
the event of an outbreak or reported illness. This food-saving system allows
measurement of pathogen levels in incriminated food, a variable often lacking in
most outbreak reports. By having multiple outbreak investigations available, it
was possible to fit the data using various dose–response models.
5.2.3 Animal studies
In the absence of human data, small animal studies can be used to assess the
virulence potential of different strains and serotypes, susceptibility of the
sensitive subpopulation (i.e., immune-compromised), and to study the role of
specific virulence determinants. Small animals such as mice, rabbits and
monkeys are administered known levels of the pathogen and the response of
interest is measured. These studies are particularly useful to evaluate the
interactions of pathogenic strain and food matrix effects on the host. For
example, the effect of fasting and administration of sodium bicarbonate alone
and in combinations on the ability of different strains of Salmonella Enteritidis
to invade the spleen in rats has been evaluated (Havelaar et al., 2001). Extensive
evaluations such as this could not be conducted in human clinical trials.
Susceptible animal models have been developed to provide enough data to
develop dose–response models that can be correlated with human data, especially
with respect to immunocompromised individuals, in whom such studies cannot
be conducted. However, the virulence potential observed in animals may not
reflect the response in humans, particularly for strains that are host
adapted. The challenge for successfully using animal data in dose–response
modeling is to extrapolate data acquired in animal models to humans.
Adjustments may be needed to account for body weight differences, body
mass, surface area of the lower gastrointestinal tract and host specificity in
expression of pathogenicity and virulence. In order to extrapolate the data,
it must be assumed that the response of the small animal to a particular
pathogen is similar to that of humans, and that the mechanism of pathogenesis of
the microorganism is the same for both animals and humans. Another
drawback to animal studies is that generally healthy animals of similar weights
and age are used. Also, laboratory animals are highly inbred and lack genetic
diversity (Buchanan et al., 2000).
5.2.4 In vitro studies
In the absence of human or animal studies, data obtained from cell, tissue, or
organ culture studies could be used in developing dose¨Cresponse models. Tissue
culture involves the maintenance or growth of tissues or cells out of the body (in
vitro), in a manner that allows differentiation and preservation of their
architecture and/or function. In vitro methods using tissue culture cell lines
have been used to study virulence functions, such as invasion of epithelial cells
and survival in macrophages. A correlation between animal in vivo and in vitro
studies has been demonstrated for adherence and invasion properties of
Salmonella and Shiga toxin-producing E. coli (La Ragione et al., 2001; Ferreira
et al., 1997).
5.2.5 Biomarkers
Biomarkers are measurable indicators of host exposure to a pathogen. They
include determination of levels of the pathogen or a metabolite in the host, and
other indicators such as biochemical or physiological changes that indicate
disease. Once identified, biomarkers may be particularly important as surrogates
of disease or to indicate end-points other than illness for substances or agents for
which little epidemiological data are available. There is a need to quantify
biomarkers and relate them to specific exposure levels. In particular, biomarkers
based on alterations in molecular and biochemical parameters may be useful in
microbial risk assessment for establishing the presence of an exposure, ranking
relative risks among exposed individuals, and estimating risks at low levels of
exposure. Dose–response data in humans obtained from biomarkers can help
reduce the assumptions and uncertainties that arise from interspecies and high-
dose to low-dose extrapolations, thereby making these risk assessments more
reliable, meaningful, realistic, and cost effective.
5.3 Modeling dose–response relationships
A dose–response model is a simplified description of the complex relationship
between a dose and the adverse event caused by a specific pathogen. Data from
studies obtained under specific conditions are used to predict adverse outcomes
from different, non-tested doses by fitting these data to predictive, mathematical
models. Fitting models often requires extrapolation beyond the range of the
86 Microbiological risk assessment in food processing
observed values. Important concepts in modeling dose–response relationships
include variability and uncertainty, the criteria to select data used in the model,
and determination of appropriate distributions used in the model.
5.3.1 Variability and uncertainty
Because a model is only a representation of the interaction of many variables
and often based on incomplete information, there is considerable variability and
uncertainty associated with the output of pathogen dose–response models.
Variability refers to the true heterogeneity of the biological system and as such
cannot be reduced by additional knowledge. On the other hand, uncertainty
refers to our imperfect knowledge and can be reduced with research. For
example, each strain of a pathogen has different potential for virulence in
humans (variable) and it is often unclear which strains of this pathogen are the
most virulent (uncertain). In the example of pathogen strains, additional research
will not change the fact that there is diversity in the biology of the pathogen
strain (variability) but can help us understand the differences in the virulence of
these strains (uncertainty).
Separating uncertainty and variability in the model, referred to as second-
order modeling, is preferable. By not separating variability and uncertainty, the
risk assessor assumes that the impact of either variability or uncertainty is
negligible and this assumption can quantitatively affect the predicted risk
(Nauta, 2000). One method of incorporating uncertainty along with
experimental variation is to combine estimates from several different models
rather than rely on a single dose–response model (Kang et al., 2000). This
approach does not reduce uncertainty but will reduce the biases of individual
models in the predictions. Additional information about variability and
uncertainty in modeling can be found in Byrd and Cothern (2000) and Vose
(2000).
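A minimal sketch of such second-order (two-dimensional) modeling is given below: the outer loop samples an uncertain model parameter, the inner loop samples serving-to-serving variability in dose, and the spread of the resulting mean risks expresses uncertainty. The exponential model and all distributions and parameter values are assumptions chosen only for illustration.

import numpy as np

rng = np.random.default_rng(1)

n_uncertainty = 500        # outer iterations: uncertainty about the parameter r
n_variability = 10_000     # inner iterations: variability in the ingested dose

mean_risk = np.empty(n_uncertainty)
for i in range(n_uncertainty):
    # Uncertain dose-response parameter (epistemic; could be narrowed by research)
    r = rng.lognormal(mean=np.log(1e-4), sigma=0.5)
    # Variable doses across servings/consumers (true heterogeneity; cannot be reduced)
    doses = rng.lognormal(mean=np.log(50.0), sigma=1.0, size=n_variability)
    p_ill = 1.0 - np.exp(-r * doses)          # exponential dose-response model
    mean_risk[i] = p_ill.mean()               # expected risk over the variable doses

print('median risk per serving:', np.median(mean_risk))
print('95% uncertainty interval:', np.percentile(mean_risk, [2.5, 97.5]))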
5.3.2 Empirical v. mechanistic models
Most efforts at dose–response modeling have used empirical models, which are
limited by the range of the experimental data. However, mechanistic models
would allow more effective extrapolation of predicted responses. For example,
they could be used to extrapolate data from healthy males to pregnant females.
Buchanan and collaborators (Buchanan et al., 2000) developed a simple
mechanistic model as an example of how this approach might be used for
foodborne pathogens. The example model was based on three compartments
which addressed the impact of stomach acidity, ability of cells to attach to and
colonize in the intestine, and the likelihood that an infection progresses to
morbidity. Before mechanistic models can be more fully developed and applied to
food safety issues, additional research is needed on the impact of host, pathogen
and food factors on infection mechanisms. Once developed, mechanistic models
would allow in silico exploration of interactions that are not possible to study
in humans or small animals and can assist in understanding different disease
outcomes for different exposed or infected individuals (Kirschner, 2001).
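The sketch below illustrates the general shape of such a compartmental approach (gastric survival, intestinal attachment and colonisation, progression to morbidity) combined with a single-hit assumption. The per-cell probabilities are invented for illustration and are not the values or the exact formulation of Buchanan et al. (2000).

# Illustrative three-compartment sketch (assumed, simplified parameter values).
p_survive_stomach = 1e-3          # fraction of ingested cells surviving gastric acidity
p_attach_colonise = 1e-2          # probability a surviving cell attaches and colonises
p_ill_given_infection = 0.3       # probability an established infection becomes symptomatic

def prob_illness(dose):
    """Probability of illness for an ingested dose, under a non-threshold (single-hit) assumption."""
    p_cell = p_survive_stomach * p_attach_colonise       # per-cell chance of establishing infection
    p_infection = 1.0 - (1.0 - p_cell) ** dose           # at least one cell succeeds
    return p_infection * p_ill_given_infection

for dose in (10, 1_000, 100_000):
    print(dose, round(prob_illness(dose), 6))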
5.3.3 Selection and development of models
Model selection and development will depend on the purpose of the risk
assessment, the availability of data, and the resources available to the risk
assessor (WHO/FAO, 2001). The selected model should be consistent with the
behavior of the study data and the nature of the known or assumed relationship
between exposure and infection, illness, or disease. These factors as well as the
modeling assumptions used determine how accurately the model reflects
reality, both biologically and in relation to the observed data (Bernard and
Scott, 1995).
Important considerations in selecting a model include: the need to extrapolate
from studies that use high doses to expected responses for low doses; the need to
extrapolate data collected from healthy adults to subpopulations with increased
susceptibility; and the endpoint of interest such as infection, morbidity, or
mortality. Extrapolation beyond the observed data points is often done to predict
the probability of illness from low doses and to account for differences between
the test subjects and the populations of interest, for example extrapolating from
small animals to humans or from healthy young adults to an elderly or high-risk
subpopulation. For these reasons, it is critical that the risk assessor document the
basis of model selection and the possible impact that decision has on the risk
assessment results.
Selecting surrogate pathogens for dose–response modeling should include a
consideration of the similarities in taxonomy, epidemiology of human disease,
and genetic control of pathogenesis (Coleman and Marks, 1998). Coleman and
Marks (1999) used murine challenge studies as a surrogate to account for the
differences in human host susceptibility to non-typhoid salmonellosis. A family
of dose–response curves representing subpopulations with different susceptibili-
ties to infection was developed using data from mice sensitized via administra-
tion of antibiotics (which removed the protective effect of normal microflora)
prior to challenge with Salmonella Enteritidis.
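One simple way to represent such a family of curves is to let the parameters of a chosen dose–response function differ by subpopulation, as in the sketch below; the beta-Poisson form and all parameter values are assumptions for illustration, not the fitted values of Coleman and Marks (1999).

import numpy as np

def beta_poisson(dose, alpha, beta):
    """Approximate beta-Poisson dose-response function."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

dose = np.logspace(0, 8, 9)
# Hypothetical parameter sets for subpopulations with differing susceptibility
subpopulations = {
    'general population': (0.2, 1e4),
    'elderly':            (0.4, 5e3),
    'immunocompromised':  (0.8, 1e3),
}
for name, (alpha, beta) in subpopulations.items():
    print(f'{name:<20}', np.round(beta_poisson(dose, alpha, beta), 3))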
While model selection may include a statistical evaluation of the fit, it should
not be the only consideration. For the model to be credible, consideration must
also be made of the biological characteristics and other events that influence the
modeled response (Teunis and Havelaar, 2000).
5.3.4 Examples of distribution functions used in dose–response models
One challenge that risk assessment modelers face is deciding the shape of the
dose–response function, especially in the lower dose region of the curve where
observational data are generally sparse, absent, or practically unattainable. Other
challenges are to describe the probability of illness, infection, morbidity or other
endpoints over a wide range of dose levels for different subpopulations.
88 Microbiological risk assessment in food processing
Commercial software packages are available that allow use of numerous
distribution functions.
Table 5.2 summarizes some of the different distribution functions used in
dose–response modeling. Vose (2000) provides mathematical equations for
these and other probability distributions. The three most commonly used
distribution functions for dose–response modeling are the exponential, beta-
Poisson, and Weibull-gamma (Whiting and Buchanan, 2001). The exponential
function assumes that the host susceptibility and pathogen virulence are constant
for a specific population. The beta-Poisson and Weibull-gamma include host-
pathogen interactions that are beta or gamma distributed, respectively.
Table 5.2 Summary of selected distribution functions used in dose–response models

Beta-Poisson
  Assumptions: assumes infectivity is dose dependent; accounts for pathogen
  virulence or host susceptibility differences, or both; predicts mean percentage of
  population for a particular dose; non-threshold function.
  References: Haas, 1983; Buchanan et al., 2000

Beta-binomial
  Assumptions: a modified beta-Poisson; accounts for variability of the probability
  of illness predicted for a particular dose.
  References: Cassin et al., 1998b

Exponential
  Assumptions: assumes the probability of a cell causing infection is independent
  of dose; non-threshold function.
  References: Haas, 1983; Buchanan et al., 2000

Weibull-gamma
  Assumptions: assumes that the probability that any individual cell can cause
  infection is distributed as a gamma function; provides flexibility as it can take on
  several different shapes depending on the parameter values selected.
  References: Farber et al., 1996; Whiting and Buchanan, 1997

Single-hit
  Assumptions: risk cannot exceed the probability of exposure (maximum risk
  curve limits upper confidence level).
  References: Teunis and Havelaar, 2000
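For reference, the commonly quoted functional forms of the three most used distributions are sketched below (see, e.g., Haas, 1983; Whiting and Buchanan, 2001); the parameter values in the demonstration calls are illustrative only.

import numpy as np

def exponential(dose, r):
    """Exponential: constant per-cell probability r; non-threshold."""
    return 1.0 - np.exp(-r * dose)

def beta_poisson(dose, alpha, beta):
    """Approximate beta-Poisson: per-cell probability beta-distributed across host-pathogen pairs."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

def weibull_gamma(dose, b, alpha, beta):
    """Weibull-gamma: flexible three-parameter form (reduces to beta-Poisson when b = 1)."""
    return 1.0 - (1.0 + dose ** b / beta) ** (-alpha)

dose = np.array([1e1, 1e3, 1e5])
print(exponential(dose, r=1e-4))
print(beta_poisson(dose, alpha=0.25, beta=500.0))
print(weibull_gamma(dose, b=0.6, alpha=0.25, beta=500.0))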
5.3.5 Examples of models used in recent quantitative microbial risk
assessments
Several different approaches have been used to develop models for microbial
risk assessment and no single model seems best for all pathogens or ranges of
data. International expert consultations sponsored by FAO/WHO evaluated
dose–response curves for salmonellosis and listeriosis that had been developed
using different sources of data, biological end points, and modeling approaches
(FAO/WHO, 2000). For Salmonella, five dose–response relationships, three
based on clinical trial data and two based on outbreak data, were evaluated. For
Listeria the dose–response relationships evaluated included different endpoints
(infection, morbidity, and mortality), different subpopulations (neonates,
elderly, general population) and different data sources (disease statistics, animal
models, and outbreak data). These risk assessment experts concluded that no
single approach to defining the dose–response relationship was superior. For
either of these diseases, the various dose–response curves examined either
under- or over-predicted illness compared to observations from outbreak data.
However, each group did select a dose–response model that they felt was most
useful for the risk assessment that they had been asked to conduct, and provided
a detailed rationale for the selections.
Table 5.3 provides a summary of dose–response models from selected
national or international microbial risk assessments. Different risk assessments
have focused on different biological endpoints of the data used in the models
such as infection or colonization, morbidity, mortality, and sequelae. It is
important to emphasize that dose–response data or the dose–response
relationships derived from the data can only be compared directly if they are
describing the same biological endpoint.
Because of the differences in dose–response studies and difficulties in
selecting models that fit the data, it is even more difficult to compare infectious
doses for different organisms. Holcomb et al. (1999) evaluated six dose–
response models using data from four human feeding studies to determine
whether a single model could be fit to diverse data. The Weibull-gamma, a
flexible three-parameter mathematical model, provided the best overall fit for
the four pathogens. However, increased flexibility must be counterbalanced
against uncertainty: increasing the number of model parameters increases both
flexibility and inherent uncertainty.
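A typical way such comparisons are made is to fit each candidate model to the dosed-group data by maximum likelihood and compare the fits, penalising extra parameters. The sketch below fits the approximate beta-Poisson form to invented feeding-trial counts; the data and starting values are hypothetical, and the procedure is a generic illustration rather than the exact method of Holcomb et al. (1999).

import numpy as np
from scipy.optimize import minimize

# Hypothetical feeding-trial data: administered dose, subjects per group, number ill.
doses   = np.array([1e2, 1e4, 1e6, 1e8])
n_dosed = np.array([10, 10, 10, 10])
n_ill   = np.array([0, 2, 6, 9])

def beta_poisson(dose, alpha, beta):
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

def neg_log_likelihood(log_params):
    alpha, beta = np.exp(log_params)                # optimise on log scale to keep parameters positive
    p = np.clip(beta_poisson(doses, alpha, beta), 1e-12, 1 - 1e-12)
    # Binomial log-likelihood of observing n_ill responders out of n_dosed at each dose
    return -np.sum(n_ill * np.log(p) + (n_dosed - n_ill) * np.log(1.0 - p))

result = minimize(neg_log_likelihood, x0=np.log([0.3, 1e4]), method='Nelder-Mead')
alpha_hat, beta_hat = np.exp(result.x)
print(f'alpha = {alpha_hat:.3g}, beta = {beta_hat:.3g}, -lnL = {result.fun:.2f}')

Candidate models can then be ranked by their minimised negative log-likelihoods (or an information criterion that penalises the number of parameters), which is one way the flexibility/uncertainty trade-off noted above becomes visible.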
5.4 Problems in hazard characterization
Absence of human data, incomplete epidemiological information, difficulty in
extrapolating animal data to humans, and lack of mechanistic models are major
factors that limit the use and development of dose–response models and
contribute to the uncertainty in the model estimates. Another common problem
is the lack of well-defined criteria to evaluate data quality and to select data to
include in the model. Criteria are also needed to identify outliers that should
validly be omitted from the dataset. Furthermore, a dose–response model
developed for the general population may not be applicable for a susceptible
subpopulation (i.e., the elderly).
Table 5.3 Summary of dose–response models used in national and international
microbial risk assessments

E. coli O157:H7 / ground beef (USDA, 2001)
  End-point examined: morbidity and mortality
  Type of data used: exposure data and human feeding trials using surrogate pathogens
  Model distribution function used: beta-Poisson

Salmonella Enteritidis / shell eggs and egg products (USDA, 1998)
  End-point examined: illness
  Type of data used: surrogate human feeding trial (Shigella dysenteriae)
  Model distribution function used: beta-Poisson

V. parahaemolyticus / oysters (US FDA, 2001)
  End-point examined: illness
  Type of data used: human feeding studies
  Model distribution function used: multiple models fitted (beta-Poisson, probit, Gompertz)

L. monocytogenes / ready-to-eat foods (US DHHS/USDA, 2001)
  End-point examined: morbidity
  Type of data used: animal (mouse) lethality and human fatality statistics
  Model distribution function used: multiple models fitted (logistic, exponential,
  Gompertz-log, probit, multihit)

Salmonella spp. / broilers and eggs (FAO/WHO, 2000)
  End-point examined: illness
  Type of data used: human feeding and outbreak data
  Model distribution function used: three beta-Poisson models compared using
  different data; outbreak data fitted with exponential and beta-Poisson models

L. monocytogenes / ready-to-eat foods (FAO/WHO, 2000)
  End-point examined: infection
  Type of data used: expert elicitation; animal (mouse) study
  Model distribution function used: fit compared across multiple models
  (Weibull-gamma, exponential, and beta-Poisson)

L. monocytogenes / ready-to-eat foods (FAO/WHO, 2000)
  End-point examined: morbidity
  Type of data used: annual disease statistics and food survey data; or outbreak
  data (butter, Mexican-style cheese)
  Model distribution function used: exponential

L. monocytogenes / ready-to-eat foods (FAO/WHO, 2000)
  End-point examined: mortality
  Type of data used: animal (mouse) lethality and human fatality statistics
  Model distribution function used: exponential

L. monocytogenes / ready-to-eat foods (FAO/WHO, 2000)
  End-point examined: febrile gastroenteritis
  Type of data used: outbreak data (chocolate milk, corn salad)
  Model distribution function used: exponential
5.4.1 Conducting severity assessments
A hazard characterization consists of two phases: determination of the dose–
response relationship and assessment of severity. As documented above, there have
been significant advances during the past several years in determining dose–
response relationships, both conceptually and technically. Severity assessments
have not received the same degree of attention. Currently, this phase of the hazard
characterization is considered qualitatively and the results of the dose–response
relationship are interpreted in light of the spectrum of consequences associated
with the disease during the risk characterization phase of the risk assessment.
Conceptually, it has been proposed that this could be performed by developing
dose–response relationships for several biological end points (e.g. morbidity,
mortality, sequelae), weighting the various biological end points in relation to their
impact, and integrating the estimates to provide a total disease burden (Buchanan
et al., 1998). In this manner, the impact of different foodborne diseases could be
compared, a measurement that could be useful for decision making related to
allocation of limited resources. However, risk assessors should be careful about
undertaking such a process. It requires the risk assessors to make societal value
judgments in order to set the weighting values (e.g. how many hospitalizations are
equal to one death), and as such may be beyond the risk assessors' mandate. In the
absence of such a mandate, the reporting of multiple biological end points is likely
to be the extent to which a severity assessment should be conducted.
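Purely as an illustration of the arithmetic involved, the sketch below aggregates several end points into a single weighted burden figure. The case counts and, in particular, the weights are hypothetical value judgments of exactly the kind the text cautions may lie outside the risk assessor's mandate.

# Hypothetical end-point estimates and severity weights (illustrative only).
endpoints = {
    # end-point: (predicted annual cases, weight relative to one case of mild illness)
    'mild illness':     (10_000, 1.0),
    'hospitalisation':  (500,    25.0),
    'chronic sequelae': (50,     100.0),
    'death':            (5,      1_000.0),
}

total_burden = sum(cases * weight for cases, weight in endpoints.values())
print(f'weighted disease burden = {total_burden:,.0f} mild-illness equivalents')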
5.4.2 Lack of human quantitative data
Clinical trials, a primary source of dose–response data, are generally conducted
with healthy adult volunteers. However, as mentioned previously, one must be
cognizant of the limitations associated with volunteer studies. These data do not
reflect the entire population and may overestimate the dose needed to adversely
affect humans and therefore underestimate the risk to more highly susceptible
subpopulations. Also, because of financial and ethical considerations only a
limited number of volunteers are included in each trial so the degree of
uncertainty associated with hazard characterizations can be substantial.
Furthermore, clinical trials using potentially fatal microorganisms such as L.
monocytogenes, V. parahaemolyticus, and enterohemorrhagic E. coli are not
possible due to ethical reasons, so generation of accurate dose–response curves
requires alternative approaches. While the use of foodborne outbreak
investigation data is a promising alternative means for acquiring human data,
it also has significant limitations. By the time the agent has been identified, the
incriminated food may no longer be available. Also, it is difficult to determine
the levels of the microorganism in the food at the time of consumption because
the pathogen could grow (levels increase) or die off (levels decrease) during
storage of the food. All these factors contribute to a lack of abundant human
quantitative data for dose–response modeling.
5.4.3 Lack of methods to extrapolate high doses to low doses
Criteria are needed to assist risk assessment modelers in selecting dose–response
models to use in different situations. Better methods are needed to evaluate
model fit. Guidelines must be developed to address measurement error
(accuracy of analytical methods to enumerate microorganisms and deter-
mination of exposed individuals) and guidelines developed for selecting data
used in dose–response modeling (FAO/WHO, 2000). The ability to extrapolate
from high to low doses is also likely to be influenced by continuing research on
how quorum sensing and other extracellular microbial communication strategies
affect the expression of virulence determinants.
5.4.4 Extrapolation from animal to human
Animal models have been used as surrogates to provide a basis for extrapolating
dose–response estimates for humans. Measures of the severity of illness used in
animal studies often do not correspond with definitions of human illness on
which reporting statistics are based (Bernard and Scott, 1995). In chemical risk
assessments, when extrapolating small animal data to humans, the modeler must
account for differences in life span, body weight, and metabolic rate (Byrd and
Cothern, 2000). However, for infectious agents, the
similarities between the surrogate animal and humans in the disease process
including specificity of receptors, immune response, and physiological functions
are more important considerations. There have been few attempts to directly
compare the disease process in humans and surrogate animals to eliminate
potential confounding factors (e.g. strains used, means of delivering dose,
immune and physiological status, genetic diversity).
5.4.5 Sequelae
Foodborne illness is typically thought of as involving clinical manifestations
associated with the gastrointestinal tract (e.g. diarrhea, vomiting). However, it is
becoming increasingly evident that a number of chronic severe sequelae such as
ankylosing spondylitis, arthropathies, renal disease, cardiac and neurologic
disorders, and nutritional and other malabsorptive disorders (incapacitating
diarrhea), may arise in some of the individuals infected by foodborne pathogens
(Lindsay, 1997). Sequelae can be life-threatening, such as cases of hemolytic
uremic syndrome that occur, generally in children under the age of 10 years, as a
consequence of enterohemorrhagic E. coli infections.
The association between a particular microorganism or its products and these
long-term sequelae ranges from convincing to circumstantial (Council for
Agricultural Science and Technology, 1994; Bunning et al., 1997). The reason
for this uncertainty is that, except in rare circumstances, current surveillance
systems are not adequate to link chronic complications to a foodborne infection.
Furthermore, chronic sequelae can arise as a result of otherwise asymptomatic
infections. The chronic sequelae may be unrelated to the acute illness and may
occur even if the immune system successfully eliminates the primary infection.
In fact, in many cases it is the activation of the immune system due to the
infection with a foodborne microorganism that initiates the chronic condition as
a result of an autoimmune response (Council for Agricultural Science and
Technology, 1994; Bunning et al., 1997). This is further complicated by the fact
that such autoimmune responses are typically associated only with individuals
with a genetic predisposition. For example, cases of reactive arthritis in a
population exposed to Salmonella appear to be linked primarily to individuals
having the HLA-B27 immune marker.
Consideration of sequelae is often critical to performing adequate severity anal-
yses, but how this is best done is one of the challenges currently facing risk assessors.
Sequelae often appear not to be directly related to the same type of dose–response
relationships associated with acute responses or fatalities. In some instances, it
appears that the incidence of sequelae is most easily described as a percentage of
active infection; however, the general lack of data relating sequelae to specific
instances of foodborne disease has hampered consideration of this approach.
5.4.6 Validating models
Generally all available data are used in the development of a dose–response
model, so typically it is difficult to find data to use in validating a new model.
When additional data are available, the reasonableness of model predictions and
the appropriateness of the modeling assumptions can be evaluated by comparing
model output to relevant data that were not used to develop the relationships and
distributions of parameters in the model per se. For example, data from two
outbreaks of E. coli O157:H7 were used to validate the dose–response relation-
ship at low doses predicted by a beta-Poisson model fit to animal data (Haas et
al., 2000). Another approach is to use data generated in one country to develop
the dose–response model and then use it with the exposure and health statistics
data from a second country (WHO/FAO, 2001), evaluating the degree of
agreement between the number of predicted and observed cases.
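The sketch below shows the shape of such a validation exercise: a dose–response parameter 'borrowed' from elsewhere is combined with a second population's exposure data, and the predicted annual case count is compared with a surveillance figure. Every number is hypothetical.

import numpy as np

rng = np.random.default_rng(7)

servings_per_year = 5_000_000                     # annual servings in the second country
prevalence = 0.02                                 # fraction of servings contaminated
doses = rng.lognormal(mean=np.log(20.0), sigma=1.5, size=100_000)   # dose per contaminated serving

r = 5e-5                                          # exponential parameter from the external model
p_ill = 1.0 - np.exp(-r * doses)
risk_per_serving = prevalence * p_ill.mean()

predicted_cases = servings_per_year * risk_per_serving
observed_cases = 140                              # hypothetical surveillance count
print(f'predicted = {predicted_cases:.0f} cases v. observed = {observed_cases}')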
5.5 Future trends
Assessing and then managing risks has always been an integral part of food
safety throughout history. With improved modeling tools, we can now conduct
risk assessments with a higher degree of sophistication. In recent years there has
been an increasing international focus on risks associated with microbial
pathogens and specifically on reducing those risks through a comprehensive,
farm-to-table approach to food safety. Critical needs for national and
international efforts include initiation of new research and expansion of current
surveillance efforts, advancement of modeling techniques, development and
standardization of study designs, and improvements of our ability to share data,
ideas, and modeling tools.
Important areas to advance our ability to conduct hazard characterizations are
as follows:
• Development of a central repository for outbreak and other data used in
dose–response modeling.
• Improvement in modeling techniques to allow exploration of the interactions
of host, pathogen, and food matrix in models, along with the development of
mechanistic models.
• Advancement of model tools to enable use of multiple dose–response models
as a method of accounting for uncertainty.
• Development of criteria for selecting dose–response data, models and tools of
comparison.
Although dose–response models developed at the national level can be generic
so that they can be used at the international level, it would be preferable to
validate models using regional data. Information on disease incidence for each
region of interest is needed for these internationally focused risk assessments.
5.5.1 Research needs
Some of the research needs for improving our ability to conduct hazard
characterizations include the following:
• Biological information for the development of mechanistic models.
• Enhanced outbreak investigations to provide data on the level of pathogen in
the implicated food, amount of implicated food actually consumed,
characterization of health and immune status of symptomatic and
asymptomatic cases, and calculation of attack rates.
• Quantitative data on the effect of food matrix on likelihood of infection.
• Potential for development of sequelae following illness and techniques for
modeling these sequelae.
• Evaluation of the impact of secondary (person-to-person) transmission of
disease.
• Identification of the key virulence factors for each pathogen so that strain
differences can be fully accounted for in the assessment and determination of
the frequency and distribution of specific virulent pathogen strains in food.
5.5.2 Risk assessment in a risk analysis framework
Conducting risk assessment within a risk analysis framework improves
subsequent regulatory decisions. It is important for risk assessors and risk
managers to interact on a regular basis throughout the risk assessment process
and continually refine the questions the risk assessment should answer, the scope
of the project, and the key assumptions used in the model. As practical
experience is gained with this approach, both the conduct and use of complex
risk assessment in management and trade decisions will increase and improve.
Risk assessments can also be valuable as a tool to identify critical research needs
before a full quantitative risk assessment is attempted. Such activity is often
referred to as data gap analysis.
5.5.3 Intensive outbreak investigations
Epidemiological data would be useful for developing risk assessment models if
standard outbreak investigations were expanded to provide detailed information
on the amount of food consumed and the degree of contamination of that food.
Outbreak investigations typically focus on acquiring the minimum amount of
information needed to identify the source of an outbreak in order to prevent
further disease cases. However, in the longer term the knowledge gained from
more thorough investigation may prevent more cases by providing the type of
information needed to identify risks that can be managed. This information
would allow the development of the relationship between the amount of
contaminated food(s) consumed and the severity of the subsequent illness, as well
as the relationship between the dose of contaminated food ingested and the
severity of the resulting illness, controlling for host factors.
As a means of making additional resources available, the Chicago Department
of Public Health (CDPH) developed, in cooperation with the US Food and Drug
Administration, a protocol to carry out such intensive investigations of
foodborne disease outbreaks, which also includes molecular, environmental,
and virulence characterization of the microbial isolates. This
protocol is available on the internet (www.foodriskclearinghouse.umd.edu) and
has been used to investigate a Salmonella outbreak. Investigations of this type
will allow quantitative analysis of disease-associated foods and better estimates
of the total population exposed, both critical values needed to calculate attack
rates and thus to use this type of epidemiological data to develop dose–response
relationships for foodborne pathogens.
5.6 Sources of further information and advice
The field of microbial risk assessment including dose–response modeling is
advancing rapidly, as new data and modeling tools become available. A number
of national government agencies and international organizations and
professional societies are active in the area of public health and microbial risk
assessment. Some resources that provide up-to-date information on the state of
the art include:
• the Society for Risk Analysis (www.sra.org)
• the World Health Organization (www.who.int/fsf/mbriskassess/applicara/
index.htm)
• the Food and Agriculture Organization (FAO) of the United Nations (http://
www.fao.org/es/ESN/pagerisk/riskpage.htm)
• the Centers for Disease Control and Prevention (www.cdc.gov).
Information and links to current information on microbial risk assessment are
also provided by the Food Safety Risk Analysis Clearinghouse at
www.foodriskclearinghouse.umd.edu and the US government food safety web
site at www.foodsafety.gov. Subscribing to electronic mailing lists is another
way to keep informed of current thinking in the area of dose–response modeling
(Byrd and Cothern, 2000). RiskAnal is a popular active list. To subscribe,
simply send an email to lyris@lyris.pnl.gov with 'subscribe riskanal your name'
in the message.
5.7 References
BERNARD D T and SCOTT V N (1995), 'Risk assessment and food-borne micro-
organisms: the difficulties of biological diversity', Food Control 6 (6),
329–333.
BUCHANAN R L, COLE M, LAMMERDING A M, CLARKE I R, VAN SCHOTHORST M and
ROBERTS T A (1998), 'Potential application of risk assessment techniques
to microbiological issues related to international trade in food and food
products', J. Food Protection 61, 1075–1086.
BUCHANAN R L, DAMBERT W G, WHITING R C and VAN SCHOTHORST M (1997),
'Use of epidemiologic and food survey data to estimate a purposefully
conservative dose-response relationship for Listeria monocytogenes levels
and incidence of listeriosis', J. Food Protection 60, 918–922.
BUCHANAN R L, SMITH J L and LONG W (2000), 'Microbial risk assessment: dose–
response relations and risk characterization', Int. J. Food Microbiol. 58,
159–172.
BUNNING V K, LINDSAY J A and ARCHER D L (1997), 'Chronic health effects of
microbial foodborne disease', Wld Hlth Statist Quart. 50, 51–56.
BYRD B M and COTHERN C R (2000), Introduction to Risk Analysis, Maryland,
Government Institutes.
CASSIN M H, PAOLI G M and LAMMERDING A M (1998a), 'Simulation modeling for
microbial risk assessment', J. Food Protection 61 (11), 1560–1566.
CASSIN M H, LAMMERDING A M, TODD E C, ROSS W and MCCOLL R S (1998b),
'Quantitative risk assessment for Escherichia coli O157:H7 in ground beef
hamburgers', Int. J. Food Microbiol. 41, 21–44.
CODEX ALIMENTARIUS COMMITTEE (1999), 'Principles and guidelines for the
conduct of microbial risk assessments', FAO/WHO, Rome.
COLEMAN M and MARKS H (1998), 'Topics in dose–response modeling', J. Food
Protection 61 (11), 1550–1559.
COLEMAN M E and MARKS H M (1999), 'Qualitative and quantitative risk
assessment', Food Control 10, 289–297.
COUNCIL FOR AGRICULTURAL SCIENCE AND TECHNOLOGY (1994), Foodborne
Pathogens: Risks and Consequences, Task Force Report No. 122, Council
for Agricultural Science and Technology, Ames, Iowa.
FARBER J M, ROSS W H and HARWIG J (1996), 'Health risk assessment of Listeria
monocytogenes in Canada', Int. J. Food Microbiology 30, 145–156.
FERREIRA A J, ELIAS W P JR, PELAYO J S, GIRALDI R, PEDROSO M Z and SCALETSKY I
C (1997), 'Culture supernatant of Shiga toxin-producing Escherichia coli
strains provoke fluid accumulation in rabbit ileal loops', FEMS Immunol
Med Microbiol 19 (4), 285–288.
FOEGEDING P M (1997), 'Driving predictive modelling on a risk assessment path
for enhanced food safety', Int. J. Food Microbiology 36, 87–95.
FAO/WHO (FOOD AND AGRICULTURAL ORGANIZATION OF THE UNITED NATIONS
AND THE WORLD HEALTH ORGANIZATION) (2000), Joint FAO/WHO Expert
Consultation on Risk Assessment of Microbiological Hazards in Foods,
FAO Food and Nutrition Paper 71, FAO, Rome.
HAAS C N (1983), 'Estimation of risk due to low doses of microorganisms: A
comparison of alternative methodologies', Am. J. Epidemiol. 118,
573–582.
HAAS C N, THAYYAR-MADABUSI A, ROSE J B and GERBA C P (2000), 'Development
of a dose-response relationship for Escherichia coli O157:H7', Int. J.
Food Microbiology, 1748, 153–159.
HAVELAAR A H, GARSSEN J, TAKUMI K, KOEDAM M A, DUFRENNE J B, VAN LEUSDEN
F M, DE LA FONTEYNE L, BOUSEMA J T and VOS J G (2001), 'A rat model for
dose–response relationships of Salmonella Enteritidis infection', J.
Applied Microbiology 91, 442–452.
HOLCOMB D L, SMITH M A, WARE G O, HUNG Y, BRACKETT R E and DOYLE M P
(1999), 'Comparison of six dose-response models for use with food-borne
pathogens', Risk Analysis 19 (6), 1091–1100.
JAYKUS L (1996), 'The application of quantitative risk assessment to microbial
food safety risks', Crit. Rev. Microbiol. 22 (4), 279–293.
KANG S, KODELL R L and CHEN J J (2000), 'Incorporating model uncertainties
along with data uncertainties in microbial risk assessment', Regulatory
Toxicology and Pharmacology 32, 68–72.
KIRSCHNER D (2001), 'Reconstructing microbial pathogenesis', ASM News 67,
566–573.
KOTHARY M H and BABU U S (2001), 'Infective dose of foodborne pathogens in
volunteers: A review', J. Food Safety 21, 49–73.
LA RAGIONE R M, COLES K E, JORGENSEN F, HUMPHREY T J and WOODWARD M J
(2001), 'Virulence in the chick model and stress tolerance of Salmonella
enterica serovar Orion var. 15+', Int. J. Med. Microbiol 290 (8), 707–718.
LEVINE M M, BLACK R E, CLEMENTS M L, NALIN D R, CISNEROS L and FINKELSTEIN R A
(1981), 'Volunteer studies in development of vaccines against Cholera and
Enterotoxigenic Escherichia coli: A review', in Holme T, Holmgren J,
Merson M H and Möllby R, Acute Enteric Infections in Children. New
Prospectives for Treatment and Prevention, Elsevier, North-Holland
Biomedical Press, 443–459.
LINDSAY J A (1997), 'Chronic sequelae of foodborne disease', Emerging
Infectious Diseases 3 (4), 443–452.
NAUTA M J (2000), 'Separation of uncertainty and variability in quantitative
microbial risk assessment models', Int. J. Food Microbiology 57, 9–18.
SPRONG R C, HULSTEIN M F and VAN DER MEER R (1999), 'High intake of milk fat
inhibits intestinal colonization of Listeria but not of Salmonella in rats', J.
Nutr. 129, 1382–1389.
TEUNIS P F M and HAVELAAR A H (2000), 'The beta Poisson dose-response model
is not a single-hit model', Risk Analysis 20 (4), 513–520.
TEUNIS P F M, NAGELKERKE N J D and HAAS C N (1999), 'Dose response models for
infectious gastroenteritis', Risk Analysis 19 (6), 1251–1260.
USDA (UNITED STATES DEPARTMENT OF AGRICULTURE) (1998), 'Salmonella
Enteritidis risk assessment, shell eggs and egg products', Washington DC.
USDA (UNITED STATES DEPARTMENT OF AGRICULTURE) (2001), 'Draft risk
assessment of the public health impact of Escherichia coli O157:H7 in
ground beef', Washington DC. (www.fsis.usda.gov/OPPDE/rdad/FRPubs/
00-023NReport.pdf)
US DHHS/USDA (UNITED STATES DEPARTMENT OF HEALTH AND HUMAN SERVICES
AND UNITED STATES DEPARTMENT OF AGRICULTURE) (2001), 'Draft
assessment of the relative risk to public health from foodborne Listeria
monocytogenes among selected categories of ready-to-eat foods',
Washington DC. (www.foodsafety.gov).
US FDA (UNITED STATES FOOD AND DRUG ADMINISTRATION) (2001), 'Draft risk
assessment on the public health impact of Vibrio parahaemolyticus in raw
molluscan shellfish', Washington DC. (www.foodsafety.gov).
VOSE D (2000), Risk Analysis, England, John Wiley & Sons Ltd.
VOUGHT K J and TATINI S R (1998), 'Salmonella enteritidis contamination of ice
cream associated with a 1994 multistate outbreak', J. Food Protection 61
(1), 5–10.
WEST J B (1985), Best and Taylor's Physiological Basis of Medical Practice,
11th ed, Baltimore, Williams & Wilkins.
WHITING R C and BUCHANAN R L (1997), 'Development of a quantitative risk
assessment model for Salmonella enteritidis in pasteurized liquid eggs',
Int. J. Food Microbiology 36, 111–125.
WHITING R C and BUCHANAN R L (2001), 'Predictive modeling and risk
assessment', in Doyle M P, Food Microbiology: Fundamentals and
Frontiers, ASM Press, Washington, DC, 813–831.
WHO/FAO (WORLD HEALTH ORGANIZATION AND THE FOOD AND AGRICULTURAL
ORGANIZATION OF THE UNITED NATIONS) (2001), 'WHO/FAO guidelines
on hazard characterization for pathogens in food and water', from a
workshop, Bilthoven, Netherlands 13–18 June 2000, Preliminary draft.
6
Exposure assessment
M. Brown, Unilever Research, Sharnbrook
6.1 Introduction
The aim of this chapter is to provide an explanation and practical guide to
exposure assessment (Schothorst, 1997), so that readers can tackle evaluation of
the level of microorganisms or microbial toxins in a food at the end of the supply
chain, when it is consumed. The exposure assessment procedure explained and
used as a reference is qualitative and suitable for routine assessment of the likely
impact of process and formulation changes on the microbiological risks
associated with a food product. How the output is presented must be dictated by
the needs of the users.
Microbiological exposure assessments are overall models of the level of
pathogens or toxins in foodstuffs moving through the supply chain. Their
function is to provide an estimate of levels in a product at the point of
consumption. Risk assessors are responsible for using them with other risk
assessment information to evaluate how raw material quality and all the factors
in the supply chain including consumer use, can fix or alter the exposure of
consumers to foodborne hazards. Because each exposure assessment identifies
the elements of the food chain that are relevant to preventing or managing food
safety problems, users must be aware of their strengths, limitations and scope.
In practice an assessment will be directed at a specific food (e.g. cooked
meat) made by an identified supply chain. It may focus on the fate of a specific
hazard (e.g. salmonella), or be repeated (using the same supply chain and
consumer data) until the full range of realistic hazards has been covered (e.g.
salmonella, Staphylococcus aureus and Listeria monocytogenes). Each
assessment needs to describe potential routes of contamination and control
measures, combined with knowledge of the characteristics of the pathogen, and
is used to estimate the level or the probability of toxin presence in a portion at
consumption. Assessments can use simple descriptions, point estimates or
ranges of values to describe variables (e.g. time, temperature or pathogen level)
and should make variability, uncertainty and assumptions explicit and show how
far the selected control measures actually control food hazards. Hence, realistic
exposure assessments need to collect and make use of information about all the
risk-determining steps from raw material to product consumption (Lammerding
and Fazil, 2000). Typical information includes raw material quality, the
performance of manufacturing equipment and consumer usage; plus in-house or
literature data on the characteristics of realistic pathogens (e.g. growth range,
heat sensitivity, prevalence) in the product. Technical personnel and micro-
biologists involved directly with the supply chain will usually provide this
information. Sometimes it also comes from consumers and may be backed up by
consultation with outside experts or industry colleagues.
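As a minimal illustration of the point-estimate form of such an assessment, the sketch below pushes a pathogen level (in log10 cfu/g) through a handful of supply-chain stages to an estimated dose per portion. The stage names and the changes applied are assumptions chosen only for illustration, not data for any real product or hazard.

# Point-estimate exposure sketch in log10 cfu/g (all values assumed for illustration).
stages = [
    ('raw material level',     +2.0),   # initial contamination
    ('chilled transport',      +0.3),   # limited growth
    ('cooking step',           -6.0),   # thermal reduction
    ('post-process handling',  +0.5),   # recontamination/growth during handling
    ('domestic storage',       +1.0),   # growth before consumption
]

level = 0.0
for name, change in stages:
    level += change
    print(f'{name:<22} {change:+.1f}  ->  {level:+.1f} log10 cfu/g')

portion_size_g = 100.0
dose_per_portion = (10 ** level) * portion_size_g
print(f'estimated dose per {portion_size_g:.0f} g portion = {dose_per_portion:.3g} cfu')

A fuller assessment would replace each point estimate with a range or distribution so that variability, uncertainty and assumptions are made explicit, as described above.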
6.2 The role of exposure assessments in microbiological risk
assessment
Risk assessment has emerged over the last 10 years as an accepted, science-
based approach to making choices between options for managing the
microbiological safety of food (Lammerding, 1997; Klapwijk et al., 2000). It
is a systematic process based on four inputs: hazard identification, exposure
assessment, hazard characterisation and risk characterisation (Codex
Alimentarius Committee, 1999; Voysey, 2000). Exposure assessments use
information and expert opinion to identify and measure or rank what happens to
microbial hazards in food, and hence the dose likely to be presented to
consumers in a portion. They can provide an estimate of the impact of the
individual stages in a supply chain or changes in a supply chain on
microbiological safety (McNab, 1998). Changes may lead to food companies
redefining microbiological risks or hazards and actions needed to protect their
customers (Moy, 1999). Specifically, exposure assessments can provide
predictive or ‘what if’ information to help with managing the impact of new
raw material sources, milder product formulations or new groups of customers
(e.g. children), and may lead to revised control measures or the re-focusing of
quality assurance resources (Giffel et al., 1999). Alternatively they can be used
to examine historical data and trace the likely origins and extent of food
poisoning outbreaks or produce an early warning of problems. Exposure
assessments can be used in both ways because they are a data-based and
systematic approach to identifying hazards and understanding what affects the
riskiness of food products for specified groups of consumers (Balbus et al.,
2000). Risk assessments do not judge whether a product or a process is ‘safe’ or
‘unsafe’. Their function is to use exposure assessment data to produce an
estimate of risk that can be understood and used by risk managers in industry or
regulatory agencies to improve or maintain safety standards (Voysey and
Brown, 2000). In this context a risk manager is anyone in an organization who
has the responsibility for deciding how to manufacture or to sell a food product.
Risk assessments may be carried out by different groups of experts and with
different scopes and levels of complexity. Governments may do full risk
assessments for global risks or new hazards, e.g. Bovine spongiform
encephalopathy (BSE) (European Commission, 2000) or enteropathogenic
Escherichia coli. At lower levels of complexity, industrial or research institute
experts may be involved in ‘routine’ decision making on altered process
conditions or products, where there may be new or altered risks of food-
poisoning. At the most everyday level, quality assurance (QA) managers,
company microbiologists or regulatory agencies may do informal or operational
risk assessments as part of hazard analysis critical control point (HACCP) studies
(Hoornstra et al., 2001; Meredith et al., 2001), to set criteria (Norrung, 2000) or
to follow-up consumer complaints or food-poisoning incidents. Two general
approaches to risk assessment have been described by FAO/WHO (1995) and
Codex Alimentarius Commission (1999). Examples of quantitative risk
assessments with detailed exposure assessments are being published (Table 6.1).
Table 6.1 Examples of quantitative risk assessments
• USFDA (2000) Draft Risk Assessment on the Public Health Impact of Vibrio parahaemolyticus in Raw Molluscan Shellfish.
• HHS and USDA (2000) Draft Assessment of the Relative Risk to Public Health from Foodborne Listeria monocytogenes Among Selected Categories of Ready-to-Eat Foods.
• USFDA/CVM (2001) Draft Risk Assessment on the Human Health Impact of Fluoroquinolone Resistant Campylobacter Associated with the Consumption of Chicken.
• FAO/WHO (2000) Joint FAO/WHO Activities on Risk Assessment of Microbiological Hazards in Foods: Risk Assessment: Salmonella spp. in broilers and eggs – Preliminary Report – Exposure assessment of Salmonella spp. in broilers MRA 00/05 (Kelly, L., Anderson, W. and Snary, E.)
• FAO/WHO (2000) Joint FAO/WHO Activities on Risk Assessment of Microbiological Hazards in Foods: Risk Assessment: Salmonella spp. in broilers and eggs – Preliminary Report – Exposure assessment of Salmonella enteritidis in eggs MRA 00/04 (Ebel, E.D., Kasuga, F., Schlosser, W. and Yamamoto, S.)
• FAO/WHO (2000) Joint FAO/WHO Activities on Risk Assessment of Microbiological Hazards in Foods: Risk Assessment: Listeria monocytogenes in ready-to-eat foods Preliminary Report – Hazard identification and hazard characterization of Listeria monocytogenes in ready-to-eat foods. MRA 00/01 (Buchanan, R. and Lindqvist, R.)
Exposure assessments measure or estimate changes in pathogen or toxin levels along the supply chain to predict the likely level of infectious pathogens
or microbial toxins in a portion of food. This prediction is based on
microbiological data and supply chain data.
Microbiological data consist of the relevant characteristics of the pathogen
(e.g. heat sensitivity) affecting its activity (metabolism, growth, death or
survival) during food processing. Experimental data from challenge testing and
growth/death models may be used to estimate responses to process conditions.
Identifying the correct pathogen characteristics is important because the same
preventative or control measures will not have the same effect on all pathogens
(e.g. different heat sensitivities between infectious pathogens and spores).
Supply chain data include analysis of the physical and chemical properties of the
food, product and process design, measurement and inspection of processing
conditions, and data on consumer use. Data requirements will be determined by
describing the supply chain and potential risks at each step. A generic supply
chain model is shown in Fig. 6.1.
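One way to make the link between these two kinds of data concrete is to treat each stage of the chain as an estimated change in the log10 concentration of the pathogen and to accumulate the changes from raw material to consumption. The short Python sketch below illustrates the idea; the stage names, the initial level and every log10 change are assumptions chosen for illustration, not values taken from this chapter.

```python
# Illustrative sketch only: stage names and log10 changes are assumed values,
# not data from this chapter. Each stage is (description, change in log10 cfu/g).
stages = [
    ("raw material storage",      0.0),   # survival, no growth assumed
    ("cooking step",             -6.0),   # assumed 6 log reduction
    ("post-process handling",    +0.5),   # assumed low-level recontamination
    ("chilled distribution",     +1.0),   # assumed growth during distribution
    ("domestic storage and use", +0.5),   # assumed consumer abuse allowance
]

def level_at_consumption(initial_log_level, stages):
    """Accumulate per-stage log10 changes to estimate the level at consumption."""
    level = initial_log_level
    for name, change in stages:
        level += change
        print(f"{name:26s} change {change:+.1f} -> {level:+.1f} log10 cfu/g")
    return level

final_log_level = level_at_consumption(initial_log_level=2.0, stages=stages)
print(f"Estimated level at consumption: 10^{final_log_level:.1f} cfu/g")
```

In a real assessment the list of stages, the sign and size of each change and the starting level would come from the supply chain and microbiological data described above.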
Putting together process, product and microbiological information provides a
factual basis for making decisions about risks and hence where ‘safety’
resources and control measures should be directed to give the best cost/benefit
ratio in preventing food poisoning, and in turn protecting the reputation of a
brand or business. Both science and judgement play important roles in using
these estimates to reach decisions about which hazards and controls are realistic.
Recommendations may sometimes be controversial, if conclusions are
inconsistent with the concerns of stakeholders, as necessary actions may
sometimes increase costs. Controls may arise from increased levels of quality
management or inspection, tighter specifications, changes to process conditions
at risk determining steps, restriction of product shelf life or exclusion of some
sources of supply (Notermans and Hoornstra, 2000; Rasmussen et al., 2001).
Appropriate actions will depend on the severity of the hazard (e.g. spoilage, mild
or life-threatening illness: Jouve et al., 1999) and the level of risk that managers
are prepared to accept on behalf of their customers and employers.
Effective risk management practices can be implemented only when a
realistic and reliable exposure assessment has been produced and communicated
to the risk assessment team. If control of new hazards or the hazards not
previously associated with a particular food causes changes to control measures,
any new requirements need to be clearly communicated to suppliers, buyers,
product development, quality assurance and manufacturing personnel and
possibly outside the company to regulators.
6.2.1 Exposure assessment and HACCP
Formal and informal exposure assessments (EA) are already done during many
activities in the food industry related to microbiological safety, e.g. product
development and process optimization. The hazard analysis part of HACCP
studies requires similar information, because it is the risk management tool often
used to ensure that production is under day-to-day control for identified hazards
(Serra et al., 1999). If a formal approach to data collection and analysis, similar
to an EA, is used as an input to a HACCP study, this will form a reliable and consistent basis for evaluating current processes or the impact of any changes. Thus documentation and explanation are key elements of the contribution of exposure assessments to the overall risk analysis.
Fig. 6.1 A generic ‘production to consumption’ model of a supply chain; each stage may be risk determining and may provide inputs of hazards or opportunities to reduce hazard levels or limit increases. Probable risk-determining steps shaded.
6.3 What’s in an exposure assessment?
To aid understanding and acceptance of the risk assessment conclusions,
exposure assessments need to be based on a clear process of information
collection and analysis. This starts with formulation of the food safety problem
(e.g. salmonella food poisoning associated with a pre-fried meat dish for
microwave heating). It is followed by a statement of purpose and definition of
the scope of the assessment (e.g. to determine the effect of raw material quality,
process conditions and the variability of microwave heating on levels of
salmonella in a portion ready for consumption). The scope will often be
defined by the needs of the risk manager (e.g. to find out if a change of supplier
or raw material quality will affect the level of risk). The output needs to meet
his/her needs (e.g. ranking of the risks in the existing and proposed supply
chains) and its limitations must be clear. These can include:
• Variability (e.g. in process conditions).
• Uncertainty (e.g. lack of knowledge) (Kang et al., 2000).
• Lack of resources.
Examination of these factors may show if an assessment can be improved by
collecting more data (if easily available). If this cannot quickly and easily be
done, expert opinion, generic data from suppliers or assumptions (e.g. an
infectious dose is one cell) can be used, but its basis and limitations should be
explained. Any assumptions and conclusions should be easily understandable to
users and should normally be fail-safe or worst-case. For example, if prevention
of re-contamination after heating is a risk-determining step, then the assumption
should be that it occurs, unless there are data to show it is prevented, with back-
up information on the systems and facilities showing that it is consistently
achieved.
6.3.1 Stages in an exposure assessment
Preparing an exposure assessment follows a well-defined series of stages shown
in Table 6.2. All risk assessments aim to provide an estimate of the probability
and severity of illness from consuming potentially contaminated food.
Qualitative, or even informal, risk assessments should be more than a summary
of information and should follow the same systematic approach as a quantitative
risk assessment (Vose, 1998; Coleman and Marks, 1999; Notermans and Teunis,
1996). Data structured in the supply chain model, based on the steps described in
Table 6.2 should provide answers to the questions shown in Table 6.3. The
degree of detail in those answers will determine whether the output is qualitative or quantitative.
Microbiological risk assessments include an element of prediction, because it is not often possible to measure the real level of a pathogen in a food at the time of consumption. It is most important to know the levels of pathogens at the risk-determining steps, because the overall exposure assessment has to provide an estimate of the amount of pathogen or toxin likely to be ingested in a portion. Analytical data, or predictive, kinetic (sub) models, can be used to estimate sequential changes in the behaviour, or levels, of pathogens (Ross et al., 2000); and so understand what happens to them in the supply chain or in the hands of consumers (Walls and Scott, 1997; Foegeding, 1997; Kleer and Hildebrandt, 2001; Whiting and Buchanan, 1997). Predictive models for growth or death need to use point or range estimates and be based on the fastest growing or most resistant strains likely to be present in the food.
Table 6.2 Stages in an exposure assessment
1. Formulation of the food safety problem: showing the hazard, food product, supply
chain and market being covered.
2. Hazard identification: including characterisation of the pathogen relevant to the
conditions in the supply chain and the product.
3. Statement of purpose: agreed output and detail required by the risk assessor or risk
manager and within the capabilities of the team.
4. Scope: extent of the supply chain covered and any additional factors to be considered,
such as abuse.
5. Data collection: to construct a model of the supply chain:
Supply chain data:
• Overall product and process design
• List of raw materials, specification and analytical data
• Description of final product including characteristics and use instructions
• Process specifications + QA and process records
• Description and assessment of the sequence of process stages, based on scope and including microbiological effects from records or measurements, concentrating on the risk-determining steps
• Description of sequential changes in the physical and chemical characteristics of the food and the interaction of process steps with the product (e.g. heating or cooling)
• Equipment hygiene and routes of contamination
• Primary packaging, ability to exclude pathogens
• Conditions in storage, distribution and handling
• Product use, e.g. portion size, levels and pattern of consumption by various groups
• On-site validation of risk-determining steps and description and assessment of variability and uncertainty in the data
Microbiology data:
• Characteristics of the identified hazard under supply chain conditions from harvest/supply to consumption
• Pathogen inputs and levels from raw materials, factory, equipment and product
• Measurements of microbial numbers in raw materials and products (finished product and at consumption)
• Measurements of microbial numbers in-process material
• Availability and application of relevant kinetic models
6. Production of a supply chain model, including a flow diagram linking product,
process and microbiology information to outline the effects of process stages and
treatments on the identified pathogen. This may include the use of predictive models.
7. Identification of major risk-determining steps and the cumulative effect of
processing on pathogen levels.
8. Description of variability, uncertainty and assumptions for raw materials, pathogen
characteristics, specified process stages, etc.
9. Presentation of information to meet the statement of purpose and the needs of the
risk assessment team, e.g. point estimates, scenarios or quantitative risk assessment
(QRA).
Table 6.3 Key questions on the risk-determining steps that should be answered by the
supply chain model
Topic Key questions
Hazardous
microorganisms in raw
materials
What is the frequency and level of contamination of the raw
materials making-up the product?
What is the range of contamination in the raw materials?
What is the origin of the data – e.g. analytical samples or
predictions?
What is the variability or uncertainty of this estimate?
Effects of processing/
decontamination
What are the effects of harvest, handling and storage before
processing, on the level of the hazard entering the process
with each raw material?
What is the intended effect of all processing and any
decontamination stages on the level of the pathogen in the
product after manufacture?
What is the variability or uncertainty of this estimate?
Occurrence of toxins from
toxin-producing
pathogens
What is the likelihood of toxin presence, if the
microorganism can produce toxin in raw materials or
product?
What is the variability or uncertainty of this estimate?
Re-contamination after
processing or
decontamination
What is the frequency of re-contamination of the final
product with the hazard in the factory after processing or
decontamination?
What is the likely level of re-contamination after
processing or decontamination?
What is the variability or uncertainty of this estimate?
Primary packaging Is the product put in its primary packaging before or after
decontamination? If the answer is yes, how effective is
packaging at preventing recontamination before
consumption?
What is the frequency of recontamination after packaging?
What is the amount of recontamination after packaging?
What is the variability or uncertainty of this estimate?
The effects of storage and
distribution
What are the conditions during storage and distribution and
how does this affect the level of the hazard in the product
after manufacture?
What is the effect of storage (according to the instructions)
on the level of hazard at the point of sale?
What is the variability or uncertainty of this estimate?
Additional questions for toxin-producing microorganisms
What is the effect of storage conditions on toxigenesis (If
the level of the microorganism changes and it produces
toxin)?
What is the likelihood of toxin production in the product?
What is the variability or uncertainty of this estimate?
Consumer use Is the product intended as single- or multi-use with storage
after opening?
If the product is multi-use in a domestic or food service
application, then what is the effect of recontamination and
open shelf life on the level of microorganisms or toxin?
What is the variability or uncertainty of this estimate?
Additional questions for toxin-producing microorganisms
What is the likelihood of growth and toxin production
during open shelf-life?
What is the variability or uncertainty of this estimate?
The effects of storage,
usage and preparation on
the level of hazards
What is the effect of customer or food service, preparation
and usage on the level of the pathogen?
What is the variability or uncertainty of this estimate?
Additional questions for toxin-producing microorganisms
What is the effect of usage and preparation on toxin level
and production?
What is the probability of toxin presence at the point of
consumption?
What is the variability or uncertainty of this estimate?
Food intake What is the likely quantity of the food consumed by a
customer on a specified occasion or over a period of time?
What is the variability or uncertainty of this estimate?
Exposure estimate What is the likely level of the hazard in a portion at the
point of consumption?
What is the variability or uncertainty of this estimate?
6.4 Who should do an exposure assessment and when?
Information for an exposure assessment study should be collected and examined
by a team including experts, production and QA personnel. By the beginning of
a study this team should have collected and analysed the minimum information
on product, process and microbiology needed to build a supply chain model. To
interpret the impact of processing and product formulation on microbial
numbers, the team needs microbiological expertise to consider the effects of the
hazard (e.g. wide risk of infection or limited risk from toxigenesis) and the
sensitivity of consumers (e.g. rates of disease among consumers or the minimum
harmful dose). Such expertise will help the risk managers define their response
to the risk estimate. This information should be structured within the overall
model of the supply chain, showing what happens at each stage and especially at
the risk-determining steps. Therefore they need to have the skills and
information:
• To estimate the presence or concentration of the hazard in the raw ingredients.
• To summarise the overall effect of the sequence of stages in the supply chain (e.g. processing, distribution, handling and preparation) on the level (and severity) of the hazard at the point of consumption, taking into account the effects of consumption patterns, consumer abuse or under-processing, if necessary.
• To provide an estimate of pathogen levels in a portion of product at the point of consumption, with a clear explanation of uncertainty and variability.
• To outline the geographical distribution and quantity of the product sharing a common hazard and risk.
An assessment may be triggered by the need to understand what happens in the
supply chain and especially at the risk-determining steps, or to establish whether
changes in processing or materials may increase or decrease risk. The technique
may be used to determine likely sources, if problems have been found (e.g. food-
poisoning or premature spoilage). Timing, scope and detail should be
determined by the requirements for risk management (Oscar, 1998) and,
wherever possible, relevant information from previous assessments or records
should be used to minimise the resources needed.
6.5 Building up supply chain data for an exposure assessment
Microbiological exposure assessments need to collect and analyse data on each
stage where pathogens or hazards can be introduced, increased, reduced or
eliminated, to assess their cumulative impact up to the moment a product is
consumed. To be effective, the assessment process has to be shaped by risk
managers so that the output addresses their needs and concerns (Schlundt, 2000),
but they should not affect its scientific integrity. This principle is equally true for
hazard analysis; where the HACCP study team should impartially select the data
used to meet the needs of the HACCP study implementation team. Data on
microbial levels and process conditions over a representative period of time or
number of batches can be used to improve the reliability of a risk estimate.
Exposure assessment must consider the effects of food handling, patterns of
consumption and intake quantities on pathogen levels at consumption and the
sensitivity of consumers to the pathogen. There are differences in agent–host
and food–host interactions and in minimum hazardous dose between infectious
agents (e.g. salmonella) and toxins (e.g. Staphylococcus toxin) and different
groups of consumers, such as young, old, etc. When an agent is infectious, harm
is caused when it is ingested and grows in the body to cause infection. In
principle, one cell may be sufficient to cause an infection, but in reality higher
doses are usually needed. For example the hazardous dose of E. coli O157 may
be 1–10 cells/g, while Listeria monocytogenes is a severe hazard at higher levels (>10 000/g of food) to a small number of sensitive consumers (the young, old, pregnant or immunocompromised, YOPIs), whereas healthy consumers may
consume even higher levels without ill-effects. Where the agent is a toxin, it has
to be pre-formed in the food at a sufficiently high concentration to cause illness.
Some of the bacilli (e.g. Bacillus cereus) may cause food poisoning by
producing a toxin during spore germination and growth in the gut; in this case
sufficient numbers need to be ingested to cause a harmful level of toxin to be
produced (McElroy et al., 1999). An exposure assessment should stop at
producing an estimate of the level of the hazard in a portion at the point of
consumption; analysis of the effects of pathogen level in the portion on the
consumer is the function of hazard characterisation.
Exposure assessments should also specify the portion size or ingredient
contribution to the portion, often specified in the use instructions for products. If
possible this should be related to the portion size previously implicated in illness.
The unit of exposure is typically a per meal portion. The impact of these
differences on the dose consumed may not be as large as those attributable to
other factors (such as decontamination treatments or stages allowing growth) in
the exposure assessment. For example, changes in pathogen levels prior to consumption, and the minimum hazardous dose, can differ on a log scale, whereas differences in portion size are linear (from a few grams to a few hundred grams).
Therefore if the hazard is an infectious pathogen, then the effect of portion size
may be negligible, but if the hazard is the toxin produced by the microorganism
and response to the toxin is linear, then portion size may be important.
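Because the ingested dose is simply the concentration in the food multiplied by the portion size, a short worked sketch shows why a log-scale change in concentration normally outweighs a linear change in portion size; all numbers below are illustrative assumptions.

```python
def dose(cells_per_g, portion_g):
    """Ingested dose = concentration in the food x portion size."""
    return cells_per_g * portion_g

# Illustrative assumptions: a 1 log rise in concentration versus a doubled portion.
baseline = dose(cells_per_g=10, portion_g=50)         # 500 cells
bigger_portion = dose(cells_per_g=10, portion_g=100)  # 1 000 cells (2x baseline)
one_log_higher = dose(cells_per_g=100, portion_g=50)  # 5 000 cells (10x baseline)
print(baseline, bigger_portion, one_log_higher)
```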
Food consumption patterns, food preparation and consumption practices (e.g.
cooking habits and/or cooking times and temperatures, home storage conditions,
including abuse; typical serving sizes and habits plus any seasonal, regional or
cultural differences relevant to the product) are part of the exposure assessment.
Information on consumer behaviour is needed where it is likely to influence the
dose of pathogen ingested. Where possible, exposure assessments should include
demographic and social information to identify consumers who may be more
susceptible to infection or illness (e.g. because of age distribution or increased
sensitivity ¨C infants, children, pregnant women, elderly or immunocompromised
populations). Such groups may also have different eating habits and levels of
exposure (Gerba et al., 1996). When risk assessments are conducted for
international trade purposes, differences in exposure data between countries and
regions and for different populations must be considered.
Mixing or blending raw materials or ingredients can result in contamination
of larger volumes of material and widen the distribution of a pathogen (e.g. into
other products or to different consumers). If there is no growth, pathogen levels
may be diluted below a hazardous level or there may be a decrease in the
number of harmful portions, if contaminated and uncontaminated foods are
mixed. Unlike chemical risk assessments that deal with relatively static hazard
levels, microbiological risk assessments are usually concerned with dynamic
hazards (e.g. changing numbers of microbial pathogens in a food from
production to consumption). For this reason all of the wide mix of factors that
can influence pathogen levels need to be covered by the exposure assessment
and will need updating, as new information becomes available or changes are
made in the supply chain.
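The dilution effect of blending can be illustrated with a simple mass balance; the lot sizes and counts in this sketch are assumptions used only to show the calculation.

```python
def blend_concentration(lots):
    """Concentration after blending = total cells / total mass.
    lots: list of (mass_kg, cells_per_g) tuples."""
    total_cells = sum(mass_kg * 1000 * cells_per_g for mass_kg, cells_per_g in lots)
    total_mass_g = sum(mass_kg * 1000 for mass_kg, _ in lots)
    return total_cells / total_mass_g

# One contaminated lot blended with nine clean lots (illustrative values only).
mixed = blend_concentration([(100, 50)] + [(100, 0)] * 9)
print(f"Level after blending: {mixed:.1f} cells/g")  # 5.0 cells/g across the whole batch
```

Although the concentration falls, the pathogen is now spread through the whole blended batch, which is the widening of distribution described above.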
For each stage in the supply chain, information on temperatures, times and
other key parameters should be collected, recorded and structured sequentially
to give an overall model of the supply chain (Table 6.3). A flow diagram may be
a useful starting point to indicate the scope of the assessment and to define the
steps involved in the supply chain. The degree of sophistication will depend on
the detail needed to describe the fate of the hazards. Simple models (e.g. Fig.
6.1) can describe pathways of exposure and different routes of control and
contamination, but more complex representations may be needed to describe
other important parameters (e.g. growth or survival of pathogens in residual
material, in the manufacturing environment or re-contamination) at risk-
determining steps. If exposure assessments are used to investigate food-
poisoning outbreaks, information should always be linked to the production
circumstances involved and, if sample integrity can be assured, the results of any
microbiological examination of the foods involved should be included.
6.6 Sources of information
Data can be obtained from many sources including:
• A review of quality assurance and process data.
• On-site inspection and sampling.
• Process and product design information.
• Equipment and ingredient specifications.
• Records of raw material quality.
• Specific analyses of in-process materials or finished products.
From this information, pathogen inputs and the effects of processing conditions
can be put into the supply chain model. Additional data may be derived from the
analysis of samples taken along the line, or by using predictive models in
conjunction with process data (Betts and Earnshaw, 1998). Pathogens usually
have a relatively low prevalence in raw materials and because of the limitations
of current methods of detection, their distribution is not well understood. Many
factors can affect pathogen inputs; usually there are not sufficient quantitative
data to describe what happens during harvest or before the factory gate, and
modelling of changes in pathogen numbers is limited to processing and
distribution stages where data are available (Ross et al., 2000). Where models
are used to predict levels, the possibility that previous processing may alter
pathogen resistance or growth rates should be considered. Surveys of
performance data (e.g. in the process or distribution chain), storage or challenge
tests, or laboratory studies (e.g. laboratory assessment of D values for raw
material contaminants or survivors of the heat treatment) may provide additional
data on likely changes.
Time and resources may limit the process, formulation or complaint data available to an ‘exposure assessment’ team, and scientific journals may not provide useful data on microbial behaviour under the relevant conditions. Exposure assessments should at least be based on a coherent set of descriptive or point values (e.g. high or low, or a temperature or pH value) for a line manufacturing a product. At a more sophisticated level, data should indicate material or process variability (e.g. minimum and maximum values) and
distributions defining variability should be built up if sufficient data are
available. Typical variables include the level of contamination in a raw material
or finished product and temperatures at the various stages in production. Where
sufficient data are not available to indicate the range of values, worst-case point
values can be used to create best, average and worst scenarios. If constraints are
so severe that reliable data cannot be obtained, then the study should be stopped
to prevent presentation of an invalid assessment.
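Where only minimum, typical and maximum point values can be obtained, the best, average and worst scenarios mentioned above can be assembled mechanically; the sketch below uses assumed values for the initial level, a heat step and storage growth.

```python
# Illustrative point values (assumed, not taken from any assessment in this chapter).
values = {
    "initial log10 cfu/g":          {"best": 0.0, "average": 1.0, "worst": 3.0},
    "log10 change from heating":    {"best": -7.0, "average": -6.0, "worst": -4.0},
    "log10 growth during storage":  {"best": 0.0, "average": 1.0, "worst": 3.0},
}

for scenario in ("best", "average", "worst"):
    final = sum(v[scenario] for v in values.values())
    print(f"{scenario:7s} case: {final:+.1f} log10 cfu/g at consumption")
```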
6.6.1 Variability, uncertainty and data quality
Data collection should, as far as possible, highlight variability (e.g. raw material
quality or process conditions) and minimise uncertainty (Huss et al., 2000).
Characterising variability is most important if a decontamination step has only
marginal capability to ensure pathogen absence or prevent growth at its ‘upper’
and ‘average’ values, because lower end values, or a higher challenge, may lead
to pathogen presence. Sometimes to improve accuracy or take account of
variability, data sets from similar lines, or different times, may be combined to
provide suitable information (e.g. if a problem that occurred some time ago is
being examined). Where this is done, the procedures and techniques used to
provide a composite view should be explained. If variability leads to a high risk
being estimated, then better control of the process, lower pathogen numbers in
raw materials or more predictable consumer use may reduce the risk. If a
product is assumed to be risky as a result of uncertainty, then focus on
improving knowledge may apparently reduce risk. However, if control actions
are urgently needed, and reliable information is not available, then a cautious
decision might be justified on the basis that more information could allow less
severe, or restrictive, risk management measures in the future (Schlundt, 1999).
If an exposure assessment shows that there are consistently high risks (e.g.
survival of pathogens in products cooked according to their specification) or that
process variability leads to periodic contamination of products, then the
effectiveness of control measures should also be reviewed based on exposure
assessment data.
The origins and importance of uncertainty should always be noted with the
best description possible (Maarten and Nauta, 2000; Nauta, 2000). It may stem
from lack of knowledge or doubts about the underlying science relating to a
hazard (e.g. the effect of pre-processing conditions on the heat resistance of a
pathogen at a process stage), an inability to characterise the effects of processing
(e.g. the microbiological effects of heating a dry ingredient or routes of product
recontamination) or the preservative effects of product formulations (e.g. the
ability of combinations of pH and salt to prevent pathogen growth). Uncertainty
and the dependence on assumptions can be reduced by better knowledge, but on
the other hand more knowledge may sometimes produce only a better
description of variability.
Variability and uncertainty can be accounted for by putting different point
estimates into scenarios or using high and low boundaries based on observed
variability. To fail safe, worst-case estimates should be used and any
assumptions identified, but if this is done for all process stages and material
qualities, then unrealistic (pessimistic) assessments will be obtained. Monte
Carlo simulations (Cassin et al., 1998b; McElroy et al., 1999; Braud et al., 2000;
Smout et al., 2000; Giannakourou et al., 2001) are used to provide a more
balanced view, based on the probabilities of encountering combinations of
adverse values or conditions. In any case, explanation of the origin, accuracy
and variability of data within the assessment process will play an important role
in establishing trust in its outputs. Credibility will be improved if stakeholders
are involved in and understand how all the information available has been
obtained and used to make the supply chain model. However they should not be
involved in the exposure assessment process itself, particularly the admission or
exclusion of data, the management of uncertainty (lack of information) or
accounting for variability (day-to-day changes in parameter values, e.g. chiller
temperatures), as this may lead to bias.
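A minimal Monte Carlo sketch of the kind referred to above is given below: each iteration samples an initial level, a process reduction and a storage growth from assumed distributions, so the output is a distribution of log10 levels rather than a single worst-case figure. The distribution choices and parameter values are illustrative assumptions only.

```python
import random

def simulate_one():
    """One random realisation of the supply chain (all parameters assumed)."""
    initial = random.normalvariate(1.0, 0.5)           # log10 cfu/g in raw material
    heat_reduction = random.uniform(5.0, 7.0)           # log10 reduction by the heat step
    storage_growth = random.triangular(0.0, 2.0, 0.5)   # log10 growth during storage
    return initial - heat_reduction + storage_growth

random.seed(1)
results = sorted(simulate_one() for _ in range(10_000))
median = results[len(results) // 2]
p95 = results[int(0.95 * len(results))]
print(f"median {median:.1f}, 95th percentile {p95:.1f} log10 cfu/g at consumption")
```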
A risk assessor should not use an exposure assessment based on poor-quality
or limited information. A usable output has to be based on reliable data that may
be descriptive (e.g. high, medium or low) or show a range (e.g. high, average or
low) or contain point-estimates (i.e. deterministic: the average level of a pathogen in a product, the highest level of contamination of a raw material or an average portion size). The most useful outputs are based on a range of data representing variability and showing it as defined distributions or probabilities (i.e. stochastic). The main difference is in the amount of data required (e.g. 65 °C, 65 ± 5 °C or mean 65 °C and SD 0.6 °C) and the reliability of the risk estimates.
Point-estimates can be derived from single best or worst case values or averages.
If there are more data available, distributions can be used to describe the range
of values at risk-determining steps (e.g. frequency or probability) and produce a
range of exposure levels (Duffy and Schaffner, 2001). Ranges of values can also
be used if facts are sketchy or there are high levels of variability likely to result
in different pathogen levels in the product.
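The three levels of detail contrasted above (a point estimate, a range, or a mean with its spread, e.g. 65 °C, 65 ± 5 °C or mean 65 °C and SD 0.6 °C) can all be produced from the same raw measurements; a minimal sketch with assumed temperature readings:

```python
import statistics

# Assumed process temperature readings (degrees C), for illustration only.
readings = [64.2, 65.1, 65.8, 64.9, 65.3, 66.0, 64.7]

point_estimate = statistics.mean(readings)
value_range = (min(readings), max(readings))
spread = statistics.stdev(readings)
print(f"point estimate {point_estimate:.1f} C, range {value_range}, SD {spread:.1f} C")
```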
Assessments may identify knowledge gaps: having been found, the team
should determine their influence on the validity or accuracy of the risk estimate
(e.g. lack of knowledge of process conditions or pathogen survival). If the team
has to use non-representative or amalgamated data to reach a conclusion, this
should be clearly shown and any influence from high or low values made
explicit. The overall validity of data collection, sampling and testing procedures
should also be carefully examined, because these may directly affect an
assessment. In some cases, poor methodology (such as counting pathogens from process stages causing injury without using resuscitation techniques: Mafart, 2000) may lead to inaccurate data and this should be noted in the assessment
document, with a rationale if it is excluded. Using expert opinion is one way to
reduce uncertainty in an exposure assessment. Such opinion is not evidence in
itself, but inference based on suitable evidence. Similar ¡®rules¡¯ should be used
for acquiring and using data (e.g. from records or laboratory experiments) and
expert judgement.
6.7 Types of data used in an exposure assessment
6.7.1 Scope
The risk manager’s problem and the information available will determine the
scope of the exposure assessment (i.e. from farm or factory gate to fork). To do
this the initial phase has to develop a clear and understandable statement of the
scope and context of the problem (a risk profile), so that data collection and
output meet the needs of the risk manager. If the goal is to estimate the risks
from a food–pathogen combination, then the exposure assessment needs data
and information about the supply chain and consumer use right up to
consumption (Brown et al., 1998). Data on multi-stage processes must take
account of the conditions and variations at each stage, any changes caused,
estimating the inputs and outputs at each stage and the cumulative changes that
will determine the level of pathogens or toxin in the finished product. If the end-
point of the assessment is consumption, variations in hazard levels after
consumer handling should also be estimated, because these are likely to be
affected by temperatures and times during storage and preparation.
Exposure assessments are most robust if they focus on one pathogen in one
food. For example, published risk assessments include salmonella in eggs or E.
coli in ground beef (Cassin et al., 1998a and see Table 6.1). Such assessments
have a narrow focus; so that the information they produce may have limited
value in gaining insight into factors increasing risk or ways of reducing it in
other plants. Exposure assessments providing the most useful information enable
a risk manager dealing with one plant or product clearly to identify the influence
of the risk-determining stages in the supply chain (Schlundt, 1999). If this
requirement cannot be met, then the technique should not be used and input from
experts should be relied on.
6.7.2 Core supply chain information
The overall supply chain model should identify the hazard inputs and control
measures that determine hazards in the product. It should be built-up stage by
stage, and the scope should determine where and how data are collected. A
decision has to be made before data collection starts about whether the aim is a
single point representation of each stage, or a representation of the range of
conditions at each stage; and therefore the different levels likely to be
encountered in the final product when combinations of contamination levels and
treatments are put together (e.g. 10^6/g in a raw material with a process giving a 6 log reduction against 10/g in a raw material and a 3 log reduction). This aim will
determine the extent of data collection (e.g. the period over which data should be
collected, the number of measurements and their scope in the supply chain, from
the field or the factory gate to release of the product or consumption). Collected
data should also indicate the quantity of food likely to be produced under the
conditions studied and its distribution among consumers. The more severe the
hazard, the more attention should be paid to collecting reliable data about
process conditions, the effectiveness of control measures and their variability.
Sources of information can be diverse, but the team should focus on collecting
and analysing direct observations, audit information, QA or process records and
experimental data representing the supply chain. For the risk-determining steps,
additional information on QA sampling plans, analytical test methods and any
procedures used for validating control measures should also be included, as these
may allow the quality of the data to be evaluated. Data from quality monitoring
(e.g. analytical or microbiology), processing or consumer complaints is usually
held by food companies, and may also be held by regulatory agencies.
Data on the initial quality of ingredients and raw materials (e.g. level of
contamination and its variability) and changes in pathogen level from the field
(or raw material reception), through processing, storage and distribution until the
product is prepared for consumption need to be estimated. Consumer use (e.g.
ready to eat, ambient-stable or cook thoroughly) should be used as a basis for
interpreting the significance of pathogen inputs and control measures during
manufacture, and guide data collection about the supply chain (Carlin et al.,
2000). For example, pathogen presence at low levels in a raw material intended
for cooking (pasteurisation) before consumption is not a severe hazard (Cassin et
al., 1998a). Collecting data on contamination routes for infectious pathogens
will not be critical, if specified heating conditions immediately prior to
consumption will make the product safe. However, data on cooking and its
variability are essential. On the other hand, information on the possible presence
of similar pathogens in a ready-to-eat product is essential; data on decontamination and re-contamination are needed to produce a useful risk
estimate. Where the process steps are designed to cause specific changes in
numbers of pathogens (e.g. heating) or restrict their access to the product (e.g.
primary packaging), then additional data may be sought to validate effects. This
data may come from company QA or process information (e.g. pasteuriser
performance or sample examination), from experimental data (e.g. heating
studies on a defined microbiological challenge in the product) or literature
sources (e.g. pathogen heat sensitivity under defined conditions).
6.7.3 Microbiological information
The characteristics of the identified hazard will determine its prevalence in raw
materials and responses to process and storage conditions. Data on key
physiological properties of the chosen pathogen(s) should be obtained (e.g.
growth range, stress tolerance or heat resistance). What is needed will depend on
the processing and preservation systems used. Quality assurance records or
specific investigations of in-process material or equipment hygiene should be
used to show pathogen numbers and identify realistic inputs from raw materials,
factory, equipment and personnel.
If the hazard has been identified and process conditions and product factors
are known, kinetic models can be used to predict growth or death during
processing and storage (McDonald and Da-Wen-Sun, 1999; van Gerwen and
Zwietering, 1998; Stewart et al., 2001; Tienungoon et al., 2000). Kinetic models
require quantitative data on times, temperatures, etc. to predict the fate of
pathogens. Within limitations, they can predict how microbial numbers change in
response to time, temperature and other variables (e.g. pH and salt level) at the
risk-determining steps up to the time of consumption. Worst-case estimates based
on the fastest growing strains likely to be present can be used as the default by
published models. A sigmoid growth curve (the Gompertz function) is assumed
in many of these growth models: it shows growth rate monotonically increasing
up to a maximum and then falling to zero, as the bacterial population reaches a
steady state at its maximum level. Using the maximum rate of growth is fail-safe,
because this rate is often approached rapidly and does not decline significantly
until conditions change, or the maximum population is reached. However, this
type of model may not be appropriate if gradual changes stop growth and induce a
survival phase. Under these conditions actual measurements of numbers will
provide a more realistic basis for the exposure assessment.
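Many published growth models use a reparameterised form of the Gompertz curve described above. The sketch below implements one common reparameterisation (lag time, maximum growth rate and maximum log10 increase, in the style used by Zwietering and co-workers); the parameter values are purely illustrative and are not taken from any model cited here.

```python
import math

def gompertz_log10_increase(t_hours, lag_h, mu_max, max_increase):
    """Modified (reparameterised) Gompertz curve.
    Returns the increase in log10 count at time t_hours.
    lag_h: lag time (h); mu_max: maximum growth rate (log10 units/h);
    max_increase: maximum log10 increase reached in the stationary phase."""
    return max_increase * math.exp(
        -math.exp((mu_max * math.e / max_increase) * (lag_h - t_hours) + 1.0)
    )

# Illustrative parameters only, not measured values.
for t in (0, 6, 12, 24, 48):
    print(t, round(gompertz_log10_increase(t, lag_h=4.0, mu_max=0.3, max_increase=6.0), 2))
```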
Product temperatures and changes in temperature will often depend on the type
of product, and its interactions with equipment and with heating (Braud et al.,
2000) or cooling media. When temperatures are selected for modelling, account
should be taken of the fact that product temperatures do not equilibrate instantly,
but gradually, at each processing stage. This can lead to significant differences
between process conditions and the heating or cooling treatment that products and
pathogens receive. A practical approach to estimating process effects is to start
with single temperature inactivation (e.g. D and z, where D is the time at any specified lethal temperature required to inactivate a population of spores or vegetative cells by one logarithm cycle or 90% – mathematically D is equal to the reciprocal of the slope of the survivor curve; and z is the number of Celsius degrees for the thermal destruction curve to traverse one logarithmic cycle – mathematically z is equal to the reciprocal of the slope of the thermal death curve)
or growth models and then progress to, or build, more complex models to cover
additional factors such as survival and re-contamination or heat transfer, if they
are important. Hence predictive microbiological models are useful sub-models
within the overall supply chain model (see Zwietering and van Gerwen, 2000).
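The D- and z-value approach can be expressed as a small calculation: the D-value at the process temperature is obtained from a reference D-value via z, and the log reduction is the holding time divided by that D-value. The sketch below uses hypothetical values (D60 = 5 min, z = 6 °C) chosen only for illustration.

```python
def d_value_at(temp_c, d_ref_min, temp_ref_c, z_c):
    """D-value (min) at temp_c, derived from a reference D-value and a z-value."""
    return d_ref_min * 10 ** ((temp_ref_c - temp_c) / z_c)

def log_reduction(hold_time_min, temp_c, d_ref_min, temp_ref_c, z_c):
    """Log10 reduction achieved by holding at temp_c for hold_time_min."""
    return hold_time_min / d_value_at(temp_c, d_ref_min, temp_ref_c, z_c)

# Illustrative only: D60 = 5 min and z = 6 C for a hypothetical vegetative pathogen.
print(round(log_reduction(hold_time_min=0.5, temp_c=70.0,
                          d_ref_min=5.0, temp_ref_c=60.0, z_c=6.0), 1))
```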
6.7.4 Additional information
In addition to in-house information on the process and materials in question, the
team should collect relevant published literature and consult experts who may
have access to additional sources of information. These can include suppliers,
food processing personnel, microbiologists, food scientists, epidemiologists,
health experts, nutritionists, research institutes and public health authorities.
Consumer organisations can be another source of information about consumer
practices, and many food trade associations also have data about food/
commodity consumption rates. It may also be useful to compare the collected
data with historical or outside experience. Animal health data may be relevant
for meat and poultry products (Bryan and Doyle, 1995) likely to be con-
taminated with zoonotic pathogens. Information on routes for infection and
contributory events may be extracted from well-conducted outbreak investiga-
tions; however, usually quantitative exposure information is not collected, or is
very limited. Nevertheless, information from reconstruction of the chain of
events that led to an outbreak can be useful in identifying realistic or potential
scenarios of exposure (European Commission, 2000).
6.8 The output of an exposure assessment
Risk assessors use exposure assessments to produce risk estimates of the
probability and consequences of exposure to a food poisoning hazard, depending
on the quality of information available. Assessments will have different levels of
reliability and should always explain levels of uncertainty and variability in the
supply chain covered and any assumptions made. The output of an exposure
assessment should communicate pathogen or toxin levels in the food at the time of
consumption and may also show how they are likely to vary because of handling,
raw material quality, processing and preparation. Whatever level of detail is used,
the output has to be understandable to its users; if it is not, then the benefits of any
approach will be lost. Table 6.4 illustrates examples of exposure assessments.
Assessments can be characterised as informal, qualitative (e.g. describing risks as serious or life threatening) or quantitative (e.g. estimating risks as one in a million potentially toxic or a defect rate of 1 in 10 million packs).
Table 6.4 Examples of exposure assessments for infectious and toxin-producing pathogens in various foodstuffs
Column A: Staphylococcus aureus (toxin-producing pathogen) in cooked ham
Column B: Listeria monocytogenes (infectious pathogen) in cooked ham
Column C: Salmonella (infectious pathogen) in a fully cooked frozen product
Column D: Salmonella (infectious pathogen) in frozen raw poultry
Column E: Staphylococcus aureus (toxigenic pathogen) in a canned low-acid food (hazard from wet pack handling)

Occurrence of hazardous microorganisms in the raw materials
What is the frequency of contamination of the raw materials making up the product?
A: Very low frequency: 0–1%. B: Frequent. C: Negligible: 0–0.1%. D: Frequent. E: Negligible: 0–0.1%.
What is the level of contamination found in the raw materials?
A: 0–10 cells/g. B: 0–10^4 cells/g. C: 0–100 cells/g. D: 0–30% contaminated, 0–10 000 cells/g. E: 0–10 cells/g.

Effect of processing/decontamination
What is the effect of storage before processing on the level of the hazard?
A: Survival. B: Some growth. C: Survival. D: Survival. E: Survival.
What is the intended effect of all processing and any decontamination stages on the level of the microorganism?
A: Complete inactivation – at least a 6 log reduction. B: Complete inactivation – at least a 6 log reduction. C: Complete inactivation – at least a 6 log reduction. D: Survival. E: Partial inactivation.

Occurrence of toxin (if the hazardous microorganism is toxin-producing)
What is the likelihood of toxin presence if the microorganism can produce toxin and is present in the raw materials, product or process environment?
A: Low frequency: 1–10%. E: Negligible: 0–0.1%.

Re-contamination after processing or decontamination
What is the frequency of re-contamination of the product in the factory after processing or decontamination, so that the hazard is present in the final product?
A: Very low frequency: 0–1%. B: Very low frequency: 0–1%. C: Negligible: 0–0.1%. D: High frequency: 10–50%. E: Negligible: 0–0.1%.
What is the likely level of re-contamination after processing or decontamination?
A: 0–10 000 cells/g. B: 0–10 cells/g. C: 0–10 cells/g. D: 0–10 000 cells/g. E: 0–10 000 cells/g.
What is the variability of recontamination?
A: Very low, good quality data on materials over a period of months or tens of intakes. B: Low, fair quality data on similar materials from QA data. C: Very low, good quality data on materials over a period of months or tens of intakes. D: Medium, medium quality general information on supplier assurance concerning levels. E: Very high, no data, opinion.

Packaging
Is the product put in its primary packaging before the decontamination step?
A: No. B: No. C: No. D: N/A. E: Yes.
If the answer is no, what is the frequency of recontamination of decontaminated product before packaging?
A: Negligible: 0–0.1%. B: Negligible: 0–0.1%. C: Negligible: 0–0.1%. D: N/A, but high risk of cross contamination during domestic use. E: Very low: 0–0.1%.
What is the level of recontamination after packaging?
A: 0–10 000 cells/g. B: 0–10 cells/g. C: 0–10 cells/g. D: N/A. E: 0–10 000 cells/g.

Effect of product/pack storage
How does the level of the hazard change during storage (storage according to the usage instructions)?
A: Survival. B: Some growth: <3 log increase in numbers. C: Survival. D: Survival. E: Growth, large increase in numbers.

Effect of product/pack storage on toxigenesis
What is the likelihood of toxigenesis in the product?
A: Very low frequency. E: Negligible.
What is the effect of storage conditions on toxigenesis (if the level of the microorganism changes and it is toxin-producing)?
A: No change during chilled storage, survival. E: No change to rapid toxin production.

Consumer use
Is the product intended as single use (S) or multi-use (M), where it will be stored after opening?
A: M, fridge or ambient. B: M, fridge or ambient. C: S. D: S. E: Usually S.
If the answer is M, the product is multi-use in a domestic or food service application and the usage and preparation section should be completed.

The effect of open shelf-life on the microbial hazard
What is the effect of open shelf-life storage on the level of the pathogen?
A: No change (at chill). B: Some growth. C: No change (at chill). D: No change (at chill). E: Growth.
What is the variability of re-contamination during open shelf-life?
A: Low. B: Low. C: N/A. D: High. E: N/A.

The effect of open shelf-life on toxigenesis
What is the likelihood of growth and toxin production during open shelf-life?
A: Slight increase, low frequency of toxin production. E: Unchanged, low frequency.
What is the effect of the intended storage conditions on toxigenesis?
A: Unchanged. E: Large increase.

The effect of usage and preparation on hazards
What is the effect of customer or food service preparation and usage on the level of hazard?
A: No change, survival or growth and toxigenesis. B: Some growth: <3 log increase in numbers. C: Complete inactivation, if used according to instructions. D: Complete inactivation, if used according to instructions. E: No change or increase.

The effect of usage and preparation on toxigenesis
Is toxin likely to be present when the product is consumed?
A: Very low frequency. E: Very low frequency.
What is the effect of usage and preparation on toxin level and production?
A: Slight increase. E: Unchanged.

Food intake by a consumer
What is the likely quantity of the food consumed by a customer on a specified occasion or over a period of time?
A: Low intake 50–100 g. B: Low intake 50–100 g. C: Low intake 50–100 g. D: Low intake 50–100 g. E: High intake 100–200 g.

Risk estimate
Presentation of the risk estimate in the way most informative to the user or risk manager.
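Qualitative answers of the kind collected in Table 6.4 can be held as a simple structured record so that every product/pathogen pair is asked the same questions; the sketch below paraphrases the cooked ham/Staphylococcus aureus column, and the field names and dictionary layout are just one illustrative choice.

```python
# A minimal record structure for one column of a Table 6.4-style qualitative
# assessment (entries paraphrase the cooked ham / Staphylococcus aureus column;
# the dictionary layout itself is only an illustrative choice).
assessment = {
    "hazard": "Staphylococcus aureus (toxin-producing)",
    "product": "cooked ham",
    "raw material contamination frequency": "very low (0-1%)",
    "raw material level": "0-10 cells/g",
    "effect of processing": "complete inactivation, at least 6 log reduction",
    "recontamination frequency after processing": "very low (0-1%)",
    "recontamination level": "0-10 000 cells/g",
    "consumer use": "multi-use, fridge or ambient after opening",
    "intake per occasion": "50-100 g",
}

for question, answer in assessment.items():
    print(f"{question}: {answer}")
```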
Qualitative assessments are the lowest level of detail, providing a systematic description of
a problem and its contributory factors. Quantitative assessments include
mathematical analyses of relevant data (van Gerwen et al., 2000; Hoornstra and
Notermans, 2001) and provide numerical estimates of risk. They should only be
done if there are sufficient resources for data collection, the attendant
calculations and a substantial amount of scientifically respectable data. When
there is less data, effective guidance for risk managers may still be provided by
a description and ranking (e.g. serious, realistic, unrealistic) of the hazards and
a detailed description of the risk determining steps. When data, time and/or
other resources are limited, the best option will usually be to conduct a
qualitative risk assessment, as described in the examples (Table 6.4). In fact,
such studies are often undertaken for preliminary evaluation of a food safety
issue to determine if a more detailed analysis is warranted (Tompkin, 1999).
Such preliminary studies must at least identify the hazards and parts of the food
chain to consumption that require further detailed examination during a
quantitative exposure assessment.
If reliable quantitative data are available, exposure assessors may use
quantitative terms to describe risk. If quantitative data are not available, a
qualitative approach is likely to be more meaningful. Data from an exposure
assessment may be used under three headings:
1 Identification of the hazard covered (e.g. salmonella) with an overall supply
chain model showing the origin(s), routes and effects of processing,
handling and preparation on the hazard as defined by the scope. The
probability of contamination or presence may be rated between negligible,
high or certain, or may be expressed numerically. Based on changes in the
hazard level, risk-determining steps in the supply chain can be shown.
2 An estimate (e.g. 1 in 10^6 portions will contain 100 cells) or a description (e.g. high) of the probability that a portion will contain a harmful dose for the designated consumers (Buchanan et al., 2000; see the sketch after this list). To do this, the critical level required for disease and the dose in a portion need to be known (or assumed). In many cases exact values will not be available and it may be
appropriate simply to state that one cell in a portion causes illness. If
information on critical exposure levels is not available, risk assessors may
use the exposure assessment to describe processing and food composition
factors that are likely to influence the level of exposure.
3 The quantity of food likely to be covered by the exposure assessment and
hence the extent of the risk arising because of the level and distribution of
the hazard. Factors that influence distribution of the disease agent and its
wider spread are important for controlling the impact of disease. An
exposure assessment should indicate the distribution of significant doses
from different uses of a contaminated raw material.
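As an illustration of the second heading, the probability that a portion contains a harmful dose can be estimated by sampling an assumed prevalence, level distribution and portion size; everything in the sketch below (prevalence, distributions, critical dose) is an assumption for demonstration, not data from any published assessment.

```python
import random

def fraction_of_portions_above(critical_dose, n_portions=100_000):
    """Estimate the fraction of portions whose dose exceeds critical_dose.
    Prevalence, level and portion-size distributions are assumed for illustration."""
    exceed = 0
    random.seed(2)
    for _ in range(n_portions):
        if random.random() > 0.001:                   # assumed prevalence: 0.1% of portions contaminated
            continue
        log_level = random.normalvariate(0.0, 1.0)    # assumed log10 cells/g in contaminated food
        portion_g = random.uniform(50, 100)           # assumed portion size (g)
        if (10 ** log_level) * portion_g > critical_dose:
            exceed += 1
    return exceed / n_portions

print(fraction_of_portions_above(critical_dose=100))
```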
6.9 References
BALBUS, J., PARKIN, R. and EMBREY, M. (2000) Susceptibility in microbial risk
assessment: definitions and research needs. Environmental Health
Perspectives 108(9) 901–905.
BETTS, G. and EARNSHAW, R. (1998) Predictive microbiology for evaluating food
quality and safety. Food Review 25(9) 11–13.
BRAUD, L.M., CASTELL-PEREZ, M.E. and MATLOCK, M.D. (2000) Risk-based design
of aseptic processing of heterogeneous food products. Risk Analysis 20(4)
405–412.
BROWN, M.H., DAVIES, K.W., BILLON, C.M., ADAIR, C. and MCCLURE, P.J. (1998)
Quantitative microbiological risk assessment: principles applied to
determining the comparative risk of salmonellosis from chicken products.
Journal of Food Protection 61(11) 1446–1453.
BRYAN, F.L. and DOYLE, M.P. (1995) Health risks and consequences of Salmonella
and Campylobacter jejuni in raw poultry. Journal of Food Protection 58,
326–344.
BUCHANAN, R.L., SMITH, J.L. and LONG, W. (2000) Microbial risk assessment:
dose–response relations and risk characterization. International Journal of
Food Microbiology 58(3) 159–172.
CARLIN, F., GIRARDIN, H., PECK, M.W., STRINGER, S.C., BARKER, G.C., MARTINEZ, A.,
FERNANDEZ, A., FERNANDEZ, P., WAITES, W.M., MOVAHEDI, S., LEUSDEN, F. VAN,
NAUTA, M., MOEZELAAR, R., TORRE, M. DEL and LITMAN, S. (2000)
Research on factors allowing a risk assessment of spore-forming
pathogenic bacteria in cooked chilled foods containing vegetables: a
FAIR collaborative project. International Journal of Food Microbiology
60(2/3) 117–135.
CASSIN, M.H., LAMMERDING, A.M., TODD, E.C.D., ROSS, W. and MCCOLL, R.S.
(1998a) Quantitative risk assessment for Escherichia coli O157:H7 in
ground beef hamburgers. International Journal of Food Microbiology 41
21–44.
CASSIN, M.H., PAOLI, G.M. and LAMMERDING, A.M. (1998b) Simulation modeling
for microbial risk assessment. Journal of Food Protection 61(11) 1560–
1566.
CODEX ALIMENTARIUS COMMITTEE (1999) Principles and Guidelines for the
Conduct of Microbiological Risk Assessment, CAC/GL-30, Geneva.
COLEMAN, M.E. and MARKS, H.M. (1999) Qualitative and quantitative risk
assessment. Food Control 10(4/5) 289–297.
DUFFY, S. and SCHAFFNER, D.W. (2001) Modeling the survival of Escherichia coli
O157:H7 in apple cider using probability distribution functions for
quantitative risk assessment. Journal of Food Protection 64(5) 599–605.
EUROPEAN COMMISSION, HEALTH AND CONSUMER PROTECTION DIRECTORATE
GENERAL (2000) Preliminary Report on Quantitative Risk Assessment on
the Use of the Vertebral Column for the Production of Gelatine and Tallow
(submitted to the Scientific Steering Committee at its meeting of 13–14
April 2000).
FAO/WHO (1995) Application of risk analysis to food standards issues. Report of
the Joint FAO/WHO Expert Consultation. WHO, Geneva. WHO/FNH/
FOS/95.3.
FOEGEDING, P.M. (1997) Driving predictive modelling on a risk assessment path
for enhanced food safety. International Journal of Food Microbiology
36(2–3) 87–95.
GERBA, C.P., ROSE, J.B. and HAAS, C.N. (1996) Sensitive populations: who is at the
greatest risk? International Journal of Food Microbiology 30, 113–123.
GIANNAKOUROU, M.C., KOUTSOUMANIS, K., NYCHAS, G.J.E. and TAOUKIS, P.S.
(2001) Development and assessment of an intelligent shelf life decision
system for quality optimization of the food chill chain. Journal of Food
Protection 64(7) 1051–1057.
GIFFEL, M.C., JONG, P. DE and ZWIETERING, M.H. (1999) Application of predictive
models as a tool in the food industry. New Food 2(2) 38, 40–41.
HOORNSTRA, E. and NOTERMANS, S. (2001) Quantitative microbiological risk
assessment. International Journal of Food Microbiology 66(1/2) 21–29.
HOORNSTRA, E., NORTHOLT, M.D., NOTERMANS, S. and BARENDSZ, A.W. (2001) The
use of quantitative risk assessment in HACCP. Food Control 12(4) 229–
234.
HUSS, H.H., REILLY, A. and BEN-EMBAREK, P.K. (2000) Prevention and control of
hazards in seafood. Food Control 11(2) 149–156.
JOUVE, J.-L., STRINGER, M.F. and BAIRD-PARKER, A.C. (1999) Food safety
management tools. Food Science & Technology Today 13(2) 82–91.
KANG, S.H., KODELL, R.L. and CHEN, J.J. (2000) Incorporating model uncertainties
along with data uncertainties in microbial risk assessment. Regulatory
Toxicology and Pharmacology 32(1) 68–72.
KLAPWIJK, P.M., JOUVE, J.L. and STRINGER, M.F. (2000) Microbiological risk
assessment in Europe: the next decade. International Journal of Food
Microbiology 58(3) 223–230.
KLEER, J. and HILDEBRANDT, G. (2001) Importance of predictive microbiology for
risk minimization in food production processes. I. Model creation, user
programs and validating. Fleischwirtschaft 81(6) 99–103.
LAMMERDING, A.M. (1997) An overview of microbial food safety risk
assessment. Journal of Food Protection 60(11) 1420–1425.
LAMMERDING, A.M. and FAZIL, A. (2000) Hazard identification and exposure
assessment for microbial food safety risk assessment. International
Journal of Food Microbiology 58(3) 147–157.
MAFART, P. (2000) Taking injuries of surviving bacteria into account for
optimising heat treatments. International Journal of Food Microbiology
55(1–3) 175–179.
MCDONALD, K. and SUN, D.-W. (1999) Predictive food microbiology for the
meat industry: a review. International Journal of Food Microbiology 52(1/
2) 1–27.
MCELROY, D.M., JAYKUS, L.A. and FOEGEDING, P.M. (1999) A quantitative risk
assessment for Bacillus cereus emetic disease associated with the
consumption of Chinese-style rice. Journal of Food Safety 19(3) 209–229.
MCNAB, B.W. (1998) A general framework illustrating an approach to quantitative
microbial food safety risk assessment. Journal of Food Protection 61(9)
1216–1228.
MEREDITH, L., LEWIS, R. and HASLUM, M. (2001) Contributory factors to the
spread of contamination in a model kitchen. British Food Journal 103(1)
23–35.
MOY, G.G. (1999) Food safety and globalization of trade: a challenge to the public
health sector. World Food Regulation Review 8(9) 21–24.
NAUTA, M.J. (2000) Separation of uncertainty and variability in quantitative
microbial risk assessment models. International Journal of Food
Microbiology 57(1/2) 9–18.
NORRUNG, B. (2000) Microbiological criteria for Listeria monocytogenes in
foods under special consideration of risk assessment approaches.
International Journal of Food Microbiology 62(3) 217–221.
NOTERMANS, S. and HOORNSTRA, E. (2000) Risk assessment of Listeria
monocytogenes in fish products: some general principles, mechanism of
infection and the use of performance standards to control human exposure.
International Journal of Food Microbiology 62(3) 223–229.
NOTERMANS, S. and TEUNIS, P. (1996) Quantitative risk analysis and the
production of microbiologically safe food: an introduction. International
Journal of Food Microbiology 30(1–2) 3–7.
OSCAR, T.P. (1998) The development of a risk assessment model for use in the
poultry industry. Journal of Food Safety 18 371–381.
RASMUSSEN, B., BORCH, K. and STARK, K.D.C. (2001) Functional modelling as
basis for studying individual and organisational factors – application to
risk analysis of Salmonella in pork. Food Control 12(3) 157–164.
ROSS, T., DALGAARD, P. and TIENUNGOON, S. (2000) Predictive modelling of the
growth and survival of Listeria in fishery products. International Journal
of Food Microbiology 62(3) 231–245.
SCHLUNDT, J. (1999) Principles of food safety risk management. Food Control
10(4/5) 299–302.
SCHLUNDT, J. (2000) Comparison of microbiological risk assessment studies
published. International Journal of Food Microbiology 58(3) 197–202.
SCHOTHORST, M. VAN (1997) Practical approaches to risk assessment. Journal of
Food Protection 60(11) 1439–1443.
SERRA, J.A., DOMENECH, E., ESCRICHE, I. and MARTORELL, S. (1999) Risk
assessment and critical control points from the production perspective.
International Journal of Food Microbiology 46(1) 9–26.
SMOUT, C., LOEY, A.M.L. VAN and HENDRICKX, M.E.G. (2000) Non-uniformity of
lethality in retort processes based on heat distribution and heat penetration
data. Journal of Food Engineering 45(2) 103–110.
STEWART, C.M., COLE, M.B., LEGAN, J.D., SLADE, L., VANDEVEN, M.H. and
SCHAFFNER, D.W. (2001) Modeling the growth boundary of Staphylococcus
aureus for risk assessment purposes. Journal of Food Protection 64(1) 51–
57.
TIENUNGOON, S., RATKOWSKY, D.A., MCMEEKIN, T.A. and ROSS, T. (2000) Growth
limits of Listeria monocytogenes as a function of temperature, pH, NaCl,
and lactic acid. Applied and Environmental Microbiology 66(11) 4979–
4987.
TOMPKIN, R.B. (1999) Food safety and its management. Food Australia 51(12)
628–630.
VAN GERWEN, S.J. and ZWIETERING, M.H. (1998) Growth and inactivation models
to be used in quantitative risk assessments. Journal of Food Protection
61(11) 1541–1549.
VAN GERWEN, S.J., TE GIFFEL, M.C., VAN'T RIET, K., BEUMER, R.R. and ZWIETERING,
M.H. (2000) Stepwise quantitative risk assessment as a tool for
characterisation of microbial food safety. Journal of Applied Microbiology
88 938–951.
VOSE, D.J. (1998) The application of quantitative risk assessment to microbial
food safety. Journal of Food Protection 61(5) 640–648.
VOYSEY, P. (2000) An Introduction to the Practice of Microbiological Risk
Assessment for Food Industry Applications. Guideline No. 28, Campden &
Chorleywood Food Research Association, Chipping Campden.
VOYSEY, P. and BROWN, M. (2000) Microbiological risk assessment: a new
approach to food safety control. International Journal of Food
Microbiology 58(3) 173–179.
WALLS, I. and SCOTT, V.N. (1997) Use of predictive microbiology in microbial
food safety risk assessment. International Journal of Food Microbiology
36(2/3) 97–102.
WHITING, R.C. and BUCHANAN, R.L. (1997) Development of a quantitative risk
assessment model for Salmonella enteritidis in pasteurised liquid eggs.
International Journal of Food Microbiology 36, 111–125.
ZWIETERING, M.H. and VAN GERWEN, S.J.C. (2000) Sensitivity analysis in
quantitative microbial risk assessment. International Journal of Food
Microbiology 58(3) 213–221.
7
Risk characterisation
P. Voysey, K. Jewell and M. Stringer, Campden and Chorleywood
Food Research Association, Chipping Campden
. . . great uncertainties are introduced at every step in the risk assessment
procedure, and risk characterisations should be seen as indicators only
and all the uncertainties carefully spelled out. Risk characterisations, as
they are presently derived, should never be assumed to present accurate
representations of the real situation. (Benford and Tennant, 1997)
7.1 Introduction: key issues in risk characterisation
Risk characterisation is defined as the qualitative and/or quantitative estimation,
including attendant uncertainties, of the probability of occurrence and severity
of known or potential adverse health effects in a population based on hazard
identification, hazard characterisation and exposure assessment (FAO/WHO,
1995; Codex 1999). It represents the integration of the hazard identification,
hazard characterisation and exposure assessment to give a risk estimate. The
information to bring together for a risk estimate may be quantitative and/or
qualitative in nature, however it should be of the best quality, be the most
relevant and the most up to date that is available. This information will never be
completely 'made-to-measure', thus 'expert opinion' and uncertainty will
always have a role in the risk characterisation step, and consequently in
microbiological risk assessment (MRA) as a whole.
A broad range of skills is required to carry out the risk characterisation step,
since information and data of a wide range of types (quantitative and qualitative)
and sources will need to be handled. Some mathematical knowledge (for
example in modelling data), knowledge of the process under consideration, and
microbiological knowledge and expertise are invariably needed. Voysey (2000)
divides risk characterisation into six stages:
1. combining previous MRA steps
2. summarising the risk
3. variability in risk
4. sensitivity analysis
5. uncertainty
6. validation against experience
These will be addressed in turn.
7.1.1 Purpose
This heading can be used to cover the stages of combining previous MRA steps
and summarising the risk described by Voysey (2000). The above definition of
risk characterisation reflects the purpose of this step, which is to bring together
the information derived from the previous MRA steps. These are hazard
identification, exposure assessment (probability of occurrence) and hazard
characterisation (severity of known or potential adverse health effects). This can
then be used to provide an estimate (qualitative or quantitative) of risk to a given
population or sub-population.
The handling of combined factors associated with the food, the process, the
pathogen and the person is complex. Consequently, considerable skill is needed
by the team carrying out the risk characterisation step to ensure that the output
from this step matches the anticipated output as declared in the Statement of
Purpose. It must also be in a format that the risk manager can use to make a
decision based on the findings of the MRA.
7.1.2 Variability
In summarising the risk, some detail will not be included. Consequently, an
'average' or 'overall' risk will not be as applicable to some groups of the
population, or some circumstances, as others will. Variability in factors relating
to the manufacturing process for the food, the characteristics of the food itself,
the pathogen or people, can result in variability in the final risk. A 'sensitivity
analysis' is an important guide in planning assessment of variability.
If variability in a factor has been included in its uncertainty, it will carry
through to uncertainty in the final risk. If variability in a factor has been
identified and carried through the process, the final risk will be dependent on
source factors. For example, this might result in different final risks for different
groups of people, or an expression for final risk which depends on storage life.
In such a case, it is important to report clearly the dependence, and the variation
in risk between different circumstances or groups.
7.1.3 Identification of risk factors
It is possible that the estimated final risk is not substantially affected by credible
variations in the various factors, parameters, assumptions, models, and so on. It
is more likely that some have a larger effect than others. Identification of process
and food factors that have a substantial effect on the final risk (sensitivity
analysis), is an important benefit of risk assessment. It allows effective direction
of risk reduction measures and targeted consideration of the effect of variability.
Assessment of the effect of parameter values, assumptions and models on the
final risk assessment is often the only way of transferring uncertainties in them
through to the final values. Often this can effectively be performed by varying
these assumptions and observing the effect on the final risk assessment.
7.1.4 Uncertainty
The uncertainty expresses the range of values that may credibly be true. It may
be that the best estimate would lead to one decision, but that uncertainties are so
large that a value leading to another decision is quite credible. It is essential to
accompany any indication of risk with an indication of the associated
uncertainty. In principle, it is possible to estimate the final uncertainty by
tracking the uncertainties of all the inputs to the risk assessments, modifying and
combining the uncertainties as the inputs are modified and combined. Often this
is impractical and the effects of input uncertainties on the output must be
estimated by sensitivity analysis.
7.1.5 Validation
It is essential to ensure that the results of the risk assessment accord with
common sense and with experience.
7.2 Risk characterisation requirements
Risk characterisation aims to describe the risk to the relevant population from
consumption of the relevant food. There are real and substantial differences
between members of the population and between food items with consequent
real differences in risk. For microbiological risk assessment in food processing,
the description of such differences is often more important than estimates of
average or typical risk. Judgements on the acceptability of any given risk level
are properly the prerogative of government, which few food processors would
wish to usurp. For a food processor a conclusion that the processed food presents
no greater risk than the equivalent fresh food, or that a process modification does
not increase the risk, is more useful than a numerical estimate of illnesses. That
is, a food processor is more concerned with risk relative to variability (within the
scope) than with absolute risk estimates. An appropriate handling and
description of variability is central to successful risk characterisation. Unless
the risk characterisation includes a well considered and described handling of
variability, the risk estimate will be partial and may often be misleading.
7.2.1 Clear statement of purpose
Hazard identification, hazard characterisation, and exposure assessment collect
and process external information for use principally within the risk assessment.
Risk characterisation integrates their results to give risk estimates (Codex, 1999)
which are the principal outputs from the risk assessment. It is important that
there is a clear understanding of what those outputs should be. Although a clear
statement of purpose is necessary to undertake an appropriate hazard
identification, hazard characterisation, and exposure assessment, failures in
these stages due to inadequate definitions of objectives and required outputs may
not become apparent until the risk characterisation stage. Accordingly, we
associate the requirement for a clear and complete statement of purpose with the
risk characterisation rather than earlier stages. The statement of purpose should
include, inter alia, objectives, scope and required outputs (Codex, 1999;
Voysey, 2000).
Setting objective(s)
Mitchell (2000) states 'At the very beginning . . . you need to work out why you
are doing this particular Risk Assessment. Otherwise you can become easily
confused and the MRA will not do what you want'. Voysey (2000) notes that the
objective would 'generally take the form of an objective to carry out an MRA . . .
within the agreed scope and to an agreed timetable and output'. In hindsight this
seems inadequate. The statement of purpose should include a clear statement of
the need that the risk assessment is intended to address, that is motivations as
well as specific objectives. Without such clarity in purpose, it is difficult to
define appropriate scope, timetables and outputs.
A clear statement of the motivation for the risk assessment is especially
important to a food processor because his available time, information and other
resources are substantially less than for governmental risk assessments, and are
unlikely to allow risk estimates as complete and precise as might reasonably be
wished. A clear objective allows the risk characterisation to make informed
compromises. It should be in sufficient detail to allow the degree of achievement
of objectives to be estimated, and the relative value of different results to be
assessed. The act of recording encourages clarity of thought, and the record
avoids later drift in perceived objectives.
Defining the scope
A careful definition of the scope in the statement of purpose can greatly alleviate
the difficulty of all steps of any risk assessment, but especially of the risk
characterisation step and particularly when risk assessment is applied to food
processing. Variation and uncertainty are closely linked: if there were no
variation there would be no uncertainty, and variation is a reflection of scope.
Governmental risk assessments generally have a broad scope. The population
considered is national or international, the foodstuff is a broad category and the
food chain contains many variants. The broad scope demands substantial
resource in the hazard characterisation and the exposure assessment. The broad
variation demands much of the risk characterisation. The risk to any particular
person from any particular food item depends on the characteristics of the person
and the food item. Incorporating that dependence in a risk estimate is
challenging, the more so as the breadth of the variation increases. Failure to
incorporate such dependence leads to uncertainty in applying typical or average
estimates to specific circumstances.
The scope of microbiological risk assessments in food processing is generally
much more restricted. The population considered may still be national or
international. However, in food processing the objective of the risk assessment is
often to look at differences. The population can often be considered constant for
the alternatives being compared so that the effect on the conclusions of variation
and uncertainty from that source is minimised. For a food processing risk
assessment the foodstuff and food chain are usually very well defined. This
eases the exposure assessment; often directly relevant data can be gathered from
existing records. It also reduces the range of variation to be encompassed in the
risk characterisation. Clearly the scope must include the range of alternatives to
be compared and realistic deviation in other variables. However the variation
implicit in a risk assessment of a particular product subject to a particular
process and distributed by a particular chain is very much less than a typical
governmental risk assessment.
The reduction in scope does limit the range of application of the conclusions.
However, microbiological risk assessments in food processing are generally
conducted within strictly limited time-scales for specific objectives. The scope
should be as narrow as possible, consistent with the objectives.
Form of output
The statement of purpose should include a clear and explicit statement of the
output to be produced. Codex (1999) says that:
The output form and possible output alternatives of the risk assessment
should be defined. Output might, for example, take the form of an
estimate of the prevalence of illness, or an estimate of annual rate
(incidence of human illness per 100,000) or an estimate of the rate of
human illness and severity per eating occurrence.
Voysey (2000) and Mitchell (2000) give similar examples of alternatives.
These examples illustrate different bases for risk estimates but the alternatives
given are by no means a complete list. Other bases for risk estimates are
especially relevant when applying microbiological risk assessment in the food
processing industry, when differences and ratios may be more appropriate; for
example, the proportionate reduction (or increase) in risk consequent on a
process change.
More fundamentally, while these example forms of output specify the basis
of the risk estimate, they do not address the expression of variability. There are
many ways in which variability can be estimated and expressed, and it is
important that the methods chosen are appropriate to the purpose of the risk
assessment. Hypothetically, a process change may result in a lower risk on
average, but, due to increased variability, a substantial increase in the risk
presented by the extreme 10% (say) of the product. The 'average' risk associated
with processed foods is very low so that risk management decisions are likely to
be based on variation from the average. If the manner in which variation will be
assessed and described is not considered and specified at the outset, it is unlikely
to be successfully achieved by the risk characterisation at the end of the risk
assessment.
The specification of form of output should also explicitly address the
expression of uncertainty. It is essential that uncertainty and variability are
clearly distinguished. The implications of the two following risk estimate
statements:
1. process change results in 95% of production presenting a reduced risk; and
2. process change results in 95% confidence of a reduced risk
are very different, and neither statement is adequate. A satisfactory risk estimate
statement will include indications of both variability and uncertainty and is
likely to be much more complex than either of these examples.
Preliminary investigation
A microbiological risk assessment performed by, or for, governmental and
regulatory agencies may be well justified even if the information and data
available do not allow clear conclusions to be drawn. The absence of
information is itself of interest and may serve to direct precautionary measures,
as well as further investigations. In contrast, microbiological risk assessments in
food processing are generally performed with specific objectives. The narrower
scope limits the spin-off value if the assessment does not achieve its objectives.
The clear statement of objectives, form of output, and scope described above are
of little value if the objectives are not met because of limited resources,
information or data. When describing the statement of purpose step Codex
(1999) said that 'the microbiological risk assessment may require a preliminary
investigation phase'. Voysey (2000) recommended an outline MRA as a
preliminary 'Step 0'. Such a preliminary investigation may indicate which
objectives are feasible and what resources may be needed to achieve them. Of
course, one possible outcome of the preliminary investigation is that the MRA is
not feasible and should not be attempted. However, a well executed and
interpreted preliminary MRA will identify information and resources and permit
a clear and complete statement of purpose, including the form of the risk
assessment and the handling of variability and uncertainty. This will allow the
risk characterisation to be carried out with a clear and achievable objective in
view.
7.2.2 Appropriate inputs
The risk characterisation depends directly on the results of the exposure
assessment and hazard characterisation steps, which in turn depend on the
statement of purpose and the hazard identification. If the exposure assessment
and hazard characterisation do not produce the required information then the risk
characterisation cannot succeed. A well designed, executed and interpreted
preliminary investigation should have ensured the availability of appropriate
resources, including information. A clear and complete statement of purpose
should have ensured that the exposure assessment and hazard characterisation
were appropriately directed.
However, risk characterisation is the stage at which the exposure assessment
and hazard characterisation come together, and it is at this stage that
incompatibility will become apparent. Incompatibility can arise through
different approaches to variability. The variability can be broadly divided into
that within the exposed population and that within the food chain. Both hazard
characterisation and exposure assessment are likely to address variability by
sub-setting the scope. Hazard characterisation is likely to divide the exposed
population into sub-groups of differing sensitivity to the hazard, and may divide
the food scope into areas of different associated virulence. Exposure assessment
is likely to divide the exposed population into sub-groups of differing behaviour
and the food scope into areas of differing contamination.
Unless the hazard characterisation and exposure assessment use compatible
sub-setting of the data, the risk characterisation will not be able to combine them
without substantial loss of precision in the description of variability. If hazard
characterisation divides the exposed population according to medical criteria
and exposure assessment divides them by socio-economic grouping, the risk
characterisation will not be able to describe the variation in risk by either
criterion. Expressing this formally, the exposure assessment estimates
exposure as a function of variable factors; hazard characterisation estimates
harm as a function of variable factors. Unless the domains of the two
functions are compatible the risk characterisation will not be able to estimate
risk as a function of variable factors. Microbiological risk assessment in food
processing situations is less likely to suffer from such incompatibilities if the
statement of purpose adequately describes the objective and scope, directing
the exposure assessment and hazard characterisation to common and
appropriate subsetting.
7.2.3 Skills and tools
A successful and valid risk characterisation requires a broad range of skills.
Because this step synthesises the results of the hazard characterisation and the
exposure assessment it requires a deep understanding of the techniques used in
those steps, and their limitations, approaching that required to perform those
steps. Here we consider the skills in the order:
• mathematical, statistical and computing skills
• microbiological and medical skills
• food processing skills
This list is not in order of importance; all the skills are essential. If anything, and
especially for risk assessment in food processing, their importance to the validity
of the conclusions is the reverse, with food processing knowledge and
experience the most important.
Mathematical, statistical and computing skills
Microbiological risk assessments commonly represent and summarise rates and
levels of contamination, the behaviour of the food and the pathogen during
processing and distribution, and the dose–response behaviour, using a range of
empirical and mechanistic mathematical and statistical models and techniques
implemented as computer programs. The availability and usability of such
programs has removed many technical barriers; however, it has encouraged a
'black-box' approach to sophisticated techniques where the results may be
considered without due regard to the assumptions and limitations of the
techniques and the data. An appreciation of these issues at the risk
characterisation step is essential to a proper handling of uncertainty and
requires a good understanding of a broad range of mathematical and statistical
techniques.
Microbiological and medical skills
Not surprisingly, it is critically important that expertise and knowledge is
available on the characteristics of microorganisms. Information will be required
on the growth, survival and death of a wide range of pathogens and in some
cases spoilage organisms. An understanding will be required on how the
different microorganisms respond to a variety of preservation mechanisms and
technologies, and in particular the interaction of preservation methods. The
properties and attributes of microorganisms will be closely linked to the
foodstuffs in which they are found. With the advent of novel molecular
techniques for identifying and differentiating microorganisms it is also
important that members of the team undertaking the MRA embrace this new
knowledge.
In the area of dose–response, a clear and informed view will be necessary on
the definition of various 'illness end-points' if illness is to be attributed
realistically using data on prevalence and distribution patterns. The
implication of microorganisms in illness (whether it be outbreaks or sporadic
cases) needs to be undertaken with great care, and medical expertise in
epidemiological investigative science will be vital. It is likely that such expertise
will become increasingly important as microbiological risk assessment
addresses severe illness syndromes (e.g. haemolytic uraemic syndrome
associated with E. coli O157:H7) and sensitive sub-populations (e.g. pregnant
women and the immuno-compromised).
Food processing skills
Expert knowledge is also required on the implications of all stages in food
processing which may impinge on the outcome of the risk assessment. This will
include sources of raw materials (growing and harvesting methods or production
techniques), primary processing, manufacturing through to distribution and
retail sale to the consumer. It is evident that many stages in the food production
chain can impact on the levels of microorganisms which may be present in the
final product. Skills may be necessary in disciplines such as mechanical and
chemical engineering, food science and technology, agriculture and consumer
and sensory sciences. The use of such expertise will maximise the opportunity
for identifying intervention strategies which will lead to the greatest reduction in
risk.
7.3 Risk characterisation methods
This section often uses mathematical terminology. This usage is, however, only
for brevity and does not imply that the inputs or estimates must be expressed
mathematically. Almost always the inputs and outputs of the risk assessment are
at least partly qualitative. Even then the risk characterisation must still explicitly
consider the variability, sensitivity, uncertainty and validation of the risk
estimates which will depend in turn on the exposure assessment and hazard
characterisation. A summary of these three stages is shown in Fig. 7.1 where
inputs are grouped into those associated with the process (including the
foodstuff), the pathogen, and the person at risk.
Exposure assessment estimates the exposure of members of the target
population to the pathogen. Hazard characterisation estimates the harm to
members of the target population conditional on a given exposure to the
pathogen. Risk characterisation combines these estimates to produce estimates
of the harm to members of the target population, the required risk estimates. The
methods used by the risk characterisation to combine the exposure assessment
and hazard characterisation estimates depend on the nature of those estimates
and of the required risk estimates, as specified in the Statement of Purpose.
Figure 7.1 represents all three estimates as functions of the inputs, F_E, F_H and
F_R. Risk characterisation may be seen as the joint integration, or convolution, of
the exposure assessment and hazard characterisation estimates.
The hazard characterisation estimate of harm is conditional on an assumed
level of exposure. As the frequency of harm depends on exposure, the hazard
characterisation output is intrinsically multi-valued, often expressed as a dose-
response curve. This multi-valued estimate does not, in itself, represent
variability (see section 7.3.3). The multiple values do not represent different
individuals in a population but different assumed doses. Indeed each value, the
frequency of harm associated with a given dose, is generally an average across a
population and conceals the variability of response within that population.
7.3.1 Sensitivity analysis
Inasmuch as the purpose of a risk assessment is to deduce risk estimates from the
input factors, the dependence of the estimates on the input factors is central to
the whole process. However, formal sensitivity analysis is usually required both
to meet the objectives of the risk assessment and appropriately to handle
uncertainty.
Sensitivity analysis involves determination of the change in risk resulting
from changes in the inputs. This is important both in handling uncertainty
(section 7.3.5) and in determining and describing the risk determining factors
(section 7.3.4).
Viewed mathematically, sensitivity analysis is differentiation of the risk with
respect to the inputs which may be expressed as equation 7.1.
Fig. 7.1 The relationship between exposure assessment, hazard characterisation and risk
characterisation.
dF_R = Σ (∂F_R/∂x) dx          (7.1)

where F_R is the risk estimate and the x's are all the values on which the risk
depends.
This makes clear that the influence of an input on the risk estimate can be
considered in two parts. First is the partial differential, ∂F_R/∂x, defining the
effect on the risk estimate of a unit change in the input. To evaluate the actual
effect of a change on the risk estimate this must be multiplied by the magnitude
of the change, the total differential of x, dx.
The term sensitivity is often used to mean the total effect of an input on the
risk estimate and sensitivity analysis is used to mean the process of determining
and describing those factors with most effect on risk. We prefer to reserve the
term sensitivity, or more specifically sensitivity coefficient, for the partial
differential and the term sensitivity analysis for the process of determining the
sensitivity coefficients.
It is rare for the form of the risk estimate to permit the explicit, mathematical
determination of the effect of all the inputs, in the manner implied by equation
7.1. Almost always the effect of some inputs must be determined semi-
quantitatively at best. It is essential to consider all important inputs and to avoid
the temptation of restricting the sensitivity analysis to those inputs amenable to
mathematical treatment. Even when the sensitivity analysis is qualitative, it is
useful to divide it into the two parts implied by equation 7.1. Firstly, what is the
effect of a small change in the input, with all other inputs unchanged? Secondly,
what is the likely magnitude of a change in that input?
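These two questions can be answered numerically when the risk model is available as a computer function. The following Python sketch is illustrative only: the toy risk model, the baseline values and the 'credible changes' are assumptions, and a real assessment would use the model and distributions developed in the exposure assessment and hazard characterisation.

# Illustrative sketch of the two-part view of equation 7.1: a finite-difference
# estimate of the sensitivity coefficient (the partial derivative of risk with
# respect to an input) multiplied by a credible magnitude of change in that input.
# The risk model and all values are assumptions for illustration only.

def risk(storage_temp_c, storage_days):
    # Toy risk model: risk grows with storage temperature and storage time.
    return 1e-6 * (2 ** (storage_temp_c - 4.0)) * storage_days

baseline = {"storage_temp_c": 4.0, "storage_days": 10.0}

def sensitivity_coefficient(param, delta=1e-3):
    # Finite-difference approximation of the partial derivative at the baseline.
    perturbed = dict(baseline)
    perturbed[param] += delta
    return (risk(**perturbed) - risk(**baseline)) / delta

# First part: effect of a unit change; second part: multiply by a credible change.
for param, credible_change in [("storage_temp_c", 2.0), ("storage_days", 5.0)]:
    coeff = sensitivity_coefficient(param)
    print(param, "sensitivity coefficient:", coeff,
          "effect of credible change:", coeff * credible_change)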
Analogy of the sensitivity analysis to a mathematical differentiation also
clarifies two issues which may be missed in a qualitative application. Firstly, the
partial derivative with respect to an input may be a function of that input and
other inputs. It is important to consider how the sensitivity may vary with the
value of all relevant inputs. Secondly, a purely mathematical application of
equation 7.1 would evaluate the change in risk, F_R, by integrating the
expression over the frequency distribution of the inputs, x. It is important to
consider the frequency of different input values and especially the correlations
between them. Conversely, the mathematical analogy can encourage the neglect
of inputs which are not expressed as simple numbers; this must be avoided.
Many inputs will be categorical (e.g. sex) or choices (an example of a choice
would be the model used); all these inputs must be considered.
Sensitivity analysis, as defined here, is not an objective in itself. It is an
approach which is useful in drawing conclusions and assessing uncertainty.
Sensitivity analysis is likely to be used in several stages of the risk
characterisation, rather than forming a distinct stage itself. When determining
and describing the risk determining factors the x's considered will generally be
the process, pathogen and person input factors shown in Fig. 7.1. When studying
uncertainty the x's considered will generally be estimates of parameters, although
often those parameters will be descriptions of the variability in input factors.
7.3.2 Distinguishing variability and uncertainty
There is much confusion in the use of the terms variability and uncertainty.
Codex (1999) says that 'Differentiation of uncertainty and variability is
important . . .' but does not define either term. Other authors (Vose, 2000;
Voysey, 2000; Smith, 2002) distinguish between uncertainty and variability but
there is not complete agreement and the authors of this chapter do not find any
of these definitions completely satisfactory.
There is general agreement that uncertainty and variability both describe
values which are to some extent random or stochastic. There is also agreement
that variability reflects 'real' differences while uncertainty reflects lack of
knowledge on the part of the risk assessor which is, at least in principle,
reducible by further investigation. To provide clarity, while retaining the sense
of other authors we suggest that the distinction between uncertainty and
variability lies in the entity with which each is associated. Variability is a
characteristic of a population, uncertainty is a characteristic of an estimated
value (which value may be non-numeric). We present the following definitions:
Variability represents the heterogeneity in a well-characterised
population, usually not reducible through further measurement or study.
(from Burmaster and Bloomfield, 1996, our emphasis)
Uncertainty of an estimate characterises the dispersion of the values that
could reasonably be attributed to the estimated value. (based on the
definition of uncertainty of measurement in BSI, 1995)
The population exhibiting variability need not be people; it may, for example, be
the population of process temperatures applied to a product. Variability will often be
expressed as a frequency distribution but this may be non-numeric.
Parameters of a variability frequency distribution may be estimated with
associated uncertainty. For example, the risk to which a population is exposed
may be described by a median and percentiles, when those parameters will have
associated uncertainties. Although the variability within the population may
influence the uncertainty (a more variable population is more difficult to
characterise), the variability does not form an intrinsic part of the uncertainty. At
least in principle, further investigation can indefinitely reduce the uncertainty of
the estimated parameters. On the other hand, if an estimate relates to an
individual from the variable population the variability becomes part of the
uncertainty of the individual estimate. Even in principle the uncertainty of the
individual estimate cannot be reduced below the population variability without
information specific to the individual.
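The distinction can be made concrete with a small numerical sketch. In the Python fragment below, all distributions and values are assumed for illustration: variability is the spread of contamination between individual servings, while uncertainty is the imperfect knowledge of the parameter describing that spread.

# Illustrative sketch of variability versus uncertainty (assumed values only).
import numpy as np

rng = np.random.default_rng(1)

# Uncertainty: the mean log10 contamination of the population is only estimated,
# e.g. 2.0 log10 cfu with a standard error of 0.3 (an assumption).
mean_log10_samples = rng.normal(loc=2.0, scale=0.3, size=3)

for mean_log10 in mean_log10_samples:
    # Variability: individual servings differ even when the mean is treated as known.
    serving_log10 = rng.normal(loc=mean_log10, scale=0.5, size=10_000)
    p95 = np.percentile(serving_log10, 95)
    print(f"assumed mean {mean_log10:.2f} log10 cfu -> "
          f"95th percentile serving {p95:.2f} log10 cfu")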
7.3.3 Variability
This distinction makes it very important that the Statement of Purpose adequately specifies
the nature of the required risk estimates and the report makes clear the nature of
the reported risk estimates. It is unlikely that all members of the population have
the same chance of harm, their risk will depend on factors such as age, immunity
and dietary habits. A single valued risk estimate may conceal large variability
between the risk to individuals. It is important that the risk characterisation
captures and conveys such variability and that risk estimates relating to
populations are not applied to individuals without due caution.
Qualitative estimates will express variability in qualitative terms. Numerical
risk estimates may express variability as frequency distributions, defining the
proportion of the population with any given level of risk. However even
numerical risk estimates may be derived from non-numerical information and it
is important that all substantial variability is included with the risk estimate,
avoiding the tendency to concentrate on quantifiable aspects to the neglect of
qualitative information. Inasmuch as risk characterisation adds no new
information, merely combining that from the exposure assessment and hazard
characterisation, it is important that those steps do not suppress variability
information by premature averaging.
For example, it is common to represent the results of the hazard
characterisation as a dose-response curve defining the frequency of a stated
response at any given dose. That frequency represents an average across other
factors relating to the process, the pathogen and the person. Thus, the
representation of the hazard characterisation as a two-dimensional curve is a
simplification. A complete hazard characterisation would be multidimensional,
relating frequency of response to all of its input values.
In principle, at least, details of such multidimensional dependence should be
passed to the risk characterisation so that it can be combined with similar
information from the exposure assessment to estimate variability in risk. In
practice, many dependencies cannot be described in any more than the most
general qualitative terms by either the exposure assessment or the hazard
characterisation so that the unknown dependency becomes a component of
uncertainty.
7.3.4 Risk determining factors
Often identification of the factors with most influence on risk is more important
than the risk estimate itself. If a risk assessment is to inform risk management
decisions the risk characterisation must identify those factors that most influence
the risk. Where those factors are associated with the person this may influence,
for example, the groups to whom information is directed, or advice on dietary
behaviour. Where the factors are associated with the process this may direct risk
management efforts.
Identification and description of risk determining factors will generally be by
some form of sensitivity analysis as described in section 7.3.1. As stated earlier,
it is important that the changes and levels in input factors considered are realistic
and that realistic combinations are considered. Often information on
distributions of factors, and especially correlations between them, is much
poorer than information on typical or average values. Such uncertainty is often
mitigated by stating the sensitivity conclusions in conditional terms such as: 'If
input changes by x and nothing else changes then risk changes by y'. However
such conditional statements should be used with appropriate caution as they will
often be read as implying that the hypothesised change is feasible. This may
reduce the uncertainty in the conclusions of the sensitivity analysis compared to
the absolute risk estimates. However the changes in risk may be very sensitive to
the assumed mechanisms, that is the models, mathematical or otherwise, which
are implicit in the risk assessment process. Such model uncertainty can be
difficult to estimate but must be carefully considered.
7.3.5 Uncertainty
Every conclusion produced by the risk assessment should be subject to an
uncertainty estimation. Where the estimate is multi-valued the uncertainty
should also be multi-valued. For example, the within population variability in
risk may be represented as a frequency distribution specifying the proportion of
the population exposed to any given level of risk. It is not meaningful to
associate a single uncertainty value with the frequency distribution. If the
frequency distribution is represented by a parameterised equation then each
parameter has an associated uncertainty. If the frequency distribution is
represented by a graph then each point on that graph, that is the estimated
proportion of the population exposed to each given level of risk, has an
associated uncertainty.
When a frequency distribution is represented by a line on a graph it is
tempting to represent uncertainty by an additional pair of lines representing a
confidence interval. However such a representation is easily misinterpreted. For
example, the illustrative graph shown in Fig. 7.2(a) may be interpreted as
showing confidence intervals on the frequency associated with each level of risk
(Fig. 7.2(b)). This is probably the correct interpretation. However it may be
interpreted as showing confidence intervals on the quantiles, the level of risk to
which a given proportion of the population is exposed (Fig. 7.2(c)). It may even
be interpreted as showing confidence intervals on the level of risk to which an
individual in the population is exposed. Graphical representations of uncertainty
must be clearly labelled to avoid misinterpretation.
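The difference between these readings can be seen by computing the two kinds of interval explicitly. In the Python sketch below, the parameter values and distributions are invented for illustration: interval (b) is taken across the frequency at a fixed level of risk, and interval (c) across the level of risk at a fixed frequency.

# Illustrative sketch of the two interpretations of confidence intervals on a
# cumulative frequency curve of risk (all values and distributions are assumed).
import numpy as np

rng = np.random.default_rng(0)
risk_grid = np.logspace(-8, -4, 50)            # levels of per-portion risk examined

curves = []
for _ in range(500):                           # uncertainty iterations
    mean_log10_risk = rng.normal(-6.0, 0.3)    # uncertain parameter (assumed)
    pop_log10_risk = rng.normal(mean_log10_risk, 0.5, 5_000)  # population variability
    curves.append([(pop_log10_risk <= np.log10(r)).mean() for r in risk_grid])
curves = np.array(curves)

# Interpretation (b): interval on the proportion of the population at or below a
# given level of risk.
idx = int(np.argmin(np.abs(risk_grid - 1e-6)))
freq_lo, freq_hi = np.percentile(curves[:, idx], [2.5, 97.5])
print(f"proportion at or below {risk_grid[idx]:.1e}: {freq_lo:.2f} to {freq_hi:.2f}")

# Interpretation (c): interval on the level of risk at a given proportion (the median).
median_risks = [10 ** np.interp(0.5, c, np.log10(risk_grid)) for c in curves]
q_lo, q_hi = np.percentile(median_risks, [2.5, 97.5])
print(f"median per-portion risk: {q_lo:.1e} to {q_hi:.1e}")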
The uncertainty in risk estimates can be evaluated by the kind of sensitivity
analysis indicated in section 7.3.1. However, identification and description of risk
determining factors are also important conclusions and the uncertainty in those
conclusions should be assessed. The effect of risk determining factors is itself a
result of sensitivity analysis, a differential in the sense of equation 7.1. In principle,
the uncertainty of those effects should be determined by sensitivity analysis of the
sensitivity, a double differentiation. In practice, the original sensitivity analysis is
perforce semi-quantitative at best. Estimation of the uncertainty relating to
identification and description of risk determining factors can rarely be better than
qualitative, nevertheless, it can and should be explicitly considered.
Fig. 7.2 Different interpretations of graphical confidence intervals: (a) ambiguous;
(b) alternative 1; (c) alternative 2.
7.3.6 Validation
Voysey (2000) states that 'it is essential to ensure that the results of the risk
assessment accord with common sense and with experience'. Ideally the
conclusions of the risk assessment should be validated against information
which has not been used to produce the conclusions. The conclusions of the risk
assessment result from a model, which may be mathematical but generally is at
least partly conceptual or qualitative. It is well known that testing of a model
against the information used to produce the model gives only weak confidence
in the model's validity. It is generally good practice to keep a validation 'test
set' distinct from the 'calibration set' used to produce the model. In practice this
is unlikely to be possible for MRA. The paucity of relevant information requires
the use of all data which is available in producing the MRA conclusions, leaving
none for validation purposes.
Independent third-party peer review of the MRA offers an alternative
approach to validation. The difficulty of such review should not be
underestimated. The authors of an MRA often become so involved that it is
difficult for them to see where assumptions or preconceptions may limit the
valid scope of their conclusions; the reviewer must be truly independent. The
review must consider all the information, assumptions and models which lead to
the conclusions. This requires a level and breadth of skill and knowledge similar
to those of the MRA authors. Clearly, the review requires that the MRA is very
well documented.
7.4 Quantitative and qualitative outputs
Although often couched in mathematical language the principles above apply
equally strongly when the information being handled and the conclusions
produced are non-numeric. In practice, MRAs and their risk characterisations
are rarely either totally numeric (quantitative) or totally non-numeric
(qualitative). Some aspects of the exposure assessment or risk characterisation
will be blessed by good numeric data and validated mathematical models, others
will suffer from lack of knowledge and understanding. Sometimes available
quantitative data or tools may be consciously neglected as unjustified for the
stated purposes of the MRA.
7.4.1 Qualitative outputs
The ultimate MRA output (as the discipline was originally conceived, for
governmental or regulatory agencies' use at least) is a quantitative one. By
definition, this requires all the information and data needed to draw up the
risk characterisation to be numerical in nature. The reality of this will be
discussed in more detail later in this chapter.
What has become very clear as the number of MRAs being carried out has
steadily increased is that all the necessary data and information is very rarely if
ever solely in a quantitative format. This means that resources, including time,
need to be expended to fill gaps in the quantitative data available, if it is possible
to do so. As discussed earlier, quantitative information is not easy to come by in
some critical areas. An important, if not the most important, example of this is
the area of dose–response. It is extremely difficult if not impossible to determine
specific numbers of microorganisms that would need to be consumed by the
population under consideration in a specific study, because of ethical issues. It is
not likely that enough 'volunteers' of a target population could be found to
consume pathogenic microorganisms in food, sufficient to make them extremely
ill or even kill them!
This leaves the risk assessor with the option of carrying out an MRA with as
much quantitative information and data as is available, and filling the gaps in
with 'expert opinion' and/or qualitative material. This semi-quantitative MRA
output will be discussed later. The other option open to the assessor is to use
solely qualitative information and data to produce an output to the MRA.
The Codex Alimentarius Commission (Codex, 1999) defines qualitative risk
assessment as:
A risk assessment based on data which, whilst forming an inadequate
basis for numerical risk assessments, nonetheless, when conditioned by
prior expert knowledge and identification of attendant uncertainties
permits risk ranking or separation into descriptive categories of risk.
In simple terms, a qualitative risk assessment is 'a risk assessment that cannot be
expressed in a precise numerical format but can still be useful, particularly based
on expert knowledge and expressed in terms of categories such as high, medium
or low, or as risk comparisons' (Mitchell, 2000). Whether this type of MRA can
be used by government bodies or regulators depends on their purpose for
carrying out the MRA. If it is to set food safety objectives, then probably not.
Studies at Campden and Chorleywood Food Research Association have found
that a qualitative MRA output may certainly be adequate for industrial or
manufacturers' MRAs. For example, a company may be considering modifying an
existing line to produce a 'less risky' product. Or, a company which can show that
it can achieve a 20-day shelf life from an existing product, may wish to make a
cheaper version of the product with only a 10-day shelf life. These comparative
MRAs can be usefully performed with little if any quantitative data at all.
Consequently the cost of resources for carrying them out will also be less. The
output of this type of MRA will be severely lacking in detail, however it can be
sufficiently useful to answer the original question posed and therefore of benefit.
The original format for a qualitative MRA was the 'risk profile' (see, for
example, Voysey, 2000). This is a paper-based MRA where questions are asked
to cover the aspects of the hazard identification, exposure assessment and hazard
characterisation steps relevant to the pathogen, food and process being used. A
series of tandem questions can be used to gauge the level of uncertainty
associated with the answers given to the questions. The questions asked are
designed to tease out all the information that is needed to make a judgement by
the performer of the risk profile. It can be useful if the performer of the risk
profile considers how standard questions (such as those given in Voysey, 2000,
for example), could be interpreted to meet the need of each set of circumstances.
All relevant information and data required should be recorded for each question.
If the risk profiler is working at a conceptual stage in product development
for example, it can be useful to use a general profile template, and answer the
questions on a scale from 1 to 5, varying in severity. Although this exercise will
be of limited use, it can be used as a starting point for more detailed
consideration. Expert opinion is used extensively in this process. The risk
characterisation step of a risk profile consists essentially of identifying the areas
of the MRA where the answers suggest that the hazard could be a problem (i.e.
lots of 5s) and where questions cannot be answered with any degree of
confidence (i.e. high uncertainty).
The risk profile is a useful means for carrying out an 'outline MRA' before
the proper MRA is carried out. It helps the practitioners to appreciate what
information and data will be needed, to allow a proper MRA to be performed
and indeed, to determine if there is a need to perform a full MRA study. Typical
descriptors or outputs, which could be deduced from a qualitative MRA, are that
the risk associated with this pathogen in the food is 'high', 'low' or 'negligible'.
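A minimal sketch of how such a profile can be recorded and characterised is given below; the questions, the 1-to-5 scores and the flagging thresholds are all invented for illustration and are not taken from the CCFRA guideline.

# Illustrative sketch of a risk-profile style qualitative output: each question is
# scored 1-5 for severity, with a paired 1-5 score for uncertainty, and the risk
# characterisation step simply flags high scores and high uncertainty.
# Questions, scores and thresholds are assumptions for illustration only.
profile = {
    "Can the pathogen survive the heat process?": (5, 2),
    "Can the pathogen grow during chilled distribution?": (4, 4),
    "Is the raw material a known source of the hazard?": (3, 1),
}

for question, (severity, uncertainty) in profile.items():
    flags = []
    if severity >= 5:
        flags.append("potential problem area")
    if uncertainty >= 4:
        flags.append("better information needed")
    print(f"{question} severity={severity} uncertainty={uncertainty}:",
          "; ".join(flags) if flags else "no action flagged")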
7.4.2 Semi-quantitative outputs
Microbiological risk assessments need to be carried out with the most up to date
and relevant information and data available. In most situations where an MRA is
considered necessary, at least some useful quantitative information is likely to be
available, but rarely all that is needed. In this circumstance, qualitative and
quantitative inputs need to be combined in the MRA process. The output of such
an MRA can be expected to be more precise than that of a qualitative MRA but
less precise than that of a fully quantitative MRA.
7.4.3 Quantitative outputs
Codex (1999) defines a quantitative MRA as 'A risk assessment that provides
numerical expressions of risk and indication of the attendant uncertainties.'
While the representation in Fig. 7.1 is intended to include qualitative and semi-
quantitative relationships (e.g. heating reduces contamination levels), a
quantitative risk assessment requires these relationships to be expressed numerically
(e.g. equation 7.2).
    N_T = N_0 · 10^(−t/D),    D = D_R · 10^((T_R − T)/z)                (7.2)

where:
N_T = contamination level after heating for time t at temperature T
N_0 = initial contamination level
D_R = decimal reduction time at the reference temperature T_R (a characteristic of the pathogen)
z = a characteristic of the pathogen
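As an illustration of how equation 7.2 can be evaluated, the short Python sketch below codes the log-linear model directly; the organism parameters and process conditions are invented for the example and are not taken from any particular assessment.

    def log_linear_inactivation(n0, t, temp, d_ref, t_ref, z):
        """Survivors after heating for time t (min) at temperature temp (degC),
        using the log-linear model of equation 7.2."""
        d = d_ref * 10 ** ((t_ref - temp) / z)  # D value at the process temperature
        return n0 * 10 ** (-t / d)              # surviving contamination level N_T

    # Illustrative values only: 1.5 min at the reference temperature itself,
    # with D_R = 0.3 min and z = 7 degC, starting from 10^4 cfu/g.
    n_t = log_linear_inactivation(n0=1e4, t=1.5, temp=60.0, d_ref=0.3, t_ref=60.0, z=7.0)
    print(f"surviving level: {n_t:.3g} cfu/g")  # 5 log reductions -> about 0.1 cfu/g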
A typical MRA involves many such interacting relationships and the only
practical means of representing, linking and calculating them is on a computer.
Spreadsheet programs make it easy to develop, link and visualise
relationships. However spreadsheet models may evolve rather than be designed,
leading to unforeseen relationships and interactions, and can be difficult to
document and validate. Guidance on avoiding these problems is available and
should be followed (Read and Batson, 1999). Other computer programs allow
the iterative definition of series of functions, one in terms of the other. These
environments make formal design and documentation a more natural part of the
model-building process but require more experience in their use and are
generally only used by specialists.
However it is undertaken, the results of this initial stage of model building are models, often called 'deterministic' models, which produce single outputs from single sets of input values. If all the input factors (e.g. N_0, T and t in equation 7.2) had no variability, and those factor values, the model (e.g. the form of equation 7.2), and the model parameters (e.g. D_R and z in equation 7.2) had no uncertainty, then the resultant risk could be calculated from the deterministic model and would have no variability or uncertainty. This is not the case for any realistic risk assessment. Variability in the input factors results in variability in risk. Uncertainty in the parameters describing the variable input factors, in the nature of the models and their parameters, results in uncertainty in the parameters describing the variable risk. Variability and uncertainty, usually represented as frequency distributions, must be added to the deterministic model to produce a model, often called a 'stochastic' model, which gives risk distributions from input distributions.
In principle, if the deterministic model was very simple and the input
distributions were amenable, it would be possible to deduce the output
distribution algebraically. In practice this is not possible for real situations.
Monte Carlo modelling
The most common approach is Monte Carlo modelling which provides a frequency
distribution of the output by making many deterministic calculations, known as
iterations. At each iteration a single random value is generated for each of the
stochastic inputs and parameters, resulting in a single calculated risk. If the
distribution of randomly generated input values is appropriate then the distribution
of calculated values represents the distribution of risk. There are a number of
computer software packages dedicated to Monte Carlo modelling (e.g. @Risk,
Palisade Corporation; Crystal Ball, Decisioneering, Inc.) or it may be implemented
in any of several statistical packages by those expert in their use. Although Monte
Carlo modelling is relatively simple to understand and implement and is by far the
most widely used technique, it is not without its dangers.
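The following sketch shows the iteration loop in miniature, using NumPy rather than a dedicated package such as @Risk or Crystal Ball; the input distributions, the simple exponential dose-response step and all parameter values are invented purely to illustrate how a risk distribution emerges from repeated deterministic calculations.

    import numpy as np

    rng = np.random.default_rng(seed=1)
    n_iter = 100_000  # number of Monte Carlo iterations

    # Illustrative input distributions (not from any published MRA):
    log_n0 = rng.normal(2.0, 0.5, n_iter)             # log10 initial contamination (variability)
    t = rng.triangular(0.5, 1.0, 2.0, n_iter)         # heating time, minutes (variability)
    d = rng.lognormal(np.log(0.3), 0.2, n_iter)       # D value, minutes (uncertainty)
    r = 1e-4                                          # dose-response slope, illustrative

    # One deterministic calculation per iteration (equation 7.2 followed by a
    # simple exponential dose-response step, used here purely as an example).
    dose = 10 ** (log_n0 - t / d)
    p_ill = 1.0 - np.exp(-r * dose)

    print(f"mean risk per serving: {p_ill.mean():.2e}")
    print(f"95th percentile:       {np.percentile(p_ill, 95):.2e}")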
The validity of the output distribution (risk) depends entirely on the validity of the input distributions. Vose (2000) presents a 'cardinal rule': 'Every iteration of a risk analysis model must be a scenario that could physically occur'. The distributions representing the inputs must be realistic. 'However, experience indicates that what is important is to choose distributions based on properties such as whether the distribution is skewed or symmetric, if it should be truncated or not, and whether extreme values should be allowed' (Smith, 2002). It is important to include correlations between the input values. Failure to do so may break Vose's cardinal rule, as input values are used which are feasible individually but not in combination. This applies to all stochastic elements, uncertain parameters as well as variable factors. It is also important that the sampled values cover a wide range of each input. For simple random sampling this can require a great number of iterations. 'Latin Hypercube Sampling' is the most common approach to achieving a wide spread of random values representing the chosen distribution.
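A minimal sketch of Latin Hypercube Sampling is given below, assuming the qmc module available in recent versions of SciPy; the stratified uniform samples are mapped onto illustrative input distributions through their inverse cumulative distribution functions.

    import numpy as np
    from scipy.stats import qmc, norm, triang

    # Latin Hypercube sample of two inputs in the unit hypercube.
    sampler = qmc.LatinHypercube(d=2, seed=1)
    u = sampler.random(n=1000)          # shape (1000, 2), stratified in [0, 1)

    # Map the uniform strata onto the chosen input distributions via their
    # inverse CDFs (percent point functions); the distributions are illustrative.
    log_n0 = norm.ppf(u[:, 0], loc=2.0, scale=0.5)        # log10 initial count
    t = triang.ppf(u[:, 1], c=0.33, loc=0.5, scale=1.5)   # heating time, min

    # The strata ensure the whole range of each input is represented.
    print(log_n0.min(), log_n0.max())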
Separating uncertainty and variability can be burdensome. The most common
approach is 'second-order simulation'. One set of distributions describes the uncertainty in model parameters and estimated values; a second set of distributions represents variability. A single simulation uses one set of values
sampled from the uncertainty distributions and many samples of the variability
distributions to produce a result incorporating variability without uncertainty.
Multiple simulations are run, each with a fresh sample from the uncertainty
distributions, producing an uncertainty distribution of results. This is
conceptually simple and can be straightforward to implement, especially with
Crystal Ball. However, a single simulation often requires many iterations to
produce an adequate representation of variability. A second order simulation
dramatically increases the time required and is often performed with relatively
few simulations, limiting the reliability of the uncertainty estimates.
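A bare-bones sketch of such a second-order (nested) simulation is shown below: the outer loop draws one value of each uncertain parameter per simulation, the inner loop samples variability, and the spread of the resulting per-simulation mean risks expresses uncertainty. All distributions and values are illustrative.

    import numpy as np

    rng = np.random.default_rng(seed=2)
    n_outer, n_inner = 200, 10_000   # uncertainty simulations x variability iterations

    mean_risks = []
    for _ in range(n_outer):
        # One draw from the uncertainty distributions (e.g. an uncertain D value
        # and dose-response slope), held fixed for the whole inner simulation.
        d = rng.lognormal(np.log(0.3), 0.2)
        r = rng.lognormal(np.log(1e-4), 0.5)

        # Inner loop: variability in contamination and heating time.
        log_n0 = rng.normal(2.0, 0.5, n_inner)
        t = rng.triangular(0.5, 1.0, 2.0, n_inner)
        p_ill = 1.0 - np.exp(-r * 10 ** (log_n0 - t / d))
        mean_risks.append(p_ill.mean())

    mean_risks = np.array(mean_risks)
    print(f"median of mean risk: {np.median(mean_risks):.2e}")
    print(f"90% uncertainty interval: {np.percentile(mean_risks, [5, 95])}")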
It can be difficult to represent variability and uncertainty in categorical
values, especially in choices such as model selection. The many numbers and
graphs resulting from a typical Monte Carlo simulation can give a misleading
impression of precision and a temptation to ignore sources of variability and
uncertainty which have not been explicitly and quantitatively included in the
model. It is important that the conclusions are considered and expressed with
due regard to all sources of uncertainty, including those in the underlying
assumptions. The reported risk estimate should be accompanied by an indication
of 'the dispersion of the values that could reasonably be attributed to the estimated value'.
Other approaches
Although Monte Carlo modelling is by far the most common means of including
variability and uncertainty in a model there are others, albeit as yet rarely used
except in theory or demonstration. As mentioned above, classical statistics and
algebra are impractical for all except the simplest problems. However they do
give rapid, reliable conclusions and they should be used when possible, even
within a Monte Carlo simulation. If a part of a model can be solved and
represented algebraically within a simulation, then this should be done.
The Bayesian approach to statistics, in which information modifies a 'prior distribution' to generate a 'posterior distribution', is becoming a more popular approach to combining qualitative or semi-quantitative expert opinion with data. There can be substantial conflict between proponents of classical and Bayesian approaches to statistics, often relating to the meaning of the fundamental term 'probability'. The authors of this chapter do not take a position in this conflict, regarding classical and Bayesian approaches as complementary, each with their own advantages and appropriate in different circumstances. Where possible we avoid the contentious issues, preferring to use the terms 'frequency', 'relative frequency' and 'confidence' where these are more appropriate than 'probability'.
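As a small illustration of the prior-to-posterior updating referred to above, the sketch below uses a conjugate Beta prior for a prevalence (standing in for expert opinion) updated with hypothetical survey counts; the numbers are invented and the example is not drawn from any of the assessments discussed in this chapter.

    from scipy.stats import beta

    # Expert opinion expressed as a Beta prior on prevalence (illustrative):
    # roughly 10% expected, with wide uncertainty.
    a_prior, b_prior = 2, 18

    # Survey data (illustrative): 7 positives in 120 samples.
    positives, samples = 7, 120

    # Conjugate update: posterior is Beta(a + positives, b + negatives).
    a_post = a_prior + positives
    b_post = b_prior + (samples - positives)

    lo, hi = beta.ppf([0.025, 0.975], a_post, b_post)
    print(f"posterior mean prevalence: {a_post / (a_post + b_post):.3f}")
    print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")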
Techniques such as expert systems and neural networks are also being
adapted to the incorporation of expert opinion into stochastic models (see, for example, Barker, 2000). However, they have not yet been used in major, published MRAs and their application to MRA must be regarded as 'work under development', whose potential is not yet proven.
7.5 Risk characterisation in practice: some examples
When chemists consider the risk characterisation step, they identify two
different approaches. The first of these is adopted when the nature of the hazard
and the dose-response data indicate the existence of a threshold. In this case a
'safety evaluation' is carried out and the risk characterisation is used to
determine risk relative to parameters such as the acceptable daily intake (ADI)
of the chemical. The second alternative relates to toxic effects that appear not to
be thresholded, or where the existence of a threshold cannot be assumed. In this
case, the risk characterisation may take the form of a quantitative risk
assessment (Walker, 2000). This second approach is similar to that taken for the
risk characterisation step in MRA.
With the advent of the Codex Alimentarius papers on risk assessment, there is
a much more structured and universally accepted format for the undertaking of
risk characterisation. In particular, the issues of sensitivity and variability are
being addressed more thoroughly, thus allowing higher levels of confidence in
the analysis of various mitigation strategies for reduction of pathogens at
different stages of the food chain. The series of FDA/USDA risk assessments
(e.g. Salmonella in eggs, Vibrio parahaemolyticus in shellfish, E. coli in ground
beef and Listeria in various ready to eat foods) are extensive studies and indicate
both the breadth and depth of investigation which may be necessary to
'accurately' estimate risk. In the following section some key features are
highlighted for a small number of risk assessments which have recently been
published.
7.5.1 Study 1: Risk assessment to public health from foodborne Listeria
monocytogenes among selected categories of ready-to-eat foods (US DHHS/
USDA, 2001)
In this study, the risk assessment linked the probability of exposure to Listeria
monocytogenes from the consumption of food in 20 food categories with adverse
health outcomes. The primary focus was on a prediction of relative probability
of contracting listeriosis from the consumption of a single serving of food in one
of the 20 food categories. Additional predictions considered the extent of the
annual consumption of various foods and the predicted contribution of each of
the individual food categories to the number of listeriosis cases nationally. The
study was based on contaminated foods at retail level and included both sporadic
and outbreak cases.
Importantly, this study considers the limitations of modelling right at the start of the risk characterisation process. In particular, because listeriosis is such a rare event, straightforward Monte Carlo modelling was unable to characterise adequately the tails of the distributions in the model. The risk characterisation was therefore developed in two stages. The first was a simulation of the exposure assessment calculations, which produced the number of annual servings for each of three sub-populations (perinatal, elderly and intermediate age) at designated dose levels for each food category; this included population variability and uncertainty due to lack of information. The second step calculated predictions of the relative risk of listeriosis to each sub-population from each food category. Previous work had focused on highly virulent strains in a single population group. Including variation in human susceptibility and variability in pathogen virulence substantially reduces the risk estimate associated with a particular dose. The study recognises that most exposures to L. monocytogenes are unlikely to result in listeriosis, even among highly susceptible segments of the population.
The study showed substantial differences in risk among the different food categories. For example, the median predicted relative risk for pâté and meat spreads and that for ice cream and frozen dairy products differ by almost a million-fold. The 5th and 95th percentile values were calculated for the three sub-populations, thus enabling an estimation of the variability and uncertainty. It is recognised that five key factors, including storage before consumption, had a profound influence on the results of the exposure assessment and the subsequent risk characterisation. The study also recognised that data on actual consumer storage practices were not available, and inputs were therefore based on expert judgement and USDA recommended practices; in reality, actual consumer storage times are likely to be longer than the USDA recommendations. In most cases, foods with a high risk on a per-serving basis also had a high risk on an annual-consumption basis, but this was not always the case. For example, the risk of illness from vegetables and from pasteurised fluid milk was relatively low on a per-serving basis but higher on a per-annum basis.
7.5.2 Study 2: Draft risk assessment on the public health impact of Vibrio
parahaemolyticus in raw molluscan shellfish (USFDA, 2001)
This study describes the probability of illness caused by consumption of oysters harbouring pathogenic V. parahaemolyticus. The study was divided into three
modules, harvest, post harvest and public health. In the harvest module, salinity
was not considered an important variable and the module was therefore based
solely on water temperature. The post harvest module addressed the simulation
of oyster handling practices and effects of various mitigations. The public health
module looked at the distribution of potential illness in different regions and seasons. The post-harvest mitigation strategies investigated were:
• mild heat treatment (5 min at 50 °C),
• freezing (−30 °C), and
• rapid cooling immediately following harvest.
All three had a substantial effect on the distribution of the probable number of illnesses. Mild heat treatment was found to reduce the mean risk of illness per serving to less than 1 in 100,000.
FDA had advised that V. parahaemolyticus in shellfish should not exceed a
level of 10,000 viable cells per gram. This risk assessment did allow the workers
to address the question of what would be the predicted impact on the incidence
of disease if one could exclude oysters at the time of harvest that had a certain
level of V. parahaemolyticus in the Louisiana Gulf Coast summer harvest. The
simulation results suggest that 15% of illnesses are associated with consumption of oysters that contain greater than 10^4 V. parahaemolyticus per gram at the time of harvest. The corresponding fraction of oysters containing greater than 10^4 per gram was 5%. Therefore, a large proportion of the harvest contains lower numbers but still carries a significant level of risk.
Additional simulations were performed to examine the effect of uncertainty
and variability parameters on the variance of the distribution of illnesses
obtained by simulation. The influences of three parameters were examined:
1. relative growth of V. parahaemolyticus in oysters versus broth model
2. combination of variability and uncertainty in the overall percentage of V.
parahaemolyticus that is pathogenic; and
3. variation of water temperature.
These three factors are considered to account for approximately 45% of the total
variation in risk per serving. Individually, the uncertainty in growth rate
proportionality and percentage pathogenic account for 26% and 12% of the total
variation respectively. Water temperature, which is variable, accounts for 22%
of total variation. Thus of all factors, the variation would be reduced most by
additional information on growth in oysters versus broth. An additional and
important source of uncertainty associated with the predicted distribution of
illness is that associated with the extrapolation from illness in feeding trials.
7.5.3 Study 3: Salmonella enteritidis risk assessment: shell eggs and egg
products (USDA, 1998)
The first major formal risk assessment undertaken in the USA was a very large
study. The work programme commenced in 1996 in response to the increasing
number of illnesses associated with the consumption of shell eggs. The risk
assessment model consisted of five modules, egg production (estimating number
of eggs infected with Salmonella Enteritidis), the shell egg module, the egg
products module, the preparation and consumption module and the public health
module. The latter calculated the incidence of illnesses associated with four degrees of clinical outcome: recovery without treatment, recovery with treatment, hospitalisation and mortality. The baseline model for shell eggs simulated an average production of 46.8 billion shell eggs per year in the US, 2.3 million of which were contaminated with Salmonella Enteritidis, resulting in 661,633 human illnesses per year. The study then examined mitigation elasticity, which is a measure of how changes in module variables affect model outputs. It was observed that combinations of mitigations may potentially be more effective in reducing total human illnesses. Each of the five modules was subjected to a sensitivity analysis.
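Elasticity of this kind can be approximated numerically as the ratio of the proportional change in an output to a small proportional change in one input. The sketch below does this for a deliberately simple, invented illness model; it is not the USDA model, only an illustration of the calculation.

    def elasticity(model, baseline_inputs, name, rel_step=0.01):
        """Approximate elasticity of the model output with respect to one input:
        (% change in output) / (% change in input), at the baseline point."""
        y0 = model(**baseline_inputs)
        perturbed = dict(baseline_inputs)
        perturbed[name] *= (1 + rel_step)
        y1 = model(**perturbed)
        return ((y1 - y0) / y0) / rel_step

    # Illustrative toy model: annual illnesses as a function of flock prevalence
    # and a cooking-failure rate (invented, not the USDA model).
    def illnesses(prevalence, cooking_failure, servings=4.68e10, risk_given_exposure=3e-4):
        return servings * prevalence * cooking_failure * risk_given_exposure

    base = {"prevalence": 5e-5, "cooking_failure": 0.1}
    print(elasticity(illnesses, base, "prevalence"))       # ~1.0 for this multiplicative model
    print(elasticity(illnesses, base, "cooking_failure"))  # ~1.0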
7.5.4 Study 4: Quantitative microbiological risk assessment: principles
applied to determining the comparative risk of Salmonellosis from chicken
products (Brown et al., 1998)
In this paper models were constructed in accord with Codex Alimentarius
principles, to provide a quantitative risk assessment (QRA) of Salmonellosis
from frozen poultry products. The QRA addressed three types of information:
occurrence and distribution of Salmonella, sensitivity of populations to infection
and the effect of cooking (in the factory or home) on levels of Salmonella and
hence the risk of infection. The paper contains interesting data on the issues
associated with thermal inactivation and heat transfer. The models compute the
chance of infection from a chicken portion (possibly contaminated with
Salmonellae), subjected to a specified heat treatment and ingested by an
individual who may be sensitive. The program allows users to produce risk
estimates without extensive data on the dose response within populations. A
facility has been built into the program to find the value of a chosen variable that
gives rise to n people per million units at risk of infection. The authors discuss
the issues associated with statistical sensitivity and the implications for risk
estimates.
7.5.5 Study 5: Quantitative risk assessment for Escherichia coli O157:H7
in ground beef hamburgers (Cassin et al., 1998)
The authors introduce the term process risk model (PRM) for the integration and
application of QRA methodology with scenario analysis and predictive micro-
biology to provide an objective assessment of the hygienic characteristics of a
manufacturing process. In the PRM, one submodel described the behaviour of
Escherichia coli O157:H7 throughout the production, handling and consumption
chains. The second submodel is a dose-response model to estimate illness risk.
Monte Carlo simulation was used to assess the effect of the uncertainty and
variability in the model parameters on the predicted human health risk. The model predicted the probability of haemolytic uremic syndrome and mortality. The efficacy of three mitigation strategies was also explored. The authors give a
very clear account of importance analysis, i.e. examining the sensitivity of an
outcome to a factor and the uncertainty and variability associated with that
factor.
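Importance analysis of this sort is commonly implemented as a rank correlation between each sampled input and the simulated output; the sketch below illustrates the idea with SciPy's Spearman correlation on an invented toy model, not the Cassin et al. model itself.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(seed=3)
    n = 50_000

    # Illustrative sampled inputs and a toy risk output (not the published model).
    log_n0 = rng.normal(2.0, 0.5, n)
    t = rng.triangular(0.5, 1.0, 2.0, n)
    d = rng.lognormal(np.log(0.3), 0.2, n)
    risk = 1.0 - np.exp(-1e-4 * 10 ** (log_n0 - t / d))

    # The rank correlation of each input with the output indicates how strongly
    # variation in that input is associated with variation in risk.
    for name, values in [("log N0", log_n0), ("time", t), ("D value", d)]:
        rho, _ = spearmanr(values, risk)
        print(f"{name:8s} rank correlation with risk: {rho:+.2f}")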
7.6 Current problems and future trends
Inasmuch as the risk characterisation synthesises the results of the previous stages, its problems and its future reflect those of the stages that precede it. Many of the problems arise because MRA is a new and rapidly developing discipline.
Practitioners have limited experience, in years and in range of technique and
application. While a consensus is building on fundamental principles and
terminology, helped by the Codex document (1999), this is not yet complete. It
is to be hoped and expected that more formal definitions will be laid down and
widely accepted, reducing misunderstandings between practitioners and
confusion of audiences.
Increased experience of MRAs by those audiences, the 'customers' of the risk analyst, will increase their understanding of the process and conclusions and reduce pressure on the risk analyst to use approaches, such as worst case assumptions and the 'precautionary principle', which are incompatible with MRA as presented here. We will not discuss such approaches here except to note the Codex principle that 'There should be a functional separation between Risk Assessment and Risk Management', and that risk assessment estimates the frequency, not the possibility, of harm.
MRA was developed and has predominantly been applied for governmental
purposes. It is already clear that the techniques typically used for governmental
MRAs, when regarded as a ¡®toolbox¡¯ to be adapted to the situation, can help
achieve industrial objectives. However, the relevance and relative importance of
different philosophies and techniques to different circumstances will only
become evident over time and no doubt specific tools will be developed.
One factor distinguishing governmental and industrial MRAs is their scope.
Typically a governmental scope is very broad both in terms of product, covering
a generic product group, and the food chain, from farm to fork. An industrial
scope is much narrower, commonly restricted to a single brand, or even a single
production line, and often concentrating on that part of the chain under the
control of the company. The reduced scope has benefits and disadvantages. The
range of variability to be considered is much reduced, especially in categorical
rather than numerical factors such as processing method, resulting in reduced
model complexity and uncertainty. There is often substantial directly relevant
monitoring data available reducing the uncertainty in estimating values, and in
extrapolating from the samples measured to the population considered by the
MRA. However, the limited scope reduces the extent to which the conclusions
of the MRA can be applied. This reduces the value of the MRA, and thus the
resource which can be justified. There is a balance between, on the one hand,
ambitious objectives and wide scope leading to wide applicability but high
uncertainty and, on the other hand, tight objectives and a narrow scope leading
to narrow applicability but increased precision. It is not yet clear which
industrial objectives can be reasonably achieved at what cost. This hampers the
cost benefit analysis which, at least informally, precedes any MRA. Once again,
experience will clarify the objectives and appropriate scopes which can be
achieved cost effectively and within time scales appropriate to industry.
Turning from the purpose of the MRA to the information collection and
synthesis stages, the principal problem is undoubtedly the quantity, quality and
relevance of the information. Most of the scientific data used in MRAs has not
been produced with MRAs in mind. As MRAs become more important it is to be
expected that data, especially that from government funded work and including
that not produced directly for MRAs, will be more suitable for MRA use. There
are already initiatives to build collections of information and other resources so
that they are readily available to MRA practitioners.
Much of the available and relevant information is, and will remain, non-
numeric, qualitative, expert opinion. Such information is not easily handled by
Monte Carlo simulation, at least not with appropriate handling of uncertainty
and variability. The techniques referred to above and others should be developed
into practical tools which can be combined with each other and Monte Carlo
simulation to give a broad range of flexible tools capable of handling and
synthesising 'fuzzy' as well as numeric information.
MRA is often constrained by the limits of scientific knowledge, especially on
the behaviour of pathogens and the mechanism of infection and disease
processes. Progress on such fundamental issues will be slow, but MRA must be
ready to incorporate new knowledge as it becomes available. The requirement
for MRA to use the most current information and understanding will militate
against the re-use of MRA modules. The limited number of complete MRAs
performed to date have generally been independent of each other. Although each
has been modular in itself the modules have not been directly compatible
between MRAs. This is likely to remain the case for some time. The number of
extant MRAs is so small, and information and techniques available are
developing so rapidly, that it is generally inappropriate to re-use major portions
without careful consideration, and probably substantial modification.
Nevertheless, prior MRAs are already easing and accelerating the production
of later MRAs, and this trend can be expected to continue.
7.7 References

BARKER, G. G. (2000) 'Risk assessment for Clostridium botulinum in food: Bayesian belief networks in microbial risk assessment', in Price, G. M. and Shuker, L. K., Probabilistic Approaches to Food Risk Assessment, Inst. Env. Health, UK, 25–26.
BENFORD, D. J. and TENNANT, D. R. (1997) 'Food Chemical Risk Assessment', in Tennant, D. R., Food Chemical Risk Analysis, Blackie Academic and Professional, London, 21–54.
BROWN, M. H., DAVIES, K. W., BILLON, C., ADAIR, C. and MCCLURE, P. J. (1998) 'Quantitative microbiological risk assessment: principles applied to determining the comparative risk of Salmonellosis from chicken products', J. of Food Protection, 61, 11, 1446–1453.
BSI (1995) Guide to the Expression of Uncertainty in Measurement, Vocabulary of Metrology Part 3, SI PD 6461: Part 3.
BURMASTER, D. E. and BLOOMFIELD, L. R. (1996) 'Mathematical Properties of the Risk Equation When Variability is Present', Human and Ecological Risk Assessment, 2 (2), 348–355.
CASSIN, M. H., LAMMERDING, A. M., TODD, E. C., ROSS, W. and MCCOLL, R. S. (1998) 'Quantitative risk assessment for Escherichia coli O157:H7 in ground beef hamburgers', Int. J. Food Microbiol., 41, 21–44.
CODEX (1999) 'Principles and Guidelines for the Conduct of Microbiological Risk Assessment', CAC/GL-30.
FAO/WHO (1995) Application of Risk Analysis to Food Standards Issues, Report of the Joint FAO/WHO Expert Consultation, WHO/FNU/FOS/95.3.
MITCHELL, R. T. (2000) Practical Microbiological Risk Analysis. How to Assess, Manage and Communicate Microbiological Risks in Foods, Chandos Publishing (Oxford) Limited.
READ, N. and BATSON, J. (1999) Spreadsheet Modelling Best Practice, Business Dynamics, PricewaterhouseCoopers.
SMITH, E. (2002) 'Uncertainty Analysis', in El-Shaarawi, A. H. and Piegorsch, W. W., Encyclopedia of Environmetrics, John Wiley & Sons Ltd, Chichester, 2283–2297.
VOSE, D. (2000) Risk Analysis: A quantitative guide, John Wiley & Sons Ltd, Chichester.
VOYSEY, P. (2000) An Introduction to the Practice of Microbiological Risk Assessment for Food Industry Applications, Campden & Chorleywood Food Research Association Group, Guideline 28.
WALKER, R. (2000) 'Risk Characterisation', Food Nutrition Law and Health Journal, Poland, Supplement 4, 66–71.
UNITED STATES DEPARTMENT OF AGRICULTURE (USDA) (1998) 'Salmonella Enteritidis risk assessment, shell eggs and egg products', Washington DC.
UNITED STATES DEPARTMENT OF HEALTH AND HUMAN SERVICES AND UNITED STATES DEPARTMENT OF AGRICULTURE (US DHHS/USDA) (2001) 'Draft assessment of the relative risk to public health from foodborne Listeria monocytogenes among selected categories of ready-to-eat foods', Washington DC (www.foodsafety.gov).
UNITED STATES FOOD AND DRUG ADMINISTRATION (US FDA) (2001) 'Draft risk assessment on the public health impact of Vibrio parahaemolyticus in raw molluscan shellfish', Washington DC (www.foodsafety.gov).
8
Risk communication
R. Mitchell, Public Health Laboratory Service, London
8.1 Introduction
Communication is one of the most basic of human activities, yet so often it
goes wrong. Experts can feel exasperated when non-experts (i.e. consumers,
the food industry and politicians) fail to understand expert pronouncements on
food safety risks and consequently fail to follow expert advice on the proper
practices needed to eliminate or mitigate those risks. They feel that non-
experts are at fault because they are technically illiterate. Their solution is that
more and better education is needed. On the other hand non-experts are
equally frustrated when experts (i.e. public health professionals and scientists)
apparently fail to see their point of view and come up with what appears to
them as ludicrous and patronising advice. They feel that experts need to get
out more and live in the real world. Clearly, there is a wide gulf between these
opposing points of view. Something is needed to bridge this gap. The answer
is risk communication.
Expressed in simple terms, risk communication is the two-way exchange of
information and opinions on how the risks have been assessed and can be
managed (Mitchell, 2000). In technical terms, it was defined by the Codex
Alimentarius Commission as 'an interactive exchange of information and opinions concerning risk among risk assessors, risk managers, consumers and other interested parties' (FAO/WHO, 1997). However, the perception of 'risk' is heavily influenced by a range of 'outrage factors' that can trigger in people psychological responses that may be out of step with the risk expressed in purely technical terms (Section 8.4.2).
Accordingly, a FAO/WHO Expert Consultation on the Application of Risk
Communication to Food Standards and Safety Matters (1998) recommended that
the Codex definition should be modified by inserting the words 'and risk-related factors' so that the definition would read:

Risk communication is the exchange of information and opinions concerning risk and risk-related factors among risk assessors, risk managers, consumers and other interested parties.
8.2 The concept of risk
Almost every aspect of human existence carries with it an element of risk. In
order to survive, human beings need to assess each risk, decide whether or not it
is acceptable and develop strategies for managing it.
An understanding of risk communication is predicated on an understanding of
the concept of risk itself, not least because the concept of risk encompasses a
large element of human psychological responses and because non-experts often
perceive risks in ways very different from experts.
8.2.1 Different uses of the word risk
The first barrier to effective risk communication is that the term 'risk' can take on a variety of very different meanings. There are a number of ways in which this can lead to confusion:
• To many experts risk means 'probability'; to others it means 'severity of the hazard (harm)'; to others it is a combination of the 'probability and severity' together.
• Some languages do not discriminate between 'hazard' and 'risk'. For example, in French the same word 'hasard' can cover both terms, and in German they have both been translated as 'risiko'.
• The context of the term 'risk' is heavily qualified by the terms that precede or succeed it. Precedent terms include, for example, 'relative', 'acceptable', 'high' and 'low', each of which imbues a very different meaning to the context. Similarly, successor qualifying terms include 'factor', 'assessment', 'analysis', 'management', 'communication', 'averse', 'perception', 'ratio' and 'behaviour'.
• The hazard component referred to can vary. For example, it can refer to 'risk of illness', 'risk of illness per thousand servings of a product', 'risk of illness per hundred thousand of the population' or the 'risk of loss of business or loss of sales'.
• The meaning of 'risk' changes depending on who is affected by it. It can be the risk 'to me', 'to the population as a whole', 'to vulnerable groups within that population' or, as before, 'to my business'.
The key to overcoming this particular barrier is to be very specific. For successful risk communication it is vital to be quite detailed when describing risks, almost to the extent of using, where appropriate, statements in the form of: the risk 'of what' to 'whom'. This should avoid many of the arguments about risk that are caused by an incomplete description of risk.
8.2.2 Technical expression of risk
Historically, experts have tended to think of risk purely in terms of the
probability of an adverse event happening. (This was very much true in relation
to risk management strategies like the hazard analysis and critical control point
(HACCP) system where the resulting harm was described as the hazard.) The
Codex Alimentarius Commission now defines risk as 'a function of the probability of an adverse health effect and the severity of that effect, consequential to a hazard(s) in food' (Codex Alimentarius Commission,
1998). However, it can be argued that the general population, and sometimes
experts, do not truly understand probability.
Probability can be defined as 'the number of desired outcomes divided by the number of all possible outcomes'. However, this can be of limited meaning in determining the public's behaviour. A simple example is the UK National Lottery, where one has to select 6 balls from a possible 49. There is only one combination in any draw in which all 6 balls will be matched, the probability of which is about 1 in 14 million. To put this probability into context, if you spent £50 a week on the lottery, you could expect to win the jackpot by matching 6 balls roughly once in 5000 years. Further details on the behaviour of the public in relation to probability and lotteries across the world are available from Haigh (1999).
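The figures quoted can be checked directly: there are 49-choose-6 possible combinations and, assuming a ticket line costs £1 (an assumption; the text does not give the price), £50 a week buys 2600 lines a year.

    from math import comb

    combinations = comb(49, 6)            # 13,983,816 possible draws
    lines_per_year = 50 * 52              # £50 a week at an assumed £1 per line
    years_to_jackpot = combinations / lines_per_year
    print(combinations, round(years_to_jackpot))  # ~13.98 million combinations, ~5378 years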
The failure of the general public and experts to understand even simple
probabilities is one of the many factors that promote misunderstanding between
perceptions, estimates or communication of risk between experts and the general
public.
Table 8.1 is taken from Morris and Bate (1999). It shows the risk of dying
from a variety of causes. Tables such as this can be attractive and have often
been used by experts to communicate risk to the public at large. However, their
practical usefulness is limited because the public experience difficulty in
interpreting them in context. Very few people have been struck by lightning so
how can the rest of the population use this as a measure of their chances of being
infected by salmonella from poultry?
Historically, expert thinking on how to communicate risks has been
somewhat outdated and ineffective. It appears that many food safety experts
seem to think that all they have to do is (Fischoff, 1995):
• Get the numbers right.
• Inform the public of the numbers.
• Explain what we mean by the numbers.
• Show them that they have accepted similar risks in the past.
• Show them that it is a good deal for them.
• Treat them nice.
• Make them partners.
This attitude is patronising, relies heavily on numerical estimations of risk and
will fail to work in light of the outrage factors outlined below.
An appreciation of how the general public understands, or fails to understand,
risk and probability is absolutely critical to successful policy-making by
government and experts in terms of how they should anticipate the public
response to hazards and risks. Furthermore, this awareness is critical to
improving risk communication between experts, lay people and the decision
makers. Without this understanding the best efforts of experts and government
will fail.
8.3 Risk perception
People's attitudes to risks are determined by more than numerical expressions of
probability alone. Risk is not just about science. There is a human factor. In
other words, human psychology plays a fundamental role.
8.3.1 The human element and outrage factors
Human beings are not machines. They do not process information like
computers but are subject to emotions and other psychological influences. Not
surprisingly their perception of risk is heavily motivated by these psychological
factors. The consequences for risk communication are enormous. Various
psychological techniques have shown how the general public perceives risks.
Studies have shown that the public's reaction to any given risk can be motivated,
just as much if not more, by these emotional reactions than by the particular
Table 8.1 The numerical risk of dying from various causes
Cause of death Risk of dying
Smoking 10 cigarettes a day 1 in 200
All natural causes age 40 1 in 850
Violence or poisoning 1 in 3300
Influenza 1 in 5000
Accident on the road 1 in 8000
Accident at home 1 in 26 000
Accident at work 1 in 43 500
Radiation working in radiation industry 1 in 57 000
Homicide 1 in 100 000
Salmonella infection from poultry 1 in 5 000 000
Struck by lightning 1 in 10 000 000
Nuclear power station radiation leak 1 in 10 000 000
Beef on the bone 1 in 1 000 000 000
hazard (in terms of severity and probability) associated with that risk (e.g. Fife-
Shaw and Rowe, 1996). Indeed some authors (Sandman et al., 1993) express this
in the equation:

    Risk = Hazard + Outrage
No matter how serious the risk (in technical terms) and no matter how much
technical detail is used to explain it, it is the degree of outrage that will
determine much of the public's response to the risk. Psychological studies have shown that there are a number of outrage factors:
1. Choice. Is the risk voluntary? Some authors have expressed this crudely as
the difference between skiing and being pushed down a mountain with two
planks of wood attached to one's feet – the risk is the same but reaction is
markedly different depending on whether or not one has chosen to
participate. In food terms, an example would be that consumers in the UK
apparently prefer to be exposed to the quantifiable and demonstrable risks
associated with pathogens in raw milk than to the as yet unquantifiable risks
associated with genetically modified (GM) foods.
2. Control. Arguably, the driver and passengers in a car are exposed to the
same risk yet the driver is more comfortable by virtue of feeling in control.
It can be argued that consumers are more willing to accept the presence of
salmonella in poultry because they have the potential to cook and handle the
product in such a way as to negate the risk associated with it. They do not
feel such control with GM foods.
3. Fairness. People are more willing to accept risks if they perceive that
everybody is equally affected by them. Press reports about GM foods
having been banned by the caterers supplying Monsanto, the UK Houses of
Parliament and the European Parliament can have done little to reassure the
public at large (BBC, 1999).
4. Trust. Is the organisation responsible for managing the risk trustworthy?
Trust is multidimensional and appears to be linked with perceptions of
accuracy, knowledge and concern for public welfare (Frewer et al., 1996).
In the UK public trust in the Ministry of Agriculture Fisheries and Food and
to some extent the Department of Health was seriously eroded by the bovine
spongiform encephalopathy (BSE) crisis. Largely as a counter to this, the
Food Standards Agency (FSA) was set up as an independent food safety
watchdog to protect the public's health and consumer interests in relation to food. The FSA has been working hard to ensure that it is the UK's most reliable source of advice and information about food. To this end the FSA's
guiding principles are: putting consumers first; being open and accessible;
and being independent.
5. Morality. Is the risk moral? To many people, including Prince Charles, GM
foods are immoral in the sense that the associated manipulation of genetic
material is tantamount to playing God with these foods.
6. Familiarity. People are more comfortable with those risks that they have
lived with from day-to-day than those which they feel they know nothing
about. A recent survey of consumer attitudes (FSA, 2001) revealed that,
despite concern about hygiene standards in takeaways and fast-food outlets,
two-thirds of people in the UK visit them on an occasional or regular basis.
7. Memorability. Does the risk stick in the mind? Health scares in general,
and food scares in particular, can be remarkably memorable. Emotive
newspaper headlines such as 'Listeria hysteria' and 'Frankenstein foods' are
designed to attract attention and are unforgettable.
8. Catastrophe. Outrage is elicited where a large group of people are affected
at the same time in the same place. Train and aeroplane crashes receive much prominence even though the deaths associated with them are rarer and far fewer than the daily death toll on the roads. Each year in England and Wales there are more than three times as many laboratory-reported cases of human infection due to campylobacter as due to salmonella. However, the fact that salmonella cases tend to be clustered around groups of people, a specific restaurant or a specific event means that this organism attracts more attention than campylobacter, for which the vast majority of cases are sporadic.
9. Dread. How terrifying is the risk? Emotive phrases such as 'mad cow disease' or 'Frankenstein foods' can very easily induce fear in the minds of
the general public.
10. Benefits. It is a fundamental truism that we are happy to take risks if we
perceive some benefit from doing so. Consumers expect food that is
cheaper, tastier or more nutritious. It appears that they are more willing to
accept risks with products with these attributes.
11. Impact on vulnerable groups. Risks that impact on vulnerable members of
society, such as the young, the elderly or pregnant women, are much less
acceptable. Exposing children to the risks from verotoxigenic E. coli O157
is very likely to induce outrage among the general public, particularly when
images of children being treated by kidney dialysis or life support
equipment are involved.
In total, up to 47 outrage factors have been identified (Covello and Merkhofer, 1994). These include:
• Public understanding of the risks.
• Whether or not scientists actually understand the risks involved.
• Whether or not scientists can state them clearly to the public.
• Media attention.
• Whether or not the risk can be reversed.
• Whether someone could be blamed or identified as being at fault.
Outrage factors are more complex than this sequential one-dimensional list
suggests. Psychologists have used psychometric techniques to determine people's responses to combinations of outrage factors. One approach is the factor-analytic representation shown in Fig. 8.1. It depicts the results obtained when people's responses to dread factors and unknown risks are represented on a two-dimensional basis (Slovic, 1987). Dread factors are a measure of whether or not the risks are controllable, global, catastrophic, of high risk to future generations or involuntary. Unknown risks are those that are not observable, are not recognised by those affected, have delayed effects, are new or are unknown to science.
Risk factors that have the greatest effect on the general public such as nuclear
power stations and DNA technology appear in the top right hand corner of the
graph. These are rated highest in terms of both dread factor and unknown risk
and are also those that people feel most require government intervention. In
contrast the opposite corner consists of those factors that are demonstrably more
risky in terms of death or illness but are relatively known and dreaded less. Thus,
the general public appears to be much less concerned about the consumption of
alcohol or the use of swimming pools even though they are palpably more risky.
An equivalent study carried out on food related risks in the UK by Sparks and
Shepherd (1994) found a markedly similar picture (Fig. 8.2). Fife-Shaw and
Rowe (1996) report comparable results. In the light of the results, public fears
about GM foods and food irradiation are perhaps unsurprising.
8.3.2 Expert perception versus public perception
A fundamental explanation for ¡®risk mis-communication¡¯ is the evidence that
suggests experts and non-experts can perceive the same risk in vastly different
ways.
Studies of 'expressed preferences' have shown that perceived risk is quantifiable and predictable. Table 8.2, derived from Slovic (1987), shows the results when various groups of people were asked to rank up to 30 hazards in terms of their 'riskiness'. An activity with a score of 1 was perceived by that

Fig. 8.1 Risk factor analysis for general risks. (After Slovic, 1987.)

group as being the 'most risky' while a score of 30 was seen as being the 'least risky'. Students were used as examples of the general public; activity club
members were used as examples of those people who would either participate in
a particular activity or use the technology associated with it; experts were used
as examples of the experts. Not all the answers are shown here but it is clear that
each of the different groups rated the various hazards in a markedly different
manner. For example, students ranked nuclear power as being the most risky
activity whereas the experts gave this the lowly ranking of 20.
Further evidence for this risk-perception dichotomy is obtained when the
factor-analytic representations shown in Figs 8.1 and 8.2 are recalculated and
redrawn: the responses of experts are vastly different from those of non-experts
(Slovic, 1987; Sparks and Shepherd, 1994; Fife-Shaw and Rowe, 1996). For
example, whereas the general public tend to consider risks like nuclear power
stations or food irradiation to be 'unknown and severe', placing them in the top right hand quadrant, experts tend to consider such risks to be 'known' (perhaps
Fig. 8.2 Risk factor analysis for food risks. (After Sparks and Shepherd, 1994.)
Table 8.2 Risk ranking by different social groups (1 = highest risk)
Activity or technology Students Club members Experts
Nuclear power 1 8 20
Motor vehicles 5 3 1
Handguns 2 1 4
Smoking 3 4 2
Motorcycles 6 2 6
Alcoholic beverages 7 5 3
General aviation 15 11 12
not surprisingly) and less 'severe', placing them more towards the bottom left hand quadrant. Further, experts tend to consider quantifiable risks like alcohol or a high sugar diet to be more 'severe' and place them further to the right hand side of the representation than do the public at large.
Psychometric studies such as these explain why the risks from technologies
such as GM foods can elicit differing reactions from experts compared with the
general public and why the general public is so concerned with such risks. They
highlight the tremendous scope for confusion and misunderstanding if experts
do not recognise and appreciate that the general public can perceive risks in a
different manner. Many issues and many problems in risk communication, not
just in food safety, stem from this dichotomy in perception.
8.4 The concept of communication
The Oxford English Dictionary defines communication as 'the imparting or exchanging of information by speaking, writing or by using some other medium'. In the context of risk communication, perhaps a more useful definition is 'The two way process whereby one party exchanges a message, idea, action or intention to another' (Mitchell, 2000).
The three components of communication are: the audience with whom one
wishes to communicate; the message one wishes to impart; and the medium by
which it is conveyed. Communication is a two-way process. As such, it will fail
unless it determines the needs of the audience, adjusts the message to reflect
those needs, and then confirms that the correct message has been received.
8.4.1 The audience
Consumers and other customers are sensible. They know, often by intuition,
when they are being talked down to, patronised or lied to. Nevertheless, some
experts wittingly or unwittingly still try to pull the wool over their eyes. The
Food and Nutrition Alliance has published a list of warning signs that consumers
should look for when evaluating claims made about foods and food safety. These
are the '10 Red Flags of Junk Science' that should immediately raise suspicions about any food safety or risk communication claims (Bruhn, 1998):
• Recommendations that promise a quick fix.
• Dire warnings of danger from a single product or regimen.
• Claims that sound too good to be true.
• Simplistic conclusions drawn from a complex study.
• Recommendations based on a single study.
• Dramatic statements that are refuted by reputable scientific organisations.
• Lists of 'good' and 'bad' foods.
• Recommendations made to help sell a product.
• Recommendations based on studies published without peer review.
• Recommendations from studies that ignore differences among individuals or groups.
8.4.2 Different types of audience
Risk communication is complicated by the fact that 'the audience' does not comprise one homogeneous group whose members would all receive 'the message' in the same way. Different groups within a population (e.g. health
practitioners or parents or the elderly) can have different information needs.
Consequently within the same risk communication strategy it may be necessary
to deliver different messages targeted at each population subgroup. The social
context within which messages are received is also crucial. For example, people
of a lower socio-economic status tend to receive health information from a few,
local community-based sources (National Research Council, 2001). Almost any
criterion that can be used to divide up the population can produce subgroups that
might perceive risk messages differently and will therefore need to be taken into
account, e.g. gender, age, health status or ethnic background (Fischoff and
Downs, 1997; Breakwell, 2000).
Further, psychologists have identified at least four different types of people,
each of whom perceives risks differently and as a consequence will react
differently to the same risk message.
1. Egalitarians. This group perceives the balance of nature as fragile, distrusts expertise and strongly favours public participation in decisions.
2. Individualists. Such people see nature as robust and want to make their own decisions.
3. Hierarchists. These see nature as 'robust within limits' and want well-established rules and procedures to regulate risks.
4. Fatalists. These see life as capricious and attempts at control as futile.
Rarely will any risk communication exercise be aimed at any of these groups
individually. Any risk message aimed at the population as a whole will generate a
range of markedly differing responses influenced by the characteristics listed
above. A spectrum of responses should be expected from any such exercise. As an
added complication, the public¡¯s response can fluctuate as risk messages evolve.
8.4.3 The message
Most people judge messages not by their content but by the credibility of the
messenger. If the messenger is not credible then the message is likely to be
disregarded, no matter how technically correct it is, how well intentioned and
well delivered. Indeed, some research (e.g. Frewer et al., 1996) suggests that
well-presented arguments from distrusted sources actually have a negative
effect. It can appear that the sender is not only untrustworthy but also devious.
In practice the risk messages that consumers receive will comprise multiple
messages, from many sources (Trautman, 2001). Individual issues can comprise
difficult and complex ideas, yet preparing even a single risk message can be
difficult. It may require choosing between a message that is so extensive and
complex that only experts can understand it and a message that is more easily
understood by non-experts but is selective and thus subject to challenge as being
inaccurate or manipulative (National Research Council, 1989). There is no easy
answer.
Important points to remember about risk messages are as follows:
• Messages are usually judged first by whether or not their source is trusted.
• Intentional communication is often only a minor part of the message actually conveyed.
• Responses to messages depend not only on content but also on manner of delivery, especially emotional tone.
• Experts no longer command automatic trust, no matter how genuine their expertise.
• Trust is generally fostered by openness, both in avoiding secrecy and in being ready to listen.
8.4.4 Risk comparisons
Experts often try to communicate risk by means of risk comparisons (Table 8.1).
The underlying philosophy is that by numerically comparing a new risk to a
known risk then the non-expert can perceive the new risk in its proper context.
The flaw in this approach is that people seldom react to risks in such a cool
logical fashion (Section 8.4.3). Indeed, it has been suggested that use or over-use
of risk comparisons can damage the credibility of the messenger (Department of
Health, 1998).
Guidance for using comparisons includes:
• Avoid comparisons.
• Do not exaggerate the risk of rare events.
• Using comparisons to imply acceptability is dangerous.
• Compare like with like.
Nevertheless, in the absence of better information on microbiological risks in
foods, risk comparisons may be the only method of risk communication
available for the foreseeable future.
8.4.5 The medium
Experts communicate with each other by means of peer-reviewed publications,
specialist fora and conferences. These tend not to be available to the general
public. The vast majority of people receive their information about risks in foods
from the media. Indeed this is often the first and only source of information
about risks for many people. The media have two objectives: (a) to educate and
inform, and (b) to make a profit. It is unavoidable that sometimes the media tend
to exaggerate or sensationalise issues in order to attract customers. As a result,
the media can sometimes act as 'amplifiers' of risk messages, bringing them to a wider audience and increasing public concern.
In order to be prepared for these events it is necessary to know the 'triggers' that stimulate or amplify media interest in a story. A possible risk to public
health is more likely to become a major story if the following are prominent or
can be made to appear prominent (Health and Safety Executive, 1998):
• Questions of blame.
• Alleged secrets and cover-ups.
• Human interest through alleged heroes, villains, dupes, etc.
• Links to existing high-profile issues or personalities.
• Conflict, particularly between experts or between experts and others.
• Story is a sign of further problems.
• Many people exposed to the risk, even at low levels.
• Strong visual impact, e.g. pictures of the victims.
• Links to sex or crime.
• Reference back to other stories.
8.5 Risk communication
Risk communication is a highly specialised form of communication. To be
effective it needs to take account of all the facets, technical and human,
described above. The Codex Alimentarius Commission currently defines risk
communication as ¡®an interactive exchange of information and opinions
concerning risk among risk assessors, risk managers, consumers and other
interested parties¡¯ (FAO/WHO, 1997). In conjunction with risk assessment and
risk management, risk communication constitutes the Codex microbiological
risk analysis paradigm. It follows that risk communication needs to be based
upon a sound assessment of the risk under consideration. Further, some might
consider that, in practice, risk communication is a critical component of risk
management.
8.5.1 The benefits and uses of risk communication
The benefits and uses of risk communication are largely self-evident. Risk
communication allows one party to communicate risk to another in order to:
• Demonstrate that a proper risk assessment has been conducted and that risk management procedures are in place.
• Justify any costs, alterations or restrictions that might be required to implement the risk management procedures.
• Communicate the actions that need to be taken to accomplish the risk management procedures, e.g. avoid certain products or ingredients, alterations to pasteurisation requirements or even the withdrawal of product from sale.
There may be other benefits and uses but they will tend to be subsets of the three
listed above. Indeed the risk communication process and its benefits will tend to
be the same for government, enforcers and health professionals and industry:
only the messages and the medium will change.
In 1998 an FAO/WHO Expert Consultation on the Application of Risk
Communication to Food Standards and Safety Matters set out the goals of risk
communication in more detail (FAO/WHO, 1998):
1. Promote awareness and understanding of the specific issues under
consideration during the risk analysis process, by all participants.
2. Promote consistency and transparency in arriving at and implementing risk
management decisions.
3. Provide a sound basis for understanding the risk management decisions
proposed or implemented.
4. Improve the overall effectiveness and efficiency of the risk analysis process.
5. Contribute to the development and delivery of effective information and
education programmes, when they are selected as risk management options.
6. Foster public trust and confidence in the safety of the food supply.
7. Strengthen the working relationships and mutual respect among all
participants.
8. Promote the appropriate involvement of all interested parties in the risk
communication process.
9. Exchange information on the knowledge, attitudes, values, practices and
perceptions of interested parties concerning risks associated with food and
related topics.
8.5.2 Proactive and reactive risk communication
The risk communication process will differ depending upon the reason for undertaking it. Proactive risk communication is the easiest, primarily because one has
time to plan and test the mechanisms involved. Indeed it is an integral
component of overall risk analysis planning and implementation. It can be
greatly improved by having a risk communication strategy (see below).
Reactive risk communication is required when problems arise with specific
foods or a sector of the industry and may be required to reassure others that
everything possible is being done to minimise or eliminate the risk. Although the
problem might be unforeseen it is a relatively safe bet that any organisation will
face problems from time to time. In a crisis, a great deal can be gained from
having at least the elements of risk communication planned in advance.
8.5.3 An example of a risk communication strategy
It is possible to be proactive and plan in advance to have a strategy for risk
communication. Table 8.3 is based upon the Pointers to Good Practice for
Communicating about Risks to Public Health published in 1998 by the
Department of Health. The checklist can be used to identify difficult cases in advance and to guide reaction to incidents as they occur.

Table 8.3 Good practice in communicating about risks (Department of Health, 1998)

Anticipating public impact
1. Responses to risks will be amplified by outrage factors and media triggers.
2. Knock-on effects are often caused by responses to the original risk. Plan in advance for potential indirect economic, social and political consequences.

Planning a strategy
3. Clear aims are essential:
• What do you want to achieve or avoid?
• Who do you need to agree them with in advance?
4. Identify the key stakeholders:
• Not only the intended audiences but others who may react or who can affect what happens.
• What do they stand to gain or lose from different outcomes?
5. Consider how they may perceive the issue:
• Can this be investigated or influenced?
• What can be done to enhance trust?
• What other issues may stakeholders be responding to?
6. Check for apparent inconsistencies with previous messages or other actions:
• If unavoidable, these need to be explained.
7. Keep all the above (including aims) under review as the situation develops.

The process of communication
8. Plans must determine who needs to be involved at each stage of message preparation and release. It might be a good idea to draw up standard lists in advance.
9. Ensure that:
• Choices are consistent and defensible.
• Any lack of openness is both necessary and well explained.
• Mechanisms for involvement are made clear to others.
10. Check what else is being done to deal with the risk. What counts is the overall impression conveyed.

Content of communication
11. Be careful to address audiences' values (e.g. perceived fairness, or a need to vent anger), as well as providing factual information. Keep checking the emotional tone used.
12. Acknowledge uncertainties in scientific assessments.
13. In giving statements about probabilities:
• If relative risks are cited (e.g. 'the risk has doubled'), the baseline risk must be made clear.
• Any risk comparison ('the risk from X is less than from Y') should be relevant to actual choices.
• Avoid comparisons that may seem unfair or flippant, e.g. juxtaposing voluntary and involuntary risks.
14. If alternative options have benefits as well as risks, ensure that both are fairly spelt out. In any case bear in mind framing effects of wording (e.g. 'lives lost' versus 'lives saved').

Monitoring decisions and outcomes
15. At the start of an episode, set up procedures to monitor events and actions.
16. Afterwards, review the strategy taken and outcomes reached – desirable or otherwise – and disseminate lessons for future practice.
8.6 The future of risk communication
The science of microbiological risk analysis in foods is still in its infancy.
Consequently, risk communication in this field has yet to develop to its full
potential. Examples of good practice would be extremely useful, but to date it is
hard to find any that are widely, or even narrowly, acknowledged as being
suitable models for others. Clearly, much research and sharing of good practice
are required.
An indication of the areas of research and procedures that need to be
developed further can be gleaned from the reports of two expert consultations,
one on risk communication and the other on strategic planning (FAO/WHO,
1998; WHO, 2001):
• If risk communication is to be effective, then key issues dealing with the process itself must be addressed. These include the involvement and interaction of all interested parties; the use of persons trained in risk communication; an assurance that the risk communication is received and understood; and the fostering of transparency during the entire process.
• Practitioners of food safety risk analysis should seek to involve and gain input from all interested parties. This input will help risk assessors and managers to become aware of and consider valid issues and concerns other than science.
• Persons with training and experience in the application of the principles and procedures of risk communication should be part of any crisis management team involved in a food safety issue. Training programmes in the principles and practices of risk communication should be established for both risk assessors and risk managers.
• Communications between and among risk assessors, risk managers and other interested parties should use language and concepts that are readily understood by the target audience. This includes clearly identifying what is science, what are value judgements and what benefits, if any, are involved.
• Risk analysis practitioners should use risk communication procedures to make the risk assessment process and the resulting risk management decisions as transparent as possible. This will increase the likelihood of both public understanding and acceptance of the risk management option(s) selected.
• Generic communication strategies need to be developed based on these recommendations. They should take into account local differences and information needs. Constant refinement of the risk communication message and process, following feedback from evaluation activities, is essential.
Finally, communication of microbiological risks in foods is clearly still in its
infancy. Nevertheless, if food microbiologists and other parties conducting
microbiological risk analysis in food begin to take account of the factors
described in this chapter, then a vitally important first step will have been taken.
8.7 References
BBC (1999) 22 December: http://news.bbc.co.uk/hi/english/sci/tech/newsid_574000/574245.stm
BREAKWELL G M (2000) Risk Communication: factors affecting impact. British Medical Bulletin 56 (1) 110–120.
BRUHN C M (1998) Communicating food safety to the consumer. Dairy, Food and Environmental Sanitation 18 742–744.
CODEX ALIMENTARIUS COMMISSION (1998) Draft Principles and Guidelines for
the conduct of Microbiological Risk Assessment. ALINORM 99/13A
Appendix II, FAO, Rome.
COVELLO V T and MERKHOFER, M W (1994) Risk Assessment Methods. Plenum
Press, New York.
DEPARTMENT OF HEALTH (1998) Communicating about Risks to Public Health: Pointers to Good Practice. HMSO, London.
FAO/WHO (1997) Report of the Twenty-second Session. Codex Alimentarius
Commission, FAO, Rome.
FAO/WHO (1998) Expert Consultation on the Application of Risk Communication
to Food Standards and Safety Matters, FAO, Rome.
FIFE-SCHAW C and ROWE G (1996) Public perceptions of everyday food hazards: a psychometric study. Risk Analysis 16 (4) 487–500.
FISCHHOFF B (1995) Risk perception and communication unplugged: twenty years of process. Risk Analysis 15 137–145.
FISCHHOFF B and DOWNS J (1997) Communicating foodborne disease risk. Emerging Infectious Diseases 3 (4) 489–495.
FSA (2001) UK Food Standards Agency Survey 2001 – Top ten facts and figures. http://www.food.gov.uk/multimedia/webpage/consumersurvtopten
FREWER L J, HOWARD C, HEDDERLEY D and SHEPHERD R (1996) What determines trust in information about food-related risks? Underlying psychological constructs. Risk Analysis 16 (4) 473–486.
HAIGH J (1999) Taking Chances: Winning with Probability. Oxford University Press.
HEALTH AND SAFETY EXECUTIVE (1998) Risk Communication – A Guide to Regulatory Practice. HMSO, London.
MITCHELL R T (2000) Practical Microbiological Risk Analysis in Food.
Chadwick House Group Ltd, London.
MORRIS J and BATE R. (1999) Fearing Food: Risk, Health & Environment.
Butterworth-Heinemann, Oxford.
NATIONAL RESEARCH COUNCIL (1989) Improving Risk Communication. National
Academy Press, Washington, DC.
NATIONAL RESEARCH COUNCIL (2001) Science and Risk Communication: A Mini-Symposium Sponsored by the Roundtable on Environmental Health Sciences, Research and Medicine. National Academy Press, Washington, DC.
SANDMAN P M, MILLER P M, JOHNSON B B and WEINSTEIN N D (1993) Agency communication, community outrage, and perception of risk: three simulation experiments. Risk Analysis 13 (6) 585–598.
SLOVIC P (1987) Perception of risk. Science 236 280–285.
SPARKS P and SHEPHERD R (1994) Public perceptions of the potential hazards associated with food production and food consumption: an empirical study. Risk Analysis 14 799–806.
TRAUTMAN T D (2001) Risk communication – perceptions and realities. Food Additives and Contaminants 18 (12) 1130–1134.
WHO (2001) Report of a WHO Strategic Planning Meeting convened by the
Food Safety Programme, Geneva.
Part II
Implementing microbiological risk assessments
9 Implementing the results of a microbiological risk assessment: pathogen risk management
M. Van Schothorst, Wageningen University

9.1 Introduction
The aim of a microbiological risk assessment (MRA) is to provide risk managers
with answers to one or more questions that may enable them to make better
informed decisions. For example, risk managers may have encountered a
problem and wish to know how big it is in order to decide whether control
measures are needed. They may also want to know the potential options
available to control the problem. Perhaps they merely want to be assured that the
problem is a one-time accident rather than a continuous threat to public health.
Sometimes, microbiological risk assessors can answer a question very rapidly.
Based on extensive experience and common sense, they come to a conclusion
and make recommendations. On the other hand, an MRA may be very elaborate;
it may take several months and be carried out by several experts. Examples of
elaborate risk assessments include those carried out in the USA for Salmonella
enteritidis in eggs (FSIS, 1998) and Listeria monocytogenes in ready-to-eat
foods (FSIS, 2001). The MRAs carried out by the FAO/WHO experts are other
examples of very detailed assessments (FAO/WHO, 2001).
Whatever the depth and sophistication of the MRA may be, it provides risk
managers with data or recommendations, which help them to make the appropriate
decisions. According to the Codex document on microbiological risk management
(CAC, 2001a), risk managers should consider various options for control
measures. In order to provide data on the effects of these options, risk assessors
may be asked to simulate scenarios to determine the possible outcome of
implementing several control options. Although all stakeholders in the food chain,
in practice, could apply this risk management model according to the Codex
Alimentarius, it is mainly applied by governments or intergovernmental bodies
such as Codex. Food industries use a similar approach during the development of new products, but the aim and outcome are different.
A governmental risk assessment deals with all kinds of similar products in the
market made by different producers. The risk assessment will provide risk
managers with a risk estimate which can be, for example, an estimation of the
number of people that may get a type of illness as a consequence of consuming a
particular food containing a (certain level of a) certain microorganism. When
different scenarios are studied, the risk estimates may change according to the
control options considered. Clearly, those control measures that will result in a
lower estimate of the number of illnesses will be evaluated for implementation
and, when appropriate, considered for inclusion in generic hazard analysis
critical control point (HACCP) plans.
In the food industry, food safety managers do not normally express the
outcome of the simulation of different control measures as an estimated number
of cases of illness. They are more likely to estimate the level of a certain
microorganism in the food to be marketed, and to compare this with a similar
food with a good safety record. In the food industry this is called food safety
benchmarking. Well-established good hygienic practices (GHP) and HACCP are
the basis of the safety record. When new formulations, new technologies and
new equipment are going to be used, their effect on the safety of the final
product will be estimated. This process is part of the hazard analysis in the
HACCP system. Food industries do not determine how many more, or how
many fewer, people may become ill after the consumption of the new food; the
target is to prevent illness. In performing this hazard analysis the same
methodology can be used as is used in the product/pathogen/pathway analysis in
MRA. Predictive models, Monte Carlo simulations, etc. can be useful, but the
end-point will be an exposure assessment rather than a risk characterisation.
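To make this contrast concrete, the minimal Python sketch below illustrates the kind of Monte Carlo exposure estimate referred to above. All distributions and parameter values are purely illustrative assumptions, not data from any real assessment: an assumed initial contamination, a process reduction and growth before consumption are propagated to a level at consumption, and the end-point is the fraction of simulated servings exceeding a benchmark level rather than a number of illnesses.

import random

N = 100_000                 # number of simulated servings
BENCHMARK_LOG10 = 2.0       # hypothetical benchmark: 100 cfu/g at consumption

exceed = 0
for _ in range(N):
    h0 = random.gauss(0.0, 0.5)                 # assumed initial level, log10 cfu/g
    reduction = random.uniform(2.0, 4.0)        # assumed process kill, log10 cycles
    growth = random.triangular(0.0, 4.0, 2.0)   # assumed growth before consumption, log10
    level = h0 - reduction + growth             # level at consumption, log10 cfu/g
    if level > BENCHMARK_LOG10:
        exceed += 1

print(f"Fraction of simulated servings above the benchmark: {exceed / N:.2e}")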
It is important to make the distinction between a governmental MRA and
an industrial hazard analysis (HA). There are several differences between the
implementation of the results of a governmental MRA and the results of an
industrial HA. Industrial HA starts with an assessment of their raw materials
and their suppliers. An estimate of the initial level of contamination with the
hazard of concern may thus be obtained. This forms the basis of the
elaboration of the product formulation, processing conditions, shelf-life and
shelf-life conditions, as well as instructions for preparation and use necessary
to obtain the required level of safety. If a supplier is not able to deliver what is
needed to produce a safe product, another supplier is found or another
technology applied. Thus one of the first activities of industrial food safety
managers is to determine the level of safety they want to achieve and to ensure
that this will be achieved. Frequently this level is regarded to be 'as low as reasonably achievable' (ALARA), but in practice it is often the benchmarking mentioned earlier. Safety is 'built-in' and hazards are 'engineered out'.
However, incidents and unforeseen events still happen occasionally during all
steps of the food chain.
Governmental risk managers have to determine the level of risk they are
willing/prepared to accept or tolerate in order to comply with the WTO/SPS
(sanitary and phytosanitary measures) agreement (WHO, 1997). Establishing such
a level of risk is a complex exercise, and while science should be the starting point,
consumer preferences, costs and feasibility all play a role in decision making.
A product submitted for import may be rejected if it endangers the
appropriate level of protection (ALOP), also called the 'acceptable level of risk'. Instead of the latter term, the expression 'tolerable level of risk' (TLR) is preferred because, while consumers may tolerate food safety risks, they are
reluctant to accept them. Moreover, risk assessment is mentioned in the SPS
agreement as a tool in setting an ALOP.
An ALOP is defined as:
the level of protection deemed appropriate by the Member [State]
establishing a SPS measure to protect human life or health within its
territory.
A food put on a market in another country should not endanger this appropriate
level of protection: imported foods should not lead to an increase in the number
of diseases caused by a certain microorganism in a certain food.
An ALOP and a TLR may both be expressed as an annual number of illnesses
per 100 000 of the population caused by a certain pathogen in a certain food
considered to be appropriate or tolerable. In theory, an MRA would be necessary
to determine whether a food would be acceptable for importation if this is in
doubt. If the outcome of the MRA were a risk estimate lower than the TLR, the food
would be accepted. If, however, the risk assessment resulted in a risk estimate
higher than the TLR, the food would be rejected.
Clearly, this procedure would hamper trade unnecessarily (which is
against the objectives of the WTO), and thus another way of dealing with the
SPS agreement had to be found. The food safety objective (FSO) concept has
been proposed to deal with this: it converts a 'level of illness' into a 'level of a hazard' (ICMSF, 1998). An FSO expresses the maximum frequency and/or concentration of a microbiological hazard in a food at the moment of consumption
that provides the appropriate level of health protection (CAC, 2001b).
9.2 Establishing food safety objectives
MRAs serve as a tool for risk managers to select or reinforce control measures
that will provide consumers with the appropriate level of health protection. The
Codex document on microbiological risk management (CAC, 2001a) specifies
the assessment of various options as one of the key elements of risk
management:
The primary objective of microbiological risk management options
assessment is an optimisation of the interventions necessary to prevent and
to control microbiological risks. It is aimed at selecting the option or
options that achieve the chosen level of public health protection for the
microbiological hazard in the commodity of concern, in an as cost
effective manner as possible within the technical feasibility of the
industry. Available options may be identified at national, regional or
international level in the context of international trade agreement
provisions.
There might be many different options for reducing microbiological risks, such as:
• avoiding foods with a substantiated history of contamination or toxicity;
• preventing contamination and/or introduction of pathogens at any stage in the food chain, including reducing the level of specific pathogens in primary production;
• preventing growth of pathogens by the combined action of extrinsic factors (e.g. chilling or freezing) and/or intrinsic factors (e.g. adjusting pH, aw; adding preservatives; employing microbiological competition);
• destroying pathogens (e.g. cooking, irradiation);
• establishing regulatory requirements and/or creating incentives for changes in attitudes that will contribute to risk reduction;
• labelling products with consumer information that either instructs regarding safe handling practices or warns regarding microbiological hazards that are likely to occur and for which adequate controls were unavailable;
• educating/informing the population at large or affected sub-groups about the steps they can take to reduce risks;
• establishing microbiological standards or other criteria and enforcing compliance;
• establishing microbiological food safety objectives (FSOs).
Usually, a combination of options will be more effective in reducing risks.
All these options need careful consideration, and which ones are chosen
depends largely on the product, the pathogen, the population at risk, technical,
economic and other societal considerations. For this reason, the following text
will not deal with most of them. However, the concept of FSOs will be
explained, because all risk management options and selected control measures
should result in a certain level of health protection.
The FSO concept was developed because it is difficult to assess whether an
ALOP or TLR will be achieved. An FSO converts the ALOP/TLR into para-
meters that can be controlled by food producers and monitored by government
agencies. The ALOP/TLR is an expression of a public health risk, while an FSO
expresses the level of a hazard in relation to this risk. The FSO can be defined
as:
the maximum frequency and/or concentration of a microbial hazard in a
food at the moment of consumption that provides the appropriate level
of health protection.
An example of an FSO is <100 L. monocytogenes/g in a serving of food at the
moment of consumption. The estimated level of protection achieved by meeting
this FSO is that the chance of someone getting ill from eating a food that
contains this concentration of L. monocytogenes would be 10⁻¹² (FAO/WHO, 2001). An FSO should be met through the implementation of GHP and HACCP systems as well as correct food preparation and use practices. This is in line with the Codex definition of food safety: 'assurance that food will not cause harm to the consumer when it is prepared and/or eaten according to its intended use'
(CAC, 1997a). Since FSOs define the level of a hazard at the moment of consumption, other criteria have to be used to define the level of a hazard that is expected at other points in the food chain. Such criteria have been called performance criteria and can be described as: 'the required outcome of a control measure or combination of control measures that can be applied to assure that an FSO is met' (ICMSF, 1998). In Section 9.3 this concept will be further developed and a proposal for a distinction between the terms 'performance criteria' and 'performance guidelines or standards' will be made.
Establishing an FSO is a risk management activity and not the result of a risk
assessment. It is a decision, based on scientific input, feasibility assessment, and
a judgement of the acceptability by stakeholders such as consumers and
industry. Ideally, an FSO would be based on the frequency or concentration of a
pathogen in a food that would not cause illness. This would be equivalent to
finding a no-effect dose, the value that is used for setting tolerable levels of daily
exposure for acutely toxic chemicals. Certain foodborne pathogens have clearly
definable threshold levels below which they pose no risk to the consumer. For
example, for certain toxigenic foodborne pathogens such as Staphylococcus
aureus a threshold concentration of cells can be estimated below which the
microorganism does not produce sufficient toxin to cause a measurable adverse
health effect (Jablonski and Bohach, 2001).
For infectious pathogens such a threshold is often assumed to be one viable
cell. Currently, most risk characterisation models are based on this assumption
(Whiting and Buchanan, 2001). An important outcome of an MRA is the
establishment of a relationship between the level of a hazard (frequency and/
or concentration) in a food and the incidence of the illness it causes in a given
population, which may be represented by a hazard characterisation curve. The
slope of this curve is specific to the hazard, the food, the illness and the
category of consumers for which the curve has been determined. If such a
curve is available for the incidence of illness for a specific pathogen–food combination, the selected ALOP can be positioned on the y-axis and the corresponding level of the hazard (FSO) can be obtained on the x-axis (see Fig. 9.1). Thus, the curve describes the relation between the level of a microbiological hazard in a specific food and its effect (for example the number of cases of diarrhoea) on the general population. When the TLR has
been set, the FSO can be determined.
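As a sketch of how an FSO can be read off such a curve, the Python fragment below interpolates a tabulated, entirely hypothetical risk characterisation curve, positions a selected ALOP on the risk axis and returns the corresponding hazard level, in the spirit of Fig. 9.1. None of the numbers is taken from a published assessment.

import math

# (log10 cfu/g at consumption, predicted illnesses per 100 000 population per year)
curve = [(-2, 0.01), (-1, 0.05), (0, 0.2), (1, 1.0), (2, 5.0), (3, 25.0)]

def fso_for_alop(alop):
    # Linear interpolation in log10(risk) between tabulated curve points.
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        if y0 <= alop <= y1:
            f = (math.log10(alop) - math.log10(y0)) / (math.log10(y1) - math.log10(y0))
            return x0 + f * (x1 - x0)
    raise ValueError("ALOP lies outside the tabulated curve")

print(f"FSO for an ALOP of 1 illness per 100 000: {fso_for_alop(1.0):.2f} log10 cfu/g")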
Even when no ALOP is determined and the risk assessment does not
provide the necessary information, FSOs can still be established.
Investigations of foodborne illnesses and epidemiological surveillance
programmes provide information about which foods have caused adverse
health effects and which pathogens were implicated. Industry records are in principle another important source of information. Many foods processed for
safety have an excellent history of providing an appropriate level of health
protection. When such foods have been implicated in foodborne illness this is
usually caused by deviations from good manufacturing/hygienic practices or
accidents that were not detected in time. A good example is the safety record
of industrially produced shelf-stable canned products. By analysing the
production of such a food, an estimate can be made of the level of a potential
hazard that may remain in the food. This level may then be used to establish a
performance criterion/standard or an FSO.
Risk managers must seek to provide evidence that the proposed FSO is
technically achievable through implementation of good hygienic practice (GHP)
and HACCP. If the FSO cannot be achieved, then the product, process and/or the
FSO should be modified. When this is not possible, or if the public does not
accept the modified FSO, the consequence may be that the products, processes
or foods need to be banned. An exporting country may encounter the same
problem, i.e. that meeting the FSO is technically not achievable. This would
mean that the product could not be exported. Thus FSOs could play an important
role in providing the transparency and equivalence mentioned in the SPS
agreement.
Fig. 9.1 Hypothetical risk characterisation curve.
9.3 Developing food safety management strategies
9.3.1 Definitions
From the information provided in an FSO, regulatory authorities and food operators can select appropriate control measures to achieve the intended results (ICMSF, 1998). A control measure is 'any action and activity that can be used to prevent or eliminate a food safety hazard or reduce it to an acceptable level'
(CAC, 1997b). One or more control measures may be necessary at each step
along the food chain to ensure that a food is safe when consumed. In order to
design control measures it is necessary to establish what needs to be achieved
(the performance criterion) and how it will be achieved (the process and product
criteria). Control measures should be established according to GHP and HACCP
(CAC, 1997a, b).
In order to prevent confusion concerning parameters describing what should
be achieved, where and how, the following terminology will be used:
• Food safety objective: the level of a hazard at the moment of consumption.
• Performance standard: the level of a hazard at any other point in the food chain. NB: the use of the word 'standard' does not imply that the specified level of the hazard would be a regulatory mandatory requirement.
• Performance criterion: the outcome of a process step or a combination of steps (decrease or increase in the level of a microorganism or microbial toxin).
• Process criterion: a control parameter (e.g. time, temperature, pH, aw) at a step that can be applied to achieve a performance criterion.
• Product criterion: a parameter of a food that is essential to ensure that a performance standard or food safety objective is met.
• Microbiological criterion: the acceptability of a product or a food lot, based on the absence or presence, or number of microorganisms including parasites, and/or quantity of their toxins/metabolites, per unit(s) of mass, volume, area or lot.
When designing and controlling food operations it is necessary to consider initial pathogen contamination (H₀), reduction (R), growth (G) and possible recontamination (RC). These events can be represented by:

H₀ − ΣR + ΣRC + ΣG ≤ PS

in which PS stands for performance standard and the Σ indicates that several of these events may occur and should be summed. This is based on the ICMSF equation:

H₀ − ΣR + ΣI ≤ FSO

in which I stands for increase, i.e. both RC and G (ICMSF, 2002).
Since a PS can be at any point of the food chain, Fig. 9.2 may serve as an example of a more complete picture. The level of the hazard in raw materials, intermediate product and end-products may change along the food chain ('from farm to fork') due to all kinds of influences. At various stages a raw material or product may become contaminated (RC), the microorganism may grow (G) and may also be reduced in number (R). This may occur several times and the resulting effects are summed (Σ). The PS of one stage is the H₀ of the next stage, and the last PS (that of the preparation in the kitchen and further events before consumption) becomes the FSO. To give an illustration, the hypothetical fate of Listeria monocytogenes in a soft cheese will be considered. At the farm the milk is contaminated with 1 cell per ml (C). The receiving dairy factory does not accept milk containing >10 L. monocytogenes/ml (the dairy's H₀ and the producer's performance standard). The time and temperature during storage at the farm and transport to the dairy plant should thus limit the multiplication to a factor of 10, or three generations (G). At the plant, multiplication before thermal treatment should again be limited to a factor of 10 (G). The thermalisation of the milk should achieve at least a 10⁵ reduction (R) so that a level of 10⁻³ cells/ml is obtained. Cheese making means that a 10-fold increase is reached after heating, by draining of the whey and expressing the level in cells per gram. Growth during cheese making cannot completely be prevented, but should again be limited to a factor of 10 (G). The frequency of recontamination (RC) is kept under control by GHP and does not exceed 1 cell per 10 g, and thus the PS of 10⁻¹ cells/gram (used by the dairy plant) is met. The FSO has been set at <100 L. monocytogenes/g at the moment of consumption, thus growth during storage and distribution should not exceed a factor of 10³ (ΣG) but would preferably be less than this figure.

Fig. 9.2 Picture showing factors influencing food safety.
The equation is a good example of using the result of the product/pathogen/
pathway analysis performed during MRA or the results of the hazard analysis
performed in a HACCP study.
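The soft-cheese illustration can also be followed step by step in log₁₀ units, in the spirit of the ICMSF equation. The short Python sketch below simply books the hypothetical factors used in the text against the dairy's performance standard and the FSO.

import math

steps = [
    ("growth during storage and transport (x10)",  +1.0),   # dairy accepts <= 10/ml
    ("growth before thermal treatment (x10)",      +1.0),
    ("thermalisation (10^5 reduction)",            -5.0),
    ("concentration by whey drainage (x10)",       +1.0),   # level now per gram
    ("growth during cheese making (x10)",          +1.0),
]

level = math.log10(1.0)          # milk at the farm: 1 cell/ml -> 0 log10
for name, delta in steps:
    level += delta
    print(f"{name:45s} -> {level:+.1f} log10")

PS = -1.0    # dairy's performance standard: 10^-1 cells/g
FSO = 2.0    # < 100 cfu/g at the moment of consumption
# Recontamination is held below 1 cell per 10 g by GHP, so in this illustration it
# does not push the level above the PS.
print("PS met at end of manufacture:", level <= PS)
print("FSO met if distribution growth <= 10^3:", level + 3.0 <= FSO)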
9.3.2 Performance standards
The term performance standard (PS) is chosen because in trade these criteria
play an important role. The FSO sets the level of a hazard at the moment of
consumption, a stage of the food chain where foods are no longer traded. An
FSO for Salmonella in poultry meat may be 'absence in a serving'. Currently broilers in most countries contain this pathogen, and a government may want to
limit the contamination by setting a PS of 'not more than 15% of broilers may be contaminated'. Proper cooking and application of GHP during preparation
should ensure that the FSO is achieved, while the market is not unreasonably
challenged by a PS equal to the FSO, which in many countries is not achievable.
When a stable ready-to-eat (RTE) food is dealt with, the FSO and the PS may
be the same, but frequently a producer may want to build in a 'safety factor', in order to be 'on the safe side'. This takes into account that some abuse may occur
during further handling and that this should not lead to illness. The magnitude of
this 'safety factor' may be the result of an analysis of distribution, sales,
preparation and use practices carried out during the hazard analysis in a HACCP
study or an exposure assessment of an MRA. When microbial growth will occur
after a product leaves the factory, the PS is more stringent than the FSO; for
example, certain RTE products with extended shelf-life in which
L. monocytogenes can multiply. Obviously, the PS can be less stringent than
the FSO when a product needs to be cooked before consumption and when the
performance criterion of this preparation step, in combination with the H₀,
would ensure that the FSO would be met. The case of Salmonella in broilers is a
good example of this. An MRA can estimate whether a certain PS will meet the
targeted health protection.
It should be mentioned here that a PS can be set at any point in the food chain
and that it is identical to the 'acceptable level' to be achieved at a critical control point (CCP). A CCP is defined as: 'a step at which control can be applied and is essential to prevent or eliminate a food safety hazard or reduce it to an acceptable level' (CAC, 1997b).
9.3.3 Performance criteria
In the original ICMSF concept (ICMSF, 1998) no distinction was made between
a PC expressed as the level of a hazard and a PC expressed as a D-value or
another outcome of a process. This has led to some confusion and therefore the
term performance standard was introduced to express the level of a hazard. An
example of a PC is a 6D kill of Salmonella when cooking ground beef, or < 15%
of freshly slaughtered broilers contaminated with Salmonella as mentioned
above.
A PC does not refer only to a reduction in numbers; it may also be used to
limit recontamination and growth. For example, if the FSO for L. mono-
cytogenes in a non-stable RTE food is less than 100/g and the PS after a cooking
step during production is absence in 10 g, then the PC for recontamination (RC)
could be less than 1/g and the PC for growth (G) less than 10².
Validation is an increasingly important aspect of food safety management
(ILSI, 1999; CAC, 2001b). It is defined as: 'obtaining evidence that the elements of the HACCP plan are effective' (CAC, 1997b). Setting PCs based on
performance standards is an excellent means of ensuring that the system
becomes transparent and it will serve to obtain evidence of the equivalence
mentioned in the WTO/SPS agreement. It helps the shift from the old system of
compliance with processes and process criteria to compliance with objectives.
The consequence of this is, of course, that evidence needs to be provided that the
required PS is achieved with the PC applied. Validation can be performed
through challenge studies, by analysis of samples, by calculation, e.g. using D-
and z-values, etc.; a discussion of the merits of the different approaches is
outside the scope of this chapter. Validation is used to provide evidence that
certain data used in the MRA were correct; results of MRA cannot be used to
validate PCs in the food chain.
9.3.4 Process criteria
Process criteria are the control parameters (e.g. time, temperature, pH, aw) at a step, or combination of steps, that can be applied to achieve a PC. For example, the control parameters to achieve at least a 10⁶ reduction of L. monocytogenes in milk are 71.7 °C for 15 s (ICMSF, 1996). Process criteria are identical to critical limits (CAC, 1997b) when the control point is a CCP in a HACCP plan.
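The kind of calculation that can support such a process criterion is sketched below: a classical D/z-value model is used to check whether a time/temperature combination (a process criterion) delivers a required log reduction (a performance criterion). The D- and z-values shown are placeholders for illustration only and should not be read as recommended or published values.

def log_reduction(time_s, temp_c, d_ref_s, t_ref_c, z_c):
    # Classical D/z model: D(T) = D_ref * 10**((T_ref - T) / z); reductions = t / D(T).
    d_at_temp = d_ref_s * 10 ** ((t_ref_c - temp_c) / z_c)
    return time_s / d_at_temp

required_pc = 6.0      # performance criterion: a 6-log reduction
# Placeholder reference values for illustration only (not recommended values).
achieved = log_reduction(time_s=15.0, temp_c=71.7, d_ref_s=15.0, t_ref_c=65.0, z_c=6.0)
print(f"Achieved {achieved:.1f} log reductions; performance criterion met: {achieved >= required_pc}")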
Correctly applied process criteria for the preparation of food prior to consumption are very important. Cooks have no means of checking whether an
FSO is achieved. They can, and should, monitor parameters such as time and
temperature. Providing other information concerning the importance of good
kitchen practices is part of risk communication, and initiating active information
and education programmes was already mentioned in Section 9.2 as a risk
management option.
In the Codex document on General Principles of Food Hygiene (CAC, 1997a)
the following text refers to this:
governments should provide health education programmes which effectively communicate the principles of food hygiene to industry and consumers.
This document also mentions that:
Industry should ensure that consumers have clear and easily-understood
information, by way of labelling and other appropriate means, to enable
them to protect their food from contamination and growth/survival of
foodborne pathogens by storing, handling and preparing it correctly.
And moreover it is stated that:
consumers should recognize their role by following relevant instructions
and applying appropriate food hygiene measures.
9.3.5 Product criteria
Once products are ready for distribution and sale, care is necessary to ensure that
they do not become unsafe due to multiplication and/or toxin formation by
pathogens. Parameters in foods that are used to prevent unacceptable growth of
microorganisms are called product criteria. Multiplication and/or toxin
formation are dependent on the formulation, composition and 'environment' in the food. Parameters such as pH, aw, temperature, structure, additives, competitive flora and gas atmosphere are used to control growth. For example, to prevent L. monocytogenes reaching levels above 100/g in an RTE food during distribution, sale and storage at home, it may be necessary that a food has a pH < 4.6 or an aw < 0.92. Process criteria deal with treatments used to render foods safe; product criteria are used to keep them safe.
9.4 Establishing microbiological criteria
9.4.1 Sampling plans
One risk management option is to establish microbiological criteria or standards,
which serve various purposes in the trade of food. In principle they are intended
for the assessment of foods based on microbiological analysis. A Microbiological Criterion (MC) for food defines, according to Codex (CAC, 1997c):
the acceptability of a product or a food lot, based on the absence or
presence, or number of microorganisms including parasites, and/or
quantity of their toxins/metabolites, per unit(s) of mass, volume, area or
lot.
This document describes further how these criteria should be established and
applied:
A Microbiological Criterion consists of:
• a statement of the microorganisms of concern and/or their toxins/metabolites and the reason for that concern
• the analytical methods for their detection and/or quantification
• a plan defining the number of field samples to be taken and the size of the analytical unit
• microbiological limits considered appropriate to the food at the specified point(s) of the food chain
• the number of analytical units that should conform to these limits.
In the establishment of MCs, FSOs or PSs can be useful and an MC for a food
should be related to its FSO. An MC that is excessively stringent relative to an
FSO may result in rejection of food even though it has been produced under
conditions that provide an acceptable level of protection.
According to the Codex document, in order to decide whether or not an MC
should be established and what the content should be, consideration should be
given to the following:
• Evidence of actual or potential hazards to health (epidemiological evidence or the outcome of an MRA).
• The microbiology of raw materials (H₀).
• Effect of processing (R).
• Likelihood and consequences of contamination (RC) and growth (G) during handling, storage and use.
• The category of consumers at risk.
• The cost–benefit ratio of the application.
• The intended use of the food.
These considerations are of a very general nature and apply to all foods. When
dealing with specific foods, decisions must be made about where criteria are to be
applied in the food chain and what would be achieved by applying them.
Microbiological criteria differ in function and content from FSOs (see Table
9.1). However, occasionally the limit in a criterion is the same as an FSO or a PS
as, for example, in the case of the FSO for L. monocytogenes in a stable RTE
product. An FSO will normally not prescribe a sampling plan. For MCs it is
essential that such a plan is developed, because that will assist in achieving the
transparency and equivalence mentioned in the WTO/SPS agreement.
The Codex document specifies that, in developing sampling plans, the
severity of the hazard and assessment of the likelihood of its occurrence must
be considered, but for more guidance the document refers to ICMSF Book 2
(ICMSF, 1986). The first part of this book that deals with the scientific
rationale for the development of sampling plans has been revised and published
as ICMSF Book 7: Microbiological Testing in Food Safety Management
(ICMSF, 2002).
Table 9.1 Characteristics of FSOs and microbiological criteria

Food safety objective:
• A goal upon which food processes can be designed so the resulting food will be safe.
• Aimed at consumer protection.
• Applied to food at the moment of consumption.
• Components: maximum frequency and/or concentration of a microbiological hazard.
• Used only for food safety.

Microbiological criterion:
• A statement that defines the acceptability of a food product or lot of food.
• Confirmation that effective GHP and HACCP plans are applied.
• Applied to individual lots or consignments of food.
• Components: microorganism of concern and/or their toxins/metabolites; sampling plan; analytical unit; analytical method; microbiological limits; number of analytical units that must conform to the limits.
• Used for food safety or quality characteristics.
The ICMSF approach distinguishes three categories of hazards based upon
the relative degree of severity of their effects:
1. Severe hazards, life threatening.
2. Serious hazards, incapacitating but not life threatening.
3. Moderate hazards, severe discomfort of short duration.
This categorisation and the examples presented in Table 9.2 were based on the
best epidemiological data available at the time of publication, but may need to
be reviewed when new data become available.
The other factor to be considered is the likelihood of occurrence of an adverse
effect, taking account of the anticipated conditions of use. Here the ICMSF
again recognises three categories:
1. Conditions that would reduce the risk.
2. Conditions that would increase the risk.
3. Conditions that would not cause a change in risk.
Combining the three levels of severity of a health effect with the categories of
likelihood of occurrence leads to different levels of concern called 'cases' by the
ICMSF, case 7 being of lowest concern to food safety and case 15 of the
highest.
Taking into account the likelihood of a health effect, cases 9, 12 and 15
represent the highest levels of concern because they refer to situations where
pathogens can multiply in the food under expected conditions of handling,
storage, preparation and use. Cases 7, 10 and 13 represent the lowest levels of concern, because they refer to situations where the level of the hazard is likely to be reduced before consumption, for instance during preparation. Cases 8, 11 and 14 refer to intermediate situations where the level of the hazard would remain the same between the time of sampling and the time of consumption.

Table 9.2 Categories of hazards with some examples (based on ICMSF, 2002)

1. Moderate hazards (severe discomfort, short duration): S. aureus, V. parahaemolyticus, B. cereus, C. perfringens.
2. Serious hazards (incapacitating, not life threatening): Salmonella (non-typhi), Yersinia enterocolitica, Shigella (non-dysenteriae I), Listeria monocytogenes.
3A. Severe hazards (life threatening for the general population): C. botulinum, V. cholerae O1, S. typhi, enterohaemorrhagic E. coli.
3B. Severe hazards for restricted populations: Campylobacter jejuni, enteropathogenic E. coli, Listeria monocytogenes.
Based on these nine cases, the ICMSF developed two-class sampling plans in
which n indicates the number of sample units to be tested and c the number of
defective sample units that can be accepted. These sampling plans are
summarised in Table 9.3. The plans direct more of the available resources for
analysis towards those situations with a high level of concern.
Often 25 g or ml of the samples taken from a lot is analysed, but a smaller or
larger weight or volume can be used to decrease or increase the stringency of
the sampling plan. Using 25 g analytical units means that in Case 10
Salmonella would be 'absent' (not detected) in 125 g, and in Case 15 in 1.5 kg.
When pathogens are homogeneously distributed throughout a lot, or when
samples are taken at random, statistical methods can be used to express the
likelihood of contamination of the lot. Finding no Salmonella when applying
Case 10 would mean that 90% of the lots containing 2% defectives would be
accepted (with a probability of 95%). For Case 15 it would mean that 30% of
such lots would be accepted. However, in many cases the distribution of
contaminants is not homogeneous and random sampling is usually not
possible. This clearly illustrates that examination of batches, lots or
consignments of products for the presence of pathogens has only limited
value as a control measure.
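Under the random-sampling assumption discussed above, the acceptance probabilities quoted for Cases 10 and 15 follow directly from the binomial distribution, as the short Python sketch below shows for a lot containing 2% defective units.

from math import comb

def p_accept(n, c, p_defective):
    # Probability that at most c of the n sampled units are defective (binomial).
    return sum(comb(n, k) * p_defective**k * (1 - p_defective)**(n - k)
               for k in range(c + 1))

for case, n, c in [("Case 10", 5, 0), ("Case 15", 60, 0)]:
    print(f"{case} (n = {n}, c = {c}): "
          f"P(accept a lot with 2% defectives) = {p_accept(n, c, 0.02):.2f}")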
Table 9.3 Plan stringency (case) in relation to degree of health concern and conditions of use (based on ICMSF, 2002)

Columns give the conditions in which the food is expected to be handled and consumed after sampling in the usual course of events: (a) conditions reduce the degree of concern; (b) conditions cause no change in concern; (c) conditions may increase concern.

1. Moderate hazard (severe discomfort, short duration): (a) Case 7, n = 5, c = 2; (b) Case 8, n = 5, c = 1; (c) Case 9, n = 10, c = 1.
2. Serious hazard (incapacitating, not life threatening): (a) Case 10, n = 5, c = 0; (b) Case 11, n = 10, c = 0; (c) Case 12, n = 20, c = 0.
3A/3B. Severe hazard (life threatening for the general population, or severe for restricted populations): (a) Case 13, n = 15, c = 0; (b) Case 14, n = 30, c = 0; (c) Case 15, n = 60, c = 0.

n = the number of sample units tested; c = the number of defective sample units that can be accepted.
9.4.2 Establishment of microbiological criteria based on FSOs
The FSO states the level of a hazard at the moment of consumption; this is
normally not the point in the food chain where samples are taken and tested for
the frequency and/or the concentration of a pathogen. Therefore, the MCs have
to be related to other points in the food chain, i.e. to PSs. The nature of this
relationship will depend on whether the level or concentration of a certain
microorganism or a group of microorganisms (indicators) are measurable or not.
The proposed FSO for L. monocytogenes in a stable RTE food is less than
100/g at the moment of consumption. This concentration can be determined with
classical microbiological procedures such as a plate count or most probable
number (MPN) technique. An MC could be directly related to this concentration,
because in a stable RTE food, the level of L. monocytogenes would not change.
The number of samples to be taken would reflect the safety factor that a
government or a company applies. If the RTE food is not stable, then it will
depend on when the sampling is done, how much time is envisaged between
sampling and consumption and what the conditions for growth are expected to
be during this time. If a 100-fold increase were envisaged, then the criterion at
the moment of sampling would be absence of L. monocytogenes in 1 g of a given
number of samples of the product. This would still be measurable. However, if a
10 000-fold increase is foreseen, the criterion should be absence in at least 100 g,
which would become much more difficult to determine.
If the FSO for Salmonella in dried egg was less than 1/10 kg, testing for compliance would become impossible. In such a case, a criterion could be based on the concentration of an indicator group of microorganisms such as Enterobacteriaceae. When the initial number of Salmonella (H₀) in raw egg would be 1/g, a 10⁵ reduction should be obtained (PC) in order to achieve the FSO (assuming a 10-fold increase in numbers due to the evaporation of water during drying). The group of Enterobacteriaceae has more or less the same heat resistance as Salmonella (Cox et al., 1988). This means that in order to achieve the FSO, the number of these indicators should also be reduced by a factor of 10⁵. Assuming that the initial level of Enterobacteriaceae in raw egg is 10⁵/g, then the criterion would be absence of these indicators in a number of samples of 1 gram. This criterion is again measurable.
Indicators that have a relationship with measures to control a pathogen are not always available. For example, for the sterilisation of a low-acid canned product, a so-called 'bot cook' is applied. This means that the product receives a thermal treatment that reduces the concentration of spores of Clostridium botulinum by a factor of 10¹². Even if an indicator group such as 'total viable spores' could be used to check whether a heat treatment was performed, it would not be able to determine the presence of spores in a sufficiently large quantity of food to check whether the PC was met.
In many cases, microbiological criteria cannot be directly based on an FSO or
a PS because of the low level of the pathogen to be achieved and the absence of
relevant indicators. In these cases, the ICMSF approach of using a form of primitive risk assessment as a basis for the selection of 'cases' and the suggested sampling plans is still recommended. By using the appropriate criteria for the
selection of the cases, the best use of available resources is achieved. Moreover,
the reason for choosing the stringency of the sampling plan becomes consistent
and transparent, which is important in the context of the WTO/SPS agreement.
9.5 Problems in implementation
The results of an MRA are often difficult to interpret. During the assessment,
many assumptions have to be made, many data are lacking and hazard
characterisation curves are not available. Although a risk estimate should
include attendant uncertainties, the magnitude of these uncertainties is often
difficult to establish.
Risk managers may find it difficult to understand clearly the implications of
the implementation of certain control measures. Decision making is further
complicated by the fact that stakeholders such as consumers and industries
should be involved in the decision-making process. The exchange of ideas and perceptions is part of what is called risk communication, i.e. the third element
of risk analysis. Risk communication not only pertains to communicating the
decisions to the public and the affected industry; it also refers to the interactive
communication during risk management (FAO/WHO, 1998). How industries,
consumers and other interested parties might participate in governmental risk
management activities is still not apparent (Renn et al., 2001). Often, this
communication is carried out in so-called hearings and consultation processes
during which interested parties may comment on proposals and where they may
ask pertinent questions. The effectiveness of these procedures, and whether this
will help the acceptance by the public of certain decisions made, may need to be
studied further.
One of the main problems regarding the safety of food products is the public
perception of the magnitude and the severity of the effects of certain hazards in
foods. Even when it was clearly demonstrated scientifically that the use of alar
in the cultivation of apples would have no adverse public health effect, still its
application had to be stopped because of the public reaction against it. The
public became scared because of an anti-alar campaign that was supported by a
well-known Hollywood actress. Many other examples can be given where
scientific evidence was not sufficient to influence public acceptability of a
governmental decision (Groth, 2000a).
For the industry, public perception is as important as the outcome of scientific
risk assessments. A genetically modified product may be considered safe by risk
assessors and governmental risk managers. However, if the consumer does not
buy such a product, the industry has little interest in putting it on the market.
One of the main problems in the implementation of the results of MRA will
remain how to prepare the consumer for the acceptance of certain risk
management decisions based on these risk estimates.
9.6 Future trends
MRA will develop into a powerful risk communication tool. It will show what is
known and what is not known. It will make the pathogen¨Cproduct pathway
transparent and will show the differences in food safety that can be achieved by
various control options. Further refinement of many MRAs will be necessary
(FAO/WHO, 2000). For example, the MRA of L. monocytogenes in RTE foods carried out in the USA did not differentiate between pâté recontaminated after the heat treatment and in-pack pasteurised pâté. Obviously the risk of the latter product is negligible, while the former has been involved in a major foodborne outbreak (McLauchlin, 1996).
Since many data are currently lacking, the 'precautionary approach' will most probably be advocated by consumer organisations (Groth, 2000b) or countries that want to protect their own production. 'Worst-case' scenarios are often reported by the media. How to deal with uncertainty in the estimations needs to be agreed upon,
in order that MRA becomes an effective part of the risk management process.
9.7 References
CAC (1997a), Recommended International Code of Practice, General Principles
of Food Hygiene, CAC/RCP 1-1969, Rev. 3, (1997), FAO, Rome.
CAC (1997b), Hazard Analysis and Critical Control Point (HACCP) System and
Guidelines for its Application, Annex to CAC/RCP 1-1969, Rev. 3, (1997),
FAO, Rome.
CAC (1997c), Principles for the Establishment and Application of
Microbiological Criteria for Foods, CAC/GL 21-1997, FAO, Rome.
CAC (2001a), Proposed Draft Principles and Guidelines for the Conduct of
Microbiological Risk Management, CX/FH 01/7, 34th Session of the
Codex Committee on Food Hygiene, FAO, Rome.
CAC (2001b), Report of the thirty-fourth session of the Codex Committee on
Food Hygiene, Alinorm 03/13, FAO, Rome.
COX, L.J., KELLER, N. and VAN SCHOTHORST, M. (1988), 'The use and misuse of quantitative determination of Enterobacteriaceae in food microbiology', Journal of Applied Bacteriology, Symposium Supplement, 237S–249S.
FAO/WHO (1998), The Application of Risk Communication to Food Standards
and Safety Matters, Joint FAO/WHO Expert Consultation on Risk
assessment of Microbiological Hazards in Foods, FAO Food and Nutrition
Paper 70, FAO, Rome.
FAO/WHO (2000), Report of the Expert Consultation on Risk Assessment of
Microbiological Hazards in Foods, FAO Food and Nutrition Paper 71,
FAO, Rome.
FAO/WHO (2001), Risk characterisation of Salmonella spp. in eggs and broiler
chickens and Listeria monocytogenes in ready-to-eat foods, Joint FAO/
WHO Expert Consultation on Risk Assessment of Microbiological
Hazards in Foods, FAO Food and Nutrition Paper 72, FAO, Rome.
FSIS (1998), Salmonella enteritidis Risk Assessment, US Department of
Agriculture, Food Safety and Inspection Service, Washington DC, 20250.
FSIS (2001) Draft Assessment of the Relative Risk to Public Health from
Foodborne Listeria monocytogenes among selected categories of ready-
to-eat foods, US Department of Agriculture, Food Safety and Inspection
Service, Washington, DC, 20250.
GROTH, E. (2000a), Science, Precaution and Food Safety, How Can We Do
Better?, A discussion paper for the US Codex Delegation, Consumers
Union of US Inc., Yonkers, New York.
GROTH, E. (2000b), Towards a More Precautionary and More Scientific
Approach to Risk Assessment. Consumer Perspective on Food Safety,
presented at the World Congress on Medicine and Health 'Medicine Meets
the Millennium', Hannover, Germany, August 2000.
ICMSF (1986), Microorganisms in Foods 2. Sampling for Microbiological
Analysis: Principles and Specific Applications (2nd ed.), University of
Toronto Press, Toronto.
ICMSF (1996), Microorganisms in Foods 5. Characteristics of Microbial
Pathogens, Blackie Academic & Professional, London.
ICMSF (M. VAN SCHOTHORST) (1998), 'Principles for the establishment of
microbiological food safety objectives and related control measures',
Food Control, 9, 379–384.
ICMSF (2002) Microorganisms in Foods 7: Microbiological testing in food safety
management, Kluwer Academic/Plenum Publishers, New York.
ILSI EUROPE (1999), Validation and Verification of HACCP, Report series, ILSI
Press, Washington, DC.
JABLONSKY, L. M. and BOHACH, G. A. (2001), 'Staphylococcus aureus', in Doyle,
M. P., Beuchat, L. R. and Montville, T. J., Food Microbiology,
Fundamentals and Frontiers, ASM Press, Washington, DC, 411–434.
MCLAUCHLIN, J. (1996), 'The relationship between Listeria and listeriosis', Food
Control, 7, 187–193.
RENN, O., KASTENHOLZ, H., LEISS, W. and LÖFSTEDT, R. (2001), Draft OECD
Guidance Document on Risk Communication, Center of Technology
Assessment, Stuttgart.
WHITING, R. C. and BUCHANAN, R. L. (2001), in Doyle, M. P., Beuchat, L. R. and
Montville, T. J., Food Microbiology, Fundamentals and Frontiers, ASM
Press, Washington, DC, 813–832.
WHO (1997) Food Safety and Globalization of Trade in Food, a Challenge to the
Public Health Sector. WHO/FSF/FOS/97.8 Rev. 1, WHO, Geneva.
9.8 Acknowledgement
The author wants to thank Dr. T.A. Roberts and Dr. J-L. Cordier for their valued
comments.
10
Tools for microbiological risk assessment
T. Wijtzes, Wijtzes Food Consultancy, Gorinchem
10.1 Introduction
Microbial risk assessments are usually carried out by a team of assessors with
expertise in such areas as food microbiology, epidemiology, food engineering
and product development. Such a team needs to keep up to date with all aspects
of risk assessment. As knowledge in this area is rapidly increasing, it is very
hard to keep track of the latest developments in areas such as risk assessment
methodology, data on microorganisms and new outbreaks. This growing volume
of information, and the complexity of decision making, makes the computer a
logical tool in microbiological risk assessment. Such
tools provide support at each stage. Databases provide background information
on pathogens such as physiological characteristics, growth kinetics and means of
inactivation. Computer models help to predict the behaviour of microorganisms
in processes, products and the environment. They can help calculate the impact
of corrective actions on shelf-life, product safety and consumer health.
In microbiological risk assessment various stages can be distinguished in the
decision making process. The first stage is making an inventory of product and
process characteristics and collecting microbiological data. Processes are usually
described in terms of process variables that influence the introduction, increase
or inactivation of microorganisms. Products are also described in terms of
supporting the growth, survival or inactivation of microorganisms. Consumers
can be described in terms of their susceptibility to infection. At this stage
databases containing information on microorganisms and their interactions with
products, processes and consumers can be very useful. This information
provides a basis for modelling and predicting the behaviour of microorganisms
in response to product, process and consumer characteristics. In Fig. 10.1 the
required information sources are depicted graphically. Each of the boxes in the
figure represents a database or a set of tools that could be used.
Tools for microbiological risk assessment can be divided into two groups:
1. Qualitative tools dealing with risk assessment in words rather than numbers.
2. Quantitative tools dealing with the numerical prediction of the microbiological risk.
Qualitative decision making can help risk managers decide which
microorganisms are of concern and what their relevant characteristics and
behaviour are. Quantitative tools then help to calculate the extent of the risk involved.
Qualitative interpretation of risk relies on heuristic knowledge. Heuristic
knowledge exists in the form of facts and expert opinion. This knowledge can
either be diffused among a range of sources or it can be collected and organised
systematically in databases. The quality of this database information relies on
the care with which the data were gathered, the sources and accuracy checked,
and the skills in organising the database coherently and keeping it updated.
Quantitative tools can be divided into deterministic tools, yielding fixed values,
and tools based on quantitative probabilistic models where outcomes are
probabilities and distributions.
Fig. 10.1 Information sources for a risk assessor.
10.2 Qualitative tools for risk assessment
In what has been called the information age, one would expect a dedicated
knowledge base in the area of microbiological risk assessment. At present,
however, there are relatively few central information sources available. Much
information is distributed among individual experts and organisations, journal
articles, books and conference proceedings.
Internet information resources can be divided into so-called portal sites and
content sites. Portal sites provide a gateway to other information resources,
usually not as part of the actual portal domain. Portals are useful for identifying
and accessing a range of content sites. An example of a good food safety portal
site run by the US Government is: http://www.foodsafety.gov. It provides links
to a range of non-commercial food safety information resources covering food
safety issues for the US food industry.
There are now a number of sites that contain information on microbiological
risk assessment methodology. The Food and Agriculture Organisation (FAO)
and the World Health Organisation (WHO) list key activities and reports
(www.who.int/fsf/mbriskassess/index.htm and www.fao.org/waicent/faoinfo/
economic/esn/pagerisk/riskpage). The US Department of Agriculture (USDA)
has also compiled a helpful bibliography on food safety risk assessment
(www.nal.usda.gov/fnic/foodborne/risk.htm).
Content sites containing dedicated information on microbiological risks are
still relatively rare. A good example is the so-called 'bad bug book' compiled by
the US Food and Drug Administration (FDA). This database, which is available
electronically through the FDA's website, provides information on individual
pathogens, conditions for growth, and epidemiology. The content of the site is
updated on a regular basis (www.cfsan.fda.gov/~mow/intro.html).
Table 10.1 lists examples of portal and content sites currently available. It is
necessarily a snapshot of a landscape that is evolving on a daily basis with new
sites regularly coming on stream. There are also a number of microbiological
databases and disease surveillance systems set up by national governments,
many of which are also available electronically. Examples of these include:
• USA: The Centers for Disease Control and Prevention (www.cdc.gov).
• USA: Center for Food Safety and Applied Nutrition (www.cfsan.fda.gov, which includes the bad bug book).
• UK: Public Health Laboratory Service (www.phls.co.uk).
• Australia: Communicable Diseases Centre (www.health.gov.au/pubhlth/cdi/cdihtml.htm).
There are moves to try to share and standardise microbiological data. Several
forums have proposed the creation of data 'clearing houses', which are already
established in the chemical and pharmaceutical industries. In the USA a microbial
food safety risk assessment 'clearing house' has been established within the Joint
Institute for Food Safety and Applied Nutrition (http://www.foodriskclearinghouse.umd.edu/)
to 'capture' data generated and collated within risk assessments so that they are
more readily available for subsequent assessments. The Eurosurveillance system
has also been developed to share data within the EU (www.eurosurv.org). The
WHO/FAO (2000) also recommended that the feasibility of establishing an
international repository for microbial food safety risk assessment data be investigated.
10.3 Predictive modelling
Quantitative risk assessment relies heavily on the use of predictive microbiology
models. These models use information on pathogen, product and process
characteristics to predict inactivation, survival and growth of microorganisms.
These models are usually developed for a particular microorganism under specific,
well-controlled environmental conditions. Most models relate the kinetic
parameters governing the behaviour of a microorganism to changes in
environmental conditions such as temperature, pH, water availability (water
activity) or the concentration of organic acids. These models are usually developed using
sample products or model media from which experimental data are derived.
Predictive modelling uses both static and dynamic models of microbiological
behaviour. Static models describe the kinetic parameters of microorganisms
under fixed, non-changing environmental conditions. As an example, one
temperature value is set and the growth of a microorganism is followed over time.
Static models do not take into account changes in the environmental parameters,
such as changes in temperature over time. These models express the kinetic
parameters as explicit functions of the environmental conditions. An example of such a
model is the Gompertz model, which is given below (Zwietering et al., 1991):
Table 10.1 Internet sites

Information source – Description – Quality
http://www.cfsan.fda.gov/~mow/intro.html – Bad bug book; excellent background material on behaviour of microorganisms. Appendices are useful reading. – +++
www.foodsafety.gov – Portal to US safety information – ++
www.who.int/fsf/ – WHO on food safety – ++
www.fsis.usda.gov – USDA food safety site – ++
www.foodhaccp.com – Lots of background material on hazards, HACCP, etc. – ++
www.fda.gov – FDA site; contains, e.g., the bad bug book – ++
www.cdc.gov – Centers for Disease Control and Prevention – +
www.safetyalerts.com – Provides recent recall information in the USA; a lot of information on food and allergens; a short description of the background is given per notice. – +
ln(N_t / N_0) = A exp{−exp[(μ_m e / A)(λ − t) + 1]}          (10.1)

where
N_t = number of organisms at time t
N_0 = initial number of microorganisms
A = maximum level of microorganisms
μ_m = maximum specific growth rate at a specific value of, e.g., temperature
λ = lag time at a specific value of, e.g., temperature
e = the base of the natural logarithm
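As a concrete illustration of equation 10.1, the short Python sketch below evaluates the modified Gompertz model for a hypothetical organism; the parameter values used are illustrative assumptions, not values taken from any published model.

```python
import math

def gompertz_ln_increase(t, A, mu_max, lag):
    """Modified Gompertz model (equation 10.1): returns ln(N_t / N_0).

    A       -- maximum increase, ln(N_max / N_0)
    mu_max  -- maximum specific growth rate (per hour)
    lag     -- lag time (hours)
    t       -- time (hours)
    """
    return A * math.exp(-math.exp((mu_max * math.e / A) * (lag - t) + 1.0))

# Illustrative parameter values only (not from a published model).
A, mu_max, lag = 15.0, 0.8, 4.0
for t in (0, 4, 8, 12, 24, 48):
    print(f"t = {t:5.1f} h  ln(N_t/N_0) = {gompertz_ln_increase(t, A, mu_max, lag):6.2f}")
```

Evaluated in this way, the predicted increase is close to zero during the lag phase and approaches the asymptote A at long times, reproducing the sigmoid shape discussed below.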
Among the approaches more commonly used are the modified Gompertz
equation, the Baranyi equation and the logistic equation (Baranyi et al., 1993;
Gibson et al., 1988). Combinations of linear models covering the exponential
phase alone, the lag and exponential phases, or the lag, exponential and
stationary phases have also been used (Buchanan et al., 1997), though the last
of these is controversial (Baranyi, 1997; Garthright, 1997).
Dynamic models attempt to relate changing environmental conditions to the
kinetic parameters of microorganisms. Changes in microbiological behaviour
are, for example, monitored against changes in temperature over time. These
models usually have the form of (a set of) differential equations. An example of
such a set of differential equations is the first order exponential growth model
with time-delayed growth:
For t < λ:   dN/dt = 0          (10.2)

For t ≥ λ:   dN/dt = μ_m N
All the parameters are as described above. Two features of this equation stand
out. First, for different times (t) there are two different models: one for times
shorter than the lag time (λ) and one for times equal to or longer than the lag
time. Second, the 'd' terms: a d mathematically stands for a change, so dN
represents a change in numbers (N) and the denominator (dt) represents a change
in time. The left-hand side of each equation is therefore the change in numbers
over time. The right-hand side of the upper equation equals zero, so when time is
shorter than the lag time there is no change in numbers over time. The right-hand
side of the lower equation equals μ_m N. This is the standard
first order growth model. Static models provide a foundation for the building of
dynamic models of microbiological behaviour.
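To make the behaviour of equation 10.2 concrete, the minimal sketch below integrates the delayed first-order growth model with a simple Euler step; the function name and the parameter values are illustrative assumptions rather than any published implementation.

```python
import math

def simulate_growth(n0, lag, mu_max, t_end, dt=0.01):
    """Integrate dN/dt = 0 for t < lag and dN/dt = mu_max * N for t >= lag
    (equation 10.2) with a simple Euler scheme. Units are arbitrary."""
    n, t = n0, 0.0
    history = [(t, n)]
    while t < t_end:
        dn_dt = 0.0 if t < lag else mu_max * n
        n += dn_dt * dt
        t += dt
        history.append((t, n))
    return history

# Illustrative values: 100 cells/g initially, 4 h lag, mu_max = 0.7 per hour.
trace = simulate_growth(n0=100.0, lag=4.0, mu_max=0.7, t_end=10.0)
t_final, n_final = trace[-1]
print(f"after {t_final:.1f} h: {n_final:.0f} cells/g "
      f"(~{math.log10(n_final / 100.0):.2f} log10 increase)")
```

Because the growth rate is recalculated at every step, the same scheme can in principle accept a growth rate that varies with a changing temperature history, which is the essence of dynamic modelling.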
10.3.1 Model development: empirical models
Most available models are empirical models based on experimental data. The
development of an empirical model takes place in different stages (Legan et al.,
2002). The first stage is the selection of the organism, the reference product and
the parameters that will be studied. This planning stage should include a
literature study to identify available data and existing relevant models. Where
appropriate, it may be more cost-effective to make use of an existing model,
providing the assumptions on which it is based match the conditions that are to
be modelled. Some existing models are described in Section 10.5.
Based on the objectives set out for the model and the parameters to be
studied, and taking into account existing data, an experimental design is set up
(Fig. 10.2). Several design techniques exist, ranging from factorial to minimal
designs and so-called craftsmen designs. Several authors have given detailed
descriptions of experimental designs for modelling in food microbiology
(Davies, 1993; McMeekin et al., 1993; Ratkowsky, 1993). After deciding on an
experimental design, the experiments themselves are carried out. Depending on
the type of model, environmental conditions are set and the kinetic parameters of
the microorganisms are followed in time. In both dynamic and static modelling,
microbial numbers in time are measured. Measuring changes in microbial
numbers is the most laborious part of model development. Guidelines for data
collection and storage from experiments have been put together by the protocols
group of the UK Food MicroModel programme (Kilsby and Walker, 1990) and
discussed by Walker and Jones (1993).
Measuring microbial numbers over time usually produces so-called growth
curves. In a growth curve the natural logarithm of the counted number of
microorganisms (y-axis) is plotted against time (x-axis). At this stage of the
model development, through a process of statistical regression called fitting, the
parameters of the curve are determined. Since static models attempt to describe
the behaviour of microorganisms at a single value of the controlling
environmental parameters, a growth curve usually has a sigmoid shape (Fig.
Fig. 10.2 Stages in the development of an empirical predictive model.
10.3). In this graph, the kinetic parameters are shown as in equation 10.1. A
sigmoid growth curve consists of four stages. The first stage is characterised best
as an adjustment period. Cells have to adjust to their new environment. This
stage is called the lag-phase. Microbial numbers stay constant during this stage.
After a while, when the microorganisms have adapted to their new environment,
the log-phase starts. The microorganisms make maximal use of the nutrients
present and reach a maximum specific growth rate. This phase is characterised
by a rapid increase of cells. Later again, nutrient depletion occurs and the rate of
growth slowly declines. In the end, the asymptote is reached. Cells no longer
divide because of a lack of nutrients, a too low pH or too high concentrations of
other growth-inhibiting substances. Usually, foods are long spoiled when the
asymptote is reached. Long after the asymptote is reached, cells start to die and
microbial numbers become less and less. This stage is called the death stage and
is not shown in Fig. 10.3. Each of these stages has its kinetic parameters. The lag
phase has the so-called lag time (λ), the log-phase has the maximum specific
growth rate (μ_m) and the asymptote has the asymptotic level of organisms (A).
These parameters can be used to describe an entire growth curve.
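The fitting step described above can be sketched in a few lines of Python using non-linear regression; the data points below are invented for illustration, and the use of scipy's curve_fit is simply one convenient choice, not the procedure of any particular modelling programme.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu_max, lag):
    """Modified Gompertz curve for ln(N_t / N_0) (equation 10.1)."""
    return A * np.exp(-np.exp((mu_max * np.e / A) * (lag - t) + 1.0))

# Invented example data: time (h) and ln(N_t / N_0) derived from plate counts.
t_obs = np.array([0, 2, 4, 6, 8, 10, 14, 18, 24, 30])
y_obs = np.array([0.0, 0.1, 0.4, 1.5, 3.2, 5.0, 8.1, 10.0, 11.2, 11.5])

# Initial guesses for A, mu_max and lag; curve_fit performs the
# non-linear regression ('fitting') described in the text.
popt, pcov = curve_fit(gompertz, t_obs, y_obs, p0=[10.0, 1.0, 2.0])
A_fit, mu_fit, lag_fit = popt
print(f"A = {A_fit:.2f}, mu_max = {mu_fit:.2f} /h, lag = {lag_fit:.2f} h")
```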
Since a growth curve is measured at one single value of a controlling
variable, a large number of growth curves need to be measured. This again
results in an equally large number of fitted growth rates, lag times and
asymptotic values. In the next stage of model development, the values for the
kinetic parameters (λ, μ_m and A) are related to the value of the controlling
environmental parameters. This results in an explicit mathematical equation for,
say, lag time (λ) or growth rate (μ_m) as a function of pH, temperature or water
activity. An example of such an equation is:
μ_m = b (T − T_min)² (pH − pH_min)(pH − pH_max)(a_w − a_w,min)          (10.3)

where
μ_m = maximum specific growth rate at a specific value of, e.g., temperature
b = regression coefficient
T = temperature (°C)
pH = negative log of the [H+] concentration
a_w = water activity
T_min = lowest temperature at which growth ceases
pH_min = lowest pH at which growth ceases
pH_max = highest pH at which growth ceases
a_w,min = lowest water activity at which growth ceases

Fig. 10.3 Bacterial growth curve.
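Once the regression parameters of a secondary model such as equation 10.3 have been estimated, prediction is a matter of direct evaluation. The sketch below shows this for a hypothetical organism; all parameter values are placeholders chosen only to illustrate the structure of the model (note that b is taken as negative here because the (pH − pH_max) factor is negative within the growth range).

```python
def mu_max(T, pH, aw, b, T_min, pH_min, pH_max, aw_min):
    """Secondary growth-rate model of the form of equation 10.3.

    Returns the predicted maximum specific growth rate (per hour),
    or 0 when the conditions lie outside the growth range.
    """
    if T <= T_min or pH <= pH_min or pH >= pH_max or aw <= aw_min:
        return 0.0
    return b * (T - T_min) ** 2 * (pH - pH_min) * (pH - pH_max) * (aw - aw_min)

# Placeholder parameter values for a hypothetical organism.
params = dict(b=-0.002, T_min=4.0, pH_min=4.2, pH_max=9.3, aw_min=0.95)

print(mu_max(T=15.0, pH=6.0, aw=0.99, **params))   # chilled, mildly acid food
print(mu_max(T=2.0,  pH=6.0, aw=0.99, **params))   # below T_min: no growth
```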
When all parameters in these models have been determined by means of
statistical regression, the models can be used for prediction. However, before
these models can be used with confidence, they need to be validated. The
accuracy of the model predictions needs to be verified against the dataset that
was measured. Several strategies exist for performing a good model validation.
For a simple model validation a statistical package should be used that is able to
perform non-linear regression as well as generate 95% prediction intervals for
single point predictions of the model (Baranyi et al., 1999). Examples of such
packages are SAS and SPSS, but less sophisticated packages such as Tablecurve
2D are also able to calculate these intervals.
The basic approach to validate a model is to divide an entire dataset into two
subsets. One subset is used to determine the parameters of the suggested model,
while the second subset is used for validation. The fitted model based on the first
subset of the data is used to predict the remaining part of the dataset (second
subset). The predictions can then be compared with the actual data: if the 95%
confidence intervals of the second (validation) subset overlap the predictions
made from the first subset, the model describes the data accurately. This
validation strategy works well under laboratory conditions.
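A minimal sketch of this split-and-predict validation strategy is given below, assuming synthetic data and a crude two-standard-deviation band as an approximate 95% interval; a dedicated statistical package would compute the prediction intervals more rigorously.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu_max, lag):
    return A * np.exp(-np.exp((mu_max * np.e / A) * (lag - t) + 1.0))

# Synthetic growth data (time in hours, ln(N_t/N_0)), split into a
# fitting subset and a validation subset as described in the text.
rng = np.random.default_rng(1)
t_all = np.linspace(0, 30, 20)
y_all = gompertz(t_all, 11.0, 1.0, 3.0) + rng.normal(0, 0.3, t_all.size)
fit_idx = np.arange(0, 20, 2)                 # every other point used for fitting
val_idx = np.setdiff1d(np.arange(20), fit_idx)

popt, _ = curve_fit(gompertz, t_all[fit_idx], y_all[fit_idx], p0=[10, 1, 2])

# Crude check: do validation observations fall within +/- 2 residual
# standard deviations of the fitted curve (an approximate 95% band)?
resid_sd = np.std(y_all[fit_idx] - gompertz(t_all[fit_idx], *popt))
pred_val = gompertz(t_all[val_idx], *popt)
inside = np.abs(y_all[val_idx] - pred_val) <= 2 * resid_sd
print(f"{inside.sum()} of {inside.size} validation points within the approximate 95% band")
```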
The ultimate test for a model is performance under field conditions. Model
predictions should be validated against relevant data on real foods. In some
cases it is possible to extract validation data from the literature. Unfortunately,
data in the literature are often too incomplete to use and it is necessary to resort
to experimentation to compare actual growth data points with predicted growth
curves (Fig. 10.4). It is less critical to catch the points of inflection than it is with
the model-building experiments and four to six well-spaced points per curve can
be enough. However, more points can allow growth parameters to be derived
from the food data and this can facilitate comparison with the model predictions.
Good agreement between predicted and observed responses helps to build
confidence in the model. The comparison is often shown as a plot of observed
against predicted values (Fig. 10.5) in which the responses observed in foods
should be no faster than those predicted by the model for maximum confidence.
Methods and issues in validation are discussed in Baranyi et al. (1999).
Once it has been validated, the model can be used for prediction. Models, in
general, can be used only for interpolation. This means that models should never
be used outside the range where data were gathered. In some cases it is very hard
to distinguish the areas where confident predictions can be made from the areas
where there is no supporting data. Some statistical packages provide for an
estimate of the confidence of single point predictions, using a similar process to
validation. However, instead of using the 95% prediction interval, a 95%
confidence interval is calculated for each prediction. The less confidence there is
in a single prediction, the larger the confidence interval. Confidence intervals for a
single prediction will increase dramatically when predicting outside the
measured data range.
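A simple safeguard against unintentional extrapolation is to check that the requested conditions fall inside the ranges spanned by the model-building data. The sketch below uses a per-variable range check, which is a cruder test than the confidence-interval approach described above; the variable names and ranges are hypothetical.

```python
def within_data_range(conditions, data_ranges):
    """Return True if every requested condition lies inside the range of
    the data used to build the model (a simple interpolation check)."""
    return all(lo <= conditions[name] <= hi
               for name, (lo, hi) in data_ranges.items())

# Hypothetical ranges of the experimental design behind a model.
ranges = {"temperature": (5.0, 30.0), "pH": (4.5, 7.5), "aw": (0.95, 0.997)}

print(within_data_range({"temperature": 12.0, "pH": 6.2, "aw": 0.97}, ranges))  # True
print(within_data_range({"temperature": 37.0, "pH": 6.2, "aw": 0.97}, ranges))  # False: extrapolation
```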
10.3.2 Mechanistic modelling
The previous section discussed empirical models that describe experimental
observations as a mathematical relationship but have nothing to say about
underlying physiological or physical processes. Experience has shown that such
models are adequate for many practical purposes in food safety management,
but they provide no secure basis for extrapolation outside the range of the
experimental data.
Mechanistic or deterministic models are built upon a theoretical under-
standing of microbiological behaviour. They have the potential to give more
accurate predictions than empirical models and can explain why microbiological
Fig. 10.4 An example of food validation data plotted against a predicted growth curve.
Predictions and observations were for Bacillus licheniformis in custard at pH 6.14,
NaCl 0.3% and 28 °C. The prediction is from the Food MicroModel B. licheniformis model
and the data were from J. D. Legan, P. A. Voysey and P. S. Curtis (unpublished
observations).
behaviour varies. Mechanistic models also provide a better basis for
extrapolation outside the range of the experimental data because it is the
mechanism controlling the response that provides the foundation for the model.
This added predictive capability is extremely valuable, but extrapolation without
validation may still be dangerous because the mechanism itself may change, or
prediction errors may become very large (Box et al., 1978).
Many 'quasi-mechanistic' models have been developed (Bazin and Prosser, 1992;
McMeekin et al., 1993; Ross, 1999) and have certainly proved useful for
developing and testing hypotheses. The mechanisms postulated include rates of
reaction between enzymes and nutrients, rates of protein denaturation in
response to temperature changes and rates of enzyme synthesis by ribosomes.
These models have all indicated linkages between the putative mechanisms and
the observations of growth responses used in empirical models. However, in all
cases the 'key enzyme' is unknown and a 'mechanistic' model whose
parameters cannot be determined experimentally cannot be considered truly
mechanistic (Heitzer et al., 1991). Despite much progress, the observation by
van Dam et al. (1988) remains essentially true:
Much is known empirically about rates of growth and substrate
consumption for different microorganisms growing on various
substrates. At the same time the biochemistry and molecular biology of
the organisms is known in considerable detail. However, the question of
how growth (and death) kinetics are related to the physiology of
microorganisms is generally not well understood.
A rare example of a truly mechanistic model linking these elements is the work
of Cayley et al. (1992) that relates the growth rate of Escherichia coli K12 under
Fig. 10.5 An example of model validation across a range of conditions for predictions
from Food MicroModel for growth of L. monocytogenes compared with literature data
(adapted from McClure et al., 1994).
osmotic stress to the intracellular accumulation of betaine and proline and the
thermodynamics of osmoprotection.
Box et al. (1978) commented that judgement is needed in deciding when and
when not to use mechanistic models. They indicated that a mechanistic approach
is justified whenever a basic understanding of the system is essential to progress
or when the state of the art is sufficiently advanced to make a useful mechanistic
model easily available. Clearly the latter is not yet true in microbiology, but
basic understanding is being actively pursued. As Cole (1991) observed:
'researchers in the field of predictive microbiology are striving to develop
models for microbial life and death based upon an understanding of cell
variability and physiology and that could be used to extrapolate to other
conditions'. Truly mechanistic models will be developed in time as these
activities help to develop our understanding of the links between microbial
physiology and growth (and death) responses to environmental conditions.
10.4 Tools for modelling, prediction and validation
This section looks at the following types of software:
• Decision support in hazard identification.
• Packages that can be used in model development and validation.
• Current off-the-shelf models.
• Decision support systems.
10.4.1 Decision support in hazard identification
There are a number of decision support 'tools' to assist in determining whether a
pathogen is, or could be, an important hazard in a given food/food process
combination. These include various semi-quantitative scoring systems, decision
trees and expert systems (see, e.g., Notermans and Mead, 1996; Todd and
Harwig, 1996; ICMSF, 1996; van Gerwen et al., 1997; van Schothorst, 1997)
such as the one in Fig. 10.6. Decision trees enable the experience of others to be
shared and can assist in decision making by presenting a structured series of
questions relevant to the decision being made. In essence, the structured
approach of risk assessment offers the same assistance for more complex
decision processes.
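As a minimal illustration of how such a decision tree can be encoded, the sketch below walks through a short series of hypothetical yes/no questions; the questions are simplified placeholders and do not reproduce the tree of Notermans and Mead (1996) shown in Fig. 10.6.

```python
def hazard_screen(answers):
    """Walk a small, hypothetical decision tree for screening whether a
    pathogen needs further assessment in a given food/process combination."""
    if not answers["pathogen_associated_with_raw_materials_or_environment"]:
        return "Not a relevant hazard for this product"
    if answers["process_step_eliminates_pathogen"] and not answers["recontamination_possible"]:
        return "Hazard controlled by the process"
    if answers["product_supports_growth_or_survival"]:
        return "Potentially important hazard: assess quantitatively"
    return "Low concern, but verify with monitoring"

example = {
    "pathogen_associated_with_raw_materials_or_environment": True,
    "process_step_eliminates_pathogen": True,
    "recontamination_possible": True,
    "product_supports_growth_or_survival": True,
}
print(hazard_screen(example))
```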
10.4.2 Model development and validation
General statistical packages such as SAS and SPSS are useful in model
development in such areas as experimental design, data handling, and
establishing prediction and confidence intervals. While there are still no integrated
packages dedicated to the whole process of modelling microbial behaviour,
there are a number of packages which are helpful in model design.
used tool for performing probabilistic modelling is @RISK. This package is a
Fig. 10.6 A decision tree to aid identification of microbial hazards in finished foods.
Reproduced from Notermans and Mead (1996).
risk analysis and simulation add-in for Microsoft Excel or Lotus 1-2-3. The
package uses the technique known as Monte Carlo simulation to allow an
assessment to be made of all possible outcomes. Uncertain values in a
spreadsheet are replaced with statistical distribution functions that represent a
range of possible values. The package recalculates a spreadsheet hundreds or
even thousands of times, each time selecting random numbers from the
distributions that were entered. The results are distributions of possible
outcomes and the probabilities of getting those results. The output thus gives not
only the possible outcomes in a given situation but also the likelihood of each happening.
Packages such as BestFit are able to fit statistical distribution functions to
measured data. These functions can then be entered into a package such as
@RISK to perform simulations. As these packages are integrated into standard
spreadsheet packages such as Microsoft Excel, they are easy to install and use.
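The Monte Carlo technique underlying packages such as @RISK can also be illustrated directly in a general-purpose language. In the sketch below, uncertain inputs are replaced by distributions, a simple exposure calculation is repeated many times, and the result is a distribution of outcomes; the distributions and the dose calculation itself are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n_iter = 10_000

# Illustrative input distributions (replacing single fixed values).
log_conc = rng.normal(loc=1.0, scale=0.8, size=n_iter)        # log10 CFU/g at retail
storage_growth = rng.triangular(0.0, 0.5, 2.0, size=n_iter)   # log10 increase in storage
serving_g = rng.uniform(50, 200, size=n_iter)                 # serving size, g

# Dose per serving for each iteration of the simulation.
dose = 10 ** (log_conc + storage_growth) * serving_g

print(f"median dose: {np.median(dose):.0f} CFU per serving")
print(f"95th percentile: {np.percentile(dose, 95):.0f} CFU per serving")
print(f"fraction of servings above 10^5 CFU: {(dose > 1e5).mean():.3f}")
```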
Two packages specifically designed to help create predictive models have been
developed by the Institute of Food Research (IFR) in the UK: Microfit and
DMFit. These are downloadable from the IFR website (www.ifr.bbsrc.ac.uk).
Both help to fit, plot and analyse growth curves from microbiological data. The
first one fits the model of Baranyi and Roberts (1994) to measured concentrations
of a growing bacterial population. The user can carry out a significance test to
compare the specific growth rates of different growth curves. The second one,
DMFit, is an Excel add-in, fitting, plotting and analysing many growth curves
simultaneously.
10.4.3 Off-the-shelf models
Most of the models available have been developed as research tools rather than
for commercial applications. The disclaimers of these sorts of software usually
mention that these tools can be used for obtaining estimates rather than absolute
predictions about shelf-life, stability or safety of products. The packages can
give product and process developers a broad indication of what might happen
with the products or processes that are assessed.
A good example of a package of this kind is the Pathogen Modelling
Program, developed by the US Department of Agriculture (USDA), and
available free over the Internet. This predictive microbiology application
program was designed as a research tool for estimating the effects of multiple
variables on the growth or survival of foodborne pathogens. It consists of both
growth and inactivation models for a number of pathogens. Although most of the
models are based on observations of microbial behaviour in broth cultures, some
are based on observations in specific foods. Microbial behaviour in foods is
similar to that in broths of comparable composition. However, the user must be
aware of additional factors in other environments that may affect
microorganisms and that are not within the experimental design parameters of
the models. Available models include the following:
• Growth models for Aeromonas hydrophila, Bacillus cereus, Clostridium perfringens, Escherichia coli O157:H7, Listeria monocytogenes, Salmonella spp., Shigella flexneri, Staphylococcus aureus and Yersinia enterocolitica.
• Time-to-toxigenesis and thermal inactivation models for Clostridium botulinum.
• Non-thermal inactivation/survival models for E. coli O157:H7, L. monocytogenes, Salmonella spp. and Staph. aureus.
• Gamma irradiation survival models for Salmonella typhimurium and E. coli O157:H7.
The growth models for particular pathogens predict bacterial growth curves at
user-defined sets of values for temperature, pH, salt concentration and water
activity. In the case of some pathogens, the effects of preservatives and
atmospheric composition can also be studied.
The Food MicroModel was developed by the British Government and is
commercially available through the Leatherhead Food Research Association in
the UK (www.foodmicromodel.com). After selecting the model for a particular
pathogen, and then entering parameters such as temperature, pH, water activity
and preservative concentrations, the predicted generation time or reduction time
can be calculated. Results may be displayed as tables or graphs. Many of the
models are for pathogens, but other models for spoilage microorganisms are
being added. This system is considered more flexible and versatile in mimicking
bacterial performance in real foods than the Pathogen Modelling Program
(Baranyi and Pin, 2001).
The majority of public domain predictive models focus on the growth and
survival of pathogenic microbes for obvious reasons. However, it is also
important to be able to predict the growth of food spoilage organisms when
considering the likely stability and shelf-life of food products. Campden and
Chorleywood Food Research Association in the UK has addressed this need by
developing a collection of models, the FORECAST system, which can be used to
assess spoilage rates or likely stability of different product formulations (Betts
and Earnshaw, 1998). Models are available for specific spoilage organisms, e.g.
Pseudomonas species, for groups of organisms, e.g. Enterobacteriaceae, or for a
mixture of spoilage organisms relevant to food commodities, e.g. fish and fish
products. The models were developed in laboratory growth media and validated
by comparison with published growth data and validation in appropriate food
matrices.
The majority of models are kinetic growth models that can be used to predict
growth rate and lag time, although more recent models have been produced that
will predict non-growth or time to growth. The system is still under development
and will be expanded to include other product specific models. FORECAST is
available to potential users via an enquiry service that runs the models on behalf
of clients after a detailed consultation with respect to their needs. The
consultancy aspect of this approach allows subsequent expert interpretation and
consideration of model validation status and there are currently no plans to make
it available as a software package. FORECAST is unique among modelling
systems in that it is developed by an industry-driven research association and is
able to ensure that further developments meet the current industrial
microbiological needs. Further information regarding FORECAST is available
from www.campden.co.uk.
The main advantage of such products as these is that predictions are produced
in minutes rather than the days or weeks required for conventional challenge
testing. Although they cannot replace challenge testing, the main role of the
software is helping to identify when challenge testing is required and then in
identifying the most relevant experimental parameters for challenge tests.
10.4.4 Decision support systems
A more recent development is the use of decision support systems for the
prediction of product stability and product safety. A well-known system is the
MIDAS system developed by the joint Unilever-Bestfood Research Laboratories
(Kilsby, 1999). The system contains data on the parameters that influence the
microbiological stability and microbial safety of certain foodstuffs together with
modelling software. It is usually applied at the product design stage. MIDAS
analyses a product's ingredients, processing methods and packaging systems to
identify potential microbiological risks. The system is currently being expanded
to include toxicological approval.
Wageningen Agricultural University has also developed a decision support system
called the Food Design Support System (Wijtzes, 1996). The system can be used
to simulate a food product. Information on ingredients is combined with data on
food processing operations and pathogen growth kinetics. Parameter values of
ingredients of foods, such as water activity and acidity, and models for microbial
growth and inactivation are used for the prediction of the microbial behaviour in
the simulated food system. These values are drawn from the system database. If
required information is lacking, reliable guesses of the parameters can be made.
As an example, differing food distribution chains can be simulated to assess the
impact of temperature abuse during distribution on food quality.
The system can be used in such areas as product and process development and
training. In future it will be possible to apply expert knowledge in production
and development of foods to improve the quality of prediction.
Another decision support system developed by Wageningen Agricultural
University is SIEFE: Stepwise and Interactive Evaluation of Food safety by an
Expert system (van Gerwen, 2000). The SIEFE model provides a tool for
bacterial risk assessment using various knowledge sources. The main goal of the
SIEFE model is to analyse microbial behaviour during individual production
processes. SIEFE uses a stepped approach to quantitative risk assessment. Risks
are first assessed broadly, using order of magnitude estimates. Variations in
process or product parameters can easily be evaluated at this level. These
estimates help to highlight the main risk areas, which can then be studied in
more detail. Both general and/or specific models, and various scenarios, can be
used to quantitatively describe levels of risk. Thirdly, even more accurate studies
can be performed where necessary by using stochastic variables, for instance.
All steps in the system are designed to be as transparent as possible, making it
easier to assess the accuracy of the results and the assumptions on which they are
based.
Several companies now use predictive models as marketing tools to illustrate
the effects of ingredients such as preservatives. One of these systems, for
example, describes the effect of lactate on bacterial growth and survival of L.
monocytogenes. It contains simple yet effective models describing the relation
between growth rate, shelf-life and product characteristics such as moisture
level, salt concentration, pH and lactate concentration.
Ross and Sumner (in press) have developed a novel risk calculation tool to
aid determination of relative risks from various product/pathogen/processing
combinations. The tool is intended to assist those without extensive experience
in risk modelling to provide a first estimate of relevant risk and for food safety
risk management prioritisation (see Fig. 10.7). The user mouse-clicks on the
appropriate descriptor in each box in response to 11 questions, and can nominate
some specific numerical values. As a value is changed, the risk estimates (lower
right) are automatically recalculated. To assist users in making selections, and to
improve the 'transparency' of the model, some of the weighting factors are specified
in the list of descriptors. The underlying model translates these descriptors,
using relatively simple mathematical relationships, into a range of risk
estimates. Some estimates consider only the probability of illness, while others
also consider the severity to estimate the risk of the illness and the numbers
affected.
The model is based on a series of multiplicative factors that increase or
decrease the estimate of the probability of the hazard occurring or the estimate
of risk. Some factors, such as processing or cooking, have been assigned a value
of zero, i.e. they are modelled to eliminate the risk. The model also recognises
that even if a process completely eliminates the risk, re-contamination may
occur and re-introduce the risk. The risk estimate is 'truncated' so that
no more than one illness per consumer per day is predicted. Some of the
multiplicative factors are derived from fixed relationships, e.g. the risk of daily
consumption compared with monthly or less frequent consumption. Similarly,
the risk will depend on the size of the exposed population, and the proportion of
them consuming the food. The susceptibility of the population to infection for a
variety of hazards is based on epidemiological data. Hazard severity is arbitrarily
weighted by factors of ten for each increasing level of severity. The frequency of
contamination ('probability of contamination'), concentration of the hazard and
the implications of subsequent processing and handling are also considered.
The spreadsheet, while providing estimates of risk, also helps to focus
attention on the interplay of factors that contribute to the risk of foodborne
disease, and can be used to explore the effect of different risk reduction
strategies. Users must remember that some of the weighting factors are
arbitrarily derived, however, and that the predicted effect of those management
options may reflect only the assumptions on which the model is based.
Nonetheless, weightings can be changed easily if data are available to indicate a
more appropriate weighting.
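A minimal sketch of this kind of multiplicative risk calculation is given below. The structure and the factor values are arbitrary placeholders intended only to illustrate the idea of weighting, truncation and combination; they are not the factors or weightings used in the Ross and Sumner tool.

```python
def relative_risk(p_contaminated, severity_weight, process_factor,
                  recontamination_factor, frequency_factor, population_fraction):
    """Combine multiplicative factors into a relative risk estimate,
    truncated so that no more than one illness per consumer per day results."""
    p_illness = (p_contaminated * process_factor + recontamination_factor) * frequency_factor
    p_illness = min(p_illness, 1.0)          # truncation described in the text
    return p_illness * severity_weight * population_fraction

# Arbitrary placeholder values: a cooked product with some recontamination.
print(relative_risk(p_contaminated=0.2,
                    severity_weight=10,       # one step up the severity scale
                    process_factor=0.0,       # cooking eliminates the hazard
                    recontamination_factor=0.01,
                    frequency_factor=0.1,     # consumed weekly rather than daily
                    population_fraction=0.3))
```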
Ready-to-use software packages for probabilistic microbial risk modelling do
not exist yet. Probabilistic modelling adds a new feature to microbial risk
modelling as it helps in describing and understanding natural variation in microbial
behaviour. The amount of data needed for probabilistic modelling, however, is
exponentially larger than the amount of data needed for deterministic modelling.
Bringing together datasets from different modelling groups might help
overcome this problem.
10.5 Future trends
Databases are useful for storing large quantities of data. A key issue for the
future is how to integrate information from a range of databases. As an example,
a database containing information on the microbial ecology of particular
foodstuffs could be connected to a database in which the kinetic properties of
microorganisms are stored, which, in turn, could be linked to databases with
processing and epidemiological data. If such a linked system of databases were
available to a large number of researchers and risk assessors, allowing them to
input data, models and knowledge in a structured way, microbiological risk
Fig. 10.7 A proposed interactive food safety risk assessment tool developed in
spreadsheet software. Details of its use and source are described in the text.
assessment would have a strong foundation on which to develop. Such a
development would be particularly helpful in making the most of
epidemiological and dose–response data, which remain relatively fragmentary.
The best medium for such a system would be the worldwide web. Examples
of such systems do not exist yet, but an outline for such a system is described in
Fig. 10.1. Each block contains a database or a set of databases with the relevant
information mentioned in that block. The combination of these databases or
information blocks requires a so-called inference engine. This engine uses
several strategies for combining information from these sources. The decision
support systems mentioned earlier make use of a variety of strategies to search
out relevant information. One uses product formulations to search for
epidemiological data on similar products in order to identify relevant
microorganisms. Another uses the kinetic growth properties of organisms as
selection criteria, while another combines epidemiological and kinetic data. The
selection of the most appropriate model, when several models are available for a
microorganism in a certain foodstuff, also needs addressing in such a system.
Selection criteria might include assessment of the goodness of fit between the
model and the problem to be addressed, continuous validation of existing
models against new data, or assessment of such performance indicators as the
conservatism with which a model predicts. It might be possible to employ
several models to identify and take account of differing assumptions. Integrated
systems, in which all information comes together, decisions are taken and
predictions are made, are not to be expected within the next 10 years.
International collaborations need to be forged where gathered information, data
and models are put together. A framework as described above should be
developed and extended with other initiatives, such as toxicological and
epidemiological initiatives.
10.6 Sources of further information and advice
Some helpful guides are provided in the following section. Further information
on some of the software packages mentioned can be obtained from:
• FoodMicromodel: Leatherhead Food Research Association, Randalls Road, Leatherhead, Surrey KT22 7RY, UK (www.lfra.co.uk).
• Lactate model: Optiform Listeria Control Model: PURAC America, 111 Barclay Boulevard, Lincolnshire Corporate Center, Lincolnshire, IL 60069, USA.
• SAS: SAS Institute Inc., SAS Campus Drive, Cary, North Carolina 27513, USA.
• SPSS: SPSS Inc. Headquarters, 233 S. Wacker Drive, 11th Floor, Chicago, Illinois 60606, USA.
• @RISK and BestFit: Palisade Corporation, 31 Decker Road, New Field, New York 14867, USA.
• Tablecurve 2D: SPSS Inc. Headquarters, 233 S. Wacker Drive, 11th Floor, Chicago, Illinois 60606, USA.
• USDA Pathogen Modeling Program: PMP 6.0: USDA-ARS, 600 East Mermaid Lane, Wyndmoor, Pennsylvania 19038, USA (http://www.arserrc.gov/mfs/pathogen.htm).
10.7 References and further reading
BARANYI, J. (1997), Simple is good as long as it is enough (models for bacterial
growth curves). Food Microbiology, 14: 391–4.
BARANYI, J. and PIN, C. (2001), 'Modelling microbiological safety', in Tijskens,
L., Hertog, M. and Nicolai, B., Food Process Modelling, Woodhead
Publishing Ltd, Cambridge.
BARANYI, J. and ROBERTS, T. A. (1994), A dynamic approach to predicting
bacterial growth in food. International Journal of Food Microbiology, 23:
377–94.
BARANYI, J., ROBERTS, T. A. and MCCLURE, P. (1993), A non-autonomous
differential equation to model bacterial growth. Food Microbiology, 10:
43–59.
BARANYI, J., PIN, C. and ROSS, T. (1999), Validating and comparing predictive
models. International Journal of Food Microbiology, 48: 159–66.
BAZIN, M. J. and PROSSER, J. I. (1992), Modelling microbial ecosystems. Journal
of Applied Bacteriology Supplement, 73: 89S–95S.
BETTS, G. D. and EARNSHAW, R. G. (1998), Predictive microbiology for evaluating
food safety and quality. South African Food and Beverage Manufacturing
Review, Sept, 11–13.
BOX, G. E. P., HUNTER, W. G. and HUNTER, J. S. (1978), Statistics for Experimenters.
An Introduction to Design, Data Analysis and Model Building, John Wiley
& Sons, New York.
BUCHANAN, R. L., WHITING, R. C. and DAMERT, W. C. (1997), When is simple good
enough: a comparison of the Gompertz, Baranyi and three-phase linear
models for fitting bacterial growth curves. Food Microbiology, 14: 313–26.
CAYLEY, S., LEWIS, B. A. and RECORD, M. T. (1992), Origins of osmoprotective
properties of betaine and proline in Escherichia coli K-12, Journal of
Bacteriology, 174: 1586–95.
COLE, M. B. (1991), Predictive modelling – yes it is! Letters in Applied
Microbiology, 13: 218–19.
DAVIES, K. W. (1993), Design of experiments for predictive microbial modelling.
Journal of Industrial Microbiology, 12 (3–5): 295–300.
GARTHRIGHT, W. E. (1997), The three-phase linear model of bacterial growth: a
response. Food Microbiology, 14: 395–7.
GIBSON, A. M., BRATCHELL, N. and ROBERTS, T. A. (1988), Predicting microbial
growth: growth response of salmonellae in a laboratory medium as affected
by pH, sodium chloride and storage temperature. International Journal of
Food Microbiology, 6: 155–78.
HEITZER, A., KOHLER, H. E., REICHERT, P. and HAMER, G. (1991), Utility of
phenomenological models for describing temperature dependence of
bacterial growth. Applied and Environmental Microbiology, 57: 2656–65.
ICMSF (INTERNATIONAL COMMISSION FOR THE MICROBIOLOGICAL
SPECIFICATIONS FOR FOODS) (1996), Micro-organisms in Foods 5.
Microbiological Specifications of Food Pathogens, T. A. Roberts (Ed.),
Blackie Academic and Professional, London.
KILSBY, D. (1999), Presentation at the International Conference of the ICFMH,
Veldhoven, The Netherlands, 13–17 September.
KILSBY, D. C. and WALKER, S. J. (1990), Predictive Modelling of Microorganisms
in Foods. Protocols Document for Production and Recording of Data,
Campden Food and Drink Research Association, Chipping Campden.
LEGAN, D., VOYSEY, P. A. and CURTIS, P. S. (unpublished observations).
LEGAN, D., VANDERVEN, M., STEWART, C. and COLE, M. (2002), 'Modelling the
growth, death and survival of bacterial pathogens in foods', in Blackburn,
C. and McClure, P., Foodborne Pathogens, Woodhead Publishing Ltd,
Cambridge.
MCCLURE, P., COLE, M. and DAVIES, K. (1994), An example of the stages in
development of a predictive mathematical model for microbial growth: the
effects of NaCl, pH and temperature on the growth of Aeromonas
hydrophila. International Journal of Food Microbiology, 23: 359–75.
MCMEEKIN, T. A., OLLEY, J. N., ROSS, T. and RATKOWSKY, D. A. (1993), Predictive
Microbiology: Theory and Application, Wiley, New York.
NOTERMANS, S. and MEAD, G. C. (1996), Incorporation of elements of quantitative
risk analysis in the HACCP system. International Journal of Food
Microbiology, 30 (1–2): 157–73.
RATKOWSKY, D. A. (1993), Principles of non-linear regression modelling. Journal
of Industrial Microbiology, 12 (3–5): 195–9.
ROSS, T. (1999), Assessment of a theoretical model for the effects of temperature
on bacterial growth rate, in Predictive Microbiology Applied to Chilled
Food Preservation. Proceedings of Conference no. 1997/2 of Commission
C2 (16–18 June 1997), Quimper, France. Office for Official Publications
of the European Communities, Luxembourg, pp. 64–71.
ROSS, T. and SUMNER, J. L. (in press), A simple, spreadsheet-based, food safety
risk assessment tool. International Journal of Food Microbiology.
TODD, E. C. D. and HARWIG, J. (1996), Microbial risk analysis of food in Canada.
Journal of Food Protection, Supplement, 10–18.
VAN DAM, K., MULDER, M. M., TEIXEIRA DE MATTOS, M. J. and WESTERHOFF, H. V. (1988), A
thermodynamic view of bacterial growth, in Bazin, M. J. and Prosser, J. I.
(Eds.) Physiological Models in Microbiology, CRC Press Inc., Boca
Raton, FL, pp. 25–48.
VAN GERWEN, S. J. C. (2000), Microbiological Risk Assessment of Foods. Thesis,
Wageningen University, The Netherlands.
VAN GERWEN, S. J. C., DE WIT, J. C., NOTERMANS, S. and ZWIETERING, M. H. (1997),
An identification procedure for foodborne microbial hazards. International
Journal of Food Microbiology, 38 (1): 1–15.
VAN SCHOTHORST, M. (1997), Practical approaches to risk assessment. Journal of
Food Protection, 60 (11): 1439–43.
WALKER, S. J. and JONES, J. E. (1993), Protocols for data generation for predictive
modelling. Journal of Industrial Microbiology, 12 (3–5): 273–6.
WHO/FAO (2000), Report of the Joint FAO/WHO Expert Consultation on Risk
Assessment of Microbiological Hazards in Foods (FAO Headquarters,
Rome, Italy, 17¨C21 July 2000) World Health Organization/Food and
Agriculture Organization of the United Nations, Rome, Italy.
WIJTZES, T. (1996), Modelling the Microbial Quality and Safety of Foods, Thesis,
Wageningen University, The Netherlands.
ZWIETERING, M., DE KOOS, J., HASENACK, B., DE WIT, J. and VAN 'T RIET, K. (1991),
Modelling of the bacterial growth curve. Applied and Environmental
Microbiology, 57: 1094–101.
11
Microbiological criteria and microbiological risk assessment
T. Ross, University of Tasmania and C. Chan, Safe Food Production, Sydney, NSW
11.1 Introduction
Criteria related to the microbiology of foods may be applied with the intention
of ensuring quality or protecting public health. Ideally, criteria are established
only in response to a real need and crafted to prevent most effectively some
undesirable outcome. This approach implicitly suggests that existing criteria
related to food safety represent the outcome of deliberations that had identified a
hazard, and the magnitude of the risk associated with that hazard, and points or
stages in the farm-to-table continuum (used here to denote all products from the
point of harvest, catch or slaughter to the point of consumption) that were
critical to control of the risk. In
practice, approaches to setting criteria have varied (see Chapter 2; Adams and
Moss, 2000; Baird-Parker and Tompkin, 2000), creating an impediment to
international trade in foods because of inconsistency in regulations among
trading nations.
Microbiological risk assessment (MRA) may be considered to have its origins
in the development of fair rules for international trade in food. The General
Agreement on Tariffs and Trade (now the World Trade Organization) resolved
in 1995 that demonstration of unacceptable risk to human, plant or animal health
was the only reasonable basis for restrictions on trade in foods. Formal risk
assessment methods, already widely used in commerce, engineering, and
environmental and public health analysis, were advocated as the means for
comparing or evaluating those risks.
The potential to quantify foodborne public health risk, and/or to identify
rational and optimal strategies to reduce it, offers enormous scope for the design
of new food safety regulations that are 'outcome based', rather than prescriptive.
These two applications of microbiological risk assessment were recognised by
the Codex Alimentarius Commission (CAC, 1999), which described
microbiological risk assessment as 'a key element in ensuring that sound science is
used to establish standards, guidelines and other recommendations for food
safety to enhance consumer protection and facilitate international trade'.
Within the context of microbiological food safety risk assessment, this
chapter considers the history of the development of both food microbiology
criteria, and performance and process criteria. Discussion of the difficulties in
establishing meaningful and practicable criteria demonstrates how increasingly
risk-based approaches have been used to address those problems, culminating in
the use of contemporary risk assessment methods and tools. The use of those
approaches in the development of microbiological and process and performance
criteria applied to foods is discussed and exemplified, as are potential
applications of testing against microbiological criteria as an input to risk
assessments.
11.2 Types of criteria
Two types of criteria related to the microbiology of foods can be differentiated.
These are 'microbiological' criteria and 'process and performance' criteria
(Baird-Parker and Tompkin, 2000).
11.2.1 Microbiological criteria
The following definitions, drawn from CAC (1997) and ICMSF (1986),
illustrate the range of criteria related to the microbiological safety and quality of
food.
A microbiological criterion for food describes the acceptability of a product
or a food lot, based on the absence or presence, or a specified number, of
microorganisms, including parasites, and/or quantity of their toxins/metabolites,
per unit(s) of mass, volume, area or lot. Microbiological criteria are applied to
differentiate food of acceptable quality from food of unacceptable quality. Since
the advent of control systems such as hazard analysis critical control point
(HACCP), they are usually not considered as routine methods to test individual
units of food for compliance but as a means of verifying the performance of
HACCP plans.
Microbiological criteria can be further categorised according to the intended
application. A microbiological standard is a criterion that is part of a law or
regulation. A mandatory criterion is enforceable by the regulatory agency
having jurisdiction. Various national standards exist including those issued by,
for example, the US Food and Drug Administration and US Department of
Agriculture (NRC, 1985). The Codex Alimentarius Commission and the
European Economic Community (NRC, 1985; Brown, 2000; Kumagai, 2000;
Schalch and Beck, 2000) are examples of organisations that set international
standards that relate to the microbiology of foods. A microbiological
specification is a criterion that is applied as a condition of acceptance of a
food or ingredient by a food manufacturer or a public or private agency. Foods
that do not comply are subject to rejection.
A microbiological guideline is a criterion that is used by a manufacturer or
regulatory agency to monitor a food process or system. Guidelines aid in
determining, for example, whether or not microbiological conditions at critical
control points or for finished product are acceptable. Guidelines are usually
advisory but may be mandatory, depending on the source of the sample and
whether they are set by a manufacturer. A mandatory guideline set by a regulator,
i.e. one having the force of law, is, by definition, a standard.
Microbiological criteria can relate directly to the hazard or to other organisms
that, if present, are considered to correlate with the presence of the hazard (i.e.
'indicator organisms'), to other properties of the food that are believed to be
correlated with the presence of a hazard, or to loss of quality suggested by total
aerobic counts or detectable levels of spoilage compounds.
target organism for use in microbiological criteria is discussed in ICMSF (1986).
Examples from EU directives (EU, 2002) of microbiological criteria include the
absence (per 1 g) of Salmonella in cheeses made from raw and thermised milk
(Directive 94/46/EEC), or <300 faecal coliforms per 100 g of live bivalve
molluscs (Directive 91/492/EEC), or that no minced meat should contain more
than 5 × 10⁶ aerobic mesophilic bacteria or more than 500 Escherichia coli per
gram (Directive 94/65/EEC).
11.2.2 Process and performance criteria
Performance and process criteria are applied during the production of a food to
ensure that the food manufacturing process eliminates, or reduces to acceptable
levels, the risks of identified microbiological hazards. They are, thus, akin to
critical limits as applied to critical control points within HACCP plans or
analogous systems.² Often, the relationship between the criterion and the
microbiological basis underlying it may not be transparent. This sometimes
creates problems for regulators who have to enforce regulations and have to
explain the reason for them to food processors who may see them as no more
than bureaucratic impediments to their productivity! A performance criterion
has been defined as 'the outcome of one or more control measures at a
[processing] step or a combination of steps intended to ensure the safety of a
food' (Baird-Parker and Tompkin, 2000). Examples include pasteurisation, or
the design of heating processes to achieve a 10¹²-fold reduction in the
probability that a viable C. botulinum cell is present in a food (i.e. the
'botulinum cook'). In general, time–temperature criteria for thermal processing
are established on the basis of their equivalence to the level of inactivation of
target organisms at some reference temperature.

² For example, other preventive food safety programmes that do not necessarily follow the full seven
steps of HACCP, such as the 'Food Safety Plan' concept promoted by Australian food regulatory
authorities.
In Australia a tolerance of no more than a 10-fold increase in the number of
E. coli on carcasses between slaughter and delivery to retail is being evaluated as
an alternative to the traditional criterion that specifies that the product surface
temperature must not exceed 5 °C after chilling. The problem with the latter, and
similar, approaches is that even very short periods of temperatures above the
5 °C limit would, legally, render the product non-compliant. In practice, on a hot
day, carcase surface temperatures could rise during the short time that it took to
unload even a few carcases from the transport vehicle to the cold room of a retail
store (Sumner and Krist, 2002). Clearly, a deviation of a few degrees for a few
minutes does not render a carcase instantly unsafe, nor would it significantly
increase the number of pathogens, if present.
The traditional approach is a process criterion intended to prevent
unacceptable increase in E. coli or Salmonella (B. Shay, pers. comm., 2002),
based on their minimum temperatures for growth, while the alternative approach
is a performance criterion. To support the alternative criterion, a predictive
model was used to define time–temperature combinations that would limit E.
coli proliferation to less than 3.3 generations. To provide guidance to
transporters a table of time–temperature combinations and the corresponding
increase in E. coli was developed, effectively a matrix of process criteria, to
support the performance criterion.
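A matrix of time–temperature combinations of the kind described above can be generated from a simple growth model. The sketch below uses a square-root-type temperature dependence with illustrative parameter values; the notional minimum growth temperature and the reference generation time are assumptions made for this example, not the values used by the Australian authorities.

# Illustrative sketch: time-temperature combinations giving < 3.3 generations of E. coli.
# Assumed parameters (for illustration only): T_min = 7 °C and a 30 min generation
# time at 35 °C, with a Ratkowsky-type square root temperature dependence.
import math

T_MIN = 7.0           # notional minimum growth temperature (°C)
GT_REF_MIN = 30.0     # assumed generation time at the reference temperature (min)
T_REF = 35.0          # reference temperature (°C)

def generation_time(temp_c: float) -> float:
    """Generation time (min) scaled from the reference using sqrt(rate) ∝ (T - Tmin)."""
    if temp_c <= T_MIN:
        return math.inf
    return GT_REF_MIN * ((T_REF - T_MIN) / (temp_c - T_MIN)) ** 2

def max_hours(temp_c: float, max_generations: float = 3.3) -> float:
    """Longest holding time (h) at a constant temperature before the limit is exceeded."""
    return max_generations * generation_time(temp_c) / 60.0

for temp in (10, 15, 20, 25, 30, 35):
    print(f"{temp:>2} °C: no more than {max_hours(temp):5.1f} h")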
The Process Hygiene Index (Gill et al., 1991) is another example of a process
criterion. In this approach, the cumulative effect of time and temperature during
processing of meat carcasses was related to potential growth of E. coli as
determined by reference to a predictive model. Performance criteria, based on
various percentile values of levels of performance of existing processing plants,
were established and expressed in terms of an index related to the predicted
number of generations of growth of E. coli. Critical pH and water activity limits
proposed to guarantee shelf stability of uncooked fermented meats have been
also defined (Leistner, 1995; European Economic Directive, 77/99 in EU, 2002)
and are examples of process criteria.
11.3 Key issues in the use of microbiological criteria
Microbiological criteria can be used to design products and processes and to
indicate the required microbiological status of raw food materials, ingredients or
end-products at any stage of the farm-to-table chain, as appropriate. They can be
developed by regulators charged with protecting consumer health and ensuring
the quality of foods, or by food processors to formulate design requirements or
to examine end-products for verification of a HACCP scheme (CAC, 1997).
Thus, the uses of criteria fall broadly into those concerned with the role of
regulatory authorities in protecting consumers and stated as legal requirements,
and those of industry in meeting those legal requirements and also for
establishing and maintaining internal targets. Thus, industry uses of criteria
include purchasing agreements and establishment and verification of HACCP
plans.
Baird-Parker and Tompkin (2000) state that they 'are not aware where
significant foodborne hazards to health have been reduced through application
of microbiological criteria to a foodstuff as the primary means of control'. The
oft-cited Sir Graham Wilson opined: 'Bacteriologists are better employed in
devising means to prevent or overcome contamination than in examining more
and more samples. Processing concerns the whole volume of food, sampling
only a minute fraction of it' and that 'control of processing is of far greater
importance than examining finished product' (Wilson, 1970). This view, and its
application using HACCP or similar approaches, is now almost universally
endorsed (EU, 2002; Adams and Moss, 2000).
Microbiological criteria cannot ensure the safety of foods, but may serve a
role as reference or target values (Mossel et al., 1995), as tools that can be used
in assessing the safety and quality of foods, or as means of assessing the
performance of an HACCP programme. Process and performance criteria are
tools appropriate for ensuring product safety, and are consistent with the
HACCP approach. In international trade, however, microbiological criteria are
often used as a means of assessing product safety at the 'port of entry' because
usually little is known of the methods of production or the efficacy of food
safety systems operating in the country of origin. It is through this application of
microbiological criteria that many problems have arisen.
11.3.1 Early uses of microbiological criteria
Baird-Parker (2000) traces modern food law to the nineteenth century, viz. the
UK Adulteration of Food Act of 1860, and notes that legislation concerning food
hygiene was first introduced at the beginning of the twentieth century. Food
safety legislation continued to evolve during the twentieth century, including the
introduction of the 'botulinum cook', and pasteurisation. Concerns over
salmonellae and other pathogens led to development of microbiological
standards. As these criteria increased in number it became increasingly apparent
that many pieces of corresponding legislation enacted by individual countries
were inconsistent, creating difficulties in international trade (NRC, 1985; Baird-
Parker and Tompkin, 2000; Pourkomailian, 2000). Moreover, it was seen that
many had no objective or scientific basis. For example, NRC (1985) stated that
'lack of sound guiding principles for the establishment of microbiological
criteria has, at least in part, been responsible for the large number of standards
and guidelines (particularly at the state and local level) that are impractical,
unenforceable, and without uniformity'. Additionally, microbiological criteria
have been, and largely still are, based on technical feasibility (CAC, 1997;
ICMSF, 1998) rather than on an objective assessment of need based on scientific
risk assessment (Baird-Parker and Tompkin, 2000). This situation has,
presumably, arisen because the establishment of objective and rational criteria is
fraught with difficulty as is discussed below.
While there are situations in which microbiological criteria do have a useful
role, such as in providing guidance on limits for safety (i.e. providing a 'line in the
sand'), verification of HACCP plans and point of entry testing, NRC (1985)
succinctly identified the problem of microbiological criteria: 'When
microbiological criteria for foods are not based on definite needs, sound principles,
and statistically solid background information, they may become a burden to the
food industry, give a false sense of security to the public and lessen confidence in
the ability of the regulatory agencies to regulate food supply'. As an example,
Adams and Moss (2000) cite regulations introduced in Oregon in the USA
concerning the microbiological quality of ground meat. After the regulations had
been in effect for a number of years, their effectiveness was assessed. It was found
that the standards had produced no significant improvement in quality but had
resulted in significantly increased costs owing to rejection of material not meeting
the standard and the costs of testing, and had effectively resulted in consumers
being misled. The standards were eventually revoked.
In the face of growing national and international concern that microbiological
criteria for foods often were not based on sound principles, a number of
organisations considered how rational and objective microbiological criteria for
foods could be established (NRC, 1985; ICMSF, 1986; CAC, 1997). Those
organisations addressed the question of how to establish criteria and formulated
rules for their development.
11.3.2 Principles for the establishment of microbiological criteria
The Codex Alimentarius Commission (CAC, 1997) presented³ principles for
development of microbial criteria for foods that are generally consistent with
those of NRC (1985). Given the status of Codex as an international food
standards setting organisation under the auspices of the United Nations, their
recommendations are described here. The principles expressed for establishment
and specification of microbiological criteria implicitly identify the problems in
setting them.
CAC (1997) resolved that microbiological criteria for foods should be based
on scientific analysis and advice and, where sufficient data are available, a risk
assessment of the foodstuff and its use. Thus, to develop a criterion it is
necessary to:
• Identify any evidence of hazard to health, whether actual or potential.
• Consider the microbiological quality of the raw materials.
• Understand the effects on those microorganisms of any food processing steps
that occur prior to consumption including commercial processing and home
preparation.
• Establish the likelihood of additional microbial contamination and the
consequences of the presence or growth of those contaminants in the food.
• Consider the intended use of the food.
• Consider the relative susceptibilities of expected consumers of the food.
• Establish the costs of implementing a criterion in relation to its benefits.

³ The Codex guidelines were developed jointly with the International Commission on
Microbiological Specifications for Foods.
As has been discussed in earlier chapters, the above list features many of the
aspects of a microbiological food safety risk assessment including hazard
identification (Chapter 4), dose–response assessment (Chapter 5) and exposure
assessment (Chapter 6). Notably, the recommendations do not suggest
appropriate levels of protection of public health, or some other target with
which the criteria should comply. The above considerations reflect one of the
main reasons for the introduction of risk assessment methodology to microbial
food safety, i.e. the need to draft objective microbiological criteria to harmonise
international trade in food.
11.3.3 Specification of criteria
If consideration of the above objectives indicates that a criterion is desirable and
feasible, there are requirements for the unambiguous specification of criteria
themselves. These, based on NRC (1985) and CAC (1997), are that the criterion:
• Includes a clear description of the food to which the criterion applies.
• Includes a clear description of the pathogen/toxin of concern.
• Details the analytical methods to be used to detect and/or quantify the
pathogen/toxin of concern.
• Details the number and size of samples to be taken and the point in the farm-
to-table continuum to which the criterion applies.
• Details the limits to be applied and the proportion of samples that must
conform to these limits for the batch to be considered acceptable.
The requirements for each of these elements of a microbiological criterion are
discussed in various texts and reviews (e.g. NRC, 1985; ICMSF, 1986; Adams
and Moss, 2000). Note, particularly, that the last points consider the number and
size of samples, and the proportion of samples that must comply. These
requirements point implicitly to the need to consider risk, i.e. the probability of
exposure and the likely severity of illness, and are important considerations in
specification of sampling plans appropriate to the risk. Note also that the
specifications require that the point in the farm-to-table chain at which the
criterion applies must be nominated. This points to one of the difficulties of
establishing microbiological criteria in comparison to chemical safety criteria,
i.e. microbial hazard levels and the attendant risk can increase or decrease
dramatically during the normal handling and processing of many foods (ICMSF,
1998; Lupien and Kenny, 1998).
11.4 Dealing with variability, uncertainty and hazard
severity: sampling plans
In the 1970s, before the HACCP concept was widely implemented, the
International Commission for Microbiological Specifications for Foods
considered the needs of compliance testing. In response to the need for
objective criteria the Commission developed a series of sampling schemes
appropriate to different levels of 'risk' (ICMSF, 1974, 1986). In that process,
factors affecting risk were recognised, including the potential for change in the
level of the hazard (pathogen or toxin) between manufacture and consumption,
the levels of the target beyond which a substantial likelihood of health or
'utility' hazard exists, and the severity of the consequence of exposure to the
hazard (e.g. quality issues through to severe illness). In addition, these variations
were coupled with attributes sampling plans whose probability of detecting a
pathogen, if present, in a lot of food could be determined. Thus, the ICMSF
scheme to some extent considered both variability and its consequences, and
uncertainty in the result of the test. Quantification and differentiation of the
effects of uncertainty and variability continue to be areas of concern in
microbiological risk assessment (CAC, 1997; Cassin et al., 1998b; Nauta, 2000).
11.4.1 Variability and sampling plans
In testing against a microbiological criterion it is assumed that the analytical
results obtained are an accurate reflection of the quality of the whole batch of the
food. Microorganisms are rarely distributed evenly or randomly throughout a
food. One way to increase confidence in the representativeness of the sample is
to increase the size, or number, of samples but this carries an increased cost and
requires more product to be consumed in testing. The only sampling plan that
can provide 100% confidence requires sampling of the entire batch. Currently,
all test methods are destructive, so testing of the total batch is not feasible. All
sampling plans are, in consequence, a compromise between what is practicable
and yet offers reasonable confidence in the representativeness of the test result.
Thus, all carry some probability of an incorrect result, which introduces
additional types of risk. The processor's risk is that the test result leads to
rejection of acceptable product, while the consumer's risk is the possibility of
unsafe product passing undetected and being released for sale.
The mathematics of sampling plan design is discussed in detail in ICMSF
(1986). Two main types of sampling plan are recognised, and are described
below.
Variables sampling plans
Variables plans use the full range of numerical data describing microbial loads
on the foods of interest and are based on the known mean and standard deviation
of log-transformed counts of the product being tested. Consequently, they can be
applied only to situations in which the microbial loads are known to be
distributed log-normally. Variables plans are more appropriate to producers who
regularly perform microbiological testing of their products. Such information is
rarely available to a regulatory authority or a purchaser who, consequently, will
usually default to attributes plans.
Attributes sampling plans
Attributes sampling plans test against a single criterion or attribute, such as the
presence of Salmonella in 25 g or the proportion of sample units that contain
greater than 100 cfu g⁻¹ of the target organism. Unlike variables sampling plans,
the magnitude of the deviation between the number of microorganisms in the
sample and that specified in the criterion is not considered ¨C samples simply
either pass or fail. Attributes plans are described as two-class or three-class
plans. Both types can be characterised by three elements:
1. n, the number of samples taken from the batch of product being tested.
2. m, the attribute or condition that is being assessed.
3. c, the maximum proportion (or number) of the samples that do not satisfy
the requirement (m) but for which the batch is still considered to be
acceptable.
In three-class sampling plans a fourth characteristic is needed:
4. M, an attribute or condition which, if exceeded, is completely unacceptable
and that if exceeded in any sample leads to rejection of the entire batch.
Sampling plans are usually described in terms of these three or four values. For
example, an n = 5; m = 100 cfu g⁻¹, c = 2 two-class plan involves testing five
samples from a lot. The lot is acceptable if at least three out of five samples have
less than 100 cfu g⁻¹. If the results for five samples were: <10, 80, 50, 150 and
45 000 cfu g⁻¹, the two-class sampling scheme presented would lead to acceptance
of the batch because no more than two samples exceeded m. If m were reduced to
10 cfu g⁻¹, or c reduced to 1, or if the sampling plan were extended to a three-class
plan with M = 1000 cfu g⁻¹, the same batch would be considered defective and
rejected. Thus, the specification of a sampling plan dictates the probability of
acceptance of a lot and, hence, the stringency of a criterion.
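The pass/fail logic of two- and three-class attributes plans can be expressed directly in a few lines of code. The sketch below simply applies n, m, c (and, for a three-class plan, M) to a set of counts; the numbers reused in the example calls are those of the worked example above, with the '<10' result represented as 9.

# Illustrative sketch: applying two-class and three-class attributes plans to sample counts.

def lot_acceptable(counts, m, c, M=None):
    """Return True if the lot passes the plan.

    counts: observed counts (cfu/g) for the n sample units
    m:      marginal limit; c: maximum number of samples allowed to exceed m
    M:      absolute limit for a three-class plan (any sample > M rejects the lot)
    """
    if M is not None and any(x > M for x in counts):
        return False
    marginal = sum(1 for x in counts if x > m)
    return marginal <= c

results = [9, 80, 50, 150, 45_000]                    # the five sample results from the text
print(lot_acceptable(results, m=100, c=2))            # True  - two samples exceed m
print(lot_acceptable(results, m=10,  c=2))            # False - four samples exceed m
print(lot_acceptable(results, m=100, c=1))            # False - c reduced to 1
print(lot_acceptable(results, m=100, c=2, M=1000))    # False - one sample exceeds M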
As discussed more fully later (Section 11.6), the values chosen for m and M
reflect tolerance levels, i.e. levels of the target organism considered to represent
an unacceptable level of hazard, but ICMSF (1986) notes that for a pathogen m
(or M) may be zero or a small number corresponding to the level of detectability
in a test. If the target organism represents a health hazard, m (or M) values should
relate the levels of bacteria to the probability or severity of illness using
epidemiological data, laboratory data, animal feeding studies, etc., in combination.
ICMSF (1986) also notes that the maximum amount of food likely to be
eaten at any one time, and the sensitivity of the group likely to eat the food,
should also be considered.
11.4.2 Reliability of sampling schemes
While a more stringent plan can be expected to characterise more reliably the
microbiological status of a given food, unless 100% of the food is tested, no
scheme is 100% reliable because microorganisms are not evenly distributed
within a food. As sampling plans are an integral part of microbiological criteria,
their stringency should be expected to reflect the severity of the hazard. Before
considering this point, it is illustrative to consider the probability of 'success' of
a sampling plan.
The stringency of a sampling plan depends on n and c, i.e. the larger the value
of n at a given value of c, the better the food 'quality' must be to have the same
probability of being accepted by the sampling plan. The performance of a
sampling scheme can be described by its operating characteristic curve, often
abbreviated as OC curve. The OC curve relates the probability of acceptance of
a lot of given quality as a function of the sampling scheme. An example is given
in Fig. 11.1.
Fig. 11.1 Operating characteristic curve showing the effect of increasing the number of
samples, when c = 0. The dashed curves relate the probability of acceptance of the batch
under a given sampling plan to the true probability of drawing a contaminated sample
from the batch.

⁴ It must be recognised that the target microorganisms present in a food may not be uniformly and
randomly distributed, but may exist in clumps. If non-randomness occurs, an appropriately large
sample size may be able to overcome the problems potentially caused. Jarvis (1989) states that only
for low-density populations will randomness be a reasonable assumption, but that the calculation
advantages associated with the assumption of random dispersion have often been considered to
outweigh the disadvantages of rejecting the hypothesis for randomness.

Figure 11.1 depicts the effect of the stringency of the sampling scheme on the
probability of detecting a contaminated batch. It illustrates that when actual
contamination levels in the batch are low (i.e. very few samples would contain a
cell of the target organism)⁴ the probability of accepting the batch is always high,
regardless of the scheme chosen. This illustrates the difficulty of using end-product
testing to manage the threat of low infectious dose pathogens. It also illustrates the
futility of end-product testing when very large production volumes are involved.
Even if only a small proportion of units are contaminated, the effects can be
catastrophic, as a 1998/99 outbreak of listeriosis in the USA demonstrated (Anon.,
1999). In that case it was believed that contamination levels were low and
sporadic, but because of the scale of production over 100 people became seriously
ill over a period of several months before the outbreak was recognised and
resolved. Figure 11.2 illustrates the effect of increasing c, the number of 'failures'
tolerated before the batch is rejected, and the true level of contamination on the
probability of accepting an unacceptably contaminated batch.
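For an attributes plan the operating characteristic curves of Figs 11.1 and 11.2 follow from the binomial distribution: if each sample unit is contaminated independently with probability p, the probability of accepting the lot is the probability that no more than c of the n units drawn are contaminated. A minimal sketch of that calculation, under that independence assumption, is given below.

# Illustrative sketch: probability of acceptance for a two-class attributes plan,
# assuming each sample unit is contaminated independently with probability p.
from math import comb

def p_accept(p: float, n: int, c: int = 0) -> float:
    """Binomial probability that no more than c of n sample units are contaminated."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Points on OC curves comparable with Figs 11.1 and 11.2:
for n in (5, 10, 30, 60):
    print(f"n={n:2}, c=0, 10% of units contaminated: P(accept) = {p_accept(0.10, n):.2f}")
for c in (0, 1, 2):
    print(f"n=10, c={c}, 20% of units contaminated: P(accept) = {p_accept(0.20, 10, c):.2f}")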
These figures illustrate that there is no practical method of guaranteeing the
microbiological safety of every item of food consumed. Intuitively, however,
differences in the processing, distribution, microbial ecology, frequency of
consumption and intended use of foods lead to differences in risk. While
sampling is not the method of choice for quality and safety assurance of foods at
the manufacturing level, in a situation where testing is used to assess the quality
of a food (e.g. port of entry), there is a need to acknowledge these differences in
risk with different intensities of sampling. This need was recognised by ICMSF
(1974, 1986) who developed a series of sampling plans.

Fig. 11.2 Operating characteristic curve showing the effect of increasing the number of
positive samples accepted before rejection of the batch for an n = 10 two-class scheme.
11.4.3 Sampling plans and risk
ICMSF (1974, 1986) stated that a sampling plan must consider:
• The type and seriousness of the hazards implied by the microorganisms for
which the test is made.
• The conditions under which the food is expected to be handled and consumed
after testing.
This led them to develop 15 different categories of risk, and to recommend
sampling schemes of increasing stringency appropriate to each of them, as
shown in Table 11.1.
Table 11.1 Suggested sampling plan of ICMSF (1986) based on 15 cases of stringency
according to risk-affecting factors

Type of hazard                     Conditions in which food is expected to be
                                   handled and consumed
                                   Reduce degree     Cause no change    May increase
                                   of hazard         in hazard          hazard

No direct health hazard
Utility (e.g. general              Case 1            Case 2             Case 3
contamination, reduced             3-class           3-class            3-class
shelf-life and spoilage)           n = 5; c = 3      n = 5; c = 2       n = 5; c = 1

Health hazard
Low, indirect (e.g.                Case 4            Case 5             Case 6
indicator organism)                3-class           3-class            3-class
                                   n = 5; c = 3      n = 5; c = 2       n = 5; c = 1

Moderate, direct,                  Case 7            Case 8             Case 9
limited spread                     3-class           3-class            3-class
                                   n = 5; c = 2      n = 5; c = 1       n = 10; c = 1

Moderate, direct,                  Case 10           Case 11            Case 12
potentially extensive spread       2-class           2-class            2-class
                                   n = 5; c = 0      n = 10; c = 0      n = 20; c = 0

Severe, direct                     Case 13           Case 14            Case 15
                                   2-class           2-class            2-class
                                   n = 15; c = 0     n = 30; c = 0      n = 60; c = 0

Table 11.1 is a form of risk-based decision-tree, and its development by the
ICMSF represents an early form of semi-quantitative risk assessment. Other
schemes have been developed that attempt to include the idea of risk (i.e.
probability of exposure to a disease-causing dose and severity of illness) to aid
microbial food safety decisions (Corlett and Pierson, 1992; Huss et al., 2000;
Ross and Sumner, 2002). The latter schemes, however, are directed toward
prioritising risk management actions rather than establishing testing regimes.
11.5 Microbiological criteria and food safety assurance: food
safety objectives
Numerous problems with microbiological criteria have been recognised and, as
indicated above, remain. Currently, if nations use different management
strategies for the control of a common problem, the only way to determine
the compliance of imports is some form of point of entry sampling programme.
These sometimes lead to disputes between nations. For example, differences in
approach to management of the risk of Listeria monocytogenes are evident
between some European nations and the USA. Some nations impose a 'zero
tolerance'⁵ for the presence of L. monocytogenes in ready-to-eat foods, while
others adopt a more liberal approach.⁶ European companies exporting high-
value food products can suffer severe financial loss if their products are rejected
at the US port of entry under a more stringent set of criteria than applies in the
country of origin.
The wide adoption of the HACCP system promises to obviate many of the
criticisms of microbiological standards. In the HACCP paradigm, testing against
standards is used only to verify the performance of the HACCP plan, not the
integrity of individual units or batches. Just as HACCP offers to obviate the need
for compliance testing, it was recognised that if the equivalence of HACCP, or
analogous, food safety assurance programmes used in different trading nations
could be demonstrated, the problems and expense of port-of-entry testing could
be considerably reduced. Thus, risk assessment techniques were proposed also
as a means of demonstrating the degree of equivalence of microbial food safety
control measures between nations (ICMSF, 1998; Hathaway, 1999). The needs
identified above for objective regulations and for 'harmonisation' of regulations
governing international trade in foods, led to calls to apply the principles of
formal risk assessment to microbial food safety management (e.g. Royal
Society, 1992; ILSI, 1993; CAST, 1994) which were endorsed in the Sanitary and
Phytosanitary (SPS) Agreement at the conclusion of the Uruguay Round of
GATT in 1995.
⁵ Zero tolerance is jargon for a sampling scheme that allows no positive sample for the organism of
concern, usually in a relatively large sample unit, e.g. multiple samples of 10 or 25 g.
⁶ Several studies (e.g. CX/FH, 1999) have concluded that the incidence of listeriosis in nations with a
'zero tolerance' policy is not significantly different from those with more 'liberal' approaches. For
example Canada and/or Germany, which do not have a complete zero tolerance policy, have about
the same per capita incidence of listeriosis as the USA, Australia or Italy, which do have zero
tolerance policies for ready-to-eat products that support the growth of L. monocytogenes.
11.5.1 Food safety objectives
To establish microbiological food safety criteria, risk managers must determine
what constitutes a tolerable risk, i.e. how often will someone become ill, and
how badly (ICMSF, 1998; Teufel, 1999). Thus, ICMSF (1998) noted that as
techniques in microbial risk assessment develop, risk managers will have to
analyse and interpret risk distributions that take into account both the inherent
variability of biological systems and the uncertainty of the data available. For
example, instead of stating that there is a zero tolerance for a specific pathogen,
a risk-based criterion might more accurately indicate that > 99% confidence is
required that the level of the target pathogen is <1 per kilogram. Importantly, it
must not be forgotten that risk is not borne equally by all members of the
population. Consideration of this variability, in deliberations about 'tolerable'
risk, must also include the risk experienced by those with higher than 'average'
exposure or lower than 'average' resistance to the hazard.
That statement of acceptable level and frequency of contamination becomes
the 'food safety objective' (see also Chapter 9). A food safety objective (FSO) is
defined as a statement of the frequency or maximum concentration of a
microbiological hazard in a food considered acceptable for consumer protection.
As such, an FSO may be a microbiological criterion.
Implicit, however, in an FSO is the concept of an appropriate level of
protection of public health. It is, in theory, possible to translate the FSO into a
tolerable level of foodborne illness. The FSO concept was developed by ICMSF
(van Schothorst, 1998) who recommend several steps for the management of
microbiological hazards in food based on the application of existing Codex
documents. The steps include the conduct of a risk assessment and an
assessment of risk management options, the establishment of an FSO which
should include a quantitative description, and confirmation that the FSO is
achievable by application of GHP and HACCP. Only where appropriate is it
proposed that a microbiological criterion be established, in accordance with
principles outlined earlier. Van Schothorst (1998) also states that FSOs are to be
established by government agencies, while elucidation of the complementary
HACCP requirements is the province of industry.
Two uses of risk assessment for the assurance of microbial food safety can
now be differentiated.⁷ The first of these broadly involves providing decision
support for setting of objective microbiological criteria, i.e. limits for the
numbers of organisms, or their toxins, or their frequencies of occurrence, in
foods that are considered tolerable, and those that are not. A second aspect
involves use of the tools and approaches of risk assessment to determine where
process and performance criteria will most effectively minimise public health
risk and to determine what those criteria should be to achieve the FSO.
⁷ Other uses that have been recognised include the identification of risk management options,
prioritisation and allocation of resources for food safety management.
11.6 Using microbiological risk assessments to set
microbiological criteria
Jarvis (1989) considered that to establish microbiological criteria, a decision has
to be made concerning the maximum colony count that could be permitted in
any circumstance, i.e. the values chosen for m or M. In the case of food quality,
this level is the minimum spoilage level, but in the case of food safety, it was
considered to be the minimum infective dose (MID). It is now more widely
considered, however, that all infectious pathogens have a MID of 1, i.e. that each
cell of a pathogen must be considered to have the potential to cause disease (see
Chapter 5; Holcomb et al., 1999; WHO/FAO, 2000, 2001). That potential,
expressed as the probability of one cell of the pathogen causing illness in a
consumer, is an index of the virulence of a pathogen.
Many factors affect the probability that a single pathogenic cell could cause
disease. Thus, to establish meaningful microbiological criteria for protection of
consumer health, it is necessary to understand the range of probability and
severity of infection for a given dose of microorganisms.
Definition of the dose–response relationship for foodborne pathogens is a
complex task. The response to a given dose is affected by the consumer's
susceptibility, the strain of the pathogen, the effects of processing and storage
conditions on the physiology of the cell, and the interaction of the pathogen, the food
that harbours it and the physiology of the consumer at the time of ingestion. In
addition, in terms of public health risk, the potential for secondary infections must
also be considered.
11.6.1 Dose and hazard severity
The dose–response relationship for a foodborne microbial infection is considered
not to have a threshold, i.e. if a pathogen is present there is always a finite risk of
foodborne illness. Consumers may be unwilling to tolerate any risk, but current
consumer preferences for less-processed foods with fewer preservatives create
a paradox. Zero risk would require much greater levels of processing to kill or
completely inhibit pathogens. As with all risks, there are costs and benefits and
the concept of tolerable risk must first be accepted by all stakeholders.
Explaining the nature of the costs and benefits, and the magnitude and severity
of the risks being discussed, is a perfect example of the need for risk
communication (see Chapter 8).
Several groups (Farber et al., 1996; Buchanan et al., 1997; Bemrah et al.,
1998; Lindqvist and Westöö, 2000; FDA/USDA/CDC, 2001) have attempted to
develop dose–response relationships for L. monocytogenes. WHO/FAO (2001)
developed an exponential dose–response model that is generally consistent with
others presented. It is used here to illustrate the magnitude of risk. Those authors
used symptomatic listeriosis as a disease end-point and concluded that, for an
average consumer ingesting a single cell of L. monocytogenes of average
virulence, the risk of infection was approximately 1 in 10¹⁴.
To place that estimate into perspective, if an average consumer ingested a
50 g meal containing 100 cfu L. monocytogenes g⁻¹, their risk of infection
would be approximately one in 20 000 million. Most consumers eat several
meals a day that could potentially be contaminated with L. monocytogenes,
suggesting that the total lifetime potential exposure is of the order of 100 000
meals. Thus, the total lifetime risk of infection – if each of these meals were
contaminated at the above level – would be 1 in 200 000. If only one in ten
meals was contaminated at that level, the total lifetime risk of listeriosis would
be 1 in 2 000 000.
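The arithmetic behind these figures follows from the exponential dose–response model, for which the per-meal probability of illness is approximately r × dose at low doses. The sketch below reproduces the calculation using r ≈ 10⁻¹⁴ per cell as quoted above; the serving size, contamination level and number of lifetime meals are those of the example in the text.

# Illustrative sketch of the exponential dose-response arithmetic quoted in the text.
import math

r = 1e-14                 # probability of illness per ingested cell (average consumer/strain)
serving_g = 50            # meal size (g)
conc_cfu_per_g = 100      # contamination level (cfu/g)
lifetime_meals = 100_000  # potentially contaminated meals over a lifetime

dose = serving_g * conc_cfu_per_g                     # 5000 cells
p_meal = 1 - math.exp(-r * dose)                      # ~ r * dose = 5e-11
p_lifetime_all = 1 - (1 - p_meal) ** lifetime_meals   # every meal contaminated
p_lifetime_tenth = 1 - (1 - p_meal) ** (lifetime_meals // 10)

print(f"risk per meal:                 ~1 in {1/p_meal:,.0f}")           # ~1 in 20 000 million
print(f"lifetime risk (all meals):     ~1 in {1/p_lifetime_all:,.0f}")   # ~1 in 200 000
print(f"lifetime risk (1 in 10 meals): ~1 in {1/p_lifetime_tenth:,.0f}") # ~1 in 2 000 000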
As noted earlier, the idea of an average consumer, or average exposure, is
misleading and is one of the potential pitfalls of the use of stochastic modelling
methods. Often foodborne disease arises from the convergence of unusual
circumstances, and identification and estimation of the frequency of these will
provide greater insight and be of greater concern than average circumstances.
For example, it is known that most cases of listeriosis occur in consumers with
identifiable predisposing factors. Some consumers are up to 1000-fold more
susceptible to listeriosis than the population average (Peters, 1989; Jurado et al.,
1993; Rocourt, 1995). Similarly, inter-strain virulence of L. monocytogenes
varies by up to 1000-fold (Stelma et al., 1987; Pine et al., 1990, 1991). Thus,
certain circumstances could lead to a 10⁶-fold greater risk than estimated by the
'average', indicating both the wide variability in risk, and also the value of
representing that variability through the use of stochastic models.
representing that variability through the use of stochastic models.
As noted in Section 11.3.2, a microbiological criterion should consider the
intended consumer of the food. For example, baby foods or special dietary
products for the elderly or immunocompromised should attract more stringent
criteria because they are intended for consumers known to be more susceptible to a
range of parenteral infections. Consideration of all of these factors should be
undertaken in the hazard characterisation phase of microbiological risk assessment.
11.6.2 Developing measures of ¡®equivalent¡¯ food safety risk
The above discussion suggests that, by using the concept of risk, two approaches
to setting criteria could be pursued. One would seek to limit the level of
contaminants in any product, while the other would limit the frequency and level
of contamination to achieve the required level of public health protection. The
two approaches need not be mutually exclusive. Thus, while the establishment
of a microbiological food safety criterion relies heavily on the hazard
characterisation step of risk assessment, understanding the routes and probability
of exposure to microbiological hazards – the aim of exposure assessment (see
Chapter 6) – might also be used to establish criteria. But how realistic a proposal
is this?
From the dose–response relationship for a foodborne pathogen it is possible
to derive a series of combinations of contamination levels and exposure
frequencies that lead to the same probability of illness per meal. We will call
these sets of combinations 'iso-probabilities', but a more useful measure would
be 'iso-risks' – those combinations of contamination level and severities of
illness that are considered equivalent. To compare the severity of the disease
resulting from exposure from one hazard with that from another, a common
measure of severity is needed.
A useful measure of disease severity is the Disability Adjusted Life Years
(DALY) concept that was originally developed by Murray and Lopez (1996) and
adopted by the World Health Organization to inform global health planning
(AIHW, 2000). The DALY is a measure of the years of healthy life lost due to
illness or injury: one DALY is one year of 'healthy' life lost due to sickness or,
in extreme cases, death. DALYs are calculated as the sum of years of life lost
due to premature death (YLL) and the equivalent years of 'healthy' life lost due
to poor health or disability (YLD). The YLD considers the number of years that
a disability is endured weighted according to the severity of the disability.
Combination of the 'iso-probabilities' with a weighting for disease severity, for
example based on the DALY characteristic of the hazard, provides a basis for
'iso-risks'. Were it possible to establish a universal tolerance level for the risk of
foodborne illness, the concept of the iso-risk provides a means of setting
product- and pathogen-specific specifications that lead to equivalent levels of
risk for all foodborne hazards.
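As a simple numerical illustration of the calculation, the sketch below sums YLL and severity-weighted YLD for a hypothetical outbreak. All of the case numbers, durations and disability weights are invented for the purpose of the illustration; they are not taken from the WHO burden-of-disease tables.

# Illustrative sketch: DALY = YLL + YLD for a hypothetical foodborne outbreak.
# All numbers are invented for illustration.

cases = [
    # (number of cases, years of life lost per case, years lived with disability, disability weight)
    (950, 0.0, 5 / 365, 0.07),   # mild gastroenteritis, ~5 days, low disability weight
    (45,  0.0, 0.5,     0.30),   # severe illness with six months of sequelae
    (5,   25.0, 0.0,    0.0),    # deaths, each with 25 years of life lost
]

yll = sum(n * life_lost for n, life_lost, _, _ in cases)
yld = sum(n * years * weight for n, _, years, weight in cases)
print(f"YLL = {yll:.1f}, YLD = {yld:.1f}, DALYs = {yll + yld:.1f}")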
Figure 11.3 presents iso-risks for two pathogens with different dose–response
relationships⁸ under the assumption that the dose–response relationship is well
described by an exponential model. The iso-risk shown represents one illness per
1 million serves of a meal, i.e. contamination levels above the critical value are
predicted to lead to more than one illness per million meals for the
contamination levels shown. In the example given, the difference in infective
dose of the pathogens requires that for an equivalent level of contamination, the
pathogen with the lower ID₅₀ can only be tolerated at much lower frequency of
contamination to achieve the same iso-risk as the other pathogen. Alternatively,
for the same prevalence of contamination, higher levels of contamination can be
tolerated for the pathogen with the higher ID₅₀.
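Such iso-risk combinations can be derived from an exponential dose–response model: for a chosen target risk per serving, the tolerable prevalence of contamination at a given dose is the target divided by the probability of illness at that dose. The sketch below uses the two ID₅₀ values quoted in the caption of Fig. 11.3 and an assumed target of one illness per million servings.

# Illustrative sketch: prevalence-dose combinations giving the same risk per serving
# under an exponential dose-response model, P(ill | dose) = 1 - exp(-r * dose).
import math

TARGET_RISK = 1e-6                       # one illness per million servings (assumed target)

def r_from_id50(id50: float) -> float:
    """Single-hit parameter r implied by the dose affecting 50% of consumers."""
    return math.log(2) / id50

def tolerable_prevalence(dose_cfu: float, id50: float) -> float:
    """Fraction of servings that may carry this dose while meeting the target risk."""
    p_ill = 1 - math.exp(-r_from_id50(id50) * dose_cfu)
    return min(1.0, TARGET_RISK / p_ill)

for dose in (10, 100, 1_000, 10_000):
    low = tolerable_prevalence(dose, id50=140_000)      # more virulent pathogen (solid line)
    high = tolerable_prevalence(dose, id50=7_000_000)   # less virulent pathogen (dotted line)
    print(f"dose {dose:>6} cfu: prevalence <= {low:.1e} (ID50 1.4e5) or {high:.1e} (ID50 7e6)")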
Use of the iso-risk concept would enable prioritisation of risks and
equalisation of regulatory and industry efforts across all pathogen¨Cproduct
combinations to minimise food safety risks. The use of risk-per-serving as the
measure of iso-risk, however, has a limitation. Even with the same iso-risk, a
food that is consumed by most people on a regular basis represents a much
greater public health risk than a product/pathogen combination of equal iso-risk
that is consumed infrequently by only a small proportion of the population.
From a regulatory perspective, however, it is unlikely that iso-risks
encompassing consumption patterns could be implemented because
consumption could differ between regions and over time. Thus, risk expressed
as risk-per-serving appears a more achievable means of comparing the risk of
different product/pathogen combinations. The iso-risk concept as described also
⁸ Equivalently, the iso-risks could represent pathogens with equivalent dose–response relationships
but with DALY characteristics 1000-fold different.
neglects risk management considerations of risk perception, such as 'outrage'
factors associated with particular types of illness or particularly susceptible
groups. Furthermore, weighting factors to calculate DALYs for different
diseases are not yet agreed. Thus, the iso-risk remains a theoretical concept.
11.7 Using microbiological risk assessments to develop
performance and process criteria
There appears to be debate concerning the appropriate use of microbiological
risk assessment. Many authors (Foegeding, 1997; Hathaway and Cook, 1997;
Cassin et al., 1998b; Buchanan and Whiting, 1998; ICMSF, 1998) have
commented on the apparent similarities between risk assessment and HACCP
approaches and pointed out that introducing the idea of risk in HACCP planning
can differentiate trivial hazards from those genuinely requiring control. Baird-
Parker and Tompkin (2000) state, however, that despite the similarities in some
aspects of developing a HACCP plan and in performing a risk assessment,
risk assessment should remain the domain of governments and intergovernmental
organisations.
Fig. 11.3 A plot of 'iso-risks', i.e. that combination of dose and frequency that results in
equivalent likelihood of illness. In the example shown the iso-risk represents one illness
per 1 million meals for two pathogens, each with a different dose–response relationship.
The dotted line represents a pathogen whose ID₅₀ is ~7 000 000 cfu; the solid line a
pathogen whose ID₅₀ is ~140 000 cfu.
Full, quantitative, risk assessment requires expertise in epidemiology, food
manufacturing, food microbiology, and statistics and modelling. This expertise
is unlikely to be found within any but the largest food companies and, indeed,
some multinational food companies have created dedicated, in-house, food
safety risk assessment teams. Similarly, in some nations, food industry research
organisations have funded food safety risk assessments, sometimes in response
to government initiatives.
Clearly, the support that formal risk assessment methods offer to decision
making is of benefit to industry in objectively assessing new processes and
protocols, and in demonstrating the necessity, or otherwise, of proposed
regulations. Industry can also use the mathematical tools of risk assessment to
set their own performance criteria to meet other criteria imposed on them. ICMSF
(1998) concluded that 'the role of microbial risk assessment is to provide the
information HACCP developers need to make more informed decisions'.
11.7.1 Identifying control points
A US Department of Agriculture risk assessment has considered risks from S.
Enteritidis in shell eggs and egg products and another is currently assessing risk
from E. coli O157:H7 in ground beef. Both these assessments include as
objectives the identification of possible strategies to reduce human illness from
these product/pathogen combinations and comparison of the effectiveness of
alternative risk reduction strategies (USDA, 1998, 2001).
If a stochastic model contains sufficient detail, sensitivity analysis (see
Chapter 2) can be used to identify those steps in the farm-to-table chain that
have most effect on the predicted risk and, by inference, those steps in a chain
where efforts can potentially most effectively be exercised to control the food
safety risk. Cassin et al. (1998a) considered that sensitivity analysis enabled risk
assessment methods to identify potential critical control points, a point
implicitly recognised also by Zwietering and van Gerwen (2000) who further
considered that sensitivity analysis could be used to identify the main
contributions to inaccuracy in the risk estimate.
Whiting and Buchanan (1997) developed a model to assess the risks
associated with pasteurised liquid egg. From their analyses both time and
temperature of pasteurisation were shown to be important determinants of risk,
but temperature of pasteurisation was shown to be much more influential than
the time, and a deviation of only one degree below the intended pasteurisation
temperature greatly altered the estimate of risk.
The use of sensitivity analysis in this rôle was also well illustrated by Cassin
et al. (1998b) who considered the risk of enterohaemorrhagic E. coli in ground
beef hamburgers in the North American context. They termed that assessment a
'process risk model (PRM)' because it explicitly modelled the effects of many
steps in the farm-to-table pathway so that the influence of each of the steps on
the final estimate of risk could be determined. Using sensitivity analysis of
model inputs they concluded that the factors most affecting risk were host
susceptibility, the concentration of E. coli O157:H7 in the faeces of those cattle
shedding the pathogen, the cooking preferences of consumers and retail storage
temperatures of ground meat. The relative success of three risk mitigation
strategies was evaluated by modifying the values of the most important factors
in the model that affected the risk, and which could feasibly be changed in
practice. The new estimates of risk were compared with each other and the
original values. The average probability of illness was predicted to be reduced
by 80% using a strategy to reduce microbial growth during retail storage by
lowering the storage temperature. This strategy was predicted to be more
effective than another hypothetical approach that would reduce the concentration of
E. coli O157:H7 in the faeces of cattle shedding the pathogen, and a third
approach based on persuading consumers to cook hamburgers more thoroughly.
It should be noted, however, that the results of a sensitivity analysis are
subject to the structure and assumptions inherent in the model, as noted by
Cassin et al. (1998b). Zwietering and van Gerwen (2000) explain that sensitivity
analysis is based upon correlation between variability in the output and
variability in the input factors. For example, if a model predicting the extent of
microbial growth included the assumption that temperature were controlled
throughout the life of a product within a very narrow interval, the output might
not be sensitive to temperature under those assumptions even though
temperature is known to have a profound influence on the rate of growth of
microorganisms.
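A common way of implementing such a sensitivity analysis is to sample the uncertain inputs many times, run the model for each sample, and rank the inputs by the correlation between each input and the output. The sketch below does this for a deliberately simple, invented exposure model; the input distributions do not correspond to the Cassin et al. (1998b) model and are placeholders chosen only to illustrate the ranking step.

# Illustrative sketch: Monte Carlo sensitivity analysis of a toy exposure model.
# Input distributions are invented; the ranking method (correlation between each
# sampled input and the model output) is the point being illustrated. Python 3.10+.
import random
import statistics

random.seed(1)
N = 10_000

def model(initial_log_count, storage_temp, cooking_log_reduction):
    """Toy model: log10 dose remaining after growth during storage and a cooking step."""
    growth = max(0.0, 0.3 * (storage_temp - 5.0))        # log10 growth above 5 °C
    return initial_log_count + growth - cooking_log_reduction

inputs = {
    "initial_log_count": [random.gauss(0.0, 1.0) for _ in range(N)],
    "storage_temp": [random.uniform(2.0, 12.0) for _ in range(N)],
    "cooking_log_reduction": [random.uniform(2.0, 6.0) for _ in range(N)],
}
outputs = [model(a, b, c) for a, b, c in zip(*inputs.values())]

for name, values in inputs.items():
    rho = statistics.correlation(values, outputs)
    print(f"{name:>22}: correlation with output = {rho:+.2f}")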
11.7.2 Relationship between microbiological and process and performance
criteria
Microbiological food safety criteria appear to be focused on the risk to an
individual consumer, by specifying limits on the microbiological contamination
of an individual unit of food (as adjudged by an appropriate sampling scheme,
see below) commensurate with an appropriate level of protection of public
health. This, in turn, should be commensurate with some overall level of
population risk. Process and performance criteria are management tools, or in
the terminology of Cassin et al. (1998b) 'risk mitigation strategies', designed to
provide an overall level of population health protection. Clearly, these two
aspects of microbiological specifications must be complementary: the process
and performance criteria should be designed to limit risk and lead to compliance
with the microbiological criterion. Performance and process criteria, then, can
be considered as mechanisms to achieve an implicit microbiological criterion,
though that criterion is often unstated, i.e. it is not transparent.
For example, Australian Standard C1 (ANZFA, 2002) requires a 1000-fold
reduction in E. coli during the processing of uncooked fermented meats. This
performance criterion will not ensure a safe product if the level of entero-
haemorrhagic E. coli initially present exceeds, for example, 10 000 cfu g⁻¹ and,
appropriately, those regulations also require that the raw ingredients must contain
<1 000 cfu 'generic' E. coli g⁻¹, i.e. a microbiological criterion. Similarly, while
pasteurisation conditions specify time–temperature limits they assume some
level of microbiological quality of the raw milk entering the process.
Increasingly, consumer preferences are for less preserved products and food
processors will ask questions such as:
• Can less severe processes be developed without increasing consumer risk?
• Are existing process criteria too severe?
• Were existing criteria developed with poorer control systems throughout the
chain?
For example, Baker (1993) asked the question whether the thermal treatments to
ensure the safety of canned meats against C. botulinum were overly
conservative. His conclusion was that, given the current levels of production,
and the current record of safety, data from more than 1500 years of production
would be required to justify that a 12D reduction was needed.
Once a microbiological criterion has been established, it is, in theory,
possible to establish performance and process criteria using the tools available in
modern stochastic simulation modelling software discussed in Chapter 10. As a
simple example, at the time of writing it is proposed that a total level of 100
cfu g⁻¹ L. monocytogenes at the point of consumption does not pose a
significant threat to public health (ICMSF, 1996; European Commission, 1999;
FAO, 1999; Brown, 2000). How can this be translated into a process criterion for
foods that permit the growth of L. monocytogenes?
If manufacturers have data describing the frequency and level of
contamination of their product, it is possible, using stochastic⁹ modelling tools,
to estimate a maximum 'use-by' period for their product that, under normal
conditions of storage, would prevent L. monocytogenes from exceeding the
microbiological criterion applied.
Using Analytica® software, a simple model was constructed that includes the
effect of storage temperature, L. monocytogenes growth rate on the product, lag
times before growth of L. monocytogenes commences, and initial level of
contamination. Such a model is shown in Fig. 11.4, which is called an 'influence
diagram'. In the model, arrows indicate that the value in one block (representing
a step or process) influences the value of the block to which it is connected. For
example, storage temperature influences generation time, and generation time
and initial contamination levels influence the time taken for the initial
contamination level to reach 100 cfu g⁻¹, as does the lag time before growth
in the product commences. Using cold smoked fish as an example,
in the product commences. Using cold smoked fish as an example,
representative values of generation times at 5 °C estimated from literature and
various predictive models were used. Storage temperatures were assumed to
vary between 1 and 10 °C. A relative rate function (McMeekin et al., 1988)
based on a simple square root model (Ratkowsky et al., 1982) was used to
estimate the generation time at temperatures other than 5 °C. Contamination
levels were drawn from the literature. The model was also constructed so that
the effect of increasing lag times, or reformulating the product to achieve a 20%
slower growth of L. monocytogenes, or reducing contamination levels by a
factor of 10 could be compared. Full details of the model are shown in the
Appendix. Estimates of maximum shelf-lives, including the effect of some
modification to the product that reduces L. monocytogenes growth rate by 20%,
by reduction of initial contamination levels by 90%, or by increasing the lag
time by 60%, are given in Table 11.2, which shows the results for different levels
of confidence that compliance will be achieved.

⁹ As illustrated by Nauta (2000), simple deterministic models are unlikely to yield realistic results.
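A minimal stochastic sketch of this kind of calculation is given below. It is not the Analytica model described in the Appendix: the distributions assumed for storage temperature, initial contamination and lag time, and the 5 °C generation time, are rough placeholders intended only to show how a shelf-life at a given confidence level can be read from a simulated distribution of times to reach 100 cfu g⁻¹.

# Illustrative sketch (placeholder parameter values, not those of the chapter's model):
# Monte Carlo estimate of the storage time at which L. monocytogenes could reach
# 100 cfu/g, using a square-root-type relative rate to adjust the 5 °C generation time.
import math
import random

random.seed(42)
T_MIN = -1.2            # notional minimum growth temperature (°C)
GT_5C_H = 30.0          # assumed generation time at 5 °C (hours)
LIMIT_LOG = math.log10(100.0)

def time_to_limit_days() -> float:
    temp = random.uniform(1.0, 10.0)                        # storage temperature (°C)
    log_n0 = random.uniform(-2.0, 1.0)                      # initial level, log10 cfu/g
    lag_h = random.uniform(24.0, 120.0)                     # lag before growth starts (h)
    rel_rate = ((temp - T_MIN) / (5.0 - T_MIN)) ** 2        # square-root model relative rate
    gt_h = GT_5C_H / rel_rate                               # generation time at this temperature
    generations_needed = (LIMIT_LOG - log_n0) / math.log10(2)
    return (lag_h + generations_needed * gt_h) / 24.0

times = sorted(time_to_limit_days() for _ in range(20_000))
for confidence in (0.50, 0.90, 0.95, 0.99):
    # shelf-life such that the stated fraction of simulated units stays below 100 cfu/g
    idx = int((1 - confidence) * len(times))
    print(f"{confidence:.0%} confidence: about {times[idx]:.0f} days")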
Table 11.2 Estimated maximum shelf-life (days) of cold-smoked fish required to
prevent product from containing more than 100 cfu g⁻¹ L. monocytogenes at the point of
consumption at different levels of confidence

Confidence      Initial     Reduced        Increased     Decreased
level           values      growth rate    lag           contamination

50              25          30             41            42
90               8           9             11            11
95              11          13             15            15
99              13          15             18            18

Fig. 11.4 Structure of a simulation model created in Analytica® to determine the
maximum shelf-life of a product potentially contaminated with L. monocytogenes
commensurate with not exceeding 100 cfu g⁻¹ at the point of consumption.

Such an analysis can be combined with an attributes sampling plan to
estimate the probability of a non-compliance being detected by testing. Given
that the 100 cfu g⁻¹ criterion is applied at the point of consumption, there is no
further opportunity for microbial growth. Because of the potential severity of the
disease, a Case 14 plan (n = 30; c = 0) would be appropriate for a regulatory
authority to apply. This scheme delivers a 95.8% reliability when 10% of the
batch is contaminated, 78.5% for 5% contamination, but only 26% if 1% of the
batch is contaminated. Thus, if a manufacturer was able to ensure 99% of
products were compliant, and if the regulatory authority had limited resources
and opted for a plan less stringent than Case 14 (see, e.g., Canadian sampling
schemes for L. monocytogenes; Brown, 2000), the probability of detection would
be low. For n = 10, c = 0 or n = 5, c = 0 the probabilities of detection of a
non-compliant batch are 9.6% or 4.9% respectively when 1% of all sample units
are contaminated at 100 cfu g⁻¹.
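The reliability figures quoted above follow from the same binomial argument used for the operating characteristic curves in Section 11.4.2: for a c = 0 plan, the probability of detecting a non-compliant lot is 1 − (1 − p)ⁿ, where p is the fraction of contaminated sample units. The short sketch below reproduces them.

# Illustrative sketch: probability that a c = 0 attributes plan detects a contaminated lot.

def p_detect(p_contaminated: float, n: int) -> float:
    """Probability that at least one of n sample units is contaminated."""
    return 1 - (1 - p_contaminated) ** n

print(f"n=30, 10% contaminated: {p_detect(0.10, 30):.1%}")   # ~95.8%
print(f"n=30,  5% contaminated: {p_detect(0.05, 30):.1%}")   # ~78.5%
print(f"n=30,  1% contaminated: {p_detect(0.01, 30):.1%}")   # ~26.0%
print(f"n=10,  1% contaminated: {p_detect(0.01, 10):.1%}")   # ~9.6%
print(f"n= 5,  1% contaminated: {p_detect(0.01,  5):.1%}")   # ~4.9%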
11.8 Using microbiological risk assessments to prioritise risk
management actions
Another aspect of regulation setting is the prioritisation of regulatory resources,
e.g. for inspection and control. Risk assessment methods could, ideally, be used
to compare risks quantitatively from different sources. An obvious question is:
'why would one use risk assessment methods when the burden of disease, and
hence risk, can be determined more readily from epidemiological data?'
The answer lies in the fact that for many foodborne diseases no data are
collected. This may occur because no reporting system exists, because
treatments are often instituted without diagnoses, because misdiagnoses occur
(e.g. for some microbiological intoxications), or because many foodborne
diseases are mild and people affected often do not seek medical attention.
Estimates of foodborne disease in Australia (ANZFA, 1999; Sumner et al.,
2000) and the USA (Mead et al., 1999) indicate that between one in five and one
in ten consumers experiences a foodborne illness each year but acknowledge
that these estimates are up to 100 times greater than the number of foodborne
illnesses actually reported.
FDA/USDA/CDC (2001) used a risk assessment approach to estimate the
contribution of 20 categories of ready-to-eat foods to the observed incidence of
listeriosis in the USA. The risk from each category of food was ranked in order
of importance, the aim being the development and use of tools to 'evaluate the
effectiveness of current policies, programs and regulatory practices that will
minimize the public health impact of this pathogenic microorganism'. In
Australia, a New South Wales government agency used a risk assessment
approach to prioritise risk management needs for that state¡¯s seafood industry.
A semi-quantitative approach was used to rank the relative public health risk
from 10 seafood product/hazard combinations (SafeFood NSW, 2001). A
simple risk ranking tool, similar to that described in Ross and Sumner (2002),
was used.
11.9 Using criteria in risk assessments
Many practitioners of MRA have found that the estimation of public health risk
from microbiological contamination of foods is fraught with difficulty because
of apparently intractable problems of estimation of microbial numbers at the
point of consumption and characterisation of dose–response relationships for
microbial pathogens and toxins (Cassin et al., 1998b; Coleman and Marks, 1998;
Lindqvist and Westöö, 2000). The former problems stem from the paucity of
relevant data on contamination levels in foods at various times in the product
history along the farm-to-table continuum and problems unique to MRA in
comparison to risk assessment of foodborne chemical toxins, viz. that of the
variability in behaviour and virulence of individual strains of microbial species,
and their potential for self-amplification (growth) or complete eradication (e.g.
by cooking) from foods. It may be possible to circumvent some of these data
needs by recourse to expert opinion or other default assumptions. This section
considers the potential use of criteria, or results of testing against criteria, as
inputs to risk assessment.
11.9.1 Use of criteria as inputs to risk assessment
Criteria as default hazard characterisation
As illustrated in Section 11.7, which discussed the development of HACCP
criteria and limits to satisfy microbiological criteria, in the absence of reliable
data from which to derive a dose–response relationship, risk could potentially be
assessed using microbiological criteria as default hazard characterisations. Such
risk assessments would mainly be used as described in Section 11.7, or to
determine the producer's risk. They could also be used to test whether predictions
of illness from such models accord with epidemiological data, and hence to
assess whether such criteria are realistic and necessary.
Compliance with criteria as default assumptions on inputs
The question can be asked: 'Can the existence of criteria and test results
demonstrating compliance with them be used as default inputs to risk
assessments?' In our opinion this should only be done with great caution. As
was demonstrated in Section 11.4.2, sampling plans are not 100% reliable. The
incidence of recalls, and outbreaks traced to in-plant contamination, reinforces
that contamination events can pass undetected. (This criticism, however, applies
equally to all sources of data that might be used in risk assessment, e.g. retail
surveys, epidemiological data). Moreover, it is the frequency and circumstances
of those non-compliance events that are likely to be the most important
determinants of public health risk.
As was shown in Section 11.7, however, it is possible from the results of testing
programmes to determine the probability that non-compliant product is
released for sale, i.e. the consumer's risk. Furthermore, from the results of
attributes sampling schemes it is also possible to infer contamination levels, as
described below.
11.9.2 Using results from attributes sampling plans to enrich
microbiological risk assessment
A specific problem in MRA has been to obtain data describing contamination
levels for hazards that are usually assessed against presence or absence criteria
(ICMSF, 1998; Marks et al., 1998). Negative tests may indicate true absence of
the pathogen, in which case the risk is zero, or may represent levels below the
detection sensitivity of the method and sampling plan. As discussed in Section
11.4.1, there is always some probability of non-detection for any level of
contamination. It is possible, however, to infer contamination levels from the
results of presence/absence tests in some cases, and Jarvis (2000) provides
useful calculations that can be applied.
A two-class sampling plan is essentially a series of replicate analyses that can
be considered to be a most probable number determination for a single inoculum
level. From this it is possible to generate a most probable number (MPN)
estimate of the contamination level. Calculations presented in Jarvis (2000) use
the number of negative samples at a single dilution level to derive the
probability of occurrence of zero defects. Those calculations lead to the
relationship:
m = −(2.303/v) log₁₀(z/n)                                  [11.1]

where: m = true density of organisms in the batch
       v = quantity of material tested
       z = number of sample units assessed by testing as 'negative'
       n = total number of sample units examined

For example, in a series of tests involving 25 sample units of 10 g each, of which
two were positive, the MPN is 0.0083 g⁻¹.
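A minimal Python sketch of Equation 11.1 (an illustration only, not code from Jarvis, 2000) reproduces this example:

import math

def mpn_per_gram(n_total: int, n_negative: int, grams_per_unit: float) -> float:
    """MPN estimate (Equation 11.1, after Jarvis, 2000):
    m = -(2.303 / v) * log10(z / n), where v is the quantity tested per unit,
    z the number of negative units and n the total number of units."""
    if n_negative == 0:
        raise ValueError("all units positive: the estimate is unbounded for this plan")
    return -(2.303 / grams_per_unit) * math.log10(n_negative / n_total)

# 25 sample units of 10 g, two positive (23 negative): about 0.0083 per gram
print(round(mpn_per_gram(25, 23, 10.0), 4))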
If no samples are positive, it is possible to estimate the maximum
contamination level for various probability levels, assuming random distribution
of contamination within a lot.¹⁰ Equation 11.2, derived from the binomial
distribution (Jarvis, 2000), gives the true number of defective units at some
specified level of confidence:

d = 100 × [1 − (1 − P)^(1/n)]                              [11.2]

where: d = actual percentage of defective units in a lot
       P = probability (confidence level)
       n = number of samples examined
¹⁰ Jarvis (2000) notes that this is an unlikely situation, citing the results of Habraken et al. (1986) for the non-random distribution of salmonellae in dried milk products.
Thus, for a series of 20 negative results the true number of defective units at
95% probability is estimated by this method to be 13.9%. If each test is based on
50 g of product, the maximum likely level of contamination is 13.9% per 50 g, or
0.139/50 g⁻¹, i.e. 0.0028 g⁻¹ or 1 cell per 360 g.¹¹ Analogously, if 5 samples of
25 g each were tested and none of the target organism was detected, the
maximum likely contamination level would be 0.018 g⁻¹.
From the above equations, it is possible to obtain quantitative information from
qualitative data.
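The following Python sketch (an illustration of Equation 11.2 only, not code taken from Jarvis, 2000) reproduces the worked examples above:

def max_defective_percent(n_samples: int, confidence: float) -> float:
    """Equation 11.2: upper estimate of the percentage of defective units in a
    lot when all n samples test negative, at the given confidence level."""
    return 100.0 * (1.0 - (1.0 - confidence) ** (1.0 / n_samples))

def max_contamination_per_gram(n_samples: int, grams_per_unit: float,
                               confidence: float = 0.95) -> float:
    """Translate the defective-unit percentage into an approximate maximum
    contamination level (organisms per gram)."""
    return max_defective_percent(n_samples, confidence) / 100.0 / grams_per_unit

print(round(max_defective_percent(20, 0.95), 1))        # ~13.9 (%)
print(round(max_contamination_per_gram(20, 50.0), 4))   # ~0.0028 per g
print(round(max_contamination_per_gram(5, 25.0), 3))    # ~0.018 per g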
11.10 Future trends
As demonstrated above, there is a history of risk-based thinking in the evolution
of guidelines for setting microbial food safety and quality criteria. Mossel and
Drion (1979) also identified components of a risk-based approach to microbial
food safety assurance.
In principle, the use of stochastic risk assessment methods offers to
revolutionise the establishment of food safety criteria. By determining the
range of responses of humans to different doses of microbial pathogens or
their toxins it is possible to establish levels of contamination and frequencies
of exposure, and to determine the risk associated with different product/
hazard combinations at desired levels of confidence, as illustrated in the ¡®iso-
risk¡¯ example. Understanding the variability inherent in any system is
important in making food safety decisions. This was elegantly demonstrated
by Nauta (2000) who showed the different decisions that can result from
¡®point-estimate¡¯ approaches to those from stochastic approaches. Thus, in
theory, stochastic and risk-based approaches enable rational and equitable
microbiological, process and performance criteria to be established
objectively and dispassionately. The establishment of criteria encompassing
the concept of probability would also greatly facilitate transparency in the
basis of regulations, to the benefit of both producers and consumers. But how
feasible is this idealistic approach?
The first impediment appears to be to establish reliable dose–response
relationships and, as others have noted (see Chapter 5), this is unlikely to be
resolved in the near future. The second, even greater impediment, will be to
determine what is an acceptable level of risk. The establishment of a
microbiological criterion is a risk management problem. A risk assessment
can characterise and quantify risk, but the level of risk that is considered
tolerable is a societal question. This level may differ widely from nation to
nation and region to region, depending on local factors affecting the
perception of the costs and benefits associated with those sources of risk. The
¹¹ Conceptually it makes no sense to have less than one cell per unit, but such values cannot be assumed to be equivalent to zero (ICMSF, 1998). Such results are interpreted as probabilities of finding a cell in that unit. For example, 0.018 g⁻¹ would be interpreted as one cell in 56 grams of product.
right of nations to set their own criteria commensurate with the level of public
health protection they deem appropriate is enshrined under the SPS
Agreement, provided that the same criteria are applied consistently to both
domestic and imported products. Risk can have many elements but is
generally regarded as including the existence of a hazard, the probability of
exposure to it, and the consequences of exposure if it occurs. Metrics for
the consequences of exposure are not yet well established, and this currently
limits the ability to compare risks associated with different hazards. The DALY concept
begins to address this deficiency. Further, risk encompasses the magnitude of
consequences of an exposure, and so is related to levels of production and
consumption, i.e. in theory, a tolerable level in one product may not represent
a tolerable risk in another, yet to differentiate on this basis seems inequitable.
Thus, in the short term, it is likely that microbiological criteria, or food safety
objectives, will default to a set of tolerable, or achievable, frequencies and levels
of contamination based on some consensus approach. Cassin et al. (1998b)
concluded that, given the lack of necessary data and the consequent level of
uncertainty in the results of risk assessment models, the most immediate
application of risk assessment would be in the area of identification of risk-
contributing factors and risk-mitigation strategies through sensitivity and
scenario analysis. In other words, in the context of criteria, the most immediate
application of risk assessment in criteria setting would be in establishing process
and performance criteria.
While the quest for truly risk-based microbiological criteria will continue, it
will require the development of new knowledge, and possibly new methods for
analysing and integrating data. In the interim, the existing data, approaches and
tools can greatly improve the rational development of performance and process
criteria to decrease both producers' and consumers' risks.
11.11 Further reading
BAIRD-PARKER A C and TOMPKIN R B (2000), 'Risk and microbiological criteria',
in Lund B M, Baird-Parker A C and Gould G W, The Microbiological
Safety and Quality of Food, Gaithersburg, USA, Aspen Publishers Inc.,
1852–1885.
ICMSF (INTERNATIONAL COMMISSION ON MICROBIOLOGICAL SPECIFICATIONS FOR
FOODS OF THE INTERNATIONAL UNION OF MICROBIOLOGICAL SOCIETIES)
(1986), Micro-organisms in Foods 2. Sampling for Microbiological
Analysis: Principles and specific applications, 2nd Edition, Oxford,
Blackwell Scientific Publications.
ICMSF (INTERNATIONAL COMMISSION ON MICROBIOLOGICAL SPECIFICATIONS FOR
FOODS OF THE INTERNATIONAL UNION OF MICROBIOLOGICAL SOCIETIES)
(1998), Potential application of risk assessment techniques to microbiological
issues related to international trade in food and food products. Journal of
Food Protection, 61:1075–1086.
IFST (INSTITUTE OF FOOD SCIENCE AND TECHNOLOGY, UK) (1999), Development
and Use of Microbiological Criteria for Foods, London, The Institute of
Food Science and Technology (UK).
NRC (NATIONAL RESEARCH COUNCIL, US FOOD PROTECTION COMMITTEE,
SUBCOMMITTEE ON MICROBIOLOGICAL CRITERIA) (1985), An Evaluation
of the Role of Microbiological Criteria for Foods and Food Ingredients,
Washington, National Academy Press.
TEUFEL P (1999), Food safety objectives – Is there a zero-risk? Kieler
Milchwirtschaftliche Forschungsberichte, 51:5–14.
VAN SCHOTHORST M (1999), Principles for the establishment of microbiological
food safety objectives and related control measures. Food Control, 9:379–384.
11.12 References
ADAMS M R and MOSS M O (2000), Food Microbiology, 2nd Edition, Cambridge,
UK, Royal Society of Chemistry.
AIHW (AUSTRALIAN INSTITUTE OF HEALTH AND WELFARE) (2000), Australia's
Health 2000: The seventh biennial health report of the Australian Institute
of Health and Welfare, Canberra, Australian Institute of Health and Welfare.
ANON. (1999), 'Update: multistate outbreak of listeriosis – United States,
1998–1999', Morbidity and Mortality Weekly, 49:1117–1118.
ANZFA (AUSTRALIAN AND NEW ZEALAND FOOD AUTHORITY) (1999). Food Safety
Standards Costs and Benefits. Canberra, Australian Government
Publishing Service.
ANZFA (AUSTRALIAN NEW ZEALAND FOOD AUTHORITY) (2002), Australia New
Zealand Food Standards Code, South Melbourne, ANSTAT Pty. Ltd.
BAIRD-PARKER A C (2000), 'The production of microbiologically safe and stable
food', in Lund B M, Baird-Parker A C and Gould G W, The Microbiological
Safety and Quality of Food, Gaithersburg, USA, Aspen Publishers Inc., 3–18.
BAIRD-PARKER A C and TOMPKIN R B (2000), 'Risk and microbiological criteria',
in Lund B M, Baird-Parker A C and Gould G W, The Microbiological
Safety and Quality of Food, Gaithersburg, USA, Aspen Publishers Inc.,
1852–1885.
BAKER D A (1993), 'Probability models to assess the safety of foods with respect
to Clostridium botulinum', Journal of Industrial Microbiology, 12:156–161.
BEMRAH N, SANAA M, CASSIN M H, GRIFFITHS M W and CERF O (1998), 'Quantitative
risk assessment of human listeriosis from consumption of soft cheese
made from raw milk', Preventive Veterinary Medicine, 37:129–145.
BROWN B E (2000), 'National legislation, guidelines and standards governing
microbiology. Canada', in Robinson R K, Batt C A and Patel P D,
Encyclopaedia of Food Microbiology, San Diego, Academic Press, 1549–1561.
BUCHANAN R L, DAMERT W G, WHITING R C and VAN SCHOTHORST M (1997), 'Use
of epidemiological and food survey data to estimate a purposefully
conservative dose–response relationship for Listeria monocytogenes levels
and incidence of listeriosis', Journal of Food Protection, 60:918–922.
BUCHANAN R L and WHITING R C (1998), 'Risk assessment: A means for linking
HACCP plans and public health', Journal of Food Protection, 61:1531–1534.
CAC (CODEX ALIMENTARIUS COMMISSION) (1997), Principles for the
Establishment and Application of Microbiological Criteria for Foods,
CAC/GL 21-1997, Rome, Codex Alimentarius Commission, Joint FAO/
WHO Food Standards Programme.
CAC (CODEX ALIMENTARIUS COMMISSION) (1999), Principles and Guidelines for
the Conduct of Microbiological Risk Assessment, CAC/GL 30-1999,
Rome, Codex Alimentarius Commission, Joint FAO/WHO Food
Standards Programme.
CASSIN M H, PAOLI G M and LAMMERDING A M (1998a), 'Simulation modelling for
microbial risk assessment', Journal of Food Protection, 61:1560–1566.
CASSIN M H, LAMMERDING A M, TODD E C D, ROSS W and MCCOLL R S (1998b),
'Quantitative risk assessment for Escherichia coli O157:H7 in ground beef
hamburgers', International Journal of Food Microbiology, 41:21–44.
CAST (1994), 'Foodborne pathogens: risk and consequences', Council for
Agricultural Science and Technology, USA, Task Force Report No. 122.
COLEMAN M and MARKS H (1998), 'Topics in dose–response modelling', Journal
of Food Protection, 61:1550–1559.
CORLETT D A and PIERSON M D (1992), 'Hazard analysis and assignment of risk
categories', in Pierson M D and Corlett D A, Jr., HACCP: Principles and
Applications, New York, Van Nostrand Reinhold, 29–38.
CX/FH (CODEX COMMITTEE ON FOOD HYGIENE) (1999), Discussion Paper on the
Management of Listeria monocytogenes in Foods, Joint FAO/WHO Food
Standards Programme, Codex Committee on Food Hygiene, 32nd Session.
Available for download from: ftp://ftp.fao.org/codex/ccfh32/FH99_10e.pdf
EUROPEAN COMMISSION (1999), 'Opinion of the Scientific Committee on
Veterinary Measures Relating to Public Health on Listeria monocytogenes',
European Commission, Health & Consumer Protection Directorate-General,
Directorate B - Scientific Health Opinions, Unit B3 - Management of
Scientific Committees II.
EU (EUROPEAN UNION) (2002), Overview of Microbiological Criteria for
Foodstuffs in Community Legislation in Force (updated June 2001)
(downloaded 1 March 2002 from: http://europa.eu.int/comm/food/fs/sfp/mr).
FARBER J M, ROSS W H and HARWIG J (1996), 'Health risk assessment of Listeria
monocytogenes in Canada', International Journal of Food Microbiology,
30:145–156.
FAO (FOOD AND AGRICULTURE ORGANIZATION OF THE UNITED NATIONS) (1999),
FAO Fisheries Report No. 604. Report of the Expert Consultation on the
Trade Impact of Listeria monocytogenes in Fish Products. Food and
Agriculture Organization of the United Nations, Rome.
FDA/USDA/CDC (US FOOD AND DRUG ADMINISTRATION, UNITED STATES
DEPARTMENT OF AGRICULTURE AND CENTERS FOR DISEASE CONTROL AND
PREVENTION) (2001), Draft Assessment of the Relative Risk to Public
Health from Foodborne Listeria monocytogenes Among Selected
Categories of Ready-to-Eat Foods. Available for download from:
http://www.foodsafety.gov/dms/lmrisk.html
FOEGEDING P M (1997), 'Driving predictive modelling on a risk assessment path
for enhanced food safety', International Journal of Food Microbiology, 36:87–95.
GILL C O, HARRISON J C L and PHILLIPS D M (1991), 'Use of a temperature
function integration technique to assess the hygienic adequacy of a beef
carcass cooling process', Food Microbiology, 8:83–94.
HABRAKEN C J M, MOSSEL D A A and VAN DEN RECK S (1986), 'Management of
Salmonella risks in the production of powdered milk products',
Netherlands Milk and Dairy Journal, 40:99–106.
HATHAWAY S (1999), 'The principle of equivalence', Food Control, 10:261–265.
HATHAWAY S and COOK R L (1997), 'A regulatory perspective on the potential
uses of microbial risk assessment in international trade', International
Journal of Food Microbiology, 36:127–133.
HOLCOMB D L, SMITH M A, WARE G O, HUNG Y C, BRACKETT R E and DOYLE M P
(1999), 'Comparison of six dose–response models for use with food-borne
pathogens', Risk Analysis, 19:1091–1100.
HUSS H H, REILLY A and BEN EMBAREK P K (2000), 'Prevention and control of
hazards in seafoods', Food Control, 11:149–156.
ICMSF (INTERNATIONAL COMMISSION ON MICROBIOLOGICAL SPECIFICATIONS FOR
FOODS OF THE INTERNATIONAL UNION OF MICROBIOLOGICAL SOCIETIES)
(1974), Micro-organisms in Foods 2. Sampling for Microbiological
Analysis: Principles and specific applications. Toronto, University of
Toronto Press.
ICMSF (INTERNATIONAL COMMISSION ON MICROBIOLOGICAL SPECIFICATIONS FOR
FOODS OF THE INTERNATIONAL UNION OF MICROBIOLOGICAL SOCIETIES)
(1986), Micro-organisms in Foods 2. Sampling for Microbiological
Analysis: Principles and specific applications. 2nd Edition, Oxford,
Blackwell Scientific Publications.
ICMSF (INTERNATIONAL COMMISSION ON MICROBIOLOGICAL SPECIFICATIONS FOR
FOODS OF THE INTERNATIONAL UNION OF MICROBIOLOGICAL SOCIETIES)
(1996), Report to Food Hygiene Committee of Codex Alimentarius –
principles for the establishment and application of microbiological
criteria for foods. ICMSF, Washington, DC.
ICMSF (INTERNATIONAL COMMISSION ON MICROBIOLOGICAL SPECIFICATIONS FOR
FOODS OF THE INTERNATIONAL UNION OF MICROBIOLOGICAL SOCIETIES)
(1998), 'Potential application of risk assessment techniques to
microbiological issues related to international trade in food and food
products', Journal of Food Protection, 61:1075–1086.
ILSI (INTERNATIONAL LIFE SCIENCE INSTITUTE, EUROPE) (1993), A Scientific Basis
for Regulations on Pathogenic Microorganisms in Foods, ILSI Press,
Brussels. Summary presented in Dairy, Food and Environmental
Sanitation, 15:301–308 (1995).
JARVIS B (1989), Statistical Aspects of the Microbiological Analysis of Foods,
Amsterdam, Elsevier.
JARVIS B (2000), 'Sampling for microbiological analysis', in Lund B M, Baird-
Parker A C and Gould G W, The Microbiological Safety and Quality of
Food, Gaithersburg, USA, Aspen Publishers Inc., 1691–1734.
JURADO R L, FARLEY M M, PEREIRA E, HARVEY R C, SCHUCHAT A, WENGER J D and
STEPHENS D S (1993), 'Increased risk of meningitis and bacteremia due to
Listeria monocytogenes in patients with human immunodeficiency virus
infection', Clinical Infectious Diseases, 17:224–227.
KUMAGAI S (2000), 'National legislation, guidelines and standards governing
microbiology. Japan', in Robinson R K, Batt C A and Patel P D,
Encyclopaedia of Food Microbiology, San Diego, Academic Press, 1564–1570.
LEISTNER L (1995), 'Stable and safe fermented sausages world-wide', in
Campbell-Platt G and Cook P E, Fermented Meats, London, Blackie
Academic and Professional, UK, 160–175.
LINDQVIST R and WESTÖÖ A (2000), 'Quantitative risk assessment for Listeria
monocytogenes in smoked or gravad salmon/rainbow trout in Sweden',
International Journal of Food Microbiology, 58:181–196.
LUPIEN J R and KENNY M F (1998), 'Tolerance limits and methodology', Journal
of Food Protection, 61:1571–1578.
MARKS H M, COLEMAN M E, LIN C-T J and ROBERTS T (1998), 'Topics in microbial
risk assessment: dynamic flow tree process', Risk Analysis, 18:309–328.
MCMEEKIN T A, OLLEY J and RATKOWSKY D A (1988), 'Temperature effects on
bacterial growth rates', in Bazin M J and Prosser J I, Physiological Models
in Microbiology, Vol. 1, CRC Press Inc., Boca Raton, FL, 75–89.
MEAD P S, SLUTSKER L, DIETZ V, MCCAIG L F, BRESEE J S, SHAPIRO C, GRIFFIN P M
and TAUXE R V (1999), 'Food-related illness and death in the United
States', Emerging Infectious Diseases, 5:607–625.
MOSSEL D A and DRION E F (1979), 'Risk analysis. Its application to the
protection of the consumer against food-transmitted diseases of microbial
aetiology', Antonie van Leeuwenhoek, 45:321–323.
MOSSEL D A A, CORRY J E L, STRUIJK C B and BAIRD R M (1995), Essentials of the
Microbiology of Foods: A textbook for advanced studies, Chichester, J.
Wiley and Sons.
MURRAY C J and LOPEZ A D (1996), The Global Burden of Disease: A
comprehensive assessment of mortality and disability from diseases, injuries
and risk factors in 1990 and projected to 2020. Global Burden of Disease
and Injury Series, Harvard, USA, Harvard School of Public Health.
NAUTA M J (2000), 'Separation of uncertainty and variability in quantitative
microbial risk assessment models', International Journal of Food
Microbiology, 57:9–18.
NRC (NATIONAL RESEARCH COUNCIL, US, FOOD PROTECTION COMMITTEE,
SUBCOMMITTEE ON MICROBIOLOGICAL CRITERIA) (1985), An Evaluation
of the Role of Microbiological Criteria for Foods and Food Ingredients,
Washington, National Academy Press.
PETERS J B (1989), Listeria monocytogenes: A bacterium of growing concern,
Washington Sea Grant, Seafood Processing Series, Seattle, Washington.
PINE L, KATHARIOU S, QUINN F, GEORGE V, WENGER J D and WEAVER R E (1991),
'Cytopathogenic effects in enterocytelike Caco-2 cells differentiate
virulent from avirulent Listeria strains', Journal of Clinical Microbiology,
29:990–996.
PINE L, MALCOLM G B and PLIKAYTIS B D (1990), 'Listeria monocytogenes
intragastric and intraperitoneal approximate 50% lethal doses for mice are
comparable, but death occurs earlier by intragastric feeding', Infection and
Immunity, 58:2940–2945.
POURKOMAILIAN P (2000), 'International control of microbiology', in Robinson
R K, Batt C A and Patel P D, Encyclopaedia of Food Microbiology, San
Diego, Academic Press, 1101–1106.
RATKOWSKY D A, OLLEY J, MCMEEKIN T A and BALL N (1982), 'Relationship
between temperature and growth rate of bacterial cultures', Journal of
Bacteriology, 149:1–5.
ROCOURT J (1995), 'Risk factors for listeriosis. Listeria, the state of the science',
Draft Proceedings, International Food Safety Conference, 29–30 June
1995, Rome, Italy, American Frozen Food Institute.
ROSS T and SUMNER J (2002), 'A simple spreadsheet-based, food safety risk
assessment tool', International Journal of Food Microbiology, 77:39–53.
ROYAL SOCIETY (1992), Risk Analysis, Perception and Management, Report of a
Royal Society Study Group, London, The Royal Society.
SAFEFOOD NSW (2001), A Risk Assessment of Selected Seafoods in NSW, Sydney,
SafeFood NSW, Australia.
SCHALCH B and BECK H (2000), 'National legislation, guidelines & standards
governing microbiology. European Union', in Robinson R K, Batt C A
and Patel P D, Encyclopaedia of Food Microbiology, San Diego,
Academic Press, 1561–1564.
STELMA G N, REYES A L, PEELER J T, FRANCIS D W, HUNT J M, SPAULDING P L,
JOHNSON C H and LOVETT J (1987), 'Pathogenicity test for Listeria
monocytogenes using immunocompromised mice', Journal of Clinical
Microbiology, 25:2085–2089.
SUMNER J and KRIST K (2002), 'The use of predictive microbiology by the
Australian meat industry', International Journal of Food Microbiology,
73:363–366.
SUMNER J L, MCMEEKIN T A and ROSS T (2000), 'Rates of food poisoning in
Australia', Medical Journal of Australia, 172:462–463.
TEUFEL P (1999), 'Food safety objectives – Is there a zero-risk?', Kieler
Milchwirtschaftliche Forschungsberichte, 51:5–14.
USDA (UNITED STATES DEPARTMENT OF AGRICULTURE FOOD SAFETY INSPECTION
SERVICE) (1998), 'Salmonella enteritidis risk assessment. Shell eggs and
egg products', Washington, USDA (downloaded 31 December 2001 from:
http://www.fsis.usda.gov/OPHS/risk/index.htm).
USDA (UNITED STATES DEPARTMENT OF AGRICULTURE FOOD SAFETY INSPECTION
SERVICE) (2001), 'Risk assessment of E. coli O157:H7 in ground beef',
Washington, USDA (http://www.fsis.usda.gov/OPHS/ecolrisk/home.htm).
WHO/FAO (FOOD AND AGRICULTURE ORGANIZATION OF THE UNITED NATIONS/
WORLD HEALTH ORGANIZATION) (2001), Risk Characterisation of
Salmonella spp. in Eggs and Broiler Chickens and Listeria
monocytogenes in Ready-to-eat Foods. Report of the Joint FAO/WHO
Expert Consultation on Risk Assessment of Microbiological Hazards in
Foods, Rome, Italy, 30 April–4 May 2001, Food and Agriculture
Organization of the United Nations, Rome.
VAN SCHOTHORST M (1998), 'Principles for the establishment of microbiological
food safety objectives and related control measures', Food Control, 9:379–384.
WHITING R C and BUCHANAN R L (1997), 'Development of a quantitative risk
assessment model for Salmonella enteritidis in pasteurized liquid eggs',
International Journal of Food Microbiology, 36:111–125.
WHO/FAO (2000), WHO/FAO Guidelines on Hazard Characterization for
Pathogens in Food and Water (Preliminary Document), World Health
Organization/Food and Agriculture Organization of the United Nations.
Available for download from: http://www.who.int/fsf/mbriskassess/
scientific_documents/HC_guidelines.pdf
WILSON G (1970), 'Symposium on microbiological samples in foods. Concluding
remarks', Chemistry and Industry, 1:273.
ZWIETERING M H and VAN GERWEN S (2000), 'Sensitivity analysis in quantitative
microbial risk assessment', International Journal of Food Microbiology,
58:213–221.
Appendix: details of the simulation model used in Section 11.7
Model inputs
Lag generations:                    Normal: Normal(5.000e+0, 5.000e−1)
                                    Shorter: Normal(2.000e+0, 5.000e−1)
Contamination levels:               Normal: 10 exp(Beta(1, 10, 1, 100))
                                    Low: 10 exp(Beta(1, 10, 1, 100))
Storage temperature distribution:   Beta(4, 10, 1, 12)
Generation time (h) at 5 °C:        Normal: Triangular(30, 48, 72)
                                    Modified product: 1.2 × Triangular(30, 48, 72)
Equations in model
Lag generations = lag time / generation time
Time (days) to 100 cfu g⁻¹ = [3.32 × log₁₀(100 / contamination level) + lag generations] × generation time / 24
Generation time = generation time at 5 °C scaled by the relative rate function
(based on McMeekin et al., 1988).
Using as an example cold-smoked fish, representative values of generation
times at 5 °C estimated from the literature and various predictive models were
used and estimated to range from 30 to 72 h with a most likely value of 48 h.
Storage temperatures were assumed to vary between 1 and 10 °C, with 4 °C most
likely. A relative rate function (McMeekin et al., 1988) based on a simple square
root model (Ratkowsky et al., 1982) was used to estimate the generation time at
temperatures other than 5 °C, using a Tmin of −1 °C. Lag times were modelled as
a function of the generation time at the sampled temperature, with the mean lag
time set equivalent to five generation times, with a standard deviation of 0.5
generation times. Contamination levels were drawn from the literature and were
described as ranging from 1 to 100 cfu g⁻¹ with a most likely value of 10 cfu g⁻¹.
The model was also constructed so that the effects of increasing the lag time, of
reformulating the product to achieve slower growth of L. monocytogenes, or of
reducing contamination levels by a given factor could be compared.
The model was executed with 30 000 iterations. Estimates of maximum shelf-life,
including the effect of a modification to the product that reduces the L.
monocytogenes growth rate by 20%, of a reduction of initial contamination levels
by 90%, or of an increase in the lag time by 60%, are shown in Table 11.2. Table
11.2 also shows the results for different levels of confidence that compliance will
be achieved.
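A minimal Monte Carlo sketch of the kind of model described above is given below. The original model was built in Analytica; this re-implementation in Python/NumPy is illustrative only, and the distribution chosen for initial contamination and the Tmin value of −1 °C are assumptions made to match the narrative description rather than the exact inputs of the original model.

import numpy as np

rng = np.random.default_rng(1)
N = 30_000                                   # iterations, as in the text

gt5 = rng.triangular(30, 48, 72, N)          # generation time (h) at 5 °C
temp = 1 + 11 * rng.beta(4, 10, N)           # storage temperature (°C), 1-12 °C
c0 = rng.triangular(1, 10, 100, N)           # assumed initial contamination (cfu/g)
lag_gens = rng.normal(5.0, 0.5, N)           # lag expressed as generations

# Square-root-type relative rate function (McMeekin et al., 1988),
# assuming Tmin = -1 °C; generation time lengthens below 5 °C
rel_rate = ((temp + 1.0) / (5.0 + 1.0)) ** 2
gt = gt5 / rel_rate                          # generation time (h) at storage temperature

# Generations needed to reach 100 cfu/g, plus lag, converted to days
gens_needed = 3.32 * np.log10(100.0 / c0)
days = (gens_needed + lag_gens) * gt / 24.0

for conf in (0.50, 0.90, 0.95, 0.99):
    print(f"{conf:.0%} confidence shelf-life: {np.quantile(days, 1 - conf):.1f} days")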
12

HACCP systems and microbiological risk assessment

R. Gaze, R. Betts and M. F. Stringer, Campden and Chorleywood Food Research Association, Chipping Campden

12.1 Introduction
It is well known that the Hazard Analysis and Critical Control Point (HACCP)
system was originally developed by the Pillsbury Company working with NASA
and the US Army Laboratories at Natick to assure that food supplied to the
manned space programme was microbiologically safe (Anon., 1973a; Bauman,
1974). The limitations of the end-product testing that was in general use by the
food industry at that time were recognised by Pillsbury. These limitations
included:
• the need to use a significant quantity of products to provide a representative
sample for testing;
• the fact that only the tested sample could be completely assured as
microbiologically safe;
• the problem that control of hazards was reactive.
A preventative approach to food manufacture was identified as providing a
better assurance of food safety. An engineering system known as Failure Mode
and Effect Analysis (FMEA) provided the basis for this new approach. In FMEA
potential failures are identified at each stage of an operation. Mechanisms to
prevent these failures from occurring are then put into place. The similarities to
HACCP are clear. In HACCP systems, potential and predictable food safety
hazards are identified at each step of a food manufacturing or handling
operation, and effective methods to control these hazards are identified. Those
steps determined to be critical to control food safety hazards are managed
through the monitoring of critical limits of the control measures, with a
predetermined corrective action plan in case of failure to meet a critical limit.
Pillsbury initially used HACCP to assure microbiological safety. Since then
HACCP principles have also been applied to physical and chemical safety
hazards. HACCP has become internationally recognised as the preferred
system to manage the production of safe food. HACCP systems, or systems
based on HACCP principles, have been made mandatory by food safety
legislation, for example in the European Union, the United States of America
and Canada. HACCP has an increasing role to play in international food trade,
especially within the concept of equivalence of trade agreements.
International guidance covering the development, implementation and
maintenance of HACCP as a food safety management system has been
provided by the Codex Alimentarius Commission and the US National
Advisory Committee on Microbiological Criteria for Food (Anon., 1992;
1997a; 1997b). There is now close agreement on the basic principles and
terminology between these two sources of guidance.
12.2 Legal requirements for HACCP systems
Although Pillsbury presented a paper on their management system at a food
industry conference in the early 1970s it took time for the potential benefits to be
recognised. The 1973 U.S. Food and Drug Administration (FDA) canned food
regulations represented the first regulatory use of HACCP principles to identify
specified controls (Anon., 1973b). This legislation was followed much later by
FDA mandatory HACCP regulations for domestic and imported fish and fishery
products in 1995/1996 and the U.S. Department of Agriculture (USDA) HACCP
regulation covering domestic and imported meat and poultry products (Anon.,
1995a; Anon., 1996). Further U.S. regulations now require HACCP systems for
fruit and vegetable juices and eggs. The regulations have brought in new
requirements called Sanitation Standard Operating Procedures (SSOPs) that
provide a solid foundation and act as prerequisites for the HACCP system.
In Europe, the European Community Directive 93/43/EEC (1993) states that:
Food business operators shall identify steps in their activities which are
critical to ensuring food safety and ensure that adequate safety
procedures are identified, implemented, maintained and reviewed on the
basis of the following principles, used to develop the HACCP (Hazard
Analysis Critical Control Point) system.
It then lists at least six of the Codex principles, missing out verification,
validation and documentation. In addition there are three product specific
'vertical' directives, Council Directives 91/493/EEC, 92/5/EEC and 92/46/EEC
(1991; 1992a; 1992b) covering fishery, meat and dairy products. These
directives bring in an 'own check' requirement which includes keeping a
written or registered record. The EU plans to harmonise these directives into a
single regulation with HACCP as a legal requirement for all businesses covered
by the regulations. Mandatory HACCP implementation is also in force or
proposed in a number of countries including Australia, Canada, Cuba, Mexico,
New Zealand and Thailand, mostly for seafood products.
12.3 International guidance on HACCP implementation
Two key sources of international HACCP guidance are available from the U.S.
National Advisory Committee on Microbiological Criteria for Foods
(NACMCF) and Codex Alimentarius Commission (CAC). The NACMCF
produced its first guide in 1989, followed by updated versions in 1992 and 1997
(Anon., 1989, 1992 and 1997b). The 1989 guide described seven principles and
brought in the concepts of critical limits, corrective actions, record keeping and
the HACCP plan. The later guides were modified to make the system easier to
use and to reflect the evolution of the HACCP system. The 1997 guide includes
a section on prerequisites and is in close agreement with the CAC guidance
(Anon., 1997a).
CAC has produced a number of guides (e.g. in 1993), culminating in the present
guide adopted in 1997. Both the CAC and NACMCF guides identify seven key
principles:
Principle 1 Conduct a hazard analysis.
Principle 2 Determine the Critical Control Points (CCPs).
Principle 3 Establish critical limit(s).
Principle 4 Establish a system to monitor control of the CCP.
Principle 5 Establish the corrective action to be taken when monitoring
indicates that a particular CCP is not under control.
Principle 6 Establish procedures for verification to confirm that the HACCP
system is working effectively.
Principle 7 Establish documentation concerning all procedures, and records
appropriate to these principles and their application.
In the same document Codex also usefully provides guidelines for the
application of the HACCP system. These guidelines include a sequence of
activities for the application of HACCP principles which are outlined in Table
12.1. In its Technical Manual No. 38, the Campden and Chorleywood Food Research
Association (CCFRA) recommends a similar sequence of steps in HACCP
implementation, which is compared with Codex and the standard introduction to
HACCP implementation by Mortimore and Wallace in Table 12.1 (Anon.,
1997c; Mortimore and Wallace, 1998). This shows that both CCFRA and
Mortimore and Wallace put more emphasis on initial preparation and planning.
CCFRA suggest a further preparatory stage of defining the terms of reference.
This stage has also been referred to as establishing the scope of the study. The
HACCP team should clearly define what the study is to cover, whether it is to be
a specific product or process line, or a specific range of activities typically called
a module. The terms of reference should clearly outline the food safety hazards
that are to be considered in the study, whether they will be biological, chemical
Table 12.1  Approaches to HACCP implementationᵃ

Stage 1: Preparation and planning
  Codex guidelines: 1 Assemble HACCP team
  Mortimore and Wallace: • understanding HACCP concept • identifying and
    training HACCP team • baseline audit • project planning (incl. improving
    prerequisite systems)
  CCFRA manual: 2 Assemble HACCP team

Stage 2: HACCP studies and planning
  Codex guidelines: 2/3 Describe product and intended use; 4/5 Construct and
    verify flow diagram; 6 Conduct hazard analysis, identify control measures;
    7 Determine CCPs; 8 Establish critical limits; 9 Establish monitoring
    procedures; 10 Establish corrective action procedures
  Mortimore and Wallace: • terms of reference • describe product and intended
    use • construct process flow diagram • hazard analysis • identify CCPs
    • establish critical limits • identify monitoring procedures • establish
    corrective action procedures • validate HACCP plan
  CCFRA manual: 1 Define terms of reference; 3/4 Describe product and intended
    use; 5/6 Construct and verify flow diagram; 7 Conduct a hazard analysis,
    identify control measures; 8 Identify CCPs; 9 Establish critical limits;
    10 Establishing monitoring procedures; 11 Establish corrective action
    procedures

Stage 3: Implementing the HACCP plan
  Codex guidelines: 12 Establish records and documentation; 11 Establish
    verification procedures
  Mortimore and Wallace: • determine implementation method • set up
    implementation team • agree actions and timetable (incl. training, equipment,
    record-keeping) • confirm implementation complete • verify implementation
    through audit
  CCFRA manual: 13 Establish documentation and record-keeping; 12 Verification

Stage 4: Maintaining the HACCP system
  Mortimore and Wallace: • defined standards and regular audit • ongoing
    maintenance • data analysis and corrective action • HACCP plan re-validation
    • update
  CCFRA manual: 14 Review the HACCP plan

ᵃ The sequence of steps given by Codex and the CCFRA has been altered to fit the sequence suggested by Mortimore and Wallace (1998) for ease of comparison. The original sequences are indicated by the numbering of the steps in each case.
or physical hazards or any combination of these. If quality aspects such as
microbiological spoilage are to be included, this should be clearly stated. The
start point and end point of the study should also be included. Depending on the
approach, this could be from raw material purchase through to, at least, onward
despatch of the finished product. The role of the prerequisites, if used, should
also be clearly stated.
As well as the above guides, there have been a number of publications on the
effective implementation of HACCP systems. These include those by bodies
such as ILSI (Anon., 1997d) and ICMSF (Anon., 1988) as well as guides such as
Corlett (1998) and Mayes and Mortimore (2001), the latter reviewing the
experience of those with a background of implementing HACCP systems in
practice. The following discussion picks up some of the key issues raised in
these guides in the successful implementation of HACCP systems.
Wherever possible the HACCP study should be performed by a multi-
disciplinary team with relevant technical/scientific expertise and knowledge of the
operation. The team should be able to draw on the skills of a production specialist,
an engineer, a quality assurance or control/technical specialist, hygiene
management and, if the terms of reference include microbiological issues, a
microbiologist. It is very useful to include staff with practical knowledge as well
as managers. Typically teams comprising four to six people have been found to be
effective. A team leader or chairperson needs to be assigned, and should, ideally,
be the production specialist. The team leader is responsible for managing the study
and team meetings. In many operations the team leader is the person with specific
training and expertise in HACCP principles and implementation. A member of the
team needs to take notes from team meetings and to draft the HACCP plan as it
emerges from discussions within the HACCP team.
It is recommended that all members of the team receive basic training in
HACCP principles. In the UK the Steering Group on HACCP Standards has
developed both introductory and advanced level HACCP training standards on
HACCP Principles and their Application in Food Safety (Anon., 1995b; Anon.,
1999a). Training courses designed to this standard are specifically intended for
HACCP team members concerned with developing and implementing HACCP
systems. In small businesses it is likely that a team approach will not be feasible.
If the study is undertaken by one person, it is recommended that they seek
specialist external support or information to ensure the study will be effective.
This role may be undertaken by suitably skilled industry consultants. The
consultant should not prepare the HACCP plan for the business as it can be a
barrier to understanding and ownership of the system by the business (Taylor,
2001). Whoever has been involved in developing the study should be recorded,
together with the roles. The commitment of senior management is fundamental
to the effective development and implementation of HACCP. The team must
receive the resources and backing it needs.
The product(s) to be covered in the study must be fully described and defined
in terms of the key parameters that influence the safety of the product. Key
parameters could include composition (e.g. formulation, ingredients), physical/
chemical structure (e.g. Aw, pH), processing (e.g. pasteurisation or sterilisation
heat treatment, or freezing), the packaging system, storage and distribution
conditions and required shelf life. In addition to knowledge about the product
the team must have a clear understanding of the expected uses of the product by
the end-user or consumer. This should include the target consumer group for the
product which may be a vulnerable group such as infants or the immuno-
compromised.
Prior to the Hazard Analysis it is necessary to examine carefully the product/
process under study and prepare a flow diagram. The flow diagram should provide
a clear, accurate and simple description of all the operational steps, in sequence, in
the process. The team also needs to gather sufficient technical data for the study to
proceed. Supplementary information such as site plans, equipment layouts, details
of personnel routes, low care/high care separation and waste material flows can be
very useful to the team. Transfer of product from one step to the next must not be
forgotten. Typically this would be included in a specified step. A commonly found
problem with flow diagrams is that product recycling or rework loops are
forgotten. The format of the flow diagram is a matter of choice with no universal
rules for presentation. It is important that, once the flow diagram has been
prepared, it should be confirmed to ensure its accuracy and completeness. This
should include confirmation of any variations in procedures during, for example,
the night shift or at weekends. The prepared flow diagram must be amended to take
into account any deviations identified. The flow diagram will also need to be
amended as the process changes over time.
Once these preparatory stages have been fully performed, hazard analysis can
begin. Using the flow diagram as a guide the HACCP team should list all the
potential hazards that could realistically occur for each step of the process.
HACCP teams often use brainstorming techniques to help identify these
potential hazards. A hazard analysis must be conducted to determine which
hazards must be eliminated or reduced to an acceptable level to assure food
safety. Each hazard should be assessed with consideration of the risk of it
occurring (i.e. is it realistic?) and the severity of the harm it could cause to the
consumer. Considerations should include a combination of the following:
• the likelihood of the hazard occurring;
• the severity of the hazard to the consumer, including the numbers potentially
exposed to the hazard and the vulnerability of those exposed;
• if the hazard is microbial, whether and how it survives or multiplies;
• production or persistence of toxins in foods;
• chemical or physical contaminants.
Currently many of the judgements made in hazard analysis are based on
qualitative data. Indeed, hazard analysis has been identified as one of the most
difficult areas in HACCP implementation (Mayes and Mortimore, 2001).
The HACCP team must then determine how the identified hazards are to be
prevented, eliminated or reduced to an acceptable level by appropriate control
measures. More than one measure may be required to control a specific hazard,
although, in some cases, one control measure might control a number of hazards.
Modification of the process step or the operation may be required in the absence
of a suitable control measure. Many physical and chemical hazards may be
effectively controlled as part of the prerequisite programme.
The next HACCP stage is the determination of the critical control points
(CCPs). CCPs are process steps where control measures are essential to control a
hazard. Their identification requires professional judgement and experience
which is best provided by a multi-disciplinary team. The use of a decision tree
may help to determine the CCP and the reasoning behind it. A decision tree is a
logical sequence of questions which can be applied to each hazard at each
process step. A number of different versions of the tree have been developed,
and training is recommended for their correct use. The application of the tree
should be flexible and requires common sense. Decision trees do, however, promote
structured thinking and ensure a consistent approach is taken at every process
step, although they should only be used for guidance when determining the
CCPs. There is no limit to the number of CCPs that may be identified in a
process. Different businesses producing the same product may have different
process steps as CCPs and a different total number of CCPs. Correct
identification of the CCPs is essential to enable the business to focus its
resources at those steps critical for food safety.
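As an illustration of how such a decision tree operates, the Python sketch below encodes a simplified, paraphrased version of the commonly used Codex-style questions; it is not a substitute for the published trees or for the judgement of a trained HACCP team.

# Simplified sketch of Codex-style CCP decision-tree logic. The question
# wording is paraphrased, and the boolean inputs represent the HACCP team's
# judgement for one hazard at one process step. Illustrative only.
from dataclasses import dataclass

@dataclass
class StepAssessment:
    control_measure_exists: bool     # Q1: is a control measure in place at this step?
    step_designed_to_control: bool   # Q2: is the step designed to reduce the hazard to an acceptable level?
    contamination_may_exceed: bool   # Q3: could the hazard occur at, or rise to, unacceptable levels?
    later_step_controls: bool        # Q4: will a subsequent step control the hazard?

def is_ccp(a: StepAssessment) -> bool:
    if not a.control_measure_exists:
        # In the full tree the team would consider modifying the step or process
        # if control here is necessary for safety; for this sketch, not a CCP.
        return False
    if a.step_designed_to_control:
        return True
    if not a.contamination_may_exceed:
        return False
    return not a.later_step_controls

# Example: a cooking step designed to destroy vegetative pathogens
print(is_ccp(StepAssessment(True, True, True, False)))   # True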
The following HACCP stage is the definition of the critical limits for the
control measure(s) at each CCP. The critical limit is the value that separates safe
from unsafe. Critical limits may be stated in legislation and codes of practice. In
some cases experimental data will need to be collected before the critical limit
can be set. Critical limits need to have a definable, achievable level which can
be quickly and easily measured or observed through monitoring. Criteria often
used for critical levels include measurements of temperature, time and pH. In
many instances it is useful to include an operational target level. These levels are
set for day-to-day management of the step and are more stringent than the
critical limit. They will take into account normal process fluctuations.
Tolerances may also be established which indicate the degree of latitude
allowable around operational limits.
Monitoring the critical limits of the control measures at the CCPs is the next
stage in HACCP development. Monitoring is a planned sequence of recorded
checks, either by observation or measurement. It is an essential part of HACCP
systems since it establishes that critical limits are being met and that the CCPs
are all in control. The procedures used must therefore detect loss of control, or a
move towards a loss of control. The frequency of monitoring should be
sufficient to enable corrective action to regain control of the process, and ideally
should be continuous. Responsibility for monitoring must be clearly defined.
Monitors will require specific training to ensure they can perform the task
correctly. The records from monitoring provide evidence that safe food is being
produced, and must therefore be accurate and genuine. Records will need to be
signed by the person responsible for monitoring and reviewed by a responsible
reviewing official of the company.
When monitoring indicates a loss of control, and the critical limit has been
exceeded, the business must take corrective action. This is another essential
stage in the HACCP system. The corrective action plan must clearly state what
to do when things have gone wrong to bring the CCP back under control. The
corrective action plan should address:
? the identification and correction of the problem;
? the treatment and disposition of the affected product since the last acceptable
monitoring;
? the need to record the incident and the actions taken;
? the need to investigate the cause of the deviation and the steps required to
prevent its recurrence.
Clear responsibility for taking action will need to be defined. The records should
provide evidence that unsafe product did not reach the consumer.
Once the HACCP study has been completed and implemented, it must be
maintained and verified. Verification procedures check that the HACCP system
is achieving what it has been set up to do, whether it is working effectively and
whether it is being followed. The Codex guidelines identify three elements in
verification: auditing, review and validation. The importance of these elements
and the differences between them has been clearly defined in an ILSI
publication entitled Validation and Verification of HACCP (Anon., 1999b).
Validation should be an essential part of the HACCP process and should be
performed prior to implementing the plan. It is the responsibility of the business
to ensure that the HACCP plan that has been developed is valid. Validation
involves checking that:
• the hazards have been correctly identified and that they can be effectively
controlled;
• the CCPs have been correctly determined and that critical limits will
adequately control the hazards to a safe level;
• the defined monitoring procedures will effectively monitor the critical limit;
• corrective actions will stop unsafe food reaching the consumer if the
procedures are correctly implemented.
Validation data needs to be quantifiable and objective.
In contrast, verification can only be performed on an implemented system.
Verification is the systematic gathering and evaluation of data to show that the
personnel are following the plan and that it has been implemented effectively. In
addition, periodic review should be performed to establish if there have been
changes to the operation or external factors that mean the HACCP will need to
be updated. Typically HACCP teams perform a recorded annual review, with a
system in place to trigger automatically a review of the HACCP plan prior to
any change.
The final stage in HACCP development is that of documenting HACCP
procedures and record keeping. HACCP documentation must include at least the
HACCP plan, the written document that shows the application of the HACCP
principles. Typically this would contain information on the preparatory stages,
including the terms of reference, team details, product description and intended
use and flow diagram, and the HACCP charts detailing the control of the CCPs.
There will also be a requirement for supporting information, such as procedures,
work instructions, records of HACCP team meetings and prerequisites details.
Documentation must be kept up to date by controlled amendment. Accurate and
efficient record keeping is essential to the successful application of HACCP to a
food business. Records provide support for a due diligence defence under the
UK Food Safety Act. Records need to be retained for an appropriate time and
kept easily accessible. Computer software packages are available to help
businesses document their HACCP studies. They guide the user through HACCP
principles logically and systematically and many have HACCP guidance notes
to aid the team. Easier and more controlled amendment of an existing HACCP
plan is one major advantage of their use. Although implementation is not a
stated principle, it is implied within the Codex text. Essential to successful
implementation is the commitment of senior management and staff, the transfer
of ownership, training of relevant staff, and maintenance including the
verification and the use of a valid plan.
12.4 Problems in HACCP implementation
HACCP has become adopted throughout many sectors of the food and drink
chain as the pre-eminent tool for food safety management. It has been endorsed
in both national and international legislation and by various bodies such as
WHO and FAO. A major attraction of HACCP is its flexibility. Once the
principles have been fully understood, it is possible to apply them throughout the
food chain. Although presently used mainly by food manufacturers, it can be
appropriate for use by caterers, retailers and primary producers. It is a logical
system largely based on common sense.
Whilst undoubtedly it has taken food safety forward, there are limitations and
potential weaknesses in HACCP systems which need to be addressed. Hazard
analysis is difficult without access to the required expertise. The setting of
critical limits also requires expertise that may not be available within a food
operation. Kane (2001), in addressing the subject of assessing supplier HACCP
systems, identified three main areas of weakness which auditors should take
account of:
1. Weaknesses in the design of the HACCP plan.
2. Failure to maintain the HACCP system.
3. Very occasionally, management neglect of safety as a priority.
As an example of a design weakness, Kane described an investigation in the
mid-1990s involving cases of Salmonella illness associated with infant food. He
concludes that the possibility of an elevated susceptibility of infants to lower
than average Salmonella contamination levels had not been adequately
considered in the product formulation or process specification, because it had
not been adequately considered in the initial HACCP study of the product, or
identified as a CCP. Microbiological Risk Assessment (MRA) could have
assisted here by helping make a more informed judgement of the level of risk
and hence the need for a critical control point and appropriate critical limit.
It is most important that HACCP plans incorporate new processing and
preservation techniques, product development changes and the food safety
implications further down the process. As an example, Kane refers to an incident
of Salmonella food poisoning with snack salami where, in his view, management
failed to understand the basic food science and technology changes in moving
from a traditional single salami to a snack salami of finger-thick dimensions. This
meant that the surface area to mass ratio was different, the salami dried much
quicker, water activity fell faster and acidification was incomplete. In this case,
management lacked the appropriate microbiological expertise to identify the
growth parameters for Salmonella and the impact of process changes in removing
a traditional control point. MRA would provide more systematic data on the risks
posed by pathogens for differing product/process combinations.
The lessons to be learnt from HACCP systems that have been implemented in
practice have been considered by a number of others, notably Mayes and
Mortimore (2001). They identify similar weaknesses to those identified by
Kane:
• wrong perceptions of HACCP as an overcomplex and bureaucratic system,
resulting in poor motivation in implementing HACCP systems effectively;
• the lack of a proactive culture at all levels of the organisation, leading, for
example, to the failure of production staff to take responsibility for critical
control points;
• misunderstanding of HACCP methodology, for example in confusing safety
and quality issues, or identifying too many control points as 'critical';
• lack of expertise in such areas as hazard analysis.
These weaknesses lead to a number of common failures in HACCP systems,
including:
• failure to identify and allow for some hazards in HACCP planning,
compounded by poor validation;
• over-complex and unworkable HACCP systems;
• ineffective monitoring and corrective action due to failures in organisational
culture, poor training and verification procedures;
• poor documentation and review.
The UK Food Standards Agency (FSA) has an active programme of research
focused on improving food safety, and a specific programme on managing
microbiological hazards and risks. As part of this programme the FSA has
funded projects on HACCP (Anon., 2000). In three projects, the use of HACCP,
the necessity for documentation and the degree of verification of HACCP
procedures required have been studied. One of these projects focused on meat
product manufacturing and butchers¡¯ shops. Significant barriers to the adoption
of HACCP were the lack of training and access to expert advice. It was found
that the reliance on independent consultants could lead to problems on
occasions. A second project involving a range of manufacturing companies
showed that most could identify hazards but had difficulty identifying critical
control points. There was also considerable confusion regarding the verification
of HACCP. The third project focused on the catering and retail sectors and
found an excess of documentation in applying HACCP but a failure to identify
CCPs correctly.
The FSA Food Research Programmes Annual Report 1999–2000 states that
improvements might include:
• improved dissemination of technical information for food businesses;
• emphasis on the differences between quality and safety issues;
• reducing the number of CCPs by expert risk assessment;
• effective separation of general hygiene practice (GHP) aspects from the precise
product/process controls required for HACCP.
The FSA has also started a number of initiatives to overcome these problems,
including establishing a documentation system which guides HACCP teams
through CCP analysis in order to arrive at valid CCPs (FSA 2000).
12.5 The interaction between HACCP systems and
microbiological risk assessment (MRA)
HACCP is a management tool that in many ways can be equated to Risk
Management, rather than Risk Assessment. It should be a practical operational
system to assure the production and handling of safe food by a particular
business, with clear identification of the potential hazards in that operation and
the application of appropriate and effective control measures.
In comparison, MRA is a process or design tool enabling the risks from a
particular and defined hazard to be identified independently of operational
solutions. Increasingly the results of individual MRAs will provide a
quantitative analysis of a hazard and its potential effect on the consumer.
While it is recognised that full and comprehensive MRAs are only likely to be
undertaken by government agencies, research organisations and perhaps some
larger food organisations, the principles and tools of MRA will have wide
applicability throughout the food chain to businesses of all sizes. The Campden
and Chorleywood Food Research Association (CCFRA) has produced a
guidance document on the 'Introduction to the Practice of Microbiological
Risk Assessment for Food Industry Applications' (Voysey, 2000). CCFRA is
currently exploring the possibility of producing an updated and simplified MRA
guide also targeted at the food industry.
The CCFRA document entitled 'HACCP: A Practical Guide' (Anon., 1997c)
describes 14 stages in the process of undertaking a HACCP study. These 14
stages embrace the seven Codex principles. Below we have attempted to
indicate where MRA can assist in developing a more informed and robust
HACCP by reference to the 14 stages described in the CCFRA document.
12.5.1 Stage 1: Terms of Reference
It is essential that a HACCP study is undertaken on a specific product/process
line. Likewise, a MRA is a highly targeted assessment which addresses specific
hazards associated with clearly defined production/processing scenarios. The
better defined the questions are then the more valuable the outputs will be.
MRAs may help HACCP teams to define clearer terms of reference.
12.5.2 Stage 2: Selecting the HACCP Team
It is recommended that HACCP teams draw on a range of expertise, e.g. quality
control specialists, production experts, engineers. Small businesses are advised
to seek specialist external support if such expertise is not available in house. By
their very nature, MRAs are complex and comprehensive investigations which
are generally impossible to undertake without the active involvement of a wide
range of specialist skills including clinicians, epidemiologists, mathematical
modellers and statisticians. The expertise pool involved in MRA is almost
certainly going to be far more extensive than that employed in a HACCP team.
MRAs will, therefore, provide HACCP teams with access to a wider pool of
expert knowledge.
12.5.3 Stage 3: Describe the product
Both HACCP and MRA should be equally comprehensive in identifying the
intrinsic and extrinsic parameters which influence product safety. MRAs will
provide a resource in informing this stage in HACCP analysis.
12.5.4 Stage 4: Identifying intended use
In HACCP analysis this stage requires a clear understanding by the
manufacturer of the intended use of the product by the consumer and the
potential vulnerability of different sub-groups within the consumer population,
e.g. infants, the elderly or the immunocompromised. The hazard characterisation
step of an MRA will include a dose-response assessment. This will involve an
understanding of the nature, severity and duration of adverse health effects
associated with harmful agents in food. It will also consider the dynamics of
infection and the sensitivity of the host population. As foods become more
targeted at different sub-groups within the population, the type of information
that can be derived from MRA will become increasingly valuable. At the present
time, we are principally dealing with acute illness syndromes. In the future, both
HACCP and MRA are likely to address long-term or chronic illnesses.
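The dose–response element of hazard characterisation described above is usually expressed through simple parametric models. The short Python sketch below shows two forms widely used in quantitative MRA, the exponential model and the approximate Beta-Poisson model; the dose and parameter values are hypothetical placeholders, not estimates for any real pathogen or consumer sub-group.

```python
import math

def exponential_dose_response(dose: float, r: float) -> float:
    """Probability of illness from an ingested dose (exponential model)."""
    return 1.0 - math.exp(-r * dose)

def beta_poisson_dose_response(dose: float, alpha: float, beta: float) -> float:
    """Approximate Beta-Poisson model, often used for infectious pathogens."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Hypothetical dose of 100 organisms in a serving, with placeholder parameters;
# a more susceptible sub-group could be represented by a larger r or alpha.
dose = 100.0
print(exponential_dose_response(dose, r=0.001))
print(beta_poisson_dose_response(dose, alpha=0.25, beta=50.0))
```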
12.5.5 Stage 5: Construct a flow diagram/Stage 6: On-site confirmation of
flow diagrams
A comprehensive HACCP study should create a full process flow diagram. It is
unlikely that MRA would add anything further to this stage. It is always
important that the flow diagram accurately reflects what actually happens in
practice, including any night-shift or weekend deviations.
12.5.6 Stage 7: List all potential hazards, conduct a hazard analysis and
consider measures to control identified hazards
This is an activity where potentially MRA has a considerable opportunity to
enhance the debate and the judgements made, particularly in relation to the
severity of hazards. One criticism of HACCP methodology is that it does not
define and measure outcomes for consumer safety (Orriss and Whitehead, 2000).
This weakness contributes to confusion about what constitutes a hazard and
which hazards present the greatest level of risk. MRA provides a systematic
analysis of levels of risk for differing pathogens to consumers. Whilst in the past
most judgements have been made based on qualitative data, MRA is
increasingly introducing quantitative appraisal of data. This of course not only
gives greater confidence in data but allows for more objective comparative
analysis between data sets. In MRA, information on dose–response and exposure
assessment is an integral part of the process with rigorous procedures for
identifying variability and uncertainty in the data. In MRA the effects of
intervention or mitigation strategies can also be analysed to see where the best
options for control exist. In HACCP, the need to consider the measures to
control identified hazards can only be enhanced by information derived from
MRA studies.
12.5.7 Stage 8: Determine Critical Control Points (CCPs)/ Stage 9:
Establish Critical limits for each CCP/Stage 10: Establish a monitoring
system for each CCP
In HACCP the setting of measurable and meaningful control limits is often the
most difficult task and can lead to problems. The real critical limit, the division
between safe and unsafe food, is often not known or is based on qualitative data.
While some criteria are defined in legislation, e.g. time and temperature for milk
pasteurisation, some may need extra data to be collected to determine the critical
limit. The target levels and tolerances set for each CCP have to be chosen with
care. The benefit of MRA is that techniques are increasingly being developed
which can explore the impact of changing parameters, i.e. exploring a number of
'what if' scenarios. MRAs will thus provide valuable information for setting
critical limits.
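As an illustration of the kind of 'what if' exploration referred to above, the sketch below uses a square-root (Ratkowsky-type) secondary growth model to compare candidate storage-temperature limits. The coefficients, temperatures and storage period are illustrative assumptions, not validated values for any particular pathogen or food.

```python
import math

def specific_growth_rate(temp_c: float, b: float = 0.03, t_min: float = 2.0) -> float:
    """Square-root secondary model: sqrt(mu) = b * (T - Tmin); returns mu in 1/h."""
    if temp_c <= t_min:
        return 0.0
    return (b * (temp_c - t_min)) ** 2

def log10_increase(temp_c: float, hours: float) -> float:
    """Predicted log10 increase during storage at a constant temperature."""
    return specific_growth_rate(temp_c) * hours / math.log(10)

# Compare candidate storage-temperature critical limits over a 72-hour period.
for temp in (5, 8, 12):
    print(temp, "deg C:", round(log10_increase(temp, hours=72), 2), "log10 increase")
```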
12.5.8 Stage 11: Establish a corrective action plan/Stage 12: Verification/
Stage 13: Establish documentation and record keeping/Stage 14: Review the
HACCP plan
MRA will have limited input on these final four stages of the HACCP process.
These are mainly concerned with management and audit control, corrective
action, improvement and review.
In summary, the major input of MRA studies will be on the identification of
hazards, control measures and the assessment and identification of critical
control points. Interestingly, this is the area where there continues to be a level
of concern in terms of current HACCP implementation.
12.6 The future relationship of HACCP systems and MRA
Mayes and Mortimore (2001) identified a number of issues that will impact on
HACCP in the forthcoming years and promote change. These include:
• the increasing globalisation and harmonisation of trade between countries;
• the changing role of governments and regulatory authorities in the assessment
of HACCP;
• the role of HACCP in new science/food safety initiatives such as Quantitative
Risk Analysis;
• the need for application of HACCP throughout the supply chain.
A key provision of the World Trade Organisation Sanitary and Phytosanitary
(SPS) Agreement is the requirement for countries to provide risk assessments to
ensure the safety of food and that standards of safety between exporting and
importing countries are equivalent. Mayes and Mortimore (2001) consider that
the concept of equivalence is one of the most contentious issues in food safety at
the present time.
The International Commission on Microbiological Specifications for Foods
(ICMSF) has proposed a scheme for managing microbiological risk for foods in
international trade in which the Food Safety Objective (FSO) is a functional
link between risk assessment and risk management (Legan et al., 2002). An FSO
is defined as a statement of the frequency or maximum concentration of a
microbiological hazard in a food considered acceptable for consumer protection.
FSOs allow for the equivalence of different control measures to be established.
The ICMSF has proposed five steps for using FSOs in managing food safety
(van Schothorst, 1998):
1. Conduct risk assessment.
2. Conduct risk management option assessment.
3. Establish the food safety objective.
4. Confirm that the food safety objective is achievable through good hygiene
practices and HACCP.
5. Establish acceptance procedures.
An example of an FSO could be that the level of Listeria monocytogenes in
ready-to-eat foods should not exceed 100 cfu/g at the time of consumption. The
term FSO and others are slowly being introduced into the food safety
management vocabulary. Key terms have been identified and defined by van
Schothorst (1998) as follows:
• a performance criterion is the required outcome of a step or a combination of
steps that can be applied to ensure an FSO is met, e.g. a 6 log10 reduction in
the target organism;
• a step is a point, procedure, operation or stage in the food chain, including
raw materials, from primary production to final consumption;
• a process criterion is the control parameters of a step or combination of steps
that are applied to achieve the performance criterion, e.g. heating for 2 minutes
at 70 °C (a worked sketch of how these terms combine against an FSO follows below).
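To make the link between these terms concrete, the short sketch below checks two illustrative control strategies against an FSO using the relation commonly quoted in the ICMSF literature, H0 − ΣR + ΣI ≤ FSO (the initial level, minus the sum of reductions, plus the sum of increases, all in log10 cfu/g, must not exceed the FSO). All of the numbers are illustrative assumptions, not values taken from any published assessment.

```python
def meets_fso(h0: float, reductions: list, increases: list, fso: float) -> bool:
    """All arguments in log10 cfu/g; reductions and increases are listed per step."""
    final_level = h0 - sum(reductions) + sum(increases)
    return final_level <= fso

fso = 2.0  # e.g. not more than 100 cfu/g at the time of consumption

# Two different control strategies can be compared for equivalence against the same FSO:
strategy_a = meets_fso(h0=3.0, reductions=[6.0], increases=[1.0], fso=fso)       # single 6-log heat step
strategy_b = meets_fso(h0=1.0, reductions=[2.0, 2.0], increases=[0.5], fso=fso)  # hygiene plus milder steps
print(strategy_a, strategy_b)
```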
Risk assessment is very much a science-based activity that provides data for
use in risk management decision making. Many scientists and risk managers
believe quite strongly that these two processes should be kept quite separate. In
that way, objective data is produced based on the best available knowledge. The
subsequent process of managing risk is influenced by a number of socio-
economic factors. Increasingly, it is being recognised that the outcome of a risk
assessment will be a Level of Protection (LOP) (e.g. the estimated number of
cases of Salmonella infection associated with chicken per 100,000 of the
consumer population per year). It is for the risk managers (government agencies) to
decide if this LOP is acceptable or appropriate (Mayes and Mortimore, 2001).
Thus the outcome of risk analysis is an acceptable level of protection
(ALOP). Food safety objectives (FSOs) are intended to convert the ALOP
(a level of risk) to a level of hazard.

Fig. 12.1 Government and industry responsibilities in food safety management
(reproduced with permission of the International Life Sciences Institute).
Anon. (1998) has produced a schematic diagram which illustrates how risk
assessment, as an integral part of risk analysis, leads to the production of food
safety objectives (Fig. 12.1). Setting FSOs is a government responsibility. It is
for the food industry to embrace these objectives in their food safety
management procedures, of which HACCP is a key component. It is clear
that several of the tools and approaches used in risk assessment can be of direct
benefit to the design, implementation and verification of industrial HACCP
schemes.
12.7 References
ANON (1973a) Food Safety Through the Hazard Analysis and Critical Control
Point System. Pillsbury Company.
ANON (1973b) Acidified Foods and Low Acid Foods in Hermetically Sealed
Containers. United States Food and Drug Administration. Code of US
Federal Regulations, Title 21, Ch. 1, Part 103, 113, 114. Washington DC:
GPO.
ANON (1988) Micro-organisms in food 4: Application of the Hazard Analysis
and Critical Control Point (HACCP) System to Ensure Microbiological
Safety and Quality. Blackwell Scientific Publications, London.
ANON (1989) HACCP Principles for Food Production. National Advisory
Committee on Microbiological Criteria for Foods.
ANON (1992) Hazard Analysis and Critical Control Point System. Revision of
the 1989 guide. National Advisory Committee on Microbiological Criteria
for Foods.
ANON (1995a) Procedures for the Safe and Sanitary Processing and Importing of
Fish and Fishery Products. United States Food and Drug Administration.
21 CFR, Parts 123 and 1240. Federal Register, Vol 60, No 242. Rules and
Regulations.
ANON (1995b) HACCP principles and their application in food safety
(introductory level) training standard. Royal Institute of Public Health
and Hygiene, London.
ANON (1996) Pathogen Reduction: Hazard Analysis and Critical Control Point
(HACCP) Systems. Food Safety Inspection Service – United States
Department of Agriculture. GCFR, Part 304 etc. Federal Register, Vol 61,
No 144. Rules and Regulations.
ANON (1997a) Hazard Analysis and Critical Control Point System and
Guidelines for its Application. Alinorm 97/13A, Codex Alimentarius
Commission (CAC), Rome.
ANON (1997b) Hazard Analysis and Critical Control Point Principles and
Application Guidelines. National Advisory Committee on Microbiological
Criteria for Foods.
ANON (1997c) HACCP: A Practical Guide (Second Edition). Technical Manual
No. 38. Campden and Chorleywood Food Research Association.
ANON (1997d) A Simple Guide to Understanding and Applying the Hazard
Analysis and Critical Control Point Concept (Second Edition).
International Life Sciences Institute (ILSI) Europe Scientific Committee
on Food Safety, Brussels.
ANON. (1998) Food Safety Management Tools. Report prepared by the
International Life Sciences Institute (ILSI) Europe Risk Analysis in
Microbiology Task Force.
ANON (1999a) HACCP principles and their application in food safety (advanced
level) training standard. UK Steering Group on HACCP Training
Standards. Royal Institute of Public Health and Hygiene, London.
ANON (1999b) Validation and verification of HACCP. International Life
Sciences Institute (ILSI) Europe, Brussels.
ANON (2000) Food Standards Agency: Food Research Programmes Annual
Report 1999–2000. HMSO, London.
BAUMAN, H E (1974) The HACCP concept and microbiological hazard
categories. Food Technology 28 (9): 30–2.
CORLETT, D. (1998). HACCP Users' Manual. Aspen Publishers Inc.,
Gaithersburg.
EEC (1991) Council Directive 91/493/EEC of 22 July 1991 laying down the
health conditions for the production and the placing on the market of
fishery products. Official Journal of the EC L 268, 24/09/1991, p. 0015–0034.
As amended.
EEC (1992a) Council Directive 92/5/EEC of 10 February 1992 amending and
updating Directive 77/99/EEC on health problems affecting intra-Community
trade in meat products and amending Directive 64/433/EEC.
Official Journal of the EC L 057, 02/03/1992, p. 0001–0026.
EEC (1992b) Council Directive 92/46/EEC of 16 June 1992 laying down the
health rules for the production and placing on the market of raw milk, heat
treated milk and milk based products. Official Journal of the EC L 268,
14/09/1992, p. 0001–0032. As amended.
EEC (1993) Council Directive 93/43/EEC of 14 June 1993 on the Hygiene of
Foodstuffs. Official Journal of the European Communities, No L175/1-11.
FSA (FOOD STANDARDS AGENCY) (2000) Food Research Programmes Annual
Report 1999–2000.
KANE, M. (2001) Assessing supplier HACCP systems: a retailer's perspective. In
Dillon, M and Griffith, C (eds), Auditing in the food industry. Woodhead
Publishing Ltd., Cambridge.
LEGAN, D., VANDEVEN, M., STEWART, C. AND COLE, M. (2002) Modelling the
growth, survival and death of bacterial pathogens in food. In Blackburn, C
de W, and McClure, P J (eds), Foodborne pathogens: hazards, risk
analysis and control. Woodhead Publishing Ltd., Cambridge.
MAYES, T. AND MORTIMORE, S. (2001) Making the most of HACCP: learning from
others' experience. Woodhead Publishing Ltd., Cambridge.
MORTIMORE, S. AND WALLACE, C. (1998). HACCP: a Practical Approach (Second
edition). Aspen Publishers Inc., Gaithersburg.
ORRISS, G AND WHITEHEAD, A (2000) Hazard analysis and critical control point
(HACCP) as part of an overall quality assurance system in international
food trade. Food Control, 11: 345–51.
TAYLOR, E (2001) HACCP and SMEs: problems and opportunities. In Mayes, T.
and Mortimore, S. (eds) Making the most of HACCP: learning from
others' experience. Woodhead Publishing Ltd., Cambridge.
VAN SCHOTHORST, M. (1998) Principles for the establishment of microbiological
food safety objectives and related control measures, Food Control, 9 (6):
379–384.
VOYSEY, P (2000) An Introduction to the Practice of Microbiological Risk
Assessment for Food Industry Applications. CCFRA, Guideline no. 28.
13
The future of microbiological risk assessment
M. Brown, Unilever R & D, Sharnbrook and M. Stringer, Campden and
Chorleywood Food Research Association, Chipping Campden

13.1 Introduction
In considering the future of MRA, it is important to recognize the difference
between ¡®hazard¡¯ and ¡®risk¡¯, because they pose different challenges and
opportunities for risk analysis.
13.1.1 Hazard
A food hazard is a biological, chemical or physical agent in a food with the
potential to cause adverse health effects (Codex Alimentarius Commission,
1997; Anon., 1996). There are many microbiological hazards associated with
food that can and do cause injury and harm to human health. Millions of people
world-wide suffer from 'food-borne' diseases of microbiological origin each
year. Microbial hazards are not a static group, but change because of differences
in prevalence, eating habits and human sensitivity and the exchange of genetic
material between some species of micro-organisms. Hence, reliable identifica-
tion and characterisation of hazards for any supply chain, food and group of
consumers is an essential basis for risk assessment. Providing adequate and
credible information for decision-makers will be the major scientific task for
researchers, industry and regulators. Similarly, it will not be a one-off task: the
regular review of which hazards are current, which are emerging with the
potential to cause harm, and the severity of their effects is essential to the
future of risk assessment. Accompanying this must be the development and
selection of preventive measures, whose severity and nature will be linked to the
characteristics and resistance of the hazard and the severity of its effects for
humans.
13.1.2 Risk
In contrast, risk is the estimated probability (or perception) and severity of
adverse health effects caused by consumption of a hazardous agent in a food.
Even if the nature of the hazard itself does not change, demography or other
factors (e.g. increased consumer awareness of health effects or wider
geographical distribution of the hazard) can alter its impact. Time and
demographic change bring about differences in risk level for various groups
of consumers (e.g. those with increased levels of sensitivity – the young, the old,
the pregnant and the immuno-compromised – which represent growing sections
of the population); these differences need to be considered, along with
perceived health impact.
Designing products or supply chain conditions to protect vulnerable consumers
may impose (unacceptable) costs on other consumers that are not at risk, or may
limit their choice, by restricting options for risk management. Ways to manage
this have to be found, so that risk management can provide all consumers with
'real' protection (from food poisoning) whilst ensuring that any hazard,
technology or product provides acceptable (perceived) consequences for
everyone affected by it. To help this process, it may be useful to examine the
impact of the hazard, and express risk acceptance in terms of performance
standards, conveying risk-benefit balances in meaningful terms to consumers,
rather than just using technical standards, limited to product, process and
microbiological details (e.g. microbiological food safety objectives). This should
help consumers to evaluate and compare the performance of control options and
may reduce their resistance to technical innovation or calls for changes in
behaviour (e.g. cooking of beef to eliminate E. coli O157). The most
valuable performance standards would allow consumers to use their own direct
experience and perception. Risk analysis needs to play a key role in helping
consumers confidently accept a balance between technically valid solutions and
those based on perception.
13.1.3 Tools for assessing risks
Risk management and communication should structure the information from
risk assessments and risk estimates to form a supportable basis for reducing
safety risks to an acceptable level. Explaining the link between reducing the
level of a hazard in food and the decrease in risk that this causes is essential to
providing credible food safety controls, and must be clearly linked to any
accompanying increases in cost, or other restrictions. A transparent process for
doing this, including the relevant participants in the whole process from the risk
profile to the final management actions is a key to reaching valid and acceptable
decisions that are supported by consumers and form sustainable solutions to any
food safety problem.
Hazard analysis (as part of HACCP) is already established as a key
operational means of controlling risks. Lack of rigour in hazard analysis (an
activity roughly equivalent to exposure assessment) has already been seen as a
weakness in the HACCP system, and more reliable means of hazard
identification and exposure assessment are also needed. In processing
operations, there is a progressive move towards making production responsible
for quality, with QA and regulatory bodies having a facilitation and monitoring
role, rather than their traditional ¡®control¡¯ role. Therefore training should ensure
that food industry workers understand their essential role in assuring food safety
and maintaining consumer trust. To do this means that they need better training
to recognize the importance of risk determining steps, including CCPs.
A different emphasis in the legal framework of various countries places
operational responsibility for microbiological safety with producers and
regulatory agencies to differing extents. But the level of risk to the world's
consumers from food borne microbial hazards is always fixed by the day-to-day
control exercised by producers and processors. Official food control authorities
are often not fully involved in risk management because of the high costs of
participation and the impossibility of their understanding the safety aspects of all the
technologies and supply chains they encounter. Their major activity should
focus on improvement in techniques for monitoring compliance with established
or new safety principles. Where there is doubt over the acceptability of risks, the
question of 'sufficient proof' to show harmlessness, or the effectiveness of
control measures needs to be addressed by publicly managed research pro-
grammes to prevent undue reliance on precaution and restriction of innovation.
13.1.4 Microbiological safety risk analysis
The systematic analysis of microbiological safety risks is currently an emerging
technique with some information sources and some tools. Its process currently
comprises three separate tools for the assessment, management and com-
munication of risks, but it lacks any tool for predicting the acceptance of its
decisions and actions. The immediate challenge is how to collect and analyse
meaningful data and then send out sound messages about safety risks from food,
or water, to consumers and trading partners. For these messages to be trusted and
acted upon, the information and decision-making processes must be credible.
Food safety incidents and scares resulting from contamination of food have damaged
trust in areas where it is essential. The content and quality of safety messages
has to ensure that they are understood. This will only be achieved by the
development of:
• transparent decision-making processes that consumers, producers and
scientists understand and trust;
• reliable and pertinent information, including clear identification and handling
of knowledge gaps, uncertainties and variability.
In spite of these weaknesses, risk analysis is now widely recognized as a vital
analytical process for the development of food safety standards. It is
increasingly valued, because its tools can systematically investigate the
relevance of hazards and the levels of safety in products and supply chain
operations needed to retain consumer safety and confidence.
The fate of micro-organisms in food and human responses to them depend on
a number of variables, some of which may be inter-linked; therefore
microbiological risk assessment requires handling complex information and
predictions. There is rarely good cause and effect data available. Health effects
of any agent may be severe in one person, mild in another or completely absent
in others. Ideally an assessment of clinical effects and consumer impact would
be limited to a technical assessment. But in practice, perceived risk is often a
stronger driver for consumer (or regulatory) response to hazards and it is not
well covered by current management and communication tools. In particular,
consumers express continuing concerns about health effects linked to specific
microbiological contaminants (e.g. Listeria monocytogenes, E. coli O157,
Campylobacter and viruses), new technologies, uncontrolled or unacceptable
food handling practices, or technologies that may introduce or leave
microbiological hazards in food. Their assessments and conclusions are often
different from those of experts and they give different weighting to the presence
or absence of scientific data on hazards, often basing decisions on experience or
media material. Safety concerns are most often voiced in the developed world;
but improvements in global communication are likely to heighten global interest
on them.
To meet these challenges and ensure a balanced response to regulating food
safety, the development of risk analysis needs to be managed to handle both
'real' and 'perceived' risks, so that it becomes progressively more widely
accepted, and so that the scientific and transparent assessment of microbiological
safety risks builds rather than undermines consumer confidence by providing
data that feeds into the channels used by consumers. Risk assessment outputs
based on risk level and hazard severity are the best means for risk managers to
develop reduction or prevention measures that are scientifically justifiable,
consistent and likely to be acceptable to consumers. But perception of their
effectiveness or acceptance will depend on presentation and be verified by the
subsequent willingness of consumers to accept risks in the form of a product or
technology or lobby for change, in the context of their own circumstances.
Understanding where the balance lies between technical and perceived risk in
various markets can point the way for management strategies and indicate
potential problem areas, such as global differences in acceptable levels of safety
depending on food availability and the feasibility or cost of control measures.
This chapter focuses on the future of MRA and outlines some of the
developments, choices, information gaps and process changes that may shape it.
13.2 Information needs for risk assessment
13.2.1 Data needs
Risk analysis needs sound information to produce sound actions. The
information flow in risk-based food safety control systems can be summarised
as shown below. For any topic, information is available from a variety of
sources. Criteria to define the information relevant to the risk assessment should
be provided by the risk profile, because this information needs to support both
the risk manager and the risk communicator, who may have different
requirements arising from their responsibility for technical or perceived aspects
of risk. Working backwards from their needs may clarify requirements for a
credible message or management actions, and highlight differences between
what is needed, what is available and knowledge gaps. Different techniques can
be used to fill knowledge gaps; in decreasing order of reliability these range from
new data, through validated predictive modelling, to assumptions (Fig. 13.1).

Fig. 13.1 Information flow in risk-based food safety control systems.
13.2.2 Data from testing, surveys and clinical, epidemiological or case-
control studies
Data on hazards (e.g. infectious or toxigenic pathogens and toxins), routes of
contamination, prevalence, geographical or commodity distribution of
pathogens, commercial and consumer practices and consumer sensitivity are
essential inputs for microbiological risk analyses, but are only incompletely
available. Limited information is available from many sources, but
currently few organisations collect pertinent data or structure it so that it can
easily be used by risk assessment studies. Adoption of a risk-based safety control
approach by regulatory and other organisations would require co-ordinated
development of global or web-based systems for storage and retrieval of
microbiological, processing and prevalence information.
Many sources of data already exist because many organisations and
companies examine the microbial populations of food and raw materials on a
routine basis for comparison with microbiological criteria (regulations, standards,
specifications or directives) to determine product or process acceptability.
Systematic organisation of such data could provide up to date information on the
distribution of pathogens in food and show the actual risks of current products,
product designs, supply chain operations and consumer uses (e.g. chilled meals,
where there is a continuing debate between expert microbiologists and industry
over the risks of botulism). Useful data would cover microbiological levels (e.g.
counts of specific types of micro-organisms, such as Salmonella absent in 25 g), key
process attributes (e.g. heating to 75 °C) and product characteristics (e.g. pH
4.6). It could be used to judge the effectiveness of current controls and the
implementation of existing risk management actions.
Any proposed microbiological criteria should be compared with accepted
criteria and systems (e.g. good manufacturing or safety/quality limits, ICMSF)
as these are likely to be widely accepted and based on expert opinion or surveys.
Formal risk assessment may help to prevent increasingly tight criteria being
forced by uninformed opinion that assumes their value in protecting public
health, unsubstantiated by scientific or epidemiological information.
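As a simple illustration of what routine testing against a criterion such as 'Salmonella absent in 25 g' can and cannot demonstrate, the sketch below calculates the probability that a two-class attributes sampling plan (n sample units of 25 g, acceptance number c = 0) accepts a lot, for different proportions of contaminated units. The plan and contamination rates are illustrative assumptions only.

```python
from math import comb

def prob_acceptance(p_defective: float, n: int = 5, c: int = 0) -> float:
    """Probability that a lot is accepted when a fraction p_defective of its 25 g
    sample units would test positive (binomial, two-class attributes plan)."""
    return sum(comb(n, k) * p_defective ** k * (1 - p_defective) ** (n - k)
               for k in range(c + 1))

# Even a five-sample, zero-tolerance plan accepts many lots when contamination is infrequent.
for p in (0.01, 0.05, 0.20):
    print(f"{p:.0%} of units contaminated -> P(accept) = {prob_acceptance(p):.3f}")
```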
13.2.3 Genomics and bio-informatics
Information on the microbial genome could provide a mechanistic analysis of
the different characteristics of micro-organisms that result from their molecular
make-up (e.g. acid or heat tolerance or different growth rates, toxigenesis or
pathogenicity in response to environmental conditions). At a functional level
this tool could be used to predict the activities of any micro-organism present, or
growing, in products, provided they and their environment could be adequately
characterised. At present microbiological analysis of processes and materials
using conventional microbiological methods offers a very intensive and low-
resolution approach to determining cause and effect linkages or predicting the
fate of food borne pathogens. Because comparative genomics profiles the
complete DNA sequence (the genome) it can show capabilities of microbes.
Sequencing microbial genomes using gene-transcription will be able to provide
microbiologists with a genetic parts list of specific pathogens, etc., but not
indicate their functionality or potential. Showing this, based on how these parts
come together to form functioning micro-organisms is the role of the emerging
science of Bio-informatics, which looks at how genes, or groups of genes, work
together to produce specific products or determine particular attributes, such as
resistance, infectivity or toxigenesis. The combination of techniques would
provide a very powerful approach to tracing the origin and risks of bacterial
strains and species and would improve the quality of exposure assessments.
Genome profiles would provide more complete functional identification than
the current 'fingerprinting' ribosomal RNA (rRNA) techniques (e.g. PCR amplification,
electrophoresis or hybridization).
13.2.4 Predictive modelling
Risk assessments can use predictive models to help reach conclusions and
propose actions. At present, model availability and use is patchy and the
usefulness of models for anticipating microbial responses in real food systems is
limited. The development, refinement and wide acceptance of predictive models
is essential for the development of risk-based food safety control systems. Such
models have a tremendous role to play in effective risk management because of
their ability to allow ¡®what if¡¯ to be examined without protracted experimental
work. Current models are strong at predicting pathogen responses under stable
or non-changing conditions (e.g. growth at steady temperatures). But validated
ways of dealing with fluctuating temperatures (or other environmental shifts), for
example a concept equivalent to the 'z' value used in death kinetics, do not
yet exist. Their development is essential if exposure assessment is to deal
realistically with changes in process or storage conditions (e.g. if a processor
wants to determine the potential increase in pathogen numbers during a process
run at different temperatures or with different storage patterns). Some models
are able to provide and use frequency distributions (e.g. Monte Carlo) rather
than single point estimates for risk assessment and this dramatically improves
their practical usefulness.
Modelling is currently limited to predictive models for growth and does not
cover the range of processing operations that affect microbial levels in food.
Models for microbial survival in foods hardly exist, because of the technical
difficulty in producing them. On the other hand simple models for the killing of
micro-organisms by heat (e.g. D and z) are well established. Although their
principles determine the fate of micro-organisms, chemical or thermo-physical
models (e.g. acidification of pH buffered foods or heat transfer and penetration
into liquid or solid foods) are not currently considered as microbiological
models. In spite of obvious theoretical shortcomings (i.e. a log linear
interpretation of population death curves that clearly exhibit 'survivor tailing'),
simple models have proved effective for predicting heat process lethality, when
integration of time and temperature is needed by processors to assess the effects
of cooking or heating. Many models are constructed and research programmes
undertaken on the premise that mathematical refinement and increasingly
accurate predictions are needed.
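The integration of time and temperature mentioned above can be illustrated with a short sketch based on the classical log-linear D/z approach. The reference temperature, z-value and time–temperature profile used here are illustrative assumptions, not parameters for any particular organism or process.

```python
def lethal_rate(temp_c: float, t_ref: float = 70.0, z: float = 7.0) -> float:
    """Lethal rate relative to the reference temperature (log-linear D/z model)."""
    return 10 ** ((temp_c - t_ref) / z)

def process_lethality(profile, t_ref: float = 70.0, z: float = 7.0) -> float:
    """Trapezoidal integration of the lethal rate over a (minutes, deg C) profile;
    returns equivalent minutes at the reference temperature (an F-value)."""
    f_value = 0.0
    for (t0, temp0), (t1, temp1) in zip(profile, profile[1:]):
        f_value += 0.5 * (lethal_rate(temp0, t_ref, z) + lethal_rate(temp1, t_ref, z)) * (t1 - t0)
    return f_value

# Hypothetical in-pack time-temperature profile: (minutes, deg C).
profile = [(0, 20), (5, 60), (10, 72), (15, 72), (20, 30)]
print(round(process_lethality(profile), 1), "equivalent minutes at 70 deg C")
# Dividing the F-value by an assumed D-value at 70 deg C would give the
# predicted log10 reductions delivered by the process.
```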
The practical needs of risk assessors are not really taken into account when
software entry procedures or data sets for 'public' models are considered. These
models are not offered to potential users with clear indications of their fitness for
practical use: for example, can a model provide a 90% probability estimate to
within ±2 minutes (suitable for lethality or cooking calculations), to within ±2
hours for process management or hygiene purposes, or to within ±8 hours, or
show the difference made by a storage temperature rise of 5 °C, for setting and
understanding trade-offs in shelf-life conditions? Improved models require a
practical explanation of
their limitations or scope of use, and in some cases predictions discarded as
insufficiently accurate by experts may prove adequate for guiding industrial
users. In addition to their practical use in exposure assessment, the public
availability of trusted models could provide informed consumers with tools for
examining the risks in new foods or new hazards in familiar foods. Hazard
characterisation would benefit from reliable tools to predict the extent and
distribution of any hazard, (especially infectious pathogens) and would ideally
incorporate epidemiological data to predict the extent and severity of outbreaks
for any hazard level in food, based on portion size and the distribution of
sensitivity in a population.
Increased availability and acceptance of predictive models by food producers
and regulatory agencies would allow businesses without large technical
resources, or informed consumers, to examine the safety of food under different
scenarios. For example, they could examine the impact of warm storage of chill-stable
foods, or the effect of recontamination followed by ambient storage of an unpreserved
food, and reach conclusions on likely harmfulness. This ability would provide a
common language for discussing and agreeing the acceptability of risks. The
establishment of national and international microbiological models or databases
for the growth, death or survival kinetics of infectious (e.g. Salmonella, Listeria,
E. coli (including O157:H7), Bacillus, Yersinia and Campylobacter) or toxigenic
(e.g. clostridia and Staphylococcus aureus) pathogens, would improve global
ability to trade in safe food and allow trading partners some insight into local
risk levels or the effectiveness of HACCP plans or process conditions.
Even producing and agreeing the application and boundaries for use of
'single species' models will be a formidable task. If microbial associations
significantly affect the behaviour, or harmfulness to humans, of target species,
models of increased complexity with a wider range of variables become
essential. Software tools for analysing the behaviour of pathogens in the food
chain or prediction of microbial inactivation and growth rates, behaviour in
different habitats or in foods of different chemical compositions need to be
developed. Ideally models would have a mechanistic base, and in practical terms
better validation in food systems would provide a basis for enhanced predictions
and indicate limits of validity. The present system of 'curve-fitting' to data
points without a mechanistic basis seems to provide effective models, subject to
the same limitations as D and z models. Current model outputs need to be
validated by limited experimentation, or survey data, on the foods where
'boundary' conditions are in question.
Commercial software is likely to become more fully available and the
increasing number of large scale (government published) risk assessments will
provide valuable benchmarks or reality checks for less ambitious studies. It is
possible to use Excel spreadsheet-based models for deterministic, fixed-value risk
assessments (e.g. scenarios covering best, worst and average conditions or
levels). Special software (e.g. @Risk) is also available that uses a stochastic (Latin
hypercube sampling) approach to probability distributions for the key variables.
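The difference between a deterministic point estimate and a stochastic treatment of the key variables can be sketched in a few lines. The example below uses simple Monte Carlo sampling in Python (rather than the Latin hypercube sampling used by packages such as @Risk); every distribution, parameter and the dose–response constant is an illustrative assumption, not survey or clinical data.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000  # simulated servings

initial_log = rng.normal(loc=-1.0, scale=1.0, size=n)        # log10 cfu/g at packing
storage_temp = rng.triangular(3.0, 5.0, 10.0, size=n)        # deg C in consumer storage
storage_days = rng.uniform(1.0, 10.0, size=n)                 # days before consumption
growth_per_day = 0.05 * np.maximum(storage_temp - 2.0, 0.0)  # log10/day, assumed

final_log = initial_log + growth_per_day * storage_days
dose = 25.0 * 10.0 ** final_log                               # cells in a 25 g serving
p_ill = 1.0 - np.exp(-1e-5 * dose)                            # assumed exponential dose-response

print("mean risk per serving:", p_ill.mean())
print("95th percentile of level at consumption (log10 cfu/g):", np.quantile(final_log, 0.95))
```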
13.2.5 Supporting illustrative or generic studies
Ideally, supporting information for risk management and communication will
come from on-site risk assessments. Where output from a direct study is not
available, illustrative or generic studies could be used to support choices
between risk management options, if alternatives to existing controls are
required. Such guides based on generic studies are already used to support the
introduction of HACCP and may help businesses with limited technical
resources to get started with risk-based safety management. Generic studies need
to provide information for hazard identification and characterisation, e.g. the
likely hazardous dose and severity of illness, with illustrative epidemiological or
outbreak data. Microbiological reference values (see below) could also be
included. A review of information for hazard identification and exposure
assessment would need to identify typical contaminants, commodities or supply
chains that are problematic and control measures that have worked (e.g.
pasteurisation conditions or hygienic manufacturing). Guidance for exposure
assessment could focus on process analysis and indicate any likely risk
determining steps and provide access to predictive models. Advice on setting
acceptable levels of risk for any hazard will help the selection of control options,
based on local circumstances and the cost-benefit ratio. Specific guidance on
identifying similarities and differences between published studies and user
requirements for supporting information would be essential to prevent their
misuse. Such guides and data-bases will become feasible as more detailed,
quantitative risk assessments become available.
To ensure the long-term effectiveness of such short cuts, regular updating with
scientific (e.g. pathogen prevalence and growth, the relative importance and
location of pathways of contamination and infection/toxigenesis) and commercial
knowledge (e.g. technological changes in food production and the supply chain,
storage and distribution) would be needed. Continuing review may lead to the
recognition of groups of process systems or sub-systems, sources of failure or
common risk determining steps. Consumer understanding and confidence may be
improved by making such studies available in publicly accessible data-bases,
grouped according to food commodity and presenting data in scenarios based on
risk factors at their 'best', 'average' or 'worst' levels. This would improve
identification of key risk determining steps and demonstrate the sensitivity of risk
level to changes in various factors, such as product cooking or storage.
13.2.6 Microbiological reference values
To promote the use of a risk-based approach, consistency in maximum accepted
risk levels to protect human health is needed for particular hazards, based on the
severity of their effects. The lack of generally accepted reference values relating
hazard level and consumer sensitivity has led to situations where food-products
have been declared unfit for human consumption because of non-quantified
demonstration of pathogen presence, or in some cases dangerous pathogen levels
have passed unchallenged. For example, the detection level for L. monocytogenes
is far below the threshold value for harm, and detection does not always indicate
hazardous food. Alternatively, levels below the detection limit can grow, given
the right conditions, to hazardous levels. Fixing reference values for various
product types would provide targets for product designers and the supply chain
and allow the integrated effect of process steps, relative to a limit to be expressed.
At present, finding accepted reference values for products, process conditions,
chances of recontamination, levels of microbial survival or reduction in numbers
after a process step is problematic, because of the range of consumer sensitivities
and the unpredictable nature of their response to a level of hazard.
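A minimal sketch of the point made above, that a level below the detection limit can still matter relative to a reference value, is given below. The starting level, growth rate and 100 cfu/g limit are illustrative assumptions for a chilled, unpreserved food that supports growth.

```python
import math

def days_to_reach(limit_log10: float, start_log10: float, growth_log10_per_day: float) -> float:
    """Days for a population to grow from a starting level to a reference value,
    assuming a constant growth rate; all levels are in log10 cfu/g."""
    if growth_log10_per_day <= 0:
        return math.inf
    return (limit_log10 - start_log10) / growth_log10_per_day

# 'Not detected in 25 g' corresponds to fewer than 1 cfu per 25 g,
# i.e. roughly -1.4 log10 cfu/g; the growth rate below is assumed.
print(round(days_to_reach(limit_log10=2.0, start_log10=-1.4, growth_log10_per_day=0.3), 1), "days")
```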
At the regulatory, or government level, the problem of reference values is
beginning to be addressed through Microbiological Food Safety Objectives
(MFSOs) that give a target level considered necessary to protect the health of
consumers. Usually this means (maximum) levels of a pathogen or toxin in a
food, and therefore leads to end-product criteria, supply chain performance
indicators or target prevalence rates/levels for pathogens. To be useful, MFSOs
must be feasible and practical and should identify the food, the hazard and the
level of protection required. At present very few quantitative values, such as
acceptable rates of illness or death, are published. In principle, governments
should be able to use them to communicate expected levels of food safety to
consumers and the food industry, and industry may in turn use them to show that
their products meet acceptable levels of risk. MFSOs do not prescribe how levels
of food safety can be achieved, and therefore they allow processors to select
appropriate, or equivalent, technologies and performance criteria that will
provide food complying with the reference value. At an international level they
could provide the basis for determining equivalence by showing that different
control measures (e.g. hygiene practices or critical control points) give
similar levels of protection.
Because there are significant differences in the occurrence of pathogens in
different foods, countries and regions, MFSOs (or, more specifically, sampling
plans, criteria, etc.) cannot be global, but must take into account national and
regional situations at both ends of the supply chain. MFSOs are important tools
in the implementation of risk management decisions, because they communicate
the level of safety that should be achieved and focus process monitoring and
limited regulatory resources. Much of the future acceptance and use of risk-
based systems relies heavily on finding MFSOs.
Generally the application of existing food hygiene principles and, in
particular, HACCP and prerequisites in food production chains will form the
basis for any control of food-poisoning micro-organisms. Decisions on control
measures should give priority to preventing risks, not just controlling them. But
even where risk-based control systems are functioning, production units still
need guidance on acceptable levels of public health protection and specific
guidance on targets (MFSOs) for any control method. This guidance could be
provided by extending the coverage of MFSOs to take account of the level of
risk or health protection accepted by consumers and risk managers and
enforceable within a country¡¯s legal and regulatory structure.
13.2.7 Microbiological knowledge gaps and requirements
Biological hazards in food include pathogenic strains of bacteria, viruses,
helminths, protozoa, algae and their toxic products (see Fig. 13.2). Pathogenic
bacteria are currently the most significant public health challenge
internationally. In many cases quantitative assessment of the risks they pose
cannot be done, because of knowledge gaps concerning their behaviour and
resistance in food. These gaps are widened by cultural (e.g. cooking and
storage), geographical (e.g. pathogen prevalence and ambient temperature)
and practical differences (e.g. agriculture, processing and storage) that affect
the chances of pathogens being in a food at consumption. These differences
will mainly be reflected in the exposure assessment, but should also be
examined by hazard characterisation, as process and other factors may also
affect virulence or pathogenicity. If there are knowledge gaps on pathogen
numbers or incidence or uncertainties about pathogenicity, a quantitative
assessment is not possible and a qualitative assessment may be the only
realistic alternative. To bring about recognition of risk analysis as the
underpinning tool for regulation of food safety control, microbiologists must
move on from qualitative risk assessment by generating the data needed for
quantitative assessments.
Certain food commodities and pathogens represent special risks in relation to
foodborne disease, related to their potential for growth and infectivity or
toxigenesis in food (e.g. L. monocytogenes and listeriosis or Staphylococcus
aureus and toxin production). Knowledge of the boundaries for survival and
multiplication or synthesis of toxins can be used to group food commodities and
supply chains relative to conditions for growth and activity.
While scientific studies increase information on hazards in food, uncertainty
and knowledge gaps continue to cause concern to decision-makers and
consumers. Only continued research can provide the necessary answers. Until
answers are available, much of what is known about hazards and used for
controlling risks is based only on partial information, with uncertainties and
assumptions factored in.

Fig. 13.2 Toxigenesis and cell fate.

Although, in future, risk managers will need to take
greater account of consumer perception of risk and cost v. benefit, it is essential
that data and scientific analysis continue to play the major role in risk
management; even though consumers may not be convinced by a purely
scientific, or technical, approach, or accept the authority of scientific opinion.
13.2.8 Risk acceptance: identifying an acceptable or tolerable level of risk
The perception and acceptance of risk by consumers differs from issue to issue.
Whereas experts consider risk in terms of estimates arrived at through scientific
methods, consumers are more value driven and manufacturers may be market
driven. In order to decide on a product, hazard or technology, consumers and
manufacturers need information on:
• the nature of the hazard;
• the likely scale of the risk;
• the urgency of the situation;
• who is at risk;
• any uncertainty surrounding the information;
• possible risk management options and likely costs.
This information should form the risk profile, and MFSOs, or a consensus on the
tolerable level of risk, may suggest priorities for action.
There is currently limited (and largely unsuccessful) experience of decision-making
that predicts value judgements concerning the 'acceptance' of risk. This is
because inappropriate weighting has been given to different factors such as the
certainty and severity of the risk; its health effect; consumers' technical
knowledge; the implications of any control measures; and whether the risk is
seen as voluntarily accepted or imposed.
risks in isolation. Usually they make choices between various courses of action.
How this is done is very poorly understood. It is obvious that low-risk
technologies and products whose risks are generally regarded by consumers as
acceptable can be sold without worrying further about consumer response to the
inherent risks. On the other hand riskier technologies may need regulation, or
public discussion, to gain acceptance. For designers and producers to have well-
defined ideas of the boundaries of acceptable risk would provide them with
clear targets and courses of action for managing technology and market
development. For regulators, reliable identification of acceptable levels of risk
would mean they could propose valid levels of protection, and allow technical
staff to concentrate on monitoring performance routinely, without having to
make case-specific decisions. For consumers, the availability of trusted,
acceptable levels of risk would provide them with a means for evaluating how
well food safety is being protected, and remove the need for them to understand
technical details.
To accept and communicate on tolerable risk levels and express risk
reduction preferences, consumers need understandable information on hazards.
They need to know the extent, severity and time course of any health effects,
attendant uncertainties in information and their potential exposure to the hazard.
These should be contained in a summary of the distribution of risks and benefits
of a product or process. Specified risks should include the direct risks, any others
that may arise from controls and the cost of prevention and control versus the
effectiveness and feasibility of the proposed prevention or control options. Such
information is not published by current risk analysis studies and is essential to
their acceptance. An overall scheme for risk-based food safety control is shown
in Fig. 13.3.
13.3 How should risk assessment processes develop?
The importance of the risk assessment process lies not only in its capacity to
estimate risks to human health, but also in its ability to organize and
communicate data on food safety and allocate responsibilities for data analysis
and control or preventative actions. At present processes are limited to the
technical appraisal of risks.
Topics for risk assessment, or problems, may be identified by consumers or
industry, single stakeholders or by collaboration between different stakeholders
and should form the major input to the risk profile (see pages 280–1).

Fig. 13.3 Overall scheme for risk-based food safety control.

Formal
procedures for problem identification and management may not be necessary if
food hygiene problems and control measures are well known. This usually
means they can be dealt with on a routine basis, or managed directly, by
applying hygiene and other guidelines or codes already developed for specific
food hazards. The strength of risk assessment lies in the systematic collection
and evaluation of information on new hazards, and in dealing with altered
conditions or supply chain changes. On the down side, risk assessment is likely
to be ineffective or misleading where there is absent or variable data,
uncertainties or knowledge gaps are recognized or alternative interpretations
of the data are scientifically plausible. If models or analogies are used, there will
be doubts about the origin and applicability of their information, and these must
be balanced by discussion of their value as an aid to decision making, when
directly relevant data is not available.
In scope, a risk-based approach to safety should address the supply chain
from farm to table. Its output should be directly useable as control measures, in
combination with prerequisite/good manufacturing practice programmes and
HACCP. Controls based on risk analysis progressively place responsibility for
safe foods with the supply chain and retailer. As a consequence, if the
approach is effectively used, the role of regulators progressively becomes
limited to providing the necessary criteria (e.g. MFSOs), monitoring, support
and direction. A key part of their role will be review and guiding revision of the
existing good manufacturing practices and criteria (e.g. specifications) used to
control risk. Non-risk-based control systems place greater responsibility for
ensuring compliance on the regulatory authorities as they are based on generic
criteria that may or may not offer the best protection of public health whilst
promoting trade. Properly validated, targeted control measures can cause
significant reductions in pathogen contamination levels in foods, but they limit
innovation and improvement and may increase the regulatory cost burden.
Because of their inflexibility (e.g. based on prescriptive requirements), locally-
enforced regulatory systems cannot always respond to changes in risk levels or
new hazards, or provide remedies for individual situations in a cost-effective
manner. Their future usefulness in international trade is likely to be limited,
because they do not take account of the needs of a global economy or
developing economies trying to export to countries with fully developed food
safety control systems. Therefore there is a major economic and regulatory need
to develop risk-based safety management systems that contain risk to tolerable
levels, are able to handle equivalence and maintain consumer confidence.
Any regulatory policy using a risk-based approach to determining product
safety needs must have consumer protection as its focus and be based on
'formal' procedures, open to review during their development and application.
Such a consumer focus may challenge existing global or national standards, or
identify regulatory criteria that consumers see as either too weak or too
restrictive. Hence a process for balancing risks and benefits in
their local context is urgently needed, to promote general acceptance of risk-
based management of microbiological risks. Industry (or regulatory agencies, if
they are responsible for the control of technology) would benefit from a
generally credible procedure that promotes acceptance of a balance of risks and
benefits, prevents unpleasant surprises and reduces the costs of regulatory
intervention. Consumers need to be involved in building such a process and its
development should leave producers with design freedom, so that trade is not
limited and innovation is promoted. This means that technical resources and
information to support exposure assessment have to be better directed, to stop
narrow or conservative approaches being used to speed decision making. On a
routine basis, application of studies to particular lines, technologies or products
with recurrent problems may provide solutions and build consumer confidence.
Use of risk analysis processes in industry and by regulators is likely to be
limited by availability of data and competent personnel. At the outset of any
microbiological risk analysis activity, competent risk assessors, risk managers
and experts need to be identified as early as possible, although the correct choice
may not be evident until knowledge gaps or risk management options have been
identified. Correct preparation of the risk profile should minimise the chances of
making incorrect choices. Depending on circumstances, risk management and
communication responsibilities may pass to different stakeholders and the
messages and outputs should be tailored to meet their needs. Public authorities
and research institutes need to take on a pivotal role in ensuring scientific
integrity, especially by ensuring (e.g. by training) that the roles of assessment
and management are separated but interactive. Where assessors and managers
have dual roles in a study, they need to be alert to conflicts of interest while they
maintain the frequent and transparent interactions needed to arrive at effective and
practical risk management decisions.
13.4 Key steps in risk assessment
Risk assessment involves four sequential and interrelated steps and in the
foreseeable future will have many limitations. Ways of recognizing and
managing these need to become better defined, to prevent the technique
becoming discredited or misused. Risk assessment needs to have a formal
preparatory step, the risk profile, to ensure it has the required outputs.
13.4.1 Risk profile
The risk profile forms the essential starting and reference point for risk analysis.
It provides a situation analysis of the microbiological food safety problem and
its context. It covers what is known and what is not known, and what is relevant
to risk management decisions. It is a checklist of the areas of risk relevant to
prioritising and setting the limits for risk analysis. Preparation of a risk profile
may be triggered by information on the presence, or an unusual level, of hazards
in food or the environment, by disease surveillance or monitoring information,
clinical or laboratory studies. In industry, alerts may come from knowledge of
production practices including process innovation, a failure to comply with
specifications, expert opinion or consumer complaints.
After identification of a problem, the risk profile needs to identify the hazard
and define the scope of the problem. It needs to outline any potential consequences
associated with courses of action and the likely consumer perception of the
problem and any solutions. It should indicate whether or not a risk assessment can
or should be carried out and whether it will improve control of the hazard (Table
13.1). Because of its importance, risk profiling needs to be developed into a more
'formal' activity within risk assessment. It needs tools to indicate, before a study is
undertaken, whether information and resources are insufficient, or whether
control can better be established by defaulting to established controls. Possible
controls may include planned inspection, end-product testing, or existing
regulatory measures. The risk profile should therefore outline a range of control
actions that are likely to be acceptable to all the stakeholders and should indicate
whether likely concerns are interim (acute) or long-term (chronic). Increased
globalisation of the trade in foodstuffs, and the regional prevalence of food-borne
pathogens, increase the challenge of providing accurate risk profiles for hazards
that require the use of technically complex controls in parts of the world that are
remote from both control agencies and customers.
13.4.2 Risk assessment
If the risk profile suggests that risk assessment can improve decision making, the
team must then collect and structure scientifically derived information on
hazards, food vehicles, the supply chain and usage habits. This basic information
must allow them to determine, through hazard identification and exposure
assessment, if they are dealing with a recognized, new or latent microbiological
public health problem or one linked to changes in technology or the food or
supply chain. Information should fully describe the hazard (e.g. growth, survival
or toxigenesis characteristics), its fate during processing and distribution (e.g.
kinetics of killing), routes of contamination (e.g. levels and incidence) and the
effects on consumers.
Table 13.1 Risk profile: scope and content
• A brief description of the situation, product or commodity involved, including the source of the problem or topic (e.g. the entire food chain, a specific foodstuff, or animal or person-to-person transmission)
• Identification and description of the hazard and supply chain involved, including concern levels of the hazard
• Who is at risk (e.g. consumer sensitivity based on disease incidence data and the type and severity of the adverse effects, indicating any consumers particularly at risk, such as the elderly, children or those whose exposure may be increased by dietary intake, socio-economic status or other characteristics), what (economic concerns) is at risk, and the potential consequences
• The proposed risk analysis team and stakeholder involvement
• Relevant regulatory or GMP tools and any microbiological food safety objectives
• Information on the tolerable level of risk
• Possible control options and the need for precaution
• Likely responsibility for implementation of microbiological risk management decisions
• Monitoring and review requirements
• Consumer perception of the risk and local considerations or restrictions
• Anticipated channels of risk communication
• The expected or acceptable distribution of risks and benefits
Better information in this area is needed if the risk assessment is to allow the
exposure assessment to outline risk determining steps in the food chain, in the
context of the food and its use. All risk assessments will include uncertain and
variable data; its importance should be communicated to all the interested,
affected and expert parties and decision-makers, so that they can take effective
and sustainable decisions and explain the basis to consumers at risk. Confronted
with variability and uncertainty in information, assessors need to recognize
when there is insufficient sound, scientific data, or resources, to produce a valid
risk estimate or allow risk managers to decide between risk-based options with
confidence. Under these conditions, either the risk profile or assessment should
indicate whether existing measures or practices could provide better
management of a risk than one derived from a risk analysis.
Technically satisfactory solutions may not be able to maintain or restore
consumer trust in a product. Therefore, as a second stage of risk assessment, Impact
Assessment (see pages 287–9) needs to consider the perception of risk by
consumers, so that any limitations (e.g. on controls) can be presented to risk
managers and communicators, improving their chances of providing effective
and acceptable control measures and ensuring risk acceptance.
Hazard identification
Correct identification and understanding of hazards needs to be based on their
association with food and consumer illness. Research to provide targeted
experimental data and support predictive models for growth/survival/elimination
and toxin production is essential and at a practical level needs to provide
knowledge for control of 'house strains' in factories. The availability of better
data will lead to the use of wider hazard descriptions and improve the
identification of key input parameters (risk determining steps) such as
potentially hazardous properties in raw materials, and changes in risk associated
with formulation, processing and product usage. Better detail will lead to better
estimates of inputs, fate of pathogens during processing and factors affecting
their eventual presence in the product; such wider descriptions are already used
in the USA for MRAs that take account of, for example, the effects of
environmental stress on microbial resistance or virulence.
Hazard characterisation
This stage should produce a qualitative and/or quantitative evaluation of the
impact of the hazard on human health. Its quality and relevance need to be
improved, so that the hazard and its effects can be more realistically described,
preventing unnecessary caution or acceptance of risk by sensitive sub-populations.
At present the focus of microbiological risk assessment is on acute disease;
chronic diseases receive little attention. With greater interest in long-term health
and a wider range of susceptibilities among consumers this must change.
Similarly current knowledge of virulence and pathogenicity is limited to a few
food vehicles and incidents. Because of food vehicle effects on harmful dose and
illness response, progress in technology should be used to improve knowledge
on the links between cell physiology and pathogenicity. The first step is to
identify critical gaps in our knowledge about hazard inputs and invasion/
transmission pathways. Dose-response models have been used in many studies
and are likely to form the future basis for risk characterisation. Buchanan et al.
(1997) have already used an exponential model combining epidemiological data
with survey data on L. monocytogenes in foods.
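For readers unfamiliar with it, the exponential model assumes that each ingested cell acts independently and causes illness with the same small probability r, so that P(illness) = 1 - exp(-r * dose). A minimal sketch in Python follows; the parameter value is purely illustrative and is not the figure fitted by Buchanan et al. (1997).
```python
import math

def exponential_dose_response(dose_cfu, r):
    """Probability of illness after ingesting dose_cfu organisms, assuming each
    cell acts independently with the same small per-cell probability r."""
    return 1.0 - math.exp(-r * dose_cfu)

# Illustrative value only; not the parameter reported by Buchanan et al. (1997).
r_example = 1e-10
for dose in (1e2, 1e4, 1e6, 1e8):
    print(f"dose {dose:.0e} cfu -> P(illness) {exponential_dose_response(dose, r_example):.2e}")
```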
Exposure assessment
Reliable exposure assessment is at the heart of risk analysis. Improved
qualitative and/or quantitative evaluation of the impact of the supply chain on
likely pathogen or toxin levels at consumption will come from data on:
• supply chain and especially process conditions (e.g. times, temperatures and
hygiene in manufacturing premises and equipment);
• the microbiological environment (e.g. pH, Aw) within raw materials, food and
packaging;
• knowledge of the kinetics and limits of pathogen behaviour;
• likely pathogen distribution and levels.
Integrating this data will show levels and likely intakes of pathogens from
different processes or foods or groups of foods. If direct data is not available or
there are requirements from process design, estimates may draw on predictions
from modelling programs or use assumptions or expert opinion. To ensure
maximum accuracy, a strategy for measurement and rules for populating and
using predictive models are needed, with clear explanations of any default
values, criteria or assumptions used for particular foods or hazards. The
reliability of models must always be considered in the context of the
requirements of the study and rules to do this need to be developed.
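As an illustration of how such data might be integrated, the following Monte Carlo sketch combines an assumed raw-material contamination level, a process reduction, storage growth and a serving size into a distribution of doses per serving; every distribution and parameter value is an assumption chosen for the example, not data from the studies discussed.
```python
import random

def simulate_doses(n=100_000):
    """Illustrative Monte Carlo exposure assessment: combine an assumed raw-material
    contamination level, a process reduction, storage growth and serving size into
    a distribution of doses (cfu per serving)."""
    doses = []
    for _ in range(n):
        log_conc = random.gauss(-1.0, 1.0)               # log10 cfu/g in raw material (assumed)
        log_reduction = random.uniform(3.0, 5.0)         # decimal reductions from processing (assumed)
        log_growth = random.uniform(0.0, 2.0)            # growth during chilled storage (assumed)
        serving_g = max(random.gauss(100.0, 20.0), 1.0)  # serving size in grams (assumed)
        doses.append(10 ** (log_conc - log_reduction + log_growth) * serving_g)
    return sorted(doses)

doses = simulate_doses()
print("median dose (cfu/serving): %.3g" % doses[len(doses) // 2])
print("95th percentile (cfu/serving): %.3g" % doses[int(0.95 * len(doses)) - 1])
```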
The use of groups of processes or unit operations (such as pasteurization or
acidification) as reference points or short cuts in a study can be illustrated by
reference to the management of continuous heat exchangers, where design
principles are used to set control parameters for product heating during
sterilization. Sterilization process conditions are related to the inactivation
kinetics of target micro-organisms (e.g. Clostridium botulinum) using product
residence time and temperature, based on (validated) assumptions concerning
heat transfer and heat penetration into the product. Products may be grouped
depending on their heating or flow characteristics or the microbial load of raw
materials, to use models and reduce the need for repetition of experimental
work. There are limitations to grouping products and using generic exposure
assessments. For example, if food products containing particles are made, the
composition or particle size may vary from batch to batch making it difficult to
predict the exact heat treatment of each portion of the product during passage
through a heat exchanger. Mathematical models exist to predict the distribution
of residence times, heat transfer and heat penetration and allow processes to be
based on the residence time of the fastest moving particle. The weakness is that
models are least accurate at their tails, where the extremes (i.e. fastest or slowest
moving product) are found and these include the most critical area for safety.
Risk assessments have been used (Braud et al., 2000) to provide insight into this
problem based on assumptions (e.g. liquid carrier rheology, minimal
temperature loss from equipment and predictable solids shape and concentration)
derived from thermophysical characterisation of the food material. Probabilistic,
rather than deterministic, risk assessments can be used to take account of this
type of uncertainty and improve process design and management.
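The following sketch illustrates the kind of calculation involved: a deterministic design point for the fastest particle, and a probabilistic alternative in which residence time and hold temperature vary. The D and z values are typical textbook-style figures for a Clostridium botulinum-type target and the distributions are assumptions for illustration; none of the numbers is taken from Braud et al. (2000).
```python
import random

# Illustrative first-order thermal-death kinetics for a C. botulinum-type target.
# D_REF and Z are typical textbook-style values, used here only for illustration.
D_REF = 0.21    # minutes for a tenfold reduction at T_REF
T_REF = 121.1   # reference temperature, degrees C
Z = 10.0        # temperature change giving a tenfold change in D, degrees C

def log_reductions(hold_temp_c, residence_time_min):
    """Decimal reductions delivered to the fastest particle in the holding section."""
    d_at_temp = D_REF * 10 ** ((T_REF - hold_temp_c) / Z)
    return residence_time_min / d_at_temp

# Deterministic design point: fastest particle assumed to spend 3 min at 121.1 degrees C.
print("design-point log reductions: %.1f" % log_reductions(121.1, 3.0))

# Probabilistic view: residence time and hold temperature vary between batches (assumed).
samples = sorted(log_reductions(random.gauss(121.1, 0.5), max(random.gauss(3.0, 0.3), 0.1))
                 for _ in range(50_000))
print("5th percentile log reductions: %.1f" % samples[int(0.05 * len(samples))])
```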
Risk characterisation
The aim is to integrate information from the first three assessment steps into an
estimate of adverse effects in target consumers. Risk characterisation may
address a current situation and suggest a range of reasonable options or
alternatives. To improve consumer understanding, risk characterisation may
present a comparison of the current microbiological risk with other health risks.
Scenarios may improve the reliability and understanding of uncertain
information, e.g. a span of sensitivities derived from health statistics or process
treatments derived from models, including for example temperature changes
associated with retail practices.
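Continuing the illustrative sketches above, the integration step can be pictured as applying a dose-response model to a simulated exposure distribution and scaling by the number of servings; all values here are assumptions for illustration only.
```python
import math
import random

def p_ill(dose_cfu, r=1e-10):             # illustrative exponential dose-response
    return 1.0 - math.exp(-r * dose_cfu)

def dose_per_serving():
    log_conc = random.gauss(0.0, 1.0)     # log10 cfu/g at consumption (assumed)
    serving_g = max(random.gauss(100.0, 20.0), 1.0)
    return 10 ** log_conc * serving_g

n = 200_000
mean_risk = sum(p_ill(dose_per_serving()) for _ in range(n)) / n
servings_per_year = 50_000_000            # assumed number of servings consumed
print("mean risk per serving:   %.2e" % mean_risk)
print("expected cases per year: %.1f" % (mean_risk * servings_per_year))
```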
An extended framework for risk analysis is needed for many reasons. The
current Codex version was developed for regulatory purposes, not for day-to-day
supply chain use. Current activities need to be undertaken in a more practical
manner and with an aim of widening use and accessibility and improving
interaction between the sub-activities.
13.5 Risk acceptance
An additional activity, risk acceptance, is needed to improve the ability of risk
analysis to provide messages that maintain consumer choice and provide working
information for risk managers, so that they can meet consumer needs based on
scientific data. 'Risk acceptance' would structure and take account of the factors
valued by stakeholders for identified hazards and try to outline the boundary
between acceptance and rejection of a risk within the terms of reference of the
risk profile (Fig. 13.4). A balancing stage in risk acceptance should identify
acceptable trade-off and ensure that this information is incorporated into a
credible control process. This is beyond what can be captured by current
technical summaries of risks and benefits and should form the new discipline of
risk acceptance. Currently, risk assessment only indirectly provides information
leading to risk acceptance. In future the credibility of information providers may
need to be established and the information needs of consumers better understood
within risk analysis. The activity is implied by current schemes, but because of its
sensitivity is not made explicit and hence not really managed.
At the beginning of the study, the risk manager has to provide the risk
assessor with a clear and unbiased brief for collection of data, set out in the risk
profile. Risk assessment is currently limited to an analysis of technical data and
scientific uncertainty. In future, its output must meet the information needs of
risk managers more closely and this must be usable by risk communicators to
retain consumer confidence. Effective interaction is needed between risk
assessors and risk managers and this should be contained in the risk profile. Risk
communication needs a strong 2-way interaction with risk acceptance to ensure
that the latter receives the necessary or desired information from the risk
assessment, and in turn allows the risk communicator to profile responses to
meet consumer demands. This information flow should re-focus, re-direct or
extend information gathering by risk assessment (Fig. 13.5).
To derive operational principles for risk acceptance, it is important that the
general rules and information sources used by consumers for screening products
and technologies are identified and used to provide guidance for producers and
regulators on risk acceptance for any topic. The core of any safety determination
by consumers is their value judgement of the 'acceptance' of a risk. Expression
of 'technical' risk in terms of a performance standard, outlining the cost-benefit
trade-offs of a technology, may give more informed consumer judgement than
use of a technical standard specifying only product and operational details.
Properly prepared, a performance standard would allow product and process
risks to be evaluated by individual consumers and eventually produce a set of
general rules for screening innovations or hazards, so that producers and
innovators could eliminate from the risk assessment process technologies that
are seen to pose a negligible risk. In the remaining cases, an inventory of costs
and benefits would be prepared, thereby characterizing acceptable trade-offs.
Such a system needs to contain a means for adjusting the balance statement to
accommodate additional factors, for example those pertinent to special groups of
consumers.
Fig. 13.4 Risk analysis outcomes: messages to consumers and actions for providers.
13.5.1 Finding acceptable risk standards and trade-offs – screening,
balancing and adjusting
Many different approaches to understanding consumer perception of risks and
hazards have been tried. These include gathering data from those likely to be
affected; participatory research with interested parties and consumers to plan,
analyse, and react to risk profiles or the findings of a risk assessment. To support
decision making, interested parties, such as the media, have been involved in
creating adapted or alternative plans to assess the potential impact of different
courses of action.
To encourage consumers to define acceptable trade-offs, they need to
understand how risks are managed and how tolerable levels of risk that meet their
needs are fixed. To do this, they need a credible and orderly procedure for
evaluating risk management options (cost/benefit) against individual topics.
Availability of such a capability would affect industry, regulatory agencies,
consumers and public interest organisations. But consumer trust can only be improved
by giving them greater insight into safety systems and better involvement in
regulatory processes. This approach may improve acceptance of novel
technologies, because consumers would perceive that they have a strong role in
determination of safety criteria. Therefore regulators and industry should move
from only offering 'technical' solutions to risk management and communication,
within a strict risk/benefit context, towards factoring in risk perception.
Fig. 13.5 Information flows in risk acceptance.
A methodology for finding acceptable levels of risk is needed and within it,
procedures for three activities are required: screening, balancing and adjustment.
Screening would establish whether a consumer is aware of, or may be exposed to, a
new hazard or altered level of risk by a supply chain problem or change or new
technology. Balancing would identify acceptable trade-offs based on perception of
the change and adjustment would incorporate additional factors needed to ensure
perception of a safe control or regulatory process, within the framework of the
risk-benefit equation. These stages would normally provide levels of protection in
excess of that needed for strictly public health purposes. With suitable information
and trust, consumers will readily accept products where they accept that these pose a
negligible risk; in the remaining cases, they balance risks and benefits to reach
acceptable trade-offs or rejections. In some circumstances they may accept a given
risk once they are aware of it (e.g. cheese from unpasteurized milk). But they may
not accept a similar risk if it challenges their values (e.g. beef or beef on the bone)
or acceptance may be adjusted to accommodate personal factors (e.g. consuming
raw or undercooked meat). To understand how this is done and make use of it in
developing products and technologies with a high chance of consumer acceptance
is the function of the impact assessment part of risk acceptance, which quantifies
consumers' willingness to trade costs and benefits.
If scientific knowledge of the hazard is insufficient or there is uncertainty
over interpretation of data, risk managers may apply tighter requirements (a
precautionary approach) until better information is available. This is an essential
element of risk analysis. It is important at all steps that provide an input to the
precautionary measures, especially screening, balancing and adjustment, and
their use may lead to a formal requirement for pre-market approval or enhanced
QA procedures. Requirements may be relaxed as safety data on new ingredients
or technologies become available. Where data is insufficient, additional
information should be sought before precautions are relaxed; principles for
making such decisions are not yet available.
13.5.2 Impact assessments
Impact assessments are analytical techniques that provide risk managers with
information to identify and anticipate potential hazards and maximise the effectiveness
of risk management actions. To help risk managers decide effectively between
alternative means of risk reduction, information from other types of assessment
(e.g. social and economic) in addition to the strictly microbiological may be
needed, each one being focused on particular aspects of the topic and included
under risk acceptance. For example, changes in sourcing, preservation or
technology may have additional economic and social effects that affect
acceptability to consumers. A general analytical methodology (similar to
microbiological risk assessment) is not likely to be suitable for examining all
these impacts of a decision.
Conflicts arising from different value systems or interpretations of scientific
data are inherent in assessments and may require trade-offs or compromises to
reach an action plan; contributory factors need to be given priorities. Tools are
needed to provide these priorities and balance risks, costs and benefits in the
short and long term. For example, in food safety matters, some consumers may
have a technically unjustifiable preference for decentralized, simple technology
versus centralized, complex corporate technology, although from a technical
perspective simple technology may be riskier. An impact assessment tool should
expose such counter-pressures, to show where choices cannot be based only on
technical inputs, but have to account for consumer and producer values. Inputs
to impact assessments should not be made by experts alone, but should include
consumer views; a process for doing this does not currently exist as part of
the risk analysis framework. Although some manufacturing and other technical
staff may not be directly engaged in conducting either impact or risk
assessments, they need to know what sound assessments look like, so that they
can make use of the outputs or contribute to impact assessments when needed.
Social impact assessments
A procedure to analyse social impact is needed to determine the social acceptability
of products and technologies and produce solutions that are consistent with
consumer values. This is necessary because many consumers recognize that
although the negative effects of technological change cannot always be prevented,
they need to be balanced against benefits. Impact assessment could provide a tool
for doing this. Social impact assessments need to assemble information on the
likely effects of new products or technologies, including showing options (e.g.
alternative approaches or technologies) for preventing or minimizing adverse
effects and demonstrating how well consumers are protected. Those having an
interest in a particular product may be unclear about the potential problems, thus
descriptions of it may require different techniques (e.g. cost-benefit analysis) or
reworking information (e.g. as trends, historical surveys or analogy) to meet their
needs. This extension to the assessment process would provide a basis for solutions that
provide the optimum balance between protection and acceptance. Involvement of
consumers leads to the practical problem of expressing risks and benefits.
Accessibility may be improved by presenting analyses of the risk/cost/benefit
distribution as scenarios. These analyses should be illustrated by various groups
including those at greatest risk (to show their benefits), those with the least benefit
(to estimate their risks) and a group with an intermediate level of both. In this way
individuals, or sub-groups, may be identified whose sensitivity requires more
detailed analysis to propose an optimal solution. Accepted changes or risks should
be acceptable to every consumer based on best estimates of technological effects
and the values used by reasonable individuals.
Some current methods of impact assessment are based on what people say
about their values, but outcomes (e.g. product or technology acceptance) rely on
what people actually do. No methodology is likely to be perfect; the most any can
do is provide insight based on strengths, weaknesses and drivers. The better the
framework of consumer response to hazards is understood, the fewer
issues will have to be addressed on an ad hoc basis.
Economic impact assessments
To date major users of economic impact assessments have been businesses and
governments, who typically ask whether any proposal is workable or feasible,
who is affected by it and what is the cost/benefit ratio. These considerations
should lead to selection from alternatives. Currently assessment processes to
take account of economic considerations do not exist for MRA, although they
are a key part of risk acceptance. The process is likely to be based on supply
chain cost data collection and analysis, followed by screening, balancing and
adjusting to provide optimal, least cost solutions.
13.6 The outputs of risk assessment: risk management and
communication
The outputs of impact assessments should accompany risk characterisations and
allow relevant information to be fully communicated and/or explained to users
and consumers before risk management actions are undertaken.
13.6.1 Risk management and risk/benefit information
Risk management weighs alternative courses of action, currently based on the
results of a risk assessment, and then selects and implements suitable
preventative or control options, whilst contending with the uncertainties left
by risk and benefit assessments. The future development and use of a process to
determine risk acceptance would improve the reliability of these decisions.
From an industry view, the current three tools are biased towards
regulatory and policy use; to be usable at a more practical level, risk
management needs to be reinforced with a group of tools to ensure that risks are
managed at tolerable cost whilst satisfying the expectations of customers.
This optimisation would be helped by the development of explicit regulatory
criteria for acceptable safety performance of specific or novel technologies, for use
after impact assessments have determined their fate. To do this effectively,
questions raised by impact assessments should be resolved early on in any study, so
that risk management only focuses on application of the outcome and technical
requirements to meet the criteria. Benefits and costs of reducing risk need to be
compared, from both operational and consumer perspectives, so that choices can
be made and risk reduction measures implemented – right first time. Quantification
of risk and benefit, without balancing and adjusting may in some circumstances be
enough to demonstrate acceptance or provide practical guidance. Consumers may
eventually accept replacing case-by-case assessments with generic assessments,
especially if the science is more convincing at an aggregated level and the value of
the approach has been validated by everyday examples.
Involvement of consumers in the risk assessment and risk management
process through the development of open transparent procedures for risk
acceptance may make the overall process of risk analysis more complex.
Decision-making within this extended framework would take greater account of
the needs of all stakeholders (e.g. consumer perception of the problem, the
distribution of risks and benefits, expressed preferences for risk reduction and
the cost of prevention and control versus effectiveness of risk reduction
measures). But progress would provide certainty for risk managers and decision-
makers struggling over recommendations.
13.6.2 Selection of options
When risk management options are presented, irrespective of perception, the
primary driver for decision-making should be the protection of human health
(level of protection), based on scientific knowledge of the microbiological
hazards. Technical and economic information on the hazard will have been
assessed to show the effects of primary production and processing technology,
inspection, and sampling methods on risk level. Any course of action will have
technical and economic implications for the operations involved and these will
determine its feasibility in a particular situation. The best solutions will provide
cost-effective means of limiting risk, with benefits and costs reasonably related,
based on the tolerable level of risk and preferences expressed by consumers.
End-product testing alone cannot ensure effective control of food safety,
because it cannot assure the absence of pathogens. Increased levels of testing
should not be a recognized risk management outcome, because the low levels
and the non-uniform distribution of pathogens in most foods make it statistically
impossible for end-product testing to ensure low levels of risk. These are only
obtainable from correct product/process design and process control. However
microbiological testing can be used to validate control measures (e.g. HACCP)
and to verify, on a day to day basis, their consistent implementation and
effectiveness. Where HACCP has not been employed, or there are production
problems or limited access to verification information (e.g. from suppliers),
testing has a useful role in risk management. The development of protocols and
'generic' schemes for the validation and verification of risk-based systems
would increase their accessibility to small businesses and improve their
acceptability to consumers, if part of the validation process involved showing
their relation to an impact assessment (see below).
13.6.3 Monitoring and review
Risk management decisions will lead to control or preventative measures. Their
implementation, effectiveness and relevance should be monitored in relation to
the incidence and level of linked food-borne diseases and should be reviewed, as
new information becomes available. In addition to regular verification, new
information should trigger the review of scientific (e.g. epidemiological studies,
knowledge of the virulence of the organism or its prevalence and level in foods),
supply chain (e.g. changes in food technology or the supply chain) or consumer
information (e.g. changes in intake patterns, product use or the extent of
sensitive populations). This may be compared with and used to update the risk
profile and would be a valuable addition to any web-based system containing
hazard and risk data along with generic studies.
Producers and consumers should contribute to the development of guidelines
for review, covering who will conduct them, what will be evaluated and the techniques to
be used. The review should be based on performance criteria to judge success in
implementing risk reduction plans and reducing risk, for example by reference
to the incidence and nature of product recalls and consumer complaints, or
information on the effectiveness of HACCP or pre-requisite programs.
Governments may also carry out reviews to support their responsibilities for
setting objectives, including the availability of information relating to the food-
borne pathogen(s) targeted for control measures and the effectiveness of the
regulatory control programmes. Such reviews need to ensure that additional
information from disease surveillance and research programmes is fed through
to existing studies to ensure that uncertainties and knowledge gaps are
progressively reduced. Both producers and governments should review costs and
benefits and promote discussion with consumers and producers; results may
warrant changing parts of the risk assessment or risk management activities to
ensure that on-going measures remain effective and are perceived as such.
13.6.4 Risk communication
Risk communication is the final element of the risk analysis process and an
integral part of the other elements. Risk communication should provide consumers
with an effective representation of risks and controls, including the probability of
realization and the consequences (hazard). Communicating the results of a risk
assessment and interaction with risk management may be done as part of risk
acceptance, to answer questions from decision-makers. The questions will often
be context dependent, e.g. probability of harm in different markets, sensitivity of
incidence to the level of hazard in raw materials, risks per typical product portion
and effects of product use. Providing clear answers is a function of the impact
assessments and the process of screening, balancing and adjustment.
Clearly risk communication has dual roles. Firstly, to provide consumers with
information from the expert scientific review of the hazard and assessment of
risks, including information relevant for specific target groups, such as infants or
the elderly. For any hazard, this information should allow consumers at risk to
exercise their own options to achieve preferred levels of protection. Secondly, it
needs to provide producers and regulators with the information specific for risk
management. The outputs needed by consumers and users, especially decision-
makers, need to be identified from the risk profile onwards. However risk
assessment is structured, dealing with the uncertainty and variability that result from
lack of information or the availability of controls remains a knotty problem.
The Codex Alimentarius position on communication in risk analysis is too
narrow for practical use; it needs to develop from being 'an interactive process
of exchange of information and opinion on risk among risk assessors, risk
managers and other interested parties' (Codex Alimentarius, 1997), towards the
broader definition from the US National Research Council: 'an interactive
process of exchange of information and opinion among individuals, groups and
institutions which involves multiple messages about the nature of risk and other
messages, not strictly about risk, that express concerns, opinions or reactions to
risk messages or to legal and institutional arrangements for risk management'
(US National Research Council, 1996).
13.7 Conclusion
Orderly safety management and regulation against accepted criteria requires
well-specified, logically defensible procedures that are perceived as acceptable
by all the parties involved. Without them, control is inconsistent and
unpredictable, failing to provide either the level of protection that consumers
expect or the stable environment needed by producers. The aim of
microbiological risk analysis in an extended form should be to provide a global
standard for identification of hazards and management of the acceptance of risks
associated with foods for different groups of consumers. Risk assessment is an
essential part of risk analysis because it specifies technical risks for pathogenic
micro-organisms and foods, on the basis of sound science, combining process
and scientific data. Its further development should deal with perceived risks to
increase its acceptance as a risk management tool among consumers. This
concept is still in its infancy, but needs to be developed globally.
13.8 References
ANON. (1996) Principles and guidelines for the application of microbiological
risk assessment. Alinorm 96/10, Codex Alimentarius Commission, Rome.
BRAUD, L.M., CASTELL-PEREZ, M.E. and MATLOCK, M.D. (2000) 'Risk-based
design of aseptic processing of heterogeneous food products', Risk
Analysis 20, 405–12.
BUCHANAN, R.L., DAMERT, W.G., WHITING, R.C. and VAN SCHOTHORST, M. (1997)
'Use of epidemiologic and food survey data to estimate a purposefully
conservative dose-response relationship for Listeria monocytogenes levels
and incidence of listeriosis', J. Food Protection 60, 918–22.
CODEX ALIMENTARIUS COMMISSION (1997) Joint FAO/WHO Expert
Consultation on the Application of Risk Management to Food Safety
Matters. FAO/WHO, Rome.
US NATIONAL RESEARCH COUNCIL (1996) Understanding Risk: Informing
Decisions in a Democratic Society. P.C. Stern and H.V. Fineberg (eds),
Committee on Risk Characterisation, National Research Council.
National Academy Press, Washington DC.
Index
acceptable daily intakes (ADIs) 24
acceptable level of risk see appropriate
level of protection (ALOP)
acceptance of risk see risk acceptance
acidity 80¨C1
adjustment 287
Adulteration of Food Act 1860 218
adverse health effects 55¨C6
alar 190
ALARA principle (as low as reasonably
achievable) 27, 176
algebra 146¨C7
animals 117
small animal studies 84, 85¨C6
extrapolation to humans 93
Appert, Nicholas 7
apples 190
appropriate level of protection (ALOP)
(TLR) 27, 30¨C1, 177, 178,
179¨C80, 239¨C40, 263, 277¨C8
@RISK 203¨C5
attributes sampling plans 222, 238¨C9
audience 163¨C4
Australia 195, 236
E. coli process criteria 217, 233
Bacillus cereus 5, 31¨C3
bacteria 70, 275¨C6
see also pathogens
'bad bug book' 69, 195, 196
balancing 287
Bayesian statistics 147
benchmarking 176
Beta-binomial function 89
Beta-Poisson function 89
bio-informatics 271
biomarkers 84, 86
blood products 9
bongkrek 10
botulinum cook (bot cook) 189, 216¨C17,
234
Campden and Chorleywood Food
Research Association (CCFRA)
250, 251, 258¨C61
Campylobacter 33¨C4, 160
canned foods 11¨C13, 118¨C21, 189
case-control studies 270¨C1
Center for Disease Control and
Prevention (CDC) 69, 195, 196
Center for Food Safety and Applied
Nutrition 195, 196
cheese 182
chemicals 27¨C8
Chicago Department of Public Health
(CDPH) 96
chicken products 150
see also poultry meat
clearing houses 195¨C6
clinical trials 83¨C4, 92¨C3, 270¨C1
Clostridium botulinum 11¨C13, 30¨C1
botulinum cook 189, 216¨C17, 234
Clostridium perfringens 5
Codex Alimentarius Commission (CAC)
2, 17, 20, 24, 25, 78, 215
Codex Committee on Food Hygiene 2
Codex Committees 24, 25, 26
General Principles of Food Hygiene
184
HACCP 250, 251
microbiological criteria 185¨C6, 215¨C16,
219¨C20
Principles and Guidelines for the
Conduct of MRA 22, 49¨C50
qualitative risk assessment 143
risk 157
risk communication 155, 166
risk management 177¨C8
cold smoked fish 234¨C6, 247
Communicable Diseases Centre 195
communication
concept 163¨C6
risk communication see risk
communication
comparative MRAs 143
computing skills 134
confidence intervals 140, 141, 201
consumers
information on consumption and 59,
68, 110¨C11, 117
risk perception 158¨C63, 190, 286
contagious flesh, selling of 9
contamination levels 73, 238¨C9
content sites 195, 196
continuous heat exchangers 283¨C4
control measures 181, 267¨C8
see also food safety management
strategies
control points 232¨C3
see also critical control points
cooked ham 118¨C21
corrective action plan 250, 255, 261
Coxiella burnetii 13¨C14
criteria see microbiological criteria;
performance criteria; process
criteria
critical control points (CCPs) 17, 18, 70,
183, 250, 254, 260
critical limits 184, 250, 254, 260
D-value 74, 116¨C17
data see information
databases, linked system of 194, 209¨C10
death of pathogens 73¨C4, 272
decision support 203, 204
decision support systems 207¨C9, 210
decision trees 203, 204, 254
default hazard characterization 237
Department of Health 167¨C9
design weakness 256¨C7
deterministic (mechanistic) models 60,
87¨C8, 145, 201¨C3
Disability Adjusted Life Years (DALY)
55, 229¨C31
disease triangle 54, 54¨C5, 79¨C81
distribution functions 88¨C9
distribution of microbiological hazards
72¨C3
DMFit 205
documentation 250, 255¨C6, 261
dose–response relationship 2, 14, 36, 50,
54¨C7, 77¨C99, 283
characters of pathogen, host and
environment 54¨C5
disease triangle 54, 54¨C5, 79¨C81
dose–response analysis 56–7
evaluation of adverse health effects
55¨C6
extrapolation from high doses to low
doses 93
hazard characterization v. 78¨C9
and microbiological criteria 228¨C9, 237
modelling dose–response relationships
56¨C7, 86¨C90, 91
theories of infection 81¨C2
types of dose–response data 83–6
see also hazard characterization
dying, risk of 137, 158
dynamic models 197
economic impact assessments 289
egalitarians 164
eggs and egg products 150, 189
empirical models 87¨C8, 197¨C201
end-product testing 14¨C16, 270¨C1, 290
Enterobacteriaceae 189
epidemiological studies 84, 84¨C5, 270¨C1
equivalence 261
Escherichia coli (E. coli) 91, 110, 150¨C1,
217, 232¨C3, 233
European Union (EU) 215¨C16, 249
Eurosurveillance system 196
experimental design 198
expert consultations 21¨C3, 24, 26, 155¨C6,
167, 169
expert opinion 84, 114, 152
expert systems 147
experts 117
risk perception 161¨C3
exponential function 89
exposure assessment 2, 14, 35¨C6, 50,
57¨C60, 100¨C26, 283¨C4
building up supply chain data for
109¨C11
characterization of sources, routes of
exposure and pathogen occurrence
58¨C9
compared with hazard indentification
65
and HACCP 103¨C5, 109¨C10
hazard characterization, risk
characterization and 135¨C6
integration with other MRA stages in
risk characterization 128, 130, 133
output 117¨C22
role in MRA 101¨C5
scope 114¨C15
sources of information 111¨C14
stages in 105¨C8
team for 109
types of data 114¨C17
use of models 35¨C6, 59¨C60
extrapolation 93
factor analysis 160¨C1, 162, 162¨C3
Failure, Mode and Effect Analysis
(FMEA) 248
fatalists 164
first order exponential growth model 197
fish
cold smoked fish 234¨C6, 247
pufferfish 9¨C10
unsold 9
flow diagrams 253, 260
Food and Agriculture Organisation (FAO)
20, 24, 25, 78, 175, 195, 196
joint expert consultations 21¨C3, 24, 26,
155¨C6, 167, 169
Food Design Support System 207
Food and Drug Administration (FDA) 96,
147, 148¨C50, 249
'bad bug book' 69, 195, 196
food matrix effects 54¨C5, 80¨C1
Food MicroModel 206
Food and Nutrition Alliance 163¨C4
food processing skills 135
food safety 183, 268¨C9
historical background 6¨C7
trends in food safety control 29¨C34
food safety control scheme 278
Food Safety Information Center 69
food safety management strategies
181¨C90
definitions and equation 181¨C2
microbiological criteria 185¨C90
performance criteria 181, 183¨C4
performance standards 181¨C2, 182¨C3
process criteria 181, 184
product criteria 181, 184¨C5
food safety objectives (FSOs) 3, 37, 177,
181, 261¨C3, 275
establishing 177¨C80
and microbiological criteria 185, 186,
189¨C90, 226¨C7
performance standards 182¨C3
food safety risk assessment tool 208¨C9
food safety standards 23¨C9, 29¨C30
setting 23¨C6
trends in 27¨C9
food safety systems 7¨C23
establishing process criteria 11¨C14
GMP 8, 16, 29
HACCP see HACCP
microbiological testing 14¨C16
'precautionary' principle 10–11
predictive modelling 18¨C20
'prohibition' principle 9–10
QRA 8, 20¨C3
Food Standards Agency (FSA) 159,
257¨C8
FORECAST system 206¨C7
frozen products 118¨C21
fugu 9¨C10
future of MRA 266¨C92
development of risk assessment
processes 278¨C80
information for risk assessment 269¨C78
key steps in risk assessment 280¨C4
risk acceptance 284¨C9
risk management and communication
289¨C92
gastrin 81
General Agreement on Tariffs and Trade
(GATT) 1, 20, 214, 226
see also World Trade Organisation
generic studies 273¨C4
genomics 271
Gompertz model 116, 196¨C7
good hygienic practice (GHP) 176, 179,
180, 227
good manufacturing practice (GMP) 8,
16, 29
governmental MRA 148¨C50, 151¨C2, 176¨C7
ground beef hamburgers 150¨C1, 232¨C3
growth of pathogens 71¨C2, 73
growth curves 116, 198¨C200
guidelines, microbiological 216
HACCP (Hazard Analysis and Critical
Control Point) system 3, 248¨C65
common failures 257
exposure assessment and 103¨C5, 109¨C10
food safety control 29, 33¨C4
future relationship with MRA 261¨C3
implementation problems 256¨C8
interaction between HACCP systems
and MRA 258¨C61
international guidance on
implementation 250¨C6
introduction of 8, 17¨C18
legal requirements 249¨C50
and microbiological criteria 215, 218,
226, 227, 231
principles 18, 250
risk management 176, 179, 180
team 252, 259
ham, cooked 118¨C21
hamburgers 150¨C1, 232¨C3
hazard analysis
HACCP system 250, 253¨C4, 260, 267
industrial 176
hazard characterization 2, 5, 50, 54¨C7,
65, 77¨C99, 282¨C3
default 237
disease triangle 54, 54¨C5, 79¨C81
v. dose–response 78–9
evaluation of adverse health effects
55¨C6
exposure assessment, risk
characterization and 135¨C6
future trends 94¨C6
integration with other stages of MRA in
risk characterization 128, 130, 133
key issues 77¨C82
modelling dose–response relationships
56¨C7, 86¨C90, 91
multidimensional 139
problems in 90¨C4
research needs 95
theories of infection 81¨C2
types of dose–response data 83–6
see also dose¨Cresponse relationship
hazard identification 2, 5, 13, 49, 53¨C4,
64¨C76, 282
changes in microbial hazards 73¨C5
conducting 66¨C7
coverage 65¨C6
decision support in 203, 204
defining 64¨C5
integrating with other stages of MRA
in risk characterization 128, 130,
133
key information 67¨C9
microbial hazards 70¨C2
origin and distribution of microbial
hazards 72¨C3
other biological hazards 75
scope 66
tools 69¨C70
hazards 70¨C2, 266
defining for HACCP 17¨C18
severity 71, 187¨C8
dose and 228¨C9
sampling plans 188, 225
health effects, adverse 55¨C6
heat exchangers 283¨C4
heat treatment 7, 74, 232, 272
botulinum cook 189, 216¨C17, 234
D and z 74, 116¨C17
process criteria 11¨C14
see also temperature
heuristic knowledge 194
hierarchists 164
host factors 54, 54¨C5, 79¨C80
human volunteer feeding studies 83¨C4,
92¨C3, 270¨C1
humans see people
illustrative studies 273¨C4
ILSI Risk Science Institute 50¨C1
impact assessments 282, 287¨C9
in vitro studies 84, 86
incompatibility 133
indicator organisms 15
individualists 164
industrial MRA 151¨C2, 176¨C7
industry records 180
infection, theories of 81¨C2, 179, 227¨C8
infectious disease triangle 54, 54¨C5,
79¨C81
infectious dose 79
infectious microorganisms 79
inference engine 210
influence diagram 234, 235
information
for exposure assessment 111¨C17
sources 111¨C14
types of data 114¨C17
flows in risk acceptance 285, 286
for hazard identification 67¨C9
internet sites 69, 195¨C6
linked system of databases 194, 209¨C10
microbiological information 67¨C8, 103,
106, 116¨C17
quality 69, 112¨C14
for risk assessment 269¨C78
data needs 269¨C70
knowledge gaps 275¨C7
risk characterization 152
sources for MRA 193¨C4
inputs, appropriate 133
intensive outbreak investigations protocol
96
interactive food safety risk assessment
tool 208¨C9
International Commission on
Microbiological Specifications for
Foods (ICMSF) 15, 17, 35, 261
sampling plans 186¨C8, 221, 222, 225
international food safety standards see
food safety standards
internet information sites 69, 195¨C6
iso-risks 229¨C31
Japan 9¨C10, 85
joint expert consultations 21¨C3, 24, 26,
155¨C6, 167, 169
Joint FAO/WHO Expert Committee on
Food Additives (JECFA) 20, 25¨C7
Joint FAO/WHO Expert Meeting on
Microbial Risk Assessment
(JEMRA) 78
Joint FAO/WHO Meeting on Pesticide
Residues (JMPR) 24, 25, 26
kinetic models 116
knowledge gaps 275¨C7
Koch, Robert 7, 14
Level of Protection (LOP) 262¨C3
see also appropriate level of protection
(ALOP)
linked system of databases 194, 209¨C10
Listeria monocytogenes 5, 110, 182, 185,
208, 226, 274
dose and hazard severity 228¨C9
dose–response models 90, 91
exposure assessment 118¨C21
FSO 179, 189, 234
process criteria 234¨C6
risk assessment for RTE foods 148,
191
listeriosis 148, 224, 229
low-acid canned foods 11¨C13, 118¨C21,
189
management neglect 257
mandatory criteria 215
mandatory guidelines 216
mathematical models see modelling/
models
mathematical skills 134
matrix effects, food 54¨C5, 80¨C1
maximal residue levels (MRLs) 26
mechanistic (deterministic) models 60,
87¨C8, 145, 201¨C3
media 165¨C6
medical skills 134
message 164¨C5
microbiological criteria 3, 15, 16,
185¨C90, 214¨C47
dose and hazard severity 228¨C9
early use of 218¨C19
'equivalent' food safety risk 229–31
and food safety assurance 226¨C7
and FSOs 185, 186, 189¨C90, 226¨C7
future trends 239¨C40
key issues in use of 217¨C20
principles for establishing 219¨C20
prioritising risk management actions
236
sampling plans 185¨C8, 221¨C5
specification of 220
types of 215¨C16
using MRA to set 227¨C31
using MRA to develop performance
and process criteria 231¨C6
using in risk assessments 236¨C9
microbiological guidelines 216
microbiological information 67¨C8, 103,
106, 116¨C17
microbiological models see modelling/
models
microbiological reference values 274¨C5
microbiological risk assessment (MRA) 2,
3, 13¨C14, 47¨C63
current activities 22¨C3
current issues 34¨C8
exposure assessment see exposure
assessment
future of see future of MRA
hazard characterization see hazard
characterization; dose–response
relationship
hazard identification see hazard
identification
interaction between HACCP systems
and 258¨C61
key steps 49¨C53, 280¨C4
limitations 49
principles 51¨C2
risk characterization see risk
characterization
safety 268¨C9
tools for see tools for MRA
trends in food safety control 29¨C34
usefulness 48
microbiological skills 134
microbiological specifications 216
microbiological standards 215¨C16
microbiological testing 14¨C16, 270¨C1,
290
Microfit 205
MIDAS system 207
milk 13¨C14, 31¨C3
minimum infective dose (MID) (threshold
model) 81¨C2, 179, 227¨C8
modelling/models 52, 73
deterministic (mechanistic) models 60,
87¨C8, 145, 201¨C3
dose–response relationships 56–7,
86¨C90, 91
empirical models 87¨C8, 197¨C201
exposure assessment 35¨C6, 59¨C60
Monte Carlo 35¨C6, 60, 113, 145¨C6,
205
off-the-shelf models 205¨C7
predictive see predictive modelling
process and performance criteria
234¨C6, 246¨C7
risk characterization 144¨C51
selection of models 88
moderate hazards 187
molecular biological techniques 69¨C70
monitoring 290¨C1
HACCP 250, 254, 260
Monte Carlo modelling 35¨C6, 60, 113,
145¨C6, 205
Mortimore, S. 250, 251
most probable number (MPN) 238
multiple infection sites 82
multiple outbreak investigations 85
National Advisory Committee on
Microbiological Criteria for Foods
(NACMCF) 250
neural networks 147
non-compliance 235¨C6, 237
non-threshold model (single-cell) 81–2,
179, 228
objectives 130
off-the-shelf models 205¨C7
Office International des Epizooties (OIE)
51
older people 10¨C11
operating characteristic curve (OC curve)
223¨C4
options, selection of 290
outbreak investigations 96, 117
outline MRA 132, 144
outputs
exposure assessment 117¨C22
risk assessment 289¨C92
risk characterization 131¨C2, 142¨C7
outrage factors 158¨C61
oysters 149
parasites 75
Pasteur, Louis 7
pasteurisation 7, 13¨C14, 232
pasteurised milk 31¨C3
pâté 191
Pathogen Modelling Program 69, 205¨C6
pathogen risk management see risk
management
pathogenicity, mechanisms of 79
pathogens 35
death of 73¨C4, 272
disease triangle 54, 54¨C5, 79, 80
growth see growth of pathogens
identifying origin and distribution 72¨C3
information about 67¨C8, 103, 106,
116¨C17
knowledge gaps 275¨C7
source(s), route(s) and occurrence 58¨C9
survival of 74
virulence 36, 68, 80
people
consumers see consumers
extrapolation of animal data to 93
host factors 54, 54¨C5, 79¨C80
lack of quantitative human data 92¨C3
susceptibility 68, 79¨C80, 229
types 164
vulnerable groups 10¨C11, 12, 160
performance criteria 179, 181, 183¨C4,
216¨C17, 262, 291
development using MRA 231¨C6
heat pasteurization of milk 13¨C14
performance standards 181¨C2, 182¨C3, 285
Pillsbury Company 248, 249
portal sites 195, 196
portion size 110
poultry meat 33¨C4, 118¨C21, 150
'precautionary' principle 10–11, 191, 287
predictive modelling 18¨C20, 29, 59¨C60,
108, 196¨C203, 271¨C3
empirical modelling 197¨C201
mechanistic models 201¨C3
pregnant women 11, 12
preliminary MRA 132, 144
prioritisation 236
proactive risk communication 167, 167¨C9
probabilistic modelling 60, 209
probability 157
process criteria 7¨C8, 11¨C14, 181, 184,
216¨C17, 262
low-acid canned foods 11¨C13
pasteurization of milk 13¨C14
using MRA to develop 231¨C6
Process Hygiene Index 217
process-related information 68–9
process risk model (PRM) 150¨C1, 232¨C3
product 252¨C3
description 259
diversity 34¨C5
identification of intended use 259
product criteria 181, 184¨C5
'prohibition' principle 9–10
psychometric techniques 160¨C1, 162,
162¨C3
Public Health Laboratory Service 195
pufferfish 9¨C10
purpose 128
clear statement of 130¨C2
Q fever 13
qualitative information (narratives) 62
qualitative risk assessment 52, 59,
117¨C22, 276
risk characterization 142¨C4
tools for MRA 194, 195¨C6
quality of life indicators 55, 229¨C31
quantitative risk assessment (QRA) 8,
20¨C3, 52, 59¨C60, 122, 276
examples of 102, 147¨C51
risk characterization 144¨C7
tools for MRA 194, 196¨C209
quasi-mechanistic models 202
quorum sensing 80
reactive risk communication 167
ready-to-eat (RTE) foods 148, 189, 191
record keeping 250, 255–6, 261
reference values, microbiological 274–5
regulators 279
review 261, 290–1
risk 156–8, 267
meanings 156–7
technical expression of 157–8
risk acceptance 277–8, 284–9
acceptable standards and trade-offs
286–7
impact assessments 287–9
risk analysis 3, 95–6
expert consultation 21
risk assessment 3
expert consultations 21–2
food safety standards 24–6
key steps 280–4
microbiological see microbiological
risk assessment
in a risk analysis framework 95–6
tools for 267–8
risk assessors 280
risk-based food safety control scheme
278
risk/benefit information 289–90
risk characterization 2, 14, 36–7, 50,
60–2, 127–54, 284
current problems and future trends
151–2
examples 147–51
key issues 127–9
methods 135–42
qualitative outputs 142–4
quantitative outputs 144–7
requirements 129–35
appropriate inputs 133
clear statement of purpose 130–2
skills and tools 133–5
risk characterization curve 179–80
semi-quantitative outputs 143, 144
stages 128–9
risk communication 3, 155–71, 190, 285,
291–2
benefits and uses 166–7
concept of communication 163–6
concept of risk 156–8
expert consultation 21–2
future of 169–70
proactive and reactive 167
risk perception 158–63
strategy 167–9, 169
risk comparisons 157, 158, 165
risk determining factors 139–40
risk estimate 60–1, 137
risk factor analysis 160–1, 162, 162–3
risk management 3, 29–30, 103, 175–92,
285, 289–91
Codex Committees 26
developing food safety management
strategies 181¨C5
establishing FSOs 177–80
establishing microbiological criteria
185–90
expert consultations 21, 22
future trends 191
implementation problems 190
monitoring and review 290–1
prioritization using MRA 236
and risk/benefit information 289–90
selection of options 290
risk managers 101–2, 280
risk perception 158–63, 190, 286
expert perception vs public perception
161–3
outrage factors 158–61
risk profile 143–4, 280–1
route(s) of exposure 58–9
safety see food safety
Salmonella 15, 16, 150, 160, 189, 256–7
dose–response relationships 90, 91
exposure assessments 118–21
targets 33–4
Salmonella enteritidis 85, 91, 150
salt 6
sampling plans 185–8, 221–5
reliability of 223–5
and risk 225
stringency 188, 225
variability and 221–2
SAS 203
Sanitary and Phytosanitary (SPS)
Agreement 1–2, 20–1, 177, 226,
239–40, 261
Sanitation Standard Operating Procedures
(SSOPs) 249
scope
exposure assessment 114–15
governmental and industrial MRAs
151–2
hazard identification 66
risk characterization 130–1
screening 287
second-order modelling 87, 146
semi-quantitative MRA 143, 144, 225
sensitivity analysis 61, 232–3
risk characterization 129, 136–7,
139–40, 140
sensitivity coefficient 137
sequelae 93–4
serious hazards 187
severe hazards 187
severity
disease 55, 229–31
hazard 71, 187–8, 225, 228–9
severity assessments 92
shelf-life model 234–6, 246–7
shellfish 149
SIEFE model (Stepwise and Interactive
Evaluation of Food safety by an
Expert system) 207–8
sigmoid growth curves 116, 198–200
single-cell (non-threshold) model 81–2,
179, 228
Single-Hit function 89
social impact assessments 288–9
source(s) of pathogen 58–9
specifications, microbiological 216
spreadsheet-based food safety risk
assessment tool 208–9
SPSS 203
standards
international food safety standards
23–9, 29–30
microbiological 215–16
performance 181–2, 182–3, 285
Staphylococcus aureus 15, 16, 75, 118–21
static models 196–7
statistics 134, 146–7
Steering Group on HACCP Standards 252
sterilization 11–13, 189, 283–4
storage criteria 31–3
supply chain 279
core information 115–16
data for exposure assessment 109–11
generic model 103, 104
model 105–8, 111
surveys 270–1
survival of microorganisms 74
susceptibility 68, 79–80, 229
taboos 6
targets 33–4, 37
tempe bongkrek 10
temperature 32–3, 116–17
see also heat treatment
terms of reference 250–2, 259
testing, microbiological 14–16, 270–1,
290
tetrodotoxin poisoning 9–10
threshold (minimum infective dose)
model 81–2, 179, 227–8
tolerable level of risk (TLR) (ALOP) 27,
30–1, 177, 178, 179–80, 239–40,
263, 277–8
tools for MRA 193–213
decision support in hazard
identification 203, 204
decision support systems 207¨C9
future trends 209–10
model development and validation
203–5
off-the-shelf models 205–7
predictive modelling 196–203
qualitative tools 195–6
toxico-infectious microorganisms 79
toxins 71, 72, 79, 110
toxigenesis 74–5, 275–6
trade-offs 285, 286–7
trust 159
uncertainty 87
distinguishing variability and 138
exposure assessment 112–14
risk characterization 61–2, 129, 132,
138, 140–1
United Kingdom (UK) 218, 252
FSA 159, 257–8
risk communication 167–9
United States (USA) 236, 250
CDPH 96
Department of Agriculture (USDA)
147, 148–50, 195, 196
FDA 69, 96, 147, 148–50, 195, 196,
249
HACCP regulations 249
internet information sites 69, 195–6
listeriosis outbreak 1998/9 224
unsold fish 9
Uruguay Round 1, 20, 226
validation 62, 129, 142
dose–response models 94
HACCP 255
performance criteria 183–4
predictive models 200, 201, 202, 203–5
variability 87
distinguishing uncertainty and 138
exposure assessment 112–14
risk characterization 61–2, 128, 132,
138–9
and sampling plans 221–2
variables sampling plans 221–2
verification
HACCP 250, 255, 261
MRA 37–8, 62
Vibrio cholerae 7
Vibrio parahaemolyticus 91, 149
virulence 36, 68, 80
viruses 75
vulnerable groups 10–11, 12, 160
Wallace, C. 250, 251
Weibull-gamma function 89
World Health Organisation (WHO) 20,
25, 78, 175, 195, 196
joint expert consultations 21–3, 24, 26,
155–6, 167, 169
World Trade Organisation (WTO) 8
SPS Agreement 1–2, 20–1, 177, 226,
239–40, 261
see also General Agreement on Tariffs
and Trade
z-value 74, 116–17