J R Soc Med. 2006 Jan; 99(1): 14–19.

Risk assessment can be thought of as the lens through which we anticipate the consequences of research and the impact of the actions of researchers. The way in which risk of harm is managed in research is strongly influenced by the surrounding social and political environment, leading to differences in national and local styles of regulation and review. Different research studies carry different risks, so systems for review and approval must adapt to the question being asked and the nature of the study. Researchers can never wholly guarantee safety in any research, but participants and researchers must be offered reasonable protection within any study, with appropriate arrangements in place should something go wrong.

The past 20 years have witnessed the development of a more systematized approach to research, with greater emphasis on accountability, performance management, and quality assurance. The review of research now involves interpreting layers of complex legislation and assessing whether the potential benefits of a particular research project in terms of important knowledge gained are proportionate to the potential physical and/or psychological harm it
might cause. However, efforts to articulate this in the design, conduct and management of research have revealed deep divisions around how to define and apply concepts of minimal risk, potential benefit and important knowledge. In this paper, we explore the risk of harm within research, the means by which this is regulated and the impact on researchers. Paper three in this series describes how risk can be effectively communicated to potential research participants.

Risk is broadly concerned with potential, but not precisely knowable, harm. It is a concept that is pervasive in modern Western society and, despite a growing literature on public views of risk, is most often articulated in terms of calculation, measurement, probability and the prediction of potential adverse events (having been based in earlier times on notions of fate or chance). This approach is grounded in a rational,
post-Enlightenment view of the world, where potential harm is assessed using mathematical judgements to weigh up potential risks and benefits.1,2 Such judgements are not value free. They are based on interpretation of the scientific evidence about the risk of harm to
research participants2,3 and may be influenced by high-profile events that provide impetus for government or professional intervention.4 For instance,
despite the evidence that many of the actions taken prior to the Alder Hey organ retention scandal were within legal and ethical codes of the time, the high-profile and controversial nature of subsequent debate resulted in changes in the way in which surgical or autopsy tissue is
stored.5,6 Researchers now work within what has been termed a ‘risk society’,7 characterized by social and technical advances but with
limited knowledge regarding related risks (such as those associated with unknown latent infection following xenotransplantation). In seeking to cope with this phenomenon, modern society has become increasingly concerned, not only with the risk of harm, but with the assessment, management, communication and monitoring of that
risk.7,8 However, not only is the concept of risk historically and socially located (i.e. it is perceived in different ways by different people and across different societies1), but there is also little evidence that the extensive arrangements in place for assessing and managing risk are effective.

Risk is therefore not a blanket term to be applied in the same way across all studies. This is evident in the UK Research Governance Framework, which states that some standards for managing risk in research may require judgement and
interpretation.9 Good governance structures and systems therefore provide a framework for action rather than prescriptive protocols.

As with every activity, research inevitably carries some risk (see
Box 1).10 In the same way that there are a number of assumed risks that we sign up to in our daily lives (for instance, in getting on a bus, we implicitly accept the risk of a road accident), researchers can never wholly
guarantee safety, and participants must therefore be aware of the risks and accept them before taking part in the research.

Harm can be either unintentional (also referred to as non-negligent) or negligent. Non-negligent harm might be regarded as bad luck or one of the risks we all live with (for example, slipping over and breaking your ankle while attending for a sight test); negligent harm, by contrast, involves some level of culpability (for example, on the part of a nurse who uses a contaminated needle when taking blood and is later faced with a patient infected with hepatitis B). Risk assessment goes some way to addressing negligent harm (by, for instance, identifying likely incidents of fraud and misconduct) and offers a means of minimizing the potential for non-negligent harm (by, for instance, ensuring that individuals are properly selected, trained and supervised and keep auditable records).

EVIDENCE OF HARM IN RESEARCH
The processes of review undertaken by research ethics and governance committees provide a framework for assessing the risk of harm potentially brought about by research studies and for ensuring that this is proportionate to the potential benefit(s) to be gained. In thinking about the potential harms of research, local and multicentre research ethics committees draw upon a range of ethical frameworks and guidelines (see paper one for a detailed list), all of which were developed primarily around the clinical drug trial. As a result, the concept of risk has typically been expressed in terms of the physical, moral and emotional harm associated with drug interventions and associated tests and monitoring procedures.

Box 2 Sources of advice and guidance*
- Assessing risk
- Legal issues
- Indemnity and insurance
- Accessing patient data
Actual evidence about the harms of research is sparse, particularly for non-intervention studies. Assessing the potential benefit of a proposed study is an even more difficult task. For example, research involving human gene transfer poses formidable challenges to ethics committees trying to evaluate the proportionality of risk and benefit without any way of knowing the actual (as opposed to expected) impact the work will have in the future.

Because ethical frameworks and guidelines necessarily need to be interpreted, and because evidence on the potential harms of research is invariably incomplete, committees have to make judgements, often on a case-by-case basis. Over time, they may also develop patterns of custom and practice, a fact that partly explains the variation that researchers can experience between committees.17,18

Although non-intervention studies such as survey research or participatory action research are not devoid of risks, there has been a recent trend towards greater consideration of the potential harms of such studies.19 Ironically, this approach carries some risk of its own by imposing inappropriate or inflexible frameworks of ethical evaluation. For example, there is a danger of over-emphasizing risks20 or imposing requirements (e.g. written consent) that may fit poorly with the research design.21 It is beginning to be acknowledged that a more flexible review and approvals procedure would probably lead to greater benefits overall. The NHS R&D Forum has already begun exploring this issue (Box 2).
An unintended consequence of the system has been to block or hinder research studies that do not really have unresolved ethical issues, through either delays20 or committees' conservative judgements.18 Although the protection of participants (particularly ‘vulnerable groups’19 such as children or the mentally incapacitated) from physical and psychological harm is the raison d'être of ethics committees, there is a paucity of research about the extent of protection needed. Two recent studies have strongly suggested that ethics committees' assumptions about the vulnerability of certain groups may not always be in step with the views of the ‘vulnerable’ individuals themselves.22,23

The involvement of multiple partners in research has the knock-on effect of potentially diffusing responsibility for any adverse effect on participants across organizations. This can accentuate perceptions of risk and lead to more stringent regulations within collaborative agreements.3 UK governance arrangements attempt to address this by requiring delineation of research responsibilities and ensuring these are not only documented, but also communicated to potential research participants. Box 3 outlines key areas of accountability, which also, importantly, indicate potential liability.

Box 3 Roles, responsibilities and liabilities in research
LEGAL ISSUES

As well as being ethical, research must also be legal. In the UK, NHS R&D committees are responsible for ensuring that research carried out within the NHS is legal. Universities also have a responsibility to ensure that research carried out under their auspices is legal. In contrast, although NHS ethics committees must have due regard for the requirements of relevant regulatory agencies and applicable laws, they are required to pass opinion on the ethical acceptability of a project, rather than on specific interpretation of regulations or laws.24 In the UK there are a number of laws that researchers, R&D committees, and universities must be aware of (see Box 4).

Box 4 Key legal requirements in research (see Box 2 for web links)

- The Human Tissue Act5 introduced legislation in 2004 to regulate the removal, storage and use of human organs and tissue. As well as streamlining and updating existing law, the Act aimed to provide safeguards and penalties to prevent a recurrence of the distress caused by retention of tissue and organs without proper consent. As a result, living patients must now consent to the retention or use of their organs and tissue when this goes beyond diagnosis and treatment, and there must be consent for the removal, retention and use of tissue from people who have died (if they die without expressing a wish, consent must be given by someone nominated by or close to them).

- The Medicines for Human Use (Clinical Trials) Regulations29 came into force on 1 May 2004 to implement the European Union Clinical Trials Directive.30 The aim is to provide an environment for conducting clinical research that protects participants without hampering the discovery of essential new medicines, and to simplify and harmonize the administration of clinical trials across EU Member States. As a result, anyone wishing to ascertain the efficacy or safety of a medicine in human subjects must obtain a clinical trial certificate.

- The Data Protection Act31 seeks to strike a balance between individuals' rights regarding information held about them and those with legitimate reasons for processing and using their personal information. Those processing personal information must comply with principles of good information handling (e.g. data must be processed for limited purposes; be adequate, relevant and not excessive; be accurate and up to date; and not be kept longer than necessary).

- The Human Fertilisation and Embryology Act32 provides a legal framework for everyone involved in fertility treatments, making provisions to license and monitor any research using human embryos, as well as the performance of fertility treatment clinics. An amendment in 2001 allows for the creation of embryos for therapeutic cloning research. The Act permits research on human embryos only for strictly defined purposes, and only if the Human Fertilisation and Embryology Authority considers the research to be ‘necessary and desirable’.

- The Health and Social Care Act33 allows for the use of patients' medical information without their consent to support essential medical purposes that are in the interests of the wider public and where obtaining consent is impracticable. Disclosures of data to cancer registries and for the purpose of communicable disease surveillance have been approved.

There is considerable overlap between what is ethical and what is legal, but there are contested areas. There has been much debate in recent years about the requirements of the Data Protection Act and the common law on confidentiality.25 Although the debate continues,26,27 the result has been greater restrictions on the use of identifiable medical data in order to lessen the risk of a breach of confidentiality.
Researchers who are not involved directly in patients' clinical care must now apply, after gaining ethical approval, to the Patient Information Advisory Group (a temporary quango set up by the 2001 Health and Social Care Act; see Box 4) in order to use identifiable data without NHS patients' explicit consent. Such uses include previously unremarkable activities such as identifying a sampling frame for a survey, compiling registry information,28 linking existing datasets, or identifying suitable patients to invite to take part in a research study.26,28

ARRANGEMENTS FOR INDEMNITY

The very nature of risk means that researchers can only offer reasonable, not absolute, protection to participants. In general, indemnity (or insurance) arrangements must be in place so that, in the event that anyone is harmed as a result of deliberate intent or a failure to follow normal procedures (negligence),9 those affected within a research study can receive compensation via appropriate channels. However, less provision is generally made for non-negligent harm (see Box 1 for examples of negligent and non-negligent harm). For instance, the NHS can only address negligent harm as the legal liability arising from NHS Trusts' duty of care towards patients (i.e. NHS cover is not available for non-negligent harm). It is the role of ethics committees to decide whether or not a study can go ahead without a scheme of compensation for non-negligent harm. Although there is no legal requirement to incorporate such a scheme, committees will consider this on a case-by-case basis (and researchers may also wish to consider any moral responsibilities in this regard). In general, committees are less concerned with non-intervention studies, where the risk of harm is considerably less (see above).
Either way, the Research Governance Framework9 requires that compensation arrangements for negligent and non-negligent harm are made clear to participants before a piece of research can commence (see paper three in this series). This is particularly important where the research involves multiple partners.

CONCLUSION

Consideration of the risk of harm is integral to high quality research (see Box 2 for links to further guidance on many of the issues raised). Ethics and governance committees involved in approving research have an important role in conceptualizing what constitutes harm and in giving priority to reducing risk to participants.9,10,13 Influenced by public concern and anxiety over medical research, those reviewing research do so through the lens of modern ‘risk society’, tending to focus on technical assessments of the risk of harm. For researchers seeking approval, many questions remain unanswered: Who decides on risks? By what criteria? How do reviewers account for their decisions? In addition, more complex judgements regarding the character, professional integrity and experiential judgement of the researcher are not explicitly included, though a face-to-face interview at an ethics committee is an opportunity for the researcher to demonstrate these qualities. Arguably, there is a need for more formal consideration of researchers' virtues (as well as technical procedures) within risk assessment and governance arrangements generally. Consideration of issues of trust might facilitate risk assessment by allowing committees to differentiate explicitly between different studies and settings. One way to begin to address this might be to understand better how the research process is conceptualized and risk assessed in different settings.

Acknowledgments
We thank Mandy Lee for her helpful comments.

Notes
This is the second in a series of three papers on research governance.
[Series editors: Trisha Greenhalgh and Sara Shaw]

Footnotes
*Refer to the first paper in the series for details of (and links to) regulations and guidance relating to healthcare research.

References
1. Lupton D. Risk. London: Routledge, 1999
2. Van Ness P. The concept of risk in biomedical research involving human subjects. Bioethics 2001;15: 364-70
3. Parker DB, Barrett RJ. Collective danger and individual risk: cultural perspectives on the hazards of medical research. Intern Med J 2003;33: 463-4
4. Evans JH. A sociological account of the growth of principalism. Hastings Center Rep 2000;30: 31-8
5. Human Tissue Act c.30. London: HMSO, 2004
7. Beck U. Risk Society. London: Sage, 1992
8. Power M. The Audit Society: Rituals of Verification. Oxford: Oxford University Press, 1997
9. Department of Health. Research Governance Framework for Health and Social Care. London: Department of Health, 2001
10. Jamrozik K. The case for a new system of oversight of research on human subjects. J Med Ethics 2000;26: 334-9
11. Lock S, Wells F, eds. Fraud and Misconduct in Medical Research. London: BMJ Publishing, 1993
12. DuVal G. Ethics in psychiatric research: study design issues. Can J Psychiatry 2004;49: 55-9
13. Saunders J, Wainwright P. Risk: Helsinki 2000 and the use of placebo in medical research. Clin Med 2005;3: 435-9
14. Sibille M, Deigat N, Janin A, Irkesse S, Durand DV. Adverse events in phase-I studies: a report in 1015 healthy volunteers. Eur J Clin Pharmacol 1998;54: 13-20
15. Fitzsimons D, McAloon T. The ethics of non-intervention in a study of patients awaiting coronary artery bypass surgery. J Adv Nurs 2004;46: 395-402
17. Hearnshaw H. Comparison of requirements of research ethics committees in 11 European countries for a non-invasive interventional study. BMJ 2004;328: 140-1
19. World Medical Association. Declaration of Helsinki. Helsinki: WMA, 1964
20. Royal College of General Practitioners. Informal Consultation on Barriers to Research Created by Over-Regulation, Ethics Committees Etc. London: Royal College of General Practitioners Research Group, 2004
21. Khanlou N, Peter E. Participatory action research: considerations for ethical review. Soc Sci Med 2005;60: 2333-40
22. Terry W, Olson LG, Ravenscroft P, Wilss L, Boulton-Lewis G. Hospice patients' views of research in palliative care. Intern Med J (in press)
23. Dyregrov K. Bereaved parents' experience of research participation. Soc Sci Med 2004;58: 391-400
24. Central Office for Research Ethics Committees. Governance Arrangements for NHS Research Ethics Committees. London: COREC, 2001
25. Coleman MP, Evans BG, Barrett G. Confidentiality and the public interest in medical research—will we ever get it right? Clin Med 2003;3: 219-28
26. Peto J, Fletcher O, Gilham C. Data protection, informed consent, and research. BMJ 2004;328: 1029-30
27. Cassell J, Young A. Why we should not seek individual informed consent for participation in health services research. J Med Ethics 2002;28: 313-7
28. McKinney PA, Jones S, Parslow R, et al. A feasibility study of signed consent for the collection of patient identifiable information for a national paediatric clinical audit database. BMJ 2005;330: 877-9
29. The Medicines for Human Use (Clinical Trials) Regulations, SI No. 1031. London: HMSO, 2004
30. European Parliament and the Council of the European Union. Directive 2001/20/EC. Luxembourg: European Parliament, 2001
31. Data Protection Act. London: HMSO, 1998
32. Human Fertilisation and Embryology Act. London: HMSO, 1990
33. Health and Social Care Act. London: HMSO, 2001