
S&T Ethics 11

6.4 Risk, Uncertainty, and Precaution

Almost all debates in the technology discourse are – in one way or another – about risk and uncertainty. Sustainability is at risk, values are at risk of being impacted or violated, and responsibilities are attributed according to the competence to deal with risks and keep them at a low level. Therefore, this topic deserves its own section, in which we will shed light on its definitions, its handling, and its institutional implementation in the form of the precautionary principle.

6.4.1 Perspectives and Definitions

When asking different technological stakeholders about their definition of risk, we will get different answers. The wider public or society – the proverbial person on the street – uses the word risk when talking about hazards, harms, or dangers. Particular concerns and often also fears – reasonable and irrational ones alike – are associated with the concept of risk. Regardless of whether certain worries and fears based on the perception of risks are justified, it is important for technology and risk assessors not to ignore these concerns, since they express the real atmosphere or mood in society. On the other side, we have the scientific approach to risk: the natural and technical sciences (including engineering) study risk factors associated with technology empirically and numerically, often with a focus on the technology itself. The social sciences and humanities, as semi-empirical sciences, study risk with a focus on the affected people. In both cases, dealing with risk is a very rational and pragmatic endeavour rather than an emotional or intuitive one. For the economy, potential or actual risks are of a different nature: for companies and other economic actors, risks relate to the economic impact of activities, often expressed in monetary terms. The risk of a malfunctioning technological artefact manifests not only in injury to a user, but also in decreased profit for the manufacturer. Politicians – ideally – see it as their task to keep risk levels at a minimum and to support the benefit side of technology development by making the right decisions in policy and governance. They want to know details about risk levels in order to respond to threats with regulatory guidance. Last but not least, questions of risk are always also philosophical questions, especially in ethics: What is a risk, and for whom? What kind of value is at risk in a particular situation? Ethicists define the normative framework in which a risk debate is held. This will be the focus of this section.

[Figure 11-1: Stakeholder perspectives on risk]

Besides all these different nuances in the perspectives on risk, we may try to find common ground in the form of a definition of risk. A first way of putting it is this:

                Risk is an unwanted event which may or may not occur.

There are two things to pay attention to. The first is the may or may not formulation. In case we are sure that something will occur, we call it a harm, a threat, or a hazard. In case we are not so sure – that is, there is a degree of uncertainty – we call it a risk. The second is the focus on unwanted. Obviously, risk is always associated with something negative, undesirable, displeasing. In this simplest definition, the risk is the event itself. An example could be the statement “Lung cancer is one of the major risks that affect smokers.” However, sometimes the word risk is used differently:

                Risk is the cause of an unwanted event which may or may not occur.

The exemplary statement would then be “Smoking is by far the most important health risk in industrialized countries.” Both of these understandings of risk – an unwanted event or its cause – are qualitative. In technical contexts, we need a more quantitative definition, like this one:

                Risk is the probability of an unwanted event which may or may not occur.

Here, an exemplary statement could be “The risk that a smoker’s life is shortened by a smoking-related disease is about 50%.” In many contexts, the mere probability is not sufficient. It must be combined with the severity of an event:

                Risk is the statistical expectation value of an unwanted event which may or may not occur.

The expectation value of a possible negative event is the product of its probability and some measure of its severity. It is common to use the number of killed persons as a measure of the severity of an accident. With this measure of severity, the risk associated with a potential accident is equal to the statistically expected number of deaths. Other measures of severity give rise to other measures of risk. Although expectation values have been calculated since the 17th century, the use of the term risk in this sense is relatively new. It was introduced into risk analysis in the influential Reactor Safety Study, WASH-1400, in 1975. Today it is the standard technical meaning of the term risk in many disciplines. It is regarded by some risk analysts as the only correct usage of the term. It is important to note that risk is differentiated from uncertainty as described by this definition:

                Risk is the fact that a decision is made under conditions of known probabilities (“decision under risk” as opposed to “decision under uncertainty”).

When there is a risk, there must be something that is unknown or has an unknown outcome. Therefore, knowledge about risk is knowledge about lack of knowledge. A scientific approach to risk, consequently, puts a methodological focus on generating more knowledge in order to reduce risk levels. This has two strands of argumentation: First, the more we know, the more problems we shift from (unmanageable) uncertainty-related issues to (manageable) risk-related ones. Second, the clearer we are about the probabilities of occurrences and events, the easier it is for us to intervene or prepare for them. In other words, the first endeavour is to reduce uncertainty, and the second is to manage risks and react to them properly. This is a task for risk assessment.
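To make the expectation-value definition above concrete, here is a minimal sketch in Python. The events, probabilities, and severities are invented purely for illustration; they do not come from any real accident data.

```python
# Minimal illustration of risk as a statistical expectation value.
# All numbers below are invented for illustration, not real accident data.

# Possible unwanted events for a hypothetical technology, each with a
# probability of occurring (per year) and a severity (expected fatalities).
scenarios = [
    {"event": "minor malfunction",    "probability": 1e-2, "fatalities": 0.1},
    {"event": "major accident",       "probability": 1e-4, "fatalities": 10.0},
    {"event": "catastrophic failure", "probability": 1e-6, "fatalities": 1000.0},
]

# Risk in the technical sense: probability multiplied by severity,
# summed over all unwanted events (statistically expected fatalities per year).
risk = sum(s["probability"] * s["fatalities"] for s in scenarios)
print(f"Expected fatalities per year: {risk:.4f}")
```

Note that this calculation presupposes a decision under risk: all probabilities are known. Under uncertainty, the probabilities themselves are unknown and the expectation value cannot be computed at all, which is exactly why the scientific strategy described above first tries to turn uncertainty into risk.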

6.4.2 Risk Assessment

The following scheme (compiled by the International Risk Governance Council, communicated by risk researcher Ortwin Renn) summarises the cycle of risk assessment that has proved useful and practicable for technology governance.

[Figure 11-2: The IRGC risk governance cycle]

Every assessment starts with the awareness of a particular problem or conflict. Someone has to point out that there is or might be a risk. In this pre-assessment phase, problem-framing takes place and early warnings are expressed. A superficial screening under consideration of scientific conventions and viewpoints reveals whether or not an articulated problem (here: a risk) makes it to the next stage, the assessment phase. Here, the risk is analysed thoroughly. First, the particular hazard must be identified, for example the contamination of a river, the chance of injuries from misuse of a technical artefact, or social injustice as the result of misregulation of new technologies. This hazard must be characterised in numbers, for example pollutant concentrations, their source, their effect, and so on. Then, it has to be determined who or what is exposed to the risk, and to what extent (how many, how much) – in the river example: fish, citizens, and so on. With this knowledge, the actual risk can be characterised: “There is a risk of losing up to 20% of the fish in that river due to the release of pollutants from the upstream lacquer manufacturing plant.” Once the risk is thus determined, it has to be evaluated whether or not the risk is tolerable and acceptable, and whether or not there is a need for risk reduction measures. As we will see later, this might be the most difficult part of the chain. Before discussing this point in more detail, let’s see what happens when it is decided to do something about the risk: the risk needs to be managed, which usually means attempting to reduce it. Based on the available knowledge, options for action have to be identified, assessed and evaluated. Finally, the best options are selected. After this decision-making procedure the options are implemented and realised, including monitoring and control of the process. With this feedback it can be decided whether the measures are successful, whether risks remain, or whether new risks have arisen instead. Here, the cycle is completed by subjecting the risk analysis to another pre-assessment stage, so that the cycle may eventually start again. All parts are necessarily connected by the important aspect of risk communication. In order to ensure the efficacy and usefulness of risk governance, all involved parties need to establish channels of clear, efficient and fast communication so that important information finds its way into the decision-making process.
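As an illustration of how the assessment phase structures a case, here is a minimal sketch in Python that fills the stages of the cycle with the river example from above. The class, its field names and the concrete concentration value are my own assumptions for illustration; only the 20% figure and the lacquer plant come from the example in the text.

```python
from dataclasses import dataclass

# A minimal, illustrative model of the assessment phase, filled with the
# river-pollution example from the text. The structure and the concentration
# value are assumptions, not part of the IRGC scheme itself.

@dataclass
class RiskAssessment:
    hazard_identification: str      # what could cause harm?
    hazard_characterisation: dict   # quantify the hazard (numbers, units, source)
    exposure_assessment: list       # who or what is exposed, and how much?
    risk_characterisation: str = "" # the resulting risk statement

river_case = RiskAssessment(
    hazard_identification="Pollutant released by the upstream lacquer manufacturing plant",
    hazard_characterisation={"pollutant_concentration_mg_per_l": 3.2,  # assumed value
                             "source": "lacquer manufacturing plant"},
    exposure_assessment=["fish population", "citizens using the river"],
)

# The characterisation step condenses the collected knowledge into a risk
# statement that the evaluation and management stages can then work with.
river_case.risk_characterisation = (
    "Risk of losing up to 20% of the fish in the river due to the pollutant release."
)
print(river_case.risk_characterisation)
```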

We can see clearly in this scheme that there are, basically, two main parts of the work: first – represented by the left half – the acquisition of knowledge related to the particular case of risk, and second – with the right half being in charge of it – the decision-making on actions and their appropriate implementation. In most cases, the people who deal with the tasks involved in this process are different for the assessment and the management phase. While scientists, researchers and other kinds of (technical) experts elaborate and compile knowledge, it is often politicians, managers, directors or other leaders of councils, agencies, companies, and so on who debate and decide on strategies and actions. It is, however, quite unclear who is in charge of the evaluation phase (the green box). Is that a third group of people (for example ethicists or social scientists)? Is it the experts from the left side or the decision-makers from the right side, or both? When technical experts are given the task and responsibility of making evaluative and normative judgments, there is a danger of a technocratic decision-making system. When political or economic stakeholders in leading positions have that power, there is a danger of biased decisions and severe conflicts of interest. Today, different levels of risk governance (for example, within companies, on the national governance level, internationally, globally), different countries and different political organs organise the evaluation phase in different ways for different types of risk. Some parliaments have established independent institutions for this purpose (for example the Office of Technology Assessment in Germany), others delegate the evaluation of risks to those who are also in charge of managing them. For high-impact social risks of sociotechnical systems (like bio- or nanotechnologies), even the wider public participates in the risk evaluation through various channels.

As the examples I mentioned (river pollution, injuries from technical artefacts, social injustice from the impact of a new technology) indicate, many different kinds of risk can be subjected to this procedure. Classically, this cycle was applied mostly to purely technical risks like contaminant concentrations or malfunction probabilities. However, it is certainly possible to apply the same strategy to ethical and social risks. We have seen this overview in section 5.2.2:

[Figure 8-6: Tiered assessment overview from section 5.2.2]

The classical risk assessment is, according to this tiered structure, only the first step of a more complex assessment. In technology governance, however, it is not only risk perceptions that need to be assessed and addressed. A major goal is the public acceptance of a technology and its development, and ultimately this can be reached through the societal embedding of the development process. This requires the inclusion of social and ethical implications in the assessment. Risk governance as an assessment tool, therefore, shouldn’t be limited to technical risks, but may be expanded to those kinds of conflicts that bear social tensions and ethical (or, more broadly, normative) ambiguities.

Of course, not all cases of risk and their respective conflicts have the same character and impact. It would be a waste of resources and energy to treat a simple risk with a commission on ethical and social implications, just as it would be dangerous to mandate a classical risk management group with the evaluation of the ethical and social implications of (for example) nanotechnology. The above-mentioned IRGC suggested the following scheme to classify risks and their management according to their character:

[Figure 11-3: IRGC classification of risk types and management strategies]

We may distinguish four types of risk: simple risks (cases of known probabilities of a clearly defined hazard), risks induced by the complexity of the case, risks caused by a high degree of uncertainty, and risks arising from ambiguity and strong disagreement between stakeholders. In the first case, an instrumental discourse is sufficient: an agency (e.g. an environmental office) discusses the case on the basis of the known facts and performs the risk assessment as a statistical risk analysis in order to determine the best strategy for dealing with the risk. There is, most likely, no conflict arising from this kind of risk.

When the case is too complex, there is usually a lack of sufficient knowledge about the relevant factors. Then, the discourse should be epistemic, which means it should be focused on the generation of more and deeper knowledge that can help solve the problem. In order to do so, input from external experts in fields related to the case is required. The conflicts that arise are usually of a cognitive nature: two experts disagree on a key aspect and try to convince each other by presenting the knowledge and facts that they regard as significant. Since statistical data is often not available for cases like this, the best remedy is a probabilistic risk analysis.

It becomes trickier when the risk arises from uncertainty, that is, when not even probabilities of events can be given. The discourse can only be reflective, which means it is carried on in small steps of action-feedback loops. Agency staff, external experts and various additional affected stakeholders try to figure out the best options, implement them, re-evaluate them and proceed step by step, like walking through a dark, unknown room. Conflicts here are not only cognitive (“What are the most reasonable options?”), but also evaluative (“What would we do if…?”). Risks like this often cannot be reduced or dissolved. Therefore, solutions are found not merely in probabilistic risk modelling but require a balancing of risks to an acceptable level, for example by applying principles of distributive justice.

Ambiguity-induced risks arise when the interests and integrities of affected parties collide in intractable disagreement: either one side’s interests are neglected, or the other’s. Only a participative discourse that brings all stakeholders – including the general public and other third parties – together has a chance of solving the issue. The conflicts are not merely of a cognitive or evaluative character, but of normative impact (“What shall we do? What do we value?”). Therefore, a merely objective risk assessment based on data and probabilities is insufficient. The solution can only be found in a trade-off of risks and a careful participative deliberation on how to proceed – in other words: in making compromises.
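The four cases just described can be condensed into a small lookup structure. The following Python sketch merely restates the IRGC classification from the text; the key names and the wording of the entries are my own summary, not official IRGC terminology.

```python
# Summary of the four risk types described above: for each type, the
# appropriate discourse, the main participants, the analysis instrument,
# and the typical kind of conflict.
RISK_TYPES = {
    "simple": {
        "discourse": "instrumental",
        "participants": ["responsible agency"],
        "analysis": "statistical risk analysis",
        "conflict": "usually none",
    },
    "complexity-induced": {
        "discourse": "epistemic",
        "participants": ["agency staff", "external experts"],
        "analysis": "probabilistic risk analysis",
        "conflict": "cognitive (experts disagree on facts)",
    },
    "uncertainty-induced": {
        "discourse": "reflective",
        "participants": ["agency staff", "external experts", "affected stakeholders"],
        "analysis": "risk balancing in action-feedback loops",
        "conflict": "cognitive and evaluative",
    },
    "ambiguity-induced": {
        "discourse": "participative",
        "participants": ["agency staff", "experts", "stakeholders", "general public"],
        "analysis": "risk trade-off through deliberation and compromise",
        "conflict": "normative (what do we value?)",
    },
}
```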

6.4.3 Risk and Ethics

From these considerations you can (hopefully) see clearly how the risk discourse is strongly interwoven with ethics. Even the definition of what counts as a risk and what counts as a benefit is an evaluation that requires normative premises derived from ethical reasoning strategies. Moreover, one stakeholder’s benefit may well be another stakeholder’s risk. The question for whom something is a risk requires careful analysis and argumentation, too. Some say that risks can only exist for individual persons, since only they have expressible interests and a personal integrity that can be at risk. However, it has also been argued that societies or even mankind as such may face risks. Moreover, certain risks also threaten non-human stakeholders, the biosphere, the environment or the world as such. Here, again, we see the necessity of applying the centrisms in order to support our arguments. Anthropocentrists and bio- or ecocentrists will argue differently, or – as we called it in the context of justice – will add different entities to the equation.

Ethics becomes especially significant when arguments have to be compared, weighed and prioritised. Especially in the ambiguity-induced risk discourse, as we have seen, different interests and perspectives collide. Even though it is often impossible to identify one (or some) viewpoint as correct and others as wrong, it is often possible to give good arguments as to why one viewpoint is more convincing or stronger than another. The resulting risk trade-off – a decision on a proper distribution of risks – is based on principles of justice and fairness. As usual in our technology discourses, the most common arguments are either consequentialist or deontological.

Furthermore, the risk debate has clear connections to that of responsibility. As seen in section 6.3, technology-related responsibilities are often attributed in view of certain risks and their prevention or reduction. Moreover, technological risks are usually related to one or more of the values we identified in section 6.2, for example functionality, safety, health, or environmental protection. Clarifying these relations helps to identify conflicting interests and to mediate options for proceeding further. The normative framework of sustainability, with its applications of justice and fairness principles, gives a helpful reference for risk evaluation and management.

Ethics also comes into play on the level of risk communication. Expectations about how to address and approach conflicts and their solutions differ between public stakeholders and S&T enactors. Ortwin Renn pointed out three levels of risk communication according to the degree of complexity and the intensity of the conflict (scheme A). In short, he stated that knowledge and expertise (for example provided by scientific data or professionals from a certain field) can only help solve conflicts to a limited extent. The majority of concerns (for example those expressed by the public) cannot be answered by scientists and risk researchers alone, since they are related to moral and social values and touch or affect certain worldviews.

[Figure 11-4: Renn’s levels of risk communication (schemes A and B)]

Even though the model originally described aspects of risk communication in a debate among stakeholders, it can certainly be applied to conflict assessment in general. Concerns or problems with comparably low conflict potential can be solved by scientific and technical knowledge and expertise, even when the complexity might be high. For example: the toxicity of nanoparticles injected into a patient can be investigated in advance as long as trustworthy methods are available. This is difficult, but not impossible. As soon as clear toxicological data is available, it can convince the patient of the safety of the treatment. When the solution of a conflict requires arguments that go beyond empirical research findings and expert knowledge, or when there is no scientific data available, people trust experts who have proven experience and competence in a particular field. In the above-mentioned example, inquiries about (legal) responsibility for certain side-effects might fall into this category. The patients want to know whether sufficient regulations are in place to clarify responsibilities in case of adverse outcomes. The problem is not very complex, but bears a high conflict potential, since a large number of patients might be affected as soon as the nanotechnological methods are available and approved for application in medical treatment. In a debate, a scientist or an engineer who shows and proves competence and experience in a wider range of aspects related to his research focus can make a stronger argument, one that laymen trust more readily than, for example, a viewpoint expressed by a politician or a businessman. However, it is dissatisfying for a patient with a “what if” question – inquiring about the preparedness for unexpected cases, e.g. “What if, after the treatment, the nanoparticles damage other organs?” – when the S&T expert’s reply focuses solely on empirical data and statistical extrapolation (“So far, there is no evidence that the applied nanoparticles have undesired health effects.”). Furthermore, in many cases, a science- or technology-related problem is beyond any competence or expertise. When knowledge or experience is not available because the uncertainty is high, when new territory is explored or effects are unforeseeable, the strongest argument in a debate is one that refers to values and worldviews. The privacy issues of personalized (nano)medicine are such a case. This concern needs to be answered with statements about ethical guidelines, laws and regulations that preserve and protect the values that a society finds important.

A small but significant detail shall be pointed out here: Renn’s two-dimensional scheme might be interpreted to mean that for conflicts of low intensity and low complexity, knowledge and expertise are completely sufficient to achieve a solution. However, the model can and should be regarded as “three-dimensional”, in the sense that the three domains fully overlap (scheme B). Values and worldviews still play an important role when scientific knowledge and experts’ findings are powerful arguments. Only in view of a normative framework constituted by value and belief systems can it be defined what counts as a risk and what as a benefit (and for whom), what has the power to serve as a convincing fact or argument to solve a conflict, and what kind of incident or concern has the potential to mobilise sufficient awareness and attention so that it finds its way beyond the pre-assessment stage onto the contemporary S&T discourse agenda. In this respect, ethics is a fundamental and crucial element of the risk discourse, for example in ELSI research and modern TA concepts such as “constructive TA”, “argumentative TA” and “parliamentary TA”. In other words: ethics does not only come into play when science and politics are not convincing enough; it underlies the whole debate.

6.4.4 Precautionary Principle (預防原則)

When we are forced to make decisions in the face of risks or uncertainties, we might need to apply guidelines or principles that give an orientation on what is a proper way to proceed. The most established tool in the field of technology governance is the precautionary principle, or better: precautionary principles, since there are many variations of it. The simplest way of putting it is the proverb “Better safe than sorry!” Common people apply this when they are anxious about potentially wrong decisions and remain inactive until they know more clearly what the best thing to do is. In professional terms, we may define precaution like this:

“Don’t implement (apply, enact, publish) technology (and/or scientific knowledge) as long as there are uncertainties about its effects!”

In order to keep risk exposure low and effects manageable, it is advised to gain sufficient knowledge – and, by that, also control – first, and only then bring the respective technology into effect. Politically, the precautionary principle (PP) was first enacted in the Rio Declaration, written at a UN conference in Rio de Janeiro in 1992 (the same conference at which a declaration on global sustainability was made, see 5.1):

“Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.” (Rio Declaration, 1992, Principle 15)

This formulation is difficult to understand and on first reading might sound confusing, even like the opposite of the common understanding of precaution. In order to understand it correctly, it is important to know that at that time many governments refused to act against environmental degradation and global phenomena like climate change, claiming that there was no scientific evidence that the problems were really caused by anthropogenic (human) activities. The idea is: even though it is not scientifically certain in which way environmental degradation happens, we (mankind, represented by politicians) should do everything we can to prevent serious or irreversible damage, because if we wait too long, it might be too late to react! This has been the major theme of precaution ever since: act while it is still possible. Precaution almost never means “Stop doing research!” or “Ban a technology!”. It was conceptualised as a tool to preserve human control over technological progress and to empower humans to protect the values they hold important.
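Read as a decision rule, Principle 15 has a simple logical structure, which the following Python sketch tries to capture. The function name, the boolean inputs and the returned recommendations are my own simplification for illustration; they are not a formalisation found in the Declaration itself.

```python
def rio_principle_15(serious_or_irreversible_damage_threatened: bool,
                     full_scientific_certainty: bool,
                     cost_effective_measures_available: bool) -> str:
    """Illustrative, simplified reading of Principle 15 of the Rio Declaration (1992)."""
    if serious_or_irreversible_damage_threatened and cost_effective_measures_available:
        # Lack of full scientific certainty is explicitly NOT a reason to postpone action.
        return "act now: take preventive measures"
    if not full_scientific_certainty:
        return "reduce uncertainty: generate more knowledge and keep monitoring"
    return "no precautionary action required"

# Example: climate change in the early 1990s - serious damage threatened,
# no full certainty about anthropogenic causes, but measures available.
print(rio_principle_15(True, False, True))  # -> "act now: take preventive measures"
```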

The PP has been applied in various ways. Put negatively, it is simply a rhetorical tool exploited by politicians in order to silence critical voices, as in “No worries, we have everything under control!”. In a few actual cases, the PP was employed as a decision-making aid, for example in prioritising EU research funds for critical techno-scientific fields. It has been attempted (mostly in academic essays) to use the PP as a moral principle, in the same way we use sustainability, autonomy or justice as principles. Moreover, some jurisdictions made use of the PP as a legal principle in law- and policy-making, for example in several EU regulations on nanotechnological applications such as nano-scaled drugs.

Precaution has been debated heatedly among philosophers and ethicists. Strictly speaking, it is not a principle that helps in situations of risk (a probabilistic estimate of the possibility that a (negative) event might occur), but only in cases of uncertainty (a situation in which it is not possible to estimate risk probabilistically). This corresponds to the point we made in 6.4.1: knowledge is created in order to elevate an issue of uncertainty into an issue of risk, which is easier to handle. When this is not possible, the PP applies until knowledge is available. This causes a problem that the promoters of the PP intended to avoid: it supports scientistic and technocratic approaches to risk governance rather than holistic solutions that include ethical and social implications. With the intention of reducing uncertainties and dealing with risks, technical and scientific strategies are promoted. Ultimately, according to the critics of the PP, this approach of avoiding moral and social risks blocks sustainable scientific and technological development.

6.4.5 Practice Questions

  • Which of the four risk assessment pathways should be chosen for the following risk issues:
    • Getting lung cancer from smoking
    • A black-out in Taiwan’s electricity network
    • Consumption of seafood caught close to Fukushima
    • Protection of patient data generated by nanomedical methods
    • Rise of sea level due to global warming
  • What are possible approaches for the risk assessment of Google’s self-driving car?
    • Think of technical aspects, but also legal, ethical and social implications!
  • New nano-scientific methods enable the production and application of radically new medical compounds with partly unknown toxicological properties – a case for the precautionary principle! What could it look like in practice?
    • Think of a roundtable of stakeholders discussing this issue. What are pros and cons of applying precautionary measures?
