4.3.5 The Role of Scientists and Engineers
This section is mostly based on the book The honest broker: Making sense of science in policy and politics by Robert Pielke Jr. (Cambridge University Press 2007). See also: Carl Mitcham, Rationality in Technology and Ethics, in: New Perspectives in Technology, Values, and Ethics (edited by W.J. Gonzalez), Springer 2015.
Imagine a politician or citizen seeking counsel regarding geoengineering responses to climate change. The pure knowledge exponent engineer responds like a detached bystander, spelling out in detail the various chemical and/or mechanical engineering processes that can sequester carbon. Politicians and citizens might well feel like they had inadvertently walked into a technical engineering class. While presenting all the scientific background and the advantages and disadvantages of the possible options, this engineer remains neutral on the decision itself.
The issue advocate engineer, by contrast, acts like a salesperson and immediately argues for a bioengineering-related seeding of the ocean with iron to stimulate phytoplankton growth that would consume carbon dioxide. But the argument would be made with a peculiarly technical rhetoric that deploys information about the chemical composition of the iron, transport mechanisms, relation to phytoplankton blooms, and more. Politicians and citizens might well think they were standing at the booth demonstrating a proprietary innovation at an engineering trade show. The advocate seems to have made a choice on what he believes is the best solution, and promotes it regardless of the normative judgments made by others (politicians, public).
The arbiter engineer acts more like a hotel concierge. Steering a course between that of neutral bystander and advocate, such an engineer starts by asking what the politician or citizen wants from a geoengineering response: simplicity, low cost, safety, dramatic results, public acceptability, or what? Once informed that the aim is safety, the arbiter engineer would identify a matrix of options with associated low risk factors. The arbiter engineer engages with the public and communicates knowledge guided strongly by publicly expressed needs or interests. The concierge might on another occasion work as a medical doctor or psychologist counseling a patient.
Pielke suggested a fourth type of scientist/engineer that in his view would be the most helpful one in a normative discourse: the honest broker engineer maintains some modest distance from the immediate needs or interests of any inquirer in order to offer an expanded matrix of information about multiple geoengineering options and associated assessments in terms of simplicity, cost, safety, predictable outcomes, and more. The effect will often be to stimulate re-thinking on the part of inquirers, perhaps a reconsideration of the needs or interests with which they have been operating, even when they did not originally take the time to express them. The experience might be more analogous to a career fair than to a single booth at a trade show.
Pielke and associates argue that the most appropriate path is to recognize the limits of engineering and to distance advice from interest-group politics while more robustly connecting it to specific policy alternatives. Research will not settle political and ethical disputes about the kind of world in which we wish to live. But engineers can connect their research with specific policies, once citizens or politicians have decided which outcomes to pursue. In this way, engineers provide an array of options that are clearly related to diverse policy goals. Rather than advocate a particular course of action, either openly or in disguise, engineers should work to help policymakers and the public understand which courses of action are consistent with our current – always fallible – technical knowledge about the world and our current – always revisable – visions of the good.
4.3.6 Risk and Uncertainty
Almost all debates in the technology discourse are – in one way or another – about risk and uncertainty. Sustainability is at risk, values are at risk of being impacted or violated, and responsibilities are attributed concerning the competence to deal with risks and keep them at a low level. Therefore, this topic deserves its own section, in which we will shed light on its definitions, its handling, and its institutional implementation in the form of the precautionary principle.
4.3.6.1 Perspectives and Definitions
When asking different technological stakeholders for their definition of risk, we will get different answers. The wider public or society – the proverbial man on the street – uses the word risk when talking about hazards, harms, or dangers. Particular concerns and often also fears – reasonable and irrational ones – are associated with the concept of risk. Regardless of whether certain worries and fears based on the perception of risks are justified, it is important for technology and risk assessors not to ignore these concerns, since they express the real atmosphere or mood in society. On the other hand, we have the scientific approach to risk: the natural and technical sciences (including engineering) study risk factors associated with technology empirically and numerically, often with a focus on the technology itself. Semi-empirical sciences like the social sciences and humanities study risk with a focus on the affected people. In both cases, dealing with risk is a rational and pragmatic endeavour rather than an emotional or intuitive one. For the economy, potential or actual risks are of a different nature: for companies and other economic actors, risks relate to the economic impact of activities, often in monetary terms. The risk of a malfunctioning technological artefact manifests not only in injuring a user, but also in decreasing the profit of the manufacturer. Politicians – ideally – see it as their task to keep risk levels at a minimum and to support the benefit side of technology development by making the right decisions in policy and governance. They want to know details about risk levels in order to respond to threats with regulatory guidance. Last but not least, questions of risk are always also philosophical questions, especially in ethics: What is a risk, and for whom? What kind of value is at risk in a particular situation? Ethicists define the normative framework in which a risk debate is held. This will be the focus of this section.
Besides all these different nuances in the perspectives on risk, we may try to find a common ground in the form of a definition of risk. A first way of saying it is this:
Risk is an unwanted event which may or may not occur.
There are two things to pay attention to. The first is the may or may not formulation. In case we are sure that something will occur, we call it a harm or a threat or a hazard. In case we are not so sure – that is, there is a degree of uncertainty – we call it a risk. The second is the focus on unwanted. Obviously, risk is always associated with something negative, undesirable, displeasing. In this simplest definition, the risk is the event itself. An example could be the statement “Lung cancer is one of the major risks that affect smokers.” However, sometimes the word risk is used differently:
Risk is the cause of an unwanted event which may or may not occur.
The exemplary statement would then be “Smoking is by far the most important health risk in industrialized countries.” Both of these understandings of risk – an unwanted event or its cause – are qualitative. In technical contexts, we need a more quantitative definition, like this one:
Risk is the probability of an unwanted event which may or may not occur.
Here, we could state, for example, “The risk that a smoker’s life is shortened by a smoking-related disease is about 50%.” In many contexts, the mere probability is not sufficient. It must be combined with the severity of an event:
Risk is the statistical expectation value of an unwanted event which may or may not occur.
The expectation value of a possible negative event is the product of its probability and some measure of its severity. It is common to use the number of killed persons as a measure of the severity of an accident. With this measure of severity, the risk associated with a potential accident is equal to the statistically expected number of deaths. Other measures of severity give rise to other measures of risk. Although expectation values have been calculated since the 17th century, the use of the term risk in this sense is relatively new. It was introduced into risk analysis in the influential Reactor Safety Study, WASH-1400, in 1975. Today it is the standard technical meaning of the term risk in many disciplines. It is regarded by some risk analysts as the only correct usage of the term. It is important to note that risk is differentiated from uncertainty as described by this definition:
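The expectation-value definition can be made concrete in a few lines of code. The following sketch is purely illustrative (the scenario numbers are invented for demonstration, not taken from any real safety study): risk is computed as the sum of probability times severity over mutually exclusive accident scenarios, with severity measured, as in the text, by the number of fatalities.

```python
def expected_harm(scenarios):
    """Risk as a statistical expectation value: the sum of
    probability * severity over mutually exclusive scenarios."""
    return sum(p * severity for p, severity in scenarios)

# Hypothetical accident scenarios for a facility:
# (probability per year, fatalities if the scenario occurs).
scenarios = [
    (1e-4, 10),    # minor accident: likelier, low severity
    (1e-6, 1000),  # major accident: rarer, high severity
]

risk = expected_harm(scenarios)
print(risk)  # statistically expected fatalities per year (here 0.002)
```

Note that the two scenarios contribute equally to the expectation value even though they differ enormously in severity; this insensitivity to catastrophe is one reason the expectation-value measure is contested in risk ethics.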
Risk is the fact that a decision is made under conditions of known probabilities (“decision under risk” as opposed to “decision under uncertainty”)
When there is a risk, there must be something that is unknown or has an unknown outcome. Therefore, knowledge about risk is knowledge about a lack of knowledge. A scientific approach to risk, consequently, puts a methodological focus on generating more knowledge in order to reduce risk levels. This has two strands of argumentation: first, the more we know, the more problems we shift from (unmanageable) uncertainty-related to (manageable) risk-related issues. Second, the clearer we are about the probabilities of occurrences and events, the easier it is for us to intervene or prepare. In other words, the first endeavour is to reduce uncertainty, and the second is to manage risks and react to them properly. This is the task of risk assessment.
4.3.6.2 Risk Assessment
The following scheme (compiled by the International Risk Governance Council, communicated by risk researcher Ortwin Renn) summarises the cycle of risk assessment that proved useful and practicable for technology governance.
Every assessment starts with awareness of a particular problem or conflict. Someone has to point out that there is or might be a risk. In this pre-assessment phase, problem-framing takes place and early warnings are expressed. A superficial screening under consideration of scientific conventions and viewpoints reveals whether or not an articulated problem (here: a risk) makes it to the next stage, the assessment phase. Here, the risk is analysed thoroughly. First, the particular hazard must be identified, for example the contamination of a river, the chance of injuries from misuse of a technical artefact, or social injustice as the result of the misregulation of new technologies. This hazard must be characterised in numbers, for example pollutant concentrations, their source, their effect, and so on. Then, it has to be determined who or what is exposed to the risk, and to what extent (how many, how much) – in the river example: fish, citizens, and so on. With this knowledge, the actual risk can be characterised: “There is a risk of losing up to 20% of the fish in that river due to the release of pollutant from the upstream lacquer manufacturing plant.” Once the risk is thus determined, it has to be evaluated whether the risk is tolerable and acceptable, and whether there is a need for risk reduction measures. As we will see later, this might be the most difficult part of the chain. Before discussing this point in more detail, let’s see what happens when it is decided to do something about the risk: the risk needs to be managed, which usually means attempting to reduce it. Based on the available knowledge, options for action have to be identified, assessed and evaluated. Finally, the best options are selected. After this decision-making procedure, the options are implemented and realised, including monitoring and control of the process.
With this feedback it can be decided whether the measures are successful, whether risks remain, or whether new risks arise instead. Here, the cycle is completed by subjecting the risk analysis to another pre-assessment stage, eventually starting the cycle again. All parts are connected by the important aspect of risk communication. In order to ensure the efficacy and usefulness of risk governance, all involved parties need to establish channels of clear, efficient and fast communication so that important information finds its way into the decision-making process.
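The cycle described above can be sketched schematically. The stage names below paraphrase the description in the text rather than quoting the IRGC scheme verbatim, and the code is only a minimal illustration of the two structural points made here: the stages form a loop, and communication cuts across every stage.

```python
# Stages of the risk governance cycle, paraphrased from the text.
STAGES = [
    "pre-assessment",  # problem framing, early warning, screening
    "appraisal",       # hazard identification, exposure, risk characterisation
    "evaluation",      # tolerable/acceptable? need for reduction measures?
    "management",      # identify, assess, select and implement options
    "monitoring",      # control and feedback; may trigger a new cycle
]

def run_cycle(communicate):
    """Walk through one iteration of the cycle, invoking the shared
    communication channel at every stage (communication connects all parts)."""
    for stage in STAGES:
        communicate(stage)

log = []
run_cycle(log.append)
print(" -> ".join(log))
```

The design mirrors the text's observation that after "monitoring" the process feeds back into a new "pre-assessment", so in practice `run_cycle` would be called repeatedly until the risk is judged tolerable.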
We can see clearly in this scheme that there are, basically, two main parts of work to do: first – represented by the left half – the acquisition of knowledge related to the particular case of risk; second – with the right half being in charge of it – the decision-making on actions and their appropriate implementation. In most cases, the people who deal with the tasks involved in this process are different for the assessment and the management phase. While scientists, researchers and other kinds of (technical) experts elaborate and compile knowledge, it is often politicians, managers, directors or other leaders of councils, agencies, companies, etc. who debate and decide on strategies and actions. It is, however, quite unclear who is in charge of the evaluation phase (the green box). Is that a third group of people (for example ethicists or social scientists)? Is it the experts from the left side or the decision-makers of the right side, or both? When technical experts are given the task and responsibility of making evaluative and normative judgments, there is a danger of a technocratic decision-making system. When political or economic stakeholders in leading positions have that power, there is a danger of biased decisions and severe conflicts of interest. Today, different levels of risk governance (for example, within companies, on the national governance level, internationally, globally), different countries and different political organs organise the evaluation phase in different ways for different types of risk. Some parliaments entrusted independent institutions with this task (for example, the Office of Technology Assessment at the German Bundestag), while others delegate the evaluation of risks to those who are also in charge of managing them. For high-impact social risks of sociotechnical systems (like bio- or nanotechnologies), even the wider public participates through various channels in the risk evaluation.
As the examples I mentioned (river pollution, injuries from technical artefacts, social injustice from the impact of a new technology) indicate, many different kinds of risk can be subjected to this procedure. Classically, this cycle was applied mostly to purely technical risks like contaminant concentrations or malfunction probabilities. However, it is certainly possible to apply the same strategy to ethical and social risks. Remember the scheme of The Larger Picture of technology assessment in section 4.3.1: the classical risk assessment is, according to this tiered structure, only the first step of a more complex assessment. In technology governance, however, not only risk perceptions need to be assessed and addressed. A major goal is the public acceptance of a technology and its development, and ultimately this can be reached by the societal embedding of the development process. This requires the inclusion of social and ethical implications in the assessment. Risk governance as an assessment tool, therefore, shouldn’t be limited to technical risks, but may be expanded to those kinds of conflicts that bear social tensions and ethical (or, more broadly, normative) ambiguities.
4.3.6.3 Risk and Ethics
From these considerations you can (hopefully) see how strongly the risk discourse is interwoven with ethics. Even the definition of what counts as a risk and what counts as a benefit is an evaluation that requires normative premises derived from ethical reasoning strategies. Moreover, what is one stakeholder’s benefit is another stakeholder’s risk. The question for whom something is a risk requires careful analysis and argumentation, too. Some say that risks can only exist for individual persons, since only those have expressible interests and a personal integrity that can be at risk. However, it has also been argued that societies or even mankind as such may face risks. Moreover, certain risks certainly also threaten non-human stakeholders, the biosphere, the environment, or the world as such. Here, again, we see the necessity of applying the centrisms in order to support our arguments. Anthropocentrists and bio- or ecocentrists will argue differently, or – as we called it in the context of justice – will add different entities to the equation.
Ethics becomes especially significant when arguments have to be compared, weighed and prioritised as different interests and perspectives collide. Even though it is often impossible to identify one (or some) viewpoint as correct and others as wrong, it is often possible to give good arguments why one viewpoint is more convincing or stronger than another. The resulting risk trade-off – a decision on a proper distribution of risks – is based on principles of justice and fairness. As usual in our technology discourses, the most common arguments are either consequentialist or deontological.
Furthermore, the risk debate has clear connections to that of responsibility. As seen in section 4.3.3, technology-related responsibilities are often attributed in view of certain risks and their prevention or reduction. Moreover, technological risks are usually related to one or more of the values we identified in section 4.3.2, for example functionality and safety, health, or environmental values. Clarifying these relations helps identify conflicting interests and mediate options for further proceeding. The normative framework of sustainability, with its applications of justice and fairness principles, gives a helpful reference for risk evaluation and management.
4.3.6.4 Risk Discourse Types
Of course, not all cases of risk and their respective conflicts have the same character and impact. It would be a waste of resources and energy to treat a simple risk with a commission on ethical and social implications, just as it would be dangerous to mandate a classical risk management group with the evaluation of ethical and social implications of (for example) nanotechnology. The abovementioned IRGC suggested the following scheme to classify risks and their management according to their character:
We may distinguish four types of risk: simple risks (cases of known probabilities of a clearly defined hazard), risks induced by the complexity of the case, risks caused by a high degree of uncertainty, and risks arising from ambiguity and strong disagreement between stakeholders. In the first case, an instrumental discourse is sufficient: an agency (e.g. an environmental office) discusses the case on the basis of the known facts and performs the risk assessment as a statistical risk analysis in order to determine the best strategy for dealing with this risk. There is, most likely, no conflict arising from this kind of risk.
When the case is too complex, there is usually a lack of sufficient knowledge about the relevant factors. Then, the discourse should be epistemic, meaning it should be focused on generating more and deeper knowledge that can help solve the problem. In order to do so, input from external experts in fields related to the case is required. Arising conflicts are usually of a cognitive nature: two experts disagree on a key aspect and try to convince each other by presenting knowledge and facts that they regard as significant. Since statistical data is often not available for cases like this, the best remedy is a probabilistic risk analysis.
It becomes trickier when the risk arises from uncertainty, that is, when not even the probabilities of events can be given. The discourse can only be reflective, which means that it proceeds in small steps of action-feedback loops. Agency staff, external experts and various additional affected stakeholders try to figure out the best options, implement them, re-evaluate them and proceed step by step, like walking through a dark, unknown room. Conflicts, here, are not only cognitive (“What are the most reasonable options?”), but also evaluative (“What would we do if…?”). Risks like this often can’t be reduced or dissolved. Therefore, solutions are found not merely in probabilistic risk modelling but require balancing risks to an acceptable level, for example by exploiting principles of distributive justice.
Ambiguity-induced risks arise when the interests and integrities of affected parties collide in intractable disagreement: either one side’s interests are neglected or the other’s. Only a participative discourse that brings all stakeholders – including the general public and other third parties – together has a chance of solving the issue. The conflicts are not merely of a cognitive or evaluative character, but of normative impact (“What shall we do? What do we value?”). Therefore, a merely objective risk assessment based on data and probabilities is insufficient. The solution can only be found in a trade-off of risks and a careful participative deliberation on how to proceed – in other words: in making compromises.
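The four-way classification above maps each risk character to a discourse type. The following sketch summarises that mapping; the key and value strings paraphrase the text, and the lookup function is my own illustrative addition, not part of the IRGC scheme.

```python
# Risk character -> appropriate discourse type, paraphrased from the text.
DISCOURSE_FOR_RISK = {
    "simple":    "instrumental",   # known probabilities; statistical analysis
    "complex":   "epistemic",      # knowledge gaps; bring in external experts
    "uncertain": "reflective",     # no probabilities; action-feedback loops
    "ambiguous": "participative",  # colliding values; involve all stakeholders
}

def choose_discourse(risk_character: str) -> str:
    """Return the discourse type suited to a given risk character."""
    try:
        return DISCOURSE_FOR_RISK[risk_character]
    except KeyError:
        raise ValueError(f"unknown risk character: {risk_character!r}")

print(choose_discourse("uncertain"))  # reflective
```

Of course, real cases rarely fall cleanly into one category; classifying a risk's character is itself part of the pre-assessment and can be contested.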
4.3.6.5 Levels of Risk Debate
Ethics also comes into play at the level of risk communication. Expectations of how to address and approach conflicts and their solutions differ between public stakeholders and S&T enactors. Ortwin Renn pointed out three levels of risk communication according to the degree of complexity and the intensity of the conflict (scheme A). In short, he stated that knowledge and expertise (for example, provided by scientific data or professionals from a certain field) can only help solve conflicts to a limited extent. The majority of concerns (for example, those expressed by the public) cannot be answered by scientists and risk researchers alone, since they are related to moral and social values and touch or affect certain worldviews.
Even though the model originally described aspects of risk communication in a debate among stakeholders, it can certainly be applied to conflict assessment in general. Concerns or problems with comparably low conflict potential can be solved by scientific and technical knowledge and expertise, even when the complexity is high. For example: the toxicity of nanoparticles injected into a patient can be investigated in advance as long as trustworthy methods are available. This is difficult, but not impossible. As soon as clear toxicological data is available, it can convince the patient of the safety of the treatment. When the solution of a conflict requires arguments beyond empirical research findings and expert knowledge, or when no scientific data is available, people trust experts who have proven experience and competence in a particular field. In the above-mentioned example, inquiries into (legal) responsibility for certain side-effects might fall into this category. Patients want to know whether sufficient regulations are in place to clarify responsibilities in case of adverse outcomes. The problem is not very complex, but bears a high conflict potential, since a large number of patients might be affected as soon as the nanotechnological methods are available and approved for application in medical treatment. In a debate, a scientist or engineer who demonstrates competence and experience in a wider range of aspects related to his research focus can make a stronger argument that is trusted by laypeople than, for example, a politician or a businessman. However, it is dissatisfying for a patient asking a “what if” question – inquiring about preparedness for unexpected cases, e.g. “What if after the treatment the nanoparticles damage other organs?” – when the S&T expert’s reply focuses solely on empirical data and statistical extrapolation (“So far, there is no evidence that the applied nanoparticles have undesired health effects.”). Furthermore, in many cases, a science- or technology-related problem is beyond any competence or expertise. When knowledge or experience is not available because the uncertainty is high, when new territory is explored or effects are unforeseeable, the strongest argument in a debate is one that refers to values and worldviews. The privacy issues of personalized (nano)medicine are such a case. This concern needs to be answered with statements about ethical guidelines, laws and regulations that preserve and protect the values that a society finds important.
A small but significant detail shall be pointed out here: Renn’s two-dimensional scheme might be interpreted as meaning that for conflicts with low intensity and low complexity, knowledge and expertise are completely sufficient to achieve a solution. However, the model can and should be regarded as “three-dimensional”, in the sense that the three domains fully overlap (scheme B). Values and worldviews still play an important role even when scientific knowledge and experts’ findings are powerful arguments. Only in view of a normative framework constituted by value and belief systems can it be defined what counts as a risk and what as a benefit (and for whom), what has the power to serve as a convincing fact or argument to solve a conflict, and what kind of incident or concern has the potential to mobilise sufficient awareness and attention to find its way beyond the pre-assessment stage into the contemporary S&T discourse agenda. In this respect, ethics is a fundamental and crucial element of risk discourse, for example in ELSI research and modern TA concepts such as “constructive TA”, “argumentative TA” and “parliamentary TA”. In other words: ethics does not only come into play when science and politics are not convincing enough; it underlies the whole debate.
A good example that illustrates common misunderstandings in expert-layperson communication is an event organised by Taiwanese agencies in cooperation with the TaiPower company concerning the construction of Taiwan’s fourth nuclear power plant near Keelung. It was intended as an information and discussion event in order to soothe the local citizens’ worries and fears. The citizens, naturally, had major concerns about safety aspects and emergency strategies, for example in case of earthquakes. The technical experts from TaiPower addressed those with technical data on how they make the facility safe, while the officials pointed out the low likelihood of accidents by referring to experiences with the other three facilities. Neither discussed the weighing of values (how to deal with emergency cases, evacuation priorities, the distribution of risks in relation to benefits, etc.). The angry and disappointed citizens left the event even more upset than before. This incident also shows how different ideas of the purpose of such citizen panels can conflict with each other: some take them as information and education events, hoping that protesting and angry citizens can be convinced of the usefulness of a strategy or plan. In principle, then, the event is seen as a tool to generate acceptance for a political or economic decision. In most of these cases, the events fail to achieve anything. Better experiences have been made with roundtable discussions and workshops that take the citizens’ concerns seriously and give them the feeling of influencing the actual decision-making process. This form of participation requires a direct form of deliberative democracy.
4.3.7 Practice Questions
- Which of the four risk assessment pathways should be chosen for the following risk issues?
- Getting lung cancer from smoking
- A black-out in Taiwan‘s electricity network
- Consumption of seafood caught close to Fukushima
- Protection of patient data generated by nanomedical methods
- Rise of sea level due to global warming
- What are possible approaches for the risk assessment of Google‘s self-driving car?
- Think of technical aspects, but also legal, ethical and social implications!
- You work in the Public Relations department of a big company. Your job is to maintain a digital (online) communication platform for customers and clients. The boss asks your opinion on how to improve the system. What can be your reply?