Today, I’d like to elaborate a bit on my professional field, science and technology assessment. There is no doubt that these two domains have a massive impact on our lives. Not only does scientific investigation generate empirical knowledge of physical features of the world and of its systems (for example society, environment, the human psyche), and not only does technology development create technical artefacts and other products; the ubiquitous scientific and technological mission also influences the way people perceive the world and think of the lifeworld experiences they have. Scientific realism and physical reductionism, but also the vision that everything one could imagine and desire is technically feasible and “engineerable” – including emotional, abstract or normative entities like love, happiness, politics, etc. – dominate our age. For many years (roughly up to the 1960s in Europe and the USA, in Asia still ongoing), positivism was the driving paradigm of modernism: as long as we put sufficient effort into something, we can achieve anything and will always be able to turn negative effects into beneficial ones! Just let science and technology do their work! In terms of my tree of knowledge: science and technology are not only branches in the tree, they also create new channels of meaning-construction through which other branches (like politics, economy, culture) are fed. Here are a few thoughts on that.
A first necessary clarification concerns the relation between science and technology. The common belief is that science comes first and produces the necessary knowledge that – in the next step – is applied and exploited for the design and engineering of technical artefacts. This view is contested by empirical research on the history of S&T. The steam engine, for example, was developed by craftsmen (Thomas Newcomen, James Watt) who had no background in physics or other sciences. The practical problems and flaws of the steam engine that occurred in the years after its invention triggered a more systematic scientific study of thermodynamics and mechanics. In this respect, we can say, a technological challenge that engineers and craftsmen faced was taken up by scientists in order to help solve it. In most cases, technology leaps ahead of science. Moreover, undoubtedly, humans created artefacts long before the elaboration of a scientific methodology.
Then, there was the idea that technological progress is somehow inevitable and unstoppable. Early philosophers of technology formulated the paradigm of technological determinism, according to which technological progress follows predestined courses and shapes society. Common examples (citing Karl Marx) are the windmill bringing about the feudal system and the steam mill inducing the transition to an industrial society. Around the 1960s, this paradigm shifted dramatically. Facilitated by the great system thinkers Niklas Luhmann, Jürgen Habermas, Thomas Kuhn and others, supported by pragmatists (e.g. John Dewey), phenomenologists (e.g. Edmund Husserl, Martin Heidegger) and the early constructivists (especially Ludwik Fleck, Gregory Bateson, Peter Berger, Thomas Luckmann, Paul Watzlawick, and others), the new predominant model of social constructivism (in America often termed “constructionism”) drew a picture of society shaping technological progress according to its needs, demands and desires. This fits well with the tree of knowledge idea: in the time after the Second World War, people’s fears and concerns (e.g. the threat from nuclear weapons) were no longer satisfyingly soothed by politics or religion, so they sought meaning in technology as a major factor for improving the quality of life. Technological artefacts were produced as a response to social needs (for example, channelled by economic market thinking and profit prospects), not simply because “it was possible”. At the same time – in the face of nuclear threat and increasing environmental destruction – technological development and its risks and uncertainties moved more and more into the focus of the social sciences. Once the deterministic thinking (“There’s nothing we can do about it, anyway!”) was replaced by constructivist thinking (“We can intervene in the construction process!”), there was great optimism that technology governance could shift the risk-benefit balance in favour of the (intended) positive outcomes.
This was the time when the US government installed the “Office of Technology Assessment” (in 1972) and, a bit later, European countries established similar institutions.
The question at that time (around the 1960s and 70s), looking at the past, was: “How could we do science and technology without the social sciences?!”. Forty years later (which is now), the technology assessors ask themselves: “How could we do Science, Technology and Society (STS) studies without ethics?!”. It was around the 1990s and early 2000s, significantly triggered by the rise of biotechnology and nanotechnology, that many disciplines (the sciences themselves, sociology, politics, philosophy, but also the public) recognised the need for more profound reflection on ethical issues of S&T. The widespread, partly irrational and partly aggressive opposition of the public to genetic engineering surprised the enactors of this S&T field and left them hamstrung. Great prospects (envisioned by scientists, medical practitioners, politicians) were juxtaposed with great moral challenges and imagined threats to humankind. The same can be said for nanotechnology, a field in which the major concern arises from “unclear risks” (expecting risks without knowing what the particular risks might be, how strongly they impact, and who is exposed). The picture of “value-free science” and “neutral technology” had to be given up for good. The challenge that technological progress posed to established and unquestioned normative frameworks – and also to ways of meaning-construction – led to normative uncertainty and gave rise to a call for ethical analysis, since the common tools and reasoning strategies proved insufficient in light of the conflict potential. This is aptly illustrated in a statement by Glenn McGee (in his essay “Pragmatic epistemology and the activity of bioethics”, in “Pragmatist ethics for a technological culture”, edited by Jozef Keulartz, Michiel Korthals, Maartje Schermer, and Tsjalling Swierstra, Springer, 2002, p. 112):
[We] really are only able to, and need to, question our basic assumptions in the moment when we collide with an element of the complexity of our life, a tear in the routine of experience that requires us to rethink things in order that we might progress along our current (or any other alternate) course.
New forms of technology assessment attempted to include the public in decision-making on S&T development (participatory TA), or to accompany progress from the beginning with studies on ethical, legal and social implications (ELSI) (e.g. constructive TA). Commissions on particular S&T topics in the established parliamentary TA institutions involved more and more ethics experts (“ethicists”) besides the technical, political, economic and social experts. A problem of the early years of ethical evaluation of S&T was the “speculative” character of S&T ethics, and the expert-driven, very intellectual-academic “top-down” approach the experts preferred (from ethical theory down to particular problems). Meanwhile, however, a whole set of useful and valuable methodologies and approaches for the ethical assessment of S&T has been developed (for example by experienced scholars in the field such as Armin Grunwald, Arie Rip, Ortwin Renn, Tsjalling Swierstra, Alfred Nordmann, and others). Again, constructivism and pragmatism had a major impact on the (self-)understanding of ethics in S&T domains: it is only worth the effort when it leads to practicable, viable, plausible, down-to-earth solutions. The key to its success is interdisciplinarity: scientists and practitioners engage in collaborative discourse with social scientists, ethicists, philosophers and political decision-makers, and sometimes with representatives of the “wider public” (often NGOs, or other affected interest groups). The difficulties that arise from the wide variety of expectations and viewpoints can again be illustrated by the tree of knowledge: all these stakeholders tend to use different channels for meaning-construction.
In order to get closer to what Habermas and Apel called ideal discourse – one in which all participants can contribute arguments free of power hierarchies, and in which the best argument wins rather than the most popular one – it could be useful to reconstruct arguments according to this scheme: How did a discourse participant construct meaning? What is the root (fear? expertise? emotion? selfish greed?)? What is the argumentative channel (inconsiderate default setting? (religious) dogmatism? empirical reason? profit thinking?)?
This point brings me – after describing the past and present of S&T assessment – to a future vision: maybe in 20 or 30 years from now, maybe sooner, maybe later, we will look back at this time and think: “How could we assess ethical and social implications of science and technology without psychology?!”. Isn’t the understanding of how we construct meaning a field for psychological research rather than for ethics (or philosophy in general) or the social sciences? As far as I can see, the specific sub-discipline of social psychology is already implemented in STS, but I am thinking of something different. Let me explain with an example from the field of media ethics: it is commonly accepted that ethical issues in media have to be separated into a “producer ethics” (What is ethically acceptable concerning the production and dissemination of media content?) and a “consumer ethics” (What constitutes “ethically acceptable” consumption of media content and usage of media infrastructure?). So far, technology ethics, in this respect, has focussed almost exclusively on “producer ethics”, taking “the public”, “the society” or “the citizen” as a grey, black-boxed group. What is more, it seems to me that many technology assessors have a “responsible, interested, engaged citizen” in mind when reflecting on public participation in S&T policy. Is that tenable? Isn’t the majority of society’s members (with variances between different countries, of course) selfish, disinterested, lazy, uninformed and dumb? An example: a citizen panel offering local citizens the opportunity to participate in decision-making on the selection of radioactive waste disposal sites – a topic with presumably big conflict potential – attracted 8 (eight!) people (and only with the incentive of getting paid for their participation) in an urban catchment of 200,000 inhabitants in England.
Not to speak of the highly anti-intellectual, ill-informed, religiously biased and regressive public policy discourses observed in the USA, a country that is obviously full of fools (how else can it be explained that they elected Donald Trump as their president?!)! The first “psychological” question is, therefore: what matters to people, and why? Are the experts’ estimations of what matters to the public always realistic and appropriate? However, a second psychological question appears much more important to me: what is the “consumer ethics” of technology? What makes people purchase, use or reject a certain technology (besides the sociological answers to this question)? How do people construct meaning from the existence and availability of technological artefacts? Only with this question, for example, would it be possible to perform an assessment of smartphone technology. Imagine what this question would in return mean for the responsibility of technology producers: if it turns out that a technology supports undesirable psychological traits (addiction, emotional coldness, increasing social isolation, aggression, etc.), would it be advisable to refrain from its production and dissemination (which could be driven by profit expectations, knowing that people will buy it)? How paternalistic may S&T development be? Here might be a specific entry point for a “Buddhist technology ethics”: does a development support “suffering” in the sense that it feeds the mind poisons (especially attachment, or greed), or should it be channelled in a way that facilitates liberation from them? We will see. Currently, the psychological aspects of the social construction of technology are dealt with in the same way that ethical issues were treated for a long time: somehow in the background, without granting them the level of expertise that they deserve. It was believed that everybody can do ethics.
In the face of intractable conflicts, however, it turned out to be crucial to include professional ethical expertise. Currently, it is the psychological aspects that are given only marginal importance. “Everybody can do psychology!” – Really? Maybe soon we will include social, environmental and “cognitive” psychologists in our S&T assessments. I would welcome that!
The inclusion of psychology might also help “ground” and clarify many philosophical and ethical positions on S&T issues. Take, for example, this statement by Jean-Pierre Dupuy in the “Companion to the Philosophy of Technology” (edited by Jan Kyrre Berg Olsen, Stig Andur Pedersen and Vincent F. Hendricks, Wiley-Blackwell, 2009, chapter 38, p. 216):
Indeed, the metaphysics of the NBIC convergence dreams of overcoming once and for all every given that is a part of the human condition, especially the finiteness of a human life – its mortality and its beginning in birth.
These are great words, but is that really true? We can probably assume that existential fears are predominant drivers of much human activity, but it sounds a bit far-fetched to claim that the underlying motivation (“dream”) of Nano-Bio-Info-Cogno enactors is – in a conscious fashion – to overcome mortality. The dedicated “transhumanists” are people outside the S&T development domain who merely conclude from recent S&T trends that transhumanism is a goal worth achieving. However, deeper insights into these driving forces of S&T progress might be delivered by psychological disciplines rather than by sociology or philosophy.
Here are some books that make great further reading on this topic. At the very least, they recently inspired me to write this letter.
- J.K. Berg Olsen, S.A. Pedersen and V.F. Hendricks (eds.), “A Companion to the Philosophy of Technology”, Wiley-Blackwell, 2009
- A. Grunwald, “Responsible Nanobiotechnology – Philosophy and Ethics”, CRC Press, 2012
- F. Lucivero, “Ethical Assessment of Emerging Technologies: Appraising the Moral Plausibility of Technological Visions”, Springer, 2016