
Lecture 08

4.3 Science and Technology Ethics

In the context of science and technology (S&T) ethics, we will learn important concepts like sustainability, responsibility, and risk discourse. These are important not only for S&T but for many discourse situations in daily life, be it with friends or family, with bosses or co-workers, or in other professional interactions (just as with the paternalism concept we learned in medical ethics, or the slippery-slope argument in political discourse). Moreover, as in all other realms of applied ethics, we approach it from two (or more) sides: the producers and developers of technology, and the consumers and users of technology.

4.3.1 Ethical dimensions of technology

Technology is best understood as a system of things, actions and knowledge that is embedded in complex social, cultural, political and historical contexts. Technology ethics is not so much concerned with particular artefacts (for example, the ethics of cars) as with the implications of technological development that affect the individual and social life of people and the environment. Let’s have a closer look at the elements of technological development so that we get an idea of how complex it is:


The arrow indicates that technological development proceeds in steps, with starting conditions or initial phases and with an output. This must not be understood in a temporal but in a conceptual sense. There are elements that play an important role in the early phase of development while others concern later stages. However, all elements impact the development at all times, not only at the beginning, in the middle or at the end.

On the input side, the most evident element of technology is knowledge (know-how, technical expertise). Without it, no artefact can be invented. This knowledge can be scientific, but not necessarily. Early artefacts, up to the invention of the steam engine, were invented by “non-scientists” – creative minds with practical goals. This might rather be categorised as engineering knowledge. Besides, thinking of technology as a societally embedded system, sociological (and related) knowledge may also be of importance for the success or failure of technological progress. The second element that is a crucial basis for technological development is the employed materials, including (natural) resources and energy. For this, manpower and the environment are needed.

On the other end, the output end, we have the factors that motivate the development: either a specific demand (understood as “pulling” the development) or a market (as “pushing” the development). The difference I want to depict here is that a development can either be the response to specifically articulated interests and desires (the demand for a technological solution for something), or the possibility-driven filling of market niches that exist without prior awareness among customers and users. We will come back to this distinction frequently in our later reflections.

The development itself can be characterised by its conditions, its goals and purposes, and its implications and impacts. As seen above, artefacts and techniques are developed and applied as means to achieve certain ends. Manifold conditions (society, culture, politics, etc.) determine what kinds of ends are regarded as desirable or urgent, and also set the conditions for what types of means can be realised. Moreover, besides the successful or failed fulfilment of these goals, technology may also have various unintended side-effects and impacts. Generally, technology’s effects on social and environmental stakeholders are either goods – then we speak of benefits – or evils – then we speak of risks or harms. This outcome of technology is predominantly determined by the way the development proceeds, the middle section of the arrow. It consists of design (planning and conceptualising technology), production (the assemblage, construction or replication of items), management (distribution, sale, marketing), and disposal or recycling (the treatment of technology after its usage). In all stages, the conditions, implications and purposes of technology play an important role in the performance and role expectations of designers, producers, managers and disposers. In order to minimise risks and maximise benefits, the most important considerations in each phase are related to safety aspects, the possibilities of misuse or dual use (applying technology for undesirable ends besides the desirable intended ends), and terms of sustainability.

Let’s get more concrete by switching from elements and instances to people that are involved in technological development:


Knowledge is provided by those who produce it – generally labelled, here, as scientists – and those who learned how to apply it as know-how – here called engineers. These are the developers of technology. The resources and materials are provided by the natural environment, strictly speaking including the human workforce as part of nature. The stakeholder here, then, is the ecosystem, since this is the entity for which something is “at stake”. It is important to note that the natural environment is affected by technological progress in two roles: as supplier and as affected “third party” (the “recipient” of technology effects).

Industry and our economic system are the spheres in which technology is designed, produced and distributed. Designers are often the engineers involved in the development process. Managers and CEOs often carry the responsibility for the production- and distribution-related aspects, while workers engaged in the production process are affected by technology both directly (by the produced items and their processes themselves) and indirectly (by the technology-dependent economic system and its jobs).

Furthermore, the biggest group of stakeholders is “society”: consumers, users, appliers, and everyone who gets in touch with technology, but also all those who are affected indirectly by technological progress without even using it intentionally (here called third party). When we talk about responsibility later, this distinction will become very important! Moreover, in recent years another group of stakeholders has become more and more important and involved: those who don’t do (produce, apply) technology but talk about it. Evaluators and technology assessors analyse and study technological progress in order to understand its mechanisms and effects. Among them are social scientists and – for the evaluative part – ethicists. Their insights feed the decision-making processes of regulators and policy-makers, mostly politicians and legislators.

All stakeholders, from scientists to designers, workers, consumers, ethicists, regulators, and the environment, have one collective interest: keeping the risks low and increasing the benefits. What exactly is understood by risk or benefit is the subject of endless debate and conflict. That’s why we need technology ethics.

Let’s have a closer look at the connection between society and technology. Basically, there have been two major viewpoints in the history of the sociology of technology: technology shapes society, and society shapes technology. The former is termed technological determinism and holds that technological progress is inevitable, unstoppable and somehow happening to us. In contrast, the latter is called social constructivism, holding that technologies are embedded in a network of social demands, technical possibilities, cultural identifications and various forms of social practice. Both positions find examples to substantiate their viewpoint. Determinists refer to historical examples like the hand mill that brought about the feudal system of the medieval ages with its lords and peasants, and the steam mill that transformed society into an industrial one with bosses and workers. Constructivists show pathways of technological development that prove the dependence of technology on the knowledge of the time, on social acceptance and on the economic mechanisms of demand and market potentials. The two viewpoints may be illustrated in this way: determinism takes technological progress as a given, constant, continuous process and social development as stepwise, following significant technological inventions. For example, after the invention of book printing society was a different one; the invention of the steam engine changed it again, the automobile did, and most recently the internet. In contrast, constructivism takes social development as a more or less constant and smooth process, while technologies come and go depending on the stage and level of society.


Determinism was the predominant view until the late 1960s. Influential philosophers, sociologists, psychologists and cyberneticists of that time elaborated and popularised constructivism, which gained much more influence. Today, it is hard to find any serious determinist. Constructivism is by far the most accepted conceptual framework for technological development and its study.

This has significant consequences! The change from deterministic to constructivist technology models initiated a postmodern (後現代主義) and post-positivistic (後實證主義) understanding of technology. Most importantly, this implies that technological progress is regarded as assessable, controllable, debatable and designable. It doesn’t simply happen to us; we make it happen and are in control of it. Moreover, technology is not value-neutral: its effects are matters of responsibility, justice and fairness. Only on this basis (controllability, debatability) can technology become a political endeavour. In particular, it becomes a subject of ethical discourse in the form of risk assessment, technology foresight and technology assessment (TA). Additionally, it is believed that technological progress is in any case more sustainable when the public participates in its discourse, and not only various “experts”. Under the constructivist paradigm we see the “larger picture” of technology:


This overview marks a development in several respects: First, it describes levels of complexity concerning the activities of different enactors (risk researchers, STS researchers, TA researchers) and their concepts of “sustainable development”. Second, it also describes advancements in the development of S&T-accompanying studies over the past two to three decades. The “easiest” (and oldest) form of risk analysis is the empirical risk assessment that identifies hazards (e.g. toxicity of substances), studies the exposure (how much? for whom? where? when?) and characterises the risk on the basis of these findings. This risk is communicated and managed as well as possible. This approach responds to the risk perception and awareness of the public or other stakeholders of a technological development. It turned out that this is not sufficient. In order to gain public acceptance – a basis for the “success” of a new technology – clear definitions of standards concerning environmental, health and safety issues (EHS) are necessary. Many companies defined these standards for their internal safety and quality measures, both for worker protection and for appearing trustworthy to the public. In science and research, an increasing awareness of the importance of reflecting on the “ethical, legal and social implications” (ELSI) of scientific activity could be observed. Especially nationally or EU-funded research programs implemented work packages on “ELSI”. Some even talked about this trend as the “elsification of science”: There was no more value-free science! Whereas these approaches still somehow separated the institutions of technology and the public – “here the technology enactors that work on progress, there the public that has to accept it” – the latest approaches aim at the societal embedding of S&T progress. Technology Foresight as a sociological method wants to guide the development by proper methodologies like scenario analysis, modelling, assessments, etc.
Technology Assessment (TA, 技術評估) goes one step further and aims at enriching developments in interdisciplinary and transdisciplinary discourse on S&T and institutionalising the ELSI reflection as a governance tool. With that, we have a platform or arena for performing technology ethics, or more generally: ethical assessment of technological progress.
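The stepwise logic of empirical risk assessment (hazard identification, exposure analysis, risk characterisation) can be sketched in a few lines of code. This is only an illustrative toy model: the assumption that risk can be characterised as the product of hazard severity and exposure is a common textbook shorthand, not part of this lecture, and the threshold values are invented for the example; real assessments use far more elaborate dose-response models.

```python
# Toy sketch of the empirical risk-assessment steps: a hazard severity
# (from hazard identification) and an exposure level (from exposure
# analysis), both normalised to [0, 1], are combined into a qualitative
# risk category. Thresholds are illustrative assumptions only.

def characterise_risk(hazard_severity: float, exposure: float) -> str:
    """Combine hazard and exposure into a qualitative risk category."""
    score = hazard_severity * exposure  # simplistic risk characterisation
    if score < 0.1:
        return "negligible"
    elif score < 0.5:
        return "tolerable"
    else:
        return "unacceptable"

# Example: a moderately toxic substance (severity 0.6) to which workers
# are only rarely exposed (exposure 0.1):
print(characterise_risk(0.6, 0.1))  # -> negligible (score 0.06)
```

The point of even such a crude sketch is that the characterised risk is only the input to the subsequent steps the text describes – risk communication and risk management – where the discourse with the public begins.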

However, it is not clear if and how technology has an ethical content. Many philosophers, but also technology enactors of various kinds, have expressed the view that technology itself must be neutral. Karl Popper said: “Technology is neither good nor evil, but can be used for both good and evil.” Another ethicist put it this way: “Technology tells us what we can do, not what we should do.”


There are various forms of the neutrality thesis concerning technology. Remember the three dimensions of technology that we identified in our definition: Artefacts (actual things), techniques (actions), and knowledge (abilities and skills). The strong neutrality thesis states that all these forms of technology are neutral. A more moderate version admits that artefacts can’t be neutral since their purpose – and with this an ethical content – is inscribed in them by design, but that technology-related procedures (e.g. production of technological items) and the knowledge about them is per se neutral. A weak formulation only holds the knowledge realm neutral. Is any of these theses tenable?

Let’s consider four examples:

  1. With a washing machine, we can do the laundry, but we can also kill a cat.
  2. With a guillotine, we can behead people, but we could also chop cabbage with it.
  3. With a knife, we can cut bread, or we can kill our mother-in-law.
  4. With the internet, we can communicate globally, but we can also distribute racist propaganda.


The ambivalence in the usage of these technological artefacts leads the supporters of the strong neutrality thesis to conclude that, per se, they are neutral and that only the users add the ethical dimension by their intentions to use them for good or bad purposes. However, the relation between an artefact and its application is not that simple. This is clear for the washing machine and the guillotine. Both were invented and designed with a clear purpose in mind: the washing machine is for washing clothes, the guillotine for chopping off heads. We can say the intended actions are inscribed into them. Clearly, we can evaluate these purposes: facilitating an easier way to do the laundry is a good purpose, executing people is a bad (unethical) purpose (at least in societies that have abandoned and condemned the death penalty). In other words: some technical artefacts do not merely have (neutral) instrumental means character, but serve ethically evaluable ends. The cases of the knife and the internet are more complex since clear ends are difficult to identify and confine. Multi-purpose usage is expectable and in some cases even desired. However, it is too easy to locate the “ethical duty” solely with the users and appliers. Surely, it is their particular act that is ethical or not, but in many cases it is the employed artefacts that enable certain actions and possibilities. The fixation of values, however, does not – as is the case for the washing machine and the guillotine – happen at the stage of invention or design, but in the legal and societal context. The more complex a machine, the greater the ambivalence of good and bad application potentials.
Whereas in the first case (washing machine, guillotine) the attributes good and bad are related to their intended means-ends-relations and intended purposes and applications, in the second case (knives, internet) they either refer to the success rate with which ethically unacceptable unintended means-ends-relations are suppressed and disabled or to the expectation on the (side) effects of the artefact on the life quality of current and future generations.

When things (artefacts as the embodiment or material manifestation of technology) are not ethically neutral, actions (techniques and procedures as technologically enabled phenomena) also can’t reasonably be classified as neutral (as the moderate neutrality thesis does). The same argumentation on ethically evaluable means-ends-relations applies here. To give another supporting viewpoint that critics of the thesis bring up, we may understand technology and its manifestations as agents themselves. This follows a strategy that was prominently applied in the Actor-Network Theory (ANT) promoted by Bruno Latour and others. In order to explain actual trends in society concerning decision-making and following or realising desires and needs, this model understands all items that have an impact on our particular choices and actions as agents connected in a dense network of options and pathways. Technology acts as an entity that pulls or pushes decision-making by enabling actions or simply by being available and opening action potentials. Then, also technological actions such as inventing items, constructing and producing items, or buying and applying items (not only the items themselves) arise from the social context they are embedded in. As such, they can be evaluated as explained above. Current theories of knowledge also understand knowledge as socially and culturally highly contextualised, so that in the same manner even the weak neutrality thesis does not hold.

After clarifying that scientific and technological development has a normative-ethical dimension and that it is worth spending effort on debating it, we need to find an orientational framework for this discourse. In the past decades, this framework has been established under the headline sustainability.

4.3.2 Sustainability

As the word sustainability suggests, it describes the ability of something (an event, an incident, a development, a process, a phenomenon) to “sustain” itself, that is, to keep proceeding smoothly and continuously without exhausting itself and without drifting towards decay or catastrophe. One of the oldest records of “sustainability” – without calling it that – might be a statement by Mengzi (孟子):

不違農時,穀不可勝食也;數罟不入洿池,魚鼈不可勝食也;斧斤以時入山林,材木不可勝用也。穀與魚鼈不可勝食,材木不可勝用,是使民養生喪死無憾也。養生喪死無憾,王道之始也。[If the seasons of husbandry be not interfered with, the grain will be more than can be eaten. If close nets are not allowed to enter the pools and ponds, the fishes and turtles will be more than can be consumed. If the axes and bills enter the hills and forests only at the proper time, the wood will be more than can be used. When the grain and fish and turtles are more than can be eaten, and there is more wood than can be used, this enables the people to nourish their living and mourn for their dead, without any feeling against any. This condition is the first step of royal government.] (Mencius 1A3)

Mengzi’s three examples (agriculture, fishing, forestry) are crucial for society and its survival. In this advice for a leader, he recognises that the most important factor for long-term social stability is to make sure that the techniques applied to generate food and exploit natural resources enable a steady recovery of resources so that they don’t run out. Natural balance and harmony – in good accordance with Chinese cosmology (Yijing), Daoism and Confucius’ teachings – must not be disturbed since they are the basis for human life. In other words: agriculture, fishing and forestry can only sustain themselves when they are done in the right way.

In the Western world, the history of sustainability as an important principle is much shorter. After many centuries of a paradigm that suggested endless resources and unlimited exploitation of “God’s creation” (Earth as a gift of God to mankind), deeper systematic and scientific insights into nature and society facilitated a higher awareness of the vulnerability of the environment and the limits of exploitation. The first record of sustainability (under its German name, Nachhaltigkeit) comes from German forestry in the early 18th century. It explains how to maintain a forest in a sustainable way by finding the balance between cutting wood and letting trees grow.

In 1980, the World Conservation Strategy was produced by the International Union for the Conservation of Nature, the World Wide Fund for Nature, and the United Nations Environment Programme. This first international attempt to apply sustainability on a political scale was driven by non-governmental organisations. Soon, this pressure led to political action: in 1987, the World Commission on Environment and Development (also known as the Brundtland Commission, named after its chairwoman, Gro Harlem Brundtland) produced the report Our Common Future with strategies for global environmental balance and a call for action in the face of progressing environmental destruction and pollution. Another milestone was the United Nations Conference on Environment and Development (UNCED) – the Earth Summit – in Rio de Janeiro in 1992. Here, conventions on climate change and biodiversity, guidelines on forest principles, a declaration on Environment and Development, and an extensive international agenda for action for sustainable development in the twenty-first century (Agenda 21) were presented as major outputs. Sustainability was also a topic at the United Nations’ Millennium Summit in 2000 in New York, but with a slightly different focus than the others: the major outputs focused more on “human affairs” than on the environment, such as the Millennium Development Goals concerning the eradication of poverty and hunger, universal education, gender equality, child and maternal health, combating HIV and AIDS, and global partnership. Worth mentioning here is also the World Summit on Sustainable Development (WSSD) in Johannesburg in 2002. Its report gives more details on what is understood as sustainability.
The participants agreed upon a declaration that committed to “a collective responsibility to advance and strengthen the interdependent and mutually reinforcing pillars of sustainable development – economic development, social development, and environmental protection – at local, national and global levels”.

This last definition – sustainability as a matter of society, environment and economy – can help us understand sustainability as a regime between different interests of different stakeholders – instances or people for which something is at stake.


There is society with its members’ interest in a good quality of life (whatever that means), in education, in community (family and friend life), and in equal chances for everybody. Then there is the economic system as a special field of human activity with its interest in profit and growth, but also in risk prevention and smoothly running processes. And there is the environment with its interest (as far as we can state that) in a healthy ecosystem, biodiversity and a steady chance of recovery from impacts. All these spheres may have internal conflicts of interest, so that we can speak of “social sustainability” when all interests within society are met (and the same for the other spheres). Our larger idea of sustainability, however, starts where the interests of different spheres overlap. The quality of human life obviously depends strongly on the health of the ecosystem, so that both instances (society and environment) have an interest in a balanced ecosystem. This can be achieved by following principles of environmental justice, for example resource stewardship (which is more than just “resource management”) on a local and global scale. Society’s and the economy’s interests overlap in the call for business ethics, a healthy workforce (or “human resources”, to use this terrible term), the protection of workers’ rights, and fair trade. Economy and environment share the demand for energy efficiency (not wasting sources of energy) and a profitable use of resources.

We might ask why the economy has such a strong position in this definition. It looks like it is on an equal footing among three interest groups. From my perspective, this is only because industrial representatives have a powerful voice in the summits in which these definitions are made, and because it is widely believed that our current well-being and wealth are generated by our economic system. For politicians and other decision-makers, the threat of losing jobs and economic profitability is one of the worst possible scenarios, costing trust and votes. However, what is truly at stake in human activities like S&T and other global affairs is the balance between human well-being and environmental health. I (personally) believe that true sustainability can only be reached when the destructive power of our current economic system is broken rather than given a strong voice in the discourse. However, I’d like to encourage you to think about this aspect for yourself.

Let’s fill this scheme with a simple real-life example (before we do that for the rest of the course with “technology”). In Taiwan, chewing betelnuts (檳榔) is very widespread. In view of society, we may ask “Is betelnut consumption sustainable?”, in the economy we ask “Is betelnut business sustainable?”, and for the environment it is important to ask “Is betelnut cultivation sustainable?”. First, let us look at arguments that relate solely to the social realm. There is, obviously, a desire to chew betelnuts, because there are certain pleasures or lifestyles associated with it, comparable to other addictions like smoking or drinking alcohol. People who choose to chew betelnuts may claim it as an element of their quality of life. However, the adverse health effects of betelnut consumption are well known, for example bone decay (teeth and jaw) and throat cancer. Members of society who choose not to chew betelnut for exactly that reason may complain that those who do put stress on the national health care system, because their treatment costs a lot of money (paid by everybody who has health insurance). The lack of responsibility that betelnut consumers show is then regarded as unethical. When the stability and long-term viability of the health care system is threatened by betelnut consumption, this is clearly unsustainable, especially when the only benefit is a short-term and unhealthy “low-level” pleasure.

When we look at the economic side, the questions asked are very different. Someone who runs (or considers running) a betelnut shop has to ask: “Is the business profitable? Is there a market? How many betelnut shops are there around? Where is a good spot for a shop (for example, near the highway exit)?”. The business is sustainable – generating profit and remaining successful for many years or generations – when the sales numbers are good and the seller’s family can make a good living from it. The most important question is “How can we sell more?”. Ethical concerns that are of interest for the entire betelnut industry are whether the producers and sellers are fair and cooperative. An example of practices that impact the entire betelnut business is the phenomenon of “betelnut beauties”: in order to attract more customers, shop owners hire pretty girls and – dressed in bikinis or other sexy outfits – place them prominently. In order to compete, many shops follow that example. Some find it immoral to exploit female clerks as sexual objects. The argument is: if something immoral (exposing almost naked girls) has to be done to make the business run well, the business itself is unsustainable.

The environmental impact of betelnut culture is very obvious when visiting the mountains. The native forests on the mountain slopes are removed in favour of betelnut plantations. These thin trees with only grass in between them are not capable of replacing a healthy forest ecosystem with its flora and fauna. The original biotope is gone. For the long-term health of the local ecosystem that is a disaster and clearly unsustainable.

There are additional factors in the overlapping areas of interest that make betelnut culture appear even more unsustainable:

  • Society-Environment: The interest of both society and the environment in a healthy ecosystem is dramatically violated when mud slides and other destructive forces occur (for example, after typhoons) because the shallow roots of the betelnut trees give way to erosion which the deep roots of the native vegetation could prevent. Here, the environmental destruction impacts the people who lose their houses or even their lives.
  • Society-Economy: The simple work of cutting, wrapping and selling betelnuts is paid very poorly. In the long run, this threatens both the business practices and the people’s willingness to work in this field.
  • Economy-Environment: When betelnut trees lead to a damaged ecosystem, this has a clear effect on other business fields (for example, honey production, due to the disappearance of bees) or even on agriculture as a whole.

From this overview, we learn that sustainability is an empty concept when it is not filled with ethical arguments. Sustainability itself has no concrete ethical claims to make, besides providing a referential framework for justice and fairness as the major concepts. Sustainable is what appears just and/or fair to all involved stakeholders (including the environment). It connects normative parameters like quality of life, environmental justice or risk with the actual consequences of human activity (like business or technological progress). We still have to “do ethics” in order to determine what good quality of life means, what justice is, or how to evaluate certain risk situations.

Let us now close the circle to our topic, technology. For this, we consult Mengzi’s statement one more time. His three examples can serve as an illustration of three different notions of what is technology:

  • Husbandry (or agriculture) – This represents a system of knowledge as a guideline for action; in other words: technology as know-how and know-that,
  • Fishing nets – Here, technology means technical artefacts,
  • Axes and bills – As particular ways to cut wood in the forest, these stand for technology understood as techniques or procedural methods.

Technology, therefore, can be sustainable or not in view of its application. Know-how (if not ignored) has the potential to manage activities in a way that they become more sustainable. Technical artefacts can be sustainable or not through their particular design and usage. Techniques and methods can be sustainable or not through the impact they have on the environment in which they are applied (landscapes, societies, social (sub-)systems, etc.).

Mengzi also gives a reason for the importance of sustainability: “This enables the people to nourish their living”. For him as a Confucian, that means above all that they have time to cultivate and follow their rituals (li). Only when their existential needs are fulfilled through knowingly sustainable production methods will they have the temporal, mental and spiritual capacity to actively improve their quality of life. This could be taken as an example of an anthropocentric view on sustainability. Grain, fish, turtles and the forest are not of interest for their own sake, but in view of their importance for human well-being. For modern approaches to sustainability aspects of technology, we have to keep in mind that technology is produced and developed by humans for human purposes. This, however, doesn’t make its assessment necessarily anthropocentric! Technological advancements clearly have an impact on all the abovementioned spheres (society, environment, economy), either as direct or indirect effects, which can be either intended or totally unexpected. Whether these effects are evaluated as positive or negative depends on how we apply ethical reasoning (which we will do in later classes of this course).

Finally, Mengzi also explains where (in which realm) sustainability is debated: “The first step of royal government“. Sustainability is a political task. Stakeholders in the discourse can be manifold from all kinds of institutions and organisations, but in the end it must be politically manifested in the form of decisions, action plans, agendas, regulations or laws. The underlying ideal is that politics’ priority is the governance of different interests and viewpoints in order to promote and support the well-being of the entire society and its environmental foundations. Politics is also the arena in which ethics leaves the academic surrounding of pure theoretical reflection and enters the stage of real-life relevance and pragmatic application.

Now we can turn to filling the sustainability concept for science and technology with ethical considerations. The most established strategy is to define a set of values that covers what is valued by technology enactors and stakeholders, including the wider public and the environment (“third parties”). Ethical principles (like freedom, autonomy, privacy, etc.) are then defined by how they relate to those values. The most famous set of values is the “Oktogon” of the VDI (the German Engineers’ Association). It originally consisted of eight items (hence the name); two have since been summarised in one, so that there are now seven boxes.


First, technology is evaluated with regard to its functionality, that is, how well it fulfils its proclaimed means-ends relations. Besides, aspects of safety play an important role. From the economic side, the significant values are profitability (or efficiency, depending on how one translates the German term Wirtschaftlichkeit) and economic wealth. The members of society are interested in personal health and environmental quality. Moreover, the quality of life is affected by social quality (or balance) and options for personality development. These items are connected in two ways. The first (indicated by a one-headed arrow) is an instrumental relation: one value supports (or, more strongly, is necessary for) the other. For example, the functionality of an item determines its safety and also its efficiency. More safety (usually) means better health. Higher environmental quality also has a positive effect on health and social quality. And so on. The other type of connection is a competitive one (indicated by two colliding arrows): one value can’t be supported without diminishing the other. Profitability often conflicts with safety aspects and health effects. An increase in economic wealth usually goes along with a decrease in environmental quality. With this scheme, it is then possible to classify arguments as supporting or neglecting one or more of those values, and to identify their support of, or conflict with, other arguments in favour of or against other values.
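As an informal illustration, the value scheme just described can be thought of as a small directed graph. The following sketch is not taken from the VDI document; the value names and the handful of relations encoded here are merely examples drawn from the paragraph above, to show how an argument could in principle be classified against such a scheme.

```python
# Toy model of the Oktogon-style value scheme described above.
# Value names and relations are illustrative, not an official VDI list.

SUPPORTS = {            # one-headed arrows: value A supports value B
    ("functionality", "safety"),
    ("functionality", "profitability"),
    ("safety", "health"),
    ("environmental quality", "health"),
    ("environmental quality", "social quality"),
}

CONFLICTS = {           # colliding arrows: promoting one diminishes the other
    frozenset({"profitability", "safety"}),
    frozenset({"profitability", "health"}),
    frozenset({"economic wealth", "environmental quality"}),
}

def relation(a: str, b: str) -> str:
    """Classify the relation between two values in this toy model."""
    if (a, b) in SUPPORTS:
        return "supports"
    if frozenset({a, b}) in CONFLICTS:
        return "conflicts"
    return "independent"

print(relation("functionality", "safety"))   # supports
print(relation("profitability", "safety"))   # conflicts
```

Note that the supportive relation is directed (functionality supports safety, not vice versa), while the competitive relation is symmetric, which is why it is stored as an unordered pair.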

As a next step, we need to clarify who is in charge of which value. In other words: Who is responsible for which aspect of technological development and its intended and side effects?

4.3.3 Responsibility

This concept plays a crucial role in many fields of applied ethics and has received notable attention from scholars in the field and also in the popular literature (see, for example, Hans Jonas’ The Imperative of Responsibility). Indeed, there is a lot to say about it, and it is not as simple as it might seem at first. A first hint of its complexity is its obviously ambivalent character: we can associate responsibility with both praise and blame. On the one hand, the label “a responsible person” is usually intended as a compliment, expressing admiration for someone who fulfils his or her duties very well. On the other hand, the statement that someone is responsible (for this mess, for example) puts a burden or pressure on that someone. It can’t easily be decided whether responsibility is a blessing, a virtue, a duty, or an onerous burden.

On closer examination, we find that responsibility is never just one-dimensional, as in “someone is responsible”. There must, at least, be a second dimension: the something that someone is responsible for. Moreover, we can analyse what “is” means in this case. Where does responsibility come from? Usually it is attributed by someone to someone, or in some way expected by someone from someone, or delegated by someone to someone. These two someones must be related to each other in some way; otherwise it would be a case of “none of your business”. Last but not least, there is also a fourth dimension: someone is attributed responsibility by someone for something in relation to a certain body of rules or a level of knowledge. These rules and this knowledge must be somehow related to the object of responsibility; otherwise it is a false claim of responsibility. We will now examine all four dimensions for the field of technology and its social manifestation.
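The four-place structure of a responsibility attribution can be made explicit in a small record. The field names below are our own invention, chosen only to mirror the four dimensions just described (bearer, object, attributor, and the rules or knowledge invoked); they do not come from any standard vocabulary.

```python
from dataclasses import dataclass

# Hypothetical sketch of a four-dimensional responsibility attribution.
# Field names are illustrative, mirroring the four dimensions in the text.

@dataclass
class Responsibility:
    bearer: str      # who is responsible
    obj: str         # what the bearer is responsible for
    attributor: str  # who attributes, expects or delegates the responsibility
    standard: str    # the body of rules or level of knowledge invoked

r = Responsibility(
    bearer="engineer",
    obj="functionality of a toaster",
    attributor="consumer",
    standard="state-of-the-art engineering knowledge",
)
print(f"{r.bearer} is responsible for the {r.obj}")
```

Writing the attribution out in this way makes it easy to spot a defective claim: if the attributor is unrelated to the bearer, or the standard is unrelated to the object, the attribution fails in exactly the ways the text describes.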


In order to get clearer about what exactly technology enactors (everybody who has anything to do with technology and its effects in the world) are, or can be, held responsible for, we need to draw a few further distinctions. The first is a time dimension: we can be responsible for our actions and decisions in the past that have an effect on the present. In that case, we usually speak of accountability, or retrospective responsibility. We can also be responsible for the future effects of our current actions and decisions (or of those to come in the nearer future). In both cases, we are accountable or responsible NOW, which is important for practical and legal reasons. The words themselves express the idea: accountability for past actions and decisions is evaluated like an account of positive and negative positions (motivations, success, failure, conduct, etc.) in order to determine the overall contribution (causal or correlative) to a present state. Responsibility for future actions and decisions is formulated as the expectation that someone is able to respond to certain questions and inquiries related to the case, for example in terms of knowledge and expertise, or leadership and other social roles.


It is also important to distinguish moral, legal, organisational and political responsibility. Many responsibilities in the context of technology (especially consumer goods) are governed in legal terms, for example warranty regulations for malfunctioning devices. Where there is no law or regulation, we may still feel morally responsible, for example for avoiding the misuse of a technology such as the internet. Organisational responsibility refers to role expectations within organisational hierarchies in corporations and other organisations. For example, engineers and producers have certain obligations towards their bosses or the company that employs them. This can, of course, conflict with their personal legal or moral duties and responsibilities, for example when it comes to the design and safety of technical items. Finally, there is also the political responsibility of elected representatives and other decision-makers from the economy, sports, etc., whose decisions and actions may have political relevance.

Responsibility is usually attributed to people for particular values that are affected by a technological artefact or its application. Of course, an engineer can’t be held responsible for the washing machine per se (the existence of washing machines as a kind). Likewise, it will be difficult to hold anyone responsible for nanotechnology or agricultural development as such. This is where the ethical principles come into play. We may, for example, take the VDI Oktogon and examine which responsibilities are connected to the preservation or violation of particular values: engineers might reasonably and plausibly be held responsible or accountable for the functionality of an artefact, whereas CEOs and company directors may be attributed responsibility for business decisions affecting profitability. Since most values are somehow connected (supportively or competitively), responsibilities are often unclear or shared among several stakeholders. However, we may claim that it is these values that the stakeholders are responsible for, rather than the artefacts, techniques or knowledge as such. As mentioned above, the protection of values is in many cases already regulated by law (for example, warranty regulations delegate the responsibility for the proper functionality of an item to companies and their engineers). A more conflict-laden debate concerns the moral responsibilities of technology enactors for societal values and environmental impact. Here, it would be naïve to believe that these could be sufficiently regulated by policy-making. It is in this respect that moral and organisational responsibilities conflict the most.

Not everyone is in a position to attribute responsibility for something to someone. When I tell you that you are responsible for treating your parents or your partner well and threaten to punish you for misconduct, you are justified in telling me that it is none of my business how you decide to treat your boyfriend or girlfriend. That is true, because on this private level there is no connection between us. In a very general sense, attributors of responsibilities are those who are in any way affected by the actions and decisions of someone and who share the same social network (locally or globally) of cause-effect pathways and role expectations. Consumers of technology may hold engineers and producers responsible, because the latter are entrusted with this job in a society that is based on functional differentiation and expertise-based co-operation. Whether engineers may hold consumers responsible for the proper application of their inventions is a much more complex question that would require the clarification of many preconditions.

A few more words on what it means to be held responsible in relation to rules and knowledge: people can only be attributed responsibility when there is reasonable ground for expecting that they are able to fulfil the duties this responsibility entails. For example, a person held responsible for something must be able to understand what it means to be responsible and what the involved rules and agreements are. Moreover, the person needs the degree of knowledge that is necessary to have a chance of succeeding in carrying that responsibility. In the context of technology, there are a few clear responsibilities attached to the stakeholders’ professional and social roles:

  • Engineers, scientists – They have knowledge about technical aspects, functionality and safety of technological items. They are able to generate design solutions for problems and have the state-of-the-art scientific knowledge.
  • Economists, industrialists – They may be expected to have knowledge of market situations and of economic impact of certain technology projects.
  • Regulators – They are experts on legal implications of technology and its implementation.
  • Consumers, appliers – They may be expected to have gained the necessary information before using technology (for example, by reading the instructions). Moreover, as members of a society they can reasonably be expected to have basic knowledge of daily life conduct and at least an average degree of rational capacity and moral integrity. Special cases here are children, seniors and mentally impaired citizens.

Then, there are a few more difficult and unclear cases of responsibility conditions, mostly due to their complexity and uncertainty:

  • Politicians, legislators – They are expected to consider long-term effects of technological progress on the society, the environment and the economy, in other words: to support sustainability. This is so complex that it is hardly possible to hold them in any way responsible when it fails.
  • Technology foresight, assessment – These experts are concerned with the ethical and social implications of technology, mostly in terms of risks and benefits, both intended and unintended. However, if their studies and predictions turn out to be false, they are usually not held responsible for any actual (unpredicted) effects of technology.

A special case is collective responsibility. Here, naturally, there is no individual person who could be blamed (or, as an American legal scholar put it, there is “no soul to damn and no body to kick”). What can a company know? What can a society know? What, then, can they be held responsible for? Here is a short suggestion of what collective holders of responsibility may be expected to be knowledgeable about:

  • Companies, corporations – Besides the particular knowledge and expertise of their employees and staff, the companies as such may be expected to have a sense of good business conduct and to be run accordingly. As social actors, companies are firmly embedded into social and normative guidelines. Knowing those makes them responsible for respecting and following them.
  • Government, parliament – Political organs in a democratic system are empowered on the foundations of social justification, devoted to justice. No political enactor can claim not to know this. Their responsibility is usually related to this basic understanding of governance and leadership.
  • Society – A society as such (not as in “all its members”) can be expected to have a certain degree of problem awareness, especially in modern knowledge societies in which information and knowledge are easily available to everybody. Moreover, what keeps a society or culture together is a fundamental willingness to co-operate. Social responsibility is based on the idea that, even if some particular members lack knowledge of something, the society as a whole can’t reasonably claim to be ignorant of technology’s effects.

4.3.4 Practice Questions

  • Analyse the responsibility situation in terms of the 4 dimensions for the following cases:
    • A friendship relation
    • The malfunctioning of a toaster
    • Distribution of pirated software and music on the internet
    • The traffic death rate
    • Global warming
  • When you drive a car or scooter, who is involved in making this possible?
    • Try to list as many enactors as possible. Where does it end?
  • Do we use smartphones because they were invented, or were smartphones invented because we want to use them?
    • Try to illustrate the complexity of technological innovation pathways.
    • A part of the question is: Do we build smartphones because we (technically) can (first: ability, then: application)? Or do we make it possible to build them because we want/need them (first: vision, then: research → ability)?
  • Is the automobile (car) a sustainable technology?
    • Think of “car” as a technoscientific system that includes know-how (how to drive, traffic rules), the particular design of the car (aspects of safety, resource consumption, etc.), and the concept of the car (individualised traffic, available infrastructure like roads or gas stations).
    • Would a change (e.g. introduction of electric or self-driving cars) challenge the sustainability or support it?
  • In August 2014, a gas pipeline exploded in Kaohsiung City (高雄市). Who is responsible?