Intermission – Writing ethical statements
An ethical statement differs from a mere expression of an opinion or preference. It is made with the intention to convince someone of an argument or of the plausibility of a certain viewpoint. For this purpose, it needs to present relevant background information and normative premises (statements about values, virtues, “goods” and “evils”) with clarity and logical connection. As an orientation, here is a 5-point list for communicating ethical statements (whether presented orally or, as is more common, in written form):
- What is the situation? – The statement should start with a clear reference to what you are talking about. If the topic is too big (for example, as in one of our practice questions, “traffic” or “the automobile”), it might be necessary to narrow it down to the particular aspect that you intend to comment on. Give as much information as necessary, but no more: make sure the points you bring up are truly relevant to your later argument, because too much background might confuse the reader or listener.
- What is the problem? What is the (actual or potential) conflict? – Not all conflicts are of an ethical nature; some are rather legal, political or personal. Explain in which way the conflict you address has an ethical dimension (one of values and worldviews between different parties) that needs clarification.
- Pro and contra arguments: What are the possible positions? – This point is the core part of your statement. Here, you need to show that you have an overview of the possible positions. This proves to the opponent that you don’t base your viewpoint on one-sided arguments but take all arguments into account, especially those of the opponent’s side. Rhetorically, it is even advisable to present the opponent’s arguments first, those that you don’t support. Then, you contrast them with the alternative approaches and views that you yourself support. Try not to judge any of these arguments yet. The goal in this part of the statement is simply to give a descriptive overview of what different people might argue. The evaluation comes in the next part:
- Comparison: What are the underlying premises and assumptions of the arguments? Which make more sense? – Here, you may write sentences like “If you see it from the position of an anthropocentric deontologist, you would probably agree that… However, this position is flawed, because… Instead, the utilitarian biocentrist argues that… Based on the assumption that… this seems to be much more convincing.” In this section you introduce the normative justifications that you arrive at by actually “doing ethics” (applying ethical theories or principles, or other philosophical insights). The more convincing your starting point of reasoning, the more acceptable and plausible your judgments and conclusions will be.
- What is your own position, your conclusion? – Finally, you write how you would solve the issue, or what you would recommend to those in charge of solving it. Sometimes, this point is not even necessary or important – many ethicists don’t state their own viewpoint when they believe it to be irrelevant. A mere comparison and evaluation of arguments might be sufficient. However, if you are asked for it or if you hold a clear viewpoint, you may state it briefly at the end.
All in all, parts 3 and 4 are the most important ones and should make up more than 50% of the text. This is independent of how long your statement is altogether. A longer ethical essay would have this structure, and so would a one-page statement in an ethics course exam. If time and space don’t allow for a complete argumentation (for example, because the topic is “too big”), try to focus on the core competing arguments, narrow down the topic to a specific question, or refer to a particular conflict case.
6. Technology and Ethics
The previous chapters have hopefully shown how the sphere of technology affects many other social spheres and how it combines the actions and decisions of many different actors and stakeholders. Conflicts and disputes are unavoidable. Thus, technology is a matter for applied ethics and ethical discourse. In this section, we will shed light on various approaches to evaluating technological development, with a special focus on aspects of responsibility and reflections on risk. Before we do that, however, we need to clarify in which way the often-claimed neutrality thesis for technology does not hold.
6.1 Neutral Technology?
Many philosophers, but also technology enactors of various kinds, have claimed that technology itself must be neutral. Karl Popper said, “Technology is neither good nor evil, but can be used for both good and evil.” Another ethicist put it this way: “Technology tells us what we can do, not what we should do.”
There are various forms of the neutrality thesis concerning technology. Remember the three dimensions of technology that we identified in our definition: artefacts (actual things), techniques (actions), and knowledge (abilities and skills). The strong neutrality thesis states that all these forms of technology are neutral. A more moderate version admits that artefacts can’t be neutral, since their purpose – and with it an ethical content – is inscribed in them by design, but holds that technology-related procedures (e.g. the production of technological items) and the knowledge about them are per se neutral. A weak formulation holds only the knowledge realm to be neutral. Is any of these theses tenable?
Let’s consider four examples:
- With a washing machine, we can do the laundry, but we can also kill a cat.
- With a guillotine, we can behead people, but we could also chop cabbage with it.
- With a knife, we can cut bread, or we can kill our mother-in-law.
- With the internet, we can communicate globally, but we can also distribute racist propaganda.
The ambivalence in the usage of these technological artefacts leads the supporters of the strong neutrality thesis to conclude that they are per se neutral and that only the users add the ethical dimension by their intentions to use them for good or bad purposes. However, the relation between an artefact and its application is not that simple. This is clear for the washing machine and the guillotine. Both are invented and designed with a clear purpose in mind: the washing machine is for washing clothes, the guillotine for chopping off heads. We can say the intended actions are inscribed into them. Clearly, we can evaluate these purposes: facilitating an easier way to do the laundry is a good purpose, executing people is a bad (unethical) purpose (at least in societies that have abandoned and condemned the death penalty). In other words: some technical artefacts do not have a merely (neutral) instrumental character but serve ethically evaluable ends. The cases of the knife and the internet are more complex, since clear ends are difficult to identify and confine. Multi-purpose usage is to be expected and in some cases even desired. However, it is too easy to locate the “ethical duty” solely with the users and appliers. Surely, it is their particular act that is ethical or not, but in many cases it is the employed artefact that enables certain actions and possibilities. The fixation of values, however, does not happen at the stage of invention or design – as it does for the washing machine and the guillotine – but in the legal and societal context. The more complex a machine, the greater the ambivalence of its good and bad application potentials.
Whereas in the first case (washing machine, guillotine) the attributes good and bad relate to the intended means-ends-relations, purposes and applications, in the second case (knife, internet) they refer either to the success rate with which ethically unacceptable unintended means-ends-relations are suppressed and disabled, or to the expectations about the (side) effects of the artefact on the life quality of current and future generations.
If things – artefacts as the embodiment or material manifestation of technology – are not ethically neutral, then actions – techniques and procedures as technologically enabled phenomena – also can’t reasonably be classified as neutral (as the moderate neutrality thesis does). The same argumentation on ethically evaluable means-ends-relations applies here. As another supporting viewpoint that critics of the thesis bring up, we may understand technology and its manifestations as agents themselves. This follows a strategy that was prominently applied in the Actor-Network Theory (ANT) promoted by Bruno Latour and others. In order to explain actual trends in society concerning decision-making and the following or realising of desires and needs, this model understands all items that have an impact on our particular choices and actions as agents connected in a dense network of options and pathways. Technology acts as an entity that pulls or pushes decision-making by enabling actions or simply by being available and opening up action potentials. Then technological actions, too – inventing items, constructing and producing items, or buying and applying items (not only the items themselves) – arise from the social context they are embedded in. As such, they can be evaluated as explained above. Current theories of knowledge also understand knowledge as socially and culturally highly contextualised, so that in the same manner even the weak neutrality thesis does not hold.
6.2 Technology Ethics Approaches
If – as we have seen – technology is never neutral, the question remains what good technology is and how conflicting viewpoints and disagreements on the priorities of values concerning technology and its effects in the world can be settled. Before we take a closer look at four different approaches, it is useful to distinguish four different fields of ethics and technology (as James Moor did) in order to clarify what kind of ethical impact of technology we are talking about.
First, we need to distinguish two kinds of cases. In the first, ethical judgments and evaluations arise in the context of technology and its application but are not directly linked to the technology itself, only to other phenomena or normative aspects of human life; in the second, the technological items are directly linked to the ethical judgments and evaluations. Moor calls the former “ethics of technology” and the latter “ethics in technology”. Among the former are the so-called normative agents (note the link to ANT in calling technology an actor), which are evaluated as good or bad according to how well they perform their tasks. A good toaster is a toaster that toasts toast well. A good calculator is one that produces the correct output (measured against some kind of standard or expectation). Another group are the ethical impact agents, which are designed and applied for normative and ethically relevant purposes. An example are jockey robots that are employed for camel races in Arabia and, by that, free little boys from this often deadly task. In both cases the normative or ethical questions (What is a well-toasted toast? What is a correct calculation? Is the exploitation of boys for camel races acceptable?) can be settled independently of the technological agent (for example, the artefact) itself. It is rather a matter of ethics in general. This is different for ethics in technology: here, Moor distinguishes implicit and explicit ethical agents. The former are items that are designed in a way that supports moral acts and avoids immoral acts. Examples are autopilots (which have to ensure the passengers’ safety) or ATMs (which have to be programmed in a way that no fraud can be committed with users’ account data). Their functions are directly linked to ethically relevant aspects of (human) daily life. The latter are the special case of artefacts that are equipped with the ability to make ethically relevant decisions themselves, for example artificial intelligences.
Since to date there are no known artefacts that fulfil the conditions for a “strong” AI, this last category is more or less irrelevant for us now. Also, we will try to stick to technologically relevant ethical issues, so most of the more general ethical cases of the first two categories are too broad for this course. We are mostly interested in the ethics implicitly involved in technology (implicit ethical agents). Note that Moor’s scheme is not a classification of artefacts but of ethical relevance. The same item (or particular artefact) can have ethical implications in more than one respect, or even in all four.
How, then, are we going to perform ethics? Remember the scheme I presented in the introductory lecture:
The consideration is rather pragmatic: top-down moral philosophy turned out to be too abstract and theoretical, and often not fruitful for practical approaches to technology ethics in real-life discourse arenas. Bottom-up casuistry is inefficient, debating the same or similar cases over and over again, and sometimes appears arbitrary and inconsistent. The middle way has proved useful and goal-oriented for a deliberative discourse among stakeholders in the discourse situations we discussed in section 5.2.3. With the input of ethicists, and based on the experiences of involved actors who know the “hot topics”, ethical principles are elaborated that serve as an orientation for conflict solving and argument evaluation. In the following, the four approaches that are the most established according to the common literature are discussed. All four can be understood as strategies to define and clarify the principles needed for a viable application in the technology assessment procedure.
6.2.1 The Value Approach
The first (and most established) strategy is to define a set of values that covers what is valued by technology enactors and stakeholders, including the wider public and the environment (“third parties”). Ethical principles (like freedom, autonomy, privacy, etc.) are then defined by how they relate to those values. The most famous set of values is the “Oktogon” by the VDI (German Engineers’ Association). It originally consisted of eight items (hence the name), but two have since been summarised into one, so that there are seven boxes now.
First, technology is evaluated with regard to its functionality, that is, how well it meets its proclaimed means-ends-relations. Besides this, aspects of safety play an important role. From the economic side, significant values are profitability (or efficiency, depending on how one translates the German term Wirtschaftlichkeit) and economic wealth. The members of society are interested in personal health and environmental quality. Moreover, the quality of life is affected by social quality (or balance) and options for personality development. These items are connected in two ways. The first (indicated by a one-headed arrow) is an instrumental relation: one supports (or, more strongly, is necessary for) the other. For example, the functionality of an item determines its safety and also its efficiency. More safety (usually) means better health. Higher environmental quality also has a positive effect on health and social quality. And so on. The other type of connection is a competitive one (indicated by two colliding arrows), meaning that one value can’t be supported without diminishing the other. Profitability often conflicts with safety aspects and health effects. An increase in economic wealth usually goes along with a decrease in environmental quality. With this scheme, it is then possible to classify arguments by the values they support or neglect, and to identify their support of or conflict with other arguments in favour of or against other values.
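The structure just described (values as nodes, connected by instrumental “supporting” arrows and competitive “colliding” arrows) can be sketched as a small graph. The relations below follow the examples given in the text; the `related` helper is my own illustrative addition, not part of the VDI methodology.

```python
# Illustrative sketch of the VDI value scheme as a directed graph.
# The relations encode only the examples named in the text above;
# the helper function is a hypothetical addition for illustration.

supports = {                      # one-headed arrows: A supports/enables B
    "functionality": ["safety", "profitability"],
    "safety": ["health"],
    "environmental quality": ["health", "social quality"],
}

conflicts = {                     # colliding arrows: A competes with B
    "profitability": ["safety", "health"],
    "economic wealth": ["environmental quality"],
}

def related(value):
    """Return the values a given value supports and those it conflicts with."""
    return {
        "supports": supports.get(value, []),
        "conflicts": conflicts.get(value, []),
    }

# An argument appealing to profitability can then be checked against
# the values it competes with:
print(related("profitability"))
# {'supports': [], 'conflicts': ['safety', 'health']}
```

Classifying an argument then amounts to tagging the values it appeals to and reading off which other values it instrumentally supports or competes with.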
There are a few difficulties with this approach. It doesn’t provide any orientation on how to prioritise or hierarchise these values. Exactly that, however, is necessary in conflict situations, so the value set doesn’t bring the ethical discourse much further beyond a few clarifications. Another criticism concerns the argumentative reasoning behind those values. One option is a coherent-reconstructive reasoning that derives values from tradition and obvious social consent: “We have always done it like this!”, or “For 200 years, society X has been based on these values!”. This approach is not very “ethical” but rather descriptive and, therefore, lacks ethical justification. Another option is intuitionism, a holistic conceptualisation of a coherent value system. Again, it is highly questionable whether this theoretical reflection has any justifiable foundations. A third option draws conclusions from reflexive reasoning from discourses or reflexive action; we will discuss this in the next subsection. It is far from clear why the VDI value list should be in any way complete or consistent. Many alternative sets have been proposed, for example one by Hentig: 1. Life, 2. Freedom, 3. Peace, 4. Peace of mind, 5. Justice, 6. Solidarity, 7. Truth, 8. Education, 9. Love, 10. Health, 11. Honour, 12. Beauty. Of course, there are relations between the VDI values and these. Some (love, honour, beauty) are not as such represented in the VDI Oktogon. This shows that the value approach is helpful in some respects but still lacks ethical consistency and doesn’t settle any of the most urgent ethical debates.
6.2.2 Discourse Ethics
We have learned that TA is a highly interactive, interdisciplinary and communicative endeavour. Many stakeholders contribute their arguments and opinions. This makes TA a perfect example of an applied discourse. Conducting a discourse in the right way is not a trivial thing! There are linguistic difficulties, structural and formal factors, and psychological and social obstacles. The German philosopher and sociologist Jürgen Habermas (partly together with his colleague Karl-Otto Apel) conceptualised the ideal discourse. Its most important characteristics are these:
- All participants are using the same linguistic expressions in the same way.
- No relevant argument is suppressed or excluded by the participants.
- No force except that of the better argument is exerted.
- All the participants are motivated only by a concern for the better argument.
- Everyone would agree to the universal validity of the claim that is concluded.
- Everyone capable of speech and action is entitled to participate, and everyone is equally entitled to introduce new topics or express attitudes, needs or desires.
- No validity claim is exempt in principle from critical evaluation in argumentation.
Certainly, no perfectly ideal discourse can ever be achieved. This doesn’t mean it is not worth trying to get close to it. Habermas claimed that ethics is always the product of communication between people with different opinions. Therefore, the outcome of an ideal discourse is most likely what we can regard as morally right or good. Some doubt that this concept of discourse ethics counts as an ethical theory like consequentialism or deontology, and it is true that substantial parts of Habermas’ theoretical considerations are based on a Kantian deontology (for example, his rationality claims). However, for practical discourses in the arena of stakeholder discussions, as in TA/ELSI approaches, this reasoning strategy is easily applicable and can be exploited for a more efficient and fruitful conduct of debate in terms of clarifying the ethical principles that are at stake.
Let me give you an example that is often found in the debate on nanotechnology. A frequently expressed concern about nanotechnology (and likewise about biotechnology and genetics) is this: manipulating matter at this scale and using it to modify the constitution of humans and nature is like “playing God”. It is “not natural”. Arguments of this type are laden with difficulties. Terms like nature and God require careful reasoning and definition in order to fulfil the condition of being logical and universally valid. The claim that something is “natural” and something else is not and, therefore, not good is in most cases a naturalistic fallacy (自然主義謬誤): it lacks the proper normative premise that explains why “natural” means “good”, or why “not natural (not found in the natural environment)” necessarily means “not good”. Religious arguments bear the danger of a certain dogmatism (教條主義; 獨斷論): they rest on a foundation of belief in an entity like God that is not further questioned. Here we face the risk of a dead-end argument: claiming atheistically that “there is no God like the (mono-)theistic religions believe” kills the debate and is as dogmatic as the religious viewpoint. Certainly, in our modern enlightened world, secular (世俗的) viewpoints are taken more seriously and are prioritised over theological (神學的) arguments. The more reasonable, rational or logically valid argument will always win over the dogmatic (“God told us! No more discussion necessary!”), traditional (“We always did it like this!”) or superstitious (“If we do this, great misery will fall upon us!”) arguments. Let’s see what that means for an ideal discourse: we have seen that all viewpoints should be given the chance of being expressed without any restrictions. The same goes for counter-arguments.
Many ethics commissions that debate the ethical implications of nanotechnology (for example, nanomedicine) involve a representative of a church (as a social institution and promoter of morality), for example a priest or an academic theologian. He knows that if he justifies his viewpoints by appeal to God, other participants will most likely not accept them. But if he remains silent because of a lack of confidence, because of pressure, or because of an a priori denial of his arguments, it is not an ideal debate. He should be given the chance to state his point of view, and then others can respond with theirs. The opposing arguments must then be reasoned and compared on the basis of the goals that are to be achieved. It could be, for certain discourse constellations, that a religious argument is the strongest. The ideal situation is that the better argument always wins: the more logical one, the better informed one, the more consistent, efficient, goal-oriented one – and not the most opportunistic one, the most influential one, the one expressed by the most respected or powerful participant, or the one that is most popular. The closer the discourse is to the ideal, the more it is ensured that the conclusions of the discussion are the ethically favoured ones.
6.2.3 Negative Utilitarianism
We got to know utilitarianism as the maxim that the maximum benefit for the largest number of people/stakeholders is the ethically best situation. However, this creates difficulties in real-life discourses, since it often happens that stakeholders are not able to agree on what counts as a benefit. People seem to value very different things and, therefore, define benefits very differently. Therefore, it was proposed to apply a negative utilitarianism: the ethically best is what causes the minimum risk (or harm) for the smallest number of people/stakeholders. While there is a large variety of opinions on what counts as good, there is wide agreement on the evils, maybe even globally. Therefore, it seems argumentatively easier and clearer to define values and principles by the negative approach of defining what people usually don’t want.
There are several problems with this approach, too! First of all, it is logically inconsistent. Assuming that all life is constantly at risk (of dying), it would be better not to be born at all. In the extreme case, since a life of suffering would count for less than no life at all, the argument might even favour killing people in order to reduce suffering and risk exposure. Besides this absurdity, the approach also neglects the widespread phenomenon that people willingly take suffering or risk into account in order to achieve benefits in other respects that they value more highly or consider more worthwhile. In some conflict situations, suffering or risk may be compensated for by other authorities. Think of the construction of an airport and the necessary relocation of local residents: their inconveniences might be outweighed by governmental support in finding new homes, so that their willingness to co-operate is increased. That means the conflict is solved by compensation, not by reducing the risk (which remains the same) to a minimum. Another criticised point is that in practice this approach is purely anthropocentric and can’t be applied to environmental issues. Since those issues are an essential part of the debate on sustainable technologies, the negative utilitarian approach is widely neglected.
6.2.4 Human Rights
Some ethicists have tried to fill the technology debate with ethical principles derived from human rights. There are many approaches to defining and justifying human rights, most of them rooted in deontological theories and based on the principle of human dignity. The most famous list is the UN’s Universal Declaration of Human Rights from 1948. Here, time and space are not enough to discuss all those approaches. Instead, I’d like to discuss one approach inspired by Maslow’s pyramid of needs. Maslow distinguishes three levels of human needs, manifested in five steps of particular interests. The basic needs are the most fundamental physiological needs (enough food and water, sufficient warmth and the chance to rest) and safety needs (being free from harm and danger). Then there are psychological needs such as belongingness and love (having relationships, family, friends) and esteem needs (feeling productive and being merited for one’s accomplishments). Finally, people have self-fulfilment or self-actualisation needs (having hobbies, being creative, expressing and satisfying one’s inner states).
This pyramid can be “read” in various ways. For example, the suggested hierarchy may be understood as an order of development, both of human civilisation as a whole and of individual human beings in particular. Another reading that is of relevance for our topic is the relation between those needs and the granting of human rights. The more basic a need, the more we are inclined to grant the satisfaction of that need as a human right. It is important to distinguish negative rights (the right of freedom from something) from positive rights (the right of freedom to something). On my understanding, Maslow’s pyramid implies that from top to bottom the negative rights (freedoms from) increase in significance and importance. Everybody might agree that people should have the right of freedom from being blocked from access to food, warmth and sleep. But not everybody agrees that people have a right to be loved, a right to have a job, or a right to commit to a passionate hobby (or, strictly speaking, in terms of negative rights: the right of freedom from being blocked from access to these). The positive rights, in contrast, increase from bottom to top: people are granted the right of freedom to choose their hobby, their favourite music, their religion or their job. Usually, people are also free to choose their friends and partner (not their parents and siblings, though). In the case of the basic needs, however, we do not usually speak in terms of freedom of choice. It appears plausible, in any case, to understand the physiological and safety needs as “more urgent” than, for example, the need to have a hobby or a job. This hierarchy is also mirrored in international agreements on human rights protection and manifested in actual law-and-order systems.
When imprisoning criminals, their right of freedom to choose their activities, their destinations or their social surroundings is taken from them (so to speak), but even in a prison it must be ensured – according to common sense – that they have enough to eat, a place to sleep safely, and that they are not tortured or humiliated. On a less “political” but more “familiar” level, we might take the example of parents who bar their 10-year-old daughter from getting a tattoo with the argument that her safety (from the harmful health effects of carcinogenic ink) outweighs her freedom of self-actualisation (of which, as she believes, having a tattoo is a part). Here, it is also obvious that from bottom to top the number of options to choose from increases immensely. On the basic level, we simply have to eat, sleep and stay away from unhealthy environmental conditions. It is also clear what safety and security imply. The ways to serve the need for friendship and love are much more manifold, not to speak of the choices for esteem and self-fulfilment needs.
Third, there is an ethical reading of the pyramid – even though I wonder whether Maslow or others who exploit this illustration would think of it in this way. Ethics, as the attempt to find solutions for conflicts and problems that occur in the inter-sphere between individual people, societies and cultures, is concerned with strategies of argumentation that can convince parties of the rightness or wrongness of certain viewpoints, decisions and/or actions. People have different interests, desires and preferences. When these collide, a solution is needed as an orientation for what would be a proper way to proceed. Commonly, people agree that “my rights end where your rights start”, but that is often too simplistic and not helpful for many conflict cases. This pyramid may serve as an orientation for a hierarchy of rights: when two need-based rights collide, the one further down in the pyramid is to be prioritised over the one further up. When a politician’s interest in power (as a form of prestige) and votes leads him to make decisions that undermine the social stability of his country (like Trump in the USA), it is unethical. When I neglect my children’s need to spend quality time with their father because I am more interested in my job or my hobby, it is unethical. This reading is connected to the second reading on rights: limiting someone’s options for self-fulfilment is less ethically problematic than limiting someone’s options for seeking safety. On the socio-political level, when legislation prohibits smoking in public places (as in Germany), some people complain, but it is not a big problem. When legislation prohibits homosexual relationships (as in Russia), thus limiting the satisfaction of relationship needs for a significant group of the population, it is ethically highly questionable.
When a government does not put sufficient energy into social balance (as in Myanmar, failing to govern the conflict between Buddhists and Muslims), it loses its justification. When a government does not even try to feed its population (as in North Korea), that government is better put out of power (forcefully, if necessary), since this is clearly a violation of human rights.
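The prioritisation rule in this ethical reading (when two need-based rights collide, the one further down in the pyramid wins) can be stated as a simple comparison. The level numbering below follows Maslow’s five steps as described above; the function and the labels are my own illustrative sketch, not a formal ethical procedure.

```python
# Maslow's five levels, numbered from the base of the pyramid upwards.
# The mapping and the decision rule are an illustrative sketch of the
# reading given in the text, not a definitive ethical algorithm.

LEVEL = {
    "physiological": 1,
    "safety": 2,
    "belongingness and love": 3,
    "esteem": 4,
    "self-actualisation": 5,
}

def prioritise(need_a, need_b):
    """When two need-based rights collide, the need further down wins."""
    return need_a if LEVEL[need_a] <= LEVEL[need_b] else need_b

# The parents-vs-daughter tattoo example from the text:
print(prioritise("safety", "self-actualisation"))   # safety
```

Of course, real conflicts are rarely this clean; the sketch only makes the ordering claim of the pyramid explicit.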
Inspired by Maslow’s pyramid (which makes good sense to me), I thought about an additional or even complementary pyramid of necessities for life quality. The pyramid of needs doesn’t say anything about the sources for the satisfaction of those needs. What must be given for a certain life quality? How can that be prioritised or hierarchised in order to arrive at insights that can serve as orientations for actions and decisions (such as the “human rights” approach based on the hierarchy of needs)? Here is the result of my reflections:
The basic necessity needed for survival is environmental stability. Embedded in an ecosystem, human beings can’t survive without it. If the fine-tuned environmental balance is disrupted, the whole system will be affected, for example through changes in biodiversity, food chains, climate, the chemical constitution of the atmosphere, etc. Environmental health is the basis for our food sources, for access to fresh water, for breathable air and for the ecological niche of the human race. All anthropogenic activity (including system formation such as society, culture, economy, money, etc.) is dependent on it and, therefore, secondary to it. Second, human needs can only be satisfied when there is a certain level of social stability. In extreme cases (war, riots, anarchy, violence), its absence can affect the chances of survival. In a more moderate sense, political stability provides autonomy and grants rights to the citizens it governs, thus enabling integrity. Here, integrity means inviolability and the ability to act at all. However, it gradually (moving upwards in the pyramid) takes on the meaning of righteousness (ethical integrity) when the levels further down are taken care of. The third level, which corresponds to Maslow’s belongingness and love needs, is labelled ethical stability. By this, I mean an atmosphere of trust and co-operation among family members, neighbours, colleagues and peers (those in the direct vicinity of one’s life). Only in that kind of surrounding can people start building close ties and relying on each other, increasing each other’s life quality through mutual support and collaboration. Only such a society is able to establish a system that offers livelihood options. This might be the most critical and debatable part of my pyramid. It implies that, as soon as a society reaches a certain level of integral peace and co-operation, people will feel the desire to act as parts of this society, bringing in their skills and abilities.
They do that, as I believe, out of self-motivation and not because the social system forces them to. It might not be obvious to everyone why economic needs appear on this fourth level rather than on the first (providing food, housing, clothes, etc.). The economic system we have, which arose from a functionally differentiated society (to use Niklas Luhmann’s term), dictates a lifestyle of shared competences in various types of jobs. Only in this kind of system does the daily supply of food, housing, etc. depend on the financial income from one’s job (livelihood). This is man-made and not a universal law – it could be different. That is why the basic needs (or here: the basic necessities) have, in principle, nothing to do with the economic system we established. Having a job is only a necessity because we as a society chose to live like that. The fourth level of my pyramid rather refers to livelihood options as a multitude of ways to unleash one’s productive potential, because that is what we naturally fill our lives with when the lower three levels are secured. When survival is certain and personal integrity secured, we start being concerned about our identity. We define ourselves through our social ties with family, friends and peers, but also – and maybe predominantly – through our social roles as competent experts in a particular field of skills or knowledge. Ultimately, when there is sufficient capacity and time for it, we form habits of thought and action that aggregate into what we call culture. People use their creativity and intellect to engage with art, philosophy and spirituality. They choose hobbies (“spare time activities”) and fill leisure time with joyful and pleasurable endeavours. Some of these are part of identity-formation mechanisms; others are simply a luxury in the sense that they are not strictly necessary for our lives.
However, in any case, it is usually these aspects of life that give us the feeling that life is worth living.
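The five levels described above form a strict order, from the most fundamental necessity at the bottom to the most dispensable at the top. A minimal sketch of that hierarchy as a data structure (the level names here are my own labels from the text, not established terminology) could look like this:

```python
# The pyramid of necessities sketched above, ordered bottom-up:
# index 0 is the most fundamental level, index 4 the least fundamental.
PYRAMID_OF_NECESSITIES = [
    "environmental stability",  # survival: ecosystem, food, water, air
    "social stability",         # political order, rights, personal integrity
    "ethical stability",        # trust and co-operation in one's vicinity
    "livelihood options",       # ways to contribute one's skills and abilities
    "cultural necessities",     # art, philosophy, spirituality, leisure
]

def more_fundamental(a: str, b: str) -> bool:
    """Return True if necessity `a` sits lower in the pyramid than `b`."""
    return PYRAMID_OF_NECESSITIES.index(a) < PYRAMID_OF_NECESSITIES.index(b)
```

The point of the ordering is only relative priority: any comparison between two levels is decided by their position in the list, which is what the political and ethical applications below rely on.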
As with the needs pyramid, the necessity pyramid can also be understood as a description of development, analogous to the one given above. More interesting – and the main reason why I think this way of putting it yields further insights – are its political and ethical dimensions. In both fields (politics and ethics) we ask: “What shall we do?”. When taking this pyramid as a decision guideline, the answer is: “Start at the bottom, fix the problems, and work your way up!”. In reality, however, we observe trends that proceed in the opposite direction. Governments eagerly promote industrial aims for the sake of job creation and material wealth while resource and energy demands ruin the environment and the ecosystem. The climate changes at an accelerated pace under the influence of human activity, but important decision-makers and consumers seem not to care because of the conveniences they desire on the fifth level (self-fulfilment needs and cultural necessities). Religious and societal conflicts dominate the news (for example Islamist terrorism, racism, homophobia or unemployment) while the serious global problems arising from atmospheric warming, pollution and species extinction are marginalised and only peripherally brought to people’s awareness – rarely as an “urgent issue”, let alone one that is wholeheartedly worked on.
I suggest that crimes be punished on the basis of this pyramid. Environmental destruction and pollution (for example by corporations or shipping companies), as the worst possible crimes, would be punished with lifelong imprisonment. Terrorism, genocide and tyranny would be punished accordingly. Corruption, brainwashing through media or educational curricula, and all forms of fascism and discrimination might fall into that same category when they threaten social stability. The next level comprises crimes that undermine the ethical integrity of society: intrigue, fraud, betrayal, abuse, harassment, etc. Stealing money (no matter how much) or other commodities, however, is not a big deal, since it is motivated by greed and avarice – character traits from which mostly the criminal himself suffers, so that he is, in a sense, already punished. These people need help, not punishment. Crimes in the art/culture realm are then hardly possible. Copyright violations (for example by downloading music and movies illegally) are a bagatelle compared to crimes that target the more fundamental necessities of human life.
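The ranking of crimes proposed here amounts to mapping each crime onto the pyramid level it attacks, with severity decreasing as the level rises. As an illustrative sketch only (the crime-to-level assignments below follow my argument above and are not a legal proposal):

```python
# Hypothetical sketch: crimes ranked by which pyramid level they attack.
# Level 1 is the most fundamental, so a lower number means a graver crime.
CRIME_LEVEL = {
    "environmental destruction": 1,
    "pollution": 1,
    "terrorism": 2,
    "genocide": 2,
    "tyranny": 2,
    "fraud": 3,
    "harassment": 3,
    "theft": 4,                 # per the argument above: greed mostly harms the thief
    "copyright violation": 5,   # a bagatelle on the cultural level
}

def more_severe(a: str, b: str) -> bool:
    """A crime is more severe if it targets a more fundamental level."""
    return CRIME_LEVEL[a] < CRIME_LEVEL[b]
```

On this model, punishment would scale with the fundamentality of the level attacked, which is exactly why pollution outranks illegal downloads.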
This brings me to reflections on technology. Basically, I (alongside many scholars in the philosophy of technology) regard the creation and usage of technology as the result of needs and desires. People invent and apply artefacts in order to make their lives easier. The oldest known tools (if understood as technology, as I do) helped their users to ensure a sufficient supply of food, clothes, housing and warmth. Still today, many branches of technology serve purposes of survival, be it food production, medical technology, housing, protection from natural forces, etc. Other items serve social purposes, for example transportation systems or mass media. Relationship needs are addressed in various forms of communication technology, but also indirectly by making work processes less time-consuming, thus enabling more time with loved ones and for socialising. Technical artefacts enable many new forms of jobs and ways to be a productive member of a community, for example those of scientists and engineers. Moreover, technological solutions are strongly interwoven with cultural practices, arts, entertainment, and the like. However, at the same time, technology also has negative impacts on all levels of human needs and necessities: technology-caused environmental destruction and pollution, social imbalances due to the unjust distribution of access to technology-induced wealth, interpersonal and individual conflicts arising from the misuse of technology, limitations of livelihood options due to the replacement of human workforce by technological solutions, and personal numbness and blunting as a consequence of the mindless consumption and application of “cold” technology. In technology assessment, the negative and positive effects of technological progress, often referred to as “risks and benefits”, are analysed and evaluated according to certain parameters.
In the same fashion as I categorised the severity of crimes, I suggest evaluating technology on the basis of my pyramid of necessities. In the first instance, technology must be “environmentally friendly”; that is, its design, production, implementation and application must not interfere with environmental integrity and balance. If it does, no matter how useful it is in serving needs of the upper levels, refrain from it! In the second instance, it should be ensured that it serves social stability by promoting justice and fairness through its general availability and non-discriminatory effects. Then we can start asking in which way it affects people’s life habits (interaction within families, among friends, with colleagues) and people’s options to choose to do something meaningful with their lives. Then – and only then – may we take into account all those intended purposes and anticipated effects that the technology in focus has on the amenities of human daily life. There is a lot of technology (in the widest sense) currently firmly implemented in our daily life that would fail this assessment: individual auto-mobility (cars and motorcycles), cosmetics, agricultural techniques (especially meat production), energy production from fossil fuels, to name just a few examples.
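The assessment procedure described above is lexicographic: check the levels bottom-up and reject a technology at the first level it violates, regardless of benefits higher up. A minimal sketch of that decision rule, assuming hypothetical level names and a simple yes/no compatibility judgement per level:

```python
# Hypothetical sketch of the pyramid-based technology assessment:
# walk the levels bottom-up and reject at the first violated level.
from typing import Mapping

LEVELS = [
    "environmental stability",
    "social stability",
    "ethical stability",
    "livelihood options",
]

def assess_technology(compatible: Mapping[str, bool]) -> str:
    """`compatible` maps each level to whether the technology respects it.

    A missing entry counts as a violation. Only if no lower level is
    violated may the benefits on the top level (amenities) be weighed.
    """
    for level in LEVELS:
        if not compatible.get(level, False):
            return f"rejected: violates {level}"
    return "acceptable: weigh benefits on the level of amenities"

# For instance, energy production from fossil fuels would already fail
# at the first check, whatever its upper-level benefits:
assess_technology({"environmental stability": False})
```

The lexicographic structure captures the key claim of the text: usefulness on an upper level can never compensate for a violation on a lower one.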
6.2.5 Practice questions
- Can a technology discourse in Taiwan be “ideal”? What are the difficulties?
- What kind of cultural practices and habits stand against an ideal discourse in Taiwan?
- How can the problem be overcome?
- How are the VDI values connected to the Human Rights approach? Try to relate each value to the human rights as derived from Maslow’s pyramid!
- Hint: some values (for example functionality) can only be related indirectly (in connection with other values) to human needs and necessities.
- Try to “assess” the current traffic situation in Taiwan (number and type of vehicles, road conditions, driving style, road safety) according to the VDI values.