2. What scientists do
When reflecting on what scientists do all day in their job, we have certain ideas: We think of researchers in their laboratories, conducting experiments, performing measurements, analysing the data, drawing conclusions and visualising them. We also know that research usually starts with formulating a hypothesis that drives a scientist’s research agenda, that is, it determines the choice of experiments and the type of information that has to be gathered. We also know that the production of new insights is not the end point of scientific activity: it must also be communicated. Let’s have a more detailed look at this chain of research activities. This backbone – research design, hypothesis formulation, experimentation, making scientific statements, producing knowledge and communicating this knowledge – is labelled “the scientific method”. What exactly happens in each step?
When designing research, a scientist has an idea for a particular research project, acquired by communicating with peers (for example colleagues at the institute), by identifying open questions when studying the scientific literature, or inspired by presentations at scientific conferences. In case the researcher is a Master or PhD student, he or she certainly has a thesis in mind and plans a project accordingly (with input from the principal investigator (PI)). Before starting any experiment, it is often necessary to apply for grants, that is, the financial funding for a project, since scientists need to get paid and a study most likely requires material and other resources that have to be paid for. Moreover, the decision to conduct a certain research project is of course embedded in a scientist’s career plan: he or she will choose studies that match the research portfolio, the CV and the desired future academic profile.
When this is settled – when the idea has turned into a solid plan – hypotheses (or sometimes only one) are formulated. These hypotheses do not come out of thin air, but must be grounded in the existing literature and in commonly accepted theories and laws (not legal laws, but scientific “laws” like Ohm’s law or the Nernst equation). We will see later what makes a “good” hypothesis. In any case, it is obvious that the hypothesis paves the way for the course of experiments conducted and investigations initiated and carried out. The experimental setup and the data collected must be related to the hypothesis in a way that allows insights into its validity.
What does “conducting experiments” mean? The points made so far apply to all kinds of “scientists”, to everyone who follows a systematic approach to the generation of new knowledge, including natural scientists (physicists, chemists, biologists, geologists, etc.), social scientists, psychologists and medical researchers, but also historians, ethnologists and philosophers. They differ, however, in their methodologies of experimentation and data acquisition. The “hard sciences” perform measurements of factors that are relevant according to the hypothesis (for example physical, chemical, biological or medical properties). Semi-empirical sciences like the social sciences collect data in their own way (not measuring “natural” properties, but “soft” behavioural, cultural or psychological factors). Normative sciences like philosophy and the humanities often don’t collect data (in the form of numbers) but “arguments” that are then subjected to logical analysis.
The information collected (data, observations, arguments) must then be exploited to generate scientific statements. When are statements “scientific”? Some say sentences are scientific when they state facts. This is a very problematic definition, because many statements concluded by scientists in the past turned out to be wrong later (sometimes very much later). For now, we can distinguish scientific statements from other (for example religious) statements by their following logical reasoning and being verified or falsified systematically. We will later learn that the method of falsification in particular plays a crucial role, more so than verification (which intuitively might seem the more important approach).
Scientific statements, then, allow us to “know” something that we did not know before. But what exactly is that knowledge? Is it “truth” or “facts”? Many philosophers of science today doubt that. We will see that the greatest benefit of the scientific method is the ability to generalise from particular findings to the expected behaviour of something (matter, nature, social systems, human behaviour, etc.). New knowledge is evaluated according to its viability rather than its truth: it is good when it is good for something (for example explaining certain phenomena, or improving or inventing a technical artefact).
This knowledge is only valuable when it is communicated. By communicating it – for example in the form of article or book publications, or at conferences – the scientist also confronts it with feedback and possible criticism from colleagues, fellow scientists, peers or even the entire public. This is a crucial element of science: debating, questioning, doubting and re-thinking, not in an individual isolated fashion, but as an endeavour of the scientific community and those who collaborate with it (for example industry or science policy). This discourse is such an important aspect in the research methodology that we can consider it part of the scientific method.
To summarise: The goal of science is to create knowledge about the world and its elements. The method to acquire such knowledge is investigation and empirical reasoning by applying strategies of logical consistency. Systematic doubt is applied in order to refine and secure this knowledge. This is fundamentally different from religious or spiritual inquiry. To make that clearer, let’s look at the tree of knowledge again (see section 1.2, Lesson 1). The scientific method is a possible channel of world explanation and sense-making in the trunk of the tree, processing experiences and leading to the flourishing of certain branches. It is even an important element of the scientific method to induce certain experiences – to make the invisible visible, or to draw something hidden into our awareness. These experiences are then processed with logic, rationality and empirical reason. With this kind of knowledge we “feed” the spheres of our daily life (for example the organisation of our society, the way we do politics, the economic system, the design and dissemination of technology, our understanding of the physical world and ourselves, etc.). We are no longer satisfied with the dogmatic teachings of a religious elite (like the church), but want to be convinced by evidence that can withstand critical inquiry. In this way, science shapes and influences our whole life and the way we choose to live it.
Scientific rationality is devoted to logic. As seen above, we want scientific inquiry to be logically consistent because it is exactly that which differentiates it from intuitive or spiritual inquiry (“belief”). But what is “logic”? Let’s play a little game!
I will give you the first three numbers of a sequence of numbers that follows a certain rule. The rule is in my head. You have to find out the rule. Please make a suggestion for the next number in the sequence. If your suggestion is correct, propose a possible rule. Here are the first three numbers:
2, 4, 6
Student: “8!” – Yes, that is a possible next number. What could be the rule? Student: “It is all the even numbers.” – No, that is not the rule in my head.
2, 4, 6, 8 – What could be the next number?
Student: “10!” – Yes, possible. Rule? “The next number is the previous number plus 2.” – No, not my rule, sorry.
2, 4, 6, 8, 10 – Any more suggestions?
Student: “17?” – Yes, that is a possible next number! Student: “A random sequence of numbers?” – No, that is not the rule!
2, 4, 6, 8, 10, 17 – What could be next?
Student: “3?” – No, that is not a possible number! Student: “A number must be higher than the previous number?” – Yes, that is my rule!
What happened here? Most of you applied a principle that most researchers use in their experiment designs and strategies: positive confirmation. After 2, 4 and 6, you form a theory in your mind about what the rule could be, for example “all even numbers”. According to your theory, the next number must then be 8. So you ask for 8. And when you find 8, you think your theory is confirmed.
Let’s have a look at different forms of logical reasoning to make this clearer. We distinguish deductive, inductive and abductive logic.
Deductive logic follows a path from a “rule”, a “law” or any known (in mathematics: axiomatic) starting point via a “case” – the occurrence of a certain condition – to a conclusion that is drawn by relating the condition to the rule. A common example is the statement “All humans are mortal!” (which can reasonably be believed to be true) and the case of Peter being human (the condition of Peter), so that it can be concluded (deduced) that Peter is mortal. Of course, it is possible to make mistakes here! We could, for example, claim that all humans are mortal, observe that Peter dies (which means he is mortal), and deduce from this that Peter must be human. This is clearly wrong, because Peter could be a member of a different species which is also mortal (but not mentioned in our rule). However, if we apply deductive logic correctly, it has the advantage of high certainty in its predictive power. As long as rule and condition are “true”, the (correctly deduced) conclusion is also true. However, we learn nothing new by deductive logic, because the result is inherently contained in rule and condition (we only need to “pull it out”).
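To make the difference between the valid deduction and the “Peter fallacy” concrete, here is a small sketch (Python, purely illustrative, not part of the original argument) that checks an argument form by brute force: an argument is deductively valid exactly when no combination of truth values makes all premises true and the conclusion false.

```python
from itertools import product

def valid(argument):
    """Check an argument form by enumerating all truth assignments:
    valid means there is no assignment where every premise holds
    but the conclusion fails."""
    premises, conclusion = argument
    for h, m in product([True, False], repeat=2):
        env = {"h": h, "m": m}  # h: "Peter is human", m: "Peter is mortal"
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample found
    return True

rule = lambda e: (not e["h"]) or e["m"]  # "All humans are mortal": h -> m

# Correct deduction (modus ponens): h -> m, h, therefore m.
print(valid(([rule, lambda e: e["h"]], lambda e: e["m"])))   # True

# The mistake from the text (affirming the consequent): h -> m, m, therefore h.
print(valid(([rule, lambda e: e["m"]], lambda e: e["h"])))   # False
```

The second call returns False because there is a counterexample: Peter could be mortal without being human, exactly as argued above.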
In science, however, we are interested in increasing our knowledge of rules, laws and principles. They are the end point of our inquiry, not the starting point. We start from observations, from “results”. In the little game, I gave you the first three numbers; they were your initial observation. In order to arrive at the rule, you “made up” cases to extend your observation. This is similar to a scientist conducting experiments in order to “go beyond” what is naturally observable. This manipulation and directed observation is expected to enlighten the researcher about the underlying mechanisms that constitute the rule he or she intends to find out. This is the advantage of the inductive method: it reveals new insights. The downside, however, is that we can never be sure whether our conclusion is correct – there is always uncertainty. Indeed, as you have seen, you “observed” something (number 8 was “correct”), but the rule you concluded was still wrong. It took you many guesses and tries until you found the correct rule. One reason for this is that you (probably) went a different way: that of abduction.
Abductive logic starts from certain observations (the “results” as effects of a rule or principle), but formulates a rule first before “making up” a case that confirms or falsifies that rule. You saw the number sequence (2, 4, 6), thought that it “must be the even numbers”, and made the case “8”. You even found “8” (I told you it was “correct”). Therefore, you concluded that your rule was confirmed. However, it was not! The game was solved when one of you had the courage to say something completely beyond expectation. Abduction is – in most cases – a form of “bad science”. It is useful when no other form of knowledge acquisition is possible, but it can never result in (secure) new knowledge, only in “the best explanation”. There is a fine line to be drawn between induction and abduction. As we found earlier, a scientist doesn’t conduct experiments into the void, without any orientation. There will always be pre-formed ideas and directions, manifested in the formulated hypotheses. We can say that this is a form of “abduction”, because there is already an assumption concerning the rule even before we investigate our “cases”. The main difference is how easily we are satisfied with the results of our case investigation (the experiments). Good scientists don’t look for confirmation of their hypotheses or proof of their claimed theories (“rules”). They try to challenge them by the method of falsification! If you want your theory to be confirmed, do a negative test! Ask for something that would be excluded by your theory! If you still find it, your theory must be wrong! All the “big scientific theories” that have endured over the centuries have been solidified by negative confirmation, not by positive confirmation! The most prominent example is the biological evolution of this planet and its life forms, which nobody seriously doubts today.
Darwin didn’t publish his insights until after years of experiments and investigations, including “negative” experiments, to make sure his interpretations of the observations were correct.
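The number game can be sketched in a few lines of code (Python, purely illustrative). The point to notice: a positive test – proposing exactly what your theory predicts – can never distinguish your theory from the broader hidden rule, while a single negative test, proposing something your theory forbids, settles the question immediately.

```python
def hidden_rule(seq):
    """The rule 'in my head': each number must be larger than the previous one."""
    return all(a < b for a, b in zip(seq, seq[1:]))

seq = [2, 4, 6]

# Your theory: "the next number is the previous one plus 2".
# Positive test: propose 8, exactly what the theory predicts.
print(hidden_rule(seq + [8]))   # True -- the theory *seems* confirmed

# Negative test: propose 7, which your theory forbids.
# If 7 is also accepted, your theory cannot be the hidden rule.
print(hidden_rule(seq + [7]))   # True -- the "plus 2" theory is refuted
```

Any number of successful positive tests (8, 10, 12, …) would have left the wrong theory standing; one deliberate violation exposed it.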
The scientific method tries to combine the advantages of both deduction and induction. The overall logic is induction, but this strategy of inquiry only works efficiently with iterative deductive confirmations. In other words, science is “macro-induction” by “micro-deductions”. The experiments we design to extend our observations towards insights about rules and theories have to make sense deductively. In this way, we can have both certainty in reasoning and progress into new fields of knowledge. However, this way is full of pitfalls and fallacies, as we will see.
2.3 Steps in a research project
In the following, we will look at the elements of conducting a research project in much greater detail than in the overview compiled by our brainstorming. We will do that with the visual support of the scientific knowledge acquisition web as proposed by Lee (in “The scientific endeavour: A primer on scientific principles and practices.”, p. 140, 2000). However, simply going through that scheme step by step would be tiring and boring. Therefore, we will bring it to life by designing and running our own project in our imagination. Since you all have different backgrounds and might not all be familiar with physics, chemistry or any other scientific discipline, I suggest a research project that can be understood by everyone: We will investigate the fluffiness of bread!
- Identify the problem area
Every research project starts with the observation of “missing knowledge”. Something needs to be known that is not known yet. Scientists usually become aware of a lack of certain knowledge from reports, conversations with peers, or communications with non-scientists (for example in industry or the public) who have a problem that can be solved by scientific inquiry. Even though there is a trend towards all research endeavours having to be meaningful and purposeful in one way or another, there are still projects that have no obvious benefit besides gaining knowledge for the sake of knowing it.
Bakers report: The fluffiness of bread appears to turn out differently on humid and dry days even when all other procedures of the baking process are kept the same. Some customers complained that they can’t trust that their favourite bread is the same every time they buy it. Bakers want to know how to control the fluffiness of bread because it impacts their business.
- Checking established knowledge
It has to be made sure first that the question has not already been answered by someone in a study that is similar or even identical to the intended one. If nothing can be found, the body of existing knowledge (in the form of the available literature) is scanned for useful information related to the project idea. This can provide hints on how to specify the research questions (the hypotheses) and on where to start investigating. Furthermore, it embeds the project in the state of the art of science, which will be of utmost importance in peer review and external evaluation.
The literature review reveals that apparently nobody has systematically studied the relation between air humidity and yeast yet. However, we find essays that suggest a link between yeast activity and the presence of humidity.
- Develop a hypothesis
The researcher sets the scene for the investigation by formulating a hypothesis. Usually, this is a statement that expresses a certain expectation in the form “Condition A always and reliably leads to effect E.”. This guides the researcher in the experimentation (for example, inducing condition A and showing that effect E can be observed with sufficient reliability and reproducibility). There are criteria for “good” hypotheses:
- Fruitfulness: Elaborating on these hypotheses will result in useful and applicable knowledge and insights for further progress, for academic and scientific purposes, or for a clear purpose in industry, policy, society, etc.
- Clarity, precision, and testability: The hypotheses suggest a clear experimental or investigative strategy for verifying or falsifying them.
- Framework for organising the analysis: It becomes clear from the hypotheses what sort of knowledge from what kind of knowledge sources is necessary to generate reasonable insights.
- Relation to existing knowledge: It becomes clear from the hypotheses formulation how to set the findings into perspective to experiences and approaches in other parts of the world and in other fields of expertise and application. Sometimes, the major goal is to connect and draw relations between fields of knowledge that haven’t been connected so far.
- Resources: The hypotheses should indicate and determine a workload that is feasible given the available resources (funding, lab equipment, manpower, etc.).
- Interest: The hypotheses should indicate a field of research and a scientific question that the investigators are academically and also personally interested in.
Hypotheses are often too vague, for example assuming a “relation” between two factors without specifying whether a correlation or a causal relation is sought. Most hypotheses are written in a “positive” wording that suggests what a scientist is looking for and expects to prove. It must be noted, however, that a scientist should take a neutral stance towards the hypotheses, being more interested in falsifying than in verifying them! There are also many reported cases in which researchers changed their hypotheses at the end of a study in order to make them match the results they obtained. This can be classified as “bad science”.
Hypothesis: There is a positive correlation between the fluffiness of bread and surrounding humidity.
- Study design
While the hypothesis outlines what knowledge shall be gained in the end, the study design is sketched in order to have a clear plan for how to achieve that goal. It states in particular what kind of measures are required to generate useful insights, and how these measures can be operationalised. It must convincingly explain why a certain experiment is to be carried out and in which way the acquired data relates to the question posed in the hypothesis. A clear proposal also describes the contribution of additional competences, external expertise, special equipment and interdisciplinary collaborations.
The measure “fluffiness” must be operationalised, that is, a way must be found to measure it reliably and reproducibly. It would be insufficient to simply rate the resulting bread as “very fluffy”, “medium fluffy” or “only slightly fluffy”. One way could be to measure the volume of a bread based on 500 g of flour after baking, since the volume is proportional to the air entrapped in the dough, which is the major determinant of “fluffiness”. Measuring the humidity is easily done with a hygrometer, but a way must also be found to control the humidity. Furthermore, all kinds of extraneous factors have to be anticipated (e.g. kneading pressure and kneading time) and kept constant over a series of test bakings.
- Collect information
In this step, the scientist performs experiments, collects data or analyses arguments for consistency and validity (here, I include all kinds of academic researchers, from natural scientists to social scientists to historians and philosophers). Experimental apparatuses are set up, calibrated and used to perform measurements. Or: statistical data from real-life observations are fed into and compared with models (e.g. by social scientists). Or: historical and contemporary sources of knowledge are exploited to compose and justify arguments (e.g. by philosophers).
A strict baking protocol is followed to bake test bread. All procedural factors are kept constant except the humidity. The humidity is varied in a controlled fashion and the volume of the resulting bread is measured. These values are plotted as a function of humidity.
- Serious problem?
It is very seldom that a research plan yields useful results at the first attempt. Experimental setups might have flaws, procedural difficulties occur, the delegation of (sub-)tasks is unclear or inconsistent, or factors that the researcher has been unaware of become visible. Sometimes the acquired data give a hint that something must be wrong with the way the data is collected (statistical insignificance, inconsistency, etc.), so that procedural and systematic errors in the experimentation, measurement or knowledge compilation method can be identified. Problems of this kind usually don’t put the entire project into question, but require a revision and improvement of the study design (going back to point 4).
According to the experimental protocols, it seems the results obtained on Mondays differ from those obtained on Tuesdays. A look into the lab organisation reveals that different technicians were in charge of kneading the bread dough on different weekdays: a skinny, dainty technician kneaded on Mondays, a tall, muscular one on Tuesdays. It may be assumed that this makes a significant difference for the baking protocol. Indeed, after delegating all kneading work to one and the same person, the results are more consistent.
- Analysing information
Datasets and information compilations have to be analysed. This usually does not happen after all data has been collected, but in parallel to further data acquisition. Raw data is subjected to calculations with certain equations, checked statistically, illustrated graphically, and related to its meaningful content. A few things have to be ensured:
- Validity – the obtained data indeed allows insight into a certain factor that was intended to be analysed. The measures and values represent the “real” occurrence (as far as it is possible to confirm that).
- Reliability – The applied methods result in consistent output and don’t vary due to procedural or systematic errors. Here is an example of how validity and reliability are related: imagine a brain scanner that measures the size of brain tumors. Option 1: if several measurements always give the same value and this value resembles the actual size of the tumor, the scanner has high reliability and high validity. Option 2: if all measurements give the same value, but that value is 40% smaller than the actual size, it has high reliability but insufficient validity. Option 3: if the variance of the values is large, but their average is very close to the actual size of the tumor, it has low reliability and high validity (but, maybe, more by chance). Option 4: if the measured values vary largely and neither any of the values nor their average is close to the actual size, the scanner has low reliability and its output low validity. Only option 1 is acceptable. The other scanners need to be repaired or discarded.
- Reproducibility/replicability – A measurement you made on one day should turn out the same on another day (given that all conditions are kept the same). Values you once obtained should occur again when the experiment is repeated. Reproducibility also refers to the clarity of the experiments and methodologies as described in your laboratory notebook or in the report you write: someone else should be able to recapitulate what you did and should obtain the same data when repeating the experiment. There are, of course, experiments and data acquisitions that can’t be repeated (e.g. observations of astrophysical events, or seismic data during earthquakes for geoscientific research). In this case, even though the measurements are not replicable, the conduct of the experiment (setup, data processing, etc.) can be reproduced and reconstructed theoretically.
- Relations – One of the most crucial points in scientific analysis and debate is the relation between measures. Some experiments (sometimes mere observations) are certainly “only” descriptive: they describe phenomena without stating anything about relations. In other cases, researchers attempt to make statements about correlations between variables, or even causal relations. Example: researchers in Seoul (Korea) found that the number of suicide cases is significantly higher two days after a day with exceptionally high air pollution. Referring to the collected statistical data, that is a simple description of a phenomenon. In the next step, they claimed a correlation between the two, because toxicologists could show that ozone radicals (whose concentration is higher in polluted air) affect a certain centre in the brain that is believed to play a role in depression. Whether this is even a causal relation (“ozone radicals make people commit suicide”) remains questionable, though, since the social, psychological and pathological mechanisms that lead to the decision to commit suicide are much too complex to reduce to a simple cause-effect relationship (more on that in section 2.4).
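The four scanner options from the reliability example can be put into numbers. The sketch below (Python; all measurement values are invented for illustration) uses the distance of the mean from the true size as a stand-in for validity and the spread of repeated measurements as a stand-in for reliability.

```python
import statistics

TRUE_SIZE = 20.0  # hypothetical actual tumor size in mm

scanners = {  # four sets of repeated measurements, one per option
    "option 1 (reliable, valid)":       [20.1, 19.9, 20.0, 20.0],
    "option 2 (reliable, not valid)":   [12.0, 12.1, 11.9, 12.0],  # ~40% too small
    "option 3 (valid only on average)": [14.0, 26.0, 17.0, 23.0],
    "option 4 (neither)":               [9.0, 33.0, 15.0, 35.0],
}

for name, values in scanners.items():
    bias = abs(statistics.mean(values) - TRUE_SIZE)  # small bias ~ high validity
    spread = statistics.stdev(values)                # small spread ~ high reliability
    print(f"{name:34s} bias={bias:5.2f}  spread={spread:6.2f}")
```

Only option 1 shows both a small bias and a small spread; option 3 illustrates why “valid on average” can be an artefact of chance rather than a property of the instrument.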
When plotting the breads’ volume (our measure for fluffiness) against the humidity (our controlled variable), the graph shows increasing fluffiness for higher values of humidity, but from a certain humidity on, the fluffiness decreases again. If desired, one can also attempt to explain this observation: maybe too much humidity makes the dough “too heavy”, so that the CO2 produced by the yeast is not able to make the dough loaf grow bigger.
- Do the results match with the prediction?
If the results match the prediction – that is, they are in accordance with the theoretical and conceptual framework that was constructed from the available knowledge – and therefore confirm the hypothesis, the researcher can proceed to writing a report and communicating the findings (step 9). If they don’t, it must be decided how to proceed (step 10).
The observation described in step 7 – environments with different humidity result in different fluffiness, maybe even quantitatively determinable – matches the hypothesis made in step 3. This can be communicated in the form of a research report and, later on, in a publication.
- Write a report
A report consists of a description of the problem, the necessary background knowledge that determines the procedural and theoretical framework of the project, the hypothesis, the methodologies and experimental design, the collected data, its interpretation, and a conclusion. Sometimes the “data” is so voluminous (especially in social or psychological research, for example in the form of audio files of recorded interviews) that it is attached in an appendix or as “supplementary information” in an online database.
We write a report on the influence of humidity on the fluffiness of bread with all the necessary information as listed above.
- “Negative” result: Still worth a contribution?
Sometimes a mismatch between the data and the expectation is not necessarily a “bad outcome”. Maybe the results tell something important in a different way than expected. Or the “negative” relation is itself an important insight. It is bad style, however, to simply re-formulate the hypothesis so that the results “confirm” our expectations. In case the results are still exploitable and insightful enough for a report, we proceed with step 9. If the results pose major problems and confusion, we proceed with step 11.
If the result is that there is no obvious relation between humidity and fluffiness, this might also be important information for bakers – then they don’t have to care about it anymore. However, if the results are not clear at all and leave too many open questions, we had better not write anything yet, but try to analyse the source of the mismatch.
- Is it worth continuing the project?
When the results are not as expected, it must be checked thoroughly what the reason could be. There is, of course, the possibility that the entire project is misconceived or based on improper assumptions. However, maybe something has simply been forgotten or overlooked. Maybe the unexpected results occur due to false initial assumptions, a flawed theoretical or conceptual framework, or an improper formulation of the hypotheses. In many cases, especially in interdisciplinary collaborations and those with industrial partners, “giving up” is not an option. Then it is even the researcher’s duty to go on with the study (here: step 13) rather than announcing its end (here: step 12).
Giving up the study on the fluffiness of bread should only be the last resort – when we can’t find any clue on what influences the fluffiness and would have to search entirely in the dark. Most likely, however, it will be possible to identify a strategy for the efficient improvement of the study. Maybe our study is even financed by the bakers’ guild and it is written in the contract that we have to yield results.
- Abandon the idea
A researcher must know when it is time to bury a project in the graveyard of discarded ideas! It wouldn’t be the first one! When it can be expected that a project would “burn” more resources and effort than is reasonably acceptable compared to the outcome, it might be advisable or even ethically required to abandon the idea and focus on other projects. This can save resources, both financially and in terms of manpower. However, this decision is especially difficult for projects that PhD and Master students work on, since their theses might depend on them.
The idea of a simple relation between fluffiness and humidity was naive! The experiments have shown that the issue is far too complex to be studied like this. We will tell the bakers that they had better rely on their craftsmanship’s expertise and experience, and that science can’t help them here.
- Problem identification
In case we decide to continue the project – which, by the way, happens every day, since there is almost no research project that runs smoothly from idea to publication – we need to analyse what the problem could be. One option is that the obtained (confusing, unexpected, unexplainable) results are simply wrong, so that their validity must be questioned. Another option is that those results are actually correct but either don’t represent a significant factor (as we mistakenly believed) or indicate that the focus of our study has been in the wrong direction. Here, however, we must distinguish between errors that occur due to improper experimenting (as mentioned in step 6) and errors that occur due to a flawed research concept. In the latter case, we can’t simply solve the problem by re-thinking the study design (that is, its experimental details); we must revise the entire study, which means we might need to re-evaluate and eventually change our hypotheses.
Other factors besides humidity, for example the kneading time and pressure, or the surrounding temperature during dough preparation, are more significant. Deeper literature research, for example into the behaviour of biopolymers (like starch and gluten), reveals that external pressure can force them to uncoil and intertwine with neighbouring polymer chains. This indicates that kneading time should be investigated as a fluffiness factor, too.
- Revise hypothesis
If the problem analysis in step 13 results in deeper insights into the matter, the initial hypotheses can be revised and adapted to the new framework of background knowledge. This, of course, might influence the entire study design and its course of experimentation, so that steps 4 to 9 all have to be gone through again – with the risk of ending up at step 11 once more.
Our new hypothesis: The fluffiness of bread is significantly dependent on the kneading time and the pressure applied to the raw dough.
- Submit for publication
After the study has finally resulted in an acceptable and meaningful output that is summarised in a study report (step 9), it should be communicated in the next step. The most common way to communicate research findings is to publish an essay in an appropriate journal. We will talk about publishing and its implications in greater detail in lesson 4. For now, we keep it simple and just assume that one of the more than 10,000 scientific journals is suitable for our research report.
We write an essay entitled “Environmental factors impacting the fluffiness of wheat flour baking products” and submit it to the “International Journal for Breadology”.
- Peer review
As pointed out before, the internal feedback and control system of the scientific community is a crucial aspect of the institutional and societal justification and implementation of science as a whole. Peer review is one of the tools that have been established to ensure a certain quality level and to supervise compliance with common standards. The editor of the journal that we chose sends the draft to two or three reviewers for them to evaluate the essay. Their recommendations are taken into account in the editor’s decision to reject or accept the article, or to ask the author for revisions before the essay can be accepted.
Our editor sends the article to three other baking experts and fluffiness researchers in order to get their opinion on our essay. In a “double blind” review process, they don’t know our names, and we won’t know theirs. This helps reduce the risk of bias in the review process.
- Not passed
There can be several reasons for rejecting a submission. Maybe the article is – in the editor’s opinion – not suitable for this particular journal. In that case we may submit it to another journal. It can also be that the reviewers come to the conclusion that the quality of the study (better: the quality of the description of the study) is too low, that important possible experiments are missing, that the results are insignificant, or that the interpretation of the data is unsatisfactory. In that case it is advisable to re-evaluate the study and discuss how to proceed (back to step 11).
Option 1: We slightly change our article and submit it to another journal, the “Journal of Baking Theory and Practice”. Option 2: One reviewer remarked that our claim of a causal effect of humidity on fluffiness needs more experimental support. We decide to design a more sophisticated quantitative study in cooperation with the baking engineering department, hopefully yielding more comprehensive results that justify a new, longer, more convincing research article with more authors.
- Passed: Publication
When an article has passed the review process, it is published in the next issue of the journal, often earlier online. By this, the new insight becomes publicly available knowledge; as such, it is part of step 2 and may serve as background knowledge for the designers of other research projects.
Now, there is a little bit more knowledge for baking theory.
- Additional tests
In most cases, a research project doesn’t result in only one publication. In other words: most research projects are not finished after the publication of a related research article. Despite its acceptance for publication, we might still not be entirely satisfied with our study outcome. Then we continue the investigation, perform more experiments, gather more data and solidify our insights with more evidence.
Our study on humidity and fluffiness feels incomplete. Some of the experiments suggested that the surrounding air humidity is less significant than the amount of water added to the dough as part of the recipe. We assign a PhD student in our group to investigate this matter further (starting again with step 4).
- Application
Sometimes the knowledge that is revealed in the form of published research articles finds its way into particular applications. Other researchers cite your articles, attention is paid to your essay, and maybe engineers obtain important insights from your experimental results and use them for the improvement or invention of a technical artefact.
Bakers decide to control the humidity in their bakeries in order to obtain ideal baking results. Upon their request (with our article as a convincing argument), engineers invent an oven that has a built-in humidity control unit.
Now we have a rough overview of the course of a research project. Many of these aspects will show up again during the next lessons. For now, let us turn to the most basic elements of scientific thinking and reasoning.
2.4 Questions in Science
During his or her research, a scientist asks many questions:
- What? – Description of observations and phenomena,
- How? – Causality and other relational mechanisms of processes,
- When? – Predicting when an effect occurs, both temporally (at what time) and conditionally (under what circumstances),
- Why? – Explanation of certain phenomena.
Some fields of science ask almost exclusively what-questions, while others focus on how-questions. Why-questions are addressed by surprisingly few scientific disciplines. This might be – as you may agree – due to the fact that why-questions are much harder to answer than what- and how-questions. Causality – or, more precisely, aspects of relation, including correlation and causal relations – is inherently difficult to clarify. Imagine the following scenario: Today I have a headache. I believe it is because I did not sleep enough last night. My wife suggests that it is because I stared too much at the PC screen in order to prepare today’s presentation slides. Moreover, I have had too much stress lately, which could also be a cause of my headache. There are four options for how these things are related.
The first is that exactly one of the mentioned possible causes (lack of sleep, staring too much at the PC screen, too much stress) leads to the effect “headache”. The second possibility is that all these causes lead to the effect in a chain-like fashion: First, I had stress, so I forced myself to finish the presentation and looked at the screen for too long; this decreased my sleeping quality and time, and this, ultimately, gave me a headache. Option 3 is that it takes all three factors together to cause my headache: if only one were missing, I wouldn’t have a headache. The last option is that the cause-effect network is far more complicated. Maybe we can’t even say what is cause and what is effect. It is very likely that each is somehow cause and effect of everything else – not symmetrically, though, but in an imbalanced way: Looking at the screen too long is caused by my inner mental stress level; both together cause a slight headache (maybe unnoticed, yet), which decreases my sleeping quality, which in turn makes my headache stronger and additionally increases my stress level.
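To make the four options more tangible, each can be encoded as a small directed graph in which an edge points from a cause to its effect. The following Python sketch is purely illustrative – the node names, the graph encodings and the reachability check are our own invention for this headache example, not part of any causal-inference method discussed here:

```python
# Four candidate causal structures for the headache example,
# encoded as directed graphs: edges point from cause to effect.
structures = {
    # Option 1: a single independent cause produces the effect
    "single cause": {"sleep": ["headache"]},
    # Option 2: a causal chain, each factor triggering the next
    "chain": {"stress": ["screen"], "screen": ["sleep"], "sleep": ["headache"]},
    # Option 3: all three factors act on the effect together
    "joint causes": {"stress": ["headache"], "screen": ["headache"],
                     "sleep": ["headache"]},
    # Option 4: an entangled network including a feedback loop
    "network": {"stress": ["screen", "headache"],
                "screen": ["headache"],
                "headache": ["sleep"],
                "sleep": ["headache", "stress"]},
}

def reaches(graph, start, target, seen=None):
    """Return True if `target` is reachable from `start` along causal edges."""
    if seen is None:
        seen = set()
    if start == target:
        return True
    seen.add(start)
    return any(reaches(graph, n, target, seen)
               for n in graph.get(start, []) if n not in seen)

print(reaches(structures["chain"], "stress", "headache"))         # True
print(reaches(structures["single cause"], "stress", "headache"))  # False
```

Note how the same question – “can stress cause my headache?” – gets different answers depending on which structure we assume; that is exactly why untangling correlation from causation requires knowing (or hypothesising) the structure first.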
Remember what we said about deductive and inductive logic. A prediction is best made following a deductive reasoning approach: When we know a law that links a certain effect E to the occurrence of a condition A, we can predict that E occurs as soon as condition A is established. When we have no clue about the law (or rule, principle, etc.), then, in order to give an explanation for the observation of an effect E, we need to gain insight into the causal mechanisms by which a condition A leads to effect E. This resembles an inductive reasoning approach. Again, we can see why why-questions are harder to answer than when-questions, and that both, however, are necessary to deliver reliable insights in a scientific way: Only by confirmation through correct predictions is it possible to verify or falsify possible explanations.
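The deductive half of this – predicting an effect from a known law plus an observed condition – amounts to modus ponens and can be sketched in a few lines of Python. The “laws” below are invented placeholders for illustration only, not real baking science:

```python
# Deductive prediction as modus ponens: from a law "if A then E" and the
# observation that condition A holds, we predict effect E.
# The laws here are made-up examples, not established facts.
laws = {
    "high humidity": "sticky dough",       # law 1: condition -> effect
    "long kneading": "uncoiled polymers",  # law 2
}

def predict(observed_conditions, laws):
    """Deduce every effect whose triggering condition has been observed."""
    return {laws[c] for c in observed_conditions if c in laws}

print(predict({"high humidity"}, laws))  # {'sticky dough'}
```

The inductive direction has no such mechanical counterpart: given only the observed effect, many different laws could explain it, which is precisely why explanations must be tested through the predictions they license.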
2.5 History and Paradigms of Science
Last but not least, let us have a look at how the philosophy of science influenced the understanding of what science is about and what it is able to achieve. We will see that it is closely linked to epistemological insights that people had at different epochs in history.
By using our cognitive tools we perceive the world we are living in. The most naïve view is that of a real world that presents itself to us. Our task, then, is to watch it with a clear mind (and clarifying the mind is a practice of philosophy) so that we are able to see as many facets of it as possible in order to increase the chances of a “successful” and fulfilled life in this world.
This was the idea of the Ancient Greek philosophers, from Heraclitus and Parmenides up to Socrates, Plato and Aristotle. It was all about “the world”. Its features and properties (its “truth”) can be recognised by us, so that we – by careful watching and philosophical reflection – get the most realistic image of it. Only then can we fulfil our most “human” task of overcoming our natural boundaries and getting closer to the divine, closer to perfection. This is the basic idea: The specifically “human” element in us is the ability to go beyond ourselves, to exit the inevitable and be free. With an accurate picture of the real world that surrounds us in mind, this movement towards the divine is facilitated significantly!
There are two dangers in this idea, and both are deeply entrenched in the further course of European-Western philosophy. The first is the dualistic division into “outside” and “inside”, into “outer world” and “inner me”, finding its climax in the reflections of René Descartes (17th century). The consequences are tremendous! It took ages and the influence of East-Asian philosophy to correct this flawed idea. The second is the realist scientific worldview with its idea of “discovering” knowledge about real features of the world. Even though this realism has been replaced by constructivism in recent decades, many scientists, engineers, researchers, but also most scientific laymen are still convinced that the knowledge we can acquire by scientific investigation describes a somehow manifested actuality.
Immanuel Kant is the most prominent philosopher who modified this image of world perception. His basic idea was that we can only become aware of those features of the world that we have a pre-formed image of – that is, features that somehow match our previously made experiences. He distinguished “things-as-such” (the features of the real world) from the things as they appear in our mind.
As a consequence, we can never know for sure what the actual world is. It remains obscured. The world that is represented in our mind is fed by an image of the world, and at the same time it feeds this image (for example, by making new experiences that require a modification of the image). In this view, “world” is all about the subject (or: the observer). Some even went so far as to say that “world” only exists in the mind.
With this understanding of the human possibilities of knowing anything about the world, dualism and realism are not yet overcome. The apparent monism that “world is only idea (in the mind)” (we call that idealism) is a hidden dualism, because it only emerges in view of its counterpart “materialism”, which states that “world is only matter”. Moreover, it is still the somehow given (real) world with its “things-as-such” that impacts human perception. In order to increase the chance that our image of the world is identical to the actual world, we need to attempt to uncover the hidden features of the world. The scientific method, in this view, is then a discovery of the world and of what is to be known about it.
This direction was reversed by phenomenology, most prominently pushed forward by Edmund Husserl and later Martin Heidegger. The subject can no longer be taken as a merely passive observer and constructor of the world. The cognitive process of observation itself comes into focus.
An act of perception, in this view, is not a mere “streaming-in” of stimuli, but an active “looking-out” (figuratively! – it covers all senses, not just the visual) into the world. By nature, this is a highly selective process. Insights from biology, physics, psychology, anatomy and other scientific disciplines that tell us about the human condition deliver a better understanding of how we construct “world” by making experiences. The crucial point is human cognition, the “lens” that we are unable to take off. It confines the cut-out of the world that we are able to pay attention to, and it also colours and shapes the incoming signals. One of the most impressive experiments conducted to show our selective perception was this: People were asked to watch a video of a basketball game and count how often the ball was passed between players dressed in white. During the game, a man in a black gorilla costume appeared in the centre of the scene, beating his chest and making silly movements. The large majority of the watchers didn’t see him, even though he was clearly visible among the white-dressed players. Now, we can say that it was “unfair”, because the people were asked to concentrate on the ball; they can’t be blamed. But isn’t “life” exactly like that? We are always so busy focusing on certain clear-cut aspects of life, occupying our full attention, that occurrences beyond them don’t find a way through to our awareness. Nobody can be “blamed” for that, however, since this is simply a neutral observation.
Phenomenology stresses the importance of “experience”. Every experience (tied to every act of cognition) involves the entire set of experiences made in the past. An experience is the manifestation of all experiences. A simple example: When seeing only the front of a house, we “know” that this is a three-dimensional building, because we know the concept “house” from former experiences. In every perception of a part of the world, we are aware of the entire world, because only in this relation does the experience make sense. This sense-making is the basis of all experience. Not only do we align all experiences with our worldview (constructed from previous experiences), we can also only experience what fits into our margin of “sensefulness”. That’s why we don’t see the gorilla during the ball game: a gorilla has no place in the world “ball game”. The house front is automatically “completed” in our mind into an entire house. When walking around it, we might find that it deviates from our imagination, for example in its exact size, shape, etc., but these are just details. In the same way, we almost always succeed in identifying an item as a “table”, even when it is a very unusual modern-art design, because its entire embedment in our world (including its functionality) is constantly present. Sometimes our imagination is fooled, misled, surprised or puzzled – when we walk around the house front and find that it is only the decoration of a movie set, for example. Then we either have to re-align the constructed reality (here: from the world “house as living space” to the world “movie making”), or we have to construct new meaning from the new experience.
How can we be sure that the way we construct meaning from experience is in any way supported by real features of the surrounding world, and by that somehow “justified”? How do I know that what I “see” is the same thing as that what you “see”? There could be a simple answer: by talking about it!
Neither of our world constructions represents the actual world sufficiently, but if we integrate our two – almost necessarily deviating – images into one, we might get closer to what may count as “real”. This “discourse approach” to world conceptualisation was promoted in the later 20th century by Jürgen Habermas, Karl-Otto Apel, Niklas Luhmann and others. Mankind is a species that constitutes its environment through communication and collaboration. World construction is, therefore, always a process in the “inter”-space: inter-personal, inter-relational, inter-cultural. My world becomes my world by being set into relation to yours. My experience is only valid (or not) in view of your experiences (and everybody else’s). In case there are insurmountable differences, we need to engage in a conversation (or a discourse) in order to create new clarity.
However, communication is not a trivial thing. Its most important tool is language. This includes our spoken language using words, but also numerical systems (mathematics) and symbolism, non-verbal interaction, body language, etc. Language itself is conditioned and constituted by experience, which means that we only have linguistic expressions for what is already part of our experience (made by us or by any of our ancestors). The translatability of “thoughts” and other cognitive impressions is a difficult endeavour, not only between the different languages of different countries or cultures, but even on the very basic level of interpersonal conversation. Therefore, philosophy spends a great deal of effort on clarifying and defining words and terms. When all that is done, it is still not guaranteed that one really understands the other, because experience is not fully transferable. With sufficient exchange of information I might be able to anticipate your experience, but since my framework of experiences and their connections is different from yours, I will never be able to see the same thing in the same light. Actually, “world” can be defined as exactly this “framework of connected experiences”. Then it makes sense to talk about “worlds” rather than “the world”, because what is “world” for you is more or less different from what is “world” for me. Identifying and becoming aware of the overlapping parts of our worlds is as interesting and inspiring as the deviations.
These epistemological changes affected the (self-)understanding of science massively! Let’s see what changed in particular:
- Truth → Viability (可行性) – The most striking paradigm shift is that away from truth-seeking towards a much more pragmatic endeavour of creating knowledge that is viable for something (applicable, exploitable, reliably replicable, meaningful).
- Realism → Constructivism – Even though scientific realism is still widespread among (natural) scientists, it has been replaced by a constructivist worldview in the philosophical discourse, and partly also in the public understanding and in society’s institutions (esp. politics).
- Observation → Manipulation (操縱) – Mere observation of the world is insufficient. In order to gain knowledge about the world and its components, we need to engage in de- and re-construction activities, which mostly means directed and strategic manipulation of the “given” by proper experimentation.
- Empirical rationality → + Discursive rationality – Of course, scientific reasoning still requires a strong sense for empirical rationality! However, we know now that this is only one part of it. We also have to engage in discourses with a communicative rationality if we want to acquire a better image of the world.
- Causal Determinism → Conditionality (受限制性) – The ancient physicalism that led to the idea of strict causal determinism is substituted by more sophisticated concepts of conditionality according to which cause-effect-relations can be complex and highly intertwined.
- Reductionism (化約主義) → Holism (System thinking) – The idea that we can understand a thing by understanding its components is not tenable in system thinking. Only in view of the whole are we able to gain full insight into the mechanisms of the world.
- Separation (Dualism) → Integration (Monism) – The interconnectedness that is suggested by complex conditionality and expressed in holistic system thinking almost necessarily implies that the separation of entities (inside-outside, me-world, mind-matter) is not tenable. Integrative approaches in science have recently become much more convincing and, therefore, find more and more followers among scientists.
- Clarity (parsimony, 簡約) → Complexity (復雜性) (e.g. chaos theory) – For many centuries, it was claimed that “good” theories are those that are clear and simple. The “law of parsimony” states that if two theories have similar explanatory power, but one is simpler (i.e. requires fewer preconditions), it is to be preferred over the other. Today, we know that simplicity has its limits, especially when the subject matter is highly complex. Recent scientific models, for example chaos theory, attempt to respond to this complexity by allowing theories to follow complex trajectories.
- Neutrality thesis (中立性論旨) → Ethical dimension of science – The old claim that scientific activity is by definition value-free is not tenable under these circumstances. Understood as an endeavour of (social) construction, science can never be free from certain preconditions and normative frameworks that dominate in a societal or cultural margin. Therefore, there is an undeniable ethical dimension in science.
- Science as individual endeavour → Science as social activity/sphere – The image of the scientific hero who performs experiments in his home laboratory until a milestone of scientific discovery is reached has almost disappeared. Today, science is institutionalised in academic and industrial settings and involves a large range of actors, ranging from lab technicians, graduate and post-graduate students to senior researchers, industrial collaborators and public investors. In this professional environment, the scientist has a social role that goes along with rights, duties and ethical obligations. These will be the subject of the next lessons.