
Empiricism and Positivism in Science

 


 

Introduction

 

In this chapter we will discuss the main outlines of the empiricist account of natural science, and then go on to consider why and how the positivist tradition has tried to apply it to social scientific explanation.

 

Empiricism and the Theory of Knowledge

 

As we mentioned in Chapter 1, the history of modern science and the history of theories of knowledge have been closely bound up with each other. Sciences such as physics and chemistry, which rely a great deal on observation and experiment, have tended to justify their methods and knowledge-claims in terms of the empiricist view of knowledge. Empiricist philosophers have tended to return the compliment, by treating science as the highest form of genuine knowledge, or often even the only one. In the twentieth century, empiricist philosophers (particularly those, such as R. Carnap (1966), and the British philosopher A. J. Ayer (1946), who are known as the ‘logical positivists’) have been especially concerned to draw a clear dividing line between science, as genuine knowledge, and various belief-systems such as religion, metaphysics, psychoanalysis and Marxism. In the empiricist view, these belief-systems, which sometimes present themselves as scientific, can be shown to be ‘pseudo-sciences’ (though it is a bit more complicated than this – one of the leading logical positivists, Otto Neurath, was also a Marxist). One of the difficulties they have encountered in trying to do this is that a very strict criterion of scientific status, which is adequate to the job of keeping out Marxism, psychoanalysis and the rest, generally also rules out a great deal of established science!

 

Although empiricist philosophy is concerned with the nature and scope of knowledge in general, our concern is more narrowly with its account of natural science. We will also be working with an ‘ideal-typical’ construct of empiricist philosophy, which does not take much notice of the many different versions of empiricism. Anyone who wants to take these debates further will need to read more widely to get an idea of the more sophisticated variants of empiricism. For our purposes, the empiricist view of science can be characterized in terms of seven basic doctrines:

 

1.       The individual human mind starts out as a ‘blank sheet’. We acquire our knowledge from our sensory experience of the world and our interaction with it.

2.       Any genuine knowledge-claim is testable by experience (observation or experiment).

3.       This rules out knowledge-claims about beings or entities which cannot be observed.

4.       Scientific laws are statements about general, recurring patterns of experience.

 

5.       To explain a phenomenon scientifically is to show that it is an instance of a scientific law. This is sometimes referred to as the ‘covering law’ model of scientific explanation.

 

6.       If explaining a phenomenon is a matter of showing that it is an example or ‘instance’ of a general law, then knowing the law should enable us to predict future occurrences of phenomena of that type. The logic of prediction and explanation is the same. This is sometimes known as the thesis of the ‘symmetry of explanation and prediction’.

 

7.       Scientific objectivity rests on a clear separation of (testable) factual statements from (subjective) value judgements.

 

We can now put some flesh on these bare bones. The first doctrine of empiricism is associated with it historically, but it is not essential. In the seventeenth and eighteenth centuries, empiricists tended to accept some version of the association of ideas as their theory of how the mind works, and how learning takes place. This governed their view of how individuals acquire their knowledge (that is, from experience, and not from the inheritance of innate ideas, or instinct). Today’s empiricists are not bound to accept this, and they generally make an important distinction between the process of gaining or acquiring knowledge (a matter for psychology) and the process of testing whether beliefs or hypotheses (however we acquired them) are true. In the terminology of Karl Popper, this is the distinction between the ‘context of discovery’ and the ‘context of justification’.

 

The second doctrine of empiricism is at the core of this philosophical approach. The basic point the empiricists are making is that if you want us to accept any claim as true, you should be able to state what the evidence for it is. If you can go on claiming it is true whatever evidence turns up, then you are not making a factual statement at all. If the manufacturer of a food additive claims that it is safe for human consumption, but cannot give evidence that anyone has yet consumed it, we would expect the official body concerned with food safety standards to refuse to accept their assurances. If they then provide results of tests on animal and subsequently human consumers of the product which show unexpected instances of symptoms of food-poisoning, but continue to insist the product is safe, we might start to suspect that they are not interested in the truth, but solely in selling the product. Thus far, this doctrine of empiricism accords very closely with widely held (and very reasonable!) intuitions.

 

It is important to note that our statement of the second doctrine of empiricism could be misleading. For empiricism, a statement can be accepted in this sense as genuine knowledge, or as scientific, without being true. The important point is that statements must be capable of being shown to be true or false, by referring to actual or possible sources of evidence. On this criterion, ‘The moon is made of green cheese’ is acceptable, because it can be made clear what evidence of the senses will count for it, and what evidence will count against it. A statement such as ‘God will reward the faithful’ is ruled out because it cannot be made clear what evidence would count for or against it, or because believers continue to believe in it whatever evidence turns up. This latter possibility is significant, since for some empiricists the testability of a statement is not so much a matter of the properties of the statement as of the way believers in it respond to experiences which appear to count against it.

 

But once we recognize that there might be a choice about whether to give up our beliefs when we face evidence which seems to count against them, this raises problems about what it is to test a belief, or knowledge-claim. In a recently reported case, it was claimed by a group of researchers that rates of recovery of patients suffering from a potentially fatal disease who were undergoing additional treatment at a complementary clinic were actually worse than those of patients not undergoing this treatment. This appeared to be strong evidence that the treatment was ineffective, if not actually harmful. Would it have been right for the clinic to have accepted these findings, and to have closed down forthwith? In the event, subsequent analysis of the data suggested that patients selected for the additional treatment had, on average, poorer prognoses than those who were not. They were, in any case, less likely to recover, so that the research did not, after all, show the treatment to be ineffective or even harmful. Even had advocates of the ‘complementary’ treatment not been able to show this weakness in the research design, they might well have argued that a more prolonged investigation, or one which included the results of a number of different clinics offering the same sort of treatment, might have come up with more favourable evidence.

 

In this case, a potentially beneficial treatment might have been abandoned if its advocates had been too ready to accept apparent evidence against it. On the other hand, to keep hanging on to a belief against repeated failure of test-expectations starts to look suspicious. However, because tests rarely, if ever, provide conclusive proof or disproof of a knowledge-claim, judgement is generally involved in deciding how to weigh the significance of new evidence.

 

In practice it can be very difficult to see where to draw the line between someone who is being reasonably cautious in not abandoning their beliefs, and someone who is dogmatically hanging on to them come what may.

This is a big problem for the empiricist philosophers of science who want a sharp dividing line between science and pseudo-science, and want to base it on the criterion of ‘testability’ by observation or experiment. To preserve the distinctive status of scientific knowledge-claims they need to reduce the scope for legitimate disagreement about how to weigh evidence for or against a hypothesis. There are two obvious ways of doing this. One is to be very strict about what can count as a hypothesis, or scientific statement, so that the knowledge-claims it makes are very closely tied to the evidence for or against it. A general statement which just summarizes descriptions of direct observations might satisfy this requirement. A standard textbook example is ‘All swans are white.’ This is supported by every observation of a white swan, and actually disproved by any single observation of a non-white swan.

 

This example can also be used to illustrate the second way of tightening up on testability. If we consider the implications of the claim that all swans are white, it is clear that it is about an indefinitely large class of possible observations. Someone interested in testing it could go out and observe large numbers of swans of different species, in different habitats and in different countries. The more swans observed without encountering a non-white one, the more confidence the researcher is likely to have that the universal statement is true: each successive observation will tend to add to this confidence, and be counted as confirmation. This seems to be common sense, but, as we will see, there are serious problems with it. However, for empiricist philosophers of science, the issue is seen as one of finding a set of rules which will enable us to measure the degree of confidence we are entitled to have in the truth of a knowledge-claim (the degree of confirmation it has) on the basis of any given finite set of observations. A great deal of ingenuity has gone into applying mathematical probability theory to this problem.
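One classical illustration of how probability theory can be used to grade such confidence is Laplace’s ‘rule of succession’. The sketch below is a toy example only – it is not the empiricists’ own confirmation theory, and the function name is ours:

```python
# A minimal sketch of the idea of a 'degree of confirmation': Laplace's
# classical rule of succession, one simple way of using probability theory
# to grade confidence on the basis of a finite run of observations. It is an
# illustration only, not the specific confirmation measures developed by
# Carnap and other logical positivists.

def probability_next_is_white(n_white_observed: int) -> float:
    """Probability that the next swan observed is white, given that
    n_white_observed swans have been seen so far and all were white
    (Laplace's rule of succession, assuming a uniform prior)."""
    return (n_white_observed + 1) / (n_white_observed + 2)

for n in (1, 10, 100, 10_000):
    print(f"after {n:>6} white swans: "
          f"P(next swan is white) = {probability_next_is_white(n):.4f}")

# Note that this grades confidence in the *next* observation only; however
# many white swans we have seen, the probability never reaches 1, and the
# universal claim 'All swans are white' covers indefinitely many cases.
```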

 

The third doctrine of empiricism was initially meant to rule out as unscientific appeals to God’s intentions, or nature’s purposes, as explanatory principles. Darwin’s explanation of the adaptive character of many features of living organisms in terms of differential reproduction rates of random individual variations over many generations made it possible to explain the appearance of design in nature without reference to God, the designer. But in many scientific, or would-be scientific, disciplines, researchers appeal to entities or forces which are not observable. Newton’s famous law of universal gravitation, for example, has been used to explain the orbit of the earth around the sun, the orbit of the moon, the motion of the tides, the path of projectiles, the acceleration of freely falling bodies near the earth’s surface and many other things. However, no one has ever seen gravity. It has been similar with the theory that matter is made up of minute particles, or atoms. This theory was accepted as scientific long before instruments were developed to detect atomic- and molecular-level processes. And even now that such instruments have been developed, the interpretation of observations and measurements made with them depends on theoretical assumptions – including the assumption that the atomic view of matter is true!

 

Other appeals to unobservable entities and forces have not been accepted. These include the view, widely held among biologists until the middle of the last century, that there were fundamental differences between living and non-living things. Living things displayed ‘spontaneity’, in the sense that they did not behave predictably in response to external influences, and they also showed something like ‘purposiveness’ in the way individuals develop from single cells to adult organisms. These distinctive features of living things were attributed, by ‘vitalist’ biologists, to an additional force, the ‘vital force’. The opponents of this view had several different criticisms of it. Some were philosophical materialists in their ontology, and were committed to finding explanations in terms of the chemistry of living things. But the vitalists were also criticized in empiricist terms for believing in unobservable forces and ‘essences’. More recently, the empiricists have directed their attention to psychoanalysis as a pseudo-science which postulates unobservable entities such as the unconscious, the superego and so on (Cioffi 1970; Craib 1989).

 

The fourth doctrine of empiricism is its account of the nature of scientific laws. It is acknowledged that a very large part of the achievement of modern science is its accumulation of general statements about regularities in nature. These are termed ‘scientific laws’, or ‘laws of nature’. We have already mentioned Newton’s law of gravitation. Put simply, this states that all bodies in the universe attract each other with a force that is proportional to their masses, but also gets weaker the further they are apart. Not all laws are obviously universal in this way. For example, some naturally occurring materials are unstable and give off radiation. The elements concerned (such as uranium, radium and plutonium) exist in more than one form. The unstable form (or ‘isotope’) tends to emit radiation as its atoms ‘decay’. Depending on the isotope concerned, a constant proportion of its atoms will decay over a given time period. The law governing radioactive decay for each isotope is therefore statistical, or probabilistic, like a lot of the generalizations that are familiar in the social sciences. A common way of representing this is to state the time period over which, for each isotope, half of its atoms undergo decay. So, the half-life of uranium-235 is 700 million years, that of radon-220 a mere 52 seconds. Of course, this can also be represented as a universal law in the sense that each and every sample of radon-220 will show the same statistical pattern.
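For readers who want the two laws just mentioned in symbols, they can be written as follows (a standard-notation sketch; the symbols are not taken from the text itself):

```latex
% Newton's law of universal gravitation: the attractive force F between two
% bodies of masses m_1 and m_2 a distance r apart (G is the gravitational
% constant).
\[
  F = G\,\frac{m_1 m_2}{r^{2}}
\]

% The statistical law of radioactive decay: starting from N_0 atoms of an
% isotope with half-life T_{1/2}, the expected number remaining after time
% t is
\[
  N(t) = N_0 \left(\tfrac{1}{2}\right)^{t/T_{1/2}}
\]
```

On the figures cited above, any sample of radon-220 is expected to lose half of its atoms every 52 seconds or so, whatever the size of the sample – which is the sense in which the statistical law is nonetheless universal.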

 

In biology, it is harder to find generalizations which can count as universal in the same way. One of the best-known examples is provided by the work of the nineteenth-century Augustinian monk Gregor Mendel. He was interested in explaining how the characteristics of organisms get passed on from generation to generation. He did breeding experiments on different varieties of pea plants, using pairs of contrasting characteristics, or ‘traits’, such as round- versus wrinkled-seed shapes, and yellow versus green colour. He showed that the offspring of cross-breedings did not, as might be expected, show blending of these characters. On the contrary, the offspring in successive generations showed definite statistical patterns of occurrence of each of the parental traits. These statistical patterns are Mendel’s laws, and Mendel is generally acknowledged as the founder of modern genetics.

 

However, Mendel did not stop at simply making these statistical generalizations. He reasoned back from them to their implications for the nature of the process of biological inheritance itself. His results showed that some factor in the reproductive cells of the pea plants is responsible for each of the traits, that this factor remains constant through the generations, and that when two different factors are present in the same cell (as must be the case for at least some of the offspring of cross-breeding), only one of them is active in producing the observed trait. Subsequently, it became conventional to refer to these factors as ‘genes’, and to distinguish between ‘dominant’ and ‘recessive’ genes according to which trait was produced when the genes for both were present together. This way of thinking also led to an important distinction between two different ways of describing the nature of an organism: in terms of its observable characteristics or traits (the phenotype), and in terms of its genetic constitution (the genotype).
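To see the kind of statistical pattern Mendel’s laws describe, here is a rough simulation of a cross between two hybrid parents; the allele letters and sample size are illustrative choices of ours, not Mendel’s own notation:

```python
# A rough simulation of the statistical pattern in a Mendelian monohybrid
# cross. Two heterozygous parents ('Aa', where 'A' is the dominant factor)
# are crossed many times; the expected ratio of dominant to recessive
# phenotypes among the offspring is roughly 3:1.
import random

random.seed(1)

def offspring_phenotype() -> str:
    """Each parent passes on one of its two factors at random."""
    allele_from_parent_1 = random.choice("Aa")
    allele_from_parent_2 = random.choice("Aa")
    genotype = allele_from_parent_1 + allele_from_parent_2
    # The dominant trait shows whenever at least one 'A' is present.
    return "dominant" if "A" in genotype else "recessive"

counts = {"dominant": 0, "recessive": 0}
for _ in range(10_000):
    counts[offspring_phenotype()] += 1

print(counts)                                               # roughly 7500 : 2500
print("ratio:", counts["dominant"] / counts["recessive"])   # close to 3
```

Run repeatedly, the dominant-to-recessive ratio hovers around 3:1 – a statistical regularity of just the kind Mendel reported for his pea crosses.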

 

With these examples of scientific generalizations in mind, we can see how well or badly the empiricist view fits them. As we saw above, empiricists are committed to accepting as scientific only those statements which are testable by observation or experiment. The most straightforward way to meet this requirement, we saw, was to limit scientific generalizations to mere summaries of observations. But it would be hard to represent Newton’s law of universal gravitation in this way. For one thing, the orbits of the earth and planets around the sun are affected to some degree by the gravitational forces of bodies outside the solar system. These forces have to be treated as constant, or for practical purposes as irrelevant, if the pattern of motions within the solar system is to be analyzed as the outcome of gravitational attractions operating between the sun and the planets, and among the planets themselves. The law of universal gravitation is therefore not a summary of observations, but the outcome of quite complex calculations on the basis of both empirical observations and theoretical assumptions. Moreover, it could be arrived at only by virtue of the fact that the solar system exists as a naturally occurring closed system, in the sense that the gravitational forces operating between the sun and planets are very large compared with external influences.

 

But Newton’s law cannot be treated as a mere summary of observations for another reason, namely that it applies to the relationship between any bodies in the universe. The scope of the law, and so the range of possible observations required to conclusively establish its truth, is indefinitely large. No matter how many observations have been made, it is always possible that the next one will show that the law is false. It is, of course, also the case that we cannot go back in time to carry out the necessary measurements to find out if the law held throughout the past history of the universe. Nor will we ever know whether it holds in parts of the universe beyond the reach of measuring instruments. In fact, subsequent scientific developments have modified the status of Newton’s law to an approximation with restricted scope. However, it is arguable that if the law had not made a claim to universality, then the subsequent progress of science in testing its limitations and so revising it could not have taken place.

 

This suggests that it is in the nature of scientific laws that they make claims which go beyond the necessarily limited set of observations or experimental results upon which they are based. Having established that the half-life of radon-220 is 52 seconds from a small number of samples, scientists simply assume that this will be true of any other sample. As we will see, this has been regarded as a fundamental flaw in scientific reasoning. It simply does not follow logically, from the fact that some regularity has been observed repeatedly and without exception so far, that it will continue into the future. The leap that scientific laws make from the observation of a finite number of examples to a universal claim that ‘always’ this will happen cannot be justified by logic. This problem was made famous by the eighteenth-century Scottish philosopher David Hume, and it is known as the problem of ‘induction’. A common illustration (not unconnected with Newton’s law) is that we all expect the sun to rise tomorrow because it has always been observed to do so in the past, but we have no logical justification for expecting the future to be like the past. In fact, our past observations are simply a limited series, and so the logic is the same as if we were to say ‘It has been sunny every day this week, so it will be sunny tomorrow,’ or ‘Stock markets have risen constantly for the last ten years, so they will carry on doing so.’
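Hume’s point can be put schematically, using the swan example and standard logical notation (our shorthand, not the author’s; S(x) = ‘x is a swan’, W(x) = ‘x is white’):

```latex
% The inductive leap: finitely many observations of white swans do not
% deductively entail the universal law (requires amsmath/amssymb).
\begin{align*}
  &S(a_1)\wedge W(a_1),\;\; S(a_2)\wedge W(a_2),\;\; \ldots,\;\; S(a_n)\wedge W(a_n)\\
  &\quad\nvdash\;\; \forall x\,\big(S(x)\rightarrow W(x)\big)
\end{align*}
% However large n is, it remains logically possible that the next swan
% observed is not white.
```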

 

As we saw above, a possible response to this problem for empiricists is to resort to a relatively weak criterion of testability, such that statements can be accepted as testable if they can be confirmed to a greater or lesser degree by accumulated observations. Intuitively, it seems that the more observations we have which support a universal law, without encountering any disconfirming instances, the more likely it is that the law is true. Unfortunately, this does not affect the logic of the problem of induction. No matter how many confirming instances we have, they remain an infinitesimally small proportion of the indefinitely large set of possible observations implied by a universal claim. So, in the terms allowed by empiricism, it seems that we are faced with a dilemma: either scientific laws must be excluded as unscientific, or it has to be accepted that science rests on an untestable and metaphysical faith in the uniformity and regularity of nature.

 

This brings us to the empiricist account of what it is to explain something scientifically. Let us take a biological example. Some species of dragonfly emerge early in the spring. Unlike later-emerging species, they generally exhibit what is called ‘synchronized emergence’. The immature stages or ‘nymphs’ live underwater, but when they are ready to emerge they climb out of the water and shed their outer ‘skin’ to become air-breathing, flying, adult dragonflies. In these species, local populations will emerge together over a few days, even in some cases in one night. How can this be explained? The current view is that larval development ceases over the winter (a phenomenon known as ‘diapause’), leaving only the final stage of metamorphosis to be completed in the spring. A combination of increasing day length and reaching a certain temperature threshold switches on metamorphosis so that each individual emerges at more or less the same time. To explain why a particular population of a particular species emerged on a particular night would involve a pattern of reasoning somewhat like this:

 

Emergence is determined by day length d combined with temperature t.

 

On 17 April, population p was exposed to temperature t, and day length d had already been passed.

————

 

Therefore: population p emerged on 17 April.

 

This could fairly easily be stated more formally as a logically valid argument, in which the premisses include the statement of a general law linking temperature and day length with emergence and particular statements specifying actual day lengths and temperatures. The conclusion is the statement describing the emergence of the dragonflies – the event we are trying to explain. The ‘covering law’, combined with the particular conditions, shows that the event to be explained was to be expected.
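Written out schematically, the argument has the deductive-nomological (‘covering law’) form; the predicate letters below are our own shorthand for the text’s example, not the author’s:

```latex
% D(x): x has passed the critical day length d; T(x): x has reached the
% temperature threshold t; M(x): x emerges (requires amsmath).
\begin{align*}
  &\text{Law:}        && \forall x\,\big[(D(x)\wedge T(x))\rightarrow M(x)\big] \\
  &\text{Conditions:} && D(p)\wedge T(p) \\
  &\text{Therefore:}  && M(p)
\end{align*}
```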

 

This analysis of the logic of scientific explanation also enables us to see why there is a close connection between scientific explanation and prediction. If we know an event has happened (for example, the dragonflies emerged on 17 April), then the law plus the statement of the particular circumstances (day length and temperature in this case) explains it. If, on the other hand, the emergence has not yet happened, we can use our knowledge of the law to predict that it will happen when the appropriate ‘initial conditions’ are satisfied. Knowledge of a scientific law can also be used to justify what are called ‘counterfactual’ statements. For example, we can say that the dragonflies would not have emerged if the temperature had not reached the threshold, or if they had been kept under artificial conditions with day length kept constant below d. And these counterfactuals can then be used in experimental tests of the law.

 

Again, what is clear from these examples is that a scientific law makes claims which go beyond the mere summary of past observations. If the event to be explained was already part of the observational evidence upon which the law was based, then the ‘explanation’ of the event would add nothing to what was already known. Similarly, if the law were treated simply as a summary of past observations, it would not provide us with any grounds for prediction. This point can be made clear by distinguishing between scientific laws, on the one hand, and mere ‘contingent’ or ‘accidental’ generalizations, on the other. The standard example, ‘All swans are white,’ is just such a ‘contingent generalization’. It just so happened that until Western observers encountered Australian swans they had only seen white ones. There was no scientific reason – only habit or prejudice – for expecting swans in another part of the world to be white. To call a generalization a law is to say that it encapsulates a regularity which is more than just coincidence: exceptions are ruled out as impossible, events ‘must’ obey the law and so on.

 

As we have seen, this presents problems for a thoroughgoing empiricist, since claims as strong and as wide in scope as those made by scientific laws cannot be conclusively tested by observation and experiment. One way out of this was recognized by the philosopher Karl Popper, and it formed the basis for a quite different approach to the nature of science (see Popper 1963, 1968). Popper pointed out the fundamental difference between confirming, or proving, the truth of a scientific law, on the one hand, and disproving or ‘falsifying’ it, on the other. Any number of observations of dragonfly emergence which were consistent with the law would still not prove it to be true, but a single case of dragonflies emerging at lower temperatures, or during shorter day lengths, would be enough to conclusively disprove the law. On this basis, Popper argued that we should not see science as an attempt to establish the truth of laws, since this can never be done. Instead, we should see science as a process whereby researchers use their creative imaginations to suggest explanations – the more implausible the better – and then set out systematically to prove them false. The best that can be said of current scientific beliefs is that they have so far not been falsified. So, for Popper, the testability of a statement is a matter of whether it is open to falsification.
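The logical asymmetry Popper relies on can be put in the same shorthand as the induction schema above: while no finite run of confirmations entails the law, a single counter-instance refutes it deductively.

```latex
% One non-white swan refutes the universal law by modus tollens
% (requires amssymb).
\[
  S(b)\wedge\neg W(b) \;\vdash\; \neg\,\forall x\,\big(S(x)\rightarrow W(x)\big)
\]
```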

 

Unfortunately, as Popper himself acknowledged, this doesn’t solve all the problems. As we saw above, evidence which appears to count against a belief or even to disprove it may itself be open to question. Countless experiments conducted in school science labs ‘disprove’ basic laws of electricity, magnetism, chemistry and so on, but scientists don’t see this as a reason for abandoning them. The assumption is that there were technical defects in the way the experiments were set up, instruments were misread or results were wrongly interpreted. Whether we view testability as a matter of verification or falsification, it cannot be avoided that judgements have to be made about whether any particular piece of evidence justifies abandonment or retention of existing beliefs. For this reason, Popper argued that in the end the distinguishing feature of science was not so much a matter of the logical relation between hypotheses and evidence as one of the normative commitment of researchers to the fallibility of their own knowledge-claims.

 

The empiricist aim of establishing the distinctive character and status of science implies separating out types of statements which can be scientific from those which cannot. We already saw that this means excluding statements which look like factual statements, but in the empiricist view are not, because they are not testable by experience (for example, statements of religious belief, utopian political programmes and so on). Moral or ethical judgements pose special problems for empiricists. They are not obviously factual, but when someone says that torture is evil, for example, they do seem to be making a substantive statement about something in the world.

 

Empiricists have tended to adopt one or another of two alternative approaches to moral judgements. One is to accept them as a special kind of factual judgement, by defining moral concepts in terms of observable properties. Utilitarian moral theory is the best-known example. In its classical form, utilitarianism defines ‘good’ in terms of ‘happiness’, which is defined, in turn, in terms of the favourable balance of pleasure over pain. So, an action (or rule) is morally right if it tends to optimize the balance of pleasure over pain across all sentient beings.

 

However, in more recent empiricist philosophy of science it has been much more common to adopt the alternative approach to moral judgements. This is to say that they get their rhetorical or persuasive force from having a grammatical form which makes us think they are saying something factual. But this is misleading, as all we are really doing when we make a moral judgement is expressing our subjective attitudes towards, or feelings about, whatever is being judged. This, interestingly, implies that there are no generally obligatory moral principles, and so leads to the position referred to in Chapter 1 as moral relativism.

 

Positivism and Sociology

 

The nineteenth-century French philosopher Auguste Comte is generally credited with coining both the terms ‘positivism’ and ‘sociology’ (see Andreski 1974; Keat and Urry 1975; Benton 1977; Halfpenny 1982). Comte was very much influenced in his early days by the utopian socialist Saint-Simon, and he went on to develop his own view of history as governed by a progressive shift from one type of knowledge, or belief-system, to another. There are three basic stages in this developmental process. The initial, theological stage gives way to the metaphysical, in which events are explained in terms of abstract entities. This, in turn, is surpassed by the scientific stage, in which knowledge is based on observation and experiment. Writing in the wake of the French Revolution, and desiring the return of normality and social stability, Comte was inclined to explain continuing conflict and disorder in terms of the persistence of outdated metaphysical principles such as the rights of man. Such concepts and principles were effective for the ‘negative’ task of criticizing and opposing the old order of society, but in the post-revolutionary period what was needed was ‘positive’ knowledge for rebuilding social harmony.

 

This positive knowledge was, of course, science. However, the problem as Comte saw it was that each branch of knowledge goes through the three stages, but that they don’t all reach scientific maturity at the same time. Astronomy, physics, chemistry and biology had all, Comte argued, arrived at the scientific stage, but accounts of human mental and social life were still languishing in the pre-scientific, metaphysical stage. The time was now ripe for setting the study of human social life on scientific foundations, and Comte set out to establish ‘social physics’, or ‘sociology’, as a scientific discipline. Since Comte’s day the term ‘positivism’ has been used extensively to characterize (often with derogatory connotations) approaches to social science which have made use of large data sets, quantitative measurement and statistical methods of analysis. We will try to use the term in a more precise and narrow sense than this, to describe those approaches which share the following four features:

 

 

1.       The empiricist account of the natural sciences is accepted.

 

2.       Science is valued as the highest or even the only genuine form of knowledge (since this is the view of most modern empiricists, it could conveniently be included under 1).

 

3.       Scientific method, as represented by the empiricists, can and should be extended to the study of human mental and social life, to establish these disciplines as social sciences.

 

4.       Once reliable social scientific knowledge has been established, it will be possible to apply it to control, or regulate, the behaviour of individuals or groups in society. Social problems and conflicts can be identified and resolved one by one on the basis of expert knowledge offered by social scientists, in much the same way as natural scientific expertise is involved in solving practical problems in engineering and technology. This approach to the role of social science in projects for social reform is sometimes called ‘social engineering’.

 

There are several reasons why positivists might want to use the natural sciences as the model for work in the social sciences. The most obvious one is the enormous cultural authority possessed by the natural sciences. Governments routinely take advice on difficult matters of technical policy-making, from food safety to animal welfare and building standards, from committees largely composed of scientific experts. In public debate (until quite recently – see Beck 1992) scientists have had a largely unchallenged role in media discussions of such issues. Social scientists might well want to present their disciplines as sufficiently well established for them to be accorded this sort of authority. Not unconnected with this is the still controversial status of the social sciences within academic institutions. Strong claims made by social scientists about the reliability, objectivity and usefulness of the knowledge they have to offer may be used to support their claim to be well represented in university staffing and research council funding for their research. This was, of course, of particular significance in the nineteenth-century heyday of positivism when the newly emerging social sciences were still struggling for recognition.

That positivists should have accepted the empiricist account of science is not surprising, given the pre-eminence of this view of science until relatively recent times, and given its clear justification for science’s superiority over other forms of belief-system. However, the positivist commitment to extending scientific method to the human sciences is more obviously contestable. In later chapters (particularly 5, 6 and 7) we will consider in detail some of the most powerful arguments against this positivist doctrine, but for now we will just consider the case for it. We will use some of the work of Durkheim as our example here, but it is important to note that we are not claiming that Durkheim was himself a positivist (see Lukes 1973; Pearce 1989; Craib 1997). For our purposes in this book, it is enough that he shares some important features with positivists, and these are what we will focus on.

 

In his classic work on suicide (Durkheim 1897, 1952), Durkheim drew on a vast array of statistical sources to show that there were consistent patterns in suicide rates. He showed that these patterns could not be accounted for in terms of a series of non-social factors, such as race, heredity, psychological disorder, climate, season and so on. He then went on to show that they could be accounted for in terms of variations in religious faith, marital status, employment in civilian or military occupations, sudden changes in income (in either direction) and so on. Table 1 shows the pattern for religious faith.

 

Although there are variations in suicide rates over time in each country, comparison between countries shows remarkable constancy – some countries having consistently higher or lower rates than others. Similarly with religious confession: though absolute rates vary a great deal for the same faith in different countries, there is constancy in that in each country Protestants have higher rates than Catholics, and Catholics higher rates than Jews. Durkheim argues that this pattern cannot be explained in terms of doctrinal differences between the religions, but, rather, is a consequence of the different ways the churches relate to individual followers:

 

If religion protects a man from the desire for self-destruction, it is not that it preaches the respect for his own person to him . . . but because it is a society. What constitutes this society is the existence of a certain number of beliefs and practices common to all the faithful, traditional and thus obligatory. The more numerous and strong these collective states of mind are, the stronger the integration of the religious community, and also the greater its preservative value. The details of dogmas and rites are secondary. The essential thing is that they be capable of supporting a sufficiently intense collective life. And because the Protestant church has less consistency than the others it has less moderating effect upon suicide. (Durkheim 1952: 170)

 

In his book on suicide, and his methodological classic The Rules of Sociological Method (1895, 1982), Durkheim uses a series of arguments to establish that society is a reality in its own right. The facts, ‘social facts’, of which this reality is made up exist independently of each individual, and exert what he calls a ‘coercive power’ over us. For example, each individual is born into a society whose institutions and practices are already in existence. Each of us, if we are to participate in our society, communicate with others and so on, must learn the necessary skills, including those involved in speaking and understanding the local language. In this sense, as well as in more obvious respects, we are coerced into following the established rules of our ‘social environment’, or ‘milieu’. There is a particularly powerful statement of this towards the end of Suicide:

 

[I]t is not true that society is made up only of individuals; it also includes material things, which play an essential role in the common life. The social fact is sometimes so far materialized as to become an element of the external world. For instance, a definite type of architecture is a social phenomenon; but it is partially embodied in houses and buildings of all sorts which, once constructed, become autonomous realities, independent of individuals. It is the same with avenues of communication and transportation, with instruments and machines used in industry or private life which express the state of technology at any moment in history, of written language, and so on. Social life, which is thus crystallized, as it were, and fixed on material supports, is by just so much externalized, and acts upon us from without. Avenues of communication which have been constructed before our time give a definite direction to our activities. (Durkheim 1952: 314)

 

This is enough for Durkheim to show that there is an order of facts, social facts, which are distinct from facts about individual people and their mental states, or biological characteristics. This class of facts, most obviously detected through the analysis of statistical patterns, justifies the existence of a distinct science – sociology – which takes it for its subject-matter. This science, having its own distinct subject-matter, will not be reducible to biology, or to psychology.

However, a further step in the argument is required. As practising participants in social life, all of us, it could be argued, already possess knowledge of it – this seems to be implied in Durkheim’s own argument. If this is so, why do we need a specialist science to tell us what we already know? In answer to this Durkheim could point out that his analysis of statistical patterns in the occurrence of suicide had come up with results which most people would find surprising. This apparently most individual and lonely of acts, when studied sociologically, turns out to be determined by variable features of the social environment. In the Rules of Sociological Method he offers us a more general argument. As the facts of social life exist prior to each individual, are independent of their will, and exert a coercive power, they resemble facts of nature. We all interact with natural materials and objects, and we do so through ‘lay’ or common-sense understandings of their properties, but just because of this we would not generally claim that there was no need for natural science. The history of the natural sciences shows innumerable instances of common-sense beliefs being corrected in the face of new scientific evidence and theory. So why should we assume that common-sense assumptions and prejudices give us reliable knowledge of the social world? If, in general, science progresses by increasingly distancing itself from common-sense assumptions, and gaining deeper understanding of its subject-matter, we should expect this to be true of the social sciences too.

 

Finally, some brief comments are due on the fourth doctrine of positivism – the proposal to apply social scientific knowledge in social policy-making. This view of the public role of social science has continued to be very widely held, and it provides yet another justification for extending the methods of the natural sciences into the study of society. Only on the basis of the sorts of claims to quantitative reliability, objectivity and general applicability already made by the natural sciences could the social sciences expect to be taken seriously by policy-makers. Today in most countries official statistics are collected on virtually all aspects of social and economic life – on patterns of ill-health and death, on marriage and divorce, on unemployment, income differentials, attitudes and values, consumption patterns and so on – and social scientists are employed to collect and interpret these, as well as to give advice on policy implications (in the UK, such publications as Social Trends and British Social Attitudes contain selections from such statistical surveys).

 

The logical form of a scientific explanation as represented in the empiricist ‘covering law’ model shows how the link between such knowledge and policy might be made. To oversimplify considerably, the statistics might show that criminal behaviour by juveniles was more common among the children of divorced parents. This is not a universal law, but a statistical generalization (though the required element of universality might be present, if it is held that this statistical generalization holds across different cultures and historical periods). However, the basic structure of a scientific explanation can be maintained:

If there are high divorce rates then there will be high rates of juvenile crime.

 

Divorce rates are high.

 

————

 

Therefore: There are high rates of juvenile crime.

 

If policy-makers are convinced by public opinion that high rates of juvenile crime are a bad thing, and are charged with coming up with policies to reduce them, then this piece of scientific explanation will yield the policy recommendation to take action to reduce divorce rates. Of course, there are some obvious complications here. One is that a mere statistical association between divorce rates and juvenile crime does not show that one causes the other. It could be that some third social fact, such as unemployment rates, causes both high rates of divorce and juvenile crime. A policy of dealing with unemployment therefore might be more effective than trying to do something about divorce. But there might be more subtle problems with the statistical association. It might, for example, be that the association of juvenile crime with divorce holds only where divorce is stigmatized by prevailing values. If this were so, then the appropriate policy might be to work for a cultural shift in favour of more liberal social values. However, none of this counts against the positivist notion of ‘social engineering’ as such. Each of these possibilities can in principle be addressed by more exact data-gathering, and more sophisticated analytical methods. There are, however, other lines of criticism, which we will explore in the next chapter.
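The ‘third social fact’ point can be illustrated with a deliberately artificial simulation, in which a common cause (standing in for unemployment) generates an association between divorce and juvenile crime even though neither influences the other; every number and variable name here is invented for the illustration:

```python
# A synthetic illustration of a spurious association produced by a common
# cause: across imaginary districts, a latent 'unemployment' variable drives
# both divorce rates and juvenile crime rates, so the two are correlated
# even though neither causes the other. All figures are invented.
import random

random.seed(0)

districts = []
for _ in range(500):
    unemployment = random.uniform(2, 15)            # per cent, hypothetical
    divorce_rate = 3 + 0.4 * unemployment + random.gauss(0, 1)
    juvenile_crime = 10 + 1.5 * unemployment + random.gauss(0, 3)
    districts.append((divorce_rate, juvenile_crime))

def correlation(pairs):
    """Pearson correlation coefficient for a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# A strong positive association appears, produced wholly by the common cause.
print(f"correlation(divorce, juvenile crime) = {correlation(districts):.2f}")
```

The association is real enough as a statistical pattern, but acting on divorce rates alone would do nothing to the crime rates in this toy world – which is why dealing with unemployment might be the more effective policy, as the text suggests.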

 

 

 

 

 

 

 

 

 
