Second law of thermodynamics. The second law of thermodynamics may not seem like an issue about the nature of efficient causation. Its main philosophical implication is usually portrayed as the discovery of the inevitability of the so-called “heat death” of the universe. But since it is a global regularity about change, it does describe states of affairs that are temporally related, like efficient cause and effect, and having seen how it is related to the other global regularities, we can see that it is involved in every connection between efficient causes and their effects. Dispositions, such as the fragility of an object that shatters, which are the paradigm case of efficient causation, are irreversible structural global regularities, and structural causes doing work are the stuff of which reproductive cycles and their ontological effects are made. To start with the second law of thermodynamics is, therefore, to go to the heart of the problem with apparently irreducible laws.

The received explanation of thermodynamics, statistical mechanics, is often cited as a successful reduction of one theory to physics, but it is not completely successful in reducing these laws to the basic laws of physics. It is undoubtedly correct in taking heat energy to be the kinetic energy of the constituent molecules on the micro level, but statistical mechanics is not a reduction of the second law of thermodynamics to the basic laws of physics, because those laws do not entail it.

The problem with the materialist reduction of the simplest case of entropy increase can be suggested by a very abstract puzzle about the direction of change in time. The second law of thermodynamics describes a regularity about change that is asymmetrical in time. But all the more basic laws of physics to which it would be reduced are temporally symmetrical. That is, the basic laws of physics can tell us, given the state of a system, how it will unfold over time. But those laws are just as valid for another system, just like the first, except that the objects (and photons) all have exactly opposite momentums. And they imply that the second system will unfold as if time were reversed in the first system. Thus, the basic laws of physics are symmetrical in time. But the second law of thermodynamics is not. It denies that time could be reversed. Entropy cannot decrease over time in an isolated system; it can only increase. The problem is how a time-asymmetrical law can be derived from time-symmetrical laws. This is sometimes called the puzzle about the “arrow of time.”[1] It is, as we shall see, the source of Loschmidt’s paradox.
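
The time symmetry at issue can be illustrated numerically. The following sketch is only an illustration, not part of the argument: it integrates two hypothetical point masses joined by a spring with a time-reversible method (velocity Verlet), runs the system forward, reverses every momentum, and runs it forward again under the same laws, whereupon it retraces its history back to the starting state. The masses, spring constant, and step size are arbitrary choices for the example.

import numpy as np

# Illustrative sketch only: two point masses coupled by a spring, integrated
# with velocity Verlet, which respects the time symmetry of the underlying
# dynamics.  All parameters are arbitrary.

def accelerations(x, k=1.0, m=1.0):
    f = -k * (x[0] - x[1])          # Hooke's-law force between the two masses
    return np.array([f, -f]) / m

def step(x, v, dt=0.01):
    a = accelerations(x)
    x_new = x + v * dt + 0.5 * a * dt**2
    v_new = v + 0.5 * (a + accelerations(x_new)) * dt
    return x_new, v_new

x, v = np.array([1.0, -1.0]), np.array([0.3, -0.1])
x0, v0 = x.copy(), v.copy()

for _ in range(5000):               # evolve forward in time
    x, v = step(x, v)
v = -v                              # reverse every momentum (Loschmidt's move)
for _ in range(5000):               # the same laws now retrace the history
    x, v = step(x, v)

print(np.allclose(x, x0), np.allclose(v, -v0))   # True True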

The time-asymmetry of the tendency to randomness has an obvious explanation, according to spatio-materialism, because it is a regularity about change in geometrical structures that holds of whole regions of dynamic processes over time. It is plausibly explained by space as an ontological cause, because both tendencies responsible for it are global changes in the direction of a geometrical structure that resembles that of space itself. Potential energy becomes kinetic energy, which becomes evenly distributed heat. It is the second tendency, the way in which kinetic energy is randomized, that is at issue in the reduction of the second law. What makes the tendency to randomness seem mysterious is overlooking the role of space itself.

Science does not recognize the existence of any substances not entailed by its efficient-cause explanations, and as we have seen, that means that space itself is not taken as a cause in explaining any phenomenon. Instead, physics gets by with affirming only the truth of highly mathematical laws of nature and using them to predict quantitatively precise measurements. Though in this case the mathematics is statistics, it still abstracts from the nature of space. Statistical mechanics is an attempt at a materialist reduction, rather than a spatiomaterialist reduction, and its inadequacy is shown by a paradox described by Loschmidt. The advantage of reducing the tendency to randomness to spatio-materialism can, therefore, be seen in how it removes that paradox.

It was Boltzmann who first showed that random states of closed or isolated systems of material objects could be analyzed statistically. He defined randomness for a gas contained in a box as a statistical equilibrium about the positions and momentums of its constituent molecules. Although the microstate of a gas depends on the positions and momentums of all its molecules, many different microstates are indistinguishable from a macroscopic standpoint, and Boltzmann’s idea was to measure the probabilities of different kinds of macrostates by the number of different microstates that could realize them. This makes sense statistically, if the possible microstates of a gas are all equally probable. But that requires a way of measuring how many different kinds of microstates would realize each kind of macrostate, and so Boltzmann introduced the notion of a six dimensional phase space to represent the state of each molecule in the gas. Three dimensions of phase space were used to represent its spatial location, and another three dimensions were used to represent its momentum in each of the three spatial dimensions, giving each molecule of the gas a certain location in six dimensional phase space. Thus, if this phase space were divided up into very many, equally sized cells, each molecule would be located in one or another of the cells of phase space (the limits of phase space being determined by the total energy of the gas and the size of its container). But since exchanging any two molecules in different cells of phase space would leave the gas in the same kind of macrostate, Boltzmann argued that the most probable macrostate of the gas would be the one in which the number of ways that molecules could be exchanged (or permuted) among the cells is maximum, for it would correspond to the largest number of different possible microstates. That state can be shown mathematically to be the one in which the molecules are most evenly distributed among the cells of six dimensional phase space. In that kind of macrostate, the molecules are said to be in statistical equilibrium.
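
Boltzmann’s way of counting can be made concrete with a bit of arithmetic. In the sketch below (illustrative numbers only, not anything from Boltzmann’s own treatment), the number of microstates that realize a macrostate placing n_i of N molecules in each of k cells is the multinomial coefficient N!/(n_1!···n_k!); of the candidate occupations of four cells by twelve molecules, the perfectly even one has by far the largest count.

from math import factorial, prod

# Number of microstates realizing a macrostate with the given cell occupations:
# the multinomial coefficient N! / (n_1! n_2! ... n_k!).  Purely illustrative.
def multiplicity(occupations):
    n = sum(occupations)
    return factorial(n) // prod(factorial(c) for c in occupations)

# Twelve molecules distributed among four equal cells of phase space.
for macro in [(12, 0, 0, 0), (9, 1, 1, 1), (6, 4, 1, 1), (3, 3, 3, 3)]:
    print(macro, multiplicity(macro))
# (12, 0, 0, 0)      1
# (9, 1, 1, 1)    1320
# (6, 4, 1, 1)   27720
# (3, 3, 3, 3)  369600   <- the most even distribution has the most microstates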

Boltzmann’s definition clearly refers to the same kind of macrostate that was described in explaining the tendency to randomness, because an even distribution of molecules among the cells of his six dimensional phase space is equivalent to an even spatial distribution in three dimensional space of the three causally relevant factors: (1) the locations of molecules of each rest mass, (2) their kinetic energies, and (3) their directions of momentum. But these are basically different ways of defining randomness. Boltzmann’s definition is statistical, whereas the definition of randomness we have been using is geometrical. And whereas Boltzmann’s explanation is based on the assumption that all the possible microstates of a gas are equiprobable, no such assumption is needed to define randomness as evenness in the distribution of each of the causally relevant factors in real space. That is, instead of using a six dimensional phase space to count possible microstates of certain kinds, we used a geometrical fact about the distribution of causally relevant factors in uniform, three dimensional space not only to define non-randomness, but also to explain why such systems evolve in the direction of randomness over time. The unevenness in the spatial distribution of any of those factors is what causes it to be evened out, because any such unevenness entails that certain (symmetrically interacting) molecules will be in asymmetrical situations, and that will make them interact in ways that tend to equalize their distribution in space. That tendency will continue until there is no longer any unevenness to drive it. It is a change in the geometrical structure of the whole region.
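
The geometrical alternative can be given an equally simple, if rough, numerical illustration: divide the region of real space into equal cells and measure how unevenly one causally relevant factor (here, just the molecules’ locations) is spread over them. The particle numbers, cell counts, and box size below are arbitrary; the snippet only contrasts a spread-out configuration with one confined to a corner of the box.

import numpy as np

rng = np.random.default_rng(0)

# A rough geometrical measure of non-randomness: the relative spread of the
# cell counts when the box is divided into equal cells.  Zero would mean a
# perfectly even distribution.  Arbitrary illustrative numbers throughout.
def unevenness(positions, cells=4, box=1.0):
    counts, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                                  bins=cells, range=[[0, box], [0, box]])
    return counts.std() / counts.mean()

n = 4000
spread = rng.uniform(0.0, 1.0, size=(n, 2))           # already randomized gas
clumped = 0.25 * rng.uniform(0.0, 1.0, size=(n, 2))   # gas confined to a corner

print(round(unevenness(spread), 2))    # small: close to an even distribution
print(round(unevenness(clumped), 2))   # large: a non-random, structured state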

The authority of mathematics may lead some contemporary naturalists to argue that Boltzmann’s statistical definition of randomness is just a mathematically more rigorous way of stating the geometrical definition. But it is not, for his six dimensional phase space is a mathematical abstraction that precludes explaining the tendency to randomness geometrically. To be sure, Boltzmann’s definition of randomness as a statistical equilibrium implies that it is overwhelmingly probable that any system we happen to examine will be random. But that does not explain why the system has a tendency to become more random over time. Indeed, his statistical explanation denies that there is any real tendency toward randomness, if that means that change really has a direction in time, for it holds only that such systems will almost always be found in random states, if one samples many of them at many different times. But that does not explain the tendency to randomness by showing that change really has that direction over time.

On the contrary, Boltzmann’s definition of randomness gives rise to Loschmidt’s reversibility paradox. The basic laws of physics are time-symmetrical, which means that, if the molecules all have the same locations, but exactly opposite momentums, change will take place as if time were reversed. That means, as Loschmidt pointed out, that for every non-random microstate that evolves toward randomness, there must be another microstate that evolves toward non-randomness. Indeed, since the statistics by which Boltzmann defines randomness assume that every possible microstate is equally probable, his definition implies that for every non-random microstate that evolves toward randomness, there must be another microstate—the one in which the momentums of all the molecules are exactly reversed—that proceeds towards the non-random state. Changes in either direction should occur equally often. But in fact, we never observe closed systems becoming non-random spontaneously.[2]

The basic source of Loschmidt’s reversibility paradox is overlooking space as an ontological cause. It was Boltzmann who first overlooked space when he argued that randomness is a “statistical equilibrium” about the molecules in the gas. And the reason our ontological explanation does not generate Loschmidt’s reversibility paradox is that it does not have to assume that all possible microstates of the system are equally probable. This is not to deny that, among the abstractly possible microstates that would appear to be random from the macroscopic standpoint, there are some that would evolve into non-random macrostates, if they occurred. That possibility is a consequence of the time symmetry of the basic laws of physics, which we accept as part of the essential nature of matter. But the geometrical explanation need not admit that such microstates ever actually occur as the result of the motion and interaction of molecules that are already random. Nor is that problematic, since no one has ever given a good reason to believe that all mathematically possible microstates are equally probable.[3]

Loschmidt’s paradox is a rigorous way of showing that the statistical definition of randomness does not explain the time-asymmetry of this most basic instance of the second law of thermodynamics. We can now see that his reversibility paradox comes from using a statistical approach that abstracts from the geometrical structure of space. Our ontological reduction of the tendency to randomness avoids Loschmidt’s paradox and explains why the change has a direction in time, because instead of relying on mathematical abstractions, it takes the wholeness of space into account as an ontological cause. The material objects (with their kinetic energies and directions of motion) have certain locations in the whole region, and that gives the region the geometrical structure as a whole which is, as we have seen, the cause of the tendency to randomness. Our ontological causes enable us to see intuitively why non-random states tend to become random over time. In the uniform geometrical structure of space, any unevenness in the distribution of causally relevant factors is a geometrical structure about the whole region of molecules that causes it to be evened out. It puts molecules in local situations where their motion and symmetrical, elastic interactions will add up over time in the structure of space to randomness, that is, toward their being evenly distributed on the micro level.

This is not to deny that Boltzmann’s statistical definition may provide thermodynamics with a useful way of measuring randomness (and lack of randomness) or representing them mathematically. Indeed, the confirmation of quantitative predictions of statistical mechanics suggests that it does. But a measure of randomness is not the same as an explanation of why systems tend to become random over time. For that, we must reduce the mathematical representations to spatio-materialist ontology.

This is to resolve one of the anomalies that arises in the program of reductionistic materialism, where it is assumed that regularities are explained by deducing them from the basic laws of physics, initial and boundary conditions, and relevant mathematical theorems. But as we can now see, the attempt to give an efficient-cause explanation of the second law of thermodynamics is the mistake. It requires an ontological explanation, that is, an explanation of the same kind that explains why the basic laws of physics are true. Those time-symmetrical physical laws are relevant in explaining this time-asymmetrical regularity, but only because they characterize the essential nature of the matter contained in the region of space. It is how such bits of matter work together with the wholeness of space that explains the tendency to randomness, for as we have seen, it is the geometrical structure about the distribution of any of the three causally relevant factors that puts material objects in situations where their behavior in accordance with physical laws will tend to even out their distribution, resulting in evenly distributed heat. Indeed, geometrical structures about the locations, motions and interactions of the material objects in which entropy can increase are what the geometrical structures of material objects must coincide with in order for them to use the free energy to do work.

The explanation of the second law of thermodynamics requires thinking outside the box. In this case, the box is the assumption that to explain is to give an efficient-cause explanation. What does not come under discussion in disputes about the status of the law of entropy increase is the assumption that any adequate explanation must fit the deductive-nomological model. It must be shown to follow from the laws of physics together with relevant initial and boundary conditions. And since there is nothing temporally asymmetrical about those laws (or the initial and boundary conditions), the second law of thermodynamics seems to be irreducible.

The time-asymmetry can be explained ontologically, because the ontological explanation replaces the laws of physics with matter of the appropriate kinds and recognizes that those bits of matter coincide with a substance with an opposite kind of essential nature. Though the regularities in the motion and interaction of such matter in space can be described by laws of physics using the language of mathematics, that is to abstract the local regularities about what happens in a spatiomaterial world like ours and to leave the global regularities behind. By bringing the ontological causes of the laws of physics to the surface, we recognize that they depend as much on the structure of space as they do on the nature of matter. But the structure of space entails its wholeness. All possible spatial relations among bits of matter fit together as part of the geometrical structure of space, and by seeing the distribution of the causally relevant factors (their locations, kinetic energies and directions of motion) against the background of the wholeness of space, we see it as a geometrical structure in the region as a whole. That is to recognize the efficient cause that produces the greater randomness, for it is that geometrical structure that puts material objects in situations where they tend to wipe out that very structure.

To be sure, this efficient cause is what is measured by the measure of statistical improbability developed by Boltzmann. But by abstracting geometrical structure into an arithmetic measure of randomness, Boltzmann hides the connection between this efficient cause and its effect. We can see how the geometrical distribution of causally relevant factors in the region tends to wipe itself out, because we have a faculty of rational imagination, which includes spatio-temporal and structuro-temporal imagination, and we understand how the motion and interactions of the material objects tend to change their spatial relations, kinetic energies and directions of motion. As time passes, it all adds up in the region to randomness. The connection between the efficient cause and its effect is necessary, because it is caused ontologically by the endurance of these substances through time. But this causal connection cannot be represented in a deductive-nomological explanation, because the only way it can be represented by a mathematical formula, like a law of nature, is as a basic law, like the second law of thermodynamics, which is irreducible to the other basic laws of physics. Hence, there is no solution as long as the only kind of explanation recognized as legitimate is efficient-cause explanation. That is to be locked in the box of the deductive-nomological model of explanation.

Mechanical principles. A less obvious doubt about the reducibility of the causal connections in scientific explanation to the basic laws of physics has to do with the principles of mechanics. The irreducibility of the structural aspects of mechanical principles has been used by Hilary Putnam and others to cast doubt on using physics as the foundation for a complete explanation of the world. Their arguments have contributed to a general consensus about rejecting all forms of reductionism. But the problems to which they are pointing are solved by spatiomaterialism. Just as Loschmidt’s reversibility paradox arises from failing to recognize how material global regularities can be explained ontologically, these critics are pointing to three problems that arise from failing to recognize how structural global regularities can be explained ontologically. The significance of ontological philosophy is, in part, therefore, the restoration of the good name of reductionism.

Putnam’s Board-and-Peg Argument. Many years ago, Hilary Putnam (1975, 296-7) cited a simple regularity that he argued was not reducible to the basic laws of physics as required by the materialists’ reductionistic program. It can, however, be reduced to spatiomaterialism by way of the ontological explanation of structural global regularities.

Putnam illustrated a basic problem about reductive explanations with a simple physical system – “a board with two holes, a circle one inch in diameter and a square one inch high, and a cubical peg one-sixteenth of an inch less than one inch high.” The peg passes through the square hole, but not the round hole. This regularity would not be explained, Putnam holds, even if it could be deduced from the laws of physics governing the behavior of matter in this system.

“One might say that the peg is, after all, a cloud or, better, a rigid lattice of atoms. One might even attempt to give a description of that lattice, compute its electrical potential, worry about why it does not collapse, produce some quantum mechanics to explain why it is stable, etc. The board is also a lattice of atoms, I will call the peg ‘system A’, and the holes ‘region 1’ and ‘region 2’. One could compute all possible trajectories of system A (there are, by the way very serious questions about these computations, their effectiveness, feasibility, and so on, but let us assume this), and perhaps one could deduce from just the laws of particle mechanics or quantum electrodynamics that system A never passes through region 1, but that there is at least one trajectory which enables it to pass through region 2.”[4]

Putnam argued that a deduction of this regularity from physics, if it is possible at all, is not really an explanation. What explains why the square peg fits in the square hole, but not in the round hole, is not the basic laws of physics governing the ultimate constituents. It is the higher level structure. All that matters is that “the board is rigid, the peg is rigid, and as a matter of geometrical fact, the round hole is smaller than the peg, the square hole is bigger than the cross-section of the peg.” This explanation would hold regardless of what the peg and board are made of, as long as they are rigid, and so Putnam argues that such higher-level structural explanations are “autonomous” and not reducible to physics. It is our interests, Putnam claims, that make it look as if irreducible higher-order structures like these are causally relevant.

What Putnam is getting at in his example is obviously, however, structural ontological causation. It is just an instance of a reversible structural global regularity, like our example of the box of gas. What is regular in this case is that certain material objects moving and interacting in the region always have unchanging geometrical structures. That is a global regularity, even though all of the global changes are reversible, for it means that the region itself has a kind of geometrical structure that does not change over time. The bare existence of those material structures moving around randomly in the region includes the fact that the peg is sometimes in one hole, but not the other. By denying that the structure of this dynamic process can be deduced from the laws of physics, Putnam is, in effect, making the case for recognizing material structures and the global aspect of space (that is, its wholeness) as ontological causes.

Putnam is not, however, arguing for spatio-materialism. He accepts the materialist ontology, and he argues that these explanations refer to geometrical structures only because “we are much more interested in generalizing to other structures which are rigid and have various geometrical relations, than we are in generalizing to the next peg that has exactly this molecular structure.”[5] That role of special interests is what leads him to argue that “structural features” are a “higher level” that is “autonomous” from physics.

Let me emphasize, however, that space is an ontological cause of this simple global regularity in two ways.

First, the global aspect of space, which is entailed by its structure, is an ontological cause, along with these derivative ontological causes, of the simple global regularity being explained. It connects the geometrical structures of different material objects as parts of the same world and enables them to interact as geometrical structures.

Second, the global aspect of space is an essential ontological cause of the formation of the unchanging geometrical structures of these material objects, since material structures are derivative ontological causes. They are by-products of the tendency of potential energy to become kinetic. And since the spatial relations of the parts of the material object are constituted by the space that contains them, the geometrical structures of the board and peg are not universals, but no less concrete than the material objects that embody them. What enables the board and peg to move across space without changing their geometrical structures is that every region of space contains every possible geometrical structure. It is hard to avoid the conclusion that the anomaly in this case comes from materialists overlooking that space is a substance, because to account for this simple global regularity, all we need is to recognize that space has the same ontological status as matter.

The Supervenience of Dispositional Properties. Other philosophers trying to carry out the materialist reductionistic program have noticed certain anomalies that arise in the reduction of dispositional properties. For example, Bigelow and Pargetter (1987, p. 190) call fragility a “supervenient” property, because it cannot be reduced to the laws of physics.

Properties are said to be “supervenient” when they cannot be reduced to physical properties in the sense of being defined in terms of them. A definition would pick out exactly the same objects by identifying in terms of physical properties what is meant by the supervenient property, which would be another way of identifying the same property. But such definitions cannot be given in some cases.

The most obvious are functional properties, such as “being a clock,” which may be realized by objects whose physical structures range from machines worn on the wrist to tree rings, sun dials, and the amount of radioactive decay. There is no way to pick out all clocks by their physical properties, because when one looks for physical properties, one is forced to start listing all the different kinds of physical objects that could serve as clocks.

In particular cases, supervenient properties are thought to be identical to the physical properties of the object having the supervenient property. Thus, supervenience theorists hold that any object that is physically similar to one that has a supervenient property must also have the supervenient property. But supervenient properties are not reducible, because it is not possible to describe the physical properties that are both necessary and sufficient for supervenient properties. There are just too many different kinds of cases and no principle by which a list of them can be completed. The reduction involves, at most, therefore, only an identity between the tokens on the two levels, not an identity between types. That is what it means to say that the properties supervene on physical traits.

Reduction would require an identity between the functional and the physical types, or what is called “type-identity.” But since functional properties are supervenient, all that holds is that the functional property in a given case is nothing but the physical properties in that case. Only the token of the functional property is identical to the token of the physical property, or what is called “token-identity.”

In the case of the disposition, fragility, what Bigelow and Pargetter (1987, p. 190) apparently mean by “supervenience” is that fragile objects of the same and different kinds can break up or shatter in different ways in different situations. Different physical properties are responsible for what happens in different cases, and there is no physical property that they all have in common by which all the kinds of cases can be included.

Supervenience theorists are eager to reassure us, however, that they are not saying that non-physical causes are responsible for the exhibition of such supervenient properties. In each particular case, it is possible, in principle, to explain physically what happened, and any case that is physically like it in all relevant respects will also break up in the same way (or not break up at all). But the disposition is not reducible to those physical properties (and their effects according to basic laws of physics), because there is no natural physical kind or type that is identical to this type of disposition, that is, which includes all and only fragile objects, making fragility a supervenient property.

Physical dispositions can be explained, as we have seen, by spatiomaterialism as forms of structural global regularities. In addition to the wholeness of space, the structural ontological causes of the global regularities are the geometrical structures of the material objects involved and the free energy that is supplied somehow by the conditions under which the disposition is exhibited.

What makes fragility irreducible to the laws of physics is the difficulty in identifying the structural cause of the irreversible change in the object before it occurs. A fragile object will break up in different ways, depending on the precise way free energy is supplied under the test conditions. That is because different structural causes are embedded in the same material object. The structural cause in each case is all the parts of the composite object that do not come apart (though breaking up may involve a series of such structural causes), for they are the unchanging structures that determine how objects break up. That is, fragile objects are just machines that use the free energy provided by the conditions of their expression to do the mechanical work of separating chunks of themselves from one another. But they are complex machines that do it in different ways in different cases, depending on how free energy is supplied.

Does the existence in the material object of many different structural causes generating many different global regularities mean that fragility is a supervenient property? That can’t be correct, for if it were, we wouldn’t be able to see how all the global regularities are alike. And we can. Given that the bonds among the parts of the object are inelastic and cannot absorb much of the free energy supplied by the impact, we can see how the forces are communicated by their bonds and spread out geometrically so that whole groups of bonds break together or not at all. For example, we can see why a wine glass dropped on concrete will shatter, but when dropped on a rug which absorbs some of the initial shock, it is more likely to break at the stem. What happens is just a result of how the motion and interaction of bits of matter add up in space over time, including how forces are communicated among the parts of the fragile object, and with our capacity for spatio-temporal imagination, we can “see” the similarity about what happens in each case. The similarities among cases of objects breaking up under impact (including different kinds of fragile objects) are basically geometrical, but nonetheless real. Thus, there is a type-type identity between the ontological causes and the disposition (or global regularity) they determine.

Supervenience is just an appearance that a spatiomaterialist world has because science seeks only efficient-cause explanations. What makes fragility seem to be irreducible is the assumption that the reductive explanation must be formulated as a deductive argument from laws of physics together with initial conditions and mathematical theorems.

The basic laws of physics are local regularities about change that are constituted jointly by space and matter. They depend on the structure of space as much as on the essential natures of the forms of matter contained by space. Thus, when the laws of physics are taken as basic in an efficient-cause explanation, only some of the relevant aspects of the ontological causes are represented. The structure of space is included only insofar as it helps constitute the local regularities described by the laws, but that is to abstract from the wholeness that is also entailed by the geometrical structure of space. The wholeness of space is just as relevant to how change unfolds over time as the aspects of space that are represented by the laws of physics. It includes all the geometrical aspects of the motions and interactions of the bits of matter that add up over time to a certain structural effect. But the wholeness of space is excluded, according to the deductive-nomological model, from efficient-cause explanations.

It is not easy to translate the geometrical factors that are relevant to such an ontological reduction of dispositions into mathematical formulas that can be used in conjunction with the laws of physics to derive a description of the breaking up or shattering. The motion and interaction of material structures do not add up to simple quantities, like those involved in the conservation of momentum and energy. They add up to geometrical structures. But limitations in the capacity of mathematical formulas to represent geometrical structures should not be taken as grounds for denying their role or the role of the geometrical structure of space itself in the ontological reduction of dispositions.

It is not necessary to construct deductions using mathematical formulas, because the cause that explains the kind of structural global regularity is a material structure and how its motion and interaction add up in the wholeness of space, and that can be understood by using spatio-temporal imagination. It is a matter of seeing how the forces imposed by the impact are communicated to other parts and how they build up in certain locations. Insofar as the structural effects depend on quantitative aspects, such as the strength of the forces and the distances over which they are exerted, they can be approximated by computer models that take into account both the forces and the geometrical structures of each molecule or atom and their spatial relations to one another in the composite whole. This is, of course, how materials science has been explaining the properties of bulk matter ever since computers became widely available. The capacity of computer simulations to do what formal mathematical deductions cannot do is evidence of the relevance of the geometrical structures of the material objects and the geometrical structure of space itself as ontological causes of these global regularities.
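
A minimal sketch of the kind of model meant here might look like the following. The chain of masses, the spring constant, the pulling speed, and the breaking strains are all hypothetical values invented for the example, not data about any real material; the point is only that the geometrical arrangement of bonds, together with how the free energy is supplied, determines where the object comes apart (here, at a deliberately weakened bond).

import numpy as np

# Toy bond-network model: a chain of point masses joined by springs, pulled
# apart slowly at both ends.  A bond breaks, and stays broken, once its
# stretch exceeds its limit.  One bond is given a lower limit to stand in for
# a structural weak point.  All values are hypothetical illustrations.
N, k, m, dt = 20, 50.0, 1.0, 0.001
strain_limit = np.full(N - 1, 0.15)    # bond i joins mass i to mass i+1
strain_limit[10] = 0.08                # one structurally weak bond

x = np.arange(N, dtype=float)          # rest positions one unit apart
v = np.zeros(N)
intact = np.ones(N - 1, dtype=bool)

for _ in range(40000):
    stretch = (x[1:] - x[:-1]) - 1.0
    intact &= stretch < strain_limit           # a broken bond stays broken
    f = k * stretch * intact                   # Hooke's law on surviving bonds
    a = np.zeros(N)
    a[:-1] += f / m                            # a stretched bond pulls its two
    a[1:] -= f / m                             # masses back toward each other
    v[1:-1] += a[1:-1] * dt                    # interior masses obey the forces
    v[0], v[-1] = -0.05, 0.05                  # the ends are pulled slowly apart
    x += v * dt

print("broken bonds:", np.where(~intact)[0])   # the weak bond is where it parts

Supplying the free energy differently, say by a sharp blow at one end instead of a slow pull, would select different bonds, which is the sense in which different structural causes are embedded in the same object.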

Fragility and other such dispositions are, therefore, supervenient properties only in the sense that they cannot be deduced mathematically from the basic laws of physics together with appropriate initial and boundary conditions. But they are not supervenient relative to our ontology, because when we recognize that the dispositions are constituted by bits of matter that coincide with space as a substance, we can see how the wholeness of space so constrains the motion and interaction of the material structures that they add up over time to global regularities of certain kinds.

The example of fragility is complicated by the fact that one of its ontological causes is derivative. Material structures are not basic ontological causes, but depend on the tendency of potential energy to become kinetic, and fragility is a disposition in which the very existence of the ontological cause is at stake. It involves, in other words, the generation and corruption of (derivative) substances in our ontology, and thus, is special in a way that parallels the generation and corruption of primary substances in Aristotle’s ontology.

The complication about generation and corruption encountered in the case of fragility is, however, general, for it holds of chemical interactions generally. They are unlike the interactions in which molecules serve as catalysts (or enzymes), for in those cases, the molecules have geometrical structures that persist through the change, making them ontological causes. But in chemical interactions, molecules have geometrical structures that contain many different structural ontological causes, like fragile objects, because their global regularities also depend on how the free energy that drives the irreversible processes is supplied.

A typical chemical interaction involves an exchange of clusters of atoms between two original molecules that results in two new molecules. Their shapes determine how the original molecules fit together and, so, which parts of each molecule interact with which parts of the other, and the total force exerted at such moments determines whether or not the molecules will interact chemically and exchange subgroups of atoms, forming new kinds of molecules. The free energy comes from the potential energy of the forces that parts of molecules exert on one another, but it is structured by spatial relations among parts that are not changed.[6] The structural causes in these cases are the clusters of atoms (or smaller molecules) that do not change their geometrical structures during the interaction, since only unchanging geometrical structures of matter are ontological causes. Thus, molecules will contain different structural causes depending on which other kinds of molecules are combined with them. But that does not mean that chemical interactions are supervenient properties or otherwise ontologically irreducible, at least, not when we recognize that substantival space is an ontological cause.

Putnam’s Argument from Countervailing Conditions. Although Putnam does not say that they are supervenient, he also argues that dispositions are often irreducible. His reason is that they are tendencies that hold only “other things being equal.” Putnam (1987) illustrates the irreducibility of dispositions by considering the solubility of a sugar cube in water.[7]

It might not dissolve when placed in water, he argues, because the water might already be saturated with sugar. Or because the water might freeze before the cube can dissolve. Finally, he appeals to Loschmidt’s reversibility paradox as a countervailing condition. The water might happen to be in a state that is the exact time-reversal of a state that occurs when a larger cube was dissolving, so that the motions and interactions in this special case make the cube un-dissolve out of the water and form a crystal.

It is materialist reduction that Putnam is talking about, for the irreducibility of these dispositions comes from trying to deduce them from premises that are “formulas in the language of fundamental physics”[8] which cannot take into account all the various exceptional conditions that might prevent the expression of the disposition. On the deductive-nomological model of explanation, the only way to predict what will happen is to trace precisely the motion and interaction of all the objects in the region over a period of time and see where it leads, and Putnam denies that all the conditions that might be relevant to the exhibition of the disposition can be included in such a deductive argument.

The dissolving of the sugar cube in water is, according to the spatiomaterialist reduction, just a structured thermodynamic process. The free energy is the potential energy that comes from the forces that would form weak hydrogen bonds between the sugar and water molecules. The structural causes are the shapes of the water molecules, the shapes of the sugar molecules, and the material structure that results from packing sugar molecules together in the crystal.

In dissolving, weak bonds holding sugar molecules together in the crystal are replaced by weak bonds with water molecules as a result of their random motion and interaction with one another in the region. Opposite electric charges on opposite sides of the water molecules fit with complementary charges on sugar molecules in such a way that the sugar molecules exchange their bonds with one another for stronger, less energy-rich bonds with water molecules, freeing kinetic energy in the process. Thus, when their random motion and interaction brings these molecules together, sugar molecules are released from their bonds in the crystal to form bonds with water, and a new kind of static order comes to exist. That is how matter, as energy, flows through geometrical structures from potential energy to evenly distributed heat in this case.

Putnam’s doubts about reducibility come from the impossibility of including countervailing conditions in the deduction. But if the disposition is recognized to be a global regularity, there can be no countervailing conditions that are not taken into account, because all the bits of matter in the region are involved in how their motion and interaction add up over time.

If what prevents the sugar cube from dissolving is that the water is already saturated with sugar molecules, it is simply the absence of the free energy in the region that the material structures use to do the work of freeing them from the crystal. The potential energy depends on certain spatial relations between the molecules exerting the forces, and since all the water molecules in the region are already bound to sugar molecules, the relevant spatial relations do not exist, and so there is no thermodynamic flow of matter from potential energy to evenly distributed heat to be structured. That condition is already taken into consideration, if it is explained ontologically as a global regularity by structural causes and the global aspect of space.

On the other hand, if what keeps the sugar cube from dissolving is a sudden freezing of the water, that is also something that is already taken into account by treating it as a global regularity. Global regularities are regularities about whole regions of space, and that means they must either be closed or else one must keep track of what is flowing in and what is flowing out of the region. Although a sudden freeze would certainly stop the irreversible process of dissolving, there is no way it could happen unnoticed. Heat is a form of matter (that is, kinetic energy is explained ontologically as kinetic matter), and as a kind of substance, it cannot simply go out of existence. The tendency to randomness spreads heat throughout the region, and it can be removed from the region only if there is something colder in the region to which it can flow. That would be a thermodynamic flow of matter toward evenly distributed heat in the region that is clearly relevant in explaining the dissolving as a global regularity. Finally, nothing outside the region could make it freeze without violating the principle of local action. Thus, a sudden freezing is not an exception to an explanation of dissolving as a structural global regularity.

The final countervailing condition Putnam mentions is not explained by this reduction to spatiomaterialism, for it is just an illusion that comes from the attempt to carry out a materialist reduction of the tendency to randomness. Putnam is using Loschmidt’s paradox as a countervailing condition. But as we saw in the last chapter, when the tendency to randomness is explained geometrically, rather than statistically, there is no reason to believe the water and sugar molecules would ever be in a microstate that corresponds to one in which the sugar cube is dissolving except for all the molecules having exactly opposite momentums. Only the statistical definition of randomness requires us to believe that all possible microstates are equally probable.

None of the countervailing conditions to exhibiting solubility that Putnam mentions would be overlooked, therefore, by an explanation of this disposition as a global regularity, because when the global aspect of space is recognized as an ontological cause, the whole region where it occurs is causally relevant. Dispositions are not properties inherent in the nature of matter, but rather kinds of structural global regularities, which depend on structural causes, free energy supplied by a thermodynamic flow of matter toward evenly distributed heat, and a region of space where their geometrical structures coincide. What makes it seem that exceptional conditions preclude the ontological reduction of dispositions is the assumption that a reductive explanation must deduce a description of the regularity from “formulas in the language of fundamental physics,” as if the disposition had to follow from the basic laws of physics without taking account of how structural causes can channel the thermodynamic flow of forms of matter toward evenly distributed heat in the region. The role of space in imposing those regularities may make it hard to formulate these ontological explanations as deductions, but the reduction to the ontology of spatio-materialism leaves no room for surprises.

Finally, other apparently irreducible phenomena can be explained in similar ways.

Prigogine (1980), for example, points to the phenomenon of self-forming objects as irreducible.[9] He recognizes that it does not occur when entropy is maximum, but depends on open systems, in which there is a flow of mass and energy (so-called “dissipative systems”). But far from being an anomaly, this kind of phenomenon is entailed in a spatiomaterial world like ours, because self-forming objects are just instances of the tendency of potential energy to become kinetic.[10] See the discussion of crystal formation and the conformation of protein molecules in Structural global regularities.

“Chaos” is likewise cited as evidence of emergent phenomena. These are situations in which structural global regularities suddenly appear from apparently chaotic, or random, dynamic processes, such as a turbulent flow suddenly becoming highly structured. What makes them seem inexplicable, however, is failing to take space into account in one way or another, either by not recognizing the structural causes at work in the region, by not taking the geometrical structure of the boundary conditions of the system into account, or by ignoring the structure of the space within those boundaries. When they are taken into account, it is not surprising that the quantitative aspects of the motion and interaction of bits of matter in the region would fit together geometrically with those spatial structures so that their motion and interaction add up over time to certain regular, repeated patterns.[11] They are just structural global regularities. The anomalies all come from overlooking structural ontological causes and how they work together with the global aspect of space.


[1] L. Sklar (1992) reviews these issues and gives references to the literature in Chapter 3, “The Introduction of Probability into Physics”. He puts the problem of reducing them to the basic laws of physics as the inability to show that the probabilistic assumptions of statistical mechanics are “nonautonomous” (p. 121). See also Sklar (1993).

[2] Loschmidt’s paradox does not mean that Boltzmann’s statistical explanation of this tendency is falsified by observation, for it can be held that the reason we never observe random systems spontaneously becoming nonrandom is that the random microstates that lead to nonrandom states are statistically so overwhelmingly improbable that they virtually never occur in nature. And the reason why we do observe many cases of nonrandom states becoming random can be explained by the existence of other kinds of processes in nature that impose nonrandom initial states on closed systems. This bias in our sample of systems makes what is just an atemporal statistical fact about such systems appear to be a tendency to become more random over time.

                 This may save the appearances, but it does not salvage Boltzmann’s definition of randomness as an explanation of the tendency to randomness. To be sure, there are sources of usable, or “free”, energy in nature that can impose nonrandom initial states on closed systems, and a more general version of the second law of thermodynamics would have to cover the systems of which they are parts. These sources of free energy include not only other systems with nonrandom distributions of elastically interacting objects, but also systems in which the objects have potential energy because of forces they exert on one another. But their existence does not explain the tendency to randomness as a change with a direction in time. It only explains why there are so many examples of that tendency in our surroundings. There is still no reason to believe that systems that start off in a nonrandom state will become random, except that most such systems examined must be in a random state, if all possible microstates are equally probable. At best, the existence of natural processes that impose nonrandom initial states on closed systems will so bias our sample that it will appear that change has a direction in time. But that is no part of the statistical explanation of the tendency to randomness, for if its statistics did take into account the existence of such natural processes, it could not assume that all possible microstates are equally probable.

[3] Attempts to show the equiprobability of all possible microstates introduce another kind of phase space to represent the microstates of the gas. The position and momentum of each molecule in a box can be represented by six numbers, three each for its position and momentum, and since the state of the whole box can be represented by six numbers for each molecule, it is possible to think of the microstate of the box as the location of a single point in a “space” whose number of dimensions is six times the number of molecules. This is misleadingly similar to the real, three dimensional space from which it abstracts, for changes in the state of the box, which actually depend on the molecules all moving and interacting in real space according to the laws of physics, are represented as the “motion” of this “phase point” in a “phase space” with an enormous number of dimensions (the limits of the phase space being determined by the total energy of the gas and the size of its container). Although it can be shown that the phase point will not move around to every point, it can be shown that it will eventually spend the same amount of time in every small region of this phase space. This theorem (the ergodic theorem) is used to justify the assumption that all the points in phase space are equally probable. But as long as it shows only that the phase point will visit every region of phase space equally often, and not every point, there is no good reason to believe that the kinds of random microstates that would lead to non-random states will ever occur, because there is no reason to believe that minor differences in microstates will not add up to big differences, such as never becoming non-random on the macro level. The importance of such small differences is an example of the “butterfly effect” to which chaos theorists have recently been drawing attention. See J. Gleick, Chaos: The Making of a New Science (New York: Penguin Books, 1987).

[4] “Philosophy and Our Mental Life”, Putnam (1975, pp. 295-6). Putnam (1978, pp. 42-3) calls it the “Laplacean super-mind’s deduction”.

[5] For Putnam (1975, pp. 296-7), structural features are singled out because of our interests, because of what is salient from our special point of view, or because of the “purposes for which we use the notion of explanation”, rather than because of the role of material structures in constituting global regularities about change over time.

[6] For many chemical interactions, the molecules must collide with enough energy to distort one another’s geometrical structures so that their parts are in a position to exert the forces that result in exchanging parts of themselves with one another, and thus, the likelihood of such reactions depends on the mean kinetic energy of their random motion and interaction, or temperature. Although combustion does not start spontaneously, once it does start, it can be self-sustaining. Once some molecules interact energetically enough to form the stronger, lower-energy bonds, they release enough energy to put other molecules in a position to do the same.

[7] Putnam also uses this argument elsewhere, for example, in Putnam (1992, p. 62).

[8] Putnam, 1987, p. 11.

[9] See also Kauffman (1993, 1995).

[10] The more elaborate examples in which one kind of chemical reaction is followed by another in cycles can also be explained as global regularities, for they are simply cases in which the free energy of the thermodynamic processes to be structured is supplied by the forces exerted by the molecules in the region on one another and the chemical interactions are changing the kinds of molecules that are present in the region.

[11] See Gleick (1987).