What's Wrong with Intelligent Design? Part 3
24th July 2012 at 04:24 PM
What Evidence Supports Intelligent Design?
2. Functionally Specified Complex Information
Functionally specified complex information (FSCI) derives from a term originally coined by the British chemist Leslie Orgel as “specified information” – a characteristic of living organisms. In the 1990s, William Dembski began using the term, altering it to “functionally specified complex information,” as a way of distinguishing natural processes (i.e., processes for which an intelligent agent is not responsible) from intelligently designed ones.
Towards that end, Dembski and his adherents propose a simple probability metric: if the system in question is above a certain threshold of information content (and/or below a certain probability of occurring by chance – the two concepts are almost always conflated in such work), then it must be intelligently designed. The threshold offered by Dembski in his work, including No Free Lunch (http://www.amazon.com/Free-Lunch-Spe.../dp/0742512975), is 10^150. In other words, any system found to contain more FSCI than corresponds to that bound – a probability of less than 1 in 10^150 of arising by chance – is inferred to be intelligently designed.
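Stripped of its rhetoric, the FSCI inference reduces to a simple threshold test. Here is a minimal sketch of that test in Python (the function names are my own, and the calculation works in log10 space, since probabilities like 10^-300 underflow ordinary floating-point numbers):

```python
import math

# Dembski's universal probability bound, as described in the text:
# any system whose chance-assembly probability falls below 1 in 10^150
# is inferred to be designed. This sketches the inference, nothing more.
LOG10_BOUND = -150.0

def log10_chance_probability(length, alphabet=20):
    """Naive log10 probability of assembling one exact sequence of
    `length` residues from `alphabet` equiprobable choices."""
    return -length * math.log10(alphabet)

def fsci_verdict(log10_p):
    """The verdict the FSCI argument would draw from a log10 probability."""
    return "designed" if log10_p < LOG10_BOUND else "chance"
```

Note that the entire argument hinges on how `log10_chance_probability` is computed – the naive version above assumes a single exact target sequence assembled purely at random, which is precisely the assumption criticized below.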
This metric is supported by people like Douglas Axe, who use the universal probability bound to argue that a full-length protein, for example, could not occur “naturally” (that is, without being intelligently designed), because such proteins contain FSCI above that bound.
What’s Wrong with FSCI?
To understand the totality of what’s wrong with FSCI as a scientific hypothesis, one must understand the probabilistic arguments made by Dembski and others, and how they’re flawed. In short, the argument is a recapitulation of the “tornado in a junkyard” argument of a half-century ago – the idea that, because the random probability of producing a certain structure is very low, that structure cannot occur without the action of an intelligent agent.
Unfortunately, this is a totally unproven idea. Furthermore, the probabilistic arguments are universally flawed – they assume a number of things that are not borne out factually. For example, it’s assumed that for a functioning protein, only a single amino acid residue can exist at a given position in the primary sequence. This is far from true: most proteins are highly tolerant of point mutation outside of the particular residues that are important for a specific function (e.g., the catalytic triad of a metalloprotease). Furthermore, mutating even highly conserved residues in functional proteins to analogous residues (say, glutamic acid to aspartic acid) is often neutral, or at least does not cause a total loss of function. Such calculations also assume that only a single functioning sequence, of the given length we observe in modern organisms, is fit to do the job at hand. This is also untrue – although evolution selects for more and more efficient protein workhorses, like enzymes, that does not mean that only a single sequence is capable of performing a particular job (there are, for example, far less selective – but still functional – enzymes for a large number of metabolic reactions in the human body; it’s just that evolution favors the most selective and efficient ones).
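To see how much mutational tolerance changes the arithmetic, consider a toy comparison for a hypothetical 150-residue protein. The split used here – 10 strictly conserved positions, with the remaining 140 tolerating any of 8 amino acids – is an illustrative assumption, not measured data:

```python
import math

# Orders of magnitude of improbability under two models:
# (a) every one of 150 positions must be one exact residue out of 20;
# (b) 10 positions are strict, 140 tolerate any 8 of the 20 residues.
# (The 10/140/8 profile is purely illustrative.)
strict = 150 * math.log10(20)                              # ~195
tolerant = 10 * math.log10(20) + 140 * math.log10(20 / 8)  # ~69

print(f"one exact sequence:  1 in 10^{strict:.0f}")
print(f"tolerant profile:    1 in 10^{tolerant:.0f}")
```

Even this crude adjustment drops the odds from roughly 1 in 10^195 to roughly 1 in 10^69 – from well above Dembski’s bound to well below it – without yet accounting for the existence of many distinct functional sequences or for the fact that evolution is not a one-shot random draw.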
Yet the largest problem with such probabilistic calculations is that nature is not “random” – proteins are not assembled or evolved purely by chance.
Still, all of that would be fine provided the FSCI hypothesis were not falsified – sure, one could quibble about whether or not the universal probability bound is accurate, but provided there is some bound, the FSCI hypothesis isn’t falsified by that argument alone. The FSCI hypothesis predicts that no system above the universal probability bound of 10^150 should be able to evolve.
That hypothesis has been conclusively falsified – by the previously mentioned work of B. G. Hall, but equally elegantly by the discovery of nylon-eating bacteria (http://en.wikipedia.org/wiki/Nylon_eating_bacteria). Discovered in the mid-1970s, a particular strain of Flavobacterium has been found to be capable of digesting nylon. Nylon is an entirely synthetic fiber that did not exist on Earth prior to its creation by Wallace Carothers at DuPont in 1935.
The digestion of nylon requires an enzyme – nylonase – a protein 239 amino acids long, with a chance-assembly probability calculated at 1 in 10^380. Given that nylon didn’t exist prior to 1935, this means that a protein well above the universal probability bound offered by Dembski evolved in 40 years. The hypothesis offered by FSCI – that nothing above the 10^150 bound could evolve – has therefore been falsified.
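The naive chance-assembly arithmetic for nylonase can be checked in a few lines. Assuming 20 equiprobable residues at each of the 239 positions gives roughly 1 in 10^311 rather than the 1 in 10^380 quoted above – the exact figure depends on the assumptions used in the calculation – but either number lies far beyond the 10^150 bound:

```python
import math

# Naive chance-assembly odds for a 239-residue protein, assuming 20
# equiprobable amino acids at every position. The exact exponent
# depends on the model; this simplest version already dwarfs the bound.
length = 239
log10_odds = length * math.log10(20)   # orders of magnitude, ~311

print(f"naive odds: 1 in 10^{log10_odds:.0f}")
print(f"exceeds Dembski's 10^150 bound: {log10_odds > 150}")
```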
For more objections to FSCI, see the sources below.
3. The Universe is Fine-Tuned
The argument for a fine-tuned universe is relevant to cosmology as well as to chemistry and biology. Essentially, the argument is that the universe as we observe it – from the values of fundamental constants (such as the speed of light in a vacuum) to the nature of the matter that makes up the universe (such as the charge and mass of the proton) – is incredibly fine-tuned for the existence of life. The argument goes that, out of all the possible values one can imagine, it is remarkable that our universe exhibits these particular values – so remarkable that the universe must have been intentionally fine-tuned for the existence of life.
What’s Wrong with Fine-Tuning?
Unlike the arguments from irreducible complexity or functionally specified complex information, the fine-tuning argument is far more nebulous and makes no concrete predictions. Because of that very vagueness, the fine-tuning argument is hardly a scientific hypothesis; although it’s compelling as a matter of rhetoric, it’s unproductive as a line of inquiry.
The fact of the matter is, we don’t know how likely a universe like ours is. We don’t even know if there are other universes, much less what they look like. We don’t know if it’s a chance of one in a billion that a universe fit for life will develop from a singularity like the big bang, or if it happens once every ten times, or every time.
Additionally, if the universe is fine-tuned for life, why does life appear to be so vanishingly rare in our universe, and why is so much of the universe hostile to life as we know it? Or, alternatively, is life as we know it only one of a large number of possible “living” configurations, such that our universe – or other universes – may not be so hostile to life as we suppose?
In sum, the fine-tuning argument is hardly a compelling scientific argument in favor of intelligent design – the best it offers is questions to ponder about whether or not the universe truly is fine-tuned. In any case, the lack of data – and the possible objections to the fine-tuning argument – suggest that, at best for intelligent design proponents, the jury is out.
For more information, see the sources below.
Are Arguments for Intelligent Design Really Valid?
Now, even if we take the above counterarguments to intelligent design as being insufficient – that irreducible complexity and FSCI hypotheses have not been falsified, or that the fine-tuning argument is a strong one – the arguments in favor of intelligent design are still incorrect.
In other words, even if every argument made were scientifically valid – which they are not – the very concept is flawed because it rests entirely on an argument from ignorance.
An argument from ignorance is one asserted to be valid by its proponent merely because of a lack of evidence to the contrary. In other words, one may argue that intelligent design is valid because one cannot disprove the existence of an intelligent designer.
Unfortunately, an argument from ignorance constitutes a logical fallacy: absence of evidence to the contrary is not evidence for a positive claim.
So even if we did not know that bacterial flagella can evolve from other systems in discrete evolutionary steps, the argument from irreducibly complex systems would still not be proof of intelligent design. Irreducible complexity suggests, at best, only that we don’t yet know how certain structures evolved.
Likewise with functionally specified complex information – even if we took it to be true that sufficiently complex systems cannot happen by natural mechanisms that we know of, this does not mean that a supernatural explanation is necessary – only that a naturalistic mechanism has yet to be discovered.
Now, one could argue – from irreducible complexity or FSCI – that an intelligent agent is acting in the universe. One could argue, for example, that nylonase evolved due to the action of an intelligent agent – that intelligent design was in action from 1935 to 1975 in the production of nylonase in Flavobacterium. The problem with this argument remains that it is an argument from ignorance: it offers no falsifiable hypothesis, and is instead assumed by its proponents to be correct unless one can show that an intelligent designer did not or could not act (possibly due to its nonexistence).
Unfortunately for proponents of intelligent design, this is not how science – or, indeed, any formalized pursuit of human knowledge – works. A positive claim requires positive evidence, not a lack of evidence to the contrary.