I still tend to call myself a historian of mathematics, although my historical interests have long since expanded to include a much wider field of science and technology; in fact I have recently been considering just calling myself a historian, to avoid being pushed into a ghetto by those who don't take the history of science seriously. Whatever, I have never lost my initial love for the history of mathematics and will automatically follow any link offering some of the same. So it was that I arrived on the Mother Nature Network and a blog post titled 5 brilliant mathematicians and their impact on the modern world. The author, Shea Gunther, had indeed chosen five brilliant mathematicians in Isaac Newton, Carl Gauss, John von Neumann, Alan Turing and Benoit Mandelbrot, and had even managed to avoid the temptation of calling them 'the greatest' or something similar. However, a closer examination of his commentaries on his chosen subjects reveals some pretty dodgy, not to say downright crappy, claims, which I shall now correct in my usual restrained style.
He starts off fairly well on Newton with the following:
There aren’t many subjects that Newton didn’t have a huge impact in — he was one of the inventors of calculus, built the first reflecting telescope and helped establish the field of classical mechanics with his seminal work, “Philosophiæ Naturalis Principia Mathematica.” He was the first to decompose white light into its constituent colors and gave us the three laws of motion, now known as Newton’s laws.
But then he blows it completely with his closing paragraph:
We would live in a very different world had Sir Isaac Newton not been born. Other scientists would probably have worked out most of his ideas eventually, but there is no telling how long it would have taken and how far behind we might have fallen from our current technological trajectory.
This is the type of hagiographical claim that fans of great scientists who have no real idea of the context in which their hero worked tend to make. Let's examine, step by step, each of the achievements of Newton listed here and see if the claim made in this final paragraph actually holds up.
Ignoring the problems inherent in the claim that Newton invented calculus, which I've discussed here, the author acknowledges that Newton was only co-inventor together with Leibniz. Although Newton almost certainly developed his system first, it was Leibniz who published first, and it was Leibniz's system that spread throughout Europe and eventually the world; so, no change here had Isaac not been born.
Newton did indeed construct the first functioning reflecting telescope but, as I explained here, his was by no means the first design. It would also be fifty years before John Hadley succeeded in repeating Newton's feat and finally made the commercial production of reflecting telescopes viable. However, Hadley also succeeded in making working models of James Gregory's reflecting telescope, which actually predated Newton's, and it was the Gregorian that, principally in the hands of James Short, became the dominant model in the eighteenth century. Although, to be fair, one should mention that William Herschel made his discoveries with Newtonians. Once again our author's claim fails to hold water.
Sticking with optics for the moment, it is a little-known and even less acknowledged fact that the Bohemian physicus and mathematician Jan Marek Marci (1595–1667) actually decomposed white light into its constituent colours before Newton. Remaining for a time with optics: James Gregory, Francesco Maria Grimaldi, Christiaan Huygens and Robert Hooke were all on a level with Newton, although none of them wrote such an influential book on the subject as Newton's Opticks. Nor was this influence all positive. Thanks to the authority Newton had won through the Principia, the Opticks became all-dominant, blocking the introduction of the wave theory of light developed by Huygens and Hooke and even slowing down its acceptance in the nineteenth century when it was revived by Fresnel and Young. Had Newton not been born, optics might even have developed and advanced more quickly than it did.
This just leaves the field of classical mechanics, Newton's real scientific monument. Now, as I've pointed out several times before, the three laws of motion were all borrowed by Newton from others, and the inverse square law of gravity was general public property in the second half of the seventeenth century. Newton's true genius lay in his mathematical combination of these various elements to create a whole. The question is how quickly this synthesis might have come about had Newton never lived. Both Huygens and Leibniz had made substantial contributions to mechanics contemporaneously with Newton, and the succeeding generation of French and Swiss-German mathematicians created a synthesis of Newton's, Leibniz's and Huygens' work; it is this synthesis that we know as the field of classical mechanics. Without Newton's undoubtedly massive contribution this synthesis might have taken a little longer to come into being, but I don't think the delay would have radically changed the world in which we live.
Like those of almost all great scientists, Newton's discoveries were of their time, and he was only a fraction ahead of, and sometimes even behind, his rivals. His non-existence would probably not have had that much impact on the course of history.
Moving on to Gauss, we meet other problems. Our author again makes a good start:
Isaac Newton is a hard act to follow, but if anyone can pull it off, it’s Carl Gauss. If Newton is considered the greatest scientist of all time, Gauss could easily be called the greatest mathematician ever.
Very hyperbolic and hagiographic, but if anybody could be called the greatest mathematician ever then Gauss would be a serious candidate. However, in the next paragraph we go off the rails. The paragraph starts OK:
Carl Friedrich Gauss was born to a poor family in Germany in 1777 and quickly showed himself to be a brilliant mathematician. He published “Arithmetical Investigations,” a foundational textbook that laid out the tenets of number theory (the study of whole numbers).
So far so good, but then our author demonstrates his lack of knowledge of the subject on a grand scale:
Without number theory, you could kiss computers goodbye. Computers operate, on the most basic level, using just two digits — 1 and 0…
Here we have wandered over to the binary number system, with which Gauss's book on number theory has nothing to do whatsoever. In modern European mathematics the binary number system was first investigated in depth by Gottfried Leibniz in 1679, more than one hundred years before Gauss wrote his Disquisitiones Arithmeticae, which, as already stated, contains nothing on the subject. The use of the binary number system in computing is an application of the two-valued symbolic logic of George Boole, the 1 and 0 standing for true and false in programming and for on and off in circuit design. All of which has nothing to do with Gauss. Gauss made so many epochal contributions to mathematics, physics, cartography, surveying and god knows what else, so why credit him with something he didn't do?
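As a purely illustrative aside, and emphatically my sketch rather than anything written by Boole or Gauss: the correspondence between Boole's two truth values and the binary digits is easy to demonstrate in a few lines of Python. A one-bit half adder, the basic building block of binary arithmetic in circuit design, consists of nothing but Boolean operations.

```python
# Boole's two truth values map directly onto the binary digits 0 and 1.
# A one-bit "half adder" built entirely from Boolean operations
# (function name and layout are my own illustration).

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two single bits; return (sum_bit, carry_bit)."""
    sum_bit = a ^ b      # XOR: true when exactly one input is true
    carry_bit = a & b    # AND: true when both inputs are true
    return sum_bit, carry_bit

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {c}")
```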
Moving on to John von Neumann, we again have a case of credit being given where credit is not due, but, to be fair to our author, this time he is probably not to blame for the misattribution. He ends his von Neumann description as follows:
Before his death in 1957, von Neumann made important discoveries in set theory, geometry, quantum mechanics, game theory, statistics, computer science and was a vital member of the Manhattan Project.
This paragraph is fine, and if Shea Gunther had chosen to feature von Neumann's invention of game theory or of three-valued quantum logic I would have said fine, praised the writer for his knowledge and moved on without comment. However, instead our author dishes up one of the biggest myths in the history of the computer.
…he went on to design the architecture underlying nearly every single computer built on the planet today. Right now, whatever device or computer that you are reading this on, be it phone or computer, is cycling through a series of basic steps billions of times over each second; steps that allow it to do things like render Internet articles and play videos and music, steps that were first thought up by John von Neumann.
Now, any standard computer is said to have a von Neumann architecture because of a paper that von Neumann published in 1945, First Draft of a Report on the EDVAC. This paper described the architecture of the EDVAC, one of the earliest stored-program computers, but von Neumann was not responsible for that design; the team led by Eckert and Mauchly was. Von Neumann had merely described and analysed their architecture. His publication caused massive problems for the design team because, the information now being in the public realm, they were no longer able to patent their innovations. Von Neumann's name as sole author on the report also meant that people, including our author, falsely believed that he had designed the EDVAC. Of historical interest is the fact that Charles Babbage's Analytical Engine in the nineteenth century already anticipated the essential elements of this architecture!
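To make the stored-program idea concrete, here is a deliberately toy sketch of my own, drawn neither from von Neumann's report nor from Eckert and Mauchly's design: instructions and data sit in the same memory, and the machine does nothing but fetch, decode and execute, round and round, exactly the "series of basic steps" Gunther describes. The opcodes are invented for illustration.

```python
# A toy stored-program machine: instructions and data share one memory,
# and a fetch-decode-execute loop steps through them.
memory = [
    ("LOAD", 7),    # acc = memory[7]
    ("ADD", 8),     # acc += memory[8]
    ("STORE", 9),   # memory[9] = acc
    ("HALT", 0),
    0, 0, 0,        # padding
    2, 3, 0,        # data: memory[7]=2, memory[8]=3, memory[9]=result
]

acc, pc = 0, 0                  # accumulator and program counter
while True:
    op, addr = memory[pc]       # fetch the next instruction
    pc += 1
    if op == "LOAD":            # decode and execute it
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[9])  # -> 5
```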
Unsurprisingly, we walk straight into another couple of computer-history myths when we turn to Alan Turing. We start with the Enigma story:
During World War II, Turing bent his brain to the problem of breaking Nazi crypto-code and was the one to finally unravel messages protected by the infamous Enigma machine.
There were various versions of the Enigma machine and various codes used by the different branches of the German armed forces. The Polish Cipher Bureau were the first to break an Enigma code, in 1932. Various other forms of the Enigma codes were broken by various teams at Bletchley Park without Turing. Turing was responsible for cracking the German naval Enigma. The statement above denies the Polish Cipher Bureau and the other 9000 workers at Bletchley Park credit for their contributions to breaking Enigma.
Besides helping to stop Nazi Germany from achieving world domination, Alan Turing was instrumental in the development of the modern day computer. His design for a so-called “Turing machine” remains central to how computers operate today.
I've lost count of how many times I've seen variations on the claim in the above paragraph over the last eighteen months or so, all equally incorrect. What such comments demonstrate is that their authors actually have no idea what a Turing machine is or how it relates to computer design.
In 1936 Alan Turing, a mathematician, published a paper entitled On Computable Numbers, with an Application to the Entscheidungsproblem. This was in fact one of four contemporaneous solutions to a problem in meta-mathematics first broached by David Hilbert: the Entscheidungsproblem. The other solutions, which needn't concern us here, apart from the fact that Post's solution is strongly similar to Turing's, were from Kurt Gödel, Alonzo Church and Emil Post. Entscheidung is the German for decision, and the Entscheidungsproblem asks whether, for a given axiomatic system, it is possible to decide with the help of an algorithm whether a given statement in that system is true or false. The straightforward answer that all four men arrived at, by different strategies, is that it isn't: there will always be undecidable statements within any sufficiently complex axiomatic system.
Turing's solution to the Entscheidungsproblem is simple, elegant and ingenious. He hypothesised a very simple machine that was capable of reading a potentially infinite tape and following instructions encoded on that tape, instructions that moved the tape either right or left or simply stopped the whole process. Through this analogy Turing was able to show that within an axiomatic system some problems would never be entscheidbar, or in English, decidable. What Turing's work does, on a very abstract level, is to delineate the maximum computability of any automated calculating system. Only much later, in the 1950s, after the invention of electronic computers, a process in which Turing also played a role, did it occur to people to describe the computational abilities of real computers with the expression 'Turing machine'. A Turing machine is not a design for a computer; it is a term used to describe the capabilities of a computer.
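For anyone who wants to see quite how little machinery Turing's abstraction actually requires, here is a minimal simulator sketch of my own. The transition table, which makes the machine append a 1 to a unary number, is an invented toy and not anything from Turing's paper, but it shows the whole apparatus: a tape, a head position, a state, and a handful of rules.

```python
# A minimal Turing machine simulator. The machine below increments a
# unary number: it scans right over 1s, writes an extra 1 on the first
# blank, and halts. States, symbols and rules are invented for illustration.

def run(tape, rules, state="scan", halt="done", blank="_"):
    tape = dict(enumerate(tape))    # sparse tape: position -> symbol
    pos = 0
    while state != halt:
        symbol = tape.get(pos, blank)
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

rules = {
    ("scan", "1"): ("1", "R", "scan"),   # skip over existing 1s
    ("scan", "_"): ("1", "R", "done"),   # append a 1, then halt
}

print(run("111", rules))  # -> '1111'
```

Note what this is and isn't: a description of computability, not a blueprint for hardware, which is precisely the distinction our author misses.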
To be quite open and honest, I don't know enough about Benoit Mandelbrot and fractals to be able to say whether our author at least got that one right, so I'm going to cut him some slack and assume that he did. If he didn't, I hope somebody who knows more about the subject than I do will provide the necessary corrections in the comments.
All of the errors listed above could have been easily avoided if the author of the article had cared in any way about historical accuracy and truth. However, as is all too often the case in the history of science, or in this case mathematics, people are prepared to dish up a collection of half-baked myths, misconceptions and, not to put too fine a point on it, crap, and think they are performing some sort of public service in doing so. Sometimes I despair.