Monday, November 10, 2008

FACT: Only Computer Illiterates believe in "Man-Made" Global Warming


"Today's scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality."
- Nikola Tesla, 1934



What people do not understand is that there is no proof of "Man-Made" Global Warming without using irrelevant computer models. Yes, computer models have a place in engineering, but they are utterly useless at fortune telling, I mean "climate prediction". With engineering you can build and test in the real world to confirm the computer model's accuracy. You can do no such thing with the planet Earth and its climate. You cannot build a planet and its atmosphere to "test" your computer climate model.

I am a computer analyst and can program a computer model to do whatever I want. If you program a computer model so that X amount of CO2 increase "forces" X amount of temperature increase, then that is what will happen; it does not make it true in the real world.
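
As a toy illustration of that point (every number and name below is made up for demonstration, not taken from any real model):

```python
# Toy "climate model": the projected warming is entirely determined by the
# sensitivity number the programmer chooses to hard-code. The code will
# faithfully output whatever it was told to output.
import math

def project_warming(co2_start_ppm, co2_end_ppm, assumed_sensitivity):
    """Projected warming (deg C) for a CO2 change, given an ASSUMED
    sensitivity in deg C per doubling of CO2 - a programmer's choice here,
    not a measurement."""
    doublings = math.log2(co2_end_ppm / co2_start_ppm)
    return assumed_sensitivity * doublings

# The same CO2 change "produces" whatever warming the chosen parameter dictates:
for assumed in (1.5, 3.0, 4.5):
    print(assumed, "->", round(project_warming(280, 560, assumed), 2), "deg C")
```

Run it with a different assumed sensitivity and you get a different "projection" from identical CO2 numbers.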

Virtual reality can be whatever you want it to be, and computer climate models are just that: code based on the subjective opinions of the scientists creating them. The real world has no such bias.


"...all of our models have errors which mean that they will inevitably fail to track reality within a few days irrespective of how well they are initialised." - James Annan, William Connolley, RealClimate.org


GIGO: Garbage in = Garbage out

Computers need exact information and exact procedures for processing that information in order to produce accurate answers; without that you get useless results, period. There is no way around this. Computers cannot fill in the blanks for you the way nature does when you run an experiment in the real world. With computers, everything must be programmed in from the beginning, and everything that is programmed in must be 100% understood and 100% accurate. Yet even the most advanced and expensive computer climate models include various approximations known as 'parametrizations'. These "guesses" include:

- Cloud Cover
- Convection
- Hydrology
- Transfer of Solar Radiation in the Atmosphere

The existence of parametrizations (approximated assumptions) means that various calculations are not fully resolved to scale, and thus the models are flawed by design. The results are based on estimated calculations and are therefore worthless. No hand waving can change this. Any computer code that is not 100% perfect will produce meaningless results in scientific and mathematical calculations.
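
Here is a minimal sketch of why one approximated quantity matters so much (the numbers are invented purely to show the arithmetic): an error fed back into the calculation step after step does not stay small.

```python
# One parametrized quantity (say, a feedback factor standing in for unresolved
# cloud physics) carries a small error. Because each step's output feeds the
# next step's input, the error compounds instead of averaging out.

true_feedback = 1.000      # the (unknown) real-world value
guessed_feedback = 1.001   # the parametrized "guess", off by only 0.1%

true_state = 1.0
guessed_state = 1.0
steps = 1000               # simulated time steps

for _ in range(steps):
    true_state *= true_feedback
    guessed_state *= guessed_feedback

error = abs(guessed_state - true_state) / true_state
print(f"after {steps} steps, a 0.1% per-step error has grown to {error:.0%}")
# prints roughly 172% - far larger than the original approximation
```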

Fundamental Computer Science - if a computer model includes even one approximation from which later dependent calculations or data are derived, then the output of the model is useless. This is Computer Science 101.

Computing incomplete, biased or flat-out wrong data (guesses and assumptions), based on poorly understood climate physics, in a "model" will give you useless output. But since these models have been "tuned" (guesstimated or deliberately altered to get the results they want), they produce results that "seem" likely or even convincing to the average computer illiterate, yet they are absolutely meaningless for prediction. What the modelers do is keep playing with the numbers until they think they have guessed right, a useless exercise. Technically, they are mathematically adjusting various climate-related equations based on theoretical assumptions.
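
To make the "tuning" point concrete, here is a deliberately simplified sketch (the data and the free parameter are fabricated for illustration): you sweep the parameter until the hindcast lines up with the record, which guarantees agreement with the past and guarantees nothing about the future.

```python
# "Tuning" in miniature: sweep a free parameter until the model output best
# matches the recorded data. Agreement with the past is then built in by
# construction.

observed = [0.00, 0.05, 0.12, 0.17, 0.26, 0.31]   # fabricated "temperature record"
forcing  = [0.0,  1.0,  2.0,  3.0,  4.0,  5.0]    # fabricated "forcing" series

def model(forcing_series, k):
    return [k * f for f in forcing_series]         # toy model: response = k * forcing

def misfit(k):
    return sum((m - o) ** 2 for m, o in zip(model(forcing, k), observed))

# Try many candidate values and keep whichever happens to fit best.
candidates = [i / 1000 for i in range(0, 201)]
best_k = min(candidates, key=misfit)
print("tuned parameter:", best_k, " misfit:", round(misfit(best_k), 5))
```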


Question: Would you use a mathematically broken calculator?

Nothing is emotional about computers; they are purely logical machines: 1 + 1 must equal 2. Imagine trying to use a mathematically broken calculator, based on poorly understood arithmetic, to get a correct answer that you have no way to confirm except to wait 50-100 years. Sound crazy? Welcome to Global Climate Modeling. Yes, the models have to be exact to give any sort of relevant results. Claiming otherwise is like saying a calculator does not have to be based on accurate arithmetic to be a useful tool in mathematics - utter propaganda.


Weather vs. Climate

Computer models are used to predict your weather, and you know how accurate they are. But Al Gore and Gavin Schmidt can certainly tell you what the climate will be 50-100 years from now. Give me a break! Don't be fooled into thinking that the basic principles of how computers work change depending on whether you are modeling the climate or the weather. Nor is one more accurate than the other long term. Computer code is computer code no matter what name you give it, and how a computer works does not change because you change what you call the code. You cannot simply excuse away missing data, substitute mathematically created observations, parametrize what you are unable to model, then run the model over a longer time and think your results have any remote relation to reality.
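
The same logic that limits weather forecasts applies here: in a nonlinear system, two runs that start from almost identical states drift apart rapidly. A classic toy demonstration of the effect (the logistic map - a standard textbook illustration of sensitivity to initial conditions, nothing to do with any real forecast code):

```python
# Two runs of the same simple nonlinear update rule, starting from states that
# differ by one part in a million. Within a few dozen steps the runs no longer
# resemble each other.

r = 3.9                       # a parameter value in the map's chaotic regime
run_a, run_b = 0.500000, 0.500001

for step in range(1, 41):
    run_a = r * run_a * (1 - run_a)
    run_b = r * run_b * (1 - run_b)
    if step % 10 == 0:
        print(f"step {step:2d}: A = {run_a:.4f}  B = {run_b:.4f}  gap = {abs(run_a - run_b):.4f}")
```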


The Myth of Testing

Testing a model against past climate is an advanced exercise in curve fitting, nothing more, and it proves absolutely nothing. What this means is that you are attempting to have your model's output match the historical record, for example the global mean temperature curve over 100 years. Even if you match this temperature curve with your model, it is meaningless. Your model could be using some irrelevant calculation that simply matches the curve but does not relate to the real world. With a computer model there are an infinite number of ways to match the temperature curve but only one that represents the real world, and it is impossible for the model itself to prove which combination of climate physics is the correct one. Do not be fooled; this logic is irrefutable to anyone who understands computer science and computer modeling.
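
Here is a sketch of that point (the "record" and both "models" below are invented for illustration): two structurally different formulas can reproduce the same historical curve about equally well and still disagree completely about what comes next.

```python
# Two structurally different toy "models" hindcasting the same fabricated
# 10-point "temperature record". Both match the record to roughly +/- 0.01,
# yet their year-50 extrapolations differ by more than a factor of two.
# Matching the past does not identify which formula, if either, describes
# the real world.

record = [0.00, 0.09, 0.18, 0.27, 0.36, 0.44, 0.52, 0.61, 0.69, 0.76]  # invented

def linear_model(t):        # "warming grows without limit"
    return 0.088 * t

def saturating_model(t):    # "warming levels off" - nearly identical early on
    return 2.4 * (1 - 0.96 ** t)

for name, model in (("linear", linear_model), ("saturating", saturating_model)):
    rms = (sum((model(t) - record[t]) ** 2 for t in range(len(record))) / len(record)) ** 0.5
    print(f"{name:10s} hindcast RMS error: {rms:.3f}   projection at year 50: {model(50):.2f}")
```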


"These codes are what they are - the result of 30 years and more effort by dozens of different scientists (note, not professional software engineers), around a dozen different software platforms and a transition from punch-cards of Fortran 66, to Fortran 95 on massively parallel systems." - Gavin Schmidt, RealClimate.org


Computer Science vs. Natural Science

To make matters worse, it is not computer scientists creating these models but natural scientists coding them in Fortran. These natural scientists do not even begin to have a basic understanding of computer science or proper coding practices. Their code is not 100% publicly available, and there is no independent auditing or code validation. Sloppy and buggy code is very likely littered throughout these climate model programs, yet there is next to no accountability for any of this. How do you separate a programming error from a temperature anomaly? How can you replace observational data with a complex mathematical equation? You can't.
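
To make the first question concrete, here is a contrived example (not taken from any actual model code): a bug can shift the output by an amount the same size as the "signal" being hunted for, and nothing in the numbers themselves reveals it.

```python
# A contrived averaging routine with an off-by-one bug: it silently drops the
# last reading. The answer shifts by well under a tenth of a degree - the same
# order of size as a claimed "anomaly" - and the output alone gives no hint
# that anything is wrong.

readings = [14.1, 14.3, 13.9, 14.6, 14.2, 14.0, 14.4, 14.8]  # invented station data, deg C

def mean_correct(values):
    return sum(values) / len(values)

def mean_buggy(values):
    return sum(values[:-1]) / (len(values) - 1)   # off-by-one: last value ignored

print("correct mean:", round(mean_correct(readings), 3))
print("buggy mean:  ", round(mean_buggy(readings), 3))
print("shift:       ", round(mean_correct(readings) - mean_buggy(readings), 3), "deg C")
```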

How many of the models used by the IPCC have had ANY bug fixes or code changes since the most recent IPCC report? If they have had any, then all previous model-run results become null and void by simple logic, easily invalidating the ridiculous conclusions of the IPCC report.


"No complex code can ever be proven 'true' (let alone demonstrated to be bug free). Thus publications reporting GCM results can only be suggestive." - Gavin Schmidt, RealClimate.org


Alarmism

Alarmist scientists presenting their "predictions" as fancy graphs or nicely colored renderings does nothing for the accuracy of those predictions. They like to use colors such as yellow, orange and red to give an emotional charge to worthless computer-generated results. A change in temperature of only a fraction of a degree can be painted bright red simply because its percentage change relative to a previous value happens to be high. Disingenuous tricks like these are intended to alarm. Caution needs to be taken when reading any graphical depiction of temperature changes.
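
A quick arithmetic illustration of that trick (the values are invented): the same fraction-of-a-degree change looks tame as an absolute number and dramatic as a percentage.

```python
# A fraction of a degree in absolute terms becomes a dramatic-sounding
# percentage - and a "bright red" cell - when plotted as relative change.

previous_anomaly = 0.1   # deg C (invented)
current_anomaly  = 0.4   # deg C (invented)

absolute_change = current_anomaly - previous_anomaly                            # 0.3 deg C
percent_change = 100 * (current_anomaly - previous_anomaly) / previous_anomaly  # 300%

color = "bright red" if percent_change > 200 else "pale yellow"
print(f"absolute change: {absolute_change:.1f} deg C")
print(f"relative change: {percent_change:.0f}% -> plotted as {color}")
```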


Conclusion

All the computer illiterates are convinced that because something is done on a "supercomputer" that costs "millions of dollars" it is infallible. The more complex the model, the more "mysterious" it seems to the average person. The public gives computer climate models this mystical aura because they are largely ignorant of how the models actually work, and when they hear the term "computer" they do not want to sound or feel stupid, so they nod their heads and go along with it.

Why are we not turning to models to predict the future of everything? Because they can't, not even remotely. Some of them work "sort of" for the weather over very short ranges (1-3 days), until the errors in the data they are processing, combined with all the data they are missing and the millions of variables they are not accounting for, kick in and grow exponentially the farther out the model runs - and wham, the model is wrong. No kidding: there are simply far too many variables that they cannot account for, and the computing power necessary to even begin to take those variables into account does not exist.

The sheer ignorance of the scientists creating these models is astounding. The fact that they have no real understanding of computer systems is appalling. They instead rely on the computer illiteracy of the general public and their perceived standing as "intellectuals" to get away with this fraud.

You are expected to believe that they can "model" the climate 50-100 years into the future when they cannot even give you accurate weather 3 days out? Don't be fools. I do this for a living: computer models cannot predict the future of anything as complex as the Earth's climate.


Resources:

Dyson: Climate models are rubbish (The Register, UK)
"My first heresy says that all the fuss about global warming is grossly exaggerated. Here I am opposing the holy brotherhood of climate model experts and the crowd of deluded citizens who believe the numbers predicted by the computer models. [...] But I have studied the climate models and I know what they can do. The models solve the equations of fluid dynamics, and they do a very good job of describing the fluid motions of the atmosphere and the oceans. They do a very poor job of describing the clouds, the dust, the chemistry and the biology of fields and farms and forests. They do not begin to describe the real world that we live in. The real world is muddy and messy and full of things that we do not yet understand. It is much easier for a scientist to sit in an air-conditioned building and run computer models, than to put on winter clothes and measure what is really happening outside in the swamps and the clouds. That is why the climate model experts end up believing their own models." - Freeman Dyson, Professor Emeritus of Physics, Princeton University

Fighting climate 'fluff' (National Post, Canada)
"Prof. Dyson explains that the many components of climate models are divorced from first principles and are "parameterized" -- incorporated by reference to their measured effects.

"They are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere," he states.

Prof. Dyson learned about the pitfalls of modelling early in his career, in 1953, and from good authority: physicist Enrico Fermi, who had built the first nuclear reactor in 1942. The young Prof. Dyson and his team of graduate students and post-docs had proudly developed what seemed like a remarkably reliable model of subatomic behaviour that corresponded with Fermi's actual measurements. To Prof. Dyson's dismay, Fermi quickly dismissed his model.

"In desperation, I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. He replied, 'How many arbitrary parameters did you use for your calculations?' I thought for a moment about our cut-off procedures and said, 'Four.' He said, 'I remember my friend Johnny von Neumann [the co-creator of game theory] used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.' With that, the conversation was over."

Prof. Dyson soon abandoned this line of inquiry. Only years later, after Fermi's death, did new developments in science confirm that the impressive agreement between Prof. Dyson's model and Fermi's measurements was bogus, and that Prof. Dyson and his students had been spared years of grief by Fermi's wise dismissal of his speculative model. Although it seemed elegant, it was no foundation upon which to base sound science." - Freeman Dyson, Professor Emeritus of Physics, Princeton University

Antarctic Temperatures Disagree With Climate Model Predictions (Ohio State University)
Climate Models Overheat Antarctica, New Study Finds (National Center for Atmospheric Research)
Computer Climate Models: Voodoo for Scientists (PDF) (Executive Intelligence Review)
Faith-Based Models (Peter Huber, Ph.D. Mechanical Engineering, MIT)
Limitations of Climate Models as Predictors of Climate Change (PDF) (David R. Legates, Ph.D. Professor of Climatology)
New Study Increases Concerns About Climate Model Reliability (International Journal of Climatology)
Overconfidence Leads To Bias In Climate Change Estimations (Penn State)
The Mathematical Reason Why Long-run Climatic prediction is Impossible (PDF) (Christopher Monckton, Mathematician)
The Science Isn’t Settled - The Limitations of Global Climate Models (PDF) (Tim F. Ball, Ph.D. Climatology)


2 comments:

  1. Andrew,
    Thanks so much for your blog. I have been vaguely disconcerted about global warming, due mostly to some of my friends who say it's a bunch of propaganda. But you've got some really great resources (here on your blog and also in your blogroll) that really provide concrete facts and data. Anyway, because I trust your blog now, I'd like to suggest that you write a short review of Raxco.com (nothing to do with Global Warming, btw). It's defrag software for Windows, and it's amazing. I think you might have reviewed Diskeeper in the past, a rival of Raxco's PerfectDisk. So, thanks again, and I hope you give Raxco a review.

  2. What an excellent post! I've used computer modelling in chemistry research since 1995, and I was amazed from day one at how colleagues trust the output from the software. I worked as a programmer before I took up chemistry. Let's be honest about it: if your first draft of code contains fewer than one syntax error per line, you've worked very well. Debugging is really 90% of the work. What horrified me about molecular modelling was:
    1. Investigating obviously erroneous inter-atomic distances, I checked the parameter files (I think it was the CHARMM force field). I found comment lines such as: "WRONG!" and "Pure guess". When I informed senior colleagues, I was brushed off. "Everyone uses this software, so if there was a problem with it, someone would have corrected it." Yeah, right. The paper passed peer-review and was published with ridiculous Cartesian co-ordinates.
    2. My supervisor told me several times "not to be so critical" of a certain Karplus equation. I unsuccessfully tried to inform him that it was not my criticism; I was merely quoting the authors of the original paper!
    3. My professor bit my head off for quantifying NOE distances: small - medium - large. He wanted a numerical quantification. I tried to explain that it was pointless. (Tip: Never argue against your professor.) So I quantified numerically. Six months later, new software was bought. Re-quantification. Completely different numbers... What's the point in exactly quantifying something that changes by something like a factor of 5-10 with different software?
    4. A colleague of mine was doing ab initio quantum mechanical calculations. The professor didn't like the end result; he wanted another conformation. My colleague asked me for another start conformation. No problem, I had 100k+ MM-minimized conformations to choose from. He picked one from the top ten, which looked "nice". The professor was pleased.
    5. Searching for good Karplus equations, I finally found one that was much newer and hopefully better. At least, it was a lot more complicated and 30 years younger than the old workhorse. For some reason I could not resist comparing how the two equations matched reality. It turned out that the old and simpler equation outperformed the young and fancy one.

    I could go on forever... In 2008 an IBM Roadrunner outperforms a Cray 2 from 1983 by six orders of magnitude. One million times more computing power over 25 years. Is the weather forecast more reliable today?
