CORRELATION DOES NOT IMPLY CAUSATION*

>Wrong. I'm saying it doesn't necessarily increase. You assumed it does.
Can you describe a concrete example where it wouldn't?

I gave an example that models the situation and asked you if you agree with it. Do you?

>You are getting very confused here. Learning of the causal relationship would make P(Ca) = 1, which would mean P(Co) = P(Co|Ca)(1)+P(Co|-Ca)(0) = P(Co|Ca)
No, seriously, you're the one who is confused. In the expression "P(Co) = P(Co|Ca)", "P(Co)" is the prior probability of Co. As in prior to learning anything about Ca. The expression is saying that the probability of Co before learning about Ca is the same as the probability of Co conditional on Ca being true.

What you've done above is move the situation forward in time to a point after we've learned that Ca is true and then derive the tautology that after we know that Ca is true, the probability of Co is the same as the probability of Co conditional on Ca being true. Well duh. Obviously once you know something, conditioning on it again isn't going to change anything.
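To make the prior-versus-posterior distinction concrete, here's a minimal sketch in plain Python (all numbers are made-up assumptions, not data): the prior P(Co) is a weighted average of the two conditionals by the law of total probability, so whenever P(Co|Ca) > P(Co|-Ca) and P(Ca) < 1, learning Ca raises the probability of Co.

```python
# Law of total probability: P(Co) = P(Co|Ca)*P(Ca) + P(Co|~Ca)*(1 - P(Ca))
# All numbers below are illustrative assumptions.

p_ca = 0.3               # prior probability that the causal relationship holds
p_co_given_ca = 0.9      # probability of a correlation if it does
p_co_given_not_ca = 0.2  # probability of a (spurious) correlation if it doesn't

p_co = p_co_given_ca * p_ca + p_co_given_not_ca * (1 - p_ca)

print(f"prior     P(Co)    = {p_co:.2f}")           # 0.41
print(f"posterior P(Co|Ca) = {p_co_given_ca:.2f}")  # 0.90
# The prior sits strictly between the two conditionals, so learning Ca
# increases the probability of Co exactly when P(Co|Ca) > P(Co|~Ca).
```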

>Before you know of the causal relationship, whether P(Co) = P(Co|-Ca) is dependent on several factors which your example does not elucidate.
Like what? Again, can you give a concrete example of a situation in which two things A and B have a causal relationship, yet learning about this relationship should not increase the probability of B correlating with A?

(Other than ones where, like I've said, you already have definite knowledge of the correlation)

>>And why does SimpletonPseudoScience start the graph in 1990?
>Because that's when the projection was made.
Wrong, that's what the projection was changed to, after the utter failure of the original prediction from UN IPCC AR4, which starts at 1979. "Hindsight" is not "prediction."


>>The UN IPCC diagram clearly shows the baseline to be 1979. Pic related, from UN IPCC AR4, Fig 10-26.
>How exactly did you determine the baseline from that blown up graph? You didn't, you just made up the baseline.

You idiot, the zero point of that graph is at about 1979. The starting value of temperature anomaly predictions is, of course, zero. That makes the baseline 1979.

>And the data you overlayed can't even stay inside the blown up line.
Gosh, what is smoothing.

So nice to talk to an autist.

First, this. Second, evolution defined broadly as "change of allele frequency over time" is observable in anything with a short enough lifespan. MRSA is probably the most commonly cited such example; we saw it happen. It's also a fair proof of natural selection, because we saw the mutations happen and we saw them proliferate because of an advantageous trait. Evolution describes nothing beyond that which is already observable with the right technology.
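The "change of allele frequency over time" definition can be illustrated with a toy Wright-Fisher-style simulation (a sketch with an assumed population size and selection coefficient, not a model of MRSA specifically): an allele with even a modest fitness advantage tends to rise in frequency across generations despite random drift.

```python
import random

random.seed(1)

N = 1000     # population size (assumed)
s = 0.05     # relative fitness advantage of the mutant allele (assumed)
freq = 0.05  # starting allele frequency

for generation in range(100):
    # Selection: weight the mutant allele by its relative fitness 1+s,
    # then resample N alleles (binomial sampling = genetic drift).
    p = freq * (1 + s) / (freq * (1 + s) + (1 - freq))
    freq = sum(random.random() < p for _ in range(N)) / N

print(f"frequency after 100 generations: {freq:.2f}")
```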

>Can you describe a concrete example where it wouldn't?
I already did. Can you just prove mathematically that P(Co) < P(Co|Ca) instead of making examples? You can't prove a blanket statement like this with examples.

>I gave an example that models the situation and asked you if you agree with it. Do you?
I already explained that your example lacks the information to calculate the relevant probabilities, so I neither agree nor disagree. You also make the same mistake of representing my argument as "it won't increase" when it should be "it doesn't have to increase."

>No, seriously, you're the one who is confused. In the expression "P(Co) = P(Co|Ca)", "P(Co)" is the prior probability of Co. As in prior to learning anything about Ca.
No, P(Co) is simply the probability of A correlating with B. Nowhere did we say anything about it being prior; that would depend entirely on context. And I already discussed both cases, before and after learning about the causation, so this seems like an irrelevant point.

>What you've done above is move the situation forward in time to a point after we've learned that Ca is true and then derive the tautology that after we know that Ca is true, the probability of Co is the same as the probability of Co conditional on Ca being true. Well duh. Obviously once you know something, conditioning on it again isn't going to change anything.
So what? I don't see what the point is in repeating what I said.

>Like what? Again, can you give a concrete example of a situation in which two things A and B have a causal relationship, yet learning about this relationship should not increase the probability of B correlating with A?
I already described the situation completely, which is whenever P(Co|Ca) = P(Co|-Ca). An example of this would be when P(Co|Ca) = 1/2 and P(Co|-Ca) = 1/2. If you would like to prove this is impossible, go ahead. Until then, your argument is non-mathematical.
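The claimed edge case is easy to check numerically: when the two conditionals are equal, the weighted average collapses and the prior P(Ca) drops out entirely, so learning Ca changes nothing. A sketch with assumed values:

```python
# Edge case: P(Co|Ca) = P(Co|~Ca) = 0.5, so Co is independent of Ca
# and learning the causal relationship leaves P(Co) unchanged.
# Values are assumed for illustration.

p_co_given_ca = 0.5
p_co_given_not_ca = 0.5

for p_ca in (0.1, 0.5, 0.99):
    p_co = p_co_given_ca * p_ca + p_co_given_not_ca * (1 - p_ca)
    print(p_ca, p_co)  # P(Co) = 0.5 for every prior P(Ca)
```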

>Instead you focus on Hansen's 1988 predictions which used a high climate sensitivity and you claim that scenario A describes true emissions when it doesn't. In fact, none of the scenarios describe the emissions that actually occurred.

Now you're just flat out lying. CO2 output has GROWN almost every year in the past 50 years or so. That's scenario A. Pic related; CO2 emission GROWTH per year.

Data Source: wri.org/resources/data_sets

P.S. The fact that you'll just brazenly make up crap to defend an unfalsifiable belief system is a sure sign of a paid shill.

>Wrong, that's what the projection was changed to, after the utter failure of the original prediction from UN IPCC AR4 which starts at 1979. "hindsight" is not "prediction."
You just contradicted yourself in two sentences. AR4 was in 2007. So is hindsight a prediction or not? Did the "prediction" start at 1979 or 2007?

>You idiot, the zero point of that graph is at about 1979. The starting value of temperature anomaly predictions is, of course, zero. That makes the baseline 1979.
You idiot, "the zero point" or starting value of the graph is not the baseline. A baseline is normally the average over a range of temperatures. It will cross the data at some point, but where it crosses the data is not necessarily the baseline. And you can't even tell where it crosses the data, since you blew up a tiny graph. So either you purposefully made up the 1979 baseline, or you have no idea what you're talking about. Which is it?

>Gosh, what is smoothing.
Smoothing wouldn't make some parts more extreme and others less extreme. Nice deflection, dumbass.

>This makes the same non-response as above. It then responds to the fact that the graph shows no error bars with the non sequitur that the IPCC 1990 predictions are inaccurate, which we already know from is false

You "know" from a post-hoc rewrite by Simpleton Science? Sorry buddy, the IPCC said it themselves here
Stop referencing that dishonest clod John Crook. He has no credibility.

JOHN COOK DEBUNKED:
motls.blogspot.com/2010/03/john-cook-skeptical-science.html

JOHN COOK LIES
hiizuru.wordpress.com/2014/01/27/john-cook-is-a-filthy-liar/
www.forbes.com/sites/jamestaylor/2013/05/30/global-warming-alarmists-caught-doctoring-97-percent-consensus-claims/
wattsupwiththat.com/2012/02/03/monckton-responds-to-skeptical-science/
impactofcc.blogspot.com/2013/05/john-cook-et-al-willfully-lie.html
populartechnology.net/2012/03/truth-about-skeptical-science.html

>Smoothing wouldn't make some part more extreme and others less extreme. Nice deflection dumbass.
You're the dumbass. I'm talking about the smoothness of the originally graphed data vs. the higher variability of the (less smoothed) overlaid data.

You really are an autist.

>Now you're just flat out lying. CO2 output has GROWN almost every year in the past 50 years or so. That's scenario A.
So the scenarios only describe CO2 emissions? No, they're about all GHG forcings! So again we must ask: did you know that and mention only CO2 to be misleading, or did you not know it and are just arguing from rank ignorance?

But yeah, I'm clearly the liar since you utterly misunderstand what Hansen's scenarios are.

>P.S. The fact that you'll just brazenly make up crap to defend an unfalsifiable belief system is a sure sign of a paid shill.
The pot calling the kettle black, and I just proved it.