Probability Theory

Can someone explain conditional expectations to me?

We have a probability space [math] (\Omega, \mathcal{F}, \mathbb{P}) [/math], and an [math] \mathcal{F} [/math]-measurable random variable [math] X [/math]. Now WHY THE FUCK is:

[eqn] \mathbb{E} [ X | \mathcal{F} ] = X [/eqn]

How the hell do conditional expectations with respect to a sigma algebra even work?

>this autistic circlejerk
>probability theory
>wondering why he doesn't understand anything
lmao

This thread is a no-bully zone.

because the sigma algebra doesn't give you any new information: X is already F-measurable, so F determines X completely.

why would it change anything?

No, but the interesting thing here is that the expectation isn't a constant but a random variable again. Don't you think that's strange?

One look at the Wikipedia article where they discuss possible definitions makes it uninteresting, so no

Yes, but I need a measure-theoretic proof. Please, I am too retarded to derive it myself.

E[X|F] is defined as an F-measurable random variable Y such that

[math]\int_A Y\, dP = \int_A X\, dP[/math] for all A in F. (Any two such Y agree almost surely, so it's well-defined up to null sets.) Clearly X satisfies this.
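
If it helps to see the definition in action, here's a toy numerical sketch (my own setup, nothing canonical): a fair die, with the sub-sigma-algebra generated by the odd/even-face partition. Averaging X over each atom produces an F'-measurable Y with matching integrals:

[code]
# Toy check of the defining property on a finite space.
# Omega = {0,...,5} models a fair die (X = face value), P uniform.
# G is the sub-sigma-algebra generated by the odd/even partition.
import itertools

omega = list(range(6))
P = {w: 1/6 for w in omega}
X = {w: w + 1 for w in omega}

atoms = [[0, 2, 4], [1, 3, 5]]   # faces {1,3,5} and {2,4,6}

# Candidate Y: average X over the atom containing w
Y = {}
for atom in atoms:
    avg = sum(X[w] * P[w] for w in atom) / sum(P[w] for w in atom)
    for w in atom:
        Y[w] = avg

# Verify  int_A Y dP = int_A X dP  for every A in G
# (every A in G is a union of atoms)
for r in range(len(atoms) + 1):
    for combo in itertools.combinations(atoms, r):
        A = [w for atom in combo for w in atom]
        assert abs(sum(Y[w] * P[w] for w in A) - sum(X[w] * P[w] for w in A)) < 1e-12

print(Y)  # {0: 3.0, 2: 3.0, 4: 3.0, 1: 4.0, 3: 4.0, 5: 4.0} -- a random variable
[/code]

Swap in the full power set (every atom a singleton) and the same loop returns Y = X, which is exactly the identity in the OP.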

Wait, it's defined that way?

That's the definition in Dudley's Real Analysis and Probability.

I believe the definition of E[X|F], where F is a sigma-algebra, is a generalization of E[X|Y], where Y is a random variable. E[X|Y] should be a random variable, since the given information, i.e. Y, is random and not fixed.
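
To make that link precise (standard definition, stated for completeness):

[eqn] \mathbb{E}[X \mid Y] := \mathbb{E}[X \mid \sigma(Y)], \qquad \sigma(Y) = \{ Y^{-1}(B) : B \text{ Borel} \} [/eqn]

so conditioning on a random variable is literally conditioning on the sigma-algebra of events that random variable can distinguish.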

First, if [math]\mathcal{F'} \subseteq \mathcal{F}[/math] is a [math]\sigma[/math]-subalgebra, then we have an inclusion of Banach spaces [math]L^p(\Omega, \mathcal{F'}) \subseteq L^p(\Omega, \mathcal{F})[/math], since any function that is [math]\mathcal{F'}[/math]-measurable is certainly [math]\mathcal{F}[/math]-measurable. The random variable [math]\mathbb{E}[X|\mathcal{F'}][/math] for [math]X \in L^p(\Omega, \mathcal{F})[/math] is the [math]\|\cdot\|_p[/math]-closest element of the linear subspace [math]L^p(\Omega,\mathcal{F'})[/math] to [math]X[/math]. In the [math]p=2[/math] case this is just the orthogonal projection. In the case [math]\mathcal{F'}=\mathcal{F}[/math], this just says that the closest element to [math]X[/math] in [math]L^p(\Omega,\mathcal{F})[/math] is [math]X[/math] itself.
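
For the [math]p=2[/math] case, a quick numerical sketch on the same toy die space as above (the atom-indicator basis and the weighted normal equations are my own choices for illustration):

[code]
import numpy as np

p = np.full(6, 1/6)                      # uniform P on Omega = {0,...,5}
X = np.arange(1, 7, dtype=float)         # die face values
atoms = [np.array([0, 2, 4]), np.array([1, 3, 5])]

# G-measurable functions = span of the atom indicator vectors
B = np.zeros((6, len(atoms)))
for j, atom in enumerate(atoms):
    B[atom, j] = 1.0

# Orthogonal projection of X onto span(B) w.r.t. <f,g> = sum_w f(w)g(w)P(w):
# solve the weighted normal equations (B^T diag(p) B) c = B^T diag(p) X
W = np.diag(p)
c = np.linalg.solve(B.T @ W @ B, B.T @ W @ X)
Y = B @ c
print(Y)   # [3. 4. 3. 4. 3. 4.] -- the atom averages again
[/code]

The projection lands on exactly the atom averages, i.e. the same Y the defining property produces.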

Replace [math]\mathcal{F}[/math] by the [math]\sigma[/math]-algebra [math]\{\emptyset, \Omega\}[/math]. Then [math]\mathbb{E}[X \mid \{\emptyset,\Omega\}][/math] is just a constant.
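
And you can see which constant from the defining property above, taking [math]A = \Omega[/math]: the only [math]\{\emptyset,\Omega\}[/math]-measurable functions are constants, and

[eqn] \int_\Omega Y \, d\mathbb{P} = \int_\Omega X \, d\mathbb{P} \implies Y \cdot \mathbb{P}(\Omega) = \mathbb{E}[X] \implies Y = \mathbb{E}[X] [/eqn]

so conditioning on no information recovers the ordinary expectation.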

Where can I read more about this? I'm guessing I have to google "measure-theoretic probability" or something, right? Do you have specific book recommendations?

why would you ever make this definition for exponents other than p=2?

OP here, for anyone else interested, I've got a nice intuitive explanation here:

math.stackexchange.com/questions/690531/intuition-for-random-variable-being-sigma-algebra-measurable

oh you're right, that's fucked up!

but a sigma algebra isn't a random variable though, it's just the allowed subsets of omega.

Let's say X is the number of dots on the roll of a fair die, so Omega is {1,2,3,4,5,6}.

then the filtration is just the set of all possible outcomes.

So the expectation of X given the filtration is simply the expectation of X (since the filtration tells us absolutely nothing about X we did not already know), which is equal to 3.5


So NO
E[X|F] is NOT always a random variable

Bad maths DISPROVEN by counterexample

But I think that's wrong. [math] \mathbb{E} [ X | \mathcal{F} ] [/math] is an [math]\mathcal{F}[/math]-measurable function again.

Recall that [math] X [/math] is [math]\mathcal{F}[/math]-measurable, so for [math] \omega \in \Omega [/math], the information in the sigma algebra tells us which value [math]X[/math] takes. So then:

[eqn] \mathbb{E} [ X | \mathcal{F} ](\omega) = X(\omega) [/eqn]
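
Concretely, rerunning the earlier toy computation but conditioning on the full sigma-algebra (every atom is now a singleton):

[code]
# Same toy die setup as before; each atom of the partition is a singleton {w}.
omega = list(range(6))
P = {w: 1/6 for w in omega}
X = {w: w + 1 for w in omega}

Y = {}
for w in omega:                 # atom = {w}
    Y[w] = X[w] * P[w] / P[w]   # the atom average over a singleton is just X(w)

print(Y == X)  # True: E[X|F] is the random variable X itself, not the constant 3.5
[/code]

What the dice post actually computed is [math] \mathbb{E}[X \mid \{\emptyset,\Omega\}] [/math], conditioning on the trivial sigma-algebra, and there the constant 3.5 is correct.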

Here's a tip: don't post an answer if you have no idea what you're talking about

It kinda makes sense for L1, because we don't always want to assume the second moment exists.

Even if your calculation was right (protip: it's not), a constant would still be a random variable, as it is a measurable map.

yeah that's true, but if you define the conditional expectation that way (i.e. as the L1 projection), then you don't get the property [math] \int_A E(X\mid F)\,dP=\int_A X\,dP[/math] (in fact, for square-integrable X this property is equivalent to saying that E(X|F) is the L2 projection of X onto the corresponding subspace: the indicators of sets in F span a dense subspace of it, so the property says exactly that X - E(X|F) is orthogonal to that subspace)

the most common approach is to first define it as the L2 projection for square-integrable X, and then extend this to all integrable X by approximating X with a sequence of L2 random variables
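
Sketch of that extension step, for anyone following along (standard argument; any text that does the L2-first construction has the details): for integrable [math]X \ge 0[/math], truncate,

[eqn] X_n := X \wedge n \in L^2, \qquad \mathbb{E}[X \mid \mathcal{F}] := \lim_{n \to \infty} \mathbb{E}[X_n \mid \mathcal{F}], [/eqn]

where the limit exists a.s. because the L2 projection preserves positivity, so the sequence [math]\mathbb{E}[X_n \mid \mathcal{F}][/math] is a.s. increasing; for general integrable [math]X[/math], split [math]X = X^+ - X^-[/math] and subtract.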

Bump