There is this very nice object which appears in the study of diffusion models (cf. Montanari)
Take a real random variable $\alpha$ and consider a Brownian motion with constant drift $\alpha$, i.e. $\mathrm d X_t=\mathrm dB_t +\alpha\,\mathrm dt$ (where the Brownian motion $B$ is independent of $\alpha$); then this is a Markov process
And the property that I find a priori surprising is that $(X_t)_{t\ge 0}$ is Markovian (even though we don't observe $\alpha$ directly... the process ends up revealing the value of $\alpha$ to us)
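(To make "revealing" precise: if I'm not mistaken, the law of large numbers for Brownian motion already gives
$$\frac{X_t}{t}=\frac{X_0}{t}+\alpha+\frac{B_t}{t}\ \xrightarrow[t\to\infty]{\text{a.s.}}\ \alpha,$$
so $\alpha$ is a measurable function of the whole path.)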
The classical way to prove this is by looking at $(tX_{1/t})_{t>0}$, which starts from $\alpha$ as $t\to 0^+$... and to argue that the time inversion of a Markov process is also Markov... but I don't really like this approach
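(Spelling out this argument as I understand it, and assuming $X_0=0$ for simplicity so that $X_t=B_t+\alpha t$:
$$Y_t := tX_{1/t}=tB_{1/t}+\alpha,\qquad t>0,$$
and by time inversion $(tB_{1/t})_{t>0}$, extended by $0$ at $t=0$, is again a standard Brownian motion, so $Y$ is just a Brownian motion started at the random point $\alpha$, hence Markov; one then argues that inverting time back preserves the Markov property.)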
Can we argue this Markovianity using something like de Finetti's theorem?
In the Pólya urn problem, we have a reinforcement process (hence obviously Markovian) with exchangeable samples, and we can hence argue that the whole sequence is conditionally i.i.d.
In this case, it is a bit the opposite: we would like to understand the reinforcement process starting from the conditionally i.i.d. formulation
This is not technically granted by de Finetti's theorem (as far as I know), but perhaps we could build a variant of the Pólya urn to get this...
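(For the record, I guess the analogue of the exchangeable samples here would be the increments over a fixed mesh $h>0$:
$$\Delta_k:=X_{kh}-X_{(k-1)h},\qquad (\Delta_k)_{k\ge1}\ \big|\ \alpha\ \sim\ \text{i.i.d. }\mathcal N(\alpha h,\,h),$$
so the sequence $(\Delta_k)_{k\ge 1}$ is exchangeable; de Finetti then gives back some mixing random variable, but not, as far as I can tell, the Markov property of the partial sums.)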
I would like to gain more insight into the way we learn about $\alpha$ as time passes... we start with a prior on $\alpha$, and we just update it using the current position $X_t$
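(Concretely, if I'm not mistaken this is just Bayes together with the Cameron–Martin/Girsanov density: taking $X_0=0$ and writing $\mu$ for the prior law of $\alpha$, the posterior given $\mathcal F_t=\sigma(X_s,\ s\le t)$ is
$$\mathbb P\!\left(\alpha\in\mathrm da\ \middle|\ \mathcal F_t\right)=\frac{e^{aX_t-a^2t/2}\,\mu(\mathrm da)}{\int_{\mathbb R}e^{bX_t-b^2t/2}\,\mu(\mathrm db)},$$
which depends on the past only through $(t,X_t)$, i.e. the current position is a sufficient statistic for $\alpha$.)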
A basic intuition (I don't know whether this makes sense or not) is that if we discretize time and condition on the various steps of the past, we should see some cancellation that looks like the kind of cancellations that appear for the Pólya urn.
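Here is how I think the discrete cancellation would go (just a sketch, with mesh $h$, $X_0=0$ and $\Delta_k$ as above): writing $\varphi_h(x)=(2\pi h)^{-1/2}e^{-x^2/(2h)}$, the conditional density of the next increment given the past is
$$p(\Delta_{n+1}\mid\Delta_1,\dots,\Delta_n)=\frac{\int\prod_{k=1}^{n+1}\varphi_h(\Delta_k-ah)\,\mu(\mathrm da)}{\int\prod_{k=1}^{n}\varphi_h(\Delta_k-ah)\,\mu(\mathrm da)},$$
and after expanding the squares, the factor $e^{-\sum_{k\le n}\Delta_k^2/(2h)}$ (which does not involve $a$) cancels between numerator and denominator, while the $a$-dependent part involves the past only through $\sum_{k\le n}\Delta_k=X_{nh}$. So the discrete chain is Markov in $(n,X_{nh})$, and the cancellation is exactly the sufficiency of the current position, much as the current urn composition is sufficient in the Pólya urn.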