Absorption Probabilities in Quantum Mechanics

Hello Veeky Forums. I am going to run through an argument in quantum mechanics and I want to see if you can help me take it further, or catch any poor assumptions I make along the way.

The topic is computing absorption probabilities in a quantum mechanical system. Let us first start with an underlying space [math]\Omega [/math] such that state vectors [math]\psi[/math] are elements of [math]L^2(\Omega )[/math]. Let [math]U:L^2(\Omega )\rightarrow L^2(\Omega )[/math] be a unitary transformation governing the evolution of the system after a single time step. Let [math]S\subset\Omega [/math] and [math]S'\subset S[/math], where the elements of [math]S[/math] act as absorbing boundaries. The goal of this line of thinking is to compute the probability that we eventually observe the particle in [math]S'[/math] and not in [math]S\cap S'^c[/math].

Now let [math]P:L^2(\Omega )\rightarrow L^2(\Omega )[/math] be a projection onto [math]S[/math] and likewise for [math]P'[/math]. These operators are necessarily self-adjoint. Recall the probabilistic interpretation of quantum mechanics, which states that if we measure a state [math]\psi[/math] over a set [math]S[/math], the unnormalized state [math]\psi[/math] becomes [math]P\psi [/math] with probability [math]\lVert P\psi\rVert ^2[/math] (the particle is observed in [math]S[/math]) or [math](I-P)\psi [/math] with probability [math]\lVert (I-P)\psi\rVert ^2[/math] otherwise.
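To make the measurement rule concrete, here is a throwaway numpy sketch (the dimension, the state, and the set [math]S[/math] below are made up for illustration):
import numpy as np
n = 4
rng = np.random.default_rng(0)
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi /= np.linalg.norm(psi)                          # normalized state in L^2({1,...,n}) = C^n
P = np.diag([1.0, 1.0, 0.0, 0.0])                   # projection onto S = {1, 2}
p_hit = np.linalg.norm(P @ psi) ** 2                # probability the particle is observed in S
p_miss = np.linalg.norm((np.eye(n) - P) @ psi) ** 2 # probability it is not
assert abs(p_hit + p_miss - 1.0) < 1e-12            # the two outcomes are exhaustive
psi_hit = P @ psi / np.sqrt(p_hit)                  # renormalized post-measurement states
psi_miss = (np.eye(n) - P) @ psi / np.sqrt(p_miss)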

For the remainder of these posts I am going to assume [math]\Omega =\{ 1,...,n\}[/math] for ease of notation so that everything may be represented by matrices, although we could certainly extend the coming arguments for more general sets.

So are you actually going to post something or not? I don't know much about quantum physics but if you want to do some functional analysis I'm in (or will be tomorrow; as a Europoor I'm off to bed now).

thanks boss I will keep posting because of you

For convenience, let [math]S'=\{\omega\}[/math] be a single element set. We will treat the subset [math]S[/math] as an absorbing boundary; that is, we will take a measurement at each element of [math]S[/math]. If we find the particle there, then we will terminate the experiment. Otherwise, we will continue.

I'm not sure what you mean by "absorbing boundary". From what you say, it seems you want to describe properties of [math] M_{\omega_k} \left( U \left( M_{\omega_{k-1}} ... M_{\omega_1} \left( \psi \right) \right) \right), [/math] where [math] M [/math] stands for "taking a measurement" and has an uncertain outcome. I'm assuming time passes between measurements as you wouldn't have introduced [math] U [/math] otherwise.

Think about the absorbing boundary in terms of a random walk. If you end up at the absorbing boundary, you terminate the process, otherwise you keep walking. This is made a little more complicated in the quantum domain since you need to measure the state to see whether the particle will be absorbed or not. In between the measurements we evolve the state one step by [math]U[/math], I probably should have made that clear.

Recall we want to find the probability that the particle is EVENTUALLY absorbed in [math]S'[/math] and not elsewhere in [math]S[/math]. If the particle is absorbed, then it is absorbed at some time [math]t[/math] and no other, so the absorption events at different times are disjoint and their probabilities add. Thus, we can describe the total absorption probability [math]\mathcal{P}[/math] as:

[math]\mathcal{P}=\sum _{t=1}^\infty |\langle\psi _1|U(PU)^{t-1}|\psi _0\rangle |^2[/math]

Here, [math]\psi _0[/math] is some initial condition and [math]\psi _1[/math] is a unit vector such that [math]\psi _1\psi _1^T=P'[/math]. To be clear, [math]v^T[/math] denotes the conjugate transpose of [math]v[/math].

I just want to explain the previous formula briefly. Basically, the probability of eventual absorption has to account for every possible absorption time. The process can be thought of as a two step process:
1) Evolve the state by [math]U[/math]
2) Measure the state by [math]P[/math]
Each summand in [math]\mathcal{P}[/math] is the squared amplitude at [math]S'[/math] at time [math]t[/math], i.e. the probability that absorption first happens there at time [math]t[/math].

One way to compute this is to construct generating functions. We consider the generating function [math]f(z)[/math] defined as follows:

[math]f(z)=\sum _{t=1}^\infty \langle\psi _1|U(PU)^{t-1}|\psi _0\rangle z^t[/math]

It may not be immediately obvious how to get from [math]f(z)[/math] to [math]\mathcal{P}[/math], but don't worry, that will be addressed in the next post. We can abandon bra-ket notation in the generating function and write this out entirely in matrix/vector notation:

[math]f(z)=\sum _{t=1}^\infty \psi _1^TU(PU)^{t-1}\psi _0z^t[/math]

We can pull terms out to focus on a matrix valued generating function [math]F(z)[/math] satisfying [math]f(z)=\psi _1^TUF(z)\psi _0[/math].

If we stop the experiment when we hit the absorbing boundary, shouldn't we use [math] I-P [/math] in the equation for [math] \mathcal{P} [/math]? Other than that I have no trouble understanding the equation.

>the probability of eventual absorption has to account for every possible absorption time
Right, so we want the probability of eventual absorption, meaning we have to consider all times [math] t [/math].

And it looks like [math] \mathcal{P} [/math] is just [math] f(1) \cdot \bar{f}(1)[/math].

Ignore the bit about [math] f(1) \cdot \bar{f}(1) [/math]. I'm retarded. Go on.

yes yes you're right that using my current notation I should be using [math]I-P[/math] and not [math]P[/math]. However, to simplify notation in the long run, let us redefine [math]P[/math] to project OFF of [math]S[/math] such that my formula is right. However, this means that we should now write [math]\psi _1\psi _1^T=I-P'[/math] where [math]P'[/math] projects OFF of [math]S'[/math].
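To keep myself honest, here is a brute-force numpy sketch of [math]\mathcal{P}[/math] with the redefined [math]P[/math]; the dimension, the sets, the random [math]U[/math], and the truncation level are all arbitrary choices of mine:
import numpy as np
from scipy.stats import unitary_group
n = 5
U = unitary_group.rvs(n, random_state=0)    # random unitary one-step evolution
P = np.diag([0.0, 0.0, 1.0, 1.0, 1.0])      # projects OFF of S = {1, 2}
psi0 = np.zeros(n); psi0[4] = 1.0           # initial state, away from the boundary
psi1 = np.zeros(n); psi1[0] = 1.0           # psi1 psi1^T = I - P', i.e. S' = {1}
prob, M = 0.0, np.eye(n)                    # M will hold (PU)^{t-1}
for t in range(1, 2000):                    # truncate the series at t = 2000
    amp = psi1.conj() @ U @ M @ psi0        # amplitude of first absorption at S' at time t
    prob += abs(amp) ** 2
    M = P @ U @ M
print(prob)                                 # total probability of absorption at S'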

Let us look more carefully at the function [math]F(z)[/math]. This function can be written as the following series:

[math]F(z)=\sum _{t=1}^\infty (PU)^{t-1}z^t[/math]

Clearly, the eigenvalues of [math]PU[/math] have absolute value less than or equal to 1, since [math]\lVert PU\rVert\le\lVert P\rVert\lVert U\rVert\le 1[/math]. However, I will make the assumption that [math]PU[/math] has no eigenvalues with absolute value 1. In practice this is a reasonable assumption to make and it covers a wide variety of cases. Concretely, it means that any initial state is totally absorbed by [math]S[/math] in the limit, although if [math]S'\ne S[/math] we may have [math]\mathcal{P}<1[/math].

The first utility of this assumption is to write this matrix generating function as [math]F(z)=(I-PUz)^{-1}[/math]. To see this, just consider the Taylor expansion and treat it like a geometric series. By our assumption, this is a well-defined function, analytic for [math]|z|\le 1[/math].

To elaborate on my reasoning a little, here's how I see it:
The amplitude for absorption at [math] S' [/math] at time [math] t=1 [/math] is clearly [math] (\psi_1 , U \psi_0) [/math].
In order for absorption to happen at [math] t=2 [/math], it is necessarily the case that absorption did not happen at [math] t=1 [/math], so we want to ignore the cases where it did. Meaning we need to consider [math] (I-P) U ( \psi_0 ) [/math], making the amplitude for absorption at [math] S' [/math] at time [math] t=2 [/math] equal to [math] \left( \psi_1, U \left( (I-P) U ( \psi_0) \right) \right) [/math].

Using this new representation of [math]F(z)[/math], we can write our original generating function as:

[math]f(z)=\psi _1^TU(I-PUz)^{-1}\psi _0[/math]

Now I will elucidate the connection between [math]f(z)[/math] and [math]\mathcal{P}[/math]. To do this, we must use a tool called the Hadamard product. Let [math]f(z)=\sum _{k=0}^\infty a_kz^k[/math] (for the moment) and [math]g(z)=\sum _{k=0}^\infty b_kz^k[/math]. We say that the Hadamard product of [math]f[/math] and [math]g[/math] evaluated at [math]z[/math] is defined as:

[math](f\odot g)(z)=\sum _{k=0}^\infty a_kb_kz^k[/math]

This is basically a termwise multiplication of Taylor series. In general these Hadamard products have no elementary closed form; for example, with [math]f(z)=g(z)=e^z[/math] the product is [math]\sum _kz^k/(k!)^2[/math], a Bessel-type function. However, we do have an integral representation of the Hadamard product which we will use to our benefit:

[math](f\odot g)(z)=\frac{1}{2\pi i}\int _\gamma\frac{1}{w}f(w)g\left(\frac{z}{w}\right) dw[/math]

If [math]f[/math] has a radius of convergence of [math]R[/math] and [math]g[/math] has a radius of convergence of [math]R'[/math], then the contour [math]\gamma[/math] must enclose [math]w=0[/math] and stay in the annulus [math]|z|/R'<|w|<R[/math], so that both series converge along it.
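To see the integral representation in action, here is a small numerical sketch; the choice of [math]f,g[/math] (geometric series with ratios [math]a,b[/math], whose Hadamard product is exactly [math]1/(1-abz)[/math]) and the contour radius are mine:
import numpy as np
a, b, z = 0.5, 0.4, 1.0
f = lambda w: 1.0 / (1.0 - a * w)           # coefficients a^k, radius R = 1/a = 2
g = lambda w: 1.0 / (1.0 - b * w)           # coefficients b^k, radius R' = 1/b = 2.5
r, N = 1.0, 2000                            # need |z|/R' = 0.4 < r < R = 2
w = r * np.exp(2j * np.pi * np.arange(N) / N)
# since dw/w = i dtheta on the circle, the contour integral is just an average over theta
hadamard = np.mean(f(w) * g(z / w))
print(hadamard.real, 1.0 / (1.0 - a * b * z))   # both should be 1/(1 - ab) = 1.25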

Exactly. I'm not sure if you read the previous post while you wrote that, but I'm now saying that [math]P[/math] projects OFF of [math]S[/math] so [math]I-P[/math] from the previous notation (or the one you're referring to in your post) now becomes simply [math]P[/math].

By the way, this information about the Hadamard product can be found in Titchmarsh, "Theory of Functions", around page 160 (section 4.7 or so), where the integral representation is proved.

We can now express [math]\mathcal{P}[/math] using a Hadamard product:

[math]\mathcal{P}=\left( f(z)\odot\overline{f(\overline{z})}\right)(1)[/math]

In the integral form, we can let [math]\gamma[/math] be the counterclockwise unit circle and write:

[math]\mathcal{P}=\frac{1}{2\pi i}\int _{|z|=1}\frac{1}{z}f(z)\overline{f\left(\frac{1}{\bar{z}}\right)}dz[/math]

The confusion about projections has been cleared up. Thank you.

quick correction, we actually have [math]F(z)=z(I-PUz)^{-1}[/math], so that the generating function actually satisfies:

[math]f(z)=z\psi _1^TU(I-PUz)^{-1}\psi _0[/math]

Basically I forgot a factor of [math]z[/math]
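Quick numerical sanity check of the corrected closed form against the defining series; the sizes, the random [math]U[/math], the set [math]S[/math], and the test point below are arbitrary choices of mine:
import numpy as np
from scipy.stats import unitary_group
n, z = 5, 0.7 + 0.2j                        # test point inside the unit disk
U = unitary_group.rvs(n, random_state=1)
P = np.diag([0.0, 0.0, 1.0, 1.0, 1.0])      # projects off of S = {1, 2}
PU = P @ U
series = sum(np.linalg.matrix_power(PU, t - 1) * z ** t for t in range(1, 400))
closed = z * np.linalg.inv(np.eye(n) - PU * z)
print(np.max(np.abs(series - closed)))      # ~1e-16 when the spectral radius of PU is < 1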

This next part is a little tricky. We are going to compute [math]\overline{f\left(\frac{1}{\bar{z}}\right)}[/math]. We can replace the overline with conjugate transpose since [math]f(z)[/math] is a scalar. This is going to lead to the following:

[math]\overline{f\left(\frac{1}{\bar{z}}\right)}=\frac{1}{z}\psi _0^T\left[(I-\frac{1}{\bar{z}}PU)^{-1}\right]^TU^T\psi _1[/math]

Since the inverse and conjugate transpose operations commute, we will now say:

[math]\overline{f\left(\frac{1}{\bar{z}}\right)}=\frac{1}{z}\psi _0^T(I-\frac{1}{z}U^TP)^{-1}U^T\psi _1[/math]

Here, we have used the fact that [math]P^T=P[/math]. Now bringing the factor of [math]\frac{1}{z}[/math] from inside the inverse to the outside, we have:

[math]\overline{f\left(\frac{1}{\bar{z}}\right)}=\psi _0^T(zI-U^TP)^{-1}U^T\psi _1[/math]

Now plugging this formula into the Hadamard product integral representation we have:

[math]\mathcal{P}=\frac{1}{2\pi i}\int _{|z|=1}\frac{1}{z}\left[\psi _0^T(zI-U^TP)^{-1}U^T\psi _1\right]\left[ z\psi _1^T U(I-PUz)^{-1}\psi _0\right] dz[/math]

Here, we have reversed the order of [math]f(z)[/math] and [math]\overline{f\left(\frac{1}{\bar{z}}\right)}[/math], which we can do because [math]f(z)[/math] is scalar. Now we can pull the [math]\psi _0[/math] terms out of the integral, substitute [math]\psi _1\psi _1^T=I-P'[/math], and write:

[math]\mathcal{P}=\psi _0^T\left[\frac{1}{2\pi i}\int _{|z|=1}(zI-U^TP)^{-1}U^T(I-P')U(I-PUz)^{-1}dz\right]\psi _0[/math]
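This integral is easy to check numerically: on the unit circle [math]1/\bar{z}=z[/math], so [math]\overline{f(1/\bar{z})}=\overline{f(z)}[/math] there and the integrand collapses to [math]|f(e^{i\theta })|^2d\theta /2\pi[/math], i.e. Parseval's theorem for the coefficients of [math]f[/math]. A throwaway sketch (same arbitrary test system as in my earlier snippet):
import numpy as np
from scipy.stats import unitary_group
n = 5
U = unitary_group.rvs(n, random_state=0)
P = np.diag([0.0, 0.0, 1.0, 1.0, 1.0])
psi0 = np.zeros(n); psi0[4] = 1.0
psi1 = np.zeros(n); psi1[0] = 1.0
def f(z):                                   # f(z) = z psi1^T U (I - PUz)^{-1} psi0
    return z * (psi1.conj() @ U @ np.linalg.solve(np.eye(n) - P @ U * z, psi0))
thetas = 2 * np.pi * np.arange(4096) / 4096
prob = np.mean([abs(f(np.exp(1j * t))) ** 2 for t in thetas])
print(prob)                                 # matches the truncated sum of |amplitude|^2 terms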

I am going to take a break for a bit but the next part is where shit gets real

So far so good. I'll check back in tomorrow.

Hey this seems like an interesting topic OP. I know jack shit about quantum mechanics, I study math, but this looks like something I'd want to know more about. Any good resources to study this stuff from?

I already know my measure theory and probability.

I just got my PhD in math and my thesis was on these discrete quantum mechanical processes called "quantum walks". I know only introductory level quantum mechanics, so if you want to study this stuff and only have a math background you'll probably be fine. I'll try to outline how I got started in the field:

1) Functional Analysis. This is probably the most important math prerequisite to study. If you studied measure theory through Royden you might be familiar with some of the concepts, depending on how deep you went. I recommend Kreyszig for a beginner; his book is one of the easiest to understand that I have ever read. If you want deeper knowledge maybe try Reed and Simon, but I don't particularly like their book and the extra material is not entirely necessary.

2) You definitely need some introduction to quantum mechanics. Griffiths is a great easy book and if you read the first 150 pages you'll probably be all set. Most of the difficulty is in adjusting to the bra-ket notation, but once you figure it out QM is really not all that hard.

Those two are necessities, but otherwise you should probably have some working knowledge of other areas. Complex analysis, harmonic analysis, and some combinatorics are three areas that I called upon. Depending on how you would like to frame the information, algebra/topology/geometry could also help, but I never used any of it.

If you want to study quantum walks, I would start with one of the introductory papers by Kempe or Venegas-Andraca. I then looked at "One-Dimensional Quantum Walks" by Ambainis et al. and started reading papers by Norio Konno (I am not him or any of his students btw). I started looking at those papers because I could relate to them best out of the citations in V-A; you should probably take a look at it yourself and see what appeals to you.

Also I read a book, "Quantum Probability" by Stanley Gudder, that was really interesting; maybe check that out.

Any prerequisites for Functional Analysis?

What do you think about Dirac's book?

If you know measure theory you have more than enough. Kreyszig starts at a very low level; you should be fine.

We will now split the integrand via partial fractions. Let us write [math]C=U^T(I-P')U[/math]. We hypothesize that the integrand may be written as:

[math](zI-U^TP)^{-1}C(I-PUz)^{-1}=(zI-U^TP)^{-1}X+Y(I-PUz)^{-1}[/math]

where [math]X[/math] and [math]Y[/math] are matrices. By multiplying both sides of the equation by [math](zI-U^TP)[/math] to the left and [math](I-PUz)[/math] to the right, we have

[math]C=X(I-PUz)+(zI-U^TP)Y[/math]

This gives us a system of equations:

[math]C=X-U^TPY[/math]
[math]0=-XPU+Y[/math]

Plugging the second into the first, we arrive at the equation:

[math]C=X-U^TPXPU[/math]

We now ask the question: can we solve this equation for [math]X[/math]? Here is where my knowledge gets a little shoddy. If we treat [math]X[/math] and [math]C[/math] as vectors of length [math]n^2[/math], then the system we look to solve is:

[math]\left[(I\otimes I)-(U^TP\otimes PU)\right] X=C[/math]

Very helpful and inspiring post, thank you for that.

>This gives us a system of equations:
How does this work exactly? I get that the equations are obtained by grouping the terms with and without [math]z[/math], but isn't that only valid if [math]X[/math] and [math]Y[/math] don't depend on [math]z[/math]? Is that part of the hypothesis?

Correct, we are working under the assumption that [math]X[/math] and [math]Y[/math] are independent of [math]z[/math]; I should have said that. It now remains to be seen whether this is a well-defined system.

This system will have a unique solution [math]X[/math] for every [math]C[/math] if [math]\det\left[ (I\otimes I)-(U^TP\otimes PU)\right]\ne 0[/math]. Suppose for the moment that the determinant does vanish. Then there exists a nonzero vector [math]v[/math] such that [math]\left[ (I\otimes I)-(U^TP\otimes PU)\right] v=0[/math]. Rearranging this, we see that this vector satisfies [math](U^TP\otimes PU)v=v[/math], i.e. it is an eigenvector of the given matrix with eigenvalue 1. Consider the Kronecker product [math]A\otimes B[/math] where [math]\sigma (A)=\{ a_1,...,a_n\}[/math] and [math]\sigma (B)=\{ b_1,...,b_m\}[/math] (here, [math]\sigma[/math] is the spectrum (eigenvalues) of the operator). Then [math]\sigma (A\otimes B)=\{ a_ib_j\} _{i\le n,j\le m}[/math]. Recall our hypothesis that [math]PU[/math] has eigenvalues with absolute value strictly less than 1. Since the eigenvalues of [math]U^TP[/math] have absolute value at most one, the previous statement implies that [math](U^TP\otimes PU)[/math] has eigenvalues with absolute value strictly less than 1, so no such [math]v[/math] exists and we have a unique solution for [math]X[/math].
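The Kronecker spectrum fact is easy to sanity-check numerically (the random test matrices and sizes are mine):
import numpy as np
rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))
lhs = np.sort_complex(np.linalg.eigvals(np.kron(A, B)))
rhs = np.sort_complex([a * b for a in np.linalg.eigvals(A) for b in np.linalg.eigvals(B)])
print(np.max(np.abs(lhs - rhs)))            # ~1e-13: sigma(A (x) B) = {a_i b_j} up to rounding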

Let us suppose now that we have matrices [math]X[/math] and [math]Y[/math] which satisfy the system above. Plugging these back into our integral equation for [math]\mathcal{P}[/math], we can write:

[math]\mathcal{P}=\psi _0^T\left[\frac{1}{2\pi i}\int _{|z|=1}(zI-U^TP)^{-1}dz\right] X\psi _0 +\psi _0^TY\left[\frac{1}{2\pi i}\int _{|z|=1}(I-PUz)^{-1}dz\right] \psi _0[/math]

So now we've reduced the problem to computing contour integrals of inverse matrix expressions. Here is what I think happens. Suppose we have a matrix [math]A[/math] and a contour [math]\gamma[/math] which encloses [math]\sigma (A)[/math], and let [math]f(z)[/math] be an analytic function. Then I'm pretty sure the following holds:

[math]\frac{1}{2\pi i}\int _{\gamma} f(z)(zI-A)^{-1}dz=f(A)[/math]

Here, [math]f(A)[/math] is a sensible expression if we use a power series representation of [math]f[/math]. This accounts for the left integral (with [math]f\equiv 1[/math], since the unit circle encloses [math]\sigma (U^TP)[/math]). The right integral vanishes since [math](I-PUz)^{-1}[/math] is analytic on the closed unit disk: its poles sit at [math]z=1/\lambda[/math] for eigenvalues [math]\lambda[/math] of [math]PU[/math], which all lie outside the unit circle. Thus, we have as a final expression:

[math]\mathcal{P}=\psi _0^TX\psi _0 [/math]
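Both integrals can be checked numerically; here is a throwaway sketch with [math]f(z)=e^z[/math] and test matrices of my own choosing (eigenvalues placed well inside the unit circle):
import numpy as np
from scipy.linalg import expm
from scipy.stats import unitary_group
A = np.array([[0.3, 0.1, 0.0],
              [0.0, -0.2, 0.2],
              [0.0, 0.0, 0.5]])             # triangular, so sigma(A) = {0.3, -0.2, 0.5}
z = np.exp(2j * np.pi * np.arange(4096) / 4096)   # unit circle, encloses sigma(A)
I = np.eye(3)
# dz = iz dtheta turns (1/2 pi i) int f(z)(zI - A)^{-1} dz into an average of f(z)(zI - A)^{-1} z
cauchy = sum(np.exp(zk) * np.linalg.inv(zk * I - A) * zk for zk in z) / len(z)
print(np.max(np.abs(cauchy - expm(A))))     # ~1e-15: the contour integral recovers e^A
U = unitary_group.rvs(3, random_state=0)
P = np.diag([0.0, 1.0, 1.0])
second = sum(np.linalg.inv(I - P @ U * zk) * zk for zk in z) / len(z)
print(np.max(np.abs(second)))               # ~1e-16: the analytic integrand integrates to zero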

This bugged me so I tried it out with some simple 2x2 matrices. I've found that the vectorized equation and the original matrix equation are not equivalent. I'm not going to write out the entire thing, but consider [math]ABC[/math] (all 2x2 matrices) and [math](A\otimes C)B[/math] ([math]B[/math] being a vector with 4 entries) for a generic [math]B[/math], with [math]A[/math] and [math]C[/math] each a triangular matrix with 1s in the nonzero entries. Then [math]ABC[/math] and [math](A\otimes C)B[/math] are not equivalent.

The rest seems right, but unfortunately it's not my specialty so I can't say for sure (you seem to be uncertain as well). I'll see if I can find a reference for the "matrix integral formula."

math.stackexchange.com/questions/337071/cauchy-integral-formula-for-matrices
That was easy. The problem with the vectorized reformulation is still unsolved, though.

Okay so the formula I had written was basically made up, but I have found from the wikipedia article on the Kronecker product (en.wikipedia.org/wiki/Kronecker_product) that we can write:
[math]AXB\sim (B^T\otimes A)X[/math]
where [math]X[/math] is a vector constructed by COLUMN stacking the matrix. This leads us to the following formula governing [math]X[/math]:

[math]\left[ (I\otimes I)-(U^TP\otimes U^TP)\right] X=C[/math]

Recall our assumption that [math]PU[/math] has eigenvalues with absolute value strictly less than 1. Does this mean that [math]U^TP[/math] also has eigenvalues with absolute value strictly less than 1? I haven't looked into this, but my hunch is yes. This would basically confirm the claim about the final form of [math]\mathcal{P}[/math] made above.
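As a cross-check that avoids the vectorization conventions entirely: since the spectral radii are below 1, the series [math]X=\sum _{k\ge 0}(U^TP)^kC(PU)^k[/math] converges and solves [math]C=X-U^TPXPU[/math] by telescoping. A throwaway numpy sketch (same arbitrary test system as my earlier snippets) confirming [math]\mathcal{P}=\psi _0^TX\psi _0[/math] against the brute-force sum:
import numpy as np
from scipy.stats import unitary_group
n = 5
U = unitary_group.rvs(n, random_state=0)
P = np.diag([0.0, 0.0, 1.0, 1.0, 1.0])      # projects off of S = {1, 2}
Pp = np.diag([0.0, 1.0, 1.0, 1.0, 1.0])     # P' projects off of S' = {1}
psi0 = np.zeros(n); psi0[4] = 1.0
psi1 = np.zeros(n); psi1[0] = 1.0
C = U.conj().T @ (np.eye(n) - Pp) @ U       # C = U^T (I - P') U
A, B = U.conj().T @ P, P @ U                # A = U^T P, B = PU
X, term = np.zeros((n, n), dtype=complex), C.copy()
for _ in range(3000):                       # X = sum_k A^k C B^k, truncated
    X += term
    term = A @ term @ B
print((psi0 @ X @ psi0).real)               # psi0^T X psi0
direct, M = 0.0, np.eye(n)
for t in range(1, 3000):                    # brute-force sum of |psi1^T U (PU)^{t-1} psi0|^2
    direct += abs(psi1.conj() @ U @ M @ psi0) ** 2
    M = P @ U @ M
print(direct)                               # the two numbers agree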

New equation looks good; I verified it for 2x2 matrices just in case.
>Does this mean that [math]U^T P[/math] also has eigenvalues with absolute value strictly less than 1?
[math]U^T P = (PU)^T[/math], so these two matrices have the same spectrum. This is immediate in finite dimensions:
We know that [math]U^T P[/math] is invertible, so there exists a unitary matrix [math]L[/math] such that [math]L (PU) L^{-1} =: D[/math] is diagonal. Now [math]U^T P = (L^{-1} D L)^T [/math] and so [math]L (U^T P) L^{-1} = D^T = D[/math] (using unitarity of [math]L[/math]).
This should work in infinite dimensions as well; using the same idea, we should be able to show that the operator norm (which "doesn't care" about unitary transformations) of [math]U^T P[/math] is at most that of [math]PU[/math]. We're currently working with finite dimensions so I'll leave it at that for now, but if you want to look at that again later just say the word.

>We know that [math]U^TP[/math] is invertible
I meant to say
>We know that [math]PU[/math] is invertible

Though I guess we never assumed that to be the case, so maybe my argument doesn't work as such. The trace of a matrix is still invariant under transposition though so I'm fairly certain they still have the same spectrum.

Right, [math]PU[/math] should NOT be invertible. I would guess the spectrum of [math]U^TP[/math] relates to that of [math]PU[/math] in a simple way.

So is this the end of the argument and you want to iron out the details from here on (and maybe generalize a bit later)?

In any case, I think I have a proof for finite dimensions:
Let [math]v_i[/math] and [math]\lambda_i[/math] denote the nonzero normalized eigenvectors and the corresponding eigenvalues of [math]PU[/math]. Then [math]PUv_i=\lambda_iv_i[/math] implies that [math] v_i^T(U^TP) = \lambda_iv_i^T[/math], which further implies that [math] v_i^T(U^TP)v_i = \lambda_iv_i^Tv_i[/math] and so [math] (v_i,U^TPv_i)=(v_i,\lambda_iv_i)[/math].
Now let [math] w=\sum_i a_iv_i[/math] be an element of the span of the eigenvectors of [math]PU[/math]. Then
[eqn]
(w,U^TPv_j) = \sum_i a_i (v_i,U^TPv_j) = \sum_i a_i v_i^T(U^TPv_j) = \sum_i a_i (v_i^TU^TP) v_j = \sum_i a_i (PUv_i)^T v_j = \sum_i a_i \lambda_i \delta_{i,j} =a_j \lambda_j (v_i,v_j) = (w,\lambda_j v_j).
[/eqn]
For vectors not in the span of these eigenvectors the expression would be zero and so [math] (w,U^TPv_i)=(w,\lambda_iv_i)[/math] holds for all [math]w[/math] and thus [math]U^TPv_i=\lambda_iv_i[/math]. It follows that [math]PU[/math] and [math]U^TP[/math] have the same spectrum.

Followed your argument up until this line:

[math]\sum _i a_i(PUv_i)^Tv_j=\sum _i a_i\lambda _i\delta _{i,j}[/math]

For unitary matrices we have that the eigenvectors are orthogonal, but [math]PU[/math] isn't unitary. In any case, I think this article has what we need:

math.stackexchange.com/questions/123923/a-matrix-and-its-transpose-have-the-same-set-of-eigenvalues

The spectrum of [math]U^TP[/math] is the complex conjugate of the spectrum of [math]PU[/math]
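A quick numerical confirmation (the random unitary and projection are my own toy choices):
import numpy as np
from scipy.stats import unitary_group
U = unitary_group.rvs(4, random_state=0)
P = np.diag([0.0, 1.0, 1.0, 1.0])
spec_PU_conj = np.sort_complex(np.linalg.eigvals(P @ U).conj())
spec_UP = np.sort_complex(np.linalg.eigvals(U.conj().T @ P))   # U^T P = (PU)^T
print(np.max(np.abs(spec_UP - spec_PU_conj)))   # ~1e-15: the spectra are complex conjugates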

>[math]PU[/math] isn't unitary.
Force of habit.
>I think this article has what we need
Indeed it is. I can't believe I missed such a simple proof.
This ended up being quite elementary all things considered. I feel like I didn't contribute much of anything, but it was still interesting.
If you want we can try to generalize this to [math] \Omega = \mathbb{R}^d [/math]. The main issue looks to be the existence of an [math] X [/math] satisfying the partial fraction equation; I think the proof of the [math] f(A) [/math] integral formula still works for bounded operators [math] A: L^2 \rightarrow L^2 [/math].
I'll post again when I have something; for now I'll actually work on my Master's thesis.

thanks for the input though my guy, always helpful to have an extra set of eyes

what's your master's thesis about?

Is there a way to do this in continuous time at all? I'm not sure how we could get an [math] f [/math] as before when we allow the time between measurements to vary.

My master's thesis is related to neural networks. I'm not actually sure where it's going right now. My professor just gave me an article about "Deep Convolutional Neural Networks" (DCNN for short; they're not actually neural networks) and wants me to use the ideas presented there (think: wavelet analysis and frame theory) together with rough integration / regularity structures.
If you're not familiar with those, wavelet analysis is about parametrizing functions via their inner products or convolutions with wavelets, wavelets being "nice functions." If you play around with this some more you eventually get to DCNNs, which are just an elaborate method to parametrize a function. The way you achieve this is similar to neural networks, hence the name.
Both rough integration and regularity structures use similar ideas: rough integration basically generates additional parameters on top of a given function, and regularity structures define integration via wavelets. I'm now supposed to do something by combining these ideas, though currently neither I nor my professor knows how one would go about doing that.

I'm a math undergraduate and I like probability and measure theory, how do I become like you guys?

It's honestly not that difficult. I started out focusing on financial mathematics after getting my Bachelor's, but then liked the theoretical aspects more and so ended up doing a course on rough integration.
Really once you're done with Bachelor's / undergrad you'll probably have enough mathematical maturity that you can pick up the gist of almost any subject within 1 semester by doing a course on it and studying it seriously. Measure theory is a good base so you're already partway there, even if you don't realize it. Try to do functional analysis sometime soon though, I personally like the subject and it's extremely useful in pretty much everything, physics included.

>Continuous time
One way to do this is to note that for any unitary matrix [math]U[/math], there exists a Hermitian matrix [math]H[/math] such that [math]U=e^{iH}[/math] in the matrix exponential sense. So without measuring the system, we would find that for an initial state [math]\psi _0[/math], the system becomes [math]e^{itH}\psi _0[/math] at time [math]t[/math]. If we want to take partial measurements of the state, we run into a problem called the Quantum Zeno effect (en.wikipedia.org/wiki/Quantum_Zeno_effect). I haven't worked this out yet, but I'm predicting this means that if you continuously observe an absorbing boundary, you will disturb the quantum system in such a way that you never find the particle. People have worked around this by taking measurements at random times, but I haven't looked too much into their work.
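Here is a quick toy simulation of that prediction (the Hamiltonian, dimension, and site choices are mine): split a fixed time [math]T[/math] into [math]n[/math] steps, evolve by [math]e^{iHT/n}[/math] between measurements, and project off the absorbing site after each step; the survival probability tends to 1 as the measurements become continuous:
import numpy as np
from scipy.linalg import expm
d = 4
rng = np.random.default_rng(0)
H = rng.standard_normal((d, d)); H = (H + H.T) / 2   # Hermitian generator
P = np.diag([0.0, 1.0, 1.0, 1.0])           # projects off the absorbing site {1}
psi0 = np.zeros(d); psi0[1] = 1.0           # start away from the boundary
T = 1.0
for n in [1, 10, 100, 1000, 10000]:
    U_dt = expm(1j * H * T / n)             # evolution between consecutive measurements
    M = np.linalg.matrix_power(P @ U_dt, n)
    survival = np.linalg.norm(M @ psi0) ** 2
    print(n, survival)                      # survival -> 1: the Zeno effect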

>Neural Networks
A few professors in my department are very involved in neural networks, so I have been to a few related seminars and I did learn a little about wavelets in functional analysis. But I haven't heard of regularity structures or frame theory, it sounds interesting. Does your professor have a main result that xe wants to prove?

Is this you? That post was basically spot on. I will give you my personal story too. I was a very gifted student in my youth and I was pushed ahead in elementary school a year. In high school I enjoyed math but I was never really challenged like I could have been. I took AP Calculus and got a 5 like everybody else. When I went to undergraduate, I started in a pharmacy program but I didn't really like it and my advisors didn't understand why I was taking multivariate calc/linear algebra/ode on the side. After a year, I decided to switch my major to applied math when I realized that I could get out of undergrad in 3 years.

(cont)
Because I went through undergrad so fast and I was at a mediocre state school, I spent more of my energy trying to get out early than on pushing myself to learn new material. I was 20 when I graduated, but I didn't really "get" analysis when I took it and I had virtually no topology or algebra.

I got accepted into a decent grad school for pure math but my background was weak compared to my peers so the first semester was brutal. I was taking Differential Topology/PDE/Measure Theory/Algebra. In PDE the entire course was about Sobolev spaces and proving various norm inequalities and I didn't even know what a norm was. Same with difftop, there were all sorts of topology arguments that I flat out missed because I didn't have the background and it was a struggle to keep up. Fortunately, I am a genius so I eventually figured it out, but for a year it was difficult and frustrating.

The most important thing you should get from an education in maths is the ability to pick up any paper and follow its logic if you spend enough time studying it; basically what the other anon said. The way you get there is by just reading enough math and really immersing yourself in the proofs, especially in the introductory texts. If you're an undergrad and you already have background in measure theory you'll probably end up okay; you're ahead of where I was at the time. Just challenge yourself to learn new things, and when you come across terms you don't know and arguments you don't follow, take the time to look them over until you understand them. You will be much better for it and you will have a deeper understanding than if you take statements at face value and skip ahead.

>I'm predicting this means that if you continuously observe an absorbing boundary, you will disturb the quantum system in such a way that you never find the particle.
That sounds interesting as fuck. Do you have a link or a name for a paper on random measurement times? I wouldn't mind refreshing my stochastic calculus a bit.

>But I haven't heard of regularity structures or frame theory, it sounds interesting.
If you know about wavelets you'll have no problems with frames. A frame is a generalization of the concept of a basis in that it is a collection of vectors that spans some space, but we allow the collection to contain redundancies. This means the family is no longer necessarily orthonormal, but it also allows descriptions to be more sparse, which is useful for implementation purposes. I've only done introductory reading on this so I don't have a large library to recommend, but if you need a text you could try S. Mallat's "A wavelet tour of signal processing: The sparse way". It's a bit too engineering-oriented on the theory vs practice scale for me, but still quite interesting.
Regularity structures are very abstract and I don't have a complete grasp of the topic as of yet so I don't want to confuse you with my flawed understanding; what it does is it allows us to integrate along very rough (think [math] \alpha [/math]-Hölder continuous with [math] \alpha < \frac{1}{2} [/math]) trajectories. Eventually this can be used to mathematically justify re-normalization as it is used in physics (you may be familiar with this). If you want to know more, read hairer.org/notes/Regularity.pdf.
>Does your professor have a main result that xe wants to prove?
For my paper in particular, no. I'll summarize the DCNN paper for now and then we'll discuss it further.

I'd like to add to my previous post that it's important to understand the intuition behind proofs. I can't think of a good example right now, but if a proof has a lot of small steps, try to describe them in "normal" language. (E.g. "Use density of [nice class of functions] to reduce the problem to the case of [nice property] functions.") This also helps you remember the necessary conditions for theorems, which is something I had trouble with in my earlier Bachelor semesters. And if you have trouble with anything, always ask your professor or TA; that's what they're there for.

Why don't you use Fermi's golden rule? It should work fine, I guess.

Would Fermi's golden rule apply if we are evolving the state by a non-unitary transformation? (i.e. alternating unitary evolution with measurement)

I can't find the exact paper I was looking at, but this paper seems legit and probably gives you a good idea of the quantum zeno effect (arxiv.org/pdf/0903.3297.pdf)

Those regularity structures sound lit tho. Could you use them to analyze trajectories of Brownian motion?

>Brownian Motion
You're not thinking big enough. Rough integration allows us to define pathwise (meaning we don't make use of probabilistic properties) integrals along even fractional BM. I don't want to talk out of my ass about things I don't completely understand and give off wrong impressions so if you're interested, read the pdf I linked above. The tl;dr is that you can construct a global description of a (generalized) function from local descriptions in the form of "generalized Taylor expansions"; meaning that instead of expanding some function as a sum of polynomials (which have regularity equal to their degree), you expand it as a sum of lower regularity terms (which can have negative regularities). This "global description" is then of course an integral of the "local description" (i.e. the PDE).

Thanks for the tips guys, yes I'm the guy from the other post. I will try to take a functional analysis course as soon as possible. Really this stuff sounds interesting.

bump, this is the best thread in Veeky Forums right now

Right now? Try a year. What a surprise since there's been a flood of summerfriends.

It is, yeah.

Okay, I hit the first problem for the [math] \Omega = \mathbb{R}^d [/math] case. For the early posts we can just change the notation a little, using the inner product instead of kets. For the geometric series we can argue that the operator norm of [math] PU [/math] is at most one, so it works as usual.
The problem comes in at the calculation of [math] \overline{f\left(\frac{1}{\bar{z}}\right)} [/math]. Since we're no longer working with matrices, we can't just use a hermitian transpose. So for now we have
[eqn]
\bar{f(\alpha)} = \bar{\left( \alpha \psi_1, U (Id-PU\alpha)^{-1} \psi_0 \right)} = \left(U (Id-PU\alpha)^{-1} \psi_0, \alpha \psi_1\right) [/eqn]
[eqn]
= \bar{\alpha} \left( (Id-PU\alpha)^{-1} \psi_0 , U^{-1} \psi_1 \right), [/eqn] where [math] \alpha = \frac{1}{\bar{z}} [/math] for ease of notation. It's easy to show that the adjoint of [math] (Id-PU\alpha) [/math] is [math] (Id-\bar{\alpha}U^{-1}P), [/math] but I'm not sure how we can use that for its inverse. I may be missing something easy again; if anyone has a solution I would be grateful.

bump

Turns out I'm an idiot. We have
[eqn] \left( (A^{-1})^{ad}u,Av\right) = \left(u,A^{-1}Av\right) = \left(u,v\right) = \left( A^{ad} (A^{ad})^{-1} u,v\right) = \left( (A^{ad})^{-1} u,Av\right), [/eqn] which together with the fact that [math] A [/math] is invertible implies that [math] (A^{ad})^{-1} = (A^{-1})^{ad}. [/math] This gives us [math] \bar{f}(\alpha) = \bar{\alpha} \left(\psi_0, (Id-\bar{\alpha}U^{-1}P)^{-1} U^{-1} \psi_1\right), [/math] which is just the formula from the finite dimensional case, noting that the hermitian transpose of [math] U [/math] is its inverse as it is unitary.

So I've tried to get the integrand formula from the finite dimensional case and almost succeeded. I'm not going to write the whole thing down (it's the same ideas as before, it just takes some extra justifications; if you really want to see it, tell me and I might), but basically I'm getting
[eqn] \frac{1}{z} f(z) \bar{f}\left(\frac{1}{\bar{z}}\right) = \frac{1}{z} \left( \psi_0, (Id-U^{-1}P\frac{1}{z})^{-1} U^{-1} (Id-P') U (Id-PUz)^{-1} \psi_0 \right), [/eqn]which is exactly correct except for the first factor, which would need to be [math] \frac{1}{\bar{z}} [/math] instead. The reason for the difference is that if we write the second-to-last equation as an inner product (which we have to do in infinite dimensions), pulling the factor [math] \frac{1}{z} [/math] into the inverse causes it to become conjugated, as it goes into the second argument of the inner product. Of course, we can also write it as an inner product in finite dimensions, which leads to a problem even in that case. I'm currently confused by this.
Assuming this gets resolved, the partial fraction step works as is; however, to write the vectorized equation for [math]X[/math] we would have to deal with the abstract tensor product, which is not one of my strengths. My guess is that it still works, but a thorough proof would be nice.
After that, the second integral will vanish as before, so all that would be left would be the Cauchy integral formula for general linear operators.

If there's even anyone else left in the thread now is the time to pitch in.

DO NOT let this thread die please.

Keep this alive boys

Keeping a dead thread on life support won't improve board quality, and clearly interest in this thread has died. It's unfortunate that this one will most likely be replaced by another shitposting thread, but that's the current state of Veeky Forums for you. If you want to improve the board make new threads that aren't shit and report shitposting.