L'Hopital's rule analogue for matrix expressions

I am interested in deriving a closed form for the following expression:

[math]\lim _{k\rightarrow 0}(I-P)(U^k-P)^{-1}[/math]

Here, [math]P[/math] is a projection and [math]U[/math] is unitary, but I don't know if that really matters. The matrix inside the inverse tends to a singular matrix, but the factor [math]I-P[/math] should knock out the arbitrarily large values in the inverse. I have tried doing a block partition of the matrix via the projection but it gets messy fast. How do I approach this?

isn't this just id by continuity of the operations involved?

yeah, doesn't lim U^k = I as k -> 0?

if you manage to define U^k, then yeah it should.

no, I'm working on an example where [math]U[/math] is [math]2\times 2[/math]

yes but [math]I-P[/math] is non-invertible, so you get a sort of indeterminate form in matrix flavor, necessitating the use of L'Hopital to evaluate

it doesn't matter if it's 2x2, why would it matter?

it's easy to show that if e->0 then A(A+eI)^(-1) tends to id in your case (A singular, A+eI nonsingular) as the inverse is continuous. try to use this, attempting to generalize l'hop is overkill

>yes but I−P is non-invertible, so you get a sort of indeterminate form in matrix flavor, necessitating the use of L'Hopital to evaluate

would it be an identity matrix with a few zeros missing?

if (I-P) is not full rank then the product will not be full rank, but it seems like it would converge to the identity, or rather, the identity on some subspace

>few zeros missing?

* a few ones missing i mean.

how can it approach the identity when (I-P) is not full rank?

a limit of singular matrices can be nonsingular

wait it can't, can it?

(I-P) doesn't even depend on k though. you can pull it out of the limit, and then you're multiplying the limit by a singular matrix

i think it can, but that's irrelevant

my "it's easy to show" claim is wrong and I'm an idiot. sorry

not necessarily and this is what I've been grappling with. So let's suppose WLOG that the projection [math]P[/math] has the form [math]P=\begin{bmatrix}I & 0 \\ 0 & 0\end{bmatrix}[/math]. Assuming [math]U^k-P[/math] has an inverse for very small [math]k>0[/math], we find that this inverse looks something like:

[math](U^k-P)^{-1}=\begin{bmatrix} A & B \\ C & D\end{bmatrix}[/math]

[math]A[/math] gets arbitrarily large as [math]k[/math] goes to zero, [math]B[/math] and [math]C[/math] approach some nontrivial matrices, and [math]D[/math] approaches an identity matrix. The matrices [math]B[/math] and [math]C[/math] are what I am after. Like I said I will show you a closed form for the [math]2\times 2[/math] case in a bit.

i don't believe that, give me an example

the inverse isn't continuous in a neighborhood containing a singular matrix

you're right, the resulting limit will not be full rank. The block [math]D[/math] seems to converge to the identity, but it's those off-diagonal nontrivial blocks that have value

it can't, det is continuous

he said the limit of singular matrices can be nonsingular, not the other way around

it can't, det is continuous

lmao sorry

>A gets arbitrarily large as k goes to zero,

this doesn't matter though, right?

>B and C approach some nontrivial matrices

wouldn't they approach 0?

Doesn't matter if we're projecting off the top row. Try it numerically and see what happens
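
Following the "try it numerically" suggestion, here is a minimal sketch. The eigenvalue angles 0.7 and -1.3 and the coefficients a = 0.6, b = 0.8 are arbitrary illustrative choices (not from the thread), with P projecting onto the first coordinate:

```python
import numpy as np

# Hypothetical concrete instance: U = S diag(e^{0.7i}, e^{-1.3i}) S^H
# with S a real rotation (a = 0.6, b = 0.8), and P = diag(1, 0).
a, b = 0.6, 0.8
l1, l2 = np.exp(0.7j), np.exp(-1.3j)
S = np.array([[a, b], [-b, a]], dtype=complex)
P = np.diag([1.0, 0.0]).astype(complex)
I = np.eye(2, dtype=complex)

def expr(k):
    """Evaluate (I - P)(U^k - P)^{-1} using U^k = S diag(l1^k, l2^k) S^H."""
    Uk = S @ np.diag([l1**k, l2**k]) @ S.conj().T
    return (I - P) @ np.linalg.inv(Uk - P)

# Watch the expression as k shrinks: the top row is annihilated by
# I - P, and the bottom row settles down to finite values.
for k in [1e-2, 1e-4, 1e-6]:
    print(k, expr(k).round(4))
```

In this run the top row is identically zero and the bottom row converges, so the divergent block of the inverse really does get knocked out by I - P while a nontrivial off-diagonal entry survives.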

for the real 2x2 case, it's always of the form cos(t), sin(t), -sin(t), cos(t) so that should give some ideas

>The matrices B and C are what I am after

pretty sure they go to zero.

U^k = VD^kV' where D is diagonal with unit-modulus entries (the eigenvalues of a unitary matrix lie on the unit circle). Since D^k approaches I, and V, V' are also unitary matrices, then U^k has to approach I, right? correct me if i'm wrong

sure [math]U^k\rightarrow I[/math], but I am talking about [math](U^k-P)^{-1}[/math]

this is an odd question anyway. you have to think about what its inverse approaches as it approaches a matrix with no inverse, so i'm tempted to say it's not defined

but A and B get knocked out after multiplication by I-P, like you said

>the expression I−P will knock out the arbitrarily high values in the inverse.

if it does exist, it would probably have to be (I-P), but that doesn't make much sense

Okay here we go for the [math]2\times 2[/math] case. Let us write the eigendecomposition [math]U=S\Lambda S^{-1}[/math] where [math]S[/math] is unitary (so [math]|a|^2+|b|^2=1[/math]) and has representation [math]S=\begin{bmatrix}a & b \\ -\bar{b} & \bar{a}\end{bmatrix}[/math]. If we let [math]\Lambda =\begin{bmatrix} \lambda _1 & 0 \\ 0 & \lambda _2\end{bmatrix}[/math] and [math]P=\begin{bmatrix} 1 & 0 \\ 0 & 0\end{bmatrix}[/math], we can write:

[math]U^k-P=\begin{bmatrix}|a|^2\lambda _1 ^k+|b|^2\lambda _2^k-1 & ab(\lambda _2 ^k-\lambda _1^k) \\ \bar{a}\bar{b}(\lambda _2^k-\lambda _1^k) & |a|^2\lambda _2^k+|b|^2\lambda _1^k\end{bmatrix}[/math]

The inverse of this matrix is simple by the [math]2\times 2[/math] inversion formula. The determinant of this matrix may be written as:

[math]\det\left( U^k-P\right) =\lambda _1^k\lambda _2^k-\lambda _1^k+|a|^2(\lambda _1^k-\lambda _2^k)[/math]

I will finish the argument in a second
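
As a quick sanity check on that determinant formula, one can compare it against a direct numerical determinant (the values of a, b and the eigenvalue angles below are arbitrary illustrative choices, not from the thread):

```python
import numpy as np

# Compare det(U^k - P) against l1^k l2^k - l1^k + |a|^2 (l1^k - l2^k)
# for a hypothetical concrete case: a, b real with a^2 + b^2 = 1.
a, b = 0.6, 0.8
l1, l2 = np.exp(0.7j), np.exp(-1.3j)
S = np.array([[a, b], [-b, a]], dtype=complex)
P = np.diag([1.0, 0.0]).astype(complex)

for k in [0.5, 0.1, 0.01]:
    M = S @ np.diag([l1**k, l2**k]) @ S.conj().T - P
    formula = l1**k * l2**k - l1**k + a**2 * (l1**k - l2**k)
    assert np.isclose(np.linalg.det(M), formula)
```

Note the formula goes to 0 as k -> 0 (it becomes 1 - 1 + 0), which is exactly why the inverse blows up.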

i see what you mean. A gets big, and B and C will depend entirely on the direction that U^k approaches I from

if U=I then this is undefined. so then wouldn't the limit be undefined without further restrictions on U?

Let's go through trying to compute the limits of each term in the matrix as [math]k\rightarrow 0[/math]. The top left term satisfies:

[math]A=\frac{|a|^2(\lambda _2^k-\lambda _1^k)+\lambda _1^k}{\lambda _1^k\lambda _2^k-\lambda _1^k+|a|^2(\lambda _1^k-\lambda _2^k)}[/math]

The numerator approaches [math]1[/math] while the denominator approaches [math]0[/math], so this entry blows up and does not have a limit.

The top right term satisfies:

[math]B=\frac{ab(\lambda _1^k-\lambda _2^k)}{\lambda _1^k\lambda _2^k-\lambda _1^k+|a|^2(\lambda _1^k-\lambda _2^k)}[/math]

Rearranging terms and using L'Hopital's rule, we find that this entry converges to:

[math]B=ab\left(\frac{\log (x)-\log (y)}{|a|^2\log (x)+|b|^2\log (y)}\right)[/math]

The other off-diagonal entry is similar:

[math]C=\bar{a}\bar{b}\left(\frac{\log (x)-\log (y)}{|a|^2\log (x)+|b|^2\log (y)}\right)[/math]

Without going into the specifics, it is easy to show that [math]D=1[/math]. So here is the solution for the easy case; how does this generalize to larger matrices?

sorry, here I mean [math]x=\lambda _1[/math] and [math]y=\lambda _2[/math]
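
Those closed forms can be double-checked against a direct evaluation of the inverse at small k. A sketch with arbitrary concrete values for a, b, λ1, λ2 (chosen real a, b so that the B and C limits coincide):

```python
import numpy as np

# Hypothetical concrete instance: a, b real, so conj(a)*conj(b) = a*b.
a, b = 0.6, 0.8
l1, l2 = np.exp(0.7j), np.exp(-1.3j)
S = np.array([[a, b], [-b, a]], dtype=complex)
P = np.diag([1.0, 0.0]).astype(complex)

# Closed-form limits from the L'Hopital computation above
denom = a**2 * np.log(l1) + b**2 * np.log(l2)
B_limit = a * b * (np.log(l1) - np.log(l2)) / denom
C_limit = B_limit  # conj(a)*conj(b) = a*b for real a, b

# Direct evaluation of (U^k - P)^{-1} at small k
k = 1e-6
inv = np.linalg.inv(S @ np.diag([l1**k, l2**k]) @ S.conj().T - P)
assert np.isclose(inv[0, 1], B_limit, atol=1e-4)  # B entry
assert np.isclose(inv[1, 0], C_limit, atol=1e-4)  # C entry
assert np.isclose(inv[1, 1], 1.0, atol=1e-4)      # D -> 1
```

This assumes the denominator |a|^2 log(λ1) + |b|^2 log(λ2) is nonzero; if those two weighted logs cancel, the limit fails to exist for a second reason beyond the shared-eigenvalue issue.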

U^k-P is also singular whenever U^k and P share an eigenvector and an eigenvalue (=1?)

so if there are real, positive eigenvalues (which have to be equal to 1?), then P has to map their corresponding eigenvectors to zero, otherwise you can't take the limit.

yes, I have tacitly assumed that U and P do not share an eigenvalue

are they both real?

no

well you got me then, i'm still trying to picture what happens when U is orthogonal.

U has to have at least one negative eigenvalue for the limit to be defined (right?), and when this is the case, U^k will have an imaginary part when k is close to zero. and now we're into complex numbers and i'm not sure how to picture this

in any case, at least some eigenvalues of U^k will have an imaginary part whenever the limit is defined, right?

for k close to zero, i mean

Call the limit B, assume it exists. You’re going to basically need (I-P)=B (I-P) since U^k goes to I. (B-I)(I-P)=0. So find the kernel of (I-P) transpose or some such. Might be a way to get a necessary condition for B
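
That necessary condition can be checked numerically on a concrete instance (arbitrary a, b and eigenvalue angles, P = diag(1, 0)): the computed limit does satisfy (B - I)(I - P) = 0, even though B itself need not be I - P.

```python
import numpy as np

# Check (B - I)(I - P) = 0 for B = lim (I - P)(U^k - P)^{-1},
# approximated at small k. Hypothetical concrete instance: real a, b.
a, b = 0.6, 0.8
l1, l2 = np.exp(0.7j), np.exp(-1.3j)
S = np.array([[a, b], [-b, a]], dtype=complex)
P = np.diag([1.0, 0.0]).astype(complex)
I = np.eye(2, dtype=complex)

k = 1e-6
B = (I - P) @ np.linalg.inv(S @ np.diag([l1**k, l2**k]) @ S.conj().T - P)
assert np.allclose((B - I) @ (I - P), 0, atol=1e-4)  # necessary condition holds
assert not np.allclose(B, I - P, atol=1e-2)          # but B is not I - P here
```

So the condition pins down only the action of B on the range of I - P; it is necessary but far from sufficient.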

>You’re going to basically need (I-P)=B (I-P) since U^k goes to I.

i don't see how this follows from U^k -> I.

For “small” k you should have approximately I-P = B(I + k log(U) - P), since U^k ≈ I + k log(U). In the limit, expect I-P = B(I-P). Since P is a projection, B = I-P satisfies this equation.

>B=I-P

the pseudoinverse of (U^k - P) at k=0 is (I-P), but that's because the pseudoinverse is not continuous at this point.

and i'm still not sure how you're getting what you said in the first place.

Try expressing the inverse as a polynomial in U^k - P using Cayley-Hamilton and then apply the limit. I think you can then use the fact that (I - P) is a projection to simplify that polynomial
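
In the 2x2 case Cayley-Hamilton gives M^{-1} = (tr(M) I - M)/det(M), so the whole expression becomes a polynomial in U^k - P divided by the determinant. A quick sketch with arbitrary concrete values (same hedges as before: a, b, the eigenvalue angles, and k are illustrative choices):

```python
import numpy as np

# Cayley-Hamilton for 2x2: M^2 - tr(M) M + det(M) I = 0, hence
# M^{-1} = (tr(M) I - M) / det(M). Hypothetical concrete U and P.
a, b = 0.6, 0.8
l1, l2 = np.exp(0.7j), np.exp(-1.3j)
S = np.array([[a, b], [-b, a]], dtype=complex)
P = np.diag([1.0, 0.0]).astype(complex)
I = np.eye(2, dtype=complex)

k = 1e-3
M = S @ np.diag([l1**k, l2**k]) @ S.conj().T - P
poly_inv = (np.trace(M) * I - M) / np.linalg.det(M)
assert np.allclose(poly_inv, np.linalg.inv(M))

# So (I - P)(U^k - P)^{-1} = [tr(M)(I - P) - (I - P)M] / det(M),
# reducing the matrix limit to scalar limits over det(M).
limit_expr = (I - P) @ poly_inv
```

For n x n one would instead use the degree n-1 polynomial from the characteristic polynomial of U^k - P, which is where the simplification from (I - P) being a projection would have to do the work.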