Here, [math]P[/math] is a projection and [math]U[/math] is unitary, but I don't know if that really matters. The expression in the inverse will tend to a singular matrix, but the expression [math]I-P[/math] will knock out the arbitrarily high values in the inverse. I have tried doing a block partition of the matrix via the projection but it is messy as balls. how the fuck do i do this
Hunter Nelson
isn't this just id by continuity of the operations involved?
Anthony Foster
yeah, doesn't lim(U^k), k->0 = I ?
Julian Cook
if you manage to define U^k, then yeah it should.
Ryder Powell
no, I'm working on an example where [math]U[/math] is [math]2\times 2[/math]
yes but [math]I-P[/math] is non-invertible, so you get a sort of indeterminate form in matrix flavor, necessitating the use of L'Hopital to evaluate
Leo Rivera
it doesn't matter if it's 2x2, why would it matter?
it's easy to show that if e->0 then A(A+eI)^{-1} tends to id in your case (A singular, A+eI nonsingular) as the inverse is continuous. try to use this, attempting to generalize l'hop is overkill
Brayden Jones
>yes but I−P is non-invertible, so you get a sort of indeterminate form in matrix flavor, necessitating the use of L'Hopital to evaluate
would it be an identity matrix with a few zeros missing?
if (I-P) is not full rank then the product will not be full-rank, but it seems like it would converge to the identity, or rather, the identity for some sub-space
James Scott
>few zeros missing?
* a few ones missing i mean.
Jaxon Barnes
how can it approach the identity when (I-P) is not full rank?
David Campbell
a limit of singular matrices can be nonsingular
Jaxson King
wait it can't, can it?
Parker Powell
(I-P) doesn't even depend on k though. you can pull it out of the limit, and then you're multiplying the limit by a singular matrix
Xavier Ross
i think it can, but that's irrelevant
Cooper Walker
my "it's easy to show" was wrong and I'm an idiot. sorry
Asher Phillips
not necessarily and this is what I've been grappling with. So let's suppose WLOG that the projection [math]P[/math] has the form [math]P=\begin{bmatrix}I & 0 \\ 0 & 0\end{bmatrix}[/math]. Assuming [math]U^k-P[/math] has an inverse for very small [math]k>0[/math], we find that this inverse looks something like:
[math](U^k-P)^{-1}=\begin{bmatrix} A & B \\ C & D\end{bmatrix}[/math]
[math]A[/math] gets arbitrarily large as [math]k[/math] goes to zero, [math]B[/math] and [math]C[/math] approach some nontrivial matrices, and [math]D[/math] approaches an identity matrix. The matrices [math]B[/math] and [math]C[/math] are what I am after. Like I said I will show you a closed form for the [math]2\times 2[/math] case in a bit.
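If you want to see the block behavior numerically first, here's a throwaway numpy sketch. Everything in it is a made-up example (eigenangles 0.7 and 1.3, random unitary eigenbasis), not my actual matrices:

```python
import numpy as np

# made-up 2x2 example: U unitary with eigenvalues e^{0.7i}, e^{1.3i},
# P the orthogonal projection onto the first coordinate
rng = np.random.default_rng(0)
S, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
thetas = np.array([0.7, 1.3])
P = np.diag([1.0, 0.0]).astype(complex)

results = {}
for k in [1e-2, 1e-4, 1e-6]:
    # principal branch: U^k = S diag(e^{i k theta}) S^*
    Uk = S @ np.diag(np.exp(1j * k * thetas)) @ S.conj().T
    M = np.linalg.inv(Uk - P)  # blocks: A = M[0,0], B = M[0,1], C = M[1,0], D = M[1,1]
    results[k] = M
    print(f"k={k:g}  |A|={abs(M[0, 0]):.3e}  B={M[0, 1]:.4f}  "
          f"C={M[1, 0]:.4f}  D={M[1, 1]:.4f}")
```

A blows up like 1/k while B, C settle down to nontrivial values and D goes to 1, consistent with the claim.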
i don't believe that, give me an example
the inverse isn't continuous in a neighborhood containing a singular matrix
you're right the resulting limit will not be full rank. The block [math]D[/math] seems to converge to the identity but it's those off diagonal nontrivial blocks that have value
Kevin Williams
it can't, det is continuous
Connor Lewis
he said the limit of singular matrices can be nonsingular, not the other way around
Joshua Peterson
it can't, det is continuous
Hudson Baker
lmao sorry
John Carter
>A gets arbitrarily large as k goes to zero,
this doesn't matter though, right?
>B and C approach some nontrivial matrices
wouldn't they approach 0?
Lincoln Edwards
Doesn't matter since we're projecting off of the top row. Try it numerically and see what happens
Josiah Bell
for the real 2x2 case, it's always of the form [math]\begin{bmatrix}\cos t & \sin t \\ -\sin t & \cos t\end{bmatrix}[/math] so that should give some ideas
Angel Hall
>The matrices B and C are what I am after
pretty sure they go to zero.
U^k = VD^kV' where D is diagonal with unit-modulus entries e^{it}. Since D^k approaches I (taking the principal branch), and V,V' are also unitary matrices, then U^k has to approach I, right? correct me if i'm wrong
Jose Richardson
sure [math]U^k\rightarrow I[/math], but I am talking about [math](U^k-P)^{-1}[/math]
Lincoln Butler
this is an odd question anyway. you have to think about what its inverse approaches as it approaches a matrix with no inverse, so i'm tempted to say it's not defined
>the expression I−P will knock out the arbitrarily high values in the inverse.
but A, B, and C all go away after multiplication by I-P, like you said
Ryder Williams
if it does exist, it would probably have to be (I-P), but that doesn't make much sense
Evan Morgan
Okay here we go for the [math]2\times 2[/math] case. Let us write the eigendecomposition [math]U=S\Lambda S^{-1}[/math] where [math]S[/math] is unitary and has representation [math]S=\begin{bmatrix}a & b \\ -\bar{b} & \bar{a}\end{bmatrix}[/math]. If we let [math]\Lambda =\begin{bmatrix} \lambda _1 & 0 \\ 0 & \lambda _2\end{bmatrix}[/math] and [math]P=\begin{bmatrix} 1 & 0 \\ 0 & 0\end{bmatrix}[/math], we can write:
[math](U^k-P)^{-1}=\frac{1}{\det (U^k-P)}\begin{bmatrix} |b|^2x^k+|a|^2y^k & ab(x^k-y^k) \\ \bar{a}\bar{b}(x^k-y^k) & |a|^2x^k+|b|^2y^k-1\end{bmatrix}[/math]
where [math]\det (U^k-P)=(|a|^2x^k+|b|^2y^k-1)(|b|^2x^k+|a|^2y^k)-|a|^2|b|^2(x^k-y^k)^2[/math].
Without going into the specifics, it is easy to show that [math]D\rightarrow 1[/math] in the limit. So here is the solution for the easy case; how does this generalize to larger matrices?
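In case it helps, expanding [math]\lambda _j^k=e^{k\log \lambda _j}\approx 1+k\log \lambda _j[/math] in the eigendecomposition above (and assuming [math]|a|^2\log \lambda _1+|b|^2\log \lambda _2\neq 0[/math]), the blocks appear to behave like:
[math]B\rightarrow \frac{ab(\log \lambda _1-\log \lambda _2)}{|a|^2\log \lambda _1+|b|^2\log \lambda _2},\quad C\rightarrow \frac{\bar{a}\bar{b}(\log \lambda _1-\log \lambda _2)}{|a|^2\log \lambda _1+|b|^2\log \lambda _2}[/math]
while [math]A\sim \frac{1}{k(|a|^2\log \lambda _1+|b|^2\log \lambda _2)}[/math] blows up and [math]D\rightarrow 1[/math]. Someone double-check the algebra.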
Grayson Kelly
sorry, here I mean [math]x=\lambda _1[/math] and [math]y=\lambda _2[/math]
Easton Thomas
U^k-P is also singular whenever U^k and P share an eigenvector and an eigenvalue (=1?)
Wyatt Russell
so if there are real, positive eigenvalues (which have to be equal to 1?), then P has to map their corresponding eigenvectors to zero, otherwise you can't take the limit.
Benjamin Diaz
yes, I have tacitly assumed that U and P do not share an eigenvalue
Jaxon Young
are they both real?
Lincoln Thomas
no
Angel Barnes
well you got me then, i'm still trying to picture what happens when U is orthogonal.
U has to have at least one negative eigenvalue for the limit to be defined (right?), and when this is the case, U^k will have an imaginary part when k is close to zero. and now we're into complex numbers and i'm not sure how to picture this
Camden Peterson
in any case, at least some eigenvalues of U^k will have an imaginary part whenever the limit is defined, right?
Ryan Kelly
for k close to zero, i mean
Gabriel Morales
Call the limit B and assume it exists. You're basically going to need (I-P) = B(I-P), since U^k goes to I. Then (B-I)(I-P) = 0, so find the kernel of (I-P) transpose or some such. Might be a way to get a necessary condition for B
James Reed
>You’re going to basically need (I-P)=B (I-P) since U^k goes to I.
i don't see how this follows from U^k -> I.
Blake Wilson
For "small" k you should have approximately I-P = B(I + k log U - P), since U^k ≈ I + k log U. In the limit expect I-P = B(I-P). Since P is a projection, B = I-P satisfies this equation
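Here's a quick numeric sanity check of that first-order picture (all numbers made up; I'm reading the approximation as U^k ≈ I + k log U, so the exact inverse and the inverse of the linearization should agree in the limit, at least in the bounded part):

```python
import numpy as np

# first-order picture: U^k ~ I + k*logU, so (U^k - P)^{-1} should be
# close to (I - P + k*logU)^{-1} for small k.
# made-up 2x2 unitary with eigenangles 0.4, 1.1
rng = np.random.default_rng(1)
S, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
thetas = np.array([0.4, 1.1])
logU = S @ np.diag(1j * thetas) @ S.conj().T  # principal log of U
P = np.diag([1.0, 0.0]).astype(complex)
I = np.eye(2, dtype=complex)

diffs = {}
for k in [1e-2, 1e-3, 1e-4]:
    Uk = S @ np.diag(np.exp(1j * k * thetas)) @ S.conj().T
    exact = np.linalg.inv(Uk - P)
    approx = np.linalg.inv(I - P + k * logU)
    # left-multiplying by (I-P) kills the blowing-up top row,
    # so only the bounded part of the inverses gets compared
    diffs[k] = np.linalg.norm((I - P) @ (exact - approx))
    print(f"k={k:g}  ||(I-P)(exact - approx)|| = {diffs[k]:.3e}")
```

the difference shrinks with k, so the bounded blocks of the two inverses share a limit, which is some evidence for the heuristic.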
Isaac Phillips
>B=I-P
the pseudoinverse of (U^k - P) at k=0 is (I-P), but that's because the pseudoinverse is not continuous at this point.
and i'm still not sure how you're getting what you said in the first place.
Luke Jones
Try expressing the inverse as a polynomial in U^k - P using Cayley–Hamilton and then apply the limit. I think you can then use the fact that (I - P) is a projection to simplify that polynomial
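e.g. in the 2x2 case Cayley–Hamilton gives M^{-1} = (tr(M)I - M)/det(M), so the whole limit question becomes a question about tr and det. Quick check with a made-up U and P (random unitary, small k):

```python
import numpy as np

# Cayley-Hamilton in the 2x2 case: M^{-1} = (tr(M) I - M) / det(M).
# check with a made-up M = U^k - P (random unitary U, small k)
rng = np.random.default_rng(2)
S, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
k = 1e-3
Uk = S @ np.diag(np.exp(1j * k * np.array([0.5, 0.9]))) @ S.conj().T
P = np.diag([1.0, 0.0]).astype(complex)
M = Uk - P

inv_ch = (np.trace(M) * np.eye(2) - M) / np.linalg.det(M)
err = np.linalg.norm(inv_ch - np.linalg.inv(M))
print("Cayley-Hamilton inverse vs numpy:", err)
```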