/mg/ maths general: Langlands Edition

Talk maths

The Work of Robert Langlands:
publications.ias.edu/rpl/

ncatlab.org/nlab/show/Langlands+program

Previous thread


What's the worst mathematical Wikipedia page?

en.wikipedia.org/wiki/Analytization_trick

sup piggots

why exactly is a linear transformation defined with these two properties in mind (over real vector spaces):

[math]T(\mathbf{u}+\mathbf{v}) = T(\mathbf{u})+T(\mathbf{v})[/math]
[math]T(c\mathbf{u}) = cT(\mathbf{u})[/math]

How else would you define it?

>why exactly is a linear transformation defined with these two properties in mind (over real vector spaces):
That's how it's defined for all vector spaces, not just real ones.

The only things you can do in a vector space are add vectors or scale vectors, so linear transformations are defined to "preserve" that structure.

This structure-preserving property is very useful when considering functions between two algebraic structures of the same type:
en.wikipedia.org/wiki/Homomorphism
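The two axioms can be sanity-checked in code (a minimal Python sketch; the 2x2 matrix and the vectors are arbitrary illustrative choices, not from the thread):

```python
# Any map v -> Av given by a matrix A satisfies both linearity axioms.

def apply(A, v):
    # matrix-vector product: (Av)_i = sum_j A[i][j] * v[j]
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[2, 1],
     [0, 3]]
u, v, c = [1, 2], [3, -1], 5

# T(u + v) == T(u) + T(v)
assert apply(A, [u[k] + v[k] for k in range(2)]) == \
       [apply(A, u)[k] + apply(A, v)[k] for k in range(2)]

# T(c*u) == c*T(u)
assert apply(A, [c * u[k] for k in range(2)]) == \
       [c * apply(A, u)[k] for k in range(2)]
```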

thanks

like said, whenever you have some kind of structure, you should investigate maps which preserve this structure in some sense

in this case, it follows that every linear map can be represented by a matrix and composition of maps becomes matrix multiplication. this makes linear maps extremely easy to analyze. linear maps also have very clear geometric meaning: they are rotations, reflections, shears and scalings (i.e. the things you can do in photoshop).
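The "composition becomes matrix multiplication" claim can be checked directly (Python sketch; the two matrices are arbitrary examples):

```python
def matmul(A, B):
    # (AB)[i][j] = sum_k A[i][k] * B[k][j]
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def apply(A, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in A]

S = [[0, -1], [1, 0]]   # rotation by 90 degrees
T = [[2, 0], [0, 2]]    # scaling by 2
v = [3, 4]

# applying T then S is the same as applying the single matrix S*T
assert apply(S, apply(T, v)) == apply(matmul(S, T), v)
```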

>every linear map can be represented by a matrix
Wrong.

>thinks you can describe translations by a linear map.

Where was that implied?

>(i.e. the things you can do in photoshop).

'bout to blow my FUCKING brains out . FUCK LINEAR ALGEBRA. FUCK IT. FUCK IT. FUUUUCCCCK IT ALLLLLLLLLLLLLLL

>LINEAR ALGEBRA
literally the easiest maths

Linear algebra is one of the simplest mathematical topics. Just because you can't handle taking an ordered basis doesn't mean you should insult La-chan.

i have no idea how to change coordinates.

Why are you even at college if you can't even understand linear algebra? Why waste your money like that?

To the user here I answered your question here .

if you have a basis [math]\{v_i\}_{i\leq n}[/math] and want to change it to a basis [math]\{w_k\}_{k\leq n}[/math], then in particular you can write every vector [math]w_k[/math] as a sum of the [math]v_i[/math]. That gives a system [math]w_k = a_{1,k}v_1 + a_{2,k}v_2+\dots+a_{n,k}v_n[/math] for every [math]k[/math]. The matrix with entries [math]a_{i,k}[/math] (the coordinates of [math]w_k[/math] form the [math]k[/math]-th column) is your change-of-basis matrix: it converts coordinates with respect to the [math]w_k[/math] into coordinates with respect to the [math]v_i[/math], and its inverse goes the other way.
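A tiny worked instance (Python sketch; the new basis vectors are made-up choices): take the v-basis to be the standard basis of R^2 and a new basis w_1 = (1,1), w_2 = (1,-1). The matrix whose columns are the w_k converts w-coordinates into v-coordinates.

```python
# Columns of A are the coordinates of w_1 and w_2 in the v-basis.
A = [[1,  1],
     [1, -1]]

def apply(A, c):
    return [sum(A[i][j] * c[j] for j in range(2)) for i in range(2)]

w_coords = [2, 3]                 # the vector 2*w_1 + 3*w_2
v_coords = apply(A, w_coords)     # its coordinates in the v-basis
assert v_coords == [5, -1]        # 2*(1,1) + 3*(1,-1) = (5,-1)
```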

linear algebra depends on the teacher

you can have either an autist that wants all proofs or someone that plugs and chugs into systems of linear equations for u engi majors

pic is a problem on the final that fucked pretty much everyone so he dropped it rofl

Why is/are there no linear transform(s) to get the transpose?

>Why is/are there no linear transform(s) to get the transpose?
What do you mean? The transpose is a linear transformation.

Aporogees for poor Engrish..


Why is there no way to find B such that

[math]A*B = A^T[/math]

Really? Did he not cover basic matrix factorizations in class, then?

(Though he should have put some quantifier on "n", as in "for any positive integer n".)

Is diagonalizing a matrix considered hard on Amerifatland?

Given square matrices [math]A,B[/math] and an inverse [math]M[/math] for [math]1-AB[/math], show that there is an inverse for [math]1-BA[/math] expressed in terms of [math]A,B,M[/math].

I want to start learning more about discrete stuff / combinatorics. I have a strong background in differential geometry and functional analysis / PDE. Is it possible to use this knowledge to my advantage? Are there scenarios in combinatorics where methods / intuition from the aforementioned fields can be applied?

What have you tried?

I know how to solve it. This is a nice exercise for you guys. You can try convincing yourself that 1-AB is invertible if and only if 1-BA is invertible; that can be done abstractly without expressing the inverse for one in terms of A,B and the inverse for the other.

I'm not a "guy".

it's more of a "wtf is the question asking"

That's a very simple problem, once you know that every self-adjoint matrix is diagonalizable.

faggot mentally ill nigger bitch ass

>Given square matrices A,B and an inverse M for 1−AB, show that there is an inverse for 1−BA expressed in terms of A,B,M.
Please no homework in this thread

>pic is a problem on the final that fucked pretty much everyone so he dropped it rofl
Which school for brainlets do you go to?

1-BA trivially has inverse BA-1, no need for M.

This is apparently an interview question from Microsoft.

Multiplicative inverse is what's asked for, not additive.

>faggot mentally ill nigger bitch ass
Are you okay?

saint louis university in stl

The closest interaction those fields have with combinatorics is ergodic theory. Mainly, the representation theory of discrete amenable groups has some nice connections with geometric group theory and ergodic theory, both of which are connected to combinatorics. There is also some more abstract work connected with combinatorics under the guise of operads, specifically through Stasheff polytopes.

How old are you?

pls

If * is matrix product then it definitely is possible sometimes, like when A is invertible. Can you be more specific? Maybe you mean *when* is there no way?

This is why brainlets should stay away from universities.

Thanks for the laugh

I've seen the light. After journeying to hell and back, and mustering every last IQ point I have, I now understand where I went wrong. Forgive me, linear algebra. Take me back into your embrace.

If you're that brainlet who was struggling with LA, check this playlist out for a nice intuition-based introduction to LA. It's pretty good and not too long either.

youtube.com/watch?v=kjBOesZCoqc&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab

Are these the most challenging maths textbooks of all time?

Are those TAOCPs even worth the time?

Cayley-Hamilton.

Why are physishits so retarded?

>solving other people's homework for free
Fag.

lel gb2

>Fag.
Why the homophobia?

No, IUT is

Good lord it's actually real

>homophobia
Fag.

For small real numbers [math]a[/math] and [math]b[/math],
[math]\frac{1}{1-ba} = 1 + b(1 + ab + (ab)^2 + \dots)a = 1 + bma[/math]
where [math]m = (1-ab)^{-1}[/math]. Be inspired therefrom.

I'm not your guy, buddy.

Everyone else who answered this is educated stupid; I'll tell you the real reason, because this is a good question that shouldn't be left to undergrads who just memorized this 2 years ago or whatever. Linear transformations are defined this way because it is just the precise way of saying "Knowing how the basis vectors transform tells you how ALL vectors in your space transform."
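That slogan can be made concrete (Python sketch; the images of the basis vectors below are made up for illustration): fix what T does to e1 and e2, and linearity determines T on every vector.

```python
# Suppose T is only specified on the standard basis of R^2:
Te1, Te2 = [2, 0], [1, 3]   # hypothetical images of e1 and e2

def T(v):
    # v = v[0]*e1 + v[1]*e2, so linearity forces:
    return [v[0] * Te1[k] + v[1] * Te2[k] for k in range(2)]

assert T([4, 5]) == [4*2 + 5*1, 4*0 + 5*3]   # == [13, 15]
assert T([1, 0]) == Te1 and T([0, 1]) == Te2
```

Note that the images Te1, Te2 are exactly the columns of the matrix of T.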

>>thinks you can describe translations by a linear map.
you can tho

[eqn]\begin{bmatrix}a & b & r \\ c & d & s \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix}x \\ y \\ 1\end{bmatrix} = \begin{bmatrix} ax+by+r \\ cx+dy+s \\ 1\end{bmatrix} [/eqn]

not that other guy but you can represent translations with matrix multiplication. this is called homogeneous coordinates, where you view a translation of the plane as a linear map of 3-space restricted to the plane z=1.
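The matrix above can be checked mechanically (Python sketch; the translation amounts (5, -2) are arbitrary):

```python
# Affine map in homogeneous coordinates: identity linear part plus
# translation by (r, s) = (5, -2); points of the plane are written [x, y, 1].
H = [[1, 0, 5],
     [0, 1, -2],
     [0, 0, 1]]

def apply(H, p):
    return [sum(H[i][j] * p[j] for j in range(3)) for i in range(3)]

assert apply(H, [3, 4, 1]) == [8, 2, 1]   # (3,4) translated to (8,2)
# the last coordinate stays 1, so the map restricts to the plane z=1
```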

[math](1-BA)B = B(1-AB)[/math], hence [math](1-BA)B(1-AB)^{-1} = B[/math] and [math](1-BA)B(1-AB)^{-1}A = BA[/math].
Finally, [math]1-BA + (1-BA)B(1-AB)^{-1}A = (1-BA)(1+B(1-AB)^{-1}A) = 1[/math].
It's easy to check that [math]1+B(1-AB)^{-1}A[/math] is also a left inverse.
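The formula can also be sanity-checked numerically (Python sketch with exact rational arithmetic; the particular 2x2 matrices A, B are arbitrary choices for which 1-AB happens to be invertible):

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def inv2(X):
    # explicit 2x2 inverse via the adjugate
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det, X[0][0] / det]]

I = [[F(1), F(0)], [F(0), F(1)]]
A = [[F(1), F(2)], [F(3), F(4)]]
B = [[F(0), F(1)], [F(1), F(1)]]

M = inv2(sub(I, matmul(A, B)))          # M = (1 - AB)^{-1}
C = add(I, matmul(B, matmul(M, A)))     # claimed inverse 1 + B M A

assert matmul(C, sub(I, matmul(B, A))) == I   # left inverse
assert matmul(sub(I, matmul(B, A)), C) == I   # right inverse
```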

So that it is a homomorphism for the operations of vector spaces.
Generally a homomorphism is a map f: A --> B between algebraic structures A and B for which
computing in A and then sending to B
is the same as
sending to B and computing in B.

Why study homomorphisms?
Because with them you can study part of B from the point of view of A, and vice versa.
How?
Partition A into cells, where each cell consists of the elements that are sent to the same element of B, i.e. a1, a2 are in the same cell whenever f(a1)=f(a2).
It is possible to define "new" operations on these cells: (cell where a1 is) * (cell where a2 is) = (cell where a1*a2 is), and this operation is independent of which elements of the cells you picked.
The cells together with these operations form a structure which is the same (except for names) as the image f(A) of A under f.
This is called "First Isomorphism theorem".
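A tiny concrete instance (Python sketch): take f: Z --> Z/6, f(a) = a mod 6. The cells are the residue classes mod 6, and the induced operation on cells is well-defined, i.e. independent of the chosen representatives.

```python
n = 6
f = lambda a: a % n   # homomorphism from (Z, +) onto (Z/6, +)

# a1, a2 lie in one cell, b1, b2 in another (same f-values)
a1, a2 = 1, 7
b1, b2 = 4, 10
assert f(a1) == f(a2) and f(b1) == f(b2)

# the induced operation (cell of a) + (cell of b) = (cell of a+b)
# gives the same answer no matter which representatives we add:
assert f(a1 + b1) == f(a2 + b2)
```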

Does anyone have the wolframalpha android app?
Is there a point buying it or is it the same as using the browser version?

Still not technically a linear map, since you always need the last component to be 1. They're elements of the projective general linear group on [math]\mathbb{R}^n[/math], which is the quotient group of [math]GL(\mathbb{R}^{n+1})[/math] by scalar multiplication (a subgroup isomorphic to [math]\mathbb{R}^*[/math]).

Of course it is a linear map, but it is not a translation in all of R^3. Still, it restricts to a translation on the plane {z=1}

In the same vein, but easier: Let A and B be square matrices such that [math]A+B=AB[/math]. Prove that A and B commute

Oh...you mean a B that works for all A.

If you set A = I then you get B = I so obviously that won't work.

it's asking you to prove that an n-th root of a matrix exists if the matrix is Hermitian and positive semidefinite.
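For the square-root case the spectral-theorem recipe is: diagonalize, take roots of the (nonnegative) eigenvalues, conjugate back. A hand-worked 2x2 sketch in Python; the matrix [[2,1],[1,2]] is a made-up example whose eigenpairs (1, (1,-1)/sqrt 2) and (3, (1,1)/sqrt 2) were computed by hand:

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

s = 1 / math.sqrt(2)
P = [[s, s],
     [-s, s]]                      # columns = orthonormal eigenvectors
D_root = [[1.0, 0.0],
          [0.0, math.sqrt(3)]]     # square roots of the eigenvalues

B = matmul(matmul(P, D_root), transpose(P))   # B = P sqrt(D) P^T
A = matmul(B, B)

# B*B recovers the original matrix up to floating-point error
assert all(abs(A[i][j] - [[2, 1], [1, 2]][i][j]) < 1e-9
           for i in range(2) for j in range(2))
```

The same construction with n-th roots of the eigenvalues gives the n-th root in general.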

This is a REALLY simple problem. It's there to test if you know the real spectral theorem.
Your class is retarded / your professor did a terrible job at teaching.

no point, he has enough shekels already

Linear maps are defined on vector spaces, which {z=1} is not. It's the restriction of a linear map to an affine subspace.

>No, IUT is
IUT is a series of papers, not a textbook.

>Linear transformations are defined this way because it is just the precise way of saying "Knowing how the basis vectors transform tells you how ALL vectors in your space transform."
But that's not true at all, linear transformations are still defined that way even for vector spaces that don't have a basis.

meant to reply to

>vector spaces that don't have a basis
I sense a rain of pro-AC posts incoming.

>vector spaces that don't have a basis
In non-retarded circles "vector space" means "free module over a field". Perhaps you meant to say that they were defined the same way for all homomorphisms of modules?

>vector spaces that don't have a basis
autism or ignorance

he means that defining linear transformations in infinite dimension is independent of AC, clearly

>he
I'm not a "he".

>autism or ignorance
Speak for yourself.

shut the fuck up, faggot

>In non-retarded circles "vector space" means "free module over a field".
You meant "module over a field".

>independent of AC
Yes, "every module over a field is free" is independent of AC. Your point?
No, I meant "free module over a field".

>No, I meant "free module over a field".
Then your statement is not true.

Find one (1) source that defines a vector space as such.

>faggot
Why the homophobia?

>Find one (1) source that defines a vector spaces as such.
The book I'm currently writing.

I'm not a "homophobe".

Because faggots are not human.

But with AC comes a basis, and so that wording is there to tell you it does not require choice to be axiomatically true. Your interpretation of that post is incorrect.

What are your preferred axioms?

Axiom of Equality: every human is to be treated the same way.
Axiom of Infinity: there is an infinite amount of genders.
Axiom of Racial Purity: only white people (at least 57%) are to be considered human.

Ps. I'm not a "you". Please refer to me as "thou" from now on.

The second two can be derived from the axiom of faggotry

Why the homophobia? You should try homotopia instead. Just imagine you were sitting on the lap of some nice guy explaining him Quillen's model categories work, and he would then reward you with an intense kiss. So much more fun!

>i know i'm retarded but i can't for the life of me get this same answer for the lcm
What do you get?

oh fuck me i'm so stupid i realized what i've done.
i blame the calculator interface it confused me. nevermind i'll delete my posts now

I don't understand. What's the point of studying lcms in the real numbers? The real numbers have no non-trivial divisibility structure. Everything divides everything. What the fuck?