How linear is linear algebra? Where is curvy algebra?
>Simplification ahead
Linear algebra basically deals with planes, lines, and spaces.
Matrices form vector spaces. Everything is Euclidean geometry.
Euclidean basically means that the angles of a triangle add up to 180° and that, given a line and a point not on it, only one parallel line can go through the point.
it's 'non-linear' not 'curvy'
linear algebra is rooted in the study of systems of linear equations, i.e. every variable appears with exponent 1 and variables are never multiplied together
x+y+z=1
x+y=1 etc.
non-linear is stuff like
x^2=1
x^2+y=1
[math] (2 \cdot X) + (2 \cdot Y) = 2 \cdot (X + Y) [/math]
linear
[math] (2 \cdot X)^2 + (2 \cdot Y)^3 = 4X^2 + 8Y^3 \neq 2 \cdot (X^2 + Y^3) [/math]
non-linear
>Linear Algebra
Studying solutions to linear equations
>Algebraic Geometry
Studying solutions to nonlinear equations.
> implying the polynomial ring isn't a vector space
where did i imply that?
because rings cuuuurve
vector spaces are modules over rings though
This
idk
>curvy algebra
Who high test curvy algebra here?
What about nonalgebraic equations?
C U R V A T U R E
pde is called pde.
Do you mean differential equations or transcendental equations?
ever wondered why we have inter-universal Teichmüller theory?
This is why nobody takes us seriously.
Who is 'us'?
bruh that sounds easy
Use the projection operator for faggotry on OP's post vector.
It is; it's literally throwing [math]y = mx + b[/math] into a bunch of boxes and drawing their lines.
>implying
YOU'RE A FUCKING RETARDED MONKEY FLAILING AROUND WHO CLEARLY DOESN'T KNOW SHIT
YOU BETTER NOT FUCKING THINK OF MAKING AN ACADEMIC CAREER BECAUSE I'M GOING TO FUCKING TRACK YOU DOWN AND MAKE YOUR LIFE HELL YOU FUCKING INBRED BABOON
So linear algebra is pretty much just high school algebra, circa Intro to Graphing?
Is it like a whole term of going over y=mx+b?
>non-linear is stuff like
>x^2=1
No
You learn the algebra of multidimensional linear systems, which is useful in many different areas. Most of it is matrix operations, eigenvectors, and their uses. You can do some pretty cool stuff with matrices, like solving ODEs and writing an explicit formula for the Fibonacci sequence. It happens that all the actual calculations you do are simple arithmetic; it is the concepts that are difficult.
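Here's a rough sketch of the Fibonacci trick, if anyone wants to play with it (the variable names and the numeric check are mine): powers of [math]Q = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}[/math] contain consecutive Fibonacci numbers, and diagonalizing [math]Q[/math] gives Binet's explicit formula.
[code]
import numpy as np

# Q^k = [[F(k+1), F(k)], [F(k), F(k-1)]], so matrix powers generate the sequence.
Q = np.array([[1, 1], [1, 0]])

def fib_matrix(k):
    """F(k) read off from the off-diagonal entry of Q^k."""
    return np.linalg.matrix_power(Q, k)[0, 1]

# Diagonalizing Q gives the eigenvalues phi and psi, hence Binet's formula:
# F(k) = (phi^k - psi^k) / sqrt(5)
phi, psi = (1 + 5**0.5) / 2, (1 - 5**0.5) / 2

def fib_binet(k):
    return round((phi**k - psi**k) / 5**0.5)

print([fib_matrix(k) for k in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print([fib_binet(k) for k in range(10)])   # same list
[/code]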
>y = mx+b in higher dimension
wowowowowow
Not exactly. Linearity is an abstract concept that boils down to preservation of addition and scalar multiplication under a mapping. Basically, it boils down to abstract vector spaces and linear mappings, which are just abstract mathematical objects. Euclidean rotations, reflections, and magnifications are one example (translations, note, are affine rather than linear). The derivative and integral operators are linear maps, and spaces of polynomials or of continuous functions are abstract vector spaces they act on. You're studying a certain kind of mathematical property called "linearity". In the end, you can model anything finite-dimensional and linear using real n-tuples and matrices, but the objects you are studying are much more general than "y = mx+b in higher dimensions".
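If the derivative example sounds too abstract, it's a one-liner to check numerically (a sketch using numpy's polynomial class; the particular p, q, c are my own picks):
[code]
from numpy.polynomial import Polynomial as P

# Linearity of the derivative: d/dx (c*p + q) == c*p' + q' for polynomials p, q.
p, q, c = P([1, 2, 3]), P([0, -1, 0, 4]), 2.5
print((c * p + q).deriv() == c * p.deriv() + q.deriv())  # True
[/code]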
Fun fact: the discrete Fourier transform is a system of linear equations.
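Quick numeric check of that fun fact (a sketch; the size and the input vector are arbitrary):
[code]
import numpy as np

# The DFT is literally a matrix: y = W x with W[j, k] = exp(-2*pi*i*j*k / N).
N = 8
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
W = np.exp(-2j * np.pi * j * k / N)       # the N x N DFT matrix

x = np.random.default_rng(1).standard_normal(N)
print(np.allclose(W @ x, np.fft.fft(x)))  # True: the FFT just evaluates W x fast
[/code]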
>given a line and a point not on it, only one parallel line can go through the point
what?
I had a fun problem the other day on linear algebra. Let [math]A \in \operatorname{GL}_n (\mathbb C)[/math] be an invertible matrix with eigenvalues [math]\lambda_1, \dotsc, \lambda_n[/math]. Let [math]V[/math] be the vector space of all [math]n \times n[/math]-matrices over [math]\mathbb C[/math]. What are the eigenvalues of the linear operator [math]M \mapsto A^{-1} M A[/math] on [math]V[/math]?
are they all 1?
No.
>The derivative operator, integral operator, polynomials, continuous functions spaces, and so on are also abstract vector spaces that are linear.
Not that user?
How is a vector space related to an operator exactly? What determines that relationship?
In my freshman understanding, an operator is something you do to an equation.
On the basis {1, x, x^2}, represent the derivative operator as a matrix, cube it, and check the result. Then, on another part of the paper, compute the 3rd-order derivative of a function of the form ax^2 + bx + c.
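If anyone wants to verify that exercise numerically rather than by hand, here's a sketch (coordinates taken with respect to {1, x, x^2}, so a + bx + cx^2 becomes the vector (a, b, c)):
[code]
import numpy as np

# Columns are the images of the basis vectors: D(1) = 0, D(x) = 1, D(x^2) = 2x.
D = np.array([[0, 1, 0],
              [0, 0, 2],
              [0, 0, 0]])

# Cubing the matrix gives the zero matrix, matching d^3/dx^3 (ax^2 + bx + c) = 0.
print(np.linalg.matrix_power(D, 3))
[/code]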
This seems like as good a place as any to ask.
How well understood are infinite-dimensional vector spaces?
An operator is a function between vector spaces, that's it. Usually they are linear operators.
>How is a vector space related to an operator exactly?
If you're an algebraist, the only proper way to study objects is to study the transformations between them which keep the algebra intact. In fact, there is a theorem which says that knowing all of the ways to transform an object in this way tells you absolutely everything about the object. In the particular case of vector spaces, the algebraic structure is linearity. If I have two points in my vector space, I have the line between them. The transformations of vector spaces are what we call "linear transformations" (the term "linear operator" is reserved for a transformation from a space to itself). A linear transformation [math]T[/math] keeps linearity intact by requiring that [math]T(cv + w) = cT(v) + T(w)[/math] for vectors [math]v,w[/math] and a scalar [math]c[/math]. Linear transformations between finite-dimensional spaces are encoded by matrices once you choose bases. This is really the only way to understand matrices; for instance, matrix multiplication encodes composition of linear transformations.
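To see "matrix multiplication encodes composition" concretely, here's a minimal sketch with plane rotations (the angles are picked arbitrarily):
[code]
import numpy as np

def rot(theta):
    """Matrix of the linear map 'rotate the plane by theta'."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

a, b = 0.3, 0.5
v = np.array([1.0, 0.0])

# Applying rot(b), then rot(a), is the same as applying the single product matrix:
print(np.allclose(rot(a) @ (rot(b) @ v), (rot(a) @ rot(b)) @ v))  # True
print(np.allclose(rot(a) @ rot(b), rot(a + b)))                   # True: angles add
[/code]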
Banach spaces are well understood
given an [math] \mathbf{A} \in \text{GL}_n(\mathbb{C})[/math], the vector space [math] V [/math] of [math] n \times n [/math] matrices over [math] \mathbb{C} [/math], and a linear map [math] F:V\to V~,~ F\mathbf{M = A^{-1}MA} [/math]
first, look at the determinant of [math] F\mathbf{M} [/math]:
[math] F\mathbf{M = A^{-1}MA} \implies \det(F\mathbf{M})=\det(\mathbf{A^{-1}MA}) [/math]
[math] \implies \det(F)\det(\mathbf{M}) = \det(\mathbf{M}) \implies \det(F) = 1 [/math]
now, we're looking for eigenvalues of F, which in this case are complex numbers [math] z [/math] such that [math] F\mathbf{M} = z\mathbf{M} [/math]
take the determinant of both sides: [math] \det(F\mathbf{M}) = \det(z\mathbf{M}) [/math]
recall that [math] \det(F\mathbf{M}) = \det(\mathbf{M}) [/math]
so we have [math] \det(\mathbf{M}) = \det(z\mathbf{M}) [/math]
by a determinant property, [math] \det(z\mathbf{M}) = z^n\det(\mathbf{M}) [/math]
[math] \det(\mathbf{M}) = z^n\det(\mathbf{M}) \implies z^n = 1 [/math]
so the solutions to this, and thus the eigenvalues of F, are the n-th roots of unity
is that correct?
> The transformations of vector spaces are what we call "linear transformations" (linear operators is reserved for a transformation from a space to itself)
can you explain this subtle distinction?
No, it isn't. Note that [math]F[/math] is an [math]n^2 \times n^2[/math]-matrix which does not act on [math]M[/math] by multiplication like that. The difficulty in this question lies in this abstraction -- we are thinking of the eigenvalues of a linear operator which is itself operating on linear operators! So while we are in fact applying [math]F[/math] to [math]M[/math] in some sense, they are sort of on different levels. Matrix multiplication denotes composition, which doesn't make sense here.
A good first step is showing that it suffices to consider [math]A[/math] in Jordan form (this is always a good first step). Then find a nice basis for [math]V[/math] so that the matrix of [math]F[/math] becomes upper triangular.
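Not the same anon, but for anyone who wants to sanity-check their answer numerically: by the standard identity [math]\operatorname{vec}(A^{-1}MA) = (A^T \otimes A^{-1})\operatorname{vec}(M)[/math], the [math]n^2 \times n^2[/math] matrix of [math]F[/math] is a Kronecker product, so its eigenvalues are the pairwise products of the eigenvalues of [math]A^T[/math] and of [math]A^{-1}[/math]. A rough sketch (random [math]A[/math] chosen purely for illustration):
[code]
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))      # generically invertible

# Matrix of F : M -> A^{-1} M A acting on vec(M), via the Kronecker identity.
F = np.kron(A.T, np.linalg.inv(A))
eig_F = np.linalg.eigvals(F)

# Candidate spectrum: all ratios lambda_i / lambda_j of eigenvalues of A.
lam = np.linalg.eigvals(A)
ratios = (lam[:, None] / lam[None, :]).ravel()

# Every ratio appears among the eigenvalues of F (multiset match, checked loosely).
print(all(np.abs(eig_F - r).min() < 1e-8 for r in ratios))  # True
[/code]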
>Studying solutions to nonlinear equations.
Wrong, retard. Algebraic geometry only studies solutions to polynomial equations.
It's not really subtle. A linear transformation is just any map [math]T: V \to W[/math] which satisfies whatever condition I said before. If [math]W=V[/math], we use a different word sometimes. This is because the theory is a little richer. We can talk about eigenvalues, powers of matrices, etc. The punchline is that linear operators are square matrices.
I like how this thread went from "wow, this is easy" to "WTF is going on here" in less than 50 posts. I actually had to check up on some of the stuff, since some of the terminology and symbols are different from what I learned.
A nonlinear equation is any polynomial equation which is not linear.
Let [math]V[/math] be a vector space over a field [math]\mathbb{K}[/math] (say [math]\mathbb{R}[/math] or [math]\mathbb{C}[/math], since we want norms). A linear map [math]\mathcal{L}[/math] is an operator on [math]V[/math] if it is an endomorphism of the space; that is, if [math]\mathcal{L} \colon V \to V[/math]. Note that, if we equip the set of all such bounded maps (denote it by [math]X[/math]) with the operator norm [math]||\:\cdot\,||_{\mathrm{op}}[/math], and [math]V[/math] itself is complete, then we have a Banach space of linear maps on [math]V[/math], [math]B(V,V) := (X,||\:\cdot\,||_{\mathrm{op}})[/math].
Linear maps are also readily generalized beyond endomorphisms. Let [math]W[/math] be another vector space (also over [math]\mathbb{K}[/math]) and consider the set of all linear maps [math]T \colon V \to W[/math]. If we equip [math]V[/math] and [math]W[/math] with norms (either given directly or induced by e.g. an inner product) we can restrict our study to the bounded (equivalently, continuous) linear maps, i.e. all [math]T[/math] which satisfy
[eqn]||Tv||_{W} \leq M||v||_{V} \quad \forall v \in V,[/eqn] for some [math]M \in \mathbb{R}^+[/math]. Thus we see how we can employ the notion of a linear mapping across different normed spaces. Boundedness is important for making the set of all such maps, equipped with the operator norm [math]||T||_{\mathrm{op}} := \inf \{ M \geq 0 : ||Tv||_W \leq M||v||_V \ \forall v \in V \}[/math], into a Banach space as well, [math]A(V,W)[/math]. (For this we actually only need completeness of [math]W[/math] with respect to its norm.) This is particularly useful for showing that [math]V \cong W[/math]: if some [math]T \in A(V,W)[/math] is bijective and its inverse [math]T^{-1}[/math] is also continuous, then [math]T[/math] is an isomorphism between [math]V[/math] and [math]W[/math].
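In finite dimensions all of this is very concrete: with Euclidean norms, the operator norm of a matrix is its largest singular value. A quick sketch (random matrix and sampled vectors, purely illustrative):
[code]
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((4, 3))           # a bounded linear map R^3 -> R^4

# The smallest valid M in ||Tv|| <= M ||v|| is the spectral norm of T.
op_norm = np.linalg.norm(T, 2)            # largest singular value

# Sanity-check the bound on a batch of random vectors:
V = rng.standard_normal((3, 1000))
ratios = np.linalg.norm(T @ V, axis=0) / np.linalg.norm(V, axis=0)
print(ratios.max() <= op_norm + 1e-12)    # True (and the sup is attained)
[/code]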
post sauce
That's a pretty great summary, actually. Nice.
Take your "freshman explanation" and pedophile comics back to
Homotopical algebra
bump
...
anyone know some lecture videos on linear algebra that aren't for the plug-and-chug majors, i.e. more proofs and fewer calculations?
The Harvard Intro Abstract Algebra lectures. There is a section on linear algebra in like the middle of the course.
Actually, it's the first lecture.
>Spherical geometry
>Line is the intersection of plane with sphere
That implies that a line is 2 dimensional.
What?
>1 lecture of linear algebra review
Do you have something that kind of goes into more depth on linear algebra as a course?
Ok now this is uncontrolled shit posting.
No it isn't.
There are like 4-5 lectures in the middle of the course discussing vector spaces (over general fields), quotient spaces, linear maps, bases, eigenvectors, etc.
>sin and log are polynomials
Nonlinear equation generally means a polynomial equation, in the same way that linear equation generally means a polynomial equation of degree one.
You can have equations that are linear or nonlinear in sin, log, exp, etc.; those are called transcendental equations.
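For a concrete transcendental example (my own pick): [math]x = \cos x[/math] has no closed-form solution, but it's easy to solve numerically.
[code]
import math

# Fixed-point iteration for x = cos(x); converges to the Dottie number.
x = 1.0
for _ in range(100):
    x = math.cos(x)
print(x)  # ~0.7390851332151607
[/code]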
no it isn't, think about cutting a slice out of an orange. You just intersected a plane with a sphere, which produced a 1-dimensional path on the surface of the sphere (the place on the surface where you cut it).
Okay, I get it, but then that image is incorrect.
A line is the points of the intersection of the plane with the surface of the sphere.
A sphere intersected with a plane would produce a circle.
However I guess it makes sense if you think of the sphere as a hollow object.
We're looking at the geometry *on the surface* of the sphere. The sphere itself sits in ordinary Euclidean space, but a "straight line" on its surface (i.e. the shortest path between two points) is not a Euclidean straight line.
My bad, "shortest path between two points" is just a line segment. A path [math]\mathbb{R} \to M[/math] that is locally a shortest path (i.e. a geodesic) and can't be extended any further in either direction is a line.
Got that geometry book downloaded
The stupid question thread is basically dead so, I have a question
>how do I define a relation based on a set of nine intervals?
>how do I define a relation based on a set of nine intervals?
Fuck is that vague.
What kind of relation?
Just an arbitrary relation? If so to where? From some of the intervals to others?
Or do you mean a binary function such that your set of intervals is closed under it?
Also, a set whose elements are the nine intervals or a set that is the union of the intervals?
If you just want to define some relation, then let A be the union of your intervals and let F be the relation defined by:
a in A implies that (a,a) is in F (this makes F the diagonal relation on A).
Sorry, here's the question. The intervals are stuff like [0,6).
>(b) You have 9 sets now. Define a relation ρ to be “is a subset of” ⊆, on the set consisting of all 9 intervals you get. Write down the ordered pairs of this relation and draw the Hasse diagram of this partial ordering
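Your nine intervals aren't listed, so here's a rough sketch with made-up half-open intervals (swap in the actual nine from your problem). The relation is just pairwise ⊆ tests; the Hasse diagram keeps only the covering pairs (drop reflexive pairs and anything implied by transitivity):
[code]
# Hypothetical intervals [a, b); replace with the nine from the problem.
intervals = {"A": (0, 6), "B": (0, 3), "C": (3, 6), "D": (1, 2), "E": (4, 6)}

def subset(x, y):
    """[a1, b1) is a subset of [a2, b2) iff a2 <= a1 and b1 <= b2."""
    (a1, b1), (a2, b2) = x, y
    return a2 <= a1 and b1 <= b2

# All ordered pairs of rho = "is a subset of" (reflexive pairs included).
rho = [(p, q) for p in intervals for q in intervals
       if subset(intervals[p], intervals[q])]
print(rho)

# Covering relations for the Hasse diagram: p strictly below q, nothing in between.
covers = [(p, q) for (p, q) in rho if p != q and not any(
    r not in (p, q) and (p, r) in rho and (r, q) in rho for r in intervals)]
print(covers)
[/code]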
any good sites or books explaining linear algebra?
>studying it and I'm in over my head
Getting deeply concerned that I just got an A- in linear algebra at a top-20 uni and can hardly understand a good number of terms thrown out in this thread. Was this because I took a summer course, or are there more advanced classes for linear algebra?
Dude, that's high school geometry. How can you have a problem with this concept?