Intuitive understanding of linear algebra

pic unrelated: what does Wildberger have in his hand here?

I know that linear algebra is a very important and well-developed theory, but I can't get the concepts to click on an intuitive level. Specifically, the axiomatic formulation of a vector space seems kind of arbitrary to me, and I don't understand why matrix-vector algebra has to be so qualitatively different from regular-old, single-dimensional algebra.

I am just now starting to feel comfortable with addition and multiplication as the natural/fundamental operations defined on the field of rational numbers (infinite sets & the reals are a meme). The way I see it, a number system is given by a particular choice of origin and unit. The addition operation arises from symmetry with respect to shifting the origin while multiplication corresponds to symmetry with respect to scaling the unit. This feels right I guess.

But this perspective breaks when I try to extend the analogy to a vector space over Q. I like to think of vectors as 'n-dimensional numbers', so it would be nice if the operations on regular numbers just extended naturally to vectors. This is in fact the case for vector addition, but as we all know, component-wise multiplication with vectors is usually nonsense ([v1,v2]*[u1,u2] = [v1*u1,v2*u2]) and nobody really talks about it. Instead, the vector space formulation includes scalar multiplication and the dot product is introduced as something different.

AFAIK, a vector space is closed under this component-wise multiplication, so why is it not considered important enough to be included in the axioms? It seems right that vector addition would correspond to symmetry w.r.t. the origin and vector multiplication to symmetry w.r.t. the basis (or multi-dimensional unit). Multiplication by a vector would be scaling all the basis vectors, a kind of 'multi-dimensional scaling'. What is wrong with this perspective, /sci/?
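For concreteness, here is what those operations look like in numpy (just a throwaway sketch, not taken from anywhere): the componentwise product is perfectly computable, it just isn't one of the vector space axioms.

[code]
import numpy as np

v = np.array([2, 3])   # treat these as vectors in Q^2
u = np.array([5, 7])

print(v + u)         # vector addition:         [ 7 10]
print(4 * v)         # scalar multiplication:   [ 8 12]
print(v * u)         # componentwise product:   [10 21]  (OP's [v1*u1, v2*u2])
print(np.dot(v, u))  # dot product:             31, a scalar rather than a vector
[/code]

The operation is well-defined on Q^2; the rest of the thread is about what structure it does or does not give you and why the axioms still leave it out.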


Your brain isn't wired for visualizing linear algebra intuitively. All you can hope for is eventually getting an intuition for the abstractions, but it's not like, say, classical mechanics, where the intuition is very visual.

It gets intuitive once you understand the fundamentals of algebra. Just give it time, user.

One thing I'd say is really non-intuitive is Fourier transformations

I find Fourier transformations incredibly intuitive, but I can barely solve a system of linear equations

Wot, okay

How do you find thermodynamics?

incomprehensible
ask me what the fuck entropy is and I'll parrot back some half-assed rambling about disorder

You are looking for an Algebraic structure which is not included in the definition of a Vector Space.

The most intuitive and complete algebraic construction of a vector space starts with a bit of group theory. If you are willing to wait an hour or so, I can type you up a "For Brainlets" image. I've made a few of these in the past and people liked them. I'll post them here for your interest.

Here's one on span I wrote for a user who was studying linear algebra a while back.

but fourier transforms are linear algebra

I would not say that Fourier transforms are fundamental to linear algebra though. They lie more at the intersection of algebra, analysis, and some topology.

what are you babbling about

...

this user is right

there are various structures which are roughly "a vector space with a multiplication" depending on how you want that multiplication to interact with the vector addition and scaling.

You might be looking for something like R-algebras.
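To make the R-algebra suggestion concrete, here is a rough numpy spot-check (my own sketch, nothing official): R^3 with the componentwise product is a vector space carrying a multiplication that interacts properly with addition and scaling, i.e. the product is bilinear.

[code]
import numpy as np

# R^3 with the componentwise ("Hadamard") product is a commutative algebra over R.
# Spot-check bilinearity on random vectors:
#   (a*u + b*v) had w  ==  a*(u had w) + b*(v had w)
rng = np.random.default_rng(0)
u, v, w = rng.random((3, 3))
a, b = 2.0, -5.0

left = (a * u + b * v) * w        # numpy's * is already componentwise
right = a * (u * w) + b * (v * w)
print(np.allclose(left, right))   # True: linear in the first slot (and by commutativity the second)
[/code]

That package, a vector space plus a bilinear multiplication on it, is what "algebra over a field" means.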

Also
really? vectors simply have "scaling" and "translation", both pretty intuitive ideas. Especially for fields of char 0.

ah, excuse me. you must be a shitty undergraduate to not already know that. Read a text on Functional Analysis to understand what I mean.

>intuitive
>geometric understanding of algebra

Then get Robert Valenza's book on linear algebra, and watch these videos along with the Wildberger vids: youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab

Firstly, Q is an infinite set, so I don't get what you gain by considering a VS over Q.

Secondly, you should not in general imagine a vector as an n-dim number. That simply does not work in almost all interesting VSs (C, L^p, etc.).

Thirdly, componentwise multiplication of 2 vectors is stupid. Consider the VS of polynomials of degree at most 2: how would you define such a thing there? Any definition you come up with depends on your choice of basis. (Or on a VS without a countable basis, even.)

The "scaling" you are talking about is multiplication by an element of the field.

That scalar multiplication is nothing special on its own; it has no interesting structure. A multiplication defined on the vectors themselves, though...

en.wikipedia.org/wiki/Algebra_over_a_field
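To illustrate the "Thirdly" point from a couple posts up: componentwise multiplication is not really an operation on the vector space at all, because the answer changes when you change basis. A rough sketch (the bases and the polynomial are my own picks, purely illustrative), using polynomials of degree at most 2 over Q:

[code]
from fractions import Fraction as F

def expand(coords, basis):
    """Expand coordinates in a basis; every polynomial is stored as [c0, c1, c2] meaning c0 + c1*x + c2*x^2."""
    out = [F(0), F(0), F(0)]
    for c, b in zip(coords, basis):
        for i in range(3):
            out[i] += c * b[i]
    return out

monomial_basis = [[F(1), F(0), F(0)], [F(0), F(1), F(0)], [F(0), F(0), F(1)]]  # {1, x, x^2}
shifted_basis  = [[F(1), F(0), F(0)], [F(1), F(1), F(0)], [F(0), F(0), F(1)]]  # {1, 1+x, x^2}

# The polynomial p(x) = x has coordinates (0, 1, 0) in the first basis and (-1, 1, 0) in the second.
p_mono, p_shift = [F(0), F(1), F(0)], [F(-1), F(1), F(0)]
assert expand(p_mono, monomial_basis) == expand(p_shift, shifted_basis)  # same polynomial

# "Componentwise square of p", computed in each basis:
sq_mono  = [c * c for c in p_mono]    # (0, 1, 0)
sq_shift = [c * c for c in p_shift]   # (1, 1, 0)
print(expand(sq_mono, monomial_basis))   # coefficients of the polynomial x
print(expand(sq_shift, shifted_basis))   # coefficients of the polynomial 2 + x  (not x!)
[/code]

Same space, same polynomial, but the "componentwise square" is x in one basis and 2 + x in the other, so the operation lives on the coordinates rather than on the vectors themselves.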

>Firstly, Q is an infinite set, so I don't get what you gain by considering a VS over Q.
what the fuck are you even saying brainlet

>That simply does not work...
he's obviously talking about finite dimensional vector spaces

>what the fuck are you even saying brainlet
If you don't want infinite sets in your theory then you can not use Q as it is an infinite set.
If you read the OP, he says he does not like infinite sets but still uses Q.
>he's obviously talking about finite dimensional vector spaces
Retard, learn a bit about LA before you talk. I even gave an example of a finite dimensional VS where this thinking is harmful.

Oh, interesting. Then we are poles apart, user... love me some thermodynamics

Don't even know what advice to give you. Maybe your learning skills are very specific? Like kinesthetic or something like that?

Vector spaces are much more than just scaling and adding vectors in 3 dimensions. That's a very particular case of a vector space that you can visualize.

>I like to think of vectors as 'n-dimensional numbers'
Why would you want to do that? What you are looking for is the definition of a field (or a field extension).

Vector spaces are something much more general than fields (or n-dimensional numbers) and this is a good thing, because they appear much more often than fields, you should not try to shoehorn your preconceived convictions into this.

The crucial part to mention here is that you can define a multiplication on a vector space over a field the way you said, but it then DOES NOT behave at all like a field. Most notably, not every nonzero vector has a multiplicative inverse: think about the vector (1,0), what would be its inverse? So the vector space Q^2 with this multiplication does not behave "like n-dimensional numbers" at all.
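To see that concretely, here is a throwaway sketch (my own, just spelling out the post above): with componentwise multiplication on Q^2 the identity would have to be (1, 1), and (1, 0) times anything has a 0 in the second slot, so nothing can ever multiply it back to the identity.

[code]
from fractions import Fraction as F

def hadamard(u, v):
    """Componentwise product on Q^2."""
    return (u[0] * v[0], u[1] * v[1])

one = (F(1), F(1))   # the only possible multiplicative identity for this product
v = (F(1), F(0))     # nonzero, yet hadamard(v, u) = (u[0], 0) can never equal (1, 1)

for u in [(F(1), F(1)), (F(1), F(2)), (F(5, 3), F(-7))]:
    print(u, "->", hadamard(v, u))   # the second component is always 0
[/code]

It gets worse: (1,0) and (0,1) are both nonzero but multiply to (0,0), so there are zero divisors. Compare that with C viewed as R^2, where the multiplication rule (a,b)(c,d) = (ac - bd, ad + bc) is a different operation entirely and does make every nonzero vector invertible; that is the "field extension" route mentioned above.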

Meant for

>tfw cs major
>took linear algebra because I want to get into ai
>just memorized computational processes and definitions
>didn't learn much
>forgot what I did learn already
Thinking about taking a linear algebra edX course and buying a more advanced book than Lay

that user never mentioned 3 dimensions. that user mentioned scaling and translating, which all vector spaces have

Yeah I guess. I just read "intuitive understanding of linear algebra" and replied that it's impossible to intuit abstract algebra (especially any arbitrary R^n space); I didn't bother reading his whole post lol

why don't you take Mr. Wildberger's LinAlg course then?
youtube.com/watch?v=yAb12PWrhV0&list=PL01A21B9E302D50C1

>impossible to intuit abstract algebra
>especially [math]\mathbb{R}^n[/math]
Not sure what you mean by this. Vector spaces are all representable in tuple notation, which is very easy to understand. Considering that tuples are just a representation of mappings from an index set (no restriction on cardinality), I think that makes vector spaces particularly simple.

>Not sure what you mean by this. Vector spaces are all representable in tuple notation
How do you represent L^p in tuple notation?
(Or any function space for that matter)

You cannot understand Fourier series if you don't know what a VS is; it is prerequisite knowledge.

Linear algebra is so easy I feel like I'm doing something wrong the whole time

This is a good question to ask. It is not immediately clear that tuple notation is equivalent to function notation.
see in my post, I say:
> Considering that tuples are just a representation of mappings from an index set (no restriction on cardinality)
Consider the following tuple of elements of a field [math]\mathbb{F}[/math]: [math]\{x_j\}_{j\in J}[/math].
(We consider a field since we are in the context of linear algebra; it does not really matter what kind of set [math]\mathbb{F}[/math] is.)
We call that a [math]Card(J)[/math]-tuple, since the size of the tuple is equal to [math]Card(J)[/math] (which should seem trivial).
Since the tuple is indexed by [math]J[/math], it is nothing more than a mapping from the index set into [math]\mathbb{F}[/math]:
[eqn]x: J \rightarrow \mathbb{F} \qquad ; \qquad x(j) = x_j[/eqn]
Thus we can see that all tuples of elements from a set are actually functions mapping from an indexing set to some other set.
This construction does not rely on the value of [math]Card(J)[/math], merely on the fact that [math]J[/math] is a set with some cardinality.
This means real functions of a real variable may be considered uncountable tuples, where the indexing set is their domain.

Just as a note: this approach is used in Topology by James Munkres in the chapter on function spaces.

I remember these, good shit. Can you post an album of them? It would be nice to have them on my comp. I understand span pretty well, but a quick reference is always nice to have.

Final remark: for clarity, a tuple is a set of ordered (index, value) pairs written out in index order.
e.g. the 3-tuple [math](a, b, c)[/math] may be indexed by the set [math]J=\{1, 2, 3\}[/math], writing [math]x[/math] for its entries, so that
[math](a ,b, c) = \{(1, a), (2, b), (3, c)\}=(x_1,x_2,x_3)=\{x_j\}_{j\in J}[/math]
The formal approach with functions above then yields the equivalence between functions and tuples.
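If the index-set picture feels too abstract, here is a throwaway Python sketch of the same idea (the names are mine): a tuple and a real function are both just rules assigning a value to each element of an index set; only the index set changes.

[code]
import math

# A 3-tuple is a function from the index set J = {0, 1, 2} into the field.
t = (2.5, -1.0, 7.0)

def x(j):
    """x(j) = x_j, exactly the map written above."""
    return t[j]

print([x(j) for j in range(3)])    # [2.5, -1.0, 7.0]

# Nothing in that construction cares how big J is.  A real function of a real
# variable is the same thing with the uncountable index set J = R:
def f(j):
    """An "uncountable tuple" indexed by its domain."""
    return math.sin(j)

print(f(0.0), f(math.pi / 2))      # its "components" at the indices 0 and pi/2
[/code]

That is also the sense in which L^p and other function spaces still fit the "coordinates indexed by a set" picture from earlier in the thread, even though the coordinates cannot be written out as a finite list.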

look the berger is holding a complex projective space?

>cs major
>didnt learn the material
color me surprised