What exactly is a matrix? To me, it just seems like glorified sudoku.

The easiest way to explain a matrix is that it's just a convenient way to organize numbers. But it turns out that when you organize numbers into matrices, they have interesting properties. That's really all there is to it.

The short version is that it represents a function, and therefore it represents a lot of stuff.

A representation of a tensor of rank 2.

/thread

I would think it's an easy way to organize vectors.

A linear transformation. Each column shows where the corresponding basis vector is transformed to.
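To make that concrete (a minimal 2x2 illustration of the same idea): if the transformation sends [math]e_1 = (1,0)[/math] to [math](a,c)[/math] and [math]e_2 = (0,1)[/math] to [math](b,d)[/math], its matrix is [eqn]A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},[/eqn] i.e. the images of the basis vectors sit in the columns.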

Thanks for the insight! How much stuff does it represent so that it's deemed a lot?

Tenders? L-like, chicken tenders?!

It's a noetherian representation of an abelian manifold in 2d transformations. Hard to understand if you don't know about spencer rings and frobenian theory.

Take your pedophile cartoons back to .

why are weebs always manchildren who can't grasp elementary concepts?

>it's a set of numbers ordered into the shape of a rectangle

>this is the consequence of the diluted mathematics education catered to engineers

A linear map.

If [math]A[/math] is a matrix, the map [math]X \mapsto AX[/math] is linear.
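A quick sanity check (just an illustrative 2x2 example, nothing special about these numbers): with [eqn]A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \qquad X = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix},[/eqn] we get [math]AX = (x_1 + 2x_2,\; 3x_1 + 4x_2)^T[/math], and you can verify directly that [math]A(X+Y) = AX + AY[/math] and [math]A(\lambda X) = \lambda (AX)[/math].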

I'm studying engineering and I had enhanced maths and physics classes.

Use the Stone-Weierstrass theorem and approximate [math]f[/math] with a polynomial [math]P_m[/math], where [math]P_m \rightarrow f[/math] as [math]m \rightarrow \infty[/math]. Pick [math]N = \max_{x \in I}\{n_x\}[/math] (which exists since [math]n: I \rightarrow \mathbb{N}[/math] is bounded and [math]I[/math] is compact); then [math]f^{(N)}[/math] vanishes at all points, meaning that for all [math]n > N[/math] the coefficients [math]a_n[/math] of the polynomial [math]P_m[/math] vanish for all [math]m[/math]. Thus taking the limit gives that [math]f[/math] is a polynomial.

Nice broken English btw.

I do not really understand why [math]n : x \mapsto n_x[/math] is bounded.

Seriously? Because [math]n_x \in \mathbb{N}[/math] for all [math]x \in I[/math].

Bounded means there is a [math]K[/math] such that for all [math]x[/math], [math]n_x < K[/math], no?

No. It means that there's a compact interval that strictly contains range.

There's also a clever solution by applying the Baire Category Theorem twice.

Yeah, that was the thing I was thinking of.

Contains the* range.
Elaborate.

I wrote a nice proof for it last semester, but I can't seem to find it or remember it. It's Problem 10.28 in Royden-Fitzpatrick (4th Edition).

>calling your own proof nice
Cringed so hard.
>can't remember
Typical.

It's worded a bit differently, but the problem is pretty much the same.

It was the intended solution as proposed by the book. There was a hint to apply the BCT twice. I just can't remember how. Just give me a bit of time.

Hint:

Let [math]C_n = \{x : f^{(n)}(x) = 0\}[/math].

The union of the [math]C_n[/math] is [math]I[/math]; since a countable union of countable sets is countable and [math]I[/math] is uncountable, one of the [math]C_n[/math] must be uncountable.

You can pretty much avoid dealing with countability if you just apply the BCT, if I recall the solution from Royden correctly.

It's a shorthand for doing arithmetic on many numbers at once, and it preserves the structure of your data sets.

I don't think this is a sensible way of looking at it, as it does nothing to motivate any matrix operations, such as multiplication for instance. The only way to be comfortable with matrices is to view them as representing linear functions with respect to some basis.

> linear functions
> bases
Not a helpful explanation for baby's first matrix algebra, which is often taught in middle and high school

A vector can be thought of as a collection of variables [math]x_1, x_2, x_3, \ldots, x_n[/math]. We can write a linear combination of the components of the vector as [math]y = a_1x_1 + a_2x_2 + a_3x_3 + \ldots + a_nx_n[/math]. But let's say that we want to write a vector whose components are linear combinations of the original [math]x[/math]. Then we have to write [math]y_1 = a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \ldots + a_{1n}x_n[/math], [math]y_2 = a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \ldots + a_{2n}x_n[/math], all the way up to [math]y_m[/math]. This is what we call a linear transformation of vectors, which is represented by a second-order mixed tensor or a matrix whose elements are the coefficients of the transformation.
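Written compactly (the same coefficients as above, just stacked into a rectangular array), the system becomes [eqn]\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix},[/eqn] which is the matrix equation [math]y = Ax[/math].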

Is this clear enough?

I agree with you; otherwise most people are just memorizing some made-up algorithm.

Once you realize you can figure out the entries of a matrix by seeing simple things, such as how it affects the basis vectors, then linearity (literally the most important concept in _linear_ algebra) saves the day.

A simple example is rotation. A 90-degree rotation is a simple choice, but the rest are easy to figure out with trig. The vector (1,0) gets mapped to (0,1) and the vector (0,1) gets mapped to (-1,0). Simple.
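Spelling that out (the standard rotation matrix, just restating the example above): rotation by an angle [math]\theta[/math] is [eqn]R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},[/eqn] and [math]\theta = 90^\circ[/math] gives columns [math](0,1)[/math] and [math](-1,0)[/math], which are exactly where [math](1,0)[/math] and [math](0,1)[/math] land.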

It makes sense to think of matrices for geometric transformations as simply the analytic manifestation of how you could make a computer carry them out.

The original explanation wasn't even an attempt at an explanation, however. That also does the middle schooler no good.

>A vector can be thought of as a collection of variables

That is because vectors can be and commonly are used to model a collection of variables, but that is not at all a defining feature of vectors.

Of course. I was tailoring the explanation to a middle- or high-school student. An axiomatic explanation of vector spaces would be too difficult and out of scope.

Since [math]f[/math] belongs to [math]C^\infty[/math], it has a Taylor-series representation [math]\sum_i a_i x^i[/math]. Differentiating [math]n[/math] times (the [math]n[/math] being the index of [math]C_n[/math]) gives you another Taylor series whose zero set contains [math]C_n[/math]. But [math]C_n[/math] is uncountable. So the Taylor series, the [math]n[/math]th derivative of [math]f[/math], must be identically zero. Therefore, the original function is a polynomial.

If this is what you were going for, not all smooth functions are analytic. If you were going for something else, can I have another hint?

>Since [math]f[/math] belongs to [math]C^\infty[/math], it has a Taylor-series representation [math]\sum_i a_i x^i[/math].
No it doesn't. [math]f \in C^{\infty}[/math] does not imply [math]f[/math] is analytic. This is one of the distinctions between real and complex analysis.
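The standard counterexample is [eqn]f(x) = \begin{cases} e^{-1/x^2} & x \neq 0 \\ 0 & x = 0, \end{cases}[/eqn] which is [math]C^\infty[/math] with every derivative vanishing at [math]0[/math], so its Taylor series at [math]0[/math] is identically zero and fails to represent [math]f[/math] on any neighborhood of [math]0[/math].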

That's what I said. Read to the end. I was asking if this was what he was going for, and said why it wouldn't work.
>If this is what you were going for, not all smooth functions are analytic. If you were going for something else, can I have another hint?

>Read to the end
Lmao did you seriously expect me to actually read posts and carefully construct arguments in order to have a meaningful discussion instead of having a knee jerk response that has little content?

You're right. This is Veeky Forums. We should be talking about going to Mars or sth.

I think I remember the proof.

Let [math]X[/math] be the set of points of [math]I[/math] that have no open neighborhood on which [math]f[/math] coincides with a polynomial. [math]X[/math] is closed, hence a complete metric subspace of [math]I[/math], and it is perfect. Let [math]F_n[/math] be the set of all points [math]x \in X[/math] for which [math]\frac{d^nf}{dx^n}(x) = 0[/math]. Each [math]F_n[/math] is closed, and the union of all [math]F_n[/math] is [math]X[/math] itself. If [math]X[/math] is nonempty, by the Baire Category Theorem there exists an [math]F_N[/math] that has a nonempty interior in [math]X[/math]. Let [math]A = (a, b)[/math] be an interval for which [math]\emptyset \neq A \cap X \subseteq F_N[/math]. Each point of [math]A \cap X[/math] is an accumulation point of [math]X[/math], so difference quotients along [math]X[/math] give [math]A \cap X \subseteq F_n[/math] for all [math]n \geq N[/math]. [math]X[/math] obviously cannot completely cover [math]A[/math]. Let [math]C = (c, d)[/math] be a maximal interval subset of [math]A \cap X^c[/math] so that [math]c[/math] or [math]d[/math] is in [math]X[/math]. Wlog let this be [math]c[/math]. On [math]C \subseteq X^c[/math], [math]f[/math] coincides with some polynomial of degree [math]n[/math]. If [math]n \geq N[/math], then [math]\frac{d^nf}{dx^n}(c) \neq 0[/math] by continuity, contradicting [math]c \in F_n[/math]; so [math]n < N[/math], and [math]\frac{d^Nf}{dx^N}[/math] vanishes on [math]C[/math]. We can conclude that [math]\frac{d^Nf}{dx^N}[/math] vanishes on all of [math]A[/math], so [math]f[/math] is a polynomial on [math]A[/math], contradicting [math]A \cap X \neq \emptyset[/math]. Hence [math]X[/math] is in fact empty.

The reason I brought this proof up was that [math]I[/math] doesn't have to be compact. The proof is still valid on unbounded intervals, so it is more general than the compactness argument above.

Can someone check if this is right?

I can show you in some private math lessons, cutie.

...