Why are some vector operations so limiting in their definitions?

Is there actually any substantial difference between the following vectors:
(1,1), (1,1,0), (1,1,0,0), (1,1,0,0,0), ...

And why should vector addition or the dot product not be defined for vectors with different lengths? You can just append zeros for the missing coordinates and it would still make sense, provided the first few matching coordinates denote the same property.

Right?

Suppose the following vectors: (1,2) and (3,4,5,6). Normally you'd say: "You can't add them, because vector addition is only defined for vectors of the same length!". Well, you can always take the shorter vector and fill in zeros to get one that matches the length of the other, and then perform addition. And it doesn't really break any logic.

So (1,2) + (3,4,5,6) becomes (1,2,0,0) + (3,4,5,6) which is defined. This applies only when the first few matching coordinates denote the same property, for example "shift in direction of x-axis or y-axis" for the first two elements respectively.
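
Here's the rule I'm proposing as a rough Python sketch (pad_add is just a name I made up for it):

[code]
def pad_add(u, v):
    """Add tuples of possibly different lengths by zero-padding
    the shorter one, as described above."""
    n = max(len(u), len(v))
    u = u + (0,) * (n - len(u))  # append zeros for missing coordinates
    v = v + (0,) * (n - len(v))
    return tuple(a + b for a, b in zip(u, v))

print(pad_add((1, 2), (3, 4, 5, 6)))  # (4, 6, 5, 6)
[/code]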

Is there actually a sound reason why this should be forbidden?

Who taught you vector calculus without explaining what a vector space is? Someone played a cruel trick on you.
That's like giving a knife to a chicken.

Can you explain what you mean without resorting to pretentious metaphors? Thanks

Yes. Because (1,2) is not the same as (1,2,0,0), and you can't add up vectors that live in different R^n. This all makes more sense once you get into linear algebra and look at what dimensions and spaces actually are.

>vector space
>metaphors
If only you knew the definition of the words you use, you would understand none of your post makes sense.

>Because (1,2) is not the same as (1,2,0,0)
Why not? There is no additional information in the second vector. The second vector appears to come from R^4, but is actually just an R^2.

If you took four vectors looking like the (1,2,0,0), you couldn't generate the R^4, only the R^2.

>That's like giving a knife to a chicken.
Isn't this a metaphor though? Can you just explain what you mean instead of being smug about your knowledge/opinion? Thanks.

Whether two vectors can generate a space or not has nothing to do with the dimensionality of the vectors. Example:
(1,0,1) and (0,1,0) generate Z^3, despite, by your logic, being Z^2 and Z^1.

>Can you just explain what you mean
No I can't, because none of what you say makes any goddam sense.
You don't know what vectors are, you don't know what addition is.

What I "mean" is: understand what a vector is before making up vector addition rules.
Do not even bother replying to this post before you can answer the request: "define vector".

>no additional information

Of course there is. (1,2) means that you have a vector in R^2 while (1,2,0,0) means it's a vector in R^4. The zeroes are not "nothing". Dimensions are crucial when dealing with matrices, and there you can't just cancel out zeroes because you think they don't mean anything.

You're basically typecasting stuff as you go along, and you're introducing an arbitrary convention.

You can do this, sure. But then you're leaving the realm of linear algebra, in which things aren't set up like this, and so you can't use any of its theorems - you'd have to reprove a lot of stuff.

For example, in a vector space (or any commutative group), taking the inverse is an involution.
Meaning, if you consider
v = (3,-6,0)
then
inverse(v) = (-3,6,0)
because
v+inverse(v) = (0,0,0)
and also, by linearity
inverse(v) = -v
And thus
inverse(inverse(v)) = v

This is just like how the function
i(x) := 1/x, with x in the positive reals
applied twice gives the identity
i(i(x)) = 1/(1/x) = x

But in your system, the inverse isn't unique anymore in the above ways:
The vector w = (-3,6)
has
v+w = (0,0,0)
too.

You left the axioms of linear algebra, so it's a new framework you introduced here.
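
Here's another concrete casualty, as a rough Python sketch (pad_add is an ad hoc name for your padding rule): in the raw setup, where (1,2) and (1,2,0) are distinct tuples, cancellation fails too.

[code]
def pad_add(u, v):
    # zero-pad the shorter tuple, then add componentwise
    n = max(len(u), len(v))
    u = u + (0,) * (n - len(u))
    v = v + (0,) * (n - len(v))
    return tuple(a + b for a, b in zip(u, v))

a, b, c = (1, 2), (1, 2, 0), (0, 0, 0)
print(pad_add(a, c) == pad_add(b, c))  # True: both give (1, 2, 0)
print(a == b)                          # False: a + c = b + c, yet a != b
[/code]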

By my logic, (1,0,1) is Z^3 and (0,1,0) is Z^2.

Only the trailing zeros matter. All trailing zeros are essentially worthless.

And you actually can't generate the Z^3 with only two vectors. You are only generating a plane within the Z^3.


I know what vectors are and I know what addition is. I am proposing an alternate definition that might be useful.

You cannot generate the R^4 from four vectors looking like the (1,2,0,0). So I don't think those are really representative of the R^4. You would need at least one vector that is truly an R^4, one which actually has a non-zero value on the 4th component.

>Dimensions are crucial when dealing with matrices
I am not talking about matrices yet, though. But that's a nice idea, perhaps I can find some consistencies with this approach in matrices as well.

Very insightful! Thanks for your honest reply, and thanks for not insulting me right away.

>I know what vectors are
I very much doubt that.
>I am proposing an alternate definition
No, no you are not.

(1,2,0,0) is not the same as (1,2). Assume the standard basis: (1,2) = 1*(1,0) + 2*(0,1). Now for the first one, assuming the standard basis, it is actually (1,2,0,0) = 1*(1,0,0,0) + 2*(0,1,0,0) + 0*(0,0,1,0) + 0*(0,0,0,1), which you might interpret as being the same but is fundamentally different, because R^4 is literally a different vector space than R^2. Now what you can say is that R^2 x {0} x {0} as a subspace of R^4 is isomorphic to R^2, so R^2 can be embedded into R^4. So (1,2,0,0) is the isomorphic image of (1,2) in R^4.
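
Written out, the embedding in question is the linear map [eqn]\iota : \mathbb{R}^2 \to \mathbb{R}^4, \qquad \iota(x,y) := (x,y,0,0),[/eqn] which is injective, so it identifies [math]\mathbb{R}^2[/math] with its image [math]\mathbb{R}^2 \times \{0\} \times \{0\}[/math]. Padding with zeros is applying [math]\iota[/math]; it is not an equality of vectors.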

>R^4 is literally a different vector space than R^2
That's true. But an addition such as (1,2,0,0) + (3,4,5,6) can be seen as only affecting the components within the R^2. It doesn't affect the parts that are truly within the R^3 or R^4.

>(1,2,0,0) = 1*(1,0,0,0) + 2*(0,1,0,0) + 0*(0,0,1,0) + 0*(0,0,0,1)
Scalar multiplication yields
(1,2,0,0) = (1,0,0,0) + (0,2,0,0) + (0,0,0,0) + (0,0,0,0)

Which is the same as
(1,2,0,0) = (1,0,0,0) + (0,2,0,0)

Which is, I argue, the same as
(1,2) = (1,0) + (0,2)

Or perhaps even
(1,2) = 1 + (0,2)

>You cannot generate the R^4 from four vectors looking like the (1,2,0,0).

Let A be an element of R^(4x4). We can write every possible matrix in this space by expanding it over the canonical unit vectors:

A=e_1(1,0,0,0)+e_2(0,1,0,0)+e_3(0,0,1,0)+e_4(0,0,0,1)

Each single vector lies in the space R^(1x4) of row vectors, which can be identified with R^4.

>ITT: OP discovers projections are linear operators

You can do this, not really a big deal as long as you know what you're talking about and other people understand you.

When you come into contact with the ideas of surfaces embedded in space or subspaces, then you can start to develop these ideas more.

Mostly everyone here is an undergrad autist, it's not as important as they seem to make it out to be. You're just learning and have more to learn.

For instance, vectors aren't little pointy arrows that you can draw in a space. That's just one example of a vector space. You can make tuna salad into a vector space.

>Very insightful! Thanks for your honest reply, and thanks for not insulting me right away.
np.

In fact, you still appear to have a unique additive identity, namely (0).
v+(0)=v
for v of any dimension, and e.g.
(0,0,0)+(0)=(0,0,0)+(0,0,0)=(0,0,0)
while e.g. (0,0,0) itself isn't an identity for (3,5), since (3,5)+(0,0,0) = (3,5,0).

You have a commutative monoid over the countably infinite disjoint union of the R^n's,
where the elements are not uniquely invertible, so here are some "similar" structures.
mathworld.wolfram.com/CommutativeMonoid.html
en.wikipedia.org/wiki/Disjoint_union#Set_theory_definition

Since the components of the infinite union are all vector spaces over the same field R, the whole thing inherits a whole bunch of vector-space-like features; e.g. you don't run into trouble with something like
"3·( (3,4) + (2,4,7) )"
standing for
"(9,12) + (6,12,21)",
i.e. pulling out scalars.
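
Worked out, both readings agree: [eqn]3\cdot\big( (3,4) + (2,4,7) \big) = 3\cdot(5,8,7) = (15,24,21) = (9,12) + (6,12,21).[/eqn]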

No argument.

But (0,0,1,0) and (0,0,0,1) don't look like (1,2,0,0). The point was that all trailing zeros can be eliminated, yielding a sort of "reduced" vector. If you do this for (0,0,1,0), then you get the reduced version (0,0,1). If you do this for (0,0,0,1), then you get the reduced version (0,0,0,1), because there are no trailing zeros.
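
As a rough Python sketch of that reduction (reduce_vector is a made-up name):

[code]
def reduce_vector(v):
    """Strip trailing zeros, yielding the 'reduced' vector."""
    n = len(v)
    while n > 1 and v[n - 1] == 0:
        n -= 1                        # drop one trailing zero
    return v[:n]

print(reduce_vector((0, 0, 1, 0)))  # (0, 0, 1)
print(reduce_vector((0, 0, 0, 1)))  # (0, 0, 0, 1): no trailing zeros
[/code]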


I always assume that the "longer" vector is sort of an extension of the shorter one. Let's stay in the R^3 and think of it as a coordinate system with the first component denoting the x-value, the second one denoting the y-value and the third one denoting the z-value. If we now take a vector (1,1), denoting x and y value, then I would argue the following:
The components of the shorter vector each match the respective "data type" of the components of the longer vector. Thus they can be added without losing or generating additional information.

The same applies to a vector when you see it as a collection of primitive data. For example if you have a vector like this:

1st component: number rolled with the first die
2nd component: number rolled with the second die
3rd component: number rolled with the third die

And another vector
1st component: number rolled with the first die
2nd component: number rolled with the second die

Then the first two data types match. If the task is to compute the total number rolled with each die, then you could model it like this:

for example
(3,6,2) + (1,4)
= (3,6,2) + (1,4,0)
= (4, 10, 2)

There is no additional information in the (1,4,0) vector and the total number still matches. It's important that the "data types" of the vectors match component-wise for all components of the shorter vector.

You're right,
1*(1,0,0,0)+2*(0,1,0,0)+0*(0,0,1,0)+0*(0,0,0,1) doesn't equal (1,2,0,0) and is clearly part of R^2. You're not bad OP, what is your background?

>1*(1,0,0,0)+2*(0,1,0,0)+0*(0,0,1,0)+0*(0,0,0,1) doesn't equal (1,2,0,0)
It does, but (1,2,0,0) can be reduced to (1,2), which is actually an R^2.

(1,2,0,0) isn't a "real" R^4 vector because it doesn't have a "real" value as its 4th component. Zero is irrelevant.


I know that this perspective might seem suspicious, and I don't claim it is consistent with all the theorems in Linear Algebra, but I think it might be useful somewhere. Why do you have to be so condescending?

You're like that one guy a couple weeks ago fervently arguing that derivative operators can be canceled out. According to him, things like d^2x/dt^2 can be "simplified" to x/t. Your "improvements" follow the same line.

My God OP have you covered change of basis yet?

>Why do you have to be so condescending?
Not him but probably because either:
>you had a class in linear algebra and you barely listened to any of it, given the approximate language you keep using
>you didn't have a class of linear algebra and you're trying to make sense of vector calculus for some reason, maybe your country is teaching things backward, I don't know
>none of the above and you're just trying to learn things by yourself, in which case going a few more pages forward would have answered all your questions

No. Derivatives and vectors are two very different things, so this is a false equivalency.

>change of basis
I'm not sure what specific problem you're pointing out.

Also I was talking very humbly about vector addition and dot product only, not change of basis (yet).

I would be thankful if you could simply produce an example that would destroy my reasoning. Instead you and others in this thread vaguely hint that it's wrong because of textbook definitions.

Thanks again!

>I would be thankful if you could simply produce an example that would destroy my reasoning.
>(1,2,0,0) isn't a "real" R^4 vector because it doesn't have a "real" value as its 4th component. Zero is irrelevant.
That same vector has all nonzero components in literally an infinity of bases. Like this one: (1/2)*{(-1,1,1,1),(1,-1,1,1),(1,1,-1,1),(1,1,1,-1)}
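
If you don't believe it, check it numerically; a quick numpy sketch:

[code]
import numpy as np

# basis vectors as rows, scaled by 1/2
rows = 0.5 * np.array([[-1,  1,  1,  1],
                       [ 1, -1,  1,  1],
                       [ 1,  1, -1,  1],
                       [ 1,  1,  1, -1]])
B = rows.T                      # column i is the i-th basis vector
v = np.array([1., 2., 0., 0.])
print(np.linalg.solve(B, v))    # [ 0.5 -0.5  1.5  1.5] -- all nonzero
[/code]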

I'm only replying to you seriously out of the goodness of my heart. You can stop being a prissy little cunt with the "BAAAAAAWAAAAAAW WHY ARE YOU BEING MEAN TO ME". If you can't deal with people bluntly telling you you're wrong you might as well drop out now.

>I'm only replying to you seriously out of the goodness of my heart.
You really are a good-hearted person!

>You can stop being a prissy little cunt with the "BAAAAAAWAAAAAAW WHY ARE YOU BEING MEAN TO ME".
insult and hyperbole, please grow up

>If you can't deal with people bluntly telling you you're wrong you might as well drop out now.
I can, but I don't want to take their word for it. If I'm wrong, then you can correct me objectively, without insults or condescending comments.


Your example helped. I can see now that change of basis is something fragile within my framework. I don't see how vector addition or dot product are, though.

My favourite part was when you claimed you know what a vector is. Good show OP.

>If I'm wrong, then you can correct me objectively
Have you ever stopped and wondered if you deserved all your stupid questions answered seriously?

You can just abuse notation and add that rule; I'm not sure you'll end up with anything productive, but you can do whatever you like.

>Is there actually a sound reason why this should be forbidden?

You are essentially talking about the inverse limit of R^n, otherwise known as infinite-dimensional space. So yes, you can add them together but then you have to deal with all the problems that come with infinite dimensions.

>HS retard tries to come up with a clever way to reduce vectors
>multiple people show him exactly why this doesnt work
>he literally doesnt understand anything about linear algebra
>calls everyone invalid and claims ad hominem
>

Oh he's a highschooler?
Maybe I've been too harsh. I thought he was a retarded freshman.

You are just defining a new convention of writing down vectors from R^+oo.

Polynomials also form a vector space, for example. And your kind of convention is sort of used there. For example 1 + x = 1 + 1*x + 0*x^2 + 0*x^3, which could be rewritten as (1,1) = (1,1,0,0).

It is however important to understand the difference between R^n and R^+oo. E.g. (1,1) from R^2 is not the same thing as (1,1) from R^+oo with your notation, but you could probably define some sort of equivalence relation between members of all the R^n and R^+oo, which is maybe what you are kind of thinking about. So you can definitely do it, but it is not useful in any way I can think of.
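
Polynomial libraries actually implement essentially this convention; if I remember numpy's behaviour correctly, its Polynomial class pads coefficient lists of different lengths when adding and trims trailing zeros from the result:

[code]
from numpy.polynomial import Polynomial

p = Polynomial([1, 1])     # 1 + x, i.e. (1,1)
q = Polynomial([1, 1, 0])  # 1 + x + 0*x^2, i.e. (1,1,0)
print((p + q).coef)        # [2. 2.]: padded, added, trailing zero trimmed
print(p == q)              # False: distinct objects despite "equal" content
[/code]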

it's actually a direct limit, friend

Is it? I think it may be a matter of his intention, and if he cares for how much is needed to represent his elements.
It seemed like his plan is to have more elements than [math] R^\infty [/math]. A priori, the structure has the union of all [math] R^n [/math] as underlying set
[math] U = {\mathbb R} + {\mathbb R}^2 + {\mathbb R}^3 + \dots + {\mathbb R}^n + \dots [/math]
where e.g. (4,6) and (4,6,0,0,0) are distinct and then some comparatively complicated addition defined on it.
You could then go and introduce a quotient, identifying all vectors such as (4,6) and (4,6,0,0,0) and this may be the same as [math] R^\infty [/math].
The distinction matters e.g. when naively defining functions like "the smallest entry of a vector".
For v1=(4,6), v2=(4,6,0), and v3=(4,6,0,0,0,0,...), how should the standard "min" be defined? Should min(v1) = min(v2)?
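
The min problem in two lines of Python:

[code]
v1 = (4, 6)
v2 = (4, 6, 0)
print(min(v1), min(v2))  # 4 0: identifying v1 with v2 breaks "min"
[/code]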

thx

It's the limit for the underlying sets (i.e. after you passed to Set with the forgetful functor), but it can't be a direct limit of vector spaces when what OP describes isn't a vector space itself

What if you had one vector (... 0, 1, 0, ...) and another (1, 1)? How would you add them?

How is it not a vector space? The direct limit of any directed system of vector spaces should exist I think.

For pragmatic reasons it _might_ be useful to treat (1,1,0,0) and (1,1) as the same thing, but mathematically and philosophically they are different objects.

And, in general, if you have an algebraic category with only operations of finite arity, then the limit of any "finitely directed" system will exist. You can view scalars as an infinite collection of unary operations, so it works out fine.

mechanical engineer here,
I do this often when solving problems, most commonly dropping trailing zeros to work in 2d space.

For example, when working with the stress tensor, a vector in [math]\mathbb R^{3\times3} [/math]: if the data only lies in the x,y plane we drop all elements representing the z dimension, or if it is the x or y that is zero, we drop those dimensions. As long as you keep track of how you permuted the matrix indices, you can always undo it when you finish the problem and get the correct answer.

The case OP is interested in is just a 1d vector, but for a vector space of any dimension I believe you can always permute the elements and it will have no effect, just like in the equation ax+bx = bx+ax; its meaning only depends on how you interpret it.

This is a nice way of thinking about it for engineering purposes. In the real world vectors always have a meaning associated with each index, like photon energy, or something concrete, so you can always change your interpretation and rearrange the indices in real-world problems. I've never encountered a situation where this did not give the correct answer.

so fucking sick of posting on a board full of literal teens.

For example in engineering the stress tensor,
[eqn]
\sigma =

\begin{bmatrix}
\sigma_{1,1} & \sigma_{1,2} & \sigma_{1,3} \\
\sigma_{2,1} & \sigma_{2,2} & \sigma_{2,3} \\
\sigma_{3,1} & \sigma_{3,2} & \sigma_{3,3} \\
\end{bmatrix}

[/eqn]

is often rearranged to get into 2 dimensions. I'm not sure if permutation is the right word for this, but it's used all the time to simplify solving problems. Maybe some math majors know the correct terminology; the way it's done in engineering is quite handwavy and non-rigorous, but it seems to work.

[eqn]
\begin{bmatrix}
\sigma_{1,1} & 0 & \sigma_{3,1} \\
0 & 0 & 0 \\
\sigma_{3,1} & 0 & \sigma_{3,3} \\
\end{bmatrix}

\Rightarrow

\textrm{permute tensor indices}
\begin{bmatrix}
\sigma_{1,1} & \sigma_{3,1} & 0\\
\sigma_{3,1} & \sigma_{3,3} & 0\\
0 & 0 & 0\\
\end{bmatrix}

\Rightarrow

\begin{bmatrix}
\sigma_{1,1} & \sigma_{3,1}\\
\sigma_{3,1} & \sigma_{3,3}\\
\end{bmatrix}
[/eqn]
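
As a rough numpy sketch of that workflow (toy numbers; np.ix_ does the index bookkeeping, and this is not a full stress analysis):

[code]
import numpy as np

sigma = np.array([[200.,  0., 30.],
                  [  0.,  0.,  0.],
                  [ 30.,  0., 90.]])   # stress state with no y components

keep = [0, 2]                           # dimensions actually carrying data
sigma2d = sigma[np.ix_(keep, keep)]     # 2x2 matrix to solve the problem in

restored = np.zeros((3, 3))
restored[np.ix_(keep, keep)] = sigma2d  # undo: put entries back in 3d slots
print(np.array_equal(restored, sigma))  # True: the round trip is lossless
[/code]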

Let:
[math]U = \bigcup\limits_{i=1}^{\infty} \mathbb{R}^i[/math]

You can define an equivalence on this set, so let [math]a \in \mathbb{R}^n[/math] and [math]b \in \mathbb{R}^m[/math],

if [math]m \le n[/math] then:

[math]a \sim b \iff what~OP~described,~e.g.~(\forall i \le m)(a_i = b_i) \land (\forall i, m < i \le n, a_i = 0)[/math]

if [math]m > n[/math] then the same thing but rewritten.

This relation is an equivalence, so it factorizes the set [math]U[/math] and you get a quotient set; let's denote it [math]U_q[/math]. You can define addition on this set in the way that OP described and you can make it into a vector space over R. But there is a bijection between [math]U_q[/math] and [math]\mathbb{R}^\infty[/math] (the set of infinite sequences over R with a finite number of nonzero elements), which is pretty straightforward. So [math]U_q[/math] doesn't bring you anything new in terms of possibilities, other than maybe a notation and a little exercise in the use of equivalence classes.
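
The relation is easy to state as a predicate too; a Python sketch with made-up names:

[code]
def equivalent(a, b):
    """a ~ b per the definition above: the shorter tuple matches the
    longer one componentwise and the longer one's tail is all zeros."""
    if len(a) > len(b):
        a, b = b, a                  # ensure len(a) <= len(b)
    m = len(a)
    return a == b[:m] and all(x == 0 for x in b[m:])

print(equivalent((4, 6), (4, 6, 0, 0, 0)))  # True
print(equivalent((4, 6), (4, 0, 6)))        # False: only trailing zeros
[/code]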

While doing all the latex stuff I forgot to add the point about the functions you mentioned. There would be a difference between min function on U and min function on U_q.

But to use that, you would still have to make sure the user knows how U_q was constructed and the difference between it and plain U. So I doubt it has any usefulness other than as a way to explain R^+oo a bit more formally.

I am not sure; it may very well be the actual way R^+oo is constructed.

undefined because infinite vectors don't actually exist

R X R X R^infinity X R X R^infinity?

>hasn't taken a single class in functional analysis: the post

Look OP, you seem like a well-meaning freshman who probably only took those math-for-engineers-and-scientists classes, and since I was in the same spot as an undergrad, I can sympathize. But the first thing you need to understand is that mathematical objects are constructed from the bottom up in a very specific way. It is meaningless to compare vectors like [math](1,1)^T[/math] and [math](1,1,0)^T[/math].

The vector space comes first; in particular, the set from which our elements arise is the first thing to determine. Let us consider the vector space [math]V[/math] built on the set [math]\mathbb{R}^2 := \mathbb{R} \times \mathbb{R}[/math] and prescribe the standard vector space structure on it via addition and scalar multiplication, i.e. define [math]V := (\mathbb{R}^2,+_V,\cdot_V)[/math] s.t. [eqn]+_V : \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}^2[/eqn] [eqn]\cdot_V : \mathbb{R} \times \mathbb{R}^2 \to \mathbb{R}^2.[/eqn] Now consider the vector space [math]W[/math] constructed in the same manner, but over the set [math]\mathbb{R}^3[/math] instead (with [math]+_W[/math] and [math]\cdot_W[/math] defined appropriately). We can see that by definition, for any [math]u,v \in V[/math] and [math]w, x \in W[/math], the following operations are valid: [eqn]u +_V v,[/eqn] [eqn]w +_W x,[/eqn] while something of the form [eqn]v +_V w[/eqn] literally makes no sense -- the inputs of the mapping are not proper elements of its domain! Likewise, for any [math]a \in \mathbb{R}[/math], [math]a \cdot_V w[/math] is simply not possible ([math]w \notin V[/math]). This last bit is trivial but may be the most convincing aspect of the argument: since [math]\dim V \neq \dim W[/math], it readily follows that [math]V \ncong W[/math], i.e. there is no isomorphism between the two vector spaces.
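
To make the "not in the domain" point concrete, here is a toy Python sketch (FixedVector is invented for illustration): the addition map simply has nothing to say about mixed-dimension inputs.

[code]
class FixedVector:
    """A vector bound to a fixed R^n; + is only defined within one space."""
    def __init__(self, *components):
        self.components = tuple(components)
        self.dim = len(self.components)

    def __add__(self, other):
        if self.dim != other.dim:
            raise TypeError(f"+ maps R^{self.dim} x R^{self.dim} -> "
                            f"R^{self.dim}; got an R^{other.dim} operand")
        return FixedVector(*(a + b for a, b in
                             zip(self.components, other.components)))

v = FixedVector(1, 2)
print((v + FixedVector(3, 4)).components)  # (4, 6): both live in R^2
try:
    v + FixedVector(3, 4, 5)               # not in the domain of +_V
except TypeError as e:
    print(e)
[/code]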

You're basically talking about the general yoga of doing algebra in the category of vector spaces rather than within a particular one. Your convention is captured by a linear map from R^n to R^(n+m), and you can also look at sections to this map (projections from R^(n+m) to R^n that are the identity on the image of the first map). Linear maps are bridges between linear algebra in different vector spaces.

Read up on the category of vector spaces over a base field, there is a lot to learn. You are asking the right questions, and many people in this thread are just pedants that think they know more than they do.
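
As a concrete sketch of such a pair of maps in numpy (E and P are ad hoc names):

[code]
import numpy as np

n, m = 2, 2
E = np.vstack([np.eye(n), np.zeros((m, n))])  # embedding R^n -> R^(n+m)
P = E.T                                       # a section: P @ E = identity

v = np.array([1., 2.])
print(E @ v)                                  # [1. 2. 0. 0.]: zero-padding
print(P @ (E @ v))                            # [1. 2.]: projected back
print(np.allclose(P @ E, np.eye(n)))          # True
[/code]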

>category theory

>not realizing that set theory is the tip of the iceberg
>not caring about higher-order type theories
>not understanding the utility of Kan extensions

Summer is over, time to leave sir.

It's a trivial difference, honestly. Yeah, (1,2) can't generate R^3, R^4, etc., but (1,2) is effectively the same as (1,2,0). (1,2) is a 2d vector, but when you plot it in R^3 it looks the same as (1,2,0). Additionally, they both look the same in R^4, R^5, etc. I know what you are saying, but if you compare them in the same vector space they are the same.

Actually bro, you might want to look at the basics of K-theory. There, projections are added even if they are not defined over the same space. Just a matter of "adding zeroes" again, but they make it all rigorous as usual.

K-theory is fun by the way.

I'm not mad at OP because I think it's impossible to define general addition rules; of course it's possible.
Nor because I'm a "pedant".

I'm mad at OP because he obviously took a course in linear algebra and yet uses very approximate language and fails to define what he means properly, when it's really not that hard if you paid attention.

But apparently people ITT are saying this is because of the kind of courses American engineers get?
If that's the case then it sends me back to my very first post: it IS a very cruel thing to do to someone. Now OP is left with a bunch of calculation rules and no idea where they come from. Of course he's gonna ask questions. I'm still frustrated there would be some college education system where that happens. This is good for high school, not college, not even engineering.

You are absolutely being a pedant. Obviously OP has been done a disservice in their education, but you are only turning them further away from mathematical curiosity by shitting on them. These are good questions to be asking, and you are saying the asker is incompetent because they don't understand what they are asking about. Of course they don't, otherwise they wouldn't be asking.

All ideas start off imprecise and murky, and can later be refined appropriately. It's not contrary to mathematical reasoning to have vague conceptions; indeed, there are useful ideas which evade full formalization and which overshadow the partial formalizations put forth. That does not mean the content should be disregarded. Quite the opposite: more inquiry is necessary to understand them further.

Mathematics is a march from the world of philosophies to the world of formalizations, and starting at the formalization without asking about the preceding philosophy is in poor taste and accomplishes little.

>Hur hur why can't you just unrigorously mishmash fields together like it's nothing

?

>but you are only turning them further away from mathematical curiousity
Oh come on. Freshmen get told often enough that they are God's gift to mankind. I don't see the point of dropping a bunch of proper generalizations of addition on him if he hasn't understood the structures he is talking about.

Undergrads getting taught by a bad educator are not getting the right kind of encouragement, I promise you. If I had even one teacher in school encourage me to further examine my questions rather than just tell me "it just works this way," then I would have started learning higher math far earlier. I see no reason to discourage someone simply because others might be encouraging them anyway, but if they are asking this question here then I feel this is the place for someone to kindle the drive to ask good questions. It's not as though OP is being dense, either. These are the same questions that led to the development of linear algebra in its full, modern incarnation, after all.