Tensor calculus

I don't understand shit. Please explain tensors and how to use them in calculus

>falling for the tensors=arrays meme

A tensor is a thing that transforms like a tensor. That is all you need to know.

Say you have a force that can act in three directions (x, y, z) on some object, but the object can move in any direction (x, y, z) even if it's only pushed in, say, the x direction.

So let's say x1, y1, and z1 can combine with x2, y2, and z2 in any combination. This gives 9 total possibilities: x1x2, x1y2, x1z2, y1x2, y1y2, y1z2, z1x2, z1y2, and z1z2.
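
If it helps to see that concretely, here's a quick numpy sketch (the numbers are made up, just to show the shape of the thing):

[code]
import numpy as np

# hypothetical force and response vectors (made-up numbers)
f = np.array([1.0, 0.0, 0.0])   # push purely along x
r = np.array([0.3, 0.5, 0.2])   # object responds in all three directions

# the outer product pairs every component of f with every component of r,
# giving the 9 combinations (x1x2, x1y2, ..., z1z2) as a 3x3 array of components
T = np.outer(f, r)
print(T)   # 3x3 array: rows indexed by force direction, columns by response
[/code]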

It's useful in electrodynamics, where an electric field in the x direction in one frame shows up as a B field in the y direction in another frame.
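
For reference, that's the electromagnetic field tensor: E and B sit inside one object and mix when you change frames. In one common convention (SI units; signs vary by textbook):

[eqn]F^{\mu\nu} = \begin{pmatrix} 0 & -E_x/c & -E_y/c & -E_z/c \\ E_x/c & 0 & -B_z & B_y \\ E_y/c & B_z & 0 & -B_x \\ E_z/c & -B_y & B_x & 0 \end{pmatrix}[/eqn]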

Not OP, could you explain why a tensor is not an array? Wtf is it?

This is what I got from the NASA PDF, but it's introduced differently in calculus. Can you explain it in terms of calculus and how it works with coordinate systems?

also useful in higher level structural analysis for mech engineering

What about calculus?

It's literally a bunch of data ordered in an arbitrary way

you should probably understand what a tensor is first, then what a tensor space is.

What are they?

No idea mate, sorry lol

What do you mean??? Why don't you know, if you know about tensors?

tensors are not multidimensional arrays. tensors represent invariants, so they have to transform in a way that preserves this.

in order for that to happen you have to transform the components in a way that's opposite to the basis that it's represented in.
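
Concretely: if the new basis vectors are built from the old ones with some matrix A, the components have to change with the inverse,

[eqn]e'_i = \sum_j A^j{}_i \, e_j \quad\Rightarrow\quad v'^i = \sum_j (A^{-1})^i{}_j \, v^j[/eqn]

so the combination [math]\sum_i v^i e_i[/math], the actual vector, stays the same object.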

A simple example: the length of a vector squared is invariant. It's got that length no matter what. But its components depend on what basis you're using. So how do we change basis without fucking everything up? Well,

[eqn]|v|^2 = v^\top v[/eqn]

the left side is the invariant; the right side is something that depends on your choice of coordinates, since it's the components of the vector getting squared and added up.

What we can do is introduce an identity matrix between them to absorb the changes we're about to make.

[eqn]|v|^2 = v^\top I v[/eqn]

So now if I want to change basis my components might change with some matrix J as,

[eqn] v = Ju[/eqn]

So plug this in and we get,

[eqn]|v|^2 = (Ju)^\top I (Ju) = u^\top J^\top J u[/eqn]

we still have a matrix in the middle, but instead of being the identity matrix I it has turned into [math]J^\top J[/math]. But look: the invariant [math]|v|^2[/math] is completely unchanged.

We call this matrix in the middle the metric tensor, and we call v and u the vectors, but really they're just vector components that depend on the basis. At the end of the day we can always put it together in this way,

[eqn]|v|^2 = x^\top G x[/eqn]

no matter how we change the G and the x. As long as we change them together in this way, the invariant length of the vector is maintained, and that's really about all there is to something being a tensor, other than that you can have more indices involved, which increases the rank of the tensor.
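
If you want to sanity-check that numerically, here's a throwaway numpy sketch (J and v are arbitrary made-up numbers):

[code]
import numpy as np

v = np.array([1.0, 2.0, 3.0])      # components in the old basis
J = np.array([[2.0, 1.0, 0.0],     # some invertible change-of-basis matrix
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

u = np.linalg.solve(J, v)          # new components, chosen so that v = J u
G = J.T @ J                        # the metric the derivation above produces

# the same invariant, computed two ways
print(v @ v)       # |v|^2 in the old basis (metric = identity)
print(u @ G @ u)   # |v|^2 in the new basis with the transformed metric
[/code]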

I'm just a sophomore in mechanical engineering. I've had them explained to me, but I've never used them

Why does the picture say tensors are multidimensional arrays? What is the superscript T? How does this preserve length when you change basis?

If you're changing basis using J, wouldn't J be specific to whatever basis you choose?

>How does this preserve length when you change basis?
Length is preserved when you change basis because the components and the metric change in opposite ways, so the combination [math]x^\top G x[/math] comes out the same. You're describing the same magnitude in different coordinates.
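
The cleanest special case: if J is a rotation (orthogonal), then [math]J^\top J = I[/math] and even the bare dot product survives. A quick sketch (the angle is arbitrary):

[code]
import numpy as np

theta = 0.7                                    # arbitrary rotation angle
J = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([3.0, 4.0])
u = J.T @ v          # components in the rotated basis (J^{-1} = J^T here)

print(v @ v, u @ u)  # both 25.0: squared length unchanged, no metric needed
[/code]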

But how are you calculating it?

T is transpose: [math]v^\top[/math] turns the column vector into a row vector, so [math]v^\top v[/math] is just the dot product written as matrix multiplication.

How is this anything special? G is a tensor? That's it? How do you know if a matrix is a tensor or not?

If it transforms like a tensor then it is a tensor. A matrix is a tensor if its entries come with a transformation rule under change of basis, the way G picked up the J's above; a bare grid of numbers with no such rule is just an array.

What can you do with it? Why would I want G?

What kind of question is that?

I know some of you have never talked abstractly about shit, but a vector is by definition a member of a vector space. I'm a physicist and I can hear you out, but this definition, while it sounds abstract, carries in it the idea that these objects are invariant under the choice of basis and can be operated on however we like. You can then talk about the representation in a given basis and the representation in another, and using the identity transformation you will end up with the same object, because it is that object by definition. The same goes for linear maps: you have some linear relation between vectors you want to represent; you can define it abstractly as a function on your vector space, and once given a basis, you can represent it as a matrix. Sometimes you have relationships that are more complicated, say a relation that assigns some vector to every pair of vectors. If you try to find a matrix that represents this, you will end up with nothing, because you need more indices: a square array is not enough to represent that transformation. Tensors are, then, just representations of these relationships, and by definition they cannot depend on the basis you use. It just happens that some things need these sorts of relationships.
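
To make the "you need more indices" point concrete, here's a minimal numpy sketch using the cross product, a familiar map that takes two vectors to a vector, so its components form a 3-index array rather than a matrix:

[code]
import numpy as np

# the components of the cross product form a 3-index array:
# the Levi-Civita symbol eps[i, j, k]
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations of (0, 1, 2)
    eps[i, k, j] = -1.0  # odd permutations

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

# contract both slots with the inputs: (a x b)_i = eps_ijk a_j b_k
print(np.einsum('ijk,j,k->i', eps, a, b))  # [0. 0. 1.]
print(np.cross(a, b))                      # same answer from numpy directly
[/code]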

Every multilinear map has a tensor representation.

Ya take a vector space and its dual space, then make a thing that maps r vectors and s covectors to the underlying field, linear in each argument, and whammo, you've got a tensor.
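
Spelled out in the usual notation, that's a multilinear map

[eqn]T : \underbrace{V \times \cdots \times V}_{r \text{ copies}} \times \underbrace{V^* \times \cdots \times V^*}_{s \text{ copies}} \to \mathbb{F},[/eqn]

and picking a basis [math]\{e_i\}[/math] with dual basis [math]\{e^j\}[/math] gives you its components,

[eqn]T_{i_1 \cdots i_r}{}^{j_1 \cdots j_s} = T(e_{i_1}, \ldots, e_{i_r}, e^{j_1}, \ldots, e^{j_s}),[/eqn]

which is exactly the multidimensional array everyone keeps pointing at. The array is the representation in a basis; the map is the tensor.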