BRAINLET GENERAL: Basic concepts in science only you don't understand

>vectors

string theory

>inter-universal Teichmüller theory

>matrix multiplication

Induction.

THIS

I still can't tell the difference between rows and columns

An m*n matrix T is secretly a function whose domain is the set of all vectors of length n, and whose codomain is the set of all vectors of length m.
As a function it assigns the m-vector Tx to the n-vector x, i.e. [math]x \mapsto Tx[/math].

For example, the matrix [math] T = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6\end{pmatrix}[/math] assigns to each 3-vector [math]\begin{pmatrix} x \\ y \\ z\end{pmatrix}[/math] the 2-vector [math] \begin{pmatrix} x+2y+3z \\ 4x+5y+6z\end{pmatrix}[/math].
So e.g. the vector v=(42,0,-3) gets mapped to Tv=(42+0-9, 168+0-18) = (33,150).

Matrix multiplication then corresponds to function composition, i.e. for matrices [math]T_{m\times n}[/math] and [math]S_{n\times k}[/math] corresponding to the maps [math]v\mapsto Tv[/math] and [math]u\mapsto Su[/math], the matrix product TS maps [math]u \mapsto TSu[/math] which can be written as the composition [math]u \overset{S}{\mapsto } Su \overset{T}{\mapsto } TSu[/math].
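To make the "matrix as function" picture concrete, here's a minimal plain-Python sketch (no libraries; the helper names `apply` and `matmul` are just made up for illustration, and the numbers reuse the worked example above):

```python
# Sketch of "a matrix is a function": apply M to a vector, x -> Mx.
def apply(M, v):
    """Apply matrix M (a list of rows) to vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in M]

def matmul(A, B):
    """Matrix product AB, chosen so that applying AB equals applying B then A."""
    cols_B = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in cols_B] for row in A]

T = [[1, 2, 3],
     [4, 5, 6]]
v = [42, 0, -3]
print(apply(T, v))  # [33, 150], matching the worked example above
```

Multiplying first and then applying gives the same answer as applying one matrix after the other, which is the "matrix product = function composition" point.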

I get vectors, but what exactly are tensors?

Hope this helps.

>youtube.com/watch?v=6HCz1tFqIcs

>secretly
secretly to engineers maybe

vectors
transformation formulae

what an electron is
; (

wtf is a vanishing cycle

Tensors are a generalization.
The scalar is a tensor of order 0; you raise the order of the tensor by attaching basis vectors to the number. So a vector is a tensor of order 1, a matrix order 2, and so on.

hm, never heard it explained like this before. Is there anywhere I can read up more on this?

wow!

you must really be retarded

To give an informal description that lacks geometric intuition, just as vectors/vector fields let you assign an N-by-1 array to every point in N dimensional space, tensors are a generalization letting you assign N-by-N arrays etc to every point in a space.

Example application: the stress tensor. Think of the pressure applied to the faces of a tiny cube of material. You can have forces applied in the principal (x y z) directions orthogonal to the cube faces, plus "shear" forces that deform the cube of material by rotating the faces' relative orientations, acting along each face (google an image of it). If you represent these as a matrix with the forces orthogonal to the faces on the diagonal and the ones parallel to the faces off the diagonal, you get a nice symmetric matrix. You can then use this to compute e.g. the total amount of energy absorbed by a body under a certain combination of loading.

There's not much to understand. For a regular joe it's just an overly complicated rule for multiplying two compatible vectors together.
Really the only two interesting things about it are that
1.) despite being so elaborate it's still associative -
though not commutative, even among same-sized square matrices.
2.) the complex rules are not arbitrary. There are practical reasons for doing things the way they're done.
And that's about it. If you want to get good at that stuff, practice. But if you want to understand it, well, the above two points mostly cover it.
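The associativity point (and the failure of commutativity) is easy to check numerically; here's a quick sketch with made-up example matrices:

```python
# Check (AB)C == A(BC), and that AB != BA in general.
def matmul(A, B):
    cols = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in cols] for row in A]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [0, 2]]
print(matmul(matmul(A, B), C) == matmul(A, matmul(B, C)))  # True
print(matmul(A, B) == matmul(B, A))                        # False
```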

could shearing forces be modeled using affine transformations?

No.

Literally any linear algebra book that focuses on theory, rather than computation, will define matrices this way.
As the other anon points out, this way of thinking about matrices (i.e. as linear maps) is the norm among mathfags.

damn. i thought the points on a form shifting due to shearing stress could be calculated using an affine operator of some kind on each n-tuple of coordinates defining it.

Well, though not an affine transform per se as far as I know, look up Mohr's circle: it's the standard engineering way of eliminating the off-diagonal terms of the stress tensor locally by choosing an appropriate basis, and it is of course implemented computationally through matrix multiplication.

>if any domino block falls, the next one falls also
>the first domino block falls
>therefore all domino blocks fall
Are you having trouble understanding the intuition on why induction works or is the problem in applying induction in practice?
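The domino picture can be made concrete with a numerical sanity check (not a proof!) for an example claim; I'm using the standard formula 1+2+...+n = n(n+1)/2 here as an assumed illustration:

```python
# Domino picture, numerically: check the claim at the base case and at many
# consecutive values of k. A real induction proof does the k -> k+1 step
# symbolically instead of testing instances.
def claim(n):
    return sum(range(1, n + 1)) == n * (n + 1) // 2

assert claim(1)                 # base case: the first domino falls
for k in range(1, 50):          # each domino knocks over the next
    assert claim(k) and claim(k + 1)
print("holds up to n = 50")
```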

How to get laid

"ayy bb check this out"

*calculate all primes up to order n*

literally unbeatable

>go to the gym
>get healthy
>act with confidence
done
women are hardwired to go to the alpha male
if you look and act like a beefcake brawler, you then become the alpha
women are hardwired to scorn betas, as a beta cannot protect them and their children from predators, while the alpha can

simple straightforward shit, it's baffling that /r9k/ exists at all

From the picture of Hume I'm guessing that he was making a joke.

>"ayy bb check this out"
>*calculate all primes up to order n*

Check out YouTube channel 3blue1brown and his "Essence of Linear Algebra" series.

Integral calculus.

>Redpill inc
eli.thegreenplace.net/2015/visualizing-matrix-multiplication-as-a-linear-combination/

This is how it really is user, classes don't teach it though because they don't want to confuse brainlets

I can't read equations.

D'Alembert's principle of virtual displacements

>he doesn't know the average Veeky Forumsizen

>random variables

I didnt fucking die

>quantum path integrals
I am a fucking retard senpai

>eigenvalues/ eigenvectors
>hypothesis testing
>optimization

What does a determinant represent?

the volume of the image of the unit hypercube

it's pretty hard to break intuition. for most non math students in my school linear algebra is literally the last or second to last class they touch involving matrices

Eigenvalues are the values by which a vector is scaled, meaning it still spans the same line, just stretched or shrunk.
An eigenvector is a vector corresponding to such a value.
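Here's a minimal sketch of "stays on its span", using a made-up diagonal example matrix so the eigenvectors are obvious:

```python
# For an eigenvector v, Av is just a scaled copy of v; a non-eigenvector
# gets knocked off its own line.
def apply(M, v):
    return [sum(a * x for a, x in zip(row, v)) for row in M]

A = [[2, 0],
     [0, 3]]          # eigenvalues 2 and 3, eigenvectors along the axes
v = [1, 0]            # eigenvector for eigenvalue 2
print(apply(A, v))    # [2, 0] == 2 * v: still on the span of v
w = [1, 1]            # NOT an eigenvector of A
print(apply(A, w))    # [2, 3]: not a scalar multiple of [1, 1]
```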

youtube.com/watch?v=PFDu9oVAE-g

Here faggot.

The signed volume of the parallelepiped spanned by the matrix's columns (equivalently, of the image of the unit hypercube).

If 2x2 it's area.

I'm retarded.
The scaled vector belongs to an n×n matrix (eigenvectors only make sense for square matrices).
In order for it to be considered an eigenvector it must stay on its span.
Watch the video.

>random variables
A random variable is a function from the sample space to the set of all possible values it can take.
For example, if the experiment is that we flip two coins, then the sample space is S = {Hh, Ht, Th, Tt}.
If we let X denote the random variable corresponding to the number of heads, then the set of values it can take is V = {0,1,2} and we can interpret X as a function [math]S \to V[/math] with X(Hh) = 2, X(Ht) = X(Th) = 1 and X(Tt) = 0.

Now if the coins are fair then each of the four experiment outcomes occurs with probability 1/4, which defines a probability distribution for X in the obvious way, i.e. P(X=0) = P(X=2) = 1/4, P(X=1) = 1/2. If the coins were weighted or the two flips were correlated (for whatever bizarre reason) then the probability distribution would be different, but the basic idea remains the same.
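The two-coin example can be enumerated directly; a minimal sketch (stdlib only, names are mine):

```python
# Sample space of two fair coin flips; X = number of heads.
from itertools import product
from fractions import Fraction
from collections import Counter

S = list(product("HT", repeat=2))   # [('H','H'), ('H','T'), ('T','H'), ('T','T')]
X = {s: s.count("H") for s in S}    # the random variable as a function S -> {0,1,2}

# Each outcome has probability 1/4; the distribution of X inherits from that.
p = Counter()
for s in S:
    p[X[s]] += Fraction(1, 4)
print(dict(p))  # P(X=2) = 1/4, P(X=1) = 1/2, P(X=0) = 1/4
```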

I still don't understand probability distributions.

Thanks for the help though

>>act with confidence
What does this mean?

DiffEq. All of it, I think. I'm not sure how I passed the class last semester but I retained nothing.

faggot

>probability distributions
Knowing the probability distribution for a random variable X helps you answer the question 'for each possible value [math]x \in V[/math] that X could take, what is the probability that X actually ends up taking that value?'
Using the same example, X is a discrete random variable taking the values {0,1,2}. Since X = 1 (for example) if and only if the two flips turn out to be either Ht or Th, the probability that X is 1 should be the sum of the probability that the coins land Ht and the probability that the coins land Th. In this case we assume P(Ht) = P(Th) = 1/4, so the probability of having exactly one head, i.e. P(X = 1), is equal to P(Ht) + P(Th) = 1/2.

More precisely, the probability distribution for (discrete) X is a function [math]p_X: V \to \mathbb{R}[/math] that 'inherits' its values from the probabilities of the individual outcomes of S, in such a way that [math]x \mapsto p_X(x) = P(X = x) = \sum_{ s: X(s) = x } p_S(s)[/math].
The formula isn't pretty, but it makes perfect sense once you have a concrete example to show you how it works.

For continuous random variables you'll have to replace the sum with an integral, and at this point the definition gets more subtle. But unless you study measure-theoretic probability you won't need to care about these subtleties (though if you want to, you can look up the Lebesgue integral).

What are differentials?

The basic thing about a vector is that it's "two-things-all-at-once". Once you understand that, then you understand what a vector is.

It's a number, and a /direction/, bundled into the same thingy. This is commonly represented by a line segment with an arrow point at the end, which shows what direction the thingy points in. The longer the line, the "harder" it points in that direction.

Of course, in order to have a common sense of /direction/, we have to have some sort of standard space in which vectors are depicted. This is most commonly a two-dimensional space, or a three-dimensional space.

Vectors are commonly used in physics to depict forces acting upon a body, or a point, for discussion. See wiki and look up some exercises. Try drawing some problems! Then you'll get a feel for how vectors work.

The dual space of a vector space(over R) is the space of all linear maps V->R. If your vector space is over C then its dual will be the space of all linear maps from V->C. Every dual space is also its own vector space. Denote the dual of V by V*.

Tensors are multilinear maps of VxVx...VxV*xV*x...V*->R. By multilinear we mean linear in each argument. Example, f: VxV*->R. f(v,w)=r. Both f(av1+v2,w)=af(v1,w)+f(v2,w) and f(v,aw1+w2)=af(v,w1)+f(v,w2) are satisfied.

Say our vector space is R3. Its elements are column vectors with 3 components. The dual space is all row vectors with 3 components. The dual space elements, row vectors, are an example of tensors. A row vector multiplied by a column vector gives a real number. These are called type (0,1) tensors, they take 0 elements of the dual space and 1 element of the vector space, and spit out a real number.

Continuing with R3. The column vectors themselves are also tensors. Define v(v*)=v*(v)=r. So they are type (1,0) tensors.

We can write v*(v) as v_a v^a, where a is an index and we sum over all values of a. In our example above a=1,2,3. If we know how one of these vectors transforms, then the other must transform in the opposite way. See Chapter 2 of Schutz's general relativity.
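The contraction v_a v^a is just a sum over the shared index; a tiny sketch with made-up component values:

```python
# Row vector (type (0,1) tensor) eating a column vector (type (1,0) tensor).
v_star = [1, 2, 3]   # element of the dual space: a row vector
v = [4, 5, 6]        # element of the vector space: a column vector

# The contraction v_a v^a: sum over the index a = 1, 2, 3.
r = sum(a * b for a, b in zip(v_star, v))
print(r)  # 1*4 + 2*5 + 3*6 = 32, a plain real number
```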

addition

Sequences and Series in Analysis

>mitosis vs meiosis

cats have paws so cations are PAWSitive

We would like a way to assign a signed area(negative area is allowed) to linear maps from V to V. For this the key idea is the wedge product ∧ which when applied to basis vectors keeps track of orientation, and as we will see simplifies when setting the wedge space dimension equal to the vector space dimension.

Given a vector space V of dimension n we define the wedge product of vectors ∧ as being linear in both arguments and satisfying v∧v=0. This implies v∧w=-(w∧v). Call the space of these objects V∧2. Define V∧m analogously as the set of objects v1∧...∧vm. The dimension of V∧m is the binomial coefficient (n choose m). Consider the n=2 case with m=2: then the only basis element is e1∧e2, since e1∧e1=0 and e2∧e2=0 and e2∧e1=-(e1∧e2).

Suppose we have a square matrix M:V->V. The map V∧m->V∧m defined by v1∧...∧vm->Mv1∧...∧Mvm is a linear map which we will call ∧(M). In the case when m=n, then V∧n is one dimensional and isomorphic to R. Thus ∧(M) is just multiplication by a constant. This constant is the determinant.

Example. V=R2. Consider M=(a b, c d). So Me1=ae1+ce2 and Me2=be1+de2. V∧2 has the basis e1∧e2. So the map ∧(M) sends e1∧e2 to Me1∧Me2 = (ae1+ce2)∧(be1+de2) = (ad - bc) e1∧e2. Thus the determinant of M is ad-bc.

The rule det(AB)=det(A)det(B) follows immediately from looking at the map ∧(AB) = ∧(A)∧(B) since these are one dimensional maps.

Also det(M)=0 immediately implies that we cannot find an inverse matrix, since V∧n is annihilated by ∧(M).
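The 2x2 payoff (det = ad - bc, and det(AB) = det(A)det(B)) is easy to verify numerically; a quick sketch with example matrices of my choosing:

```python
# det [[a, b], [c, d]] = ad - bc, and the product rule det(AB) = det(A)det(B).
def det2(M):
    (a, b), (c, d) = M
    return a * d - b * c

def matmul(A, B):
    cols = list(zip(*B))
    return [[sum(x * y for x, y in zip(row, col)) for col in cols] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(det2(A), det2(B))                         # -2 -2
print(det2(matmul(A, B)) == det2(A) * det2(B))  # True
```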

>What does this mean?
What did he mean by this?

What is it that you don't understand about them? They're pretty simple senpai

Honestly though I don't understand induction. I don't understand how to apply it correctly, the examples always seem to have some odd form.

Columns hold buildings up and are thus vertical. Rows are the other one.

Elementary probability and combinatorics. For some reason, my brain fries whenever I try anything of the sort.

How the fuck do particles work how can they not be in one place reeee

Paradoxes in Set Theory & Logic.

Like the Russell's paradox.

We know how they work but I don't think anyone knows why they work like that

you baiting r..right ?
we learn that shit in school

Lifting does not cure autism.
It's the main flaw of many a fitizen.

>"you shouldn't care about what people think"
>the basis of ethics is caring about what other people think

I'm only insecure because I'm intelligent and ethical.

Literally anything past basic maths. Went for the humanities route in school. Psychology and Literature and ignored/forgotten the majority of basic science and anything past the most basic maths.

Katy Perry: Is math related to science?
youtu.be/Orf8NkcIDig
youtu.be/OB4znOVsfnA

I learned that shit in high school and it was easy as fuck.

Is this a joke?

A sequence or series (a series is a type of sequence) can either converge or diverge — oscillating forever is one way of diverging. What's not to understand?

N-dimensional array of numbers. It's that simple. It can represent anything.

Not really — you can represent a tensor that way, but the array isn't the tensor per se. What happens if you change coordinates, then?

Magnetism

It's like a line pointing somewhere.
A visual representation of basic algebra, I guess. Multiple dimensions added to it are just multiple factors.
Let's say you have a factory. Every cycle (represented by [x|-|-]) they produce 20 somethings (represented by [-|y|-]) and they spend $100 (represented by [-|-|z]).
In that case the vector is x=[1|20|-100].
Now you multiply that by the number of cycles and you have the different resources.
I hope that is mathematical enough for Veeky Forums
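The factory example boils down to scalar multiplication of a vector; a one-liner sketch (the 5-cycle count is my own example number):

```python
# One production cycle as a vector: (cycles, somethings produced, dollars spent).
cycle = [1, 20, -100]
n = 5  # run five cycles: every component scales together
print([n * x for x in cycle])  # [5, 100, -500]
```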

>brainlet general
>must really be retarded

anyone who has ever taken a linear algebra class knows that, engineer or not.

Is 115 iq enough to study applied math?

yes

...

70 iq is pretty much enough for anything that isn't combinatorics

HOLY FUCK
Why didn't anyone tell me this two years ago, I would've passed gen chem fuck

Anyone?

>covalent vs ionic bonding

...

kek

reality

like, scientifically speaking, what IS reality? *smokes weed*

why do people discredit wildberger when we all know deep down he's right

>posting a rapist
you people need to check some privilege

>writing complex proofs

A differential is a change in one thing with respect to another. So say you have distance on your y axis and time on your x axis, and a straight line going at 45 degrees: y = x.

The first derivative gives you velocity, the change in distance per unit time. Because the gradient is constant, your velocity is constant also.

If you then take the derivative of velocity (the second derivative) you are looking at the change in velocity per unit time, i.e. acceleration. As the gradient is constant, this will equal zero.
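The y = x example can be sketched with finite differences (a discrete stand-in for the derivative; step size and sample count are my choices):

```python
# y = x sampled on a grid: the first difference (velocity) is constant 1,
# the second difference (acceleration) is 0, matching the explanation above.
dt = 0.5
t = [i * dt for i in range(6)]
y = [x for x in t]                                            # distance: y = x

vel = [(y[i+1] - y[i]) / dt for i in range(len(y) - 1)]       # ~ dy/dt
acc = [(vel[i+1] - vel[i]) / dt for i in range(len(vel) - 1)] # ~ d2y/dt2
print(vel)  # [1.0, 1.0, 1.0, 1.0, 1.0]
print(acc)  # [0.0, 0.0, 0.0, 0.0]
```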

Want to know how brainlet I am? Everything in this thread I've never studied, tried, or learned before. I didn't have a deep interest in math before, because they never tried to help me understand, only to obey, memorize, and finish test after test. I want to feel excited about math instead of worried. But I don't know how and it's really hard.

wave functions and wave function collapse

... actually, just functions. F. Wtf is F?

A really big list (possibly infinite) of pairs like (a, b). Every time it gets an a, it returns the b paired with it.

I don't know what you mean

Imagine you have a bag of things, and imagine this bag has a big, capital letter A embroidered on it. We'll say that items in this bag are elements of the set A. Now imagine there's a second bag, with different items, labeled B. Now imagine there's an autistic kid called Francis that forms associations between objects - if you give him an item from bag A, he'll immediately run to bag B and pull out the item he's associated with the item you gave him. Francis is the function F.

a function f is a rule of assignment

If you have two sets, for every element in one of the sets, I can assign it an element from the other set. A function is this rule of assignment, where every element in the set gets assigned a single element of the other set.

A function is called surjective if every element in the "target" set gets hit by the function

A function is called injective if no element in the target set gets assigned to more than one element of the first set
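A finite "rule of assignment" is just a dict, and both properties can be checked mechanically; a small sketch with made-up sets:

```python
# A function as a finite set of pairs, with injectivity/surjectivity checks.
domain = {1, 2, 3}
target = {"a", "b"}
F = {1: "a", 2: "b", 3: "a"}   # the rule of assignment: each input gets one output

surjective = set(F.values()) == target      # every target element gets hit
injective = len(set(F.values())) == len(F)  # no target element hit twice
print(surjective, injective)  # True False: 'a' is hit by both 1 and 3
```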