What's the significance of eigenvectors and eigenvalues with respect to real phenomena, Veeky Forums?

I already have a degree and I still never made the connection between the mathematical and the physical and it still bothers me...

To keep it simple, let's say I have a 2x2 matrix $A$ and a vector $\vec{x}$. If I assert that $A \vec{x} = \lambda \vec{x}$, I am saying that, for this particular vector, my matrix acts like a scalar. If $\det(A - \lambda I) = 0$, it means the columns of $A - \lambda I$ are linearly dependent, which is what lets a nonzero $\vec{x}$ exist.

But what does that mean for the physics? How does this tie in with all the bullshit about oscillating at natural frequencies?

Other urls found in this thread:

math.nus.edu.sg/~matysh/ma3220/chap7.pdf
en.wikipedia.org/wiki/Linear_differential_equation#Homogeneous_equations_with_constant_coefficients
en.wikipedia.org/wiki/RLC_circuit#Series_RLC_circuit
en.wikipedia.org/wiki/Displacement_current#Current_in_capacitors

Whelp that tex failed

I recognize this slut.

>muh physics
It's simply an interesting question to ask of a linear transformation.

What if I told you this was a density plot of the eigenvalues of random 5x5 matrices drawn from {-1, 0, 1}, and that we simply do not know why this pattern looks the way it does? Demands further investigation, don't you think?
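If anyone wants to reproduce that picture, here's a minimal sketch (numpy/matplotlib; the sample count and binning are my own arbitrary choices):

[code]
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm

rng = np.random.default_rng(0)
# 100k random 5x5 matrices with entries drawn from {-1, 0, 1}
A = rng.choice([-1, 0, 1], size=(100_000, 5, 5)).astype(float)
eigs = np.linalg.eigvals(A).ravel()   # eigvals broadcasts over the first axis

# density of eigenvalues in the complex plane
plt.hist2d(eigs.real, eigs.imag, bins=500, norm=LogNorm())
plt.xlabel("Re(lambda)"); plt.ylabel("Im(lambda)")
plt.show()
[/code]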

They seem to be special for physical stuff though, like an electrical circuit can (in theory) oscillate indefinitely with no input once it has been started, and it will oscillate at the frequency given by the eigenvalues. And I don't get why this is.


Literally the whole of quantum mechanics involves eigenvalues and eigenvectors

If you have a linear transformation [math]L:V\rightarrow V[/math] where [math]V[/math] is a vector space over some field [math]F[/math], then an eigenvector [math]v[/math] for the transformation [math]L[/math] is any vector that satisfies [math]L(v) = \lambda v[/math] for some [math]\lambda \in F[/math]. [math]\lambda[/math] is referred to as an eigenvalue for that linear transformation.

As for your circuit question, it has to do with the fact that differentiation can be considered a linear operator (a linear transformation from one vector space into itself) on functions.

It can be proven (I can't remember exactly how atm, sorry) that circuits containing only passive elements (resistors, capacitors, inductors) will oscillate in a sinusoidal manner, possibly with decaying amplitude, if either the input to the circuit is a sinusoid or it is starting from some initial conditions with no input. Such circuits can be written as differential equations.

So, consider the function [math]A\cos(\omega t + \phi)[/math] where [math]A[/math] is the amplitude, [math]\omega[/math] is the angular frequency, and [math]\phi[/math] is the phase shift. In electrical engineering, such signals are commonly represented like so: [math]Ae^{i(\omega t + \phi)}[/math]. If we let [math]A = e^a[/math], then

[math]Ae^{i(\omega t + \phi)} = e^a e^{i(\omega t + \phi)} = e^{i\omega t + (a + i\phi)}[/math]

...thereby allowing us to reduce the phase and amplitude information to a single complex number.

Now, observe what happens when we try to differentiate this:

[math]\frac{\rm{d}}{\rm{d}t}e^{i\omega t + (a + i\phi)} = i\omega e^{i\omega t + (a + i\phi)}[/math]

The result of differentiating this function is the same function times a complex scalar [math]i\omega[/math]. If we consider differentiation a linear transformation and functions vectors, this means that we've found an eigenfunction for the differentiation operator, and [math]i\omega[/math] is the associated eigenvalue. This is where that frequency comes from.
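You can sanity-check that eigenfunction claim symbolically, e.g. with sympy (the symbol names are mine):

[code]
import sympy as sp

t, w = sp.symbols("t omega", positive=True)
f = sp.exp(sp.I * w * t)            # candidate eigenfunction of d/dt
ratio = sp.simplify(sp.diff(f, t) / f)
print(ratio)                        # prints I*omega: the eigenvalue
[/code]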

This probably isn't rigorous enough, but it's late so idc

I hope it helps at least

That did help a bit. I knew how to fold the phase, gain etc. into one complex number, but for some reason it never clicked that the frequency shows up as a scalar. I still don't fully get why it HAS to oscillate though... sorry

Think of matrices as pictures in, let's say, a 2D space. Given that det M != 0, the eigenvectors indicate the directions in which the matrix scales this picture up/down, and the corresponding eigenvalues are the magnitudes of that scaling. It is hard to grasp; matrices are somewhat of a generalisation of scalar numbers.

Moreover, eigenvectors and eigenvalues become increasingly important when looking at systems of differential equations. Their general solutions are linear combinations of terms of the form v*e^(lambda*x), where v is an eigenvector of the coefficient matrix of the system and lambda is the corresponding eigenvalue.

Check out this link:
math.nus.edu.sg/~matysh/ma3220/chap7.pdf

A file from the National University of Singapore, an excellent engineering school by the way
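To make that solution recipe concrete, here's a minimal sketch (numpy/scipy; the coefficient matrix and initial condition are made up) of assembling the general solution from the eigenpairs:

[code]
import numpy as np
from scipy.linalg import expm

# x'(t) = A x(t); A is a made-up example coefficient matrix
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
lam, V = np.linalg.eig(A)        # eigenvalues lambda_i, eigenvectors as columns of V

x0 = np.array([1.0, 0.0])        # initial condition
c = np.linalg.solve(V, x0)       # coefficients of x0 in the eigenbasis

def x(t):
    # general solution: sum_i c_i * v_i * exp(lambda_i * t)
    return (V * (c * np.exp(lam * t))).sum(axis=1).real

# sanity check against the matrix exponential solution
print(np.allclose(x(0.7), expm(0.7 * A) @ x0))   # True
[/code]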

What a pedantic, stupid, non-answer.

That's very simple: A*e^(i*x) can be written as A*(cos x + i*sin x), basically a linear combination of two periodic functions. Hence the oscillation. Depending on the exact equation, different types of oscillation can be obtained. If, for instance, the real part of the exponent is negative, the function will approach 0 very fast for growing x, therefore the oscillations die away.
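If you plot the real part you see both the oscillation and the decaying envelope; a quick sketch (numpy/matplotlib, decay rate and frequency made up):

[code]
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 1000)
sigma, omega = -0.5, 5.0                # made-up decay rate and frequency
x = np.exp((sigma + 1j * omega) * t)    # e^{(sigma + i*omega) t}

plt.plot(t, x.real, label="Re part: decaying cosine")
plt.plot(t, np.exp(sigma * t), "k--", label="envelope e^{sigma t}")
plt.legend(); plt.xlabel("t"); plt.show()
[/code]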

In quantum mechanics every possible observable is represented by an operator. Now, your system's states are represented by the eigenvectors of the operator, and all the values you can measure are actually eigenvalues of the operator
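A minimal finite-dimensional example of that (numpy; I'm using the standard Pauli sigma_x spin observable as a toy case, which isn't from this thread):

[code]
import numpy as np

# observable: Pauli sigma_x (spin along x, in units of hbar/2)
sx = np.array([[0, 1],
               [1, 0]], dtype=complex)

vals, vecs = np.linalg.eigh(sx)   # Hermitian observable -> real eigenvalues
print(vals)                       # [-1.  1.]: the only possible measurement outcomes
print(vecs)                       # columns: the corresponding eigenstates
[/code]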

It HAS to oscillate because you can write any combination of resistors, capacitors and inductors (with no input) as a system of homogeneous linear differential equations.

en.wikipedia.org/wiki/Linear_differential_equation#Homogeneous_equations_with_constant_coefficients

The solutions to such equations take the form of linear combinations of [math]e^{zx}[/math] where [math]z[/math] is some complex number. With a bit of work you can obtain the frequency of oscillation.
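Concretely, those [math]z[/math] values come out as roots of the characteristic polynomial; a sketch with made-up coefficients for [math]y'' + ay' + by = 0[/math]:

[code]
import numpy as np

a, b = 2.0, 26.0                 # made-up coefficients for y'' + a y' + b y = 0
roots = np.roots([1.0, a, b])    # characteristic polynomial z^2 + a z + b
print(roots)                     # -1 ± 5j
# Re(z) < 0 -> decaying envelope; |Im(z)| is the oscillation frequency (rad/s)
print("decay rate:", -roots[0].real, "frequency:", abs(roots[0].imag))
[/code]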

Also, depending on the configuration of the circuit, it may not oscillate forever. See an example in pic. If we let V = 0 (a short) and set the initial current and voltage values on the components, the result will be some sinusoid whose amplitude decays to 0 with time.

You can't put C in series like that; the current I would become 0.

>What is fucking displacement current

Get out of here, brainlet.

For a DC (steady state) input, capacitors are open circuits, so the current would certainly be 0. However, this is not DC. Capacitors allow AC current through. If we take current and voltage across the capacitor to be the functions of time [math]I(t)[/math] and [math]V(t)[/math] respectively, then they are related as follows:

[math] I(t) = C \frac{\rm{d}V(t)}{\rm{d}t}[/math]

Where [math]C[/math] is the capacitance. Thus we can clearly see that if the voltage across the capacitor is changing, then so is the current "through" it, even though a capacitor is effectively an open circuit.
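A quick symbolic check of that relation (sympy, symbol names mine):

[code]
import sympy as sp

t, C, w, V0 = sp.symbols("t C omega V0", positive=True)
V = V0 * sp.sin(w * t)    # sinusoidal voltage across the capacitor
I = C * sp.diff(V, t)     # I(t) = C dV/dt
print(I)                  # C*V0*omega*cos(omega*t): nonzero AC current
[/code]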

>AC current

I just realized I typed "alternating current current"

Yes if I get out of here I am sure to learn it better, right?

If it says V with a + and a - I would assume DC. Usually a sine wave indicates AC.

>I would assume DC.
Normally I would too, but that image came straight from the wikipedia article for RLC circuits so I was just assuming AC.

The current through the capacitor would be 0 unless the voltage is so high as to literally rip the material of the capacitor apart to create a momentary current.

>still thinks current would be 0
Jesus christ man, I hope you're baiting.

en.wikipedia.org/wiki/RLC_circuit#Series_RLC_circuit

Pic related is the impulse response of the circuit I posted earlier. This is just one counterexample to your claim.

You would not be able to get any such curve from a DC voltage source, which is what you posted in the image.

Also I was talking about the current THROUGH the capacitor, not the current through the wire. The reason there can be a current at all is because the plates of the capacitor can store up charge. But for charges to travel through the capacitor, it would need to break down, for example by ionizing the insulating dielectric.

>You would not be able to get any such curve from a DC voltage source which was what you posted in the image.
I know. If you have an issue with the picture, take it up with the guy that uploaded it to wikipedia, not me. But here, have a nice modified version I made. I hope that satisfies your autism.

>Also I was talking about the current THROUGH the capacitor, not the current through the wire.
Oh, come on. I understand it's not 100% accurate to say current runs THROUGH a capacitor, but current directly on either side will be the same while it's charging and discharging. It's more convenient to say the current runs "through" the capacitor when doing AC analysis, even though that's not really what's happening.

en.wikipedia.org/wiki/Displacement_current#Current_in_capacitors

It's not convenient, it is plain wrong. The AC pushes and pulls charges onto and off the plates and wires on respective sides. No current ever passes in between, unless the capacitor breaks.

If you read the article carefully, you'll see that they clearly write that the displacement current is fictitious.

Don't remember all the math, but it is used in modal analysis: the eigenvectors are the vibration modes and the eigenvalues are the frequencies.
Basically, when you have coupled equations, the solution has more than one dimension.

OP here, thanks for all the inputs, I know I'm being a dumbass but I can't make the jump between eigenvalues and natural frequency...

I know that if I have an RLC circuit and I excite it with AC, then turn the AC off, an ideal circuit will oscillate with an amplitude that decays at a rate set by R. I also kinda know that it would oscillate at a natural frequency.

Apparently, eigenvalues can help me in calculating this natural frequency if I am given values for L, R and C.

So, to calculate my natural frequency, I want eigenvalues. To calculate eigenvalues, I need a matrix of some kind.

To try and make it easier I'll narrow it down to 2 questions:
>What terms make up the matrix I need to calculate my eigenvalues (and therefore the natural frequency)?
>Given L, R and C for a simple circuit, how would you calculate the natural frequency, and is this the same as the eigenvalue for the previous matrix, or have I been misled?
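One common way to get such a matrix (not the only one) is a state-space model: take the state to be (capacitor voltage, inductor current), then KVL around the series loop gives [math]L\frac{dI}{dt} = -V_C - RI[/math] and the capacitor gives [math]C\frac{dV_C}{dt} = I[/math]. A sketch with made-up component values:

[code]
import numpy as np

R, L, C = 1.0, 1e-3, 1e-6            # made-up component values (ohm, henry, farad)

# state x = [V_C, I]:  dV_C/dt = I/C,  dI/dt = (-V_C - R*I)/L
A = np.array([[0.0,      1.0 / C],
              [-1.0 / L, -R / L]])

lam = np.linalg.eigvals(A)           # eigenvalues = natural modes of the circuit
print(lam)                           # -R/(2L) ± i*omega_d

omega_d = abs(lam[0].imag)           # damped natural frequency (rad/s)
omega_0 = 1.0 / np.sqrt(L * C)       # undamped: 1/sqrt(LC)
print(omega_d, omega_0)              # close when R is small
[/code]

So, to both questions: the matrix is the coefficient matrix of the circuit's first-order state equations, and its eigenvalues are [math]-\frac{R}{2L} \pm i\omega_d[/math] with [math]\omega_d = \sqrt{\frac{1}{LC} - \left(\frac{R}{2L}\right)^2}[/math]; the undamped natural frequency [math]\frac{1}{\sqrt{LC}}[/math] is the [math]R \to 0[/math] limit.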

>No current ever passes in between, unless the capacitor breaks.
I. Know.

There is no conventional current passing between the plates. No actual transfer of charge happens through the capacitor. The concept of displacement current is a fictitious "current" that helps explain what's happening inside a capacitor. I'm not arguing that conventional current happens through a capacitor.

At this point, it's about terminology. Semantics. I'm not claiming that saying "current through a capacitor" is correct. I'm saying it's more convenient than the true behavior. It works out the same way, and it's easier to talk about in more complex circuits. Clearly you disagree with me:

>It's not convenient
I know it's wrong, but how is it not convenient?

Have you ever actually analyzed a circuit more complicated than series RLC?

Because the circuit stores and transforms energy between the electric field and the magnetic field, which are coupled through the capacitance and inductance.
Resonators work kind of like this.
In a normal circuit though, any resistance will quickly dissipate the energy.

Look up normal modes of coupled oscillators. An example is two masses connected with a spring between them and one connecting each to a wall. What configurations will have the masses oscillate at the same frequency? For this we introduce a position vector x=(x1,x2) which is the position of each of the masses. If we assert that they oscillate at the same frequency then x1=A1e^(iwt) and x2=A2e^(iwt). Thus the position vector can be rewritten (x1,x2)=(A1,A2)e^(iwt).

Now the equations of motion for this system are

m a1 = -2k x1 + k x2

m a2 = kx1 - 2k x2

Which can be written as a matrix equation m a = Mx where M=k((-2,1),(1,-2)). Then using the above assertion we have -m w^2 (A1,A2) e^iwt = M(A1,A2) e^iwt. The eigenvalues give w^2=k/m and w^2=3k/m, with eigenvectors (1,1) and (1,-1). In other words, the masses oscillate at a single common frequency either when they move together or when the motion is mirrored between the two.
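Checking that numerically (numpy, with unit k and m for simplicity):

[code]
import numpy as np

k, m = 1.0, 1.0                   # unit spring constant and mass
M = k * np.array([[-2.0, 1.0],
                  [1.0, -2.0]])

vals, vecs = np.linalg.eigh(M)    # M is symmetric, so eigh applies
omega2 = -vals / m                # from -m*w^2*A = M*A
print(omega2)                     # [3. 1.] -> w^2 = 3k/m and k/m
print(vecs)                       # columns ~ (1,-1)/sqrt(2) and (1,1)/sqrt(2)
[/code]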

I have written a couple of circuit simulators in software which could handle a couple of thousand components maybe.

Tools like this make it very convenient to systematically deal with large sparse systems of linear equations and linear algebra.

But I don't do that very much anymore because I was just milked off of my efforts by a fat bunch of spies last time I gave it a shot.

>finite dimensional matrices
Spin measurements
>infinite dimensional matrices
Chladni patterns

EE master's here. OP, that's because there fucking is none unless we assign one.

There is a physical realm where you have physical devices like tantalum capacitors, MOSFETs, resistors, opamps, transmission lines etc. Those things behave like butt and we can only design working physical circuits by approximating those devices with models with various degree of accuracy and assigning certain tolerances to our results.

Then there is the realm of modeling. You can model a simple resistor by Ohm's law, and a capacitor or inductor by a complex impedance for harmonic signals, by differentiation in the time domain, or by a Laplace transform in the s-domain, or whatever. These then result in all sorts of differential equations.

Now, differential equations can be solved analytically by a mathematician (your class examples). However, large systems with lots of components will need a numerical solution carried out by a PC (in the case of electronics, circuit simulators). Numerical solutions then involve all kinds of matrix algebra, and often end up needing eigensolvers (finite elements, FDTD).

There are no vectors, no eigenfunctions, no matrices or whatever in the real world. These are all just tools we invented in order to understand and predict real world phenomena.

Notice they only show up in Linear Algebra when you start doing shit with fitting.
Same with Diff Eq, which is basically the 'guess/fit the model' class