Post beautiful equations

Post beautiful equations.


y=mx+b

bait

>functions are equations

[math]
y - y_{0} = m(x - x_{0})
[/math]

x

>Last week of grad complex analysis
>Talk about analytic continuation
>Professor proves -1/12 meme

What is beautiful in an equation? This is something I've never understood.

>beautiful
>isn't actually defined for the only part of the function that is important

c^2 = a^2 + b^2

There is nothing beautiful about that equation. It's a definition. It says nothing beyond giving a name to something. It doesn't really relate two objects.

Concise, has a lot of unexpected complexity, seems to appear in random places hinting at deep mathematics. That kind of stuff.

A good example is the set of fully symmetrized Maxwell equations. Yeah, it's cliche, but for good reason. These equations are the foundation for a lot of interesting math, physics, and engineering applications. It's astonishing how much you can explain starting out with the same set of equations. And they don't even represent the most sophisticated description of EM theory.

[eqn]
\begin{align}
&\nabla \times \vec{E} = -\vec{J}_m\\
&\nabla \times \vec{H} = \vec{J}_e\\
&\nabla \, \cdot \,\,\vec{D} = \rho_e\\
&\nabla \,\cdot \,\,\vec{B} = \rho_m
\end{align}
[/eqn]

1 + 1 = 2

Those Maxwell equations are useful, yes, but I don't think they are "beautiful". I just can't see it.

Selberg trace formulas and formulas from index theorems are up there for me

[math]e^{i\pi}+1=0[/math]

y = y

>Post beautiful equations
>proceeds to post a definition
Good start, OP.

Anyone who uses those [math]\overrightarrow D[/math] and [math]\overrightarrow H[/math] fields has shit taste.

Anyone who only uses [math]\vec{E}[/math] and [math]\vec{B}[/math] has never built anything of practical value in their life.

...

The most beautiful one yet

[math]A = P^{-1}BP[/math]

>grad complex analysis
i did this in my uk university in my second year, also in the last class

US education is really pathetic...

Well, I am in my second year, but yeah, I think only 3-4 of 22 students are undergrads.

[math]A = (NP)^{-1}B(NP)[/math]

[math]f(z_n) = \sin(z_n) + e^z + c[/math]

69 * 222 * 51 *7

2+2=5

I just love it because it immediately makes you think about the nature of truth, perception and objectivity.

kys

fuck off

P=NP

...

x=x

nothing is beautiful anymore
mathjax.org/cdn-shutting-down/

...

...

test

[eqn]\sum_{k=1}^\infty k=-\frac{1}{12}[/eqn]

, said the engineering brainlet

[math] \displaystyle
e=\frac{1}{0!}+\frac{1}{1!}+\frac{1}{2!}+\frac{1}{3!}+\frac{1}{4!}+\cdots
\\ \\
e^x=\frac{x^0}{0!}+\frac{x^1}{1!}+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+\cdots
\\ \\
\sin(x)=\frac{x^1}{1!}-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\frac{x^9}{9!}-\cdots
\\ \\
\cos(x)=\frac{x^0}{0!}-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}+\frac{x^8}{8!}-\cdots
\\ \\
\cos(x)+\sin(x)=1+x-\frac{x^2}{2!}-\frac{x^3}{3!}+\frac{x^4}{4!}+\frac{x^5}{5!}-\frac{x^6}{6!}-\frac{x^7}{7!}+\frac{x^8}{8!}+\frac{x^9}{9!}-\cdots
\\ \\
e^{ix}=\frac{(ix)^0}{0!}+\frac{(ix)^1}{1!}+\frac{(ix)^2}{2!}+\frac{(ix)^3}{3!}+\frac{(ix)^4}{4!}+\cdots
\\ \\
e^{ix}=1+ix-\frac{x^2}{2!}-\frac{ix^3}{3!}+\frac{x^4}{4!}+\frac{ix^5}{5!}-\frac{x^6}{6!}-\frac{ix^7}{7!}+\frac{x^8}{8!}+\frac{ix^9}{9!}-\cdots
\\ \\
e^{ix}=\left ( 1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}+\frac{x^8}{8!}-\cdots \right )
+i \left ( x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\frac{x^9}{9!}-\cdots \right )
\\ \\
e^{ix}=\cos(x)+i \, \sin(x)
[/math]
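If you'd rather sanity-check the last line numerically than trust the term shuffling, here's a minimal Python sketch: partial sums of the exponential series at z = ix, compared against the library cos/sin. The 30-term cutoff and the test point x = 1.234 are arbitrary choices.

```python
import cmath

def exp_series(z: complex, terms: int = 30) -> complex:
    """Partial sum of sum_{n} z^n / n!, built term by term."""
    total, term = 0 + 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)  # z^{n+1}/(n+1)! from z^n/n!
    return total

x = 1.234
lhs = exp_series(1j * x)
rhs = cmath.cos(x) + 1j * cmath.sin(x)
assert abs(lhs - rhs) < 1e-12
```

The series converges absolutely for every z, so the truncation error at 30 terms is far below double precision for |x| of order 1.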

You'll fill your ego faster if you don't use an anonymous board. Please stay on reddit

>not writing the first terms of cos+sin as x^0/0! + x^1/1!

there is a good reason for that, think again

chromosome = XX XY XYX

I can't see the reason. cos and sin have those terms too; for cos+sin they're just simplified, which breaks the pattern.

This is not an equation, you fucking moron.

[math]\displaystyle{\left(\sum_{k = 1}^{n} k \right)^2 = \sum_{k = 1}^{n} k^3} [/math]
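Not a proof, but the identity is easy to brute-force for small n; a quick Python sketch (the cutoff of 50 is arbitrary):

```python
# Check (sum_{k=1}^n k)^2 == sum_{k=1}^n k^3 for n = 1..49.
for n in range(1, 50):
    lhs = sum(range(1, n + 1)) ** 2
    rhs = sum(k ** 3 for k in range(1, n + 1))
    assert lhs == rhs, (n, lhs, rhs)
```

Everything here is exact integer arithmetic, so there's no floating-point tolerance to worry about.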

Explain this to a brainlet that doesn't know math.
I know e is a certain constant and i is -1^(1/2). I also know that an exponent of a positive number (e) van't be negative.

To make up for OP's brain fart here's a beautiful equation involving the Riemann zeta.

tip:
[math] \displaystyle
\frac{(ix)^0}{0!}= \frac{x^0}{0!}
[/math]

I came

how advanced is japanese high school math?

Y=mx+b
I'm retarded

number theory is some crazy shit
post more

In Australia it's

q + xw = λ

>Explain this to a brainlet
no, bask in the mystery of our intellect

[eqn]\left(\pi\right)_{16} = \sum_{k=0}^{\infty} 16^{-k}\left[ \frac{4}{8k+1} -\frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6} \right][/eqn]
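That's the Bailey-Borwein-Plouffe series. Its real party trick is extracting a single hex digit of pi without computing the earlier ones; this Python sketch just sums the series naively and checks it against math.pi (the 12-term cutoff is an arbitrary choice, it converges at better than one hex digit per term):

```python
import math

def bbp(terms: int) -> float:
    """Naive partial sum of the BBP series for pi."""
    total = 0.0
    for k in range(terms):
        total += 16 ** -k * (4 / (8 * k + 1) - 2 / (8 * k + 4)
                             - 1 / (8 * k + 5) - 1 / (8 * k + 6))
    return total

assert abs(bbp(12) - math.pi) < 1e-12
```

The digit-extraction version works the same series modulo 1 after multiplying by a power of 16, but that's a longer story.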

2b=e

Bro, do you even Clausius–Mossotti?

e+pi=i+1+0

...

op said equations not transforms you fucking idiot

cute post senpai

[math] \lim_{z\to 1} \left( \sum_{n=1}^\infty n^m z^n - (-1)^{m+1} \dfrac{m!}{\log(z)^{m+1}} \right) = -\dfrac{1}{m+1} B_{m+1} [/math]

where [math] B_{m+1} [/math] are the Bernoulli numbers
1/6, 0, -1/30, 0, 1/42, 0, -1/30, ...

So
[math] \sum_{n=1}^\infty n^m [/math]
gives
-1/12, 0, 1/120, 0, -1/252, ...

OC

Chill. Just plug the second equation into the first. Think of it as a vector expansion.

Btw, that notation is gay as fuck.

[eqn]
F(\omega) = \frac{1}{(2\pi)^\frac{n}{2}}\int_{\mathbb R^n} f(x) e^{+i \omega x} dx\\
f(x) = \frac{1}{(2\pi)^\frac{n}{2}}\int_{\mathbb R^n} F(\omega) e^{-i \omega x} d\omega
[/eqn]

ftw

...

>R^n
>scalar x and w

moron

>I also know that an exponent of a positive number (e) van't be negative.
No, that's just wrong. Who told you that bullshit?

[math]a^{-b} = \frac1{a^b}[/math]

[math]e^{\pi i} + 1 = 0[/math]

[math]S_{GS} = -T\int d^2\sigma \left[ \sqrt{-h}\, h^{\alpha\beta} g_{\mu\nu} \Pi_\alpha^\mu \Pi_\beta^\nu \right] - T\int d^2\sigma \left[ \varepsilon^{\alpha\beta} \partial_\alpha X^\mu \left( \bar{\Theta}^1 \Gamma_\mu \partial_\beta \Theta^1 - \bar{\Theta}^2 \Gamma_\mu \partial_\beta \Theta^2 \right) + \varepsilon^{\alpha\beta} \bar{\Theta}^1 \Gamma^\mu \partial_\alpha \Theta^1 \, \bar{\Theta}^2 \Gamma_\mu \partial_\beta \Theta^2 \right][/math]

Fuck, I can't keep up with all the different actions for string theory. Which one is that? I assume GS stands for Green-Schwarz?

1

is that the TOS?

Kappa symmetry is pretty dope I can dig it.

You can write those terms as x^0/0! + x^1/1!, but if you do that then you also need to specify the additional requirement that 0^0 = 1.

The problem with making 0^0 = 1 a requirement is that its analysis is ambiguous. As x->0, the limit of x^0 is 1, but the limit of 0^x is 0. The fact that those two limits are different means that analysis cannot be used to conclude what 0^0 is. As a result, a special-case declaration by fiat is required to define 0^0 (if you even care about making such a definition).

Some mathematicians are fine with the assumption that 0^0 = 1, but some consider that definition to be an unnecessary complexity, and they prefer to leave 0^0 undefined.

There is no "right" answer to this question, because it depends on a person's subjective view about what is "elegant" and what is "unnecessary" in mathematics.
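For what it's worth, you can watch the two limits disagree numerically. Note that Python itself (like most languages and most combinatorics texts) picks the 0^0 = 1 convention by fiat:

```python
# As x -> 0+, x**0 stays at 1 while 0**x stays at 0,
# so analysis alone can't decide what 0**0 should be.
for x in (0.1, 0.01, 0.001):
    assert x ** 0 == 1.0   # limit along x^0 is 1
    assert 0 ** x == 0.0   # limit along 0^x is 0

# Python resolves the ambiguity by definition, not by limits.
assert 0 ** 0 == 1
```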

I agree but desu can there be models for the last one?

Brainlet here. Where on wikipedia can I go to read what this is?

Wow, I didn't know that.

196884 = 196883 + 1

The last two are inconsistent.

Fourier Transform.

The best way to think about it is as an expansion of a vector in a basis. It's weird because the "vectors" are functions and there are uncountably infinite dimensions.

That is, given a finite dimensional basis [math]\hat{e}_i[/math], where [math]\langle\hat{e}_i | \hat{e}_j\rangle=\hat{e}_i\cdot \hat{e}_j=\delta_{ij}[/math] (it's orthonormal w.r.t the "dot product" inner product), any vector [math]\vec{v}[/math] can be expressed like:

[eqn]
\begin{align}
\vec{v}&=\sum_i\alpha_i\hat{e}_i\\
\langle\vec{v}|\hat{e}_j\rangle&=\left\langle\sum_i\alpha_i\hat{e}_i|\hat{e}_j\right\rangle
=\left(\sum_i\alpha_i\hat{e}_i\right)\cdot\hat{e}_j=\sum_i\alpha_i(\hat{e}_i\cdot\hat{e}_j)=\sum_i \alpha_i\delta_{ij}=\alpha_j\\
\Rightarrow\vec{v}&=\sum_i\langle\vec{v}|\hat{e}_i\rangle\hat{e}_i
\end{align}
[/eqn]

Analogously, given the uncountably infinite orthonormal basis [math]e_{\omega}(x)=\frac{1}{\sqrt{2\pi}}e^{i\omega x}[/math], where [math]\langle e_{\omega}(x)|e_{\omega^\prime}(x)\rangle=\int e_\omega(x)e_{\omega^\prime}(x)^*dx=\delta(\omega-\omega^\prime)[/math] (it's orthonormal w.r.t. a different inner product, the exercise is left to the reader), (m)any vector [math]f(x)[/math] can be expressed as

[eqn]
\begin{align}
f(x)&=\int\alpha(\omega)e_\omega(x)d\omega\\
\langle f(x)|e_{\omega^\prime}(x)\rangle&=\left\langle\int\alpha(\omega)e_\omega(x)d\omega | e_{\omega^\prime}(x)\right \rangle=\int\left(\int\alpha(\omega) e_\omega(x) d\omega \right ) e_{\omega^\prime}(x)^*dx=\int\alpha(\omega)\left ( \int e_\omega(x)e_{\omega^\prime}(x)^*dx \right )d\omega=\int\alpha(\omega)\delta(\omega-\omega^\prime)d\omega=\alpha(\omega^\prime) \\
\Rightarrow f(x)&=\int \langle f(x)|e_\omega(x)\rangle e_\omega(x)d\omega\\
&=\int\left(\int f(x)\frac{e^{-i\omega x}}{\sqrt{2\pi}} dx\right)\frac{e^{+i \omega x}}{\sqrt{2\pi}}d\omega
\end{align}
[/eqn]

The basis here are sinusoids. Thus, the inner product [math]\langle f(x)|e_\omega(x)\rangle[/math] is a spectrum.

If it's got an = then it's an equation

He meant that e^x is never negative for real x

>1_r
That's the worst unit vector notation I've ever seen.

i bust a nut so hard i cant feel my left leg

so [eqn]\sum_{k=1}^\infty k^3=-\frac{1}{1728}[/eqn]

Alright, so in calculus, the derivative (or rate of change) of e^x is just e^x.
There is also something in calculus called a Taylor series, where you can write the approximate value of any function f(x) by using the formula in the picture, where f'(x) is the first derivative, f''(x) is the second derivative, and f^(n)(x) is the nth derivative. x_0 is the number that the approximation is 'centered around', and can be any number. The accepted approximation of e^x is centered around 0, and will be shown in the next post. cont.
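Since every derivative of e^x is e^x, the Taylor coefficients at x_0 = 0 are all f^(n)(0)/n! = 1/n!. A minimal Python sketch of the resulting partial sums (degree 15 is an arbitrary cutoff):

```python
import math

def exp_taylor(x: float, degree: int) -> float:
    """Taylor polynomial of e^x centered at x0 = 0: sum of x^n / n!."""
    return sum(x ** n / math.factorial(n) for n in range(degree + 1))

# At x = 1 this approximates e itself; the tail after degree 15
# is about 1/16!, far below the tolerance used here.
assert abs(exp_taylor(1.0, 15) - math.e) < 1e-12
```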

Clearly I fucked up the formatting in the middle. The sentence reads:

Analogously, given the uncountably infinite orthonormal basis
[math]e_\omega(x) = \frac{1}{\sqrt{2\pi}}e^{i\omega x}[/math], where [math]\langle e_\omega(x)|e_{\omega^\prime}(x) \rangle= \int e_\omega(x) e_{\omega^\prime}(x)^*dx = \delta(\omega - \omega^\prime)[/math] (it's orthonormal w.r.t. a different inner product, the exercise is left to the reader), (m)any vector [math]f(x)[/math] can be expressed as

I should note that the little star is complex conjugate.

I should stress that the weird "integral" inner product [math]\int f(x) g(x)^* dx[/math] I used is just as valid as the "dot" inner product [math]\vec{v}\cdot\vec{u}[/math], because both satisfy the inner product axioms:

[eqn]
\langle f(x) | g(x) \rangle = \langle g(x) | f(x) \rangle^* \\
\langle \alpha f(x) + \beta h(x) | g(x) \rangle = \alpha \langle f(x) | g(x) \rangle + \beta \langle h(x) | g(x) \rangle \\
\langle f(x) | f(x) \rangle \geq 0 \quad \wedge\quad \langle f(x) | f(x) \rangle = 0 \Leftrightarrow f(x) = 0
[/eqn]

The Fourier Transform is a little hard to interpret for the uninitiated, but if you discretized it to try to approximate it (or are studying Fourier series), you will see you are building a function out of sinusoids of different frequency, amplitude, and phase.
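To make the "building a function out of sinusoids" point concrete, here's a minimal Python sketch of the classic Fourier-series partial sums for a square wave, sign(x) ~ (4/pi) * sum over odd k of sin(kx)/k on [-pi, pi]. The harmonic cutoff and the test point are arbitrary choices:

```python
import math

def square_wave_partial(x: float, harmonics: int) -> float:
    """Partial Fourier series of a square wave: odd sine harmonics only."""
    return (4 / math.pi) * sum(math.sin(k * x) / k
                               for k in range(1, 2 * harmonics, 2))

# Away from the jump at x = 0 the partial sums approach +1 / -1.
assert abs(square_wave_partial(math.pi / 2, 2000) - 1.0) < 1e-3
assert abs(square_wave_partial(-math.pi / 2, 2000) + 1.0) < 1e-3
```

If you plot the partial sums for increasing harmonic counts you also see the Gibbs overshoot near the jump, which never goes away.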

Was this pretty clear?

ax + by = 1

So this is the expansion of e^x derived by using the equation in the previous post. Now, instead of x, put in (i*x), using the definition of i that you know: i^2 = -1, i^3 = -i, i^4 = 1

So some of the terms become negative, some still have i's in them, and some have both. Now if you look at the sin(x) and cos(x) expansions, combined they form the expansion for e^(ix); all of the sin terms still have an i attached to them, and the i can be factored out, so that the equation becomes e^(ix) = cos(x) + i*sin(x)

Insert pi into the equation, and it works

... + x^3 + x^2 + x + 1 + 1/x + 1/x^2 + 1/x^3 + ... = 0

Thanks I learnt something.

>Was this pretty clear?
It was mostly pretty wrong.

The orthonormal basis IS countable (how can you even believe that it is not? Omega is in Z.), and NOT a basis of the L^2 vector space (only its closure is), and it can only approximate most L^2 functions.

1=1

bottom of the page

x^2 = x + 1

Your mum + my dick = you

clever

For the Mario fans

Truly, you have selected the only truly beautiful equation among the infinite set of all possible equations. It has me pondering why you bothered posting it at all, brainlet.
Thank you