Like every Sunday...

Like every Sunday, the young Riemann went to Snickers (coffee shop) to read about new developments in stochastic processes, a casual pursuit of his. Having spread his papers all over the empty wooden table in a well-lit corner of the shop, he couldn't help but hear that Leonie was there too. It shouldn't have been that much of a surprise, given the math department was straight around the corner, but Riemann just didn't take her for a coffee gal. She and her mates were getting louder, and soon swear-word-fueled anxiety was governing the room. To be sure, nobody could help but overhear the discussion she was having with her mates, even when they were in a classroom. Today they were fighting about the right definition of a graph. As always, Leonie took the stance that you should define all objects without any reference to sets. Or any notion of sets. Riemann was interested in the discussion as such, but still bothered by the fact that his sanctuary - the only place where he could hit on girls while at the same time bringing his Greeks to paper - had been overtaken by the intellectual barbarians he had to endure the whole week. When he found himself starting the same page for the fourth time, he decided to stand up and go over. Five minutes after the decision, he actually stood up. One minute later, he took the first step in Leonie's direction. He was a bit puzzled as to what his mouth would come up with, how he'd stop these loud folks from ruining his calm but caffeinated math experience.

Attached: IDEOLOGY.jpg (615x409, 37K)

Other urls found in this thread:

en.wikipedia.org/wiki/Lagrange_inversion_theorem
axiomsofchoice.org/analytic_function
axiomsofchoice.org/holomorphic_function
axiomsofchoice.org/linear_approximation

sickest fuck

>It shouldn't have been that much of a surprise, given the math department was straight around the corner, but Riemann just didn't take her for a coffee gal. She and her mates were getting louder, and soon swear-word-fueled anxiety was governing the room. To be sure, nobody could help but overhear the discussion she was having with her mates, even when they were in a classroom

sounds pretty comfy. "ching chong ping pong durka durka mohammad jihad" are the only sounds that govern the halls of academia in my university.

i'd like to see a young riemann try and concentrate over that noise

I feel like doing some shit writing.
Any wishes where I should take the story?

I also have a question.
Given an infinite sequence of coefficients [math] (a_k) [/math] and the (truncated) function
[math] f_m(x) = x + \sum_{k=2}^m a_k x^k [/math]

what's the expansion of

[math] f_m(g_m(x)) [/math]

where [math] g_m [/math] is the inverse of the limit function [math] f_\infty [/math], itself truncated at m.

There's an explicit formula for the coefficients of the inverse function of an analytic function, so this problem can also be tackled analytically. It's work, however.

So e.g. consider the expansions of

e^x - 1

and its inverse

log(1+x)

to m-th order and compose those.
It will be x plus some small polynomial expression, with coefficients depending on m, which goes to zero as m goes to infinity (since in that case those functions are perfect inverses of one another).
There's a formula for this in terms of the coefficients of the exponential function.
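
A quick way to watch that happen is to compose the truncations symbolically, e.g. this little sympy sketch of my own (assuming you have Python/sympy around; none of this is part of the question itself):

[code]
# compose the m-th order truncations of e^x - 1 and log(1+x)
import sympy as sp

x = sp.symbols('x')
for m in (2, 3, 4):
    f_m = sp.series(sp.exp(x) - 1, x, 0, m + 1).removeO()  # x + x^2/2 + ... + x^m/m!
    g_m = sp.series(sp.log(1 + x), x, 0, m + 1).removeO()  # truncated inverse
    leftover = sp.expand(f_m.subs(x, g_m)) - x             # f_m(g_m(x)) - x
    print(m, sp.expand(leftover))  # terms of order m+1 .. m^2, shrinking as m grows
[/code]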

>inverse of the limit function f∞, itself truncated at m.

this isn't always invertible, right? are we just assuming an inverse exists over all of [math] \mathbb{R} [/math]?

The series should make for an analytic function everywhere, and then, if you're interested in tackling it, there's

en.wikipedia.org/wiki/Lagrange_inversion_theorem

Attached: Screen Shot 2018-03-10 at 17.25.01.png (694x505, 52K)
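
Roughly what that formula computes, as a sympy sketch (my own; the helper name is made up, and it assumes f(0)=0 and f'(0)!=0):

[code]
# b_n = (1/n!) * lim_{w->0} d^{n-1}/dw^{n-1} (w/f(w))^n   (Lagrange inversion)
import sympy as sp

w = sp.symbols('w')

def inverse_coeff(f, n):
    # n-th series coefficient of the compositional inverse of f around 0
    return sp.limit(sp.diff((w / f)**n, w, n - 1), w, 0) / sp.factorial(n)

f = sp.exp(w) - 1  # its inverse is log(1+x) = x - x^2/2 + x^3/3 - ...
print([inverse_coeff(f, n) for n in range(1, 6)])  # [1, -1/2, 1/3, -1/4, 1/5]
[/code]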

if that's the case, then given some y, isn't the single root of d(f_inf(x) - y)^2 / dx your inverse x?

Not sure what you're saying.

We're given a sequence (a_k) that makes for an analytic function f. It determines an inverse g, whose series coefficients around x=0 are given in terms of those of f (via the Lagrange inversion theorem). So we actually have two series now.
Now choose any m and consider both functions expanded to that power m. Those are two polynomials. Compose them. The result will be x plus some polynomial whose lowest-order term is x^(m+1) and whose highest-order term is x^(m^2). (See pic).

The question is what this polynomial is, as a function of (a_k) and m.

What we already know about it is that for m to infinity, its coefficients will all go to 0. This is because for m to infinity, we get the composition of the analytic function and its inverse.
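
For what it's worth, that order bookkeeping checks out symbolically for m=3; here's a sympy sketch of my own (the truncated inverse up to order 3 is just standard series reversion and only involves a_2, a_3):

[code]
# composing f_3 with the order-3 truncation of the inverse of f_inf:
# everything below order m+1 = 4 cancels, the top term has order m^2 = 9
import sympy as sp

x, a2, a3 = sp.symbols('x a2 a3')

f_3 = x + a2*x**2 + a3*x**3              # f truncated at m = 3
g_3 = x - a2*x**2 + (2*a2**2 - a3)*x**3  # inverse series of f_inf, truncated at m = 3
comp = sp.expand(f_3.subs(x, g_3))

print(sp.Poly(comp - x, x).monoms())  # degrees 9 down to 4; x^2 and x^3 cancel exactly
[/code]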

how does the composition f_m(g_m(x)) not work out to be the identity?

so the inverse of f_inf truncated at m is not the same as the inverse of f_m then.

>so the inverse of f_inf truncated at m is not the same as the inverse of f_m then.
Consider

[math] f(x) := -1 + \frac{1}{1-x} = x+x^2+x^3+... [/math]

The fraction is simple to invert, and its inverse function has a simple expansion that actually looks pretty much the same as the above. But inverting its truncation at any power m>5 gives an ugly as fuck root with ugly coefficients.

>how does the composition f_m(g_m(x)) not work out to be the identity?
If f and g are expanded to the power m each, then in the polynomial resulting from the composition, it's clear that its term of order x^(2m) will always be nonvanishing. It can't cancel with anything!

x^(m^2) I mean.

I.e. the highest-order term resulting from plugging one polynomial of highest order x^m into the other.
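
And to make the [math] -1 + \frac{1}{1-x} [/math] point from above concrete (truncating the exact inverse stays clean, inverting the truncation does not), a sympy sketch of my own; already m=3 is messy:

[code]
# truncation of the exact inverse vs. inverse of the truncation, for f(x) = -1 + 1/(1-x)
import sympy as sp

x, y = sp.symbols('x y')
m = 3

# the exact inverse is y/(1+y); its truncation stays clean:
print(sp.series(y / (1 + y), y, 0, m + 1).removeO())  # y - y^2 + y^3

# the inverse of the truncation x + x^2 + x^3 is a root of a cubic in y: radical mess
print(sp.solve(sp.Eq(x + x**2 + x**3, y), x)[0])
[/code]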

>analytic function everywhere

i don't think so, only for |x| < 1 so that the geometric series converges.


>The fraction is simple to invert, and its inverse function has a simple expansion that actually looks pretty much the same as the above. But inverting its truncation at any power m>5 gives an ugly as fuck root with ugly coefficients.

okay, right. i have to think about this then.

the analysis book i used actually uses the term "analytic" to describe holomorphic functions (they define "analytic at z" as differentiable in a neighborhood around z), and glosses over a lot of details when it comes to series expansions and the equivalence between holomorphic and analytic functions. i've proven every exercise/result, but i have to go back through the series stuff and cement it.

the definition of g_n looks like the n-1th derivative of the nth power of the multiplicative inverse of the derivative. i'm not sure how to interpret that.

Well, you know, I'd say the best approach to these problems is to take the example (e.g. the exp(x)-1 one) and find the formula in terms of the a_k, so that when you plug in a_k = 1/k! (for k>0), you get the right solution for the polynomial composition. This formula in the a_k will then also hold for a whole variety of other functions.

There's also the formula with the Bell polynomials; maybe just accepting those is a good start. Maybe just considering the problem for m=2 is a good start too. Then the composition we're looking for will always be of the form

[math] f_m(g_m(x)) = x + a \cdot x^3 + b \cdot x^4 [/math]

Now what are a and b in terms of the a_k that define f?


spoiler I just plugged it into Mathematica and found

a = -2 * (a_2)^2
b = (a_2)^3

but of course here we fixed m=2, and what would be interesting is how those coefficients look as functions of m.

Attached: Screen Shot 2018-03-10 at 18.59.47.png (310x574, 38K)
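
fwiw that checks out symbolically outside of Mathematica too; here's my own sympy version of the m=2 computation:

[code]
# reproduce a = -2*(a_2)^2, b = (a_2)^3 for m = 2
import sympy as sp

x, a2 = sp.symbols('x a2')

f_2 = x + a2*x**2  # f truncated at m = 2
g_2 = x - a2*x**2  # inverse series of f_inf, truncated at m = 2
print(sp.expand(f_2.subs(x, g_2)))  # x - 2*a2**2*x**3 + a2**3*x**4
# with a2 = 1/2 (the exp series) this is x - x^3/2 + x^4/8, matching the exp/log case
[/code]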

at the moment, i'm trying to grasp why the cauchy integral formula [math]f(z_0)=\frac{1}{2\pi i}\int_C f(z)dz [/math] is true for non-infinitesimal positively oriented simple closed contours C. i understand the argument. C can be replaced by the circle path z_0 + pe^(it) if p is small enough that the new path is contained in C's interior, since integrating over the "keyhole" shaped region formed by connecting C with a negatively oriented version of z_0 + pe^(it) gives zero by the cauchy-goursat theorem. similarly, shrinking p does not affect the value of such an integral. the integral of the difference (f(z) - f(z_0))/(z - z_0) over C_p can be bounded using the continuity of f and the fact that |z - z_0| = p; since that upper bound can be made as close to zero as desired by shrinking p, and the integral's value does not change as p is made smaller, the integral of the difference must be zero. using the linearity of the integral and splitting it into two terms, the term on the right is equal to -f(z_0) 2pi i, and the term on the left is equal to the integral in the problem statement.

but i still don't completely understand exactly why the path can be manipulated in such a way (replacing it with a circle and then shrinking). these steps just follow from the cauchy-goursat theorem, which i've also practiced, but it's hard for me to picture how the cauchy integral formula can be true when all points on the path are not very close to z_0. most of these results follow from the multiplication rule and the cauchy-riemann equations, which i understand. i think i need to study them further before i move on.

and since this is sort of the essence of what we're discussing (functions that are determined by only local information), it seems like a good idea to meditate on it until i get a clear "picture".

>[math] f(z_0)=\frac{1}{2\pi i}\int_C f(z)dz [/math]

whoops, forgot the denominator z-z0. that's important.
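
if it helps, the (corrected) formula is easy to check numerically with a circle that isn't small at all; a numpy sketch of my own, any radius works as long as f is holomorphic on and inside C:

[code]
# f(z0) = 1/(2*pi*i) * integral over C of f(z)/(z - z0) dz, C a big circle around z0
import numpy as np

def cauchy_formula(f, z0, r=2.0, n=200000):
    t = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
    z = z0 + r*np.exp(1j*t)                 # positively oriented circle of radius r
    dz = 1j*r*np.exp(1j*t) * (2.0*np.pi/n)  # dz = i r e^{it} dt
    return np.sum(f(z)/(z - z0) * dz) / (2j*np.pi)

f = lambda z: np.exp(z)*np.cos(z)           # entire, so any radius is fine
z0 = 0.3 + 0.1j
print(cauchy_formula(f, z0, r=2.0), f(z0))  # agree to many digits, independent of r
[/code]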

another, very informal way i think of this is that the beginning and end of the closed path meet at a branch cut, where the difference in the modulus is zero but the argument jumps by 2pi, so log(z - z_0) jumps by 2pi i.

which is just another way of saying that adjusting the radius does not affect the angle.

I'd say you block yourself if you tell yourself you need to grok everything before working with it. I'd start with plugging in all items of relevance together and see where I end up.
But in any case, I've made a few notes giving a perspective on your questions here

axiomsofchoice.org/analytic_function

In particular, I break down the function into that infinite sum, and the question becomes simpler once you see how most summands don't contribute anyway. And why.
Also, here
axiomsofchoice.org/holomorphic_function
axiomsofchoice.org/linear_approximation

None of this is all that closely related to the inversion question, though.

Attached: ABC_Mochizuki_Beckert.jpg (996x560, 146K)

>I'd say you block yourself if you tell yourself you need to grok everything before working with it.

if i move on before getting it, i'm just going to forget it in a month or two. but you're probably right. perhaps that's how i failed most of my high school math courses and ended up in remedial math as a college freshman.

thanks for your notes though.

>f_m(g_m(x))

is this some sort of approximation error for finite expansions?

>in terms of the coefficients of the exponential function.

and do you mean it's like a fourier series?