So Veeky Forums, prove that 0 factorial = 1

...

I ain't gotta prove shit nigga

It's a very reasonable convention that fits with some convenient abstractions of the factorial. Most formal definitions of the factorial will explicitly state 0!=1 or it'll be a natural consequence of the same.

>prove a definition

The definition of factorial says that 0 factorial is 1. QED.

Boy, that was tricky.

>proving undisprovable hypothesis

[math] (n+1)! = (n+1)*n*(n-1)*(n-2)*...*1 [/math]
[math] \frac{(n+1)!}{(n+1)} = n*(n-1)*(n-2)*...*1 [/math]
[math] \frac{(n+1)!}{(n+1) }= n! [/math]
[math] n=0 [/math]
[math] 0! = \frac{(0+1)!}{(0+1)} [/math]
[math] 0! = \frac{1!}{(1)} [/math]
[math] 0! = 1 [/math]
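If you want to see that stepping-down done mechanically, here's a minimal Python sketch of the same idea (the function name and the starting point are mine, purely illustrative):

[code]
def factorial_down(n, start=10):
    # compute start! the usual way...
    value = 1
    for k in range(2, start + 1):
        value *= k              # value == start! after this loop
    # ...then walk the recurrence n! = (n+1)! / (n+1) downward
    for k in range(start, n, -1):
        value /= k              # (k-1)! = k! / k
    return value

print(factorial_down(5))  # 120.0
print(factorial_down(1))  # 1.0
print(factorial_down(0))  # 1.0 -- the recurrence itself lands on 0! = 1
[/code]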

A more interesting question might be whether there are any fields or subfields that find it convenient to define 0! as something other than 1, or to leave it undefined (or whether there are any reasons to dislike the definition), sort of similar to how 0^0 is widely, but not universally, defined as being equal to 1.

>prove that 0 factorial = 1
1! = 1
n! = (n-1)! * n
Therefore 0! = 1! / 1 = 1

0! = 1
n! = (n-1)!*n
0! = (-1)! * 0
1 = 0

Problematic reasoning if you're not already accepting that there's a meaningful value to associate with 0!.

There is exactly 1 way to arrange nothing. Therefore, 0! = 1.

If zero is flat nothing, can one be the height from nothing (time)?

what.

Gamma(n) = (n-1)!
Gamma(t) = Integral 0..inf x^{t-1} e^{-x} dx
Gamma(1) = 0! = Integral 0..inf x^{0} e^{-x} dx = Integral 0..inf e^{-x} dx = [-e^{-x}] from 0 to inf = 1
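A quick numerical sanity check of that integral, assuming Python with scipy available (nothing here is specific to the thread):

[code]
import math
from scipy.integrate import quad   # assuming scipy is installed

# 0! corresponds to Gamma(1) = integral from 0 to inf of e^(-x) dx
value, err = quad(lambda x: math.exp(-x), 0, math.inf)
print(value)          # ~1.0
print(math.gamma(1))  # 1.0, same thing via the library Gamma function
[/code]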

N A I V E

Why? Gamma is the only good continuous function extending the factorial. I trust it.

Aside from the problems that come from treating analytic continuations as the discrete functions they're continuations of, it's only considered the good one because it fits all the necessary criteria, including 0!=1. If it didn't, it wouldn't be.

You could instead drop the requirement, say gamma is good because it fits the integer arguments we want (along with the other desired details) and pull the 0!=1 from that by fiat, but then the argument's no better than the original, intuitive reasons for explicitly stating 0!=1.

It's either circular or superfluous.

Not the guy you were responding to though. It's just a common line of thought that you get used to in upper graduate studies, after spending your earlier years thinking that the solution to any undefined expression is just to throw analysis at it until you get something acceptable.

>Current year
>Still interpolating with the gamma function

Seriously though, there are actually some interesting alternatives to the gamma function that have some desirable properties in their own right that the gamma function lacks. I don't feel like fucking with LaTeX just to plagiarize anyway, so here's a good writeup:

luschny.de/math/factorial/hadamard/HadamardsGammaFunction.html

I'm not denying the crucial place the gamma function holds in physics and other fields mind you, I just think it's interesting.

Well, actually not. There is only one function on positive reals that satisfies {f(1)=1, f(x+1)=x*f(x)}. Only one.

*or {f(a)=(a-1)!, f(x+1)=x*f(x)} for any natural a
So gamma is the only function defined on positive reals that coincides with factorials, and you don't have to assume 0!=1

*but with condition f(x+1)=x*f(x) of course

Well, along with some other needed conditions yes. That's not really relevant here though.

Note f(0+1) does not equal 0*f(0). This isn't a problem, of course, because we're concerned with positive reals. So that doesn't help us with the issue here.

It's also debatable if that's a property we need in our continuation. Luschny has an alternative that satisfies a similar relation also held by the factorial.

The gamma function's uniqueness and utility comes more from its log convexity, though there's been some work showing that many of the applications where that's useful only really require superadditivity, which the "alternatives" provide as well (they also have 0!=1, mind you).
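For reference, the usual way that log-convexity condition is packaged is the Bohr–Mollerup theorem, which is the uniqueness statement being leaned on here:

[math] f(1) = 1, \quad f(x+1) = x\,f(x), \quad \log f \ \text{convex on}\ (0,\infty) \;\Longrightarrow\; f = \Gamma \ \text{on}\ (0,\infty) [/math]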

That's more of an interesting aside though. The point about defining 0! by way of the gamma function being superfluous or circular still holds.

If you just want nice consistency with existing properties as a way of defining 0!, that's already worked into the "basic" factorial, as shown by some guy earlier in this thread.

Again though, I really want to stress that I'm not a nut and I'm not actually proposing that we should abandon or denigrate the gamma function.

#gammafunctionlivesmatter

>Note f(0+1) does not equal 0*f(0). This isn't a problem, of course, because we're concerned with positive reals. So that doesn't help us with the issue here.
Actually you slip up on this point. If you require the condition f(x+1)=x*f(x) only on positive numbers, and f(2)=1!, which is equivalent to f(1)=1 under the first condition, then you already get a function defined on the positive numbers that has f(1)=0!=1.

>Note f(0+1) does not equal 0*f(0)
It is not being used anywhere. To get f(1)=1 from f(2)=1 you only have to apply f(2)=f(1)*1, which is well-defined.

Don't know if I'm being clear enough.

Nah, you are; I was just being sleepy and dumb. You're right on that point.

Nice proof, but if you do the substitution in line 1 or 2, then your equality doesn't hold. This leads me to believe you made a mistake either on line 2 or 3.
I don't know what mistake it is, but it's illogical that you can substitute n with one number on line 3 and get a different result than if you do the same on line 1. Somehow the equality was broken.
Maybe your proof only makes sense for numbers larger than 0?

Not the guy you're responding to, but it's not a proof so much as a motivation. The falling sequence terminates at 1; if you do the substitution in line 1 or 2 you still need to terminate at 1, meaning you get
(0+1)! = 1 with everything else, including the 0, dropped.

You'll get something like:
(0+1)! = 0*1
1! = 0

No, the sequence n*(n-1)*(n-2)*...*1 terminates at 1; n, n-1, n-2 are integers greater than 1. If you substitute 0 in for n, there's no valid place for it to be.

It's like if I wrote
n! = n*(n-1)*(n-2)*...*3*2*1
You're not expected to take that to mean n-2 > 3; the definition still applies whichever n! I'm using it to figure out.

Consider that otherwise I could expand out the identity further to something like
n! = n*(n-1)*(n-2) and keep going until I hit (n-100) before I threw in the ellipses, thereby seeming to forbid any argument less than 100.

It's just a writing convention.

This doesn't work, because (-1)!*0 is only 0 if (-1)! is finite, so you get 1 = 0 OR (-1)! = inf. Since it can't be 1 = 0, we get (-1)! = inf, which fits the known properties.
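That also matches the Gamma picture, where (-1)! corresponds to Gamma(0), a pole; a quick check with Python's math.gamma:

[code]
import math

# (-1)! would be Gamma(0); Gamma blows up as the argument approaches 0
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, math.gamma(eps))   # roughly 1/eps, growing without bound
# math.gamma(0) itself raises ValueError -- the pole has no finite value
[/code]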

those plots are fucking bullshit, did he make them in paint?

in Fig 4 we see L(0) = 0.5, but in Fig 5 it's clearly L(0) < 0.3

Nah L(0)=1/2 in both, but the displayed y-range of the second plot just doesn't start at y=0.
But yeah, that's a silly thing to do and was probably done by the plotting program and he didn't care to make it pretty.

Nah, he just truncated it strangely/stupidly. If you count the ticks you'll see it doesn't actually go from one to zero with no break. The comparison plots are weird though, it looks like Maxima or something where presentation isn't really a key aim.

Holy shit you worded your post so similarly I almost checked to make sure I didn't do some sort of amnesia samefagging

And looking at the appendix, he's using C#, and then I assume some library like ... gnu plot?

...

I guess the problem's root cause is that the number of multiplications you do depends on the value of n, and we don't represent this in the equation.

If we represent it with the product operator (a capital pi), then for n = 0 we just get an empty product. And the empty product is just defined to be 1. It has no logic in practice, but it makes sense in set theory.

It could have been defined as "any" other number, if set theory were not around.
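For what it's worth, that empty-product convention is also what the standard library bakes in; a small Python illustration:

[code]
import math

def factorial(n):
    # n! as the product over 1..n; for n = 0 the range is empty,
    # so this falls back to the empty product, which is defined as 1
    return math.prod(range(1, n + 1))

print(factorial(0))   # 1, via the empty product
print(factorial(5))   # 120
print(math.prod([]))  # 1 -- the empty product itself
[/code]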

Now I see your point.
Makes sense.

I know Maxima uses gnuplot so I'm still going with that guess. But I'm not sure what else uses gnuplot, so maybe not.

We do represent it; it's just in the annoying ellipsis form of giving a series. That's always the trade-off of that representation: it's incredibly evocative, but you need to be used to reading it when it suggests something absurd.

If you have a function going through x! at the integers, like Gamma(x+1), then for any f(x) that equals 1 at the integers, Gamma(x+1)·f(x) is another "Gamma-like" function.

However

[math] (1+z)^u = \sum_{n=0}^\infty \frac{ \Gamma (u+1) } { \Gamma (n+1) \Gamma ( -n+u+1) } z^n [/math]

for all u,
is a good argument for Gamma's pole position.

By which I mean to say that it's a straightforward generalization of pic related.

Besides, the infinities at -n are arguably ugly, but if you look at that formula from the perspective of that generalization, these infinities are there to kill off any orders > u in the case where u is a non-negative integer, where the expression must be a polynomial, not an infinite series.
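If anyone wants to poke at that series numerically, here's a throwaway Python check (the test values u and z are arbitrary, with |z| < 1 for convergence):

[code]
import math

def binom_term(u, n, z):
    # n-th term: Gamma(u+1) / (Gamma(n+1) * Gamma(u-n+1)) * z^n
    return math.gamma(u + 1) / (math.gamma(n + 1) * math.gamma(u - n + 1)) * z**n

u, z = 0.5, 0.25
series = sum(binom_term(u, n, z) for n in range(40))
print(series)          # ~1.1180339887...
print((1 + z) ** u)    # matches sqrt(1.25)
[/code]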

user showed that if it is defined at all, it has to be 1.

The best way to demonstrate that it intuitively *should* be defined is to note that n! is the cardinality of the symmetric group on n things. Thus, [math]n! = |S_0| = 1[/math]. This comes up when you work with generating functions etc. so it's very natural.

^mistake, that should be [math]0![/math], not [math]n![/math]
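The same convention shows up in any combinatorics library: there is exactly one permutation of zero items. In Python, for instance:

[code]
from itertools import permutations

print(list(permutations([])))          # [()] -- one arrangement: the empty one
print(len(list(permutations([]))))     # 1 == 0!
print(len(list(permutations("abc"))))  # 6 == 3!
[/code]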

...

We know that
[math]n! = n \cdot (n - 1)![/math]
and [math]1! = 1[/math]

So if we solve [math]0![/math] from
[math]1! = 1 \cdot 0![/math]
we get [math]0! = 1[/math]

You are not taking into account the number of multiplications you need to do in order to find the factorial of a number. That's why your reasoning is wrong.
Also, 1 thing can be arranged in only one way.
But 0 things can't be arranged at all, because there is nothing to arrange.

>a factorial is repeated multiplication
lol. What's next? Exponentiation is just repeated multiplication?

...

Most people don't understand math, it has always been this way

It isn't?
>tfw total noob

Exponentiation being repeated multiplication for integers is a property that arises from the definition of exponentiation, not the definition itself.

>not knowing about multiple definitions existing for things

It's useless as a definition in most math beyond high school. Try to explain what e^pi is supposed to be if you define exponentiation as a number of multiplications.

didn't see that in the thread ;
there's just one fucking way to arrange zero things
ain't that clear ?

It's defined as such.

As a math newbie, what would a better definition of exponentiation be?

x^n = x × x × x × ... × x (n times)

>most math beyond high school
It's pretty bad in a modern high school curriculum too. The MAA and most mathematical pedagogy organizations strongly recommend that teachers not have their students conceptualize multiplication as repeated addition or exponentiation as repeated multiplication.

It's typically shown that those hold for integers, but it's (ideally) stressed that they're not the same thing, and some exploration is given of how to relate them.

>e^π

n! = (n+1)!/(n+1)
3! = (3+1)!/(3+1) = 4!/4 = 24/4 = 6
2! = (2+1)!/(2+1) = 3!/3 = 6/3 = 2
1! = (1+1)!/(1+1) = 2!/2 = 2/2 = 1
0! = (0+1)!/(0+1) = 1!/1 = 1/1 = 1
You're welcome faggot OP

>For the sequence 1,2,4... what comes next?
>a) 5
>b) 6
>c) 7
>d) 8

WTF?? NO... DELETE THIS!

There is no way to prove this; it's just a definition we accept because it "makes sense" in a variety of different scenarios.

how many ways are there to order the elements of the empty set? one way:

delete yourself

> implying that (0+1)!/0 is defined

d
the sequence is 2^n
2^0=1,2^1=2,2^2=4,2^3=8

>0 + 1 = 0
lol Americans....

WRONG
the first few terms of the sequence are given by the polynomial (x-1)(x-2)(x-4)(x-5)(x-6)(x-7)(x-8)

[math] (1).~\text{Define: }~~~~~0! = 1 [/math]
[math] \blacksquare [/math]

o-oh ;_;

let E be the multiplicative identity, then
3!=E*1*2*3=6E
2!=E*1*2=2E
1!=E*1=E
0!=E
by definition, the multiplicative identity is equal to 1
0!=1
Q.E.D.

For [math]\frac{b}{c} > 0[/math], [math]a^{b/c}[/math] is defined as the unique number x such that [math]x^c = a^b[/math]. This exists and is unique because [math]f(x) = x^c[/math] is monotonic etc.

Then you just extend to positive real exponents by continuity, and to negative exponents by defining [math]a^{-r} = \frac{1}{a^r}[/math].

The other way is to define ln(x) as the integral of 1/x, show that it's invertible, and then let exp = ln's inverse. Then [math]a^b := \exp(b \ln a)[/math]. This definition is maybe cleaner but it's not so intuitive.
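As a small illustration of the exp/ln route (plain Python, the function name is mine):

[code]
import math

def power(a, b):
    # a^b defined as exp(b * ln a), valid for a > 0 and any real b
    return math.exp(b * math.log(a))

print(power(math.e, math.pi))  # ~23.1407, i.e. e^pi
print(power(2.0, 0.5))         # ~1.41421, i.e. sqrt(2)
print(2.0 ** 0.5)              # matches the built-in operator
[/code]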

post more brat pls

cont'd
because the multiplicative identity (or unity) stands for a number that satisfies the following identities:
E*E=E
k*E=E; where k is an arbitrary number
and there's one and only one number that satisfies the mentioned conditions, namely, 1
soz m8. writing from a diff PC right now, which doesn't contain my "smug anime girls" folder

correction:
k*E=k

ffs srsly

0! = 1!/1 = 1

if you say that
n * (n-1) * (n-2) * ... * 1 = n! even for n = 0
then, assuming your vague use of ... means what I think it means, 0! = 0 * 1 = 0

how about n! = (n-1)! * n doesn't hold for n < 1?

5! = 5 x (4!)
4! = 4 x (3!)
3! = 3 x (2!)
2! = 2 x (1!)
1! = 1 x (0!) or 1! = (0!)
1 = (0!)

6, the sequence is Floor[n/2]*Ceiling[n/2]

WRONG.

The answer is 7, it's a list of the Central polygonal numbers, the maximum number of pieces formed by slicing a pizza with n cuts

WRONG
5, it's all numbers not divisible by 3

Yes, that's what you'd need to invoke. But how do you know it does hold for n=1? You need 0! to be defined; that's why the reasoning is problematic.

The conclusion is true but there's no compelling reason to accept the argument here unless you're willing to concede 0! is defined. It's a good motivation for defining 0!=1, but it's not a "proof."

>whats the history of 0?

denigrate yourself nigger

Our formulas are nicer to write if 0! = 1, so we define 0! = 1. Isn't math something?

nice

god wills it

No matter how much of nothing I have, it is still nothing.

Are you Finnish? I'm doing research about diffusion of help helpper meme (apu apustus).

This.

Oh boy it's the annual return to school flood again