/sqt/ Stupid Questions Thread

This thread is for questions that don't deserve their own thread.

Tips!
>give context
>describe your thought process if you're stuck
>try wolframalpha.com and stackexchange.com
>How To Ask Questions The Smart Way catb.org/~esr/faqs/smart-questions.html

Previous thread


What if we flung a black hole into another black hole?
Who wins?

Is this a stupid question?

The even larger Black Cock of course

If you have a thin plate with δ(x,y) being its density at (x,y), and you integrate that function twice with respect to x and y, shouldn't the units of δ be mass/area? I asked my teacher and he said the units are mass/volume but you're looking at it at a specific point in time or something. I don't think that really makes sense; if you have a density function in one dimension whose units are mass/distance, integrating it should give you mass. Shouldn't integrating a function of mass/volume once or twice yield units of mass/area or mass/distance, respectively?

If you're expecting a unit of mass at the end of two integrations (one w/ resp. to x, one w/ resp. to y), the unit of delta should be mass/area.

They just sort of merge, I think.

How?

I want to do a Taylor series of f(x) = cotan^2(x) around 0. The first term of the series is f(0), which is infinity (and so are others), but WolframAlpha tells me it's 1/x^2. What am I doing wrong? Am I not supposed to just plug 0 in the equation to find the first term?

math.stackexchange.com/questions/637169/taylor-series-for-cot-x

yeah, I failed to realize I can't make a series around 0 since cotan is not continuous there

I'm such a brainlet
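
For the record, what WolframAlpha returns is the Laurent series about the pole at 0, not a Taylor series: squaring the standard expansion [math] \cot x = \frac{1}{x} - \frac{x}{3} - \frac{x^3}{45} - \cdots [/math] gives [math] \cot^2 x = \frac{1}{x^2} - \frac{2}{3} + \frac{x^2}{15} + \cdots [/math], which is where the 1/x^2 leading term comes from.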

How do you know that Tree(3) is so big that it's impossible to even think about it? How do you know it has an end? Why can't anyone describe Tree(3) completely?

by falling into each other, how else

My prof derived EM waves from Maxwell's equations and I had a hard time following it. Anyone got a brainlet way to derive it or a really thorough explanation?

Is there a way to draw a circle through 3 of its points without introducing a coordinate system?
Sorry if the question is too dumb, I've just always been bad at Euclidean geometry.

Let [math]G[/math] be the group of matrices of the form [math]\begin{bmatrix} 1 & a \\ 0 & 1 \end{bmatrix}[/math] for [math]a \in \mathbb{C}[/math].

I'm supposed to show that there are finite-dimensional [math]\mathbb{C}G[/math]-modules which are not completely reducible. My hunch is that [math]\mathbb{C}G[/math] as a module over itself is not completely reducible, but I have no idea how to prove this. What does it mean for a module to be finite dimensional anyway? I thought not all modules had invariant basis number.

If one considers (Lie) group representations, "finite-dimensional" usually means "finite-dimensional over the ground field" - this is [math] \mathbb{C} [/math] here.
Do you know a somehow "natural" finite-dimensional vector space over [math] \mathbb{C} [/math] on which [math] G [/math] acts linearly (you'll probably guess the one which is not completely reducible)? Then you immediately get a [math] \mathbb{C}G [/math]-module which fulfills the desired condition.

This derivation is pretty brainlet-friendly.
hyperphysics.phy-astr.gsu.edu/hbase/electric/maxsup.html#c2
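
The condensed version, if it helps: in vacuum Maxwell's equations are [math] \nabla \cdot \mathbf{E} = 0 [/math], [math] \nabla \cdot \mathbf{B} = 0 [/math], [math] \nabla \times \mathbf{E} = -\partial_t \mathbf{B} [/math], [math] \nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \partial_t \mathbf{E} [/math]. Take the curl of the third equation, use the identity [math] \nabla \times (\nabla \times \mathbf{E}) = \nabla(\nabla \cdot \mathbf{E}) - \nabla^2 \mathbf{E} [/math] (the first term vanishes since [math] \nabla \cdot \mathbf{E} = 0 [/math]), and substitute the fourth to get [math] \nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2} [/math], a wave equation with speed [math] c = 1/\sqrt{\mu_0 \varepsilon_0} [/math]. The same trick starting from the fourth equation gives the wave equation for [math] \mathbf{B} [/math].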

I'm stuck on this problem. (For some context, the text being used is Royden - roughly 6.1-6.2)

For part a) is it sufficient to say that since the terms are nonnegative, f(x) is increasing?

For part b) is the argument a standard continuity argument with |s_i - x_i|?

For part c) do we have to start by using Lebesgue's Theorem to state f(x) is differentiable almost everywhere?

Mainly, I'm trying to figure out where Lebesgue's and Vitali's theorems fit in the grand scheme of this.

Sorry for the long question. I'm not looking for a full proof but rather an approach that might be useful. I appreciate any help I may receive.

*Vitali's covering lemma by the way

In case there are other vitali theorems around

Thanks.

Really stuck on this one.

>Find a sequence of functions, [math](f_{n})[/math], on the closed interval [math][0,1][/math] such that each [math]f_{n}[/math] is differentiable, and the derivatives, [math](f^{\prime}_{n})[/math], converge uniformly to some function [math]g[/math], but the functions, [math](f_{n})[/math] do not converge to a differentiable function, [math]f[/math].

It'll probably be piecewise, and I wouldn't doubt if the absolute value shows up.

Why can I use Cauchy's integral theorem to say the line integral over the curve c = unit circle of z^3 * cot z is equal to zero? I am unfamiliar with Laurent series, so my only tool to assess applicability is determining where it's analytic. I can see it's not analytic for z = n*pi, n = 0, 1, 2, 3, ... since cot z = cos z / sin z.

However, my book says it's differentiable and therefore analytic in a neighborhood of z = 0. Please explain. How can it be discontinuous at z = 0 and still differentiable?

It IS continuous at [math] z = 0 [/math]:
You can use l'Hospital (legit for fractions of holomorphic functions!) to determine [math] \lim_{z \to 0} z^3 \cot z = \lim_{z \to 0} \frac{z^3 \cos z}{\sin z} [/math].
That this limit indeed gives a holomorphic function follows from Riemann's removable singularity theorem: your function is bounded and holomorphic in a punctured open neighborhood of [math] z = 0 [/math] (else there couldn't be a limit).
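
Concretely, assuming the integrand is [math] z^3 \cot z [/math]: [math] \lim_{z \to 0} z^3 \cot z = \lim_{z \to 0} z^2 \cdot \frac{z}{\sin z} \cdot \cos z = 0 \cdot 1 \cdot 1 = 0 [/math], so the singularity at [math] z = 0 [/math] is removable.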

what about [math]f_n(x) = x+n[/math]

I don't have Rudin next to me but there is an example of this in the beginning of chapter 7. Something to do with trig functions if I remember correctly.

fuck it's so simple, i think it works too
i'll take a look

so this isn't differentiable since its derivative isn't defined at the endpoints, right?

Hmm, I see. Stupid question, but why is it sufficient to say, e.g., that 1/(z-3) has a discontinuity at z = 3 since a fraction can't have zero in its denominator, but not here?

Sorry, brainlet's first complex analysis course

It might not work because the functions don't converge, period, and so definitely can't converge to a differentiable function. That's kind of a degenerate solution to the problem and probably not what is asked for.

suppose it was [math]f_{n}(x) = x + \frac{1}{n}[/math], which converges to [math]x[/math]. Then this would be non-degenerate, correct?

But f(x) = x is differentiable.

Actually, your function isn't defined at zero, either. But you can find a continuation which is holomorphic at this point.
Your integral doesn't pass through zero, so one might as well use the continued function in order to be able to use Cauchy's theorem. So you're indeed right - the function actually isn't defined at this point - but one can define it there in a nice way.

but not everywhere on [math][0,1][/math] though

a) is pretty self-explanatory, arguing about adding terms should be sufficient
b) for a rational s_i, f(s_i)-f(s_i-∆)≥1/2^i for all ∆>0
Can't help you with c) right now, going to sleep, good luck user.

where isn't it differentiable

o-on the endpoints ?

when using power series to solve differential equations, why is it ok to split the sums? in general is it always ok or are there instances where it doesn't work?
i ask because i thought for certain series rearranging resulted in a different limit (en.wikipedia.org/wiki/Riemann_series_theorem)

>the function locally extends to a differentiable function defined on some open set
this is the usual definition of differentiability at a point which is not an interior point. clearly it holds in this case.

shieeet

I know the answer but don't know how to get it from the information...

Any good lecture about gravitation (with lots of examples too, if possible)? I'm studying undergrad physics and this topic is especially difficult for me.

I have trouble with free body diagrams (I fuck up the gravitational forces sometimes), Kepler's laws and conservation of energy.

>[math] x^2 - 2x + 2 > 0 [/math]

My solution
>[math] (x-1)^2 > -1 [/math]
Because the exponent is even, the left-hand side of the inequality is always nonnegative, hence greater than -1, and so any value in the set of reals will work for x.

But the solution here, etoix.wordpress.com/category/calculus-by-spivak/page/2/ , has a different answer. How did they arrive at that? Pic related. Ultimately we both had the same final answers, all reals, but I'm curious as to how I may have arrived at the same answer differently. My method for arriving at this answer was a straightforward completion of the square. It looks as though this person used a similar method, but I don't understand how they could've done anything differently.

>pic related
Can't post duplicate image until 2019, but it's right here:

It depends on the particular form of the recursion relation you get when you substitute the series solution into the differential equation.
If you get an [math]a_{n+2} = f(n) a_n[/math], as you often get for a second order ODE, then clearly the odd and even [math]a_n[/math] terms are independent and can be split in this way.

fug, I mean
If you get an
[math]a_{n+2} = f(n) a_n[/math], as you often get for a second order ODE, then clearly the odd and even [math]a_n[/math] terms are independent and can be split in this way.
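
A standard example of that: for [math] y'' + y = 0 [/math] with [math] y = \sum_n a_n x^n [/math], the recursion is [math] a_{n+2} = -\frac{a_n}{(n+2)(n+1)} [/math], so the even-indexed terms build [math] a_0 \cos x [/math], the odd-indexed terms build [math] a_1 \sin x [/math], and the two sub-series can be summed separately.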

I'm so fucking bad at programming I'm seriously considering switching out of EE into MechE just because I know I wouldn't have to deal with this shit any longer. Fuck me even harder knowing that most of the jobs available for EE are in embedded systems.

Someone put me out of my misery

Actually I think I have a solution, would image = (2 + (-1))^n = 1^n = 1 work?

yeah that's right, the binomial theorem gives that as an alternate form for (2 + (-1))^n.
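
To spell that out, [math] 1 = 1^n = (2 + (-1))^n = \sum_{k=0}^{n} \binom{n}{k} 2^k (-1)^{n-k} [/math]. A throwaway sanity check in Python (illustrative only; math.comb needs Python 3.8+):

```python
from math import comb

# Check (2 + (-1))^n = sum_{k=0}^{n} C(n, k) * 2^k * (-1)^(n-k) = 1
# for a range of n.
for n in range(1, 11):
    total = sum(comb(n, k) * 2**k * (-1)**(n - k) for k in range(n + 1))
    assert total == 1, (n, total)
print("holds for n = 1..10")
```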

>I'm so fucking bad at programming

You just have to practice. Programming isn't hard, just a bit alien at first. Just remember that the computer is retarded and you have to be stupidly explicit when telling it what to do.

it works for power series, you don't have to worry about the details in undergrad ODE

da

Thanks! It took me way too long to figure that out.

Why hasn't Veeky Forums figured out women?

How do i practice?

sin(x) and cos(x) are linearly independent. I know that's true.

However, when trying to prove it, I can show it for every x EXCEPT pi/4.
For everything else there is no a, b with a*sin(x) + b*cos(x) = 0 except a = b = 0.
But if x = pi/4 then
sin(x) = sqrt(2) / 2
cos(x) = sqrt(2) / 2
so
a = 1
b = -1
1 * (sqrt(2) / 2) + (-1) * (sqrt(2) / 2) = 0
Where both a and b are not equal 0
How does this not show them linearly dependent?

>How does this not show them linearly dependent?
Because you showed sin(pi/4) and cos(pi/4) are linearly dependent, not sin(x) and cos(x).

But that makes them linearly dependent on the periodic set {0 ... 2pi} for (x = {pi/4, 3pi/4, 5pi/4, 7pi/4})
Am I skipping over a property of the definition?

>But that makes them linearly dependent on the periodic set {0 ... 2pi} for (x = {pi/4, 3pi/4, 5pi/4, 7pi/4})
This doesn't mean anything.

>Am I skipping over a property of the definition?
If sin(x) and cos(x) were linearly dependent then there are scalars a,b with a*sin(x)+b*cos(x)=0. This is an equality of functions, so if this equality is true then for every x you must have a*sin(x)+b*cos(x)=0.

See that's where I'm getting lost.
For every x. x = pi/4.
a = 1
b = -1
The equality now holds true when both a,b != 0
Which, from what I understand, means these are not linearly independent

>For every x. some x = pi/4.
forgot a word

>For every x. x = pi/4.
Not every x is equal to pi/4.

>Which, from what I understand, means these are not linearly independent
Is 1*sin(x)-1*cos(x)=0 for all x?

I've had a few beers, bear with me

No, but it is for some x (pi/4)

>No, but it is for some x (pi/4)
Then you haven't shown linear dependence.

I'm missing a step somewhere.
Am I meant to be treating sin(x) and cos(x) as separate sets?

>Am I meant to be treating sin(x) and cos(x) as separate sets?
I don't know what you mean by this.

Your a,b need to satisfy a*sin(x)+b*cos(x)=0 for every x.

So even though there exists an a,b for x = pi/4, because this a,b does not apply to all x this shows them linearly independent?
I'm trying to work through all of this in an abstract linear alg class without having taken the computational version beforehand, so I'm a little in over my head

>So even though there exists an a,b for x = pi/4, because this a,b does not apply to all x this shows them linearly independent?
No, it just shows that the a,b you've chosen doesn't prove linear dependence. To show linear independence you need to prove the only solution is a=b=0.
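
To actually run that argument: suppose [math] a \sin x + b \cos x = 0 [/math] for every [math] x [/math]. Plugging in [math] x = 0 [/math] gives [math] b = 0 [/math]; plugging in [math] x = \pi/2 [/math] gives [math] a = 0 [/math]. So the only scalars that work for all [math] x [/math] simultaneously are [math] a = b = 0 [/math], which is exactly linear independence.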

How do you make learning mathematics interesting?

Wasn't there a numberphile video on this? They usually do an okay job describing things.
From what I remember, the amount of variables you need to "solve" tree(3) is some 2 to the power of 2 to the power of 2.... Like a thousand times. Which is obviously some finite number, but fuck doing that kind of math.
Also he kept saying shit like "your brain will turn into a black hole!" rather than we simply can't fathom it.

Black holes don't exist

Something about triangles and bisection

mathopenref.com/const3pointcircle.html

perpendicular bisectors
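
The compass-and-straightedge construction is coordinate-free, but if you're willing to drop into coordinates just to check that intersecting two perpendicular bisectors works, the circumcenter has a closed form. A sketch in Python - circumcenter is my own throwaway helper, not a standard function:

```python
def circumcenter(A, B, C):
    # A point P equidistant from A and B lies on the perpendicular
    # bisector of AB, which is a linear equation in P's coordinates.
    # Solving the AB and AC bisector equations simultaneously:
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2 * ((bx - ax) * (cy - ay) - (by - ay) * (cx - ax))
    ux = ((bx**2 - ax**2 + by**2 - ay**2) * (cy - ay)
          - (cx**2 - ax**2 + cy**2 - ay**2) * (by - ay)) / d
    uy = ((cx**2 - ax**2 + cy**2 - ay**2) * (bx - ax)
          - (bx**2 - ax**2 + by**2 - ay**2) * (cx - ax)) / d
    return ux, uy

print(circumcenter((0, 0), (1, 0), (0, 1)))  # (0.5, 0.5)
```

Note d = 0 exactly when the three points are collinear, in which case no circle exists, matching the geometric picture (parallel bisectors never meet).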

>you need to prove the only solution is a=b=0.
Right, but a solution with a, b != 0 exists for odd multiples of pi/4. This is what's tripping me up; I can show independence for everything else, but this confuses me

user you're overthinking this in ways that are difficult to understand

1) do you agree 1 and x are linearly independent
2) does it matter that the line y=x and the line y=1 intersect
3) do you agree that x and x^2 are linearly independent
4) does it matter that y=x and y=x^2 intersect
5) do you agree that 1, x, x^2, ... form a basis for polynomials

Like I said, I've only had purely abstract definitions to work with. I've never seen actual representations of what I'm working with, so I never really considered the fact that an intersection existing doesn't matter for linear independence
I think I got mixed up between linear maps having direct sums that don't intersect and linearly independent sets possibly intersecting

>Right but a solution of a,b != 0 exists for odd multiples of pi/4.
It needs to hold for all x.

Right, that's starting to finally sink into my brain. It's about the a,b being consistent for all x, not for some a,b working for some x.
This has been helpful, a lot of this shit has me working at 60% understanding and just kind of getting by without really knowing why. I actually feel like I kind of understand what makes linear independence work more

Did you guys need to use hyperbolic trig functions in Calc II? I'm retaking the class and it wasn't in the curriculum the first 2 times. This bitch made a third of the questions based on hyperbolic trig functions and now I got a 40 on it and I feel like a retard because I didn't study that crap. At least give us the formula
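
For next time, the basics that usually get tested: [math] \cosh x = \frac{e^x + e^{-x}}{2} [/math], [math] \sinh x = \frac{e^x - e^{-x}}{2} [/math], the identity [math] \cosh^2 x - \sinh^2 x = 1 [/math], and the derivatives [math] \frac{d}{dx} \sinh x = \cosh x [/math], [math] \frac{d}{dx} \cosh x = \sinh x [/math] (note: no sign flip, unlike cos).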

Why is the "aligned" environment messing up the equations? They're supposed to be aligned horizontally, with the equals signs right above one another, but somehow they're just slightly off-center...

Here's the LaTeX:
pastebin.com/gbWtEJwE
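
Hard to judge without compiling your exact source, but one common culprit (this is a guess, not a quote from your pastebin): the alignment character has to come immediately before the relation symbol, i.e. &=, one per line:

\begin{aligned}
f(x) &= (x - 1)^2 + 1 \\
&= x^2 - 2x + 2
\end{aligned}

If the & is placed after the = instead (=&), TeX treats the = as sitting at the end of the left column and drops the relation spacing on one side, which shifts the equals signs slightly off-center.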

Can someone explain where I'm going wrong? The book says the answer is zero (well, actually Chegg does). #30 in chapter 14.2 of Advanced Engineering Mathematics by Kreyszig.

I think my decomposition is wrong, but I'm not sure why

What's the question asking?

evaluate the contour integral over that curve

>, the amount of variables you need to "solve" tree(3) is some 2 to the power of 2 to the power of 2.... Like a thousand times.
But how do they know that's the amount of variables?

>residue of (2z^3+z^2+4)/(z^4+4z^2) at z=0, which is a double pole
>residue = limit z->0 of d/dz [ z^2 * (2z^3+z^2+4)/(z^4+4z^2) ]
>= limit z->0 of d/dz [ (2z^3+z^2+4)/(z^2+4) ]
>= limit z->0 of [ (6z^2+2z)(z^2+4) - 2z(2z^3+z^2+4) ] / (z^2+4)^2
> = 0 / 16 = 0

Therefore it's 0

Idk residues yet, but at least i know my answer is wrong, i guess.

I see now I don't know how to deal with complex numbers and fraction decomposition at all, so yeah, that's where the error is.

What's the difference between log and ln?

ln always means log base e
log is most commonly base e in the context of mathematics (on rare occasions it'll mean base 10), and base 2 in the context of computer science
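
Concretely, in Python's math module (one convention among many - other languages and books differ):

```python
import math

print(math.log(math.e))   # math.log is base e, i.e. "ln"
print(math.log10(100))    # base 10: 2.0
print(math.log2(8))       # base 2: 3.0
print(math.log(8, 2))     # arbitrary base via the two-argument form
```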

what you need to do is split it so that
[math]\frac{2z^3+z^2+4}{z^2(z^2+4)} = \frac{2z^3}{z^2(z^2+4)} + \frac{z^2+4}{z^2(z^2+4)} = \frac{2z}{z^2+4} + \frac{1}{z^2}[/math] and then you can apply residue theorem to each separate fraction
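
If you ever want to double-check an answer like this without residues, you can evaluate the contour integral numerically. A rough sketch (my own parametrization of the unit circle, standard library only):

```python
import cmath

def f(z):
    return (2 * z**3 + z**2 + 4) / (z**4 + 4 * z**2)

# Parametrize the unit circle as z = e^{i*theta}, so dz = i*z dtheta,
# and approximate the integral with an equally spaced Riemann sum
# (very accurate here since f is analytic on the circle itself).
N = 4096
total = 0j
for k in range(N):
    z = cmath.exp(2j * cmath.pi * k / N)
    total += f(z) * 1j * z * (2 * cmath.pi / N)
print(abs(total))  # ~0, matching the residue answer
```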

thanks user!

Is anyone able to help with a proof of this? I've been fiddling for a while and can't seem to crack it.
I've tried expressing g(x) as a generic polynomial of degree n, taking the p-th derivative and playing with the coefficients but nothing seems to work. Any help would be appreciated

Why bother graduating high school if I can't go to Ivy League?

>Any help would be appreciated
Write down the ith derivative of h.

Well, as far as I can tell, the essence of it is seeing how the coefficients of g turn out and then showing that each can be divided by (p-1)!. Then, in addition to that, showing each of those has a factor of p. I feel like I need to crack what's happening with g before I look at h.

the coefficients are binomial coefficients times other integers.

>seeing how the coefficients of g turn out and then showing that each can then be divided by (p-1)!
This isn't necessarily true, re-read the question, and then write down the ith derivative of h.