I'm trying to calculate f(t) that maximizes the functional I(f), where f(t) > 0 for all t.

Any thoughts?

Try adding a function [math] \beta (t) [\math] that vanishes at the endpoints of integration, then differentiate the whole thing to see how the functional responds to the perturbation.
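In other words, write f(t) -> f(t) + epsilon*beta(t) for a small parameter epsilon, and require

d/d(epsilon) I(f + epsilon*beta) = 0   at epsilon = 0

for every such beta; that condition is what determines the stationary f.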

I already tried that approach; the thing that trips it up is the shifted function argument f(t+Δ).

I tried reformulating the shifted function as a convolution with a delta function, but can't get any further.
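For the record, the rewrite I had in mind is just the sifting identity

f(t+Δ) = ∫ f(τ) δ(τ − (t+Δ)) dτ

in case someone sees a way to push that through the variation.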

Assuming [math]\Delta > 0 [/math],
[math]f(t) = \left\{
\begin{array}{ll}
1 & \quad a \leq t \leq b \\
C & \quad b \lt t \leq b + \Delta
\end{array}
\right.
[/math]
The functional is unbounded as [math] C \to \infty [/math].
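(To see why: this f contributes

[math]
\int_{b-\Delta}^{b}f(t)f(t+\Delta)dt=\int_{b-\Delta}^{b}1\cdot C\,dt=C\Delta
[/math]

to the numerator, and that can be made as large as you like.)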

that would be correct, if not for the denominator. Approaching this numerically yields something that looks like a sine function.

btw how do you get latex commands in your comment?

ah shit, I see what you mean. You are correct, sir. However, the integration limits on top should read a to b + Delta.

you do what I tried to do but with a forward slash in the closing tag instead of a backslash

[math] nice [/math]

so what I meant to say was the expression should be [math] I\left(f\right)=\frac{\int_{a}^{b+\Delta}f\left(t\right)f\left(t+\Delta\right)dt}{\int_{a}^{b}\left[f\left(t\right)\right]^{2}dt} [/math]

or wait... fuck, I mean

[math]
I\left(f\right)=\frac{\int_{a}^{b-\Delta}f\left(t\right)f\left(t+\Delta\right)dt}{\int_{a}^{b}\left[f\left(t\right)\right]^{2}dt}
[/math]

the point is that [math] f(t) [/math] and [math] f(t+\Delta) [/math] have to be zero everywhere outside of the integration domain. Not sure how to express that...

If you assume F is a function that maximizes the functional, then so does k*F for any positive k.
Your solution will not be unique.

That's fine. The scaling of the function doesn't matter. It's the shape I'm after.

>the point is that f(t) and f(t+Δ) have to be zero everywhere outside of the integration domain. Not sure how to express that...
So f(t) has to be zero outside of [a,b], meaning that the functional is 0 for Delta>(b-a)?

Since I(k*f)=I(f), you can focus on functions that have an L^2 norm of 1.
Basically you only need to be concerned with the numerator.
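Explicitly, the scaling just cancels:

I(k*f) = (k^2 * numerator) / (k^2 * denominator) = I(f)

so you lose nothing by fixing the denominator to 1 from the start.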

correct. basically you can't "hide" part of f(t) outside of the domain and make it arbitrarily large like you did before.

The problem is easy if delta=0.
Maybe try Taylor expanding wrt delta around delta=0.

not sure if I'm misunderstanding your point, but if I ignore the denominator, the trivial answer is that f(t) is infinite in the integration domain. How would one "focus" on functions with an L^2 norm of 1?

Since you can rescale f by any constant without affecting I(f), rescale by a constant that makes the denominator 1.
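Concretely: replace f(t) with f(t)/sqrt(∫_a^b f(t)^2 dt). Same shape, and the denominator becomes exactly 1.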

> The problem is easy if delta=0.

no shit, then I(f) = 1 for any f ;)

so you mean something like

[math]
f\left(t\right)f\left(t+\Delta\right)\approx f\left(t\right)\left[f\left(t\right)+\Delta f^{\prime}\left(t\right)+\frac{1}{2}\Delta^{2}f^{\prime\prime}\left(t\right)+...\right]
[/math]

and then do some Euler-Lagrange magic?
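Or, skipping the expansion, maybe I can vary the constrained problem directly: put the normalization in with a multiplier [math] \lambda [/math], perturb [math] f\rightarrow f+\epsilon\beta [/math], and set the [math] \epsilon [/math]-derivative to zero. Unless I messed up the bookkeeping, for [math] a+\Delta\leq t\leq b-\Delta [/math] the stationarity condition comes out as

[math]
f\left(t+\Delta\right)+f\left(t-\Delta\right)=2\lambda f\left(t\right)
[/math]

(points within [math] \Delta [/math] of the endpoints only pick up one of the two shifted terms). Sinusoids satisfy this with [math] \lambda=\cos\left(\omega\Delta\right) [/math], which would at least be consistent with the sine-like shape I'm seeing numerically. No idea yet how to handle the endpoint terms or the positivity constraint, though.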

If delta is near zero, obviously f would be near 1/sqrt(b-a).

Just to be clear, your f(t) is parameterized by Delta as well, meaning that you want to find a specific shape f_Delta(t) for each Delta between -(b-a) and (b-a)?

because basically the denominator IS the constant that makes everything one. Without the denominator, f(t) = infinity.

ideally yes, but right now I would be happy to just be able to solve any (nontrivial) example, e.g. a = 0
b = 10
Delta = 1

or whatever.

But yes, a general solution as a function of the integral limits and Delta would be nice.
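For reference, the "numerical approach" I mentioned earlier is just a crude discretization plus a generic optimizer, roughly along these lines (a Python/scipy sketch, not my actual script; the grid size and the positivity bound are arbitrary choices):

import numpy as np
from scipy.optimize import minimize

a, b, delta = 0.0, 10.0, 1.0       # the example values above
n = 200                            # number of grid points on [a, b]
t = np.linspace(a, b, n)
dt = t[1] - t[0]
d = int(round(delta / dt))         # the lag Delta measured in grid steps

def I(x):
    # numerator: integral from a to b-Delta of f(t) f(t+Delta) dt
    num = np.sum(x[:n - d] * x[d:]) * dt
    # denominator: integral from a to b of f(t)^2 dt
    den = np.sum(x * x) * dt
    return num / den

x0 = np.ones(n)                                   # start from a constant f
res = minimize(lambda x: -I(x), x0, method="L-BFGS-B",
               bounds=[(1e-9, None)] * n)         # keep f(t) > 0
f_opt = res.x / np.sqrt(np.sum(res.x ** 2) * dt)  # rescale; only the shape matters
print("I at the optimum:", I(res.x))

Then I just look at the shape of f_opt plotted against t.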

Isn't this just an autocorrelation function for a signal of finite duration? Surely some mathematician did this in a much more general fashion a few decades ago.

Yes it is the continuous cross-correlation integral of f(t) with itself at lag Delta.

The point is I'd like to maximize the aforementioned quantity while keeping the energy of the signal constant/finite.
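Since it's normalized by the signal energy, Cauchy-Schwarz at least gives a hard ceiling:

[math]
\int_{a}^{b-\Delta}f\left(t\right)f\left(t+\Delta\right)dt\leq\int_{a}^{b}\left[f\left(t\right)\right]^{2}dt\quad\Rightarrow\quad I\left(f\right)\leq1
[/math]

so whatever the optimal shape is, it can't do better than 1.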

Surely you are correct, but I don't know where to look beyond standard functional analysis and calculus of variations, which is why I'm here.