How would Veeky Forums go to isolate peaks like these?

Top graph is my raw signal, where I'm trying to get "shocks" from. Second graph is a transformed version which makes them more obvious. In each burst, I want to know how many shocks there were (in this easy case 2 per burst, and maybe 3 for the last one).

The problem is that the level of the signal, relative levels of the shocks and number of shocks are widely variable; and timing is as well to a lesser extent. So for example, a simple threshold won't work because there will either be shocks under it, or it will mark two shocks too close together as a single one. Simple local minimum/maximum won't work either because of the noise.

I've tried a few approaches, but they either end up not being robust enough to variations of the signal or too sensitive to noise.

Smooth then FFT?

Sort of depends on what part of the shock you want to pick. Peak or onset?

I would suggest trying cross correlation with a template shock. If that works then you're golden. The trick is figuring out what to use for the template shock.
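A minimal numpy sketch of the cross-correlation idea (the Gaussian template, amplitudes, and positions are all made up for illustration; the real template would have to come from your data):

```python
import numpy as np

def template_correlate(signal, template):
    """Normalized cross-correlation of the signal against a template
    shock; peaks in the output mark where the signal resembles it."""
    t = (template - template.mean()) / (template.std() * len(template))
    s = (signal - signal.mean()) / signal.std()
    return np.correlate(s, t, mode="same")

# toy check: a Gaussian bump as the (hypothetical) template,
# two scaled copies of it buried in light noise
x = np.arange(-25, 26)
template = np.exp(-x ** 2 / 50.0)
sig = np.zeros(400)
sig[100:151] += template            # shock centered at 125
sig[260:311] += 0.6 * template      # weaker shock centered at 285
sig += 0.02 * np.random.default_rng(0).normal(size=400)
corr = template_correlate(sig, template)
```

The weaker shock still produces a clear correlation peak even though a fixed amplitude threshold on the raw signal would likely miss it.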

Failing that, I would try using kurtosis. Specifically, go on Google Scholar and search for 'kurtosis baillard'. That could be helpful, as kurtosis should be level insensitive and should pick out the onset of the shock.

If you have a method that works but is too sensitive to noise, you could try wavelet denoising. If you're using matlab the wavelet toolbox has some pretty simple functions you could try out.
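If MATLAB isn't at hand, the same idea can be hacked together in numpy. This is a bare-bones Haar-wavelet shrinkage, just to show the decompose / threshold-details / reconstruct loop; a real toolbox (or PyWavelets) does this properly, with better wavelets and automatic threshold selection:

```python
import numpy as np

def haar_denoise(x, thresh, levels=3):
    """Wavelet shrinkage with Haar wavelets: decompose, soft-threshold
    the detail coefficients, reconstruct.
    len(x) must be divisible by 2**levels."""
    if levels == 0:
        return x
    a = (x[0::2] + x[1::2]) / np.sqrt(2)       # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)       # detail
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    a = haar_denoise(a, thresh, levels - 1)    # recurse on the approximation
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512)
clean = np.exp(-((t - 0.5) ** 2) / 0.005)      # one smooth "shock"
noisy = clean + 0.2 * rng.normal(size=512)
denoised = haar_denoise(noisy, thresh=0.5, levels=4)
```

The threshold here is hand-picked for the toy noise level; the toolbox functions estimate it from the data.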

check the height of a position
check if the heights to the left and right are lower or equal
repeat step 2, always moving one more sample to the left and right, until a threshold in width or height difference to the first point is reached.
This should reliably detect arcs with a minimum width or height if the graph is as smooth as your transformed one.
I have an interesting idea for a really noisy graph, which would cause the algorithm to miss a general large shock due to many smaller maxima in the graph. I'm not sure if this would work, but it sounds cool to me:
Start with a very small threshold and detect all maxima with the algorithm.
These maxima now make up your new graph. Now increase the threshold and do it again. Repeat until the threshold is as big as the one you wanted in the first place.
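The widening-window maximum search could be sketched like this (min_width and min_drop are my guesses at the width/height-difference thresholds being described):

```python
import numpy as np

def wide_maxima(y, min_width=5, min_drop=0.1):
    """A point counts as a peak if it stays a local maximum while the
    comparison window widens one sample at a time, until either the
    window reaches min_width on each side or the surrounding values
    have dropped min_drop below it."""
    peaks, n = [], len(y)
    for i in range(n):
        w = 1
        while True:
            lo, hi = max(0, i - w), min(n - 1, i + w)
            if y[lo] > y[i] or y[hi] > y[i]:
                break                      # not a maximum at this width
            if w >= min_width or min(y[i] - y[lo], y[i] - y[hi]) >= min_drop:
                peaks.append(i)
                break
            w += 1
    return peaks

# toy check: one smooth arc -> exactly one peak at its center
y = np.exp(-(np.arange(101) - 50.0) ** 2 / 40.0)
peaks = wide_maxima(y, min_width=5, min_drop=0.05)
```

On a plateau of exactly equal samples this will report neighboring indices as separate peaks; a real implementation would want a tie-break.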

Run on an isolated burst, then? I already tried playing with it on the whole signal, but nothing obvious appeared. Do you have something precise you're expecting to come out of it?

>Sort of depends on what part of the shock you want to pick.
In decreasing order of importance I want to know how many there were and more or less when they began/ended.

>The trick is figuring out what to use for the template shock.
I think this approach will be really hard. Check this new example, where either 17 or 21 would be acceptable answers. There are cases when two are really close to each other, blurring the lines.

Thanks for the pointers to Kurtosis and wavelet denoising, I'll check that out.

Looks pretty clean, not much noise, so I'd say you can stick to simple shit. Calculate the envelope (absolute value + appropriate low pass). After that, you just put a simple threshold and delay in, meaning if your signal grows above the threshold, you count a signal (you can also use the change in amplitude for this, try it out). After it's below the threshold again you have a dead time in which no signal will be detected, even when it's above the threshold. After the dead time it's back to the beginning. That's pretty much the simplest thing you can do, it will be fast and manageable, but it will work as the signal looks good enough. You just have to implement it and adjust the parameters.
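Something like this, assuming a one-pole low pass for the envelope; the cutoff, threshold, and dead time are all made-up values that would need tuning to the real signal:

```python
import numpy as np

def count_shocks(x, fs, threshold, dead_time, cutoff=20.0):
    """Envelope (absolute value + one-pole low pass), then a threshold
    with a dead time: when the envelope crosses `threshold` upward we
    count a shock and ignore further crossings for `dead_time` seconds."""
    # one-pole low pass of |x| as a cheap envelope
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / fs)
    env = np.empty_like(x, dtype=float)
    acc = 0.0
    for i, v in enumerate(np.abs(x)):
        acc += alpha * (v - acc)
        env[i] = acc
    # threshold with dead time
    onsets, dead_until = [], -1
    for i in range(1, len(env)):
        if i < dead_until:
            continue
        if env[i - 1] < threshold <= env[i]:
            onsets.append(i)
            dead_until = i + int(dead_time * fs)
    return onsets

# toy example: two bursts of a decaying 80 Hz oscillation
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
x = np.zeros_like(t)
for start in (0.2, 0.6):
    m = (t >= start) & (t < start + 0.1)
    x[m] = np.sin(2 * np.pi * 80 * t[m]) * np.exp(-(t[m] - start) / 0.03)
onsets = count_shocks(x, fs, threshold=0.2, dead_time=0.15)
```

The dead time is what keeps the envelope ripple from double-counting a single burst; it is also exactly the parameter that merges two genuinely close shocks, which is the failure mode discussed below.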

>until a threshold in width or height difference to the first point is reached.
I think that's the weak point in this approach, because I can have local maxima in a large shock be larger than a whole small shock. I've been thinking about including the length of the local maximum as well, to try to mitigate that.

The problem with a fixed threshold is that it's impossible to have a single value that is both below all peaks and above the local minima. For example in this image, I need a threshold low enough so that I pick up the second shock. But then if it's low, big shocks close together will merge into a single one.

Then go with the change in amplitude as I suggested. Try and see what works. Just try to keep it simple, the data you have is not problematic at all.

>the change in amplitude
As in a discrete derivative "X[n] - X[n-1]"? The noise is the biggest problem then. The latest thing I tried is actually:
- Take the signal, square then average (gives out the transform in the second graph)
- Take the derivative, average again
- Then ignore all signal lower than T, and study the average sign over a window
- Successive positive/negative signs with ignored values in between are merged together
- Shocks then start at a negative->positive transition and end at a positive->negative transition
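For reference, the steps above in numpy. Boxcar smoothing and all the window sizes are assumptions on my part; counting the merged positive runs gives the shock count, since each shock contributes one rise followed by one fall:

```python
import numpy as np

def boxcar(x, n):
    # simple moving average (the post doesn't say which smoother; this is a guess)
    return np.convolve(x, np.ones(n) / n, mode="same")

def count_shocks_by_sign(x, T, smooth_n=51):
    """Square + average, differentiate + average, drop samples where
    |slope| < T, merge runs of equal sign, count the positive runs."""
    power = boxcar(x.astype(float) ** 2, smooth_n)    # the "second graph"
    slope = boxcar(np.diff(power), smooth_n)
    signs = np.sign(slope[np.abs(slope) >= T])        # ignore weak slopes
    if signs.size == 0:
        return 0, power
    merged = signs[np.r_[True, np.diff(signs) != 0]]  # collapse equal neighbors
    return int(np.sum(merged > 0)), power

# toy check: two well-separated bumps -> two shocks
i = np.arange(1000)
x = np.exp(-(i - 300) ** 2 / 1800.0) + np.exp(-(i - 700) ** 2 / 1800.0)
n_shocks, power = count_shocks_by_sign(x, T=0.002)
```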

The problem is that T. Too low and I pick up noise, too high and I miss a transition that was too slow (which is maybe where I should bring timing back in to mitigate that?). It does work in 95% of the cases, but I'm aiming for better than 99%, and the wide variety of signals I get always kills my thresholds at some point.

>square amplitude
>low pass filter (just enough to get rid of HF noise)
>use threshold with hysteresis (have to fine tune hysteresis parameters by hand)
this is the simplest method
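The hysteresis part is a handful of lines (the hi/lo values here are arbitrary):

```python
import numpy as np

def hysteresis_count(env, hi, lo):
    """Count upward crossings of `hi` on an envelope, re-arming only
    after the envelope falls below `lo`, so ripple around either
    threshold can't double-count a shock."""
    count, armed, starts = 0, True, []
    for i, v in enumerate(env):
        if armed and v >= hi:
            count += 1
            starts.append(i)
            armed = False
        elif not armed and v < lo:
            armed = True
    return count, starts

# toy envelope: two bumps; the ripple around 0.5 in the first bump
# does not re-trigger because the envelope never drops below lo
env = np.array([0.0, 0.3, 0.6, 0.45, 0.55, 0.3, 0.1, 0.4, 0.7, 0.2, 0.1])
count, starts = hysteresis_count(env, hi=0.5, lo=0.2)
```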

Hysteresis is exactly what is used to solve this problem.

>It does work in 95% of the cases, but I'm aiming for better than 99%, and the wide variety of signals I get always kills my thresholds at some point.
You might be in over your head. If you know from theory what the shape of your shocks is you can use convolution filters on your signal to pick them right out. From my time in signal processing this is the highest SNR method of picking out peaks, but like I said you need to know the shape pretty accurately.
see
en.wikipedia.org/wiki/Matched_filter
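A matched filter really is just correlation with the known shape, i.e. convolution with its time reverse. A toy version, with a Gaussian template and noise level invented for the example:

```python
import numpy as np

def matched_filter(x, template):
    """Correlate the signal with the known shock shape (convolve with
    its time reverse). The output peaks where the signal best matches
    the template; unit-energy normalization keeps the scale sane."""
    h = template[::-1] / np.sqrt(np.sum(template ** 2))
    return np.convolve(x, h, mode="same")

rng = np.random.default_rng(2)
template = np.exp(-(np.arange(61) - 30.0) ** 2 / 60.0)
x = 0.3 * rng.normal(size=500)      # noise floor
x[220:281] += template              # one buried shock, center 250
y = matched_filter(x, template)
peak = int(np.argmax(y))
```

As the post says, this only delivers its SNR advantage if the template really matches; a stretched or squished shock degrades it quickly.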

Alternatively you can brute force a convolution with a bunch of different shapes and go for a wavelet transform kind of thing. Maybe that could make things more clear.
see
en.wikipedia.org/wiki/Wavelet_transform

What you described (with hysteresis) is the approach I've historically been using, but it doesn't work as well as I'd like due to that threshold problem merging the shocks.

>Hysteresis is exactly what is used to solve this problem.
Hysteresis prevents the effect of noise around the threshold, but doesn't solve the merging problem. I can have an "upgoing" threshold and a "downgoing" one, but there's still no way to place them so that they cover both cases?


Convolution is obviously the best way to go, but not only is the signal very different depending on the environment, but it can also be squished or stretched in time by a factor up to 3 or 4, and I don't see any obvious way to normalize it (and I can't learn on the signal on a scale larger than what's in the pictures). This stretching is the main reason I dropped that approach.

>Convolution is obviously the best way to go, but not only is the signal very different depending on the environment, but it can also be squished or stretched in time by a factor up to 3 or 4, and I don't see any obvious way to normalize it (and I can't learn on the signal on a scale larger than what's in the pictures).

If it's just temporal stretching then the wavelet transform is your friend. Going by what your data looks like, I would try convolving with a Gaussian or something similar (e.g. a derivative of a Gaussian). The wavelet transform then convolves your signal over a range of time stretchings that you give it (say 3 or 4 times) and outputs a 2D map where each line is the convolution of a Gaussian of a particular width with your signal.
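A brute-force version of this is easy to write directly, here with a Ricker ("Mexican hat") wavelet, i.e. the normalized second derivative of a Gaussian. The width range and wavelet length are arbitrary; a real package handles normalization and boundary effects more carefully:

```python
import numpy as np

def ricker(points, a):
    """Ricker ("Mexican hat") wavelet with width parameter a."""
    t = np.arange(points) - (points - 1) / 2.0
    amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return amp * (1 - (t / a) ** 2) * np.exp(-t ** 2 / (2 * a ** 2))

def cwt(x, widths, points=101):
    """Poor man's continuous wavelet transform: one row per trial
    width, each row the signal convolved with a wavelet of that width."""
    return np.vstack([np.convolve(x, ricker(points, a), mode="same")
                      for a in widths])

# the same bump stretched 3x in time responds strongest at a ~3x larger scale
i = np.arange(600)
narrow = np.exp(-(i - 150.0) ** 2 / (2 * 10.0 ** 2))
wide = np.exp(-(i - 450.0) ** 2 / (2 * 30.0 ** 2))
widths = np.arange(2, 50)
coefs = cwt(narrow + wide, widths)
best_narrow = widths[np.argmax(np.abs(coefs[:, 150]))]
best_wide = widths[np.argmax(np.abs(coefs[:, 450]))]
```

Reading off which row (scale) responds strongest at each time is what makes this robust to the 3-4x stretching.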

paos.colorado.edu/research/wavelets/bams_79_01_0061.pdf
This paper was pretty instructive on how to do wavelet transforms, though there is probably a package to do the same thing nowadays.

I've been reading into it since your last post. It could indeed be a very powerful tool if I manage to make it output something useful. Thanks for the in-depth link; I'll probably need a custom implementation anyway, since the next step will be using it in a very constrained embedded environment.

>using it on a very constrained embedded environment
That might be a problem. I recall my implementation of a wavelet transform based on that paper being quite slow (>3 orders of magnitude slower than an FFT). I've read it is possible to produce implementations that are comparable in speed to FFT but have no idea how difficult it is.

Hope this doesn't sink too much of your time into a dead end.

I wrote a program to do exactly what you are trying to do. It captured spikes, bursts, and I think cycles.

I'd be quite interested if you offered to save me a few weeks with it

Convolving with the Gaussian... try an exponentially modified Gaussian to account for the clear tailing.
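The standard EMG density, for use as a template; the parameter values below are placeholders, and math.erfc keeps it dependency-free (scipy.stats.exponnorm would do the same):

```python
import math
import numpy as np

def emg(t, mu, sigma, lam):
    """Exponentially modified Gaussian: a Gaussian onset (mu, sigma)
    with an exponential tail of decay rate lam, matching shocks that
    rise sharply and die off slowly."""
    arg = (lam / 2.0) * (2.0 * mu + lam * sigma ** 2 - 2.0 * t)
    erfc_part = np.array([math.erfc(v) for v in
                          (mu + lam * sigma ** 2 - t) / (math.sqrt(2) * sigma)])
    return (lam / 2.0) * np.exp(arg) * erfc_part

t = np.linspace(0, 20, 400)
template = emg(t, mu=5.0, sigma=0.8, lam=0.5)
```

Because it is a proper density, the template integrates to one; normalize its energy before using it in a matched filter.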

Could you use a neural network and train it with enough samples to detect a burst shape?

I've been considering it too, especially since it could also help with my overall goal (after I spot the shocks, I then look for a pattern in these plus additional information). The chief obstacle is that input data is not easy to come by. I have at the very most a few hundred of these kinds of samples; I don't think it'll be enough.

I would love to, but we unfortunately met on Veeky Forums and technically I wrote the program for someone else.

But seeing that other people need the same type of program, I might try to convince that person to make it open source after they finish their research.

In the meantime, I am too lazy to google, but I think there was a nice Python package made by blue water online for free. You might want to skim through the documentation to see if it could do what you are looking for. It uses calls to C/C++ code, so it should be pretty fast, I think.

>neural networks
>on 1D data
Time-consuming and inefficient. You should use neural networks only when the data is complicated enough that there is no way to represent it mathematically. Good for image processing; graph analysis, not so much.

Lying ass bitch - you ain't done shit beyond a bit of R scripting for averages.

I have a big benis :D