Revisiting Hum Filters

I’m in a relatively noisy environment when it comes to mains hum, having overhead power lines nearby. So any ELF/VLF receiver I put together will have to deal with this.

To get an idea of what kind of filtering I might need I simply put a jack plug with bare terminals into the mic in of a laptop, held on to it to make myself an antenna, and recorded for half a minute using Audacity. Here’s a snippet of the resulting waveform:


Yup, that is one well-distorted sine wave. Reminds me of the waveform going to bulbs from triac-based dimmers, though I haven’t any in the house.

More usefully, here’s the spectrum plot (Audacity rocks!):


There’s a clear peak at 50Hz. Next highest is at 150Hz, the 3rd harmonic. It’s around 12dB down, which (assuming it’s the voltage ratio being shown, ie. 20*log10(V2/V1)) is 1/4 of the voltage. Next comes 100Hz, the 2nd harmonic, about 30dB down, about 1/32 of the voltage (from ratio = 10^(dB/20)).
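
The dB-to-amplitude conversion used above is easy to sanity-check in a few lines of Python:

```python
import math

def db_to_ratio(db):
    """Convert a (negative) dB figure to a voltage/amplitude ratio."""
    return 10 ** (db / 20)

print(db_to_ratio(-12))  # 3rd harmonic: ~0.25, i.e. about 1/4 of the fundamental
print(db_to_ratio(-30))  # 2nd harmonic: ~0.032, about 1/32
```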

(I’m in Italy where like most of the world the mains AC frequency is 50Hz. In the Americas it tends to be 60Hz).

So I reckon I definitely need to cut the 50Hz as much as possible, probably 150Hz too.

Digital filters are relatively straightforward to implement in software, but here there’s a snag. The incoming signal is analog, so will need to go through an ADC. The ELF/VLF signal of interest is likely to be of very small amplitude compared to the mains hum. So using say a 16-bit ADC, capturing the whole signal at maximum resolution, it’s conceivable that the interesting signal only occupies a couple of bits, or maybe even be below a single bit. So really the filtering has to happen in the analog chain, before the ADC. Experimentation will be needed, but I imagine a setup like this will be required:
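
To put some numbers on the ADC worry (the figures here are hypothetical, just to illustrate the dynamic-range problem): if the hum nearly fills a 16-bit converter and the wanted signal is, say, 80dB below it:

```python
import math

# Hypothetical figures: mains hum nearly fills a 16-bit ADC's range,
# while the signal of interest is assumed to be 80 dB below it.
adc_bits = 16
full_scale_counts = 2 ** (adc_bits - 1)   # signed range: 32768 counts
signal_db_below_hum = 80

signal_counts = full_scale_counts * 10 ** (-signal_db_below_hum / 20)
print(signal_counts)            # ~3.3 counts of the converter's range
print(math.log2(signal_counts)) # i.e. under 2 bits of effective resolution
```

Hence the analog filtering before the ADC: knock the hum down first, and the wanted signal can use far more of the converter’s range.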


There are a few options for the kind of receiver to use, essentially coil-based (magnetic component of the radio wave) or antenna (electrical component), the nature of the early circuitry and pre-amp will be dependent on this. But the main role of the pre-amp is to boost the signal well above the noise floor of subsequent stages (using low-noise components). At a first guess, something in the region of x10 – x100 should be adequate.

Next comes the filter(s). Now the fortunate thing here is that the ELF/VLF frequency ranges I’m considering, say 5Hz-20kHz are pretty much the audio frequency ranges and are thus within the scope of standard audio components. Well, 5Hz is below the nominal 20Hz-20kHz figures given for audio, but the key thing is that at the high end, it’s nowhere near anything requiring exotic components. Even the humble 741 op-amp (dating from 1968) has a unity-gain bandwidth around 1MHz. For the TL071 family, a reasonable low-cost default these days it’s 3MHz.

One option for filtering the mains hum out would be to use a high pass filter and only look at the higher end of VLF (conversely, a low pass filter and go for ELF). But notch (band stop) filters can be pretty straightforward, so it should be productive to target just the 50Hz (and maybe 150Hz).

(A more exotic approach would be to use something like an analog bucket-brigade delay line, as used in many analog phaser & flanger audio effects boxes, with its delay fixed at the period of the 50Hz fundamental. Mixing this, inverted, with the non-inverted input signal will cause cancellation at the fundamental and all its harmonics, i.e. a comb filter. But not only does that seem overkill here, it would in effect degrade the signal of interest).
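
For what it’s worth, that comb behaviour is easy to demonstrate digitally – a toy Python sketch (sample rate and test frequencies chosen arbitrarily):

```python
import math

fs = 1000          # sample rate, Hz
N = fs // 50       # delay of one 50 Hz period = 20 samples

def comb(x):
    # y[n] = x[n] - x[n - N]: nulls at 50 Hz and all its harmonics
    return [x[n] - x[n - N] for n in range(N, len(x))]

def tone(f):
    return [math.sin(2 * math.pi * f * n / fs) for n in range(fs)]

print(max(abs(v) for v in comb(tone(50))))   # ~0: fundamental cancelled
print(max(abs(v) for v in comb(tone(150))))  # ~0: 3rd harmonic cancelled too
print(max(abs(v) for v in comb(tone(35))))   # large: a non-harmonic passes
```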

There are a few alternatives for notch filters. While they can be built from passive components, there are significant benefits to using active components, especially in terms of controlling the parameters. For these reasons and circuit simplicity, op-amps are a good choice over discrete components.

There are three leading candidate circuit topologies, as follows.

Active Twin-T Notch

This classic passive circuit is the starting point.


The notch frequency is given by fc = 1 / (2 pi R C)
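
As a worked example for a 50Hz notch, taking the usual approach of picking a convenient capacitor value and solving for R (the 100nF choice here is just for illustration):

```python
import math

fc = 50.0        # notch frequency, Hz
C = 100e-9       # pick a convenient capacitor value: 100 nF

# From fc = 1 / (2 pi R C):
R = 1 / (2 * math.pi * fc * C)
print(R)         # ~31.8 kOhm (e.g. a 30k fixed resistor plus a pot)

# The classic twin-T also needs the other two legs of the bridge:
print(R / 2)     # ~15.9 kOhm
print(2 * C)     # 200 nF
```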

This assumes a low impedance source for Vin and a high impedance load connected to Vo, which can easily be achieved using op-amp buffers. One drawback of this setup is that its selectivity, the slope of the sides of the notch, is fairly poor. This can be significantly increased by using op-amps to bootstrap the T:


The notch frequency is determined as for the grounded T above, only this time the Q/selectivity can be varied, according to the values of R4 and R5.

But a troublesome problem remains: all 6 components on which the frequency depends have to have precise values to place the notch where required. Any variation is likely to lead to a sloppy notch, of low Q. While 1% tolerance resistors are the norm these days, capacitors tend to have tolerances more like 5 or 10%. One option is to use reasonably well-matched capacitors (from the same batch) and vary the resistors. But this still leaves 3 variables, with some level of interdependence.

(I’ve actually got this one on a breadboard at the moment. For a one-off circuit it isn’t unreasonable to use resistors a little below the calculated values in series with pots, then once fine-tuned, replace them with fixed values).

Bainter Notch

This is quite a nifty circuit (and new to me). The main benefits are described in the title of Bainter’s own paper: Active filter has stable notch, and response can be regulated. Notch depth depends on gain, not on (passive) component values.


I’ve yet to play with this one, but it certainly shows promise. A downside is that the component values calculation is rather unwieldy. Another ref. is this TI doc: Bandstop filters and the Bainter topology.

State Variable Filter

The State Variable topology is very versatile, offering high- and low-pass outputs as well as bandpass. By mixing the high- and low-pass outputs or the input with the bandpass output, a notch can be achieved. Crucially the gain, center frequency, and Q may be adjusted separately. A bonus compared to the Twin-T is that the frequency is determined by just 2 resistors and 2 capacitors. A few days ago I stumbled on a tweaked version of the standard topology which offers a few advantages. I won’t go into details here, it’s all described in the source of this diagram – Three-op-amp state-variable filter perfects the notch.
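
For the standard state-variable topology (the tweaked version in the linked article may differ in detail), the centre frequency is f0 = 1/(2π√(R1·R2·C1·C2)), so a quick check in Python:

```python
import math

# Standard state-variable filter centre frequency, set by just
# 2 resistors and 2 capacitors. (The tweaked three-op-amp version
# referenced in the text may differ in detail.)
def svf_f0(R1, R2, C1, C2):
    return 1 / (2 * math.pi * math.sqrt(R1 * R2 * C1 * C2))

# With equal parts this reduces to 1/(2*pi*R*C):
print(svf_f0(31.8e3, 31.8e3, 100e-9, 100e-9))  # ~50 Hz
```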


Once I’ve played with the Twin-T a bit more, I’ll have a go with this one. I have a good feeling about it.


Human Impact on Radio Nature

I’ve stumbled on two pieces of info related to this in the past couple of days so reckon it’s worth making a note.

The first is NASA’s Van Allen Probes Spot Man-Made Barrier Shrouding Earth, actually about high-powered VLF transmitters for ground-to-submarine comms, probably affecting the near-space environment. “A number of experiments and observations have figured out that, under the right conditions, radio communications signals in the VLF frequency range can in fact affect the properties of the high-energy radiation environment around the Earth”. The main reference paper is behind a paywall, so little detail is at hand.

The second is Why I Quit Natural Radio, a post by a Radio Nature enthusiast who’s been monitoring ELF/VLF for decades, noting a massive drop-off in the ‘interesting’ natural signals he receives. He suggests the cause may be the rise in the use of mobile phones, with the associated UHF/microwave emissions (‘Cellular frequencies‘) affecting the magnetosphere and/or ionosphere and thus impacting VLF propagation. A term he’s coined is rather disturbing: ‘electromagnetic smog’.

He also refers to HAARP, a research system (and favourite of conspiracy theorists) that has historically blasted the ionosphere with high power HF. According to official sources it hasn’t been used for a long time. I have my doubts that relatively brief, localised high-energy signals like these would have any lasting impact – similar events might well occur in nature, and these natural systems tend to be very resilient.

Candidate Neural Network Architecture : PredNet

While I sketched out a provisional idea of how I reckoned the network could look, I’m doing what I can to avoid reinventing the wheel. As it happens there’s a Deep Learning problem with implemented solutions that I believe is close enough to the earthquake prediction problem to make a good starting point : predicting the next frame(s) in a video. You train the network on a load of sample video data, then at runtime give it a short sequence and let it figure out what happens next.

This may seem a bit random, but I think I have good justification. The kind of videos people have been working with are things like human movement or the motion of a car. (Well, I’ve seen one notable, fun, exception: Adversarial Video Generation is applied to the activities of Ms. Pac-Man). In other words, a projection of objects obeying what is essentially Newtonian physics. Presumably seismic events follow the same kind of model. As mentioned in my last post, I’m currently planning on using online data that places seismic events on a map – providing the following: event time, latitude, longitude, depth and magnitude. The video prediction nets generally operate over time on x, y with R, G, B for colour. Quite a similar shape of data.

So I had a little trawl of what was out there. There is a surprisingly wide variety of strategies, but one in particular caught my eye: PredNet. This is described in the paper Deep Predictive Coding Networks for Video Prediction and Unsupervised Learning (William Lotter, Gabriel Kreiman & David Cox from Harvard) and has supporting code etc. on GitHub. Several things about it appealed to me. It’s quite an elegant conceptual structure, which translates in practice into a mix of convnets/RNNs, not too far from what I anticipated needing for this application. This (from the paper) might give you an idea:


Another plus from my point of view was that the demo code is written using Keras on Tensorflow, exactly what I was intending to use.

Yesterday I had a go at getting it running.  Right away I hit a snag: I’ve got this laptop set up for Tensorflow etc. on Python 3, but the PredNet code uses Python 2. I didn’t want to risk messing up my current setup (took ages to get working) so had a go at setting up a Docker container – Tensorflow has an image. Day-long story short, something wasn’t quite right. I suspect the issues I had related to nvidia-docker, needed to run on the GPU.

Earlier today I decided to have a look at what would be needed to get the PredNet code Python3-friendly. Running the training script (Kitti is the demo data set) led straight to an error in Hickle. Nothing to lose, I had a look. “Hickle is a HDF5 based clone of Pickle, with a twist. Instead of serializing to a pickle file, Hickle dumps to a HDF5 file.” There is a note saying there’s Python3 support in progress, but the cause of the error turned out to be –

if isinstance(f, file):

file isn’t a thing in Python3. But the calling code was only passing a filename to this, so I just commented out the lines associated with the isinstance check. (I guess I should fix it properly and feed back to Hickle’s developer.)
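
Rather than commenting the check out, a Python 2/3-compatible test is only a few lines. This is a sketch (not actual Hickle code) of how an isinstance(f, file) line could be patched:

```python
import io

# On Python 2 the builtin `file` exists; on Python 3 it doesn't,
# and open() returns io.IOBase subclasses instead.
try:
    file_types = (file, io.IOBase)   # Python 2
except NameError:
    file_types = (io.IOBase,)        # Python 3

def is_file_object(f):
    """Replacement for the Python-2-only isinstance(f, file)."""
    return isinstance(f, file_types)

print(is_file_object(io.StringIO("x")))  # True: an open file-like object
print(is_file_object("data.h5"))         # False: just a filename
```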

It worked! Well, at least the training does. I’ve got it running in the background as I type. This laptop only has a very wimpy GPU (GeForce 920M) and it took a couple of tweaks to prevent near-immediate out-of-memory errors:

export TF_CUDNN_WORKSPACE_LIMIT_IN_MB=100
batch_size = 2 # was 4, at line 35 of the training script

It’s taken about an hour to get to epoch 2/150, but I did renice Python way down so I could get on with other things.

Seismic Data

I’ve also spent a couple of hours on the (seismic) data-collecting code. I’d foolishly started coding around this using Javascript/node, simply because it was the last language I’d done anything similar with. I’ve got very close to having it gather & filter blocks of data from the INGV service and dump them to a CSV file. But I reckon I’ll just ditch that and recode it in Python, so I can dump to HDF5 directly – it does seem a popular format around the Deep Learning community.

Radio Data

Yes, there’s that to think about too.

My gut feeling is that applying Deep Learning to the seismic data alone is likely to be somewhat useful for predictions. From what I’ve read, the current approaches being taken (in Italy at least) are effectively along these lines, leaning towards traditional statistical techniques. No doubt some folks are applying Deep Learning to the problem. But I’m hoping that bringing in radio precursors will make a major difference in prediction accuracy.

So far I have in mind generating spectrograms from the VLF/ELF signals. Which gives a series of images…sound familiar? However, I suspect that there won’t be quantitatively all that much information coming from this source (though qualitatively, I’m assuming it’s vital). As a provisional plan I’m thinking of pushing it through a few convnet/pooling layers to get the dimensionality way down, then adding that data as another input to the PredNet.
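
The spectrogram step itself is simple enough to sketch. Here’s a toy, pure-Python version (a naive DFT – a real pipeline would use numpy/scipy’s FFT) just to show the shape of the data the convnet layers would consume:

```python
import math

def spectrogram(samples, fs, frame_len=256):
    """Naive magnitude spectrogram: split into frames, DFT each one.
    Illustrative only - a real pipeline would use an FFT library."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    out = []
    for frame in frames:
        mags = []
        for k in range(frame_len // 2):    # bins up to Nyquist
            re = sum(s * math.cos(2 * math.pi * k * n / frame_len)
                     for n, s in enumerate(frame))
            im = sum(-s * math.sin(2 * math.pi * k * n / frame_len)
                     for n, s in enumerate(frame))
            mags.append(math.hypot(re, im))
        out.append(mags)
    return out   # one row of frequency bins per time frame

fs = 8000
tone = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(1024)]
spec = spectrogram(tone, fs)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak_bin * fs / 256)   # ~1000 Hz: the tone shows up as a bright row
```

The result is a time × frequency grid per window – effectively the image sequence the network would see.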

Epoch 3/150 – woo-hoo!


It was taking way too long for my patience, so I changed the parameters a bit more:

nb_epoch = 50 # was 150
batch_size = 2 # was 4
samples_per_epoch = 250 # was 500
N_seq_val = 100 # number of sequences to use for validation

It took ~20 hours to train. The evaluation run has produced some results, but it also exited with an error code. I’m a bit too tired to look into it now, but am very pleased to get a bunch of these:




Morse Code Practice Key

Recently I’ve been trying to fill in some of the massive holes in my knowledge about radio. For this reason I’d quite like to have a go at the radio amateur license exams. No idea how to go about it though, given the geographic complications. I suppose I might stand a chance with the Italian version, as long as I could take a dictionary with me. Anyone know anything about this?

Anyhow, even though it’s rather anachronistic (and no longer a requirement for the exams), the radio amateur sites all have some mention of Morse Code. I’ve always wanted to learn it, I guess from watching too many spy films. There are loads of things I should be doing, but yesterday’s procrastination was making this gadget. Good fun.

Hardware Delusions

The key front-end sensors I wish to build for this project are an ELF/VLF radio receiver and a seismometer. The frequency ranges of interest here are < 20kHz, in other words, in the audio range (and probably extending a little lower).

As it happens, in a past life I studied audio frequency electronics, transducers, signals and systems, DSP and so on formally for 3 years, have written articles on (nonlinear) analog electronics for a magazine, and probably more significantly have been an electronic music hobbyist for around 40 years. In short, I consider myself something of an expert in the field. The word for this is hubris.

I started planning the sensors like a bull at a gate. On the seismic side, I hadn’t really thought things through very well. On the radio side – I’d only really skimmed Radio Nature, and my knowledge of radio reception is minimal. Since then, the flaws in my ideas have poured out.

Seismic Errors

I’ve got a design for seismic signal sensors roughed out. While a magnet & coil is a more traditional way of detecting audio frequency deflections, I thought it would be neater somehow to use semiconductor Hall effect devices. A standard design for a proximity detector is one of these components (which are housed much like transistors) backed by a magnet. When something like a piece of iron passes by, the magnetic flux varies and hence so does the output of the device (linear-output devices are available).

So for my seismometer, the moving part will be a steel ball bearing on a spring, hanging in a jar of oil (for damping). There will be 3 sensors located in the x, y & z directions (N-S, E-W, up-down) relative to this.

One potential complication with this setup had occurred to me. For a (relatively) linear response, the ball bearing would have to move in line with the face of the sensor. Obviously, in practice, most of the time the movement will be off-axis. However, my thinking went, there should still be enough information coming from all 3 sensors in combination to potentially determine the deflection of the ball bearing. The data produced by these sensors will ultimately go into a neural network system, and they’re good at figuring out peculiar relationships.

But I’d missed another potential source of problems, and it only came to me a couple of days ago. There is likely to be significant, pretty complex, interaction between the ball bearing and all 3 magnets. Whether or not this additional complication will be such that the directional seismic information is totally obfuscated remains to be seen. I plan to experiment; maybe I’ll need 3 independent sensors…


A little learning is a dang’rous thing. The danger I faced was wasting time & money in building a VLF receiver that simply couldn’t work.

I’d only skimmed the material, but something about the use of a coil as a receiver appealed to me. But the designs I’d seen were all pretty big, say around 1m in diameter. Hmm, thinks I, why not shrink that down to say 30cm and just boost the gain of the receiver circuit. It was only after I’d wound such a coil and picked up nothing but hum & noise that I got around to reading a little more.

It turns out there are two related issues involved: the way a small (relative to wavelength) loop antenna works isn’t exactly intuitive, and its output is very low. It’s frequency-dependent, but the level of the desired signal is of a similar order of magnitude to the thermal noise generated by the loop, less than that of many op amps. The good Signore Romero, author of Radio Nature, has a practical discussion of this in his description of A Minimal ELF Loop Receiver. (Being at the low end of the frequency range of interest makes this rather a worst-case scenario, but the points still apply). Basically there’s a good reason for having a big coil.

Another possible design flaw coming from my lack of learning is that I initially thought it would make sense to have coils in the x, y & z dimensions. As it turns out, because VLF signals are propagated as ground waves (between the surface of the planet and the ionosphere), pretty much all a coil in the horizontal plane will pick up is local noise such as mains hum. But I’m not yet discarding the inclusion of such a loop. Given the kind of neural net processing I have in mind, a signal that is comprised of little more than local noise may well be useful (in effect subtract this from the other signals).

But even having said all this, a loop antenna may still be of no use for me here – Noise Annoys. Renato has an image that nicely sums up the potential problem:


Right now I don’t have the funds to build a loop antenna of any description (big coils use a lot of wire!) but as and when I can, I’ll probably be looking at something along the lines of Renato et al’s IdealLoop (the image above comes from that page).

I do have the components to put together some kind of little portable whip antenna (electric field) receiver, I think I’ll have a look at that next, particularly to try and get an idea of how the noise levels vary in this locale.

I’ve also got one linear Hall effect sensor, so I can have a play around with that to try and get some idea of my seismometer design’s viability.


Provisional Graph

I’ve now located the minimum data sources needed to start putting together the neural network for this system. I now need to consider how to sample & shape this data. To this end I’ve roughed out a graph – it’s short on details and will undoubtedly change, but should be enough to decide on how to handle the inputs.

To reiterate the aim, I want to take ELF/VLF (and historical seismic) signals and use them to predict future seismic events.

As an overall development strategy, I’m starting with a target of the simplest thing that could possibly work, and iteratively moving towards something with a better chance of working.

Data Sources

I’ve not yet had a proper look at what’s available as archived data, but I’m pretty sure what’s needed will be available. The kind of anomalies that precede earthquakes will be relatively rare, so special-case signals will be important in training the network. However, the bulk of training data and runtime data will come from live online sources.

Seismic Data

Prior work (eg OPERA) suggests that clear radio precursors are usually only associated with fairly extreme events, and even those are only detectable using traditional means for geographically close earthquakes. The main hypothesis of this project is that Deep Learning techniques may pick up more subtle indicators, but all the same it makes sense to focus initially on more local, more significant events.

The Istituto Nazionale di Geofisica e Vulcanologia (INGV) provides heaps of data, local to Italy and worldwide. A recent event list can be found here. Of what they offer I found it easiest to code against their Atom feed, which gives weekly event summaries. (No surprise I found it easiest, I had a hand in the development of RFC4287 🙂 )

I’ve put together some basic code for GETting and parsing this feed.
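
For illustration, here’s a minimal sketch of the parsing side using only the standard library. The entry layout below is invented, and the real INGV feed will need its own field mapping (and an urllib fetch in place of the inline sample):

```python
import xml.etree.ElementTree as ET

# A made-up Atom sample standing in for the INGV weekly feed.
SAMPLE = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Weekly events</title>
  <entry>
    <title>M 3.2 - Central Italy</title>
    <updated>2017-07-01T12:00:00Z</updated>
  </entry>
</feed>"""

# Atom elements live in the RFC4287 namespace, so findall needs a map.
NS = {"atom": "http://www.w3.org/2005/Atom"}
feed = ET.fromstring(SAMPLE)
events = [(e.find("atom:title", NS).text, e.find("atom:updated", NS).text)
          for e in feed.findall("atom:entry", NS)]
print(events)   # [('M 3.2 - Central Italy', '2017-07-01T12:00:00Z')]
```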

Radio Data

The go-to site for natural ELF/VLF radio information is the one maintained by Renato Romero, who has a station located in northern Italy. The audio from this is streamed online (along with other channels) by Paul Nicholson. Reception, logging and some processing of this data are possible using Paul’s VLF Receiver Software Toolkit. I found it straightforward to get a simple spectrogram from Renato’s transmissions using these tools. I’ve not set up a script for logging yet, but I’ll probably get that done later today.

It will be desirable to visualise the VLF signal to look for interesting patterns and the best way of doing this is through spectrograms. Conveniently, this makes the problem of recognising anomalies essentially a visual recognition task – the kind of thing the Deep Learning literature is full of.

The Provisional Graph

Here we go –


CNN – convolutional neural network subsystem
RNN – recurrent neural network subsystem (probably LSTMs)
FCN – fully connected network (old-school backprop ANN)

This is what I’m picturing for the full training/runtime system. But I’m planning to set up pre-training sessions. Imagine RNN 3 and its connections removed. On the left will be a VLF subsystem and on the right a seismic subsystem.


In this phase, data from VLF logs will be presented as a set of labeled spectrograms to a multi-layer convolutional network CNN. VLF signals contain a variety of known patterns, which include:

  • Man-made noise – the big one is 50Hz mains hum (and its harmonics), but other sources include things like industrial machinery, submarine radio transmissions.
  • Sferics – atmospherics, the radio waves caused by lightning strikes in a direct path to the receiver. These appear as a random crackle of impulses.
  • Tweeks – these again are caused by lightning strikes but the impulses are stretched out through bouncing between the earth and the ionosphere. They sound like brief high-pitched pings.
  • Whistlers – the impulse of a lightning strike can find its way into the magnetosphere and follow a path to the opposite side of the planet, possibly bouncing back repeatedly. These sound like descending slide whistles.
  • Choruses – these are caused by the solar wind hitting the magnetosphere and sound like a chorus of birds or frogs.
  • Other anomalous patterns – planet Earth and its environs are a very complex system and there are many other sources of signals. Amongst these (it is assumed here) will be earthquake precursors caused by geoelectric activity.

Sample audio recordings of the various signals can be found at Natural Radio Lab, among other sites. They can be quite bizarre. The key reference on these is Renato Romero’s book Radio Nature – strongly recommended to anyone with any interest in this field. It’s available in English and Italian (I got my copy from Amazon).

So…with the RNN 3 path out of the picture, it should be feasible to set up the VLF subsystem as a straightforward image classifier.

On the right hand side, the seismic section, I imagine the pre-training phase being a series of stages, at least with: seismic data->RNN 1; seismic data->RNN 1->RNN 2. If you’ve read The Unreasonable Effectiveness of Recurrent Neural Networks (better still, played with the code – I got it to write a Semantic Web “specification”) you will be aware of how good LSTMs can be at picking up patterns in series. But it’s pretty clear that the underlying system behind geological events will be a lot more complex than the rules of English grammar & syntax. Still, I’m (reasonably) assuming that sequences of events, i.e. predictable patterns, do occur in geological systems. While I’m pretty certain that this alone won’t allow useful prediction with today’s technology, it should add information to the system as a whole in the form of probabilistic ‘shapes’. Work already done elsewhere would seem to bear this out (eg see A Deep Neural Network to identify foreshocks in real time).

Training & Prediction

Once the two subsystems have been pre-trained for what seems a reasonable length of time, I’ll glue them together, retaining the learnt weights. The VLF spectrograms will now be presented as a temporal sequence, and I strongly suspect the time dimension will have significance in this data, hence the insertion of extra memory in the form of RNN 3.

At this point I currently envisage training the system in real time using live data feeds.  (So the seismic sequence on the right will be time now, and the inputs on the left will be now-n). I’m not entirely sure yet how best to flip between training and predicting, worst case periodically cloning the whole system and copying weights across.

A more difficult unknown for me right now is how best to handle the latency between (assumed) precursors and events.  The precursors may appear hours, days, weeks or more before the earthquakes. While I’m working on the input sections I think I need to read up a lot more on Deep Learning & cross-correlation.

Reading online VLF

For the core of the VLF handling section of the neural nets, my current idea couldn’t be much more straightforward. Take periodic spectrograms of the signal(s) and use them as input to a CNN-based visual recognition system. There are loads of setups for these available online. The ‘labeling’ part will (somehow) come from the seismic data handling section (probably based around an RNN). This is the kind of pattern that hopefully the network will be able to recognise (the blobby bits around 5kHz):

“Spectrogramme of the signal recorded on September 10, 2003 and concerning the earthquake with magnitude 5.2 that occurred in the Tosco Emiliano Apennines, at a distance of about 270 km from the station, on September 14, 2003.” From Nardi & Caputo, A perspective electric earthquake precursor observed in the Apennines

It’ll be a while yet before I have my own VLF receiver set up, but in the meantime various VLF receiver stations have live data online. These can be listened to in a browser, e.g. Renato Romero’s feed from near Turin (have a listen!).

So how to receive the data and generate spectrograms? Like a fool I jumped right in without reading around enough. I wasted a lot of time taking the data over HTTP from the link above into Python and trying to get it into a usable form from there. That data is transmitted using Icecast, specifically using an Ogg Vorbis stream. But the docs are thin on the ground so decoding the stream became an issue. It appears that an Ogg header is sent once, then a continuous stream. But there I got stuck, couldn’t make sense of the encoding, leading me to look back at the docs around how the transmission was done. Ouch! I really had made a rod for my own back.

Reading around Paul Nicholson’s pages on the server setup, it turns out that the data is much more readily available with the aid of Paul’s VLF Receiver Software Toolkit. This is a bunch of Unixy modules. I’ve still a way to go in putting together suitable shell scripts, definitely not my forte. But it shouldn’t be too difficult, within half an hour I was able to get the following image:


First I installed vlfrx-tools (a straightforward source configure/make install, though note that on recent Ubuntu the prerequisite is libpng-dev, not libpng12-dev). Then I ran the following:

vtvorbis -dn,4415 @vlf15

– this takes Renato’s stream and decodes it into buffer @vlf15.

With that running, in another terminal I ran:

vtcat -E30 @vlf15 | vtsgram -p200 -b300 -s '-z60 -Z-30' > img.png

– which pulls out 30 seconds from the buffer and pipes it to a script wrapping the Sox audio utility to generate the spectrogram.