
LUP Student Papers

LUND UNIVERSITY LIBRARIES

Acoustic Emission Localisation in Wind Turbine

Olsson, Philip LU (2025) BMEM05 20251
Department of Biomedical Engineering
Abstract
This work investigates single-sensor localization of acoustic emissions (AEs) along a wind turbine blade with supervised neural networks. We evaluate six architectures: a Dense MLP, a 1D CNN on time-domain inputs, two 2D CNNs on linear spectrograms (LinSpec) and mel-spectrograms (MelSpec), a CRNN (2D CNN + LSTM), and RepVGG (Mini RepVGG in simulation, RepVGG-A0 in experiments). Three experimental datasets are used: Hex Bolt outside (HB), Centre Punch outside (CP), and Centre Punch inside (IB). Performance is reported as normalized mean absolute error (NMAE, the MAE divided by the dataset span, so 0.10 corresponds to roughly 10% of the span) and as the Pearson correlation r, which checks that predictions increase consistently with true distance.

In simulation, MelSpec models were most robust. The CRNN reached the lowest training NMAE but was not robust to dataset changes: its accuracy degraded when pulse width or noise increased. In experiments, the 2D CNN with MelSpec achieved the best single-dataset test NMAE on CP, HB, and IB (0.031, 0.060, 0.065) with high r. Trained on the combined dataset, RepVGG-A0 with MelSpec delivered the lowest test NMAE overall at 0.049 (4.9% of the 37 m span) and maintained high r across CP, HB, and IB. In cross-dataset evaluations, RepVGG was the only model to meet the 0.10 NMAE benchmark on any transfer (CP→HB: NMAE 0.081, r = 0.948). All of the main test evaluations met the 0.10 NMAE benchmark.

Overall, single-microphone AE localization is feasible. Among the tested combinations, RepVGG-A0 with MelSpec and diverse training data is the most promising for accuracy and robustness, while r complements NMAE by verifying consistent ordering of predictions along the blade. The CRNN, despite the strongest training fit in simulation, was not robust to waveform and noise changes. Prior throughput reports for RepVGG suggest real-time feasibility, though this work does not benchmark runtime.
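The two metrics used throughout the abstract can be sketched in a few lines of NumPy. The NMAE definition (MAE divided by the span of the true positions) and the 37 m span come from the abstract; the array values below are purely illustrative, not thesis data.

```python
import numpy as np

def nmae(y_true, y_pred):
    """Normalized MAE: mean absolute error divided by the span of the true values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    span = y_true.max() - y_true.min()
    return np.mean(np.abs(y_true - y_pred)) / span

def pearson_r(y_true, y_pred):
    """Pearson correlation: verifies predictions are ordered like the true distances."""
    return np.corrcoef(np.asarray(y_true, float), np.asarray(y_pred, float))[0, 1]

# Illustrative positions along a 37 m span (not the thesis measurements).
true_m = np.array([0.0, 9.25, 18.5, 27.75, 37.0])
pred_m = true_m + 1.85            # constant 1.85 m error along the blade
print(nmae(true_m, pred_m))       # 1.85 / 37 = 0.05, i.e. 5% of the span
print(pearson_r(true_m, pred_m))  # 1.0: perfectly consistent ordering
```

With this convention an NMAE of 0.049 on a 37 m span corresponds to a typical error of about 1.8 m, matching the figure quoted in the popular abstract.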
Popular Abstract
Finding the source of wind turbine blade “pings” with one microphone and AI

This project shows that a single, low-cost microphone plus machine learning can estimate where along a wind-turbine blade an acoustic “ping” came from, with typical errors of only about 5% of the blade length.


Wind-turbine blades sometimes make brief sounds when tapped or when small cracks start to grow. Finding where that sound came from matters for quick and safe inspections. Many current methods need several sensors or detailed physics models. This project tested a simpler idea: can one microphone and modern AI figure out the distance along the blade from the sound alone?

To find out, recordings were made on a full-size blade. The microphone, an iPhone 13, sat at the root. Short taps were made along the blade, both outside and inside. Each sound was turned into a picture that shows how energy changes over time and frequency. A machine-learning model was then trained to link those pictures to the true position of the tap.
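As a rough sketch of that sound-to-picture step, the snippet below turns a synthetic decaying “ping” into a dB-scaled spectrogram with SciPy. The 48 kHz sample rate, the ping parameters, and the window settings are illustrative assumptions, not values from the thesis (which also used mel-scaled spectrograms as model inputs).

```python
import numpy as np
from scipy.signal import spectrogram

fs = 48_000                       # assumed sample rate in Hz (illustrative)
t = np.arange(int(0.2 * fs)) / fs
# Synthetic "ping": an exponentially decaying 3 kHz tone plus faint noise.
ping = np.exp(-40 * t) * np.sin(2 * np.pi * 3000 * t)
ping += 1e-4 * np.random.default_rng(0).standard_normal(t.size)

# Time-frequency "picture": power spectrogram, then a dB scale so that
# both the loud onset and the quiet tail remain visible.
f, frames, Sxx = spectrogram(ping, fs=fs, nperseg=256, noverlap=128)
Sxx_db = 10 * np.log10(Sxx + 1e-12)

print(Sxx_db.shape)  # (frequency bins, time frames); 129 bins for nperseg=256
```

A model like the 2D CNN described in the abstract would then take such an image-like array as input and regress the tap position.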

The result: on held-out tests the system located events with errors around five percent of the measured span. For the 37-metre section used in evaluation, that is roughly 1.8 metres. We also checked that the ordering of predictions made sense compared with the true positions, so the model was not just close on average but also consistent along the blade.

Two sound patterns helped the model learn. First, dispersion in the blade: different frequencies travel at different speeds, so the “bright bands” in the time–frequency picture tilt and spread out with distance. This effect is the basis of several physics-based methods and inspired the approach here. Second, propagation changes with distance: close to the microphone there is often a quick airborne burst followed by a wave travelling through the blade, while farther away high frequencies fade and lower frequencies last longer.

Why this matters: with more varied training data the same approach could support fast screenings during maintenance using only one sensor at the blade root. A promising next step is a single system that both detects when a sound event happens and estimates where it came from.
author
Olsson, Philip LU
supervisor
organization
alternative title
Lokalisering av Akustiska Emissioner i Vindturbin
course
BMEM05 20251
year
type
H2 - Master's Degree (Two Years)
subject
keywords
AI, Machine Learning, Wind-turbine, Neural Networks, Non-destructive testing, Sound-source localisation
language
English
additional info
2025-18
id
9211996
date added to LUP
2025-09-11 13:40:47
date last changed
2025-09-11 13:40:47
@misc{9211996,
  abstract     = {{This work investigates single-sensor localization of acoustic emissions (AEs) along a wind turbine blade with supervised neural networks. We evaluate six architectures: a Dense MLP, a 1D CNN on time-domain inputs, two 2D CNNs on linear spectrograms (LinSpec) and mel-spectrograms (MelSpec), a CRNN (2D CNN + LSTM), and RepVGG (Mini RepVGG in simulation, RepVGG-A0 in experiments). Three experimental datasets are used: Hex Bolt outside (HB), Centre Punch outside (CP), and Centre Punch inside (IB). Performance is reported as normalized mean absolute error (NMAE = MAE divided by the dataset span), so 0.10 means about 10% of the span, and Pearson correlation r to check that predictions increase with true distance.

In simulation, MelSpec models were most robust. The CRNN reached the lowest training NMAE but was not robust to dataset changes, dropping when pulse width or noise increased. In experiments, the 2D CNN with MelSpec achieved the best single-dataset test NMAE on CP, HB, and IB (0.031, 0.060, 0.065) with high r. Trained on the combined dataset, RepVGG-A0 with MelSpec delivered the lowest test NMAE overall at 0.049 (4.9% of the 37m span) and maintained high r across CP, HB, and IB. In cross-dataset evaluations, RepVGG was the only model to meet the 0.10 NMAE benchmark on any transfer (CP->HB: NMAE 0.081, $r=0.948$). The 0.10 NMAE benchmark was met on the main test evaluations.

Overall, single-microphone AE localization is feasible. Among the tested combinations, RepVGG-A0 with MelSpec and diverse training data is the most promising for accuracy and robustness, while r complements NMAE by verifying consistent ordering of predictions along the blade. The CRNN, despite the strongest training fit in simulation, was not robust to waveform and noise changes. Prior throughput reports for RepVGG suggest real-time feasibility, though this work does not benchmark runtime.}},
  author       = {{Olsson, Philip}},
  language     = {{eng}},
  note         = {{Student Paper}},
  title        = {{Acoustic Emission Localisation in Wind Turbine}},
  year         = {{2025}},
}