Bernafon’s frequency lowering algorithm, Frequency Composition™, is designed to restore the audibility of high-pitched sounds for hearing aid users with severe high-frequency hearing loss. The implementation of Frequency Composition™ in Oasisnxt offers a wide range of settings to optimize the fitting for each user’s needs and preferences. However, it can be challenging to understand how these settings affect the signal at the hearing aid’s output. This post proposes a method for visualizing the effect of Frequency Composition™ with different settings on a speech signal, using spectrogram subtraction.
The supplementary materials included with this blog can be used to reproduce our results and extend this technique to your own use cases. You will also find a script to generate a clean audiogram with personalized annotations for your reports or documentation.
Fitting Frequency Composition™
Fitting a severe high-frequency hearing loss is a notoriously difficult case for any audiologist. Vinay & Moore (2007) showed that the probability of a cochlear dead region increases as hearing thresholds worsen. In such cases, the residual hearing might not benefit from the high-frequency acoustical cues produced by conventional amplification, because the impaired inner ear can no longer encode high frequencies. Frequency Composition™ is therefore designed to restore the audibility of phonemes such as fricatives by lowering information from a source range, in the higher frequencies, to a destination range with better residual hearing.
Above is a typical audiogram of a hearing aid user who could benefit from frequency lowering. The supplementary material contains the code to reproduce the audiogram and shows how to add specific annotations to the figure. The interval where dead regions are likely to occur is shown in red. This audiogram configuration suggests that phonemes like /s/ or /f/ might not be audible with conventional amplification; Frequency Composition™ could therefore be used to restore their audibility.
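As a rough sketch of how such an annotated audiogram can be generated (assuming matplotlib is available; the thresholds and the dead-region interval below are illustrative placeholders, not the values used in this post):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

# Illustrative sloping high-frequency loss (assumed values, dB HL)
freqs_hz = [250, 500, 1000, 2000, 4000, 8000]
thresholds_db_hl = [20, 25, 35, 60, 85, 95]

fig, ax = plt.subplots(figsize=(5, 4))
ax.plot(range(len(freqs_hz)), thresholds_db_hl, "o-", label="Air conduction")

# Shade the interval where dead regions are suspected (placeholder span)
ax.axvspan(3.5, 5.0, color="red", alpha=0.2, label="Likely dead region")

# Audiogram conventions: log-spaced frequency ticks, inverted dB HL axis
ax.set_xticks(range(len(freqs_hz)))
ax.set_xticklabels([str(f) for f in freqs_hz])
ax.set_ylim(120, -10)  # better hearing plotted at the top
ax.set_xlabel("Frequency (Hz)")
ax.set_ylabel("Hearing level (dB HL)")
ax.grid(True, linestyle=":")
ax.legend(loc="lower left")
fig.savefig("audiogram.png", dpi=150)
```

The inverted y-axis and the shaded span are the two details that most often trip up a first attempt; everything else is standard matplotlib.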
The challenge in this situation is to define the destination range, the amount of lowered signal, and what to do with the remaining high frequencies. Oasisnxt defines a default setting for all these parameters based on information derived from the audiogram. This default works well on average, but it might benefit from fine tuning based on the individual’s needs to improve the efficiency of the feature. The above-mentioned parameters are available in the fitting software and can be modified independently. The figure below shows some potential adjustments:
- Default settings: based on the implemented fitting rationale and the audiogram,
- Additional gain: if high frequencies are not audible, then you can increase the intensity of the lowered signal or,
- Use a lower destination range: the new destination range is now between 2.3 and 3.2 kHz,
- High frequency attenuation: if you suspect cochlear dead regions in the higher frequencies, then it might be worthwhile to reduce the amplification in the source range to reduce distortions.

All these fine-tuning possibilities must be evaluated individually with the hearing aid user to select the most appropriate settings. But do you know what the consequences are at the hearing aid’s output when you apply these changes? If you are not sure, we can help you get a clear understanding of how these changes affect a speech signal with a specific visualization method.
Visualizing how Frequency Composition™ affects speech
Visualizing the effect of Frequency Composition™ on a speech signal is a complex task because a single figure must capture many variables. Since speech is highly modulated in both the time and frequency domains, the chosen figure must preserve both dimensions. A spectrum, which averages the frequency-specific energy over time, is not satisfactory because it discards useful information localized in the time domain. We therefore use a spectrogram as the basic representation of the reference signal (in black and white, over time and frequency) and overlay the effect of frequency lowering in color. This lets us see where the modifications are applied in the time and frequency domains.
Here is a short description of the recording conditions. The test signal is the logatome /zas/ taken from the Oldenburg Logatome Corpus (Meyer et al., 2011), spoken by three different speakers (one male, two female). Recordings with and without Frequency Composition™ were made on an acoustical manikin, with the speech signal presented from the front at 65 dB SPL. The recording without Frequency Composition™ is shown here as a reference:
We compute a spectrogram for each aided recording, with and without Frequency Composition™. The spectrogram with Frequency Composition™ is subtracted from the reference spectrogram, yielding a time/frequency mask that represents the effect of frequency lowering. A color code makes the changes introduced by the algorithm easy to track: green for added signal and red for removed signal. You can listen to the recordings and try to connect what you hear with what you see in the figures:
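The subtraction step can be sketched in a few lines. The example below uses synthetic tones instead of the actual manikin recordings, and a hypothetical `stft_db` helper built on NumPy; it only illustrates the principle of computing a dB-difference mask in which positive values (green) mark energy added by lowering and negative values (red) mark energy removed from the source range:

```python
import numpy as np

def stft_db(x, nfft=512, hop=256, floor_db=-80.0):
    """Magnitude spectrogram in dB, shape (frames, freq bins)."""
    win = np.hanning(nfft)
    frames = []
    for start in range(0, len(x) - nfft + 1, hop):
        mag = np.abs(np.fft.rfft(x[start:start + nfft] * win))
        frames.append(20 * np.log10(np.maximum(mag, 1e-12)))
    return np.maximum(np.array(frames), floor_db)

fs = 16000
t = np.arange(fs) / fs
reference = np.sin(2 * np.pi * 6000 * t)  # energy in the source range (6 kHz)
lowered = np.sin(2 * np.pi * 3000 * t)    # same energy moved to 3 kHz

# Difference mask: processed minus reference, in dB
diff = stft_db(lowered) - stft_db(reference)
freqs = np.fft.rfftfreq(512, 1 / fs)

# Positive bins would be plotted green (added), negative bins red (removed)
added_hz = freqs[np.argmax(diff.mean(axis=0))]
removed_hz = freqs[np.argmin(diff.mean(axis=0))]
print(round(added_hz), round(removed_hz))
```

For a figure like the one in this post, the reference spectrogram would be drawn in grayscale and `diff` overlaid with a diverging green/red colormap.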
Lower destination range
This representation has the advantage of showing, in a single illustration, both the reference signal and the effect of frequency lowering on each individual phoneme. You can see that energy is added mainly for the phonemes /z/ and /s/ in the destination range, and that changes made in the fitting software are easily identified:
- adding intensity to the lowered signal is illustrated with more intense green coloration in the destination range,
- attenuating high frequencies is shown with the red part in the source range,
- choosing a lower destination range (2.3 – 3.2 kHz) is represented by a shift of the green band which matches the selected interval.
This evaluation is essential for understanding the impact of changing the settings of Frequency Composition™. However, it does not replace traditional validation with phonemic tests and tests in daily-life situations. You will find all the information and materials needed to reproduce the illustration and the recordings in the folder linked to this blog. You can also make your own recordings and apply the function to obtain a similar figure. The folder also contains a poster, presented at the 2013 Annual Scientific and Technology Conference of the American Auditory Society (AAS), summarizing the differences between manufacturers using this visualization technique.
All supplementary materials are available in this zip folder.
Bisgaard, N., Vlaming, M. S. M. G., & Dahlquist, M. (2010). Standard Audiograms for the IEC 60118-15 Measurement Procedure. Trends in Amplification, 14(2), 113–120. https://doi.org/10.1177/1084713810379609
Meyer, B. T., Brand, T., & Kollmeier, B. (2011). Effect of speech-intrinsic variations on human and automatic recognition of spoken phonemes. The Journal of the Acoustical Society of America, 129(1), 388–403. https://doi.org/10.1121/1.3514525
Vinay, V., & Moore, B. C. J. (2007). Prevalence of Dead Regions in Subjects with Sensorineural Hearing Loss. Ear and Hearing, 28(2), 231–241. https://doi.org/10.1097/aud.0b013e31803126e2