Dr. Marshall Chasin, the Canadian audiologist, author and educator, and I were invited to write a chapter about hearing aids and musicians, aimed at non-audiologists, for the Springer Handbook of Systematic Musicology. Now that it is in print, I would like to write a little about a topic that has interested me since I began my work on music and hearing aids in 2010 (Hockley et al., 2010).
Musicology is the study of music, its production and perception, and its cultural, historical and philosophical background. This handbook is a very large reference volume of over 1000 pages covering many topics such as musical acoustics, psychophysics, signal processing and cognition, to name but a few. It is available as an ebook, and it is also possible to download the chapters individually, which is convenient. The handbook is primarily directed at researchers and students; however, anyone with an interest in music could find it an incredible source of information.
It is only in the last few years that music has become more of a topic in the hearing aid world, despite the importance of music in many people's lives. Internationally, projects such as Hearing Aids for Music, led by Dr. Alinka Greasley and her colleagues in the United Kingdom, have been undertaken to explore the impact that hearing loss and hearing aid technology have on music listening. Like the Springer Handbook, this project is directed at a multidisciplinary audience drawn from a variety of fields, all with different experiences to share.
In our book chapter, Marshall Chasin and I begin with a very brief explanation of the nature of hearing loss and how hearing is assessed. After a brief discussion of the need for tools to directly assess music perception with a hearing loss, we describe the acoustic differences (and similarities) between music and speech as inputs to a hearing aid. One of the biggest differences between music and speech is level. Live music is much more intense than speech, even when it is judged to be quiet: speech typically does not exceed 80 dB(A), whereas a trumpet, for example, can peak between 88 and 108 dB(A). After discussing the acoustics of speech and music, we move on to practical clinical strategies for handling music as an input to a hearing aid, and then to what can be done with technology within the hearing aid itself to accommodate music. This includes shifting where the dynamic range of the analog-to-digital converter operates, so that it can better respond to intense musical signals. We then briefly discuss adjustments that can be made within a dedicated music program in the hearing aid, including the compression settings, the amplification bandwidth and, finally, the need to disable automatic systems that may negatively affect music as an input signal. We conclude that much work remains to be done on music and hearing aids, and we call for more collaboration between audiology and musicology to explore what can be done for people with a hearing impairment who are musicians or music enthusiasts.
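To make the level difference concrete, here is a minimal sketch (my own illustration, not from the chapter) of why an analog-to-digital converter set up for speech can be overloaded by live music. The 80 dB(A) speech ceiling and the 88 to 108 dB(A) trumpet peaks come from the text above; the 96 dB(A) converter limit is a purely hypothetical number chosen for illustration.

```python
# Hypothetical upper limit of the ADC's input dynamic range, in dB(A).
# Inputs above this level are clipped and distorted.
ADC_CEILING_DB = 96.0

def clips(peak_db, ceiling_db=ADC_CEILING_DB):
    """Return True if an input peak exceeds the converter's usable range."""
    return peak_db > ceiling_db

# Speech peaks (~80 dB(A)) fit comfortably below the ceiling...
print(clips(80.0))   # False
# ...but a loud trumpet peak (108 dB(A)) would be clipped,
# unless the converter's operating range is shifted upward.
print(clips(108.0))  # True
```

Shifting where the converter's dynamic range sits, as described above, is equivalent to raising this ceiling so that intense musical peaks pass through undistorted.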
It was a great opportunity to work on this project with Marshall Chasin for Springer-Verlag. I am especially excited that this work was published within a reference book on musicology, a subject area not typically associated with audiology. I hope that this chapter creates some curiosity about the subject of hearing aids and music in people who consult this handbook.
Chasin, M., & Hockley, N. S. (2018). Hearing Aids and Music: Some Theoretical and Practical Issues. In R. Bader (Ed.), Springer Handbook of Systematic Musicology (pp. 841-853). Berlin, Heidelberg: Springer-Verlag.
Hockley, N. S., Bahlmann, F., & Chasin, M. (2010). Programming hearing instruments to make live music more enjoyable. The Hearing Journal, 63(9), 30–38.