The hearing organ sustains sensory transduction across an extraordinary dynamic range of greater than 100 dB (a 10¹⁰ range in intensity), making the cochlea a premier biosensor. This performance spans the age-old adage of ‘hearing a pin drop’ and the unsubtle new-age phenomenon of earbud-delivered music-on-the-go. The ‘pin-drop’-level acuity derives from the outer hair cell-based ‘cochlear amplifier’, which tunes and enhances the micromechanical forces that activate the inner hair cell mechano-electrical transducer channels. A remarkable feature of cochlear function is that this exquisite sensitivity can be regulated and protected over our lifetime. Further, this tuning is regulated in real time via cholinergic efferent feedback to the outer hair cells. Such efferent regulation enables un-masking of acoustic signatures from background noise, largely via contralateral suppression: sound entering the opposite cochlea activates medial olivocochlear (MOC) neurons of the crossed olivocochlear efferent bundle (CoCB), whose fibres cross the tunnel of Corti in the ipsilateral organ of Corti to innervate the outer hair cells. The recruitment of these efferent fibres and the suppression of outer hair cell transduction operate over a range of time courses, with the most potent effect in the millisecond-to-second range, to effect dynamic modulation of hearing.
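As a quick check on the quoted figures, the standard decibel relation for sound intensity, L = 10 log₁₀(I/I₀), links the >100 dB dynamic range to the 10¹⁰ intensity range. A minimal sketch (function names are illustrative, not from the study):

```python
import math

def db_to_intensity_ratio(level_db: float) -> float:
    """Convert a level difference in dB to a linear intensity ratio."""
    return 10 ** (level_db / 10)

def intensity_ratio_to_db(ratio: float) -> float:
    """Convert a linear intensity ratio to a level difference in dB."""
    return 10 * math.log10(ratio)

# A 100 dB dynamic range corresponds to a 10^10 range in intensity.
print(db_to_intensity_ratio(100))  # prints 10000000000.0, i.e. 10^10
```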
In comparative studies in the mouse model, we assessed the regulation of sound transduction via neurohumoral signalling. In experiments undertaken following protocols approved by the UNSW animal care and ethics committee, we evaluated the contribution of contralateral suppression to the adaptation of hearing sensitivity to acute high-level noise (∼85 dB; < 1 min). This involved measurement of distortion product otoacoustic emissions (DPOAEs) to monitor the loss of sensitivity of the cochlear amplifier following noise, in the presence and absence of contralateral suppression. Our data indicated a significant attenuation of ipsilateral hearing adaptation in the seconds-to-minutes time domain when ipsilateral noise was coincident with contralateral suppression. Thus, contralateral suppression during noise accelerated the recovery of sensitivity between noise intervals. This contrasted with cochlear adaptation to an elevated noise floor, where we undertook a series of assessments of hearing sensitivity over time (> 1 h) at a sustained sound level above the saturation point for MOC-derived efferent drive to the outer hair cells (85 dB SPL). These experiments showed that loss of hearing sensitivity developed progressively with a time course of approximately 20 min, evident from increases in auditory brainstem response thresholds under ketamine/xylazine/acepromazine anaesthesia. The loss in sensitivity (∼15 dB) due to this moderately high noise exposure was restored by 96 h under normal acoustic levels (∼55 dB SPL), reflecting a temporary threshold shift (TTS). This time course for adaptation was considerably slower than that of contralateral efferent suppression (which inactivates within one minute of sustained noise) and likely reflects second-messenger signalling that impacts the ‘cochlear amplifier’ (based on DPOAE measurements) and the synapses between inner hair cells and type I spiral ganglion neurons.
A primary candidate for this slower cochlear adaptation mode is purinergic signalling, arising from noise-induced ATP release.