
The results from Experiment 2 revealed that those with high
cognitive performance were better than those with low cognitive performance at
identifying and/or reporting the specific effects of a certain hearing aid
processing scheme. The results are in line with the notion that the person’s
reasoning ability, in this case to identify, analyze and give details of their
experiences of a certain type of signal processing, is dependent on cognitive function.

The results from Experiment 1 indicate that measures of
working-memory capacity and verbal information-processing speed correlate with
speech recognition in noise. The pattern of results was consistent with the
idea that when auditory processing becomes very difficult, because of an adverse
listening situation and a damaged cochlea, the individual’s cognitive function
influences performance to a high degree (Pichora-Fuller et al, 1995).


Differences in individual listeners’ cognitive function may
lead to differing abilities to perform with (and benefit from) hearing aids.
Experiments 1 and 2 were designed to investigate some possible relationships
between cognitive function and hearing aid use.


Ratings different
from zero may indicate identified differences between settings A and B. The ±95%
confidence intervals for the average differences A–B across all listening
situations were positive and significantly different from zero for subjects with
both high and low working-memory performance. This indicates that both subject
groups overall rated the speech-dependent setting higher than the reference setting.
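The interval check described here is a paired-samples computation; a minimal sketch in Python, using a normal approximation and invented rating differences (not the study's data):

```python
import math

def mean_ci95(diffs):
    """Return (mean, half-width of the ~95% CI) for paired A-B rating
    differences. Uses the normal approximation (z = 1.96); a t critical
    value would be more exact for samples this small."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    half = 1.96 * math.sqrt(var / n)
    return mean, half

# Invented per-subject ratings of setting A minus setting B
diffs = [1.0, 0.5, 2.0, 1.5, 0.0, 1.0, 2.5, 0.5, 1.0]
m, h = mean_ci95(diffs)
# The interval excludes zero when mean - half > 0, i.e. A is rated
# significantly higher than B at roughly the 5% level.
significant = (m - h) > 0
```

With these invented values the interval lies entirely above zero, mirroring the pattern reported for both subject groups.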



The reference setting was included to allow for direct
comparisons between A and B. The only difference between A and B was the
inclusion of the ‘comfort mode’ in setting A. Therefore, the differences that
were expected to be reported in the field test were differences in the way that
sounds were processed in the absence of speech.

 (B) Reference
setting: always process sounds in ‘speech mode’.

(A) Speech-dependent setting: process sounds with the speech
dependent signal processing (‘speech mode’ in the presence of speech, and
‘comfort mode’ in the absence of speech).
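The two settings amount to a per-frame mode decision. A minimal sketch follows; the speech detector and the mode-specific processing are placeholders, not specified by the study:

```python
def process_frame(frame, speech_detected, setting):
    """Setting 'A' switches to 'comfort mode' when no speech is detected;
    setting 'B' (the reference) always stays in 'speech mode'. The actual
    per-mode processing and the speech detector are placeholders here."""
    if setting == "B" or speech_detected:
        return ("speech mode", frame)
    return ("comfort mode", frame)

# In the absence of speech, only setting A changes its processing mode
mode_a, _ = process_frame([0.0] * 64, speech_detected=False, setting="A")
mode_b, _ = process_frame([0.0] * 64, speech_detected=False, setting="B")
```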

The experimental hearing aid had a program switch with two
settings, A and B, programmed as follows:

Experimental aids:

Figure 4 shows the mean HTLs
and ranges.





Subjects who had no previous experience of hearing aid use were
recruited from the patient population at the University Hospital of Linköping.
Forty-nine subjects were pre-tested with the reading span test. The results of
the test are shown in Figure 3. To maximize differences in working-memory
capacity, only those with especially high or especially low results were
invited to participate in the field test. Thus, about 25% of the subjects were
assigned as high cognitive performers (High-Cog), about 50% as normal
performers, and about 25% as low performers (Low-Cog). Nine subjects with high
working-memory performance and eight subjects with low performance participated
in the field test, nine females and eight males. Their average age was 66 years
(range 48–78 years).

Subjects and methods:

In Experiment 2, the relationship between working-memory
capacity and the ability to identify and report specific effects of an
experimental hearing aid in a field test was considered. The experimental aid
processed the sound differently depending on whether or not speech was
perceived as being present in the input signal.

 Cognitive function
and sensitivity to processing effects

Experiment 2:  Hearing
aids which process signals differently in situations dominated by noise



A correlation matrix was
calculated for the variables listed in Table 1, and is shown in Table 2. As can
be seen from Table 2, substantial correlations were found. The expected
correlations between age and the cognitive variables were found. As subjects
get older, cognitive performance becomes worse (higher age corresponds with
longer reaction time and lower working-memory capacity). Also, the expected
correlation between speech recognition in noise and hearing thresholds was found.
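The correlations reported in Table 2 are plain Pearson coefficients over the Table 1 variables; a sketch with invented placeholder values (not the study's data):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented variables for illustration: age, reaction time (ms),
# reading span score (%), and SNR for 40% correct recognition (dB)
data = {
    "age":      [48, 55, 60, 66, 70, 74, 78],
    "rt_ms":    [520, 560, 600, 640, 700, 730, 780],
    "span_pct": [55, 50, 46, 42, 38, 35, 30],
    "snr_db":   [-3.0, -2.5, -2.0, -1.0, -0.5, 0.0, 1.0],
}
names = list(data)
matrix = {a: {b: pearson(data[a], data[b]) for b in names} for a in names}
# Diagonal entries are exactly 1; with these values, higher age pairs with
# longer reaction time (positive r) and lower reading span (negative r),
# the qualitative pattern described for Table 2.
```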

Table 1 shows the average results
and standard deviations of the speech-in-noise test, as well as the cognitive
test results. Furthermore, data are shown for two variables that can be
expected to covary with the other variables, age and average hearing
threshold. Age should be expected to correlate with the cognitive tests (e.g.
Salthouse, 1982). Pure-tone average hearing threshold (average 250 Hz to 6 kHz)
should be expected to correlate with speech recognition in noise (e.g.
Hagerman, 1984; Humes et al, 1994; Moore, 1996). The average values for speech
recognition in noise correspond well with previously presented data (e.g.
Arlinger, Billermark, Öberg, Lunner & Hellgren, 1997). The cognitive results
are in reasonable agreement with previous studies (Lyxell et al, 1996). The age
and pure-tone average values are as expected for the chosen population (first-time
users at a Swedish hearing clinic).

Seventy-two first-time hearing aid users, 32
females and 40 males, with a mean age of 67 years (range 33–89 years), were
chosen from the patient population at the University Hospital of Linköping. The
subjects fulfilled the following criteria: they were all bilaterally fitted with
the same type of non-linear signal processing hearing aids (Oticon Digifocus),
and had used the aids for about 1 year at the time of testing. Figure 1 shows
the mean hearing threshold levels (HTLs) and ranges. All subjects had normal
vision, or normal vision with eyeglasses.

Subjects and hearing aids:

The outcome measure from this
test is the SNR yielding 40% correct recognition. The lower the SNR for 40%
correct, the better the performance.

Aided and unaided speech
recognition in noise was measured by determining the signal-to-noise ratio
(SNR) that yielded 40% correct recognition of test words, using lists of 10 low-redundancy
five-word sentences in an adaptive test procedure (the Hagerman sentences;
Hagerman & Kinnefors, 1995). Since the sentences had low redundancy,
subjects could not make use of context to guess the sentences. The speech
material, as well as the competing speech-shaped noise, was recorded on CD (see
Hagerman (1982) for details on the speech material and the speech-shaped noise)
and presented through a single frontally located loudspeaker in an audiometric
test room. The noise level was variable, and aided and unaided performance were
tested at a speech level of 70 dB SPL (C-weighted equivalent level).
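An adaptive track of this kind can be sketched as follows; the step rule, step size, starting SNR, and simulated listener are illustrative assumptions, not Hagerman & Kinnefors's exact parameters. Scoring more than 2 of 5 words correct makes the next sentence harder, fewer makes it easier, so the track converges at 40% words correct:

```python
def adaptive_snr_track(respond, n_sentences=20, start_snr=5.0, step_db=1.0):
    """Track the SNR giving 40% (2 of 5) words correct.

    `respond(snr)` returns how many of the five words the listener repeats
    correctly at that SNR. More than 2 correct lowers the SNR (harder);
    fewer than 2 raises it (easier), so the track converges where 2/5 =
    40% of the words are correct."""
    snr, track = start_snr, []
    for _ in range(n_sentences):
        correct = respond(snr)
        snr -= (correct - 2) * step_db
        track.append(snr)
    half = track[len(track) // 2:]          # discard the approach phase
    return sum(half) / len(half)

def fake_listener(snr):
    """Deterministic simulated listener: one extra word per 2 dB of SNR."""
    return max(0, min(5, 2 + int(snr // 2)))

srt40 = adaptive_snr_track(fake_listener)   # settles at 1.0 dB here
```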

 The subjects were given a rhyme judgment test,
in which they had to decide whether two simultaneously presented words rhymed.
The two rhyming words were orthographically dissimilar (e.g.
DAGS–LAX; Lyxell, Rönnberg & Samuelsson, 1994). The subjects were to
respond by pressing predefined buttons for ‘yes’ and ‘no’ answers. Half of the
pairs rhymed, and half did not.


Reading span is a working-memory
test designed to tax memory storage and processing simultaneously (Daneman
& Carpenter, 1980). The subjects’ task was to comprehend sentences and to
recall either the first or the final words of a presented sequence of sentences
(Baddeley et al, 1985). The words were presented in a word-by-word fashion, at
a rate of one word per 0.80 s. Half of the sentences were absurd (e.g. ‘The
train sang a song’), and half were normal (e.g. ‘The girl brushed her teeth’).
The subjects’ task was to respond ‘yes’ verbally (for a normal sentence) or
‘no’ verbally (for an absurd sentence) during a 1.75-s interval after each
sentence. After a sequence of sentences (three, four, five or six sentences in
ascending order), the word ‘First’ or ‘Final’ was displayed on the
screen, indicating that the subjects should start to recall either the first or
the final words of all the three, four, five or six sentences in the sequence.
The order (‘First’ or ‘Final’) was randomized. The performance measure was the
percentage of all presented target words that were correctly recalled.
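The scoring rule reduces to recalled target words over presented target words; a small sketch with invented responses:

```python
def reading_span_score(trials):
    """Each trial is (sentences_presented, target_words_recalled). One
    target word (first or final) is scored per presented sentence, so the
    score is recalled words as a percentage of all presented words."""
    presented = sum(n for n, _ in trials)
    recalled = sum(r for _, r in trials)
    return 100.0 * recalled / presented

# Illustrative session: sequences of 3, 4, 5 and 6 sentences in
# ascending order, with invented recall counts per sequence
score = reading_span_score([(3, 3), (4, 2), (5, 2), (6, 1)])
```

With 8 of the 18 target words recalled, the score here is about 44%.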


 The cognitive tests
were performed with an application developed for a PC (Rönnberg et al, 1989;
Hällgren, Larsby, Lyxell & Arlinger, 2001). Stimuli were presented on the
screen. The test words were presented in Swedish.



Working memory is believed to be a general
information-processing system. Thus, testing of cognitive function may be
performed by either visual or auditory presentation. By using a visual
presentation, we may avoid cognitive performance being affected by possible
extra-cognitive loading caused by the individual hearing impairment. If
significant correlations are found between the visual cognitive tests and the
speech test in noise, then it seems reasonable to surmise that cognitive function
is important for speech recognition in noise.

Experiment 1: Sentences in noise and cognitive function


In Experiment 2, we studied the relationship between
working-memory capacity and the ability to identify and report specific effects
of complex signal processing. This was tested with hearing aids that process
signals differently in situations dominated by noise. The local ethics
committee approved the two experiments.

Experiment 1 was an investigation of the possible link
between cognitive function and speech recognition threshold in noise (with and
without hearing aids).


2. The ability to identify and
report specific effects of complex signal processing in the hearing aid may
depend on cognitive function.

1. Sentence performance in noise
may be correlated with cognitive function.

The above listing is by no means
comprehensive, but is rather a sample of possible issues regarding interactions
between cognitive function and hearing aid use. Here, we will look more closely
at two hypotheses:

 In order to better understand how auditory
function and hearing aid use might interact with cognitive function, several
issues have to be considered, including the consequences of cochlear damage,
ageing, working-memory capacity, verbal information-processing speed, and
signal processing in hearing aids.

Will the hearing-impaired wear
cognitive aids in the future? The question arises because hearing aids are
becoming increasingly complex, and we actually do not know whether the
increased complexity is equally beneficial for all subjects. Complexity in the
signal processing carried out by a hearing aid may be cognitively demanding to
differing degrees for different persons. If so, the hearing aid fitting may need
to be tailored to the individual user’s cognitive ability.


Establishing models of
template-matching in the context of human perception of corrupted speech.

Improving the performance of automatic speech
recognition systems in acoustically adverse conditions.

Revising models of the auditory periphery by including the role
of the efferent (descending) pathway in making the cochlear response to speech sounds robust
to degradation in acoustic conditions.

Success in this project will contribute to and have significance
for the following:

 The goal of this
research is to develop a machine which will use state-of-the-art non-linear
peripheral auditory models (PAM) connected to a perceptually inspired model of
template matching to (1) predict phonetic confusions made by normally-hearing
listeners, and (2) predict intelligibility of distorted speech generated by
passing naturally spoken speech through realistic communication systems.  

C. Application of Cortical Processing Theory to Acoustical


 We have begun an
investigation into the issue of short-time computation of the speech-based STI
and comparison of results with long-term computation results and, for the
simple case of additive noise, comparison with short-time SNR.  To date, 0 dB SNR and 0 dB SNR plus
reverberation acoustic conditions have been considered.  The speech-based STI techniques analyzed
include an envelope regression method (Ludvigsen, 1990; Goldsworthy, 2004), a
normalized correlation method (Goldsworthy, 2004), and a normalized covariance
method (Holube, 1996; Koch, 1992). When speech plus noise is considered, all
three techniques qualitatively track short-time fluctuations in SNR, as seen
below in Figs 1 and 2.
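As a rough illustration of the normalized-covariance idea, the metric correlates the clean and degraded intensity envelope of one band; the band weighting and the mapping to an apparent SNR are omitted here, and the envelopes are synthetic:

```python
import math

def normalized_covariance(clean_env, degraded_env):
    """Normalized covariance between clean and degraded intensity
    envelopes of one band; 1.0 means the modulation pattern is
    perfectly preserved."""
    n = len(clean_env)
    mc = sum(clean_env) / n
    md = sum(degraded_env) / n
    cov = sum((c - mc) * (d - md) for c, d in zip(clean_env, degraded_env))
    vc = sum((c - mc) ** 2 for c in clean_env)
    vd = sum((d - md) ** 2 for d in degraded_env)
    return cov / (vc * vd) ** 0.5

# Synthetic 4 Hz modulation envelope, 100 samples at a 100 Hz envelope rate
clean = [1 + math.sin(2 * math.pi * 4 * t / 100) for t in range(100)]
shifted = [c + 0.3 for c in clean]       # level shift only: shape preserved
jammed = [c + 0.8 * math.sin(2 * math.pi * 13 * t / 100)
          for t, c in enumerate(clean)]  # extraneous modulation added
rho_shift = normalized_covariance(clean, shifted)   # ≈ 1.0
rho_jam = normalized_covariance(clean, jammed)      # < 1.0
```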

B.4. Short- and Long-Time Computation of the Speech-Based STI




Selected preliminary data for
modulation frequency discrimination tested at 110, 139, 175, and 220 Hz are
given in Figure 2. The modulation
envelope at the tested frequency was used to modulate an 880 Hz sinusoidal
carrier.


The stimuli consisted of two
tones, a reference and a target.  The
reference frequency was randomized within a 4-semitone interval.  So, for the 440 Hz tone, the reference could be
between 392 and 494 Hz. The target was always the higher tone.  Both the target and reference were 500 ms
long with a 200 ms gap between.  Note
that normal hearing performance on this task for non-musicians is approximately
a quarter of a semitone.  The pilot
cochlear implant subject of Figure 1 showed relatively poor performance at the
higher frequencies. 
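The quoted range follows from the semitone ratio 2^(1/12); a quick check:

```python
def semitone_shift(freq_hz, semitones):
    """Shift a frequency by a (possibly fractional) number of semitones."""
    return freq_hz * 2.0 ** (semitones / 12.0)

low = semitone_shift(440.0, -2)        # ≈ 392 Hz
high = semitone_shift(440.0, +2)       # ≈ 494 Hz
# A quarter-semitone limen at 440 Hz is roughly a 6.4 Hz difference:
dl = semitone_shift(440.0, 0.25) - 440.0
```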


Psychoacoustic procedures:

The testing paradigm
used in the preliminary data collection was a two-alternative forced-choice
paradigm with a two-down, one-up decision rule.  Such a paradigm converges to a 70.7%-correct
response criterion.  Selected preliminary
data for pure-tone frequency discrimination tested at 440, 880, 1760 and 3520
Hz are given in Figure 1.
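A two-down, one-up track of this kind can be sketched as follows; the start level, step size, trial count, and simulated listener are illustrative:

```python
def two_down_one_up(respond, start=8.0, step=1.0, n_trials=40):
    """2AFC staircase: `respond(level)` returns True on a correct trial.
    The tracked level decreases after two consecutive correct responses
    and increases after any error, so it converges near the level giving
    70.7% correct responses."""
    level, correct_run, track = start, 0, []
    for _ in range(n_trials):
        if respond(level):
            correct_run += 1
            if correct_run == 2:
                level -= step
                correct_run = 0
        else:
            level += step
            correct_run = 0
        track.append(level)
    half = track[len(track) // 2:]      # average over the converged part
    return sum(half) / len(half)

# Deterministic simulated listener: a step-function psychometric curve
# with a true threshold of 3.0 (for illustration only)
thresh = two_down_one_up(lambda lv: lv >= 3.0)
```

With this idealized listener the track oscillates just below the 3.0 threshold.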


Preliminary psychoacoustic and speech
reception data have been collected for one cochlear implant subject.  The data focused on pure-tone frequency
discrimination, amplitude discrimination, and modulation frequency discrimination.  These data will inform the selection of the psychoacoustic and speech reception testing
battery that will be used in the first MSI experiments involving cochlear
implant subjects.  Substantial effort was
expended on the software control and analysis functions for the collection of
these data.

B.2. Role of Discrimination on
Speech Reception Performance

 Substantial effort was focused on the
development of the STI and NCM models for predicting speech reception by
cochlear implant users.  In particular, a
component of the model was developed for comparing speech reception scores
across multiple cochlear implant users as well as normal hearing subjects listening
to a noise vocoder simulation of cochlear implant sound processing.  The developed component is based on an
efficiency factor that scales the metric values (Fletcher and Galt, 1950;
Fletcher, 1953; Dugal et al., 1980). 

B.1 Further Development of the
STI and NCM Models.

B. Models of speech

 Although supra-threshold effects of hearing
impairment are widely believed to be related to the decreased resolution on
psychoacoustic tasks and the poorer speech-reception abilities of
hearing-impaired listeners, the role of reduced audibility itself in explaining
the consequences of hearing loss is as yet not completely understood.  Work has begun on a comprehensive review of
the literature on the effects of hearing loss on performance in psychoacoustic
and speech-reception tasks to examine the evidence for supra-threshold
deficits. The review is organized into five major categories of studies that
include (a) temporal resolution, (b) intensity resolution, (c) spectral
resolution, (d) speech reception, and (e) correlational studies of speech and
psychoacoustic abilities.

A-4.  Review of Past Research on the Role of
Audibility in Predicting Effects of Hearing Impairment

 Signal-processing algorithms have been
developed for simulation of two types of hearing-aid processing that will be
included in the SRT testing.  These
processing modes include (a) linear amplification using the revised NAL
guidelines for frequency-gain characteristic (Byrne and Dillon, 1986) and (b)
multi-band amplitude compression based on a wide-dynamic-range,
instantaneous compressive system (Goldstein et al., 2003).
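As a sketch of mode (b), an instantaneous wide-dynamic-range compressor can be written as a level-dependent gain rule; the knee point, ratio, and below-knee gain here are illustrative values, not the Goldstein et al. parameters:

```python
def wdrc_gain_db(input_db, knee_db=45.0, ratio=2.0, gain_below_knee_db=20.0):
    """Instantaneous wide-dynamic-range compression: constant (linear)
    gain below the knee; above it, output level grows by 1/ratio dB per
    input dB. All parameter values are illustrative."""
    if input_db <= knee_db:
        return gain_below_knee_db
    return gain_below_knee_db - (input_db - knee_db) * (1.0 - 1.0 / ratio)

out_60 = 60 + wdrc_gain_db(60)  # a 60 dB input gets 12.5 dB gain here
```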

A-3.  Signal Processing for Hearing-Aid Simulation

 We have developed software for obtaining
adaptive measurements of Speech Reception Threshold (SRT) for HINT sentences
(Nilsson et al., 1994) as a function of the type of background noise (wideband
noise which is either steady-state or square-wave-interrupted at a rate of 10
Hz) and noise level (60 or 80 dB SPL). 
This procedure measures the Speech-to-Babble (S/B) ratio required for
50%-correct reception of sentences using equivalent lists of phonemically
balanced speech materials. 

A-2.  Speech Testing in Background Noise

In our current implementation,
the use of additive-noise masking is limited to the first 40 dB of hearing loss
at a given frequency and the remaining loss is simulated using multi-band
expansion.  With this combined approach,
we are able to simulate hearing loss in the range of mild to profound loss
using noise levels and signal presentation levels that do not exceed
comfortable listening levels in normal-hearing listeners. 
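The 40 dB split described above can be sketched as per-band bookkeeping; the band losses below are invented for illustration:

```python
def split_loss(loss_db_by_band, noise_cap_db=40.0):
    """Split a per-band hearing loss into the part simulated with
    spectrally shaped masking noise (capped at `noise_cap_db`) and the
    remainder handled by multi-band expansion."""
    noise = {f: min(loss, noise_cap_db)
             for f, loss in loss_db_by_band.items()}
    expansion = {f: max(0.0, loss - noise_cap_db)
                 for f, loss in loss_db_by_band.items()}
    return noise, expansion

# Invented sloping loss (dB HL per audiometric frequency)
loss = {250: 20, 500: 30, 1000: 45, 2000: 60, 4000: 75}
noise_part, expansion_part = split_loss(loss)
# e.g. at 4 kHz: 40 dB via masking noise plus 35 dB via expansion
```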

 We have implemented signal-processing
algorithms to produce functional simulation of hearing loss in listeners with
normal hearing.  These algorithms, which
have been developed in MATLAB, are based on the combination of two different
approaches towards hearing-loss simulation: 
(a) the addition of a spectrally shaped masking noise to elevate
thresholds of a normal-hearing listener to match those of a given
hearing-impaired listener; and (b) the use of multi-band expansion in which
level-dependent attenuations are applied to sounds in different frequency bands
to map tone levels at the hearing-impaired listener’s threshold to the
threshold of a normal hearing listener and to mimic the rapid growth of
loudness observed in hearing loss. 

A-1.   Simulations of Sensorineural Hearing Loss

This research is concerned with
analyzing the factors responsible for poor speech reception by listeners with
hearing impairments, and with developing techniques for overcoming these
degradations.  To the extent the research
is successful, it will help govern design goals for improved wearable hearing
aids, establish new criteria and techniques for aural analysis, and contribute
to improved understanding of both residual auditory function and speech
reception.
Role of reduced audibility:

 To develop and evaluate signal processing
techniques that hold promise for increasing the effectiveness of hearing aids.

 To assess the relative contributions of
various functional characteristics of hearing impairments to reduced
speech-reception capacity.

 To evaluate the effects of style of speech
articulation and variability in speech production on speech reception by
hearing impaired listeners.

To develop and evaluate
analytical models that can predict the effects of a variety of alterations of
the speech signal on intelligibility.

The aims of the project are:

The goal of this research is to
develop improved hearing aids for people suffering from sensorineural hearing
impairment, as well as improved cochlear implants. The focus is on problems
resulting from inadequate knowledge of the effects of various transformations of
speech signals on speech reception by impaired listeners, specifically on
fundamental limitations on the improvements in speech reception that can be
achieved by signal processing.

