Mapping a path to a hearing sun
I went for my third mapping on Thursday 1st Dec, a week ago.
First, a warning: what you are about to read is quite long and technical, and therefore probably quite dull and uninteresting if you’re not a CI user yourself, or at least profoundly interested in the audiological science surrounding it.
Jacki, my audiologist, turned down the volume of the higher frequencies, which were a little too sharp for my liking. But she also added a new programme with the same moderated high frequencies but a “smoother” sound scale. It seems my previous map was based purely on what I could tolerate, which resulted in some inconsistencies in the sound information the processor was sending to my implanted ear. When I listened to the “piano scale” of my 22 electrodes, two or three of them were a good bit out of tune. They were adjusted so that the scale is now smoother but still within tolerances. The result was better, although I couldn’t explain exactly why.
However, in the standard baseline speech recognition test that followed, I only got 26pc again – no improvement on my first test three months ago. A bit disappointing, but both Jacki and speech therapist Lesley were reassuringly philosophical. Jacki went as far as to say that if someone had told her a patient was using the phone and listening to audiobooks without too much trouble but only getting 26pc in the test, she wouldn’t believe it – which suggests that the standard speech recognition test they use is a bit one-dimensional. There are no single words, just sentences. I do WAY better with context, and through earphones. Some people also do far less well in these tests because they believe they need to get whole sentences right, rather than just one or two words. Jacki thinks I exhibit some of the classic signs of that type of ‘test anxiety’.
Lesley ran through some exercises and confirmed that with some context I score very well, possibly higher than average, but that the next stage is learning to listen to things without context – open-ended rather than closed-ended questions or information. I just need to keep working at it.
She also mentioned that the Beaumont team is working on another set of test measures to give a more holistic reflection of progress, taking in things like how I get on with telephones and audiobooks.
Despite these reassurances and the knowledge that I’m progressing really well on a practical level, I still wondered whether the quality of what I’m hearing is as good as it could be. I wasn’t 100pc sure about the latest programme, so Jacki offered to book me in again a week later – if I wanted to – for a further tune-up.
After this appointment, I decided to try using just the implant on its own for a while. One CI-using friend did this and swears her speech recognition improved dramatically afterwards. She still uses her hearing aid, but prefers the implant, as is the case with most implantees.
After a few days, the exercise proved useful in that it showed up what I’m missing – and what had been ‘masked’ to some extent by what I was hearing through my hearing aid. The sound was basically not as sharp as I wanted, and quiet sounds were not coming through as much as I would have expected. My implanted ear, even though it feels the stronger, more dominant ear in general, still feels like it’s playing only a supporting role to my HA ear when it comes to speech recognition.
So I went back today to see the ever-patient Jacki and explained some of this to her. She turned up some of the quiet sounds but also some of the higher sounds (I had used the example of not hearing fans very well, which turn out to sit in the higher ranges of the frequency spectrum). Result: much better again. Sharper, louder, fuller.
On the question of using the implant alone, Jacki explained that the old school of thought was that you shouldn’t mix CIs and HAs when your CI is first activated, until you’ve had a chance to get used to it, but the current thinking (which Beaumont subscribes to) is to use both from the word go. The logic is essentially that two ears are better than one, and even if the sound information being received seems radically different from one ear to the other, it’s still better to have two ears giving serviceable hearing.
But it also turns out that Jacki is tuning my programmes on the basis that I wear a hearing aid too. Programmes can be tailored for an implant on its own, but if she did this, it might be overwhelming if I used the hearing aid as well, because the implant output would be more powerful – it would be trying to do the work of two ears as far as it can. So we’re sticking to the current plan and enabling a set-up that gets the best from both ears – in balance.
(exhales)