
About: Auditory evoked potential detection during pure-tone audiometry

An entity of type bibo:AcademicArticle, within data space linkeddata.uriburner.com:28898, associated with source document(s)

Attributes and Values
type
  • bibo:AcademicArticle
seeAlso
sameAs
http://eprints.org/ontology/hasAccepted
http://eprints.org/ontology/hasDocument
dc:hasVersion
Title
  • Auditory evoked potential detection during pure-tone audiometry
described by
Date
  • 2021-06-02
Creator
status
Publisher
abstract
  • Modern audiometry is largely a behavioural task, with the pure-tone audiogram (PTA) being the gold standard for evaluating frequency-specific hearing thresholds in adults. The nature of behavioural audiometry makes estimating accurate hearing thresholds difficult in infants and people with disabilities, for whom following instructions or interacting with the test may be difficult or impossible. We propose a method in which Auditory Evoked Potentials (AEPs) are used as an alternative to behavioural audiometry for detecting frequency-specific thresholds. Specifically, P300 responses elicited by the tones of a PTA are automatically detected from electroencephalogram (EEG) data to evaluate hearing acuity. To assess the effectiveness of this method, we created a dataset of EEG recordings from participants presented with a series of pure tones at 6 different frequencies with steadily decreasing volumes during a PTA test. This dataset was used to train a support vector machine (SVM) to identify from a participant's EEG when they were played a tone and whether they perceived it. Results demonstrate that detecting hearing events can be very accurate for participants on whose data the classifier has been trained a priori. However, accuracy drops significantly for unseen participants, that is, when the classifier has not been trained on any prior data from a given participant before classifying their EEG. Nevertheless, having established that AEP response-based audiometry is viable for detecting tones, future work will explore whether more powerful deep neural networks can generalize accurately to unseen participants.
Is Part Of
list of authors
presented at
is topic of
is primary topic of
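
As an illustrative aside on the method summarized in the abstract above, the sketch below shows the seen- versus unseen-participant contrast the results describe: pooled cross-validation (the classifier has already seen epochs from every participant) versus a leave-one-participant-out split (the classifier never sees the held-out participant). It is a minimal sketch assuming scikit-learn; the cohort size, epoch counts, feature layout, and the synthetic data are hypothetical placeholders, not taken from the paper or its dataset.

# Minimal sketch (assumed tooling: numpy + scikit-learn); all data below are
# synthetic placeholders, so both accuracies will sit near chance level.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_participants = 10           # hypothetical cohort size
epochs_per_participant = 120  # hypothetical number of EEG epochs per participant
n_features = 64               # hypothetical flattened channel-by-time features

# Stand-in for per-epoch EEG feature vectors and labels
# (1 = tone presented and perceived, 0 = otherwise).
X = rng.normal(size=(n_participants * epochs_per_participant, n_features))
y = rng.integers(0, 2, size=X.shape[0])
groups = np.repeat(np.arange(n_participants), epochs_per_participant)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

# "Seen participant" setting: 5-fold CV over epochs pooled across everyone,
# so every participant contributes training data.
pooled_acc = cross_val_score(clf, X, y, cv=5).mean()

# "Unseen participant" setting: leave-one-participant-out, so the classifier
# never sees any epochs from the held-out participant during training.
unseen_acc = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=groups).mean()

print(f"pooled (seen-participant) accuracy:          {pooled_acc:.3f}")
print(f"leave-one-out (unseen-participant) accuracy: {unseen_acc:.3f}")

With real per-epoch AEP features substituted for the random placeholders, the difference between the two scores corresponds to the seen- versus unseen-participant accuracy gap discussed in the abstract.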