This HTML5 document contains 28 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content can be extracted by any HTML5 Microdata processor.
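As a minimal sketch of how such a processor might consume this page, the snippet below fetches the HTML and extracts its Microdata items with the third-party `extruct` library. The page URL and the choice of library are assumptions for illustration, not part of this export; substitute the actual location of this document.

```python
import urllib.request

import extruct  # third-party: pip install extruct

# Assumed location of this document (the related landing page n13:); replace as needed.
PAGE_URL = "https://kar.kent.ac.uk/91394/"

html = urllib.request.urlopen(PAGE_URL).read().decode("utf-8")
data = extruct.extract(html, base_url=PAGE_URL, syntaxes=["microdata"])

# Each Microdata item corresponds to a subject; its properties are the
# predicate/object pairs listed under "Statements" below.
for item in data["microdata"]:
    print(item.get("type"), list(item.get("properties", {}).keys()))
```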

Namespace Prefixes

Prefix    IRI
dcterms   http://purl.org/dc/terms/
n2        https://kar.kent.ac.uk/id/eprint/
wdrs      http://www.w3.org/2007/05/powder-s#
dc        http://purl.org/dc/elements/1.1/
n15       http://purl.org/ontology/bibo/status/
rdfs      http://www.w3.org/2000/01/rdf-schema#
n17       doi:10.1109/
n20       https://demo.openlinksw.com/about/id/entity/https/raw.githubusercontent.com/annajordanous/CO644Files/main/
n13       https://kar.kent.ac.uk/91394/
n7        http://eprints.org/ontology/
n14       https://kar.kent.ac.uk/id/eprint/91394#
n4        https://kar.kent.ac.uk/id/event/
bibo      http://purl.org/ontology/bibo/
n11       https://kar.kent.ac.uk/id/org/
rdf       http://www.w3.org/1999/02/22-rdf-syntax-ns#
owl       http://www.w3.org/2002/07/owl#
n8        https://kar.kent.ac.uk/id/document/
n19       https://kar.kent.ac.uk/id/
xsdh      http://www.w3.org/2001/XMLSchema#
n10       https://demo.openlinksw.com/about/id/entity/https/www.cs.kent.ac.uk/people/staff/akj22/materials/CO644/
n6        https://kar.kent.ac.uk/id/person/
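The prefixes above abbreviate full IRIs in the statements that follow. The small sketch below (a hypothetical helper, not part of the export) shows the expansion rule with a few of the prefixes from the table.

```python
# Subset of the prefix table above, used only to illustrate CURIE expansion.
PREFIXES = {
    "n2": "https://kar.kent.ac.uk/id/eprint/",
    "n8": "https://kar.kent.ac.uk/id/document/",
    "bibo": "http://purl.org/ontology/bibo/",
    "dcterms": "http://purl.org/dc/terms/",
}

def expand(curie: str) -> str:
    """Expand a prefix:local name to its full IRI using the table above."""
    prefix, local = curie.split(":", 1)
    return PREFIXES[prefix] + local

print(expand("n2:91394"))  # -> https://kar.kent.ac.uk/id/eprint/91394
```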

Statements

Subject Item
n2:91394
rdf:type
bibo:Article, n7:ConferenceItemEPrint, n7:EPrint, bibo:AcademicArticle
rdfs:seeAlso
n13:
owl:sameAs
n17:NER49283.2021.9441417
n7:hasAccepted
n8:3253576
n7:hasDocument
n8:3253675, n8:3253672, n8:3253673, n8:3253674, n8:3253576, n8:3253592
dc:hasVersion
n8:3253576
dcterms:title
Auditory evoked potential detection during pure-tone audiometry
wdrs:describedby
n10:export_kar_RDFN3.n3, n20:export_kar_RDFN3.n3
dcterms:date
2021-06-02
dcterms:creator
n6:ext-i.v.mcloughlin@kent.ac.uk, n6:ext-grba@kent.ac.uk, n6:ext-r.palani@kent.ac.uk
bibo:status
n15:peerReviewed, n15:published
dcterms:publisher
n11:ext-af0a9a5baed87c407844a3f5db44597c
bibo:abstract
Modern audiometry is largely a behavioural task, with the pure-tone audiogram (PTA) being the gold standard for evaluating frequency-specific hearing thresholds in adults. The nature of behavioural audiometry makes estimating accurate hearing thresholds difficult in infants and people with disabilities, where following instructions or interacting with the test may be difficult or impossible. We propose a method in which Auditory Evoked Potentials (AEPs) are used as an alternative to behavioural audiometry for detecting frequency-specific thresholds. Specifically, P300 responses elicited by the tones of a PTA are automatically detected from electroencephalogram (EEG) data, to evaluate hearing acuity. To assess the effectiveness of this method, we created a dataset of EEG recordings from participants presented with a series of pure tones at 6 different frequencies with steadily decreasing volumes, during a PTA test. This dataset was used to train a support vector machine (SVM) to identify from their EEG when a participant was played a tone, whether or not they perceived it. Results demonstrate that detecting hearing events can be very accurate for participants for whom the classifier has been trained a priori. However, accuracy drops significantly for unseen participants, that is, when the classifier has not been trained on any prior data from a given participant before classifying their EEG. Nevertheless, by establishing that AEP response-based audiometry is viable for detecting tones, future work will explore the ability of more powerful deep neural networks to accurately estimate hearing thresholds for unseen participants.
dcterms:isPartOf
n19:repository
bibo:authorList
n14:authors
bibo:presentedAt
n4:ext-324ba3cc078cb22e2e6640c7297bc59d
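The same statements are also available as N3 via the exports listed under wdrs:describedby. The sketch below loads one of those exports with rdflib and queries the title and creators of n2:91394; the export URL is the expansion of n20:export_kar_RDFN3.n3 from the table above, and the assumption that it serves raw N3 at that address (rather than an HTML view) is not guaranteed by this document.

```python
from rdflib import Graph

# Expansion of n20:export_kar_RDFN3.n3 (see the Namespace Prefixes table).
EXPORT_URL = ("https://demo.openlinksw.com/about/id/entity/https/"
              "raw.githubusercontent.com/annajordanous/CO644Files/main/"
              "export_kar_RDFN3.n3")

g = Graph()
g.parse(EXPORT_URL, format="n3")  # assumes the URL returns N3 directly

# Query the eprint's title and creators; the subject IRI is the expansion of n2:91394.
QUERY = """
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?title ?creator WHERE {
  <https://kar.kent.ac.uk/id/eprint/91394> dcterms:title ?title ;
                                           dcterms:creator ?creator .
}
"""
for row in g.query(QUERY):
    print(row.title, row.creator)
```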