This HTML5 document contains 34 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content will be recognized by any HTML5 Microdata processor.
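For example, the embedded statements can be recovered with a generic Microdata extractor. Below is a minimal sketch using the Python extruct library, assuming the markup is served from the eprint's landing page at https://kar.kent.ac.uk/70009/ (the n21 IRI listed below); that URL choice is an assumption, not stated by the page itself.

```python
# Minimal sketch: extract the embedded Microdata items from the page.
# Assumes the landing page at kar.kent.ac.uk/70009/ serves this markup.
import extruct
import requests

url = "https://kar.kent.ac.uk/70009/"
html = requests.get(url, timeout=30).text

# Restrict extraction to Microdata; extruct can also pull RDFa and JSON-LD.
data = extruct.extract(html, base_url=url, syntaxes=["microdata"])

for item in data["microdata"]:
    print(item.get("type"), "->", sorted(item.get("properties", {})))
```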

Namespace Prefixes

Prefix     IRI
dcterms    http://purl.org/dc/terms/
n18        https://kar.kent.ac.uk/id/eprint/70009#
n2         https://kar.kent.ac.uk/id/eprint/
wdrs       http://www.w3.org/2007/05/powder-s#
n10        http://purl.org/ontology/bibo/status/
dc         http://purl.org/dc/elements/1.1/
rdfs       http://www.w3.org/2000/01/rdf-schema#
n16        https://kar.kent.ac.uk/id/subject/
n17        https://demo.openlinksw.com/about/id/entity/https/raw.githubusercontent.com/annajordanous/CO644Files/main/
n5         http://eprints.org/ontology/
n7         https://kar.kent.ac.uk/id/event/
bibo       http://purl.org/ontology/bibo/
n22        https://kar.kent.ac.uk/id/publication/
n19        https://kar.kent.ac.uk/id/org/
rdf        http://www.w3.org/1999/02/22-rdf-syntax-ns#
n13        doi:10.1609/
owl        http://www.w3.org/2002/07/owl#
n4         https://kar.kent.ac.uk/id/
n9         https://kar.kent.ac.uk/id/document/
n21        https://kar.kent.ac.uk/70009/
xsdh       http://www.w3.org/2001/XMLSchema#
n15        https://demo.openlinksw.com/about/id/entity/https/www.cs.kent.ac.uk/people/staff/akj22/materials/CO644/
n11        https://kar.kent.ac.uk/id/person/
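As a hedged illustration, the same prefix-to-IRI mappings can be reproduced in code; the sketch below binds a few of them in Python's rdflib so that qnames and serializations use the CURIEs listed above.

```python
# Hedged sketch: re-create a few of the prefix bindings from the table
# above in rdflib, so qnames/serializations match this page's CURIEs.
from rdflib import Graph, Namespace

BIBO = Namespace("http://purl.org/ontology/bibo/")

g = Graph()
g.bind("dcterms", Namespace("http://purl.org/dc/terms/"))
g.bind("bibo", BIBO)
g.bind("n5", Namespace("http://eprints.org/ontology/"))
g.bind("n11", Namespace("https://kar.kent.ac.uk/id/person/"))

print(g.qname(BIBO["volume"]))  # -> bibo:volume
```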

Statements

Subject Item
n2:70009

Predicate            Object
rdf:type             n5:EPrint, n5:ConferenceItemEPrint, bibo:AcademicArticle, bibo:Article
rdfs:seeAlso         n21:
owl:sameAs           n13:aaai.v33i01.33016562
n5:hasAccepted       n9:3141423
n5:hasDocument       n9:3151560, n9:3151557, n9:3151558, n9:3151559, n9:3141423, n9:3141428
dc:hasVersion        n9:3141423
dcterms:title        Word Embedding as Maximum A Posteriori Estimation
wdrs:describedby     n15:export_kar_RDFN3.n3, n17:export_kar_RDFN3.n3
dcterms:date         2019-07-17
dcterms:creator      n11:ext-04d6827d22e77ab5ee6f928ce0a25332, n11:ext-m.s.jameel@kent.ac.uk, n11:ext-bc985c1027c86cbe6ce3f78e7156df7e, n11:ext-f5e7751d5d9ca120bde58f413e6af41b, n11:ext-8b8d1daa85ca9de53c51134fea59a1a4
bibo:status          n10:peerReviewed, n10:published
dcterms:publisher    n19:ext-2982d8c3f4b994a774e9990d11f20cd1
bibo:abstract        The GloVe word embedding model relies on solving a global optimization problem, which can be reformulated as a maximum likelihood estimation problem. In this paper, we propose to generalize this approach to word embedding by considering parametrized variants of the GloVe model and incorporating priors on these parameters. To demonstrate the usefulness of this approach, we consider a word embedding model in which each context word is associated with a corresponding variance, intuitively encoding how informative it is. Using our framework, we can then learn these variances together with the resulting word vectors in a unified way. We experimentally show that the resulting word embedding models outperform GloVe, as well as many popular alternatives.
dcterms:isPartOf     n4:repository, n22:ext-23743468
dcterms:subject      n16:QA76
bibo:authorList      n18:authors
bibo:presentedAt     n7:ext-61362b61cdfaeb899ba21a045d62e8e1
bibo:issue           1
bibo:volume          33
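Taken together, these statements form a small RDF graph that can be queried directly. The wdrs:describedby values point at an N3 export (export_kar_RDFN3.n3) behind OpenLink proxy IRIs; unwrapping the n17 prefix suggests the raw file sits under raw.githubusercontent.com/annajordanous/CO644Files/main/, but that expansion is an assumption and the sketch below depends on it.

```python
# Hedged sketch: load the N3 export named in wdrs:describedby and read back
# two of the statements above. The raw-file URL is reconstructed by
# unwrapping the OpenLink proxy prefix (n17) and may not resolve as written.
from rdflib import Graph, URIRef
from rdflib.namespace import DCTERMS

g = Graph()
g.parse(
    "https://raw.githubusercontent.com/annajordanous/CO644Files/main/export_kar_RDFN3.n3",
    format="n3",
)

eprint = URIRef("https://kar.kent.ac.uk/id/eprint/70009")  # n2:70009
print(g.value(eprint, DCTERMS.title))  # expected: Word Embedding as Maximum A Posteriori Estimation
print(g.value(eprint, DCTERMS.date))   # expected: 2019-07-17
```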