This HTML5 document contains 30 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content will be recognized by any HTML5 Microdata processor.

Namespace Prefixes

Prefix    IRI
dcterms   http://purl.org/dc/terms/
n8        https://kar.kent.ac.uk/63217/
n2        https://kar.kent.ac.uk/id/eprint/
wdrs      http://www.w3.org/2007/05/powder-s#
dc        http://purl.org/dc/elements/1.1/
n19       http://purl.org/ontology/bibo/status/
n17       https://kar.kent.ac.uk/id/subject/
rdfs      http://www.w3.org/2000/01/rdf-schema#
n21       https://demo.openlinksw.com/about/id/entity/https/raw.githubusercontent.com/annajordanous/CO644Files/main/
n14       doi:10.1109/
n3        http://eprints.org/ontology/
n6        https://kar.kent.ac.uk/id/event/
bibo      http://purl.org/ontology/bibo/
n16       https://kar.kent.ac.uk/id/org/
rdf       http://www.w3.org/1999/02/22-rdf-syntax-ns#
owl       http://www.w3.org/2002/07/owl#
n5        https://kar.kent.ac.uk/id/eprint/63217#
n15       https://kar.kent.ac.uk/id/document/
n18       https://kar.kent.ac.uk/id/
xsd       http://www.w3.org/2001/XMLSchema#
n12       https://demo.openlinksw.com/about/id/entity/https/www.cs.kent.ac.uk/people/staff/akj22/materials/CO644/
n10       https://kar.kent.ac.uk/id/person/

Statements

Subject Item
n2:63217
rdf:type
n3:EPrint bibo:Article bibo:BookSection n3:BookSectionEPrint
rdfs:seeAlso
n8:
owl:sameAs
n14:ICIP.2017.8296344
n3:hasAccepted
n15:179119
n3:hasDocument
n15:2706530 n15:2706531 n15:2706532 n15:179119 n15:2706533 n15:179183
dc:hasVersion
n15:179119
dcterms:title
Fisher Vector based CNN architecture for Image Classification
wdrs:describedby
n12:export_kar_RDFN3.n3 n21:export_kar_RDFN3.n3
dcterms:date
2018-02-22
dcterms:creator
n10:ext-4d520406bc63b9b0d65f77c4218c3361 n10:ext-c95c7930cc51975816b5a65b6627bf54 n10:ext-i.v.mcloughlin@kent.ac.uk n10:ext-0901e1f569a67f3cf985c0ce8b33476f
bibo:status
n19:peerReviewed n19:published
dcterms:publisher
n16:ext-af0a9a5baed87c407844a3f5db44597c
bibo:abstract
In this paper, we tackle the representation learning problem for small-scale fine-grained object recognition and scene classification tasks. Conventional bag-of-features (BoF) methods exploit hand-crafted front-end local features and learn the representations via various machine learning techniques. Convolutional neural networks (CNN) directly learn the representation from raw images and benefit from joint optimization of network parameters in an end-to-end manner. However, the performance of existing representation learning methods is still unsatisfactory for small-scale recognition tasks. To address this issue, we present an FV coding based CNN (FV-CNN) architecture. FV-CNN has three main advantages: first, it is able to exploit activations from the intermediate convolutional layer and a probabilistic discriminative model to derive the FV coding. Second, it takes advantage of end-to-end back-propagation of the gradients to jointly optimize the whole learning process. Finally, it can learn a compact representation. When evaluated on benchmark datasets for fine-grained object recognition (Caltech-CUB200) and scene classification (MIT67), accuracies of 88.0% and 82.2% are achieved.
dcterms:isPartOf
n18:repository
dcterms:subject
n17:T
bibo:authorList
n5:authors
bibo:presentedAt
n6:ext-ec9b5ecf7a328b0bf261d8b0cb3bc6cb
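
The statements above can also be written in Turtle, the serialization the record itself points to via wdrs:describedby (export_kar_RDFN3.n3). A sketch of a representative subset, reconstructed from the listing with the prefix table expanded to full IRIs where convenient:

```turtle
@prefix rdfs:    <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:     <http://www.w3.org/2002/07/owl#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix bibo:    <http://purl.org/ontology/bibo/> .
@prefix ep:      <http://eprints.org/ontology/> .

<https://kar.kent.ac.uk/id/eprint/63217>
    a ep:EPrint, ep:BookSectionEPrint, bibo:Article, bibo:BookSection ;
    rdfs:seeAlso <https://kar.kent.ac.uk/63217/> ;
    owl:sameAs <doi:10.1109/ICIP.2017.8296344> ;
    dcterms:title "Fisher Vector based CNN architecture for Image Classification" ;
    dcterms:date "2018-02-22" ;
    bibo:status <http://purl.org/ontology/bibo/status/peerReviewed>,
                <http://purl.org/ontology/bibo/status/published> ;
    dcterms:isPartOf <https://kar.kent.ac.uk/id/repository> ;
    bibo:presentedAt <https://kar.kent.ac.uk/id/event/ext-ec9b5ecf7a328b0bf261d8b0cb3bc6cb> .
```

The prefix declarations mirror the namespace table; the remaining statements (documents, creators, publisher, subject, author list, abstract) follow the same subject/predicate/object pattern.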