What the perfect repository for text analysis looks like (to me)

By Christof Schöch, 2013-07-03
Source: https://dragonfly.hypotheses.org/388

Tags: cat, calibre, Articles, Dropbox, My research, RegEx, RelaxNG, TEI, TXM, Tools, TreeTagger, WEKA, gephi, jEdit, oxgarage, rename, split, stylo package, text analysis, text repository, xslt

    The longer I work with various collections of literary texts, available in various formats and for use with various tools, the more I would like to have a nice repository which I and others could use to ingest, store, transform, update and extract text collections. So what would this repository look like? Basically, I’m describing a use case, and I would be very interested to hear from you out there, fellow text analysis practitioners, whether you have aspects to add to this, and how you are currently dealing with the issues outlined here.

    Massive Reading! (Photo credit: “Work with schools: after a book talk, showing boys gathered…” by the New York Public Library, Flickr commons, http://www.flickr.com/photos/lselibrary/3925726691/.)

    Basically, I see the following steps in my “repository for text analysis use case”, roughly from beginning to end:

      1. Ingest texts coming from various sources (Gutenberg, Wikisource, TextGrid, ABU, theatre-classique.fr, ebooksgratuits.com, you name it) and, hence, in various formats (txt, html, epub, doc, XML, TEI P4, TEI P5), into the repository.
      2. Transform all of these formats into a central “master” format, essentially a fairly basic implementation of TEI, and make the files valid against a schema defining that implementation (a first sketch of this step follows the list).
      3. Add typological metadata to the TEI header, including things like: genre, sub-genre, author gender, literary epoch, narrative form, etc.
      4. Create various derivative files from the master files, especially plain text files split into pieces in various ways: by kilobytes or number of words, or by structural segments like chapters, paragraphs, scenes or acts (see the second sketch after the list).
      5. Create such derivative files containing only specific parts of the master files, like only text from the “body”, or only “speeches”, or everything but quotations.
      6. Create collections of derivative files based on the typological metadata, such as a collection of all crime fiction novels from the 20th century, or of all comedies in verse written by women (not that many yet).
      7. Use the typological metadata to flexibly generate filenames for the derivative files, for example “author_title” or “genre_year-title” (again, see the second sketch below).
      8. The master and/or derivative files can be further annotated, linguistically (tokenization, lemmatization, POS tagging) and otherwise (named entity recognition, speakers, etc.).
      9. The master files can be corrected, updated and versioned, and new sets of derivatives can be generated from the updated masters.
      10. The collections of derivative files can be published for documentation, reproducibility and reuse, including reuse by external analysis services.
      11. All data is stored safely and securely, and access can be restricted to varying degrees, so that collaborative work on the collections becomes possible.
      12. And finally, of course, and because 12 is a nice number, use these collections of derivative text files for all kinds of computational text analysis.
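
    To make step 2 a little more tangible, here is a minimal sketch in Python (with lxml) of wrapping a plain-text file in a bare-bones TEI skeleton and validating it against a RelaxNG schema. The helper names and the schema path are illustrative assumptions on my part, and any realistic TEI schema will of course demand a much richer header than this.

        # A minimal sketch, not production code: wraps plain text in a
        # bare-bones TEI P5 skeleton and validates it against a RelaxNG
        # schema. The schema path "tei_basic.rng" is a placeholder.
        from lxml import etree

        TEI = "http://www.tei-c.org/ns/1.0"

        def wrap_as_tei(raw_text, title, author):
            """Build a minimal TEI document: one <p> per blank-line-separated block."""
            def sub(parent, tag):
                return etree.SubElement(parent, "{%s}%s" % (TEI, tag))
            root = etree.Element("{%s}TEI" % TEI, nsmap={None: TEI})
            title_stmt = sub(sub(sub(root, "teiHeader"), "fileDesc"), "titleStmt")
            sub(title_stmt, "title").text = title
            sub(title_stmt, "author").text = author
            body = sub(sub(root, "text"), "body")
            for block in raw_text.strip().split("\n\n"):
                sub(body, "p").text = " ".join(block.split())
            return etree.ElementTree(root)

        def validate(tree, rng_path="tei_basic.rng"):
            """Return (is_valid, error_log) for the document against the schema."""
            schema = etree.RelaxNG(etree.parse(rng_path))
            return schema.validate(tree), schema.error_log

        # Hypothetical usage:
        # tree = wrap_as_tei(open("novel.txt", encoding="utf-8").read(),
        #                    "Le Rouge et le Noir", "Stendhal")
        # ok, errors = validate(tree)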
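
    And a second sketch, for steps 4 and 7: splitting a plain-text derivative into fixed-size word chunks and generating the chunk filenames from a metadata pattern. The field names, the pattern syntax and the 5,000-word default are my own illustrative choices, not features of any existing tool.

        # A minimal sketch of steps 4 and 7: fixed-size word chunks with
        # metadata-driven filenames. Field names and chunk size are illustrative.
        import re
        from pathlib import Path

        def slugify(value):
            """Reduce a metadata value to a filename-safe lowercase token."""
            return re.sub(r"[^A-Za-z0-9]+", "-", str(value)).strip("-").lower()

        def derivative_stem(metadata, pattern="author_title"):
            """Fill a pattern like 'author_title' or 'genre_year-title' from metadata."""
            return re.sub(r"[A-Za-z]+", lambda m: slugify(metadata[m.group(0)]), pattern)

        def split_by_words(text, words_per_chunk=5000):
            """Yield successive chunks of at most `words_per_chunk` words."""
            words = text.split()
            for i in range(0, len(words), words_per_chunk):
                yield " ".join(words[i:i + words_per_chunk])

        def write_derivatives(text, metadata, pattern="author_title", out_dir="derivatives"):
            """Write numbered chunk files, e.g. derivatives/stendhal_le-rouge-et-le-noir_001.txt."""
            out = Path(out_dir)
            out.mkdir(parents=True, exist_ok=True)
            stem = derivative_stem(metadata, pattern)
            for n, chunk in enumerate(split_by_words(text), start=1):
                (out / ("%s_%03d.txt" % (stem, n))).write_text(chunk, encoding="utf-8")

        # Hypothetical usage:
        # write_derivatives(open("rouge.txt", encoding="utf-8").read(),
        #                   {"author": "Stendhal", "title": "Le Rouge et le Noir"})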

    Don’t we all need this in some way or another when doing computational text analysis? Currently, I do these things with a wild, somewhat functional, but ultimately clunky combination of a wide array of tools and services (I’m probably forgetting some):

    1. Calibre to create txt or rtf from epub and other formats (#1 above)
    2. Oxgarage to create TEI from various formats (#1 above)
    3. A little bit of XSLT in jEdit or oXygen, to derive selective content from TEI files and write txt files (#4 and #5; a scripted alternative is sketched after this list)
    4. RegEx in jEdit or oXygen, to create or clean up TEI files (#1 and #3)
    5. RelaxNG in jEdit or oXygen, for validation (#2)
    6. Dropbox for storing everything and accessing it myself (#11)
    7. “cat”, “split” and “rename” on the command line, to split and merge text files and to rename the files of specific text collections (#7)
    8. TreeTagger via an R wrapper, for linguistic annotation (#8)
    9. “stylo” package for R, WEKA, Gephi, TXM, jEdit, for all kinds of queries and analyses (#12)
    10. Do things manually, for good measure and for difficult cases (all steps, even #12: I do read stuff in the good old way, too.)
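
    As an aside, the XSLT of item 3 can be approximated in a few lines of Python with lxml, pulling selected content out of a TEI file by XPath. The expressions below assume fairly standard TEI markup, which any given collection may or may not follow, so treat them as assumptions.

        # A minimal sketch of selective extraction from TEI with XPath,
        # as a scripted stand-in for ad-hoc XSLT. XPaths are illustrative.
        from lxml import etree

        NS = {"tei": "http://www.tei-c.org/ns/1.0"}

        def extract(tei_path, xpath="//tei:body//text()"):
            """Return the text selected by `xpath` as one whitespace-normalized string."""
            tree = etree.parse(tei_path)
            return " ".join(" ".join(tree.xpath(xpath, namespaces=NS)).split())

        # Hypothetical usage:
        # body      = extract("novel.xml")                             # body text only
        # speeches  = extract("play.xml", "//tei:sp//tei:p//text()")   # spoken prose only
        # no_quotes = extract("novel.xml",
        #                     "//tei:body//text()[not(ancestor::tei:quote)]")  # everything but quotations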

    Notice that in the first list, #9 (update with versioning), #10 (publish for reuse) and #11 (storage with access control) are not really catered for by this bricolage setup. More importantly, however, the workflow involves too many tools that are not really connected with each other, so that the process of creating a given derivative of a text can be easily repeated neither by myself nor, just as importantly, by others.

    Currently, here in Würzburg, we are exploring how to provide for most if not all of these things with TextGrid and the DARIAH infrastructure. I’m personally curious to hear how others do this, and as we further define this use case it would also be of great interest to us how others deal with it: So, how do you do it? What is needed but not in the list? What works well for you? What doesn’t?

    Of course, I have not really said what my perfect repository would look like: I’m not sure, but I do know that when I use it, I will be impressed and gratified by its elegance, flexibility and responsiveness. It will work beautifully.
