I've been using this blog to brainstorm features since its inception. Partly this is to share my designs with people who may find them useful, but mainly it's been a way to flush these ideas out of my brain so that I don't have to worry about losing them.
Last night, I declared my app feature-complete for the first version. But what features are actually in it?
Let's do some data-flow analysis on the collaborative transcription process. Somebody out in the real world has a physical artifact containing handwritten text. I don't need to automate converting that text into an image, since plenty of other technologies do that already, so I start with a set of images capturing those handwritten pages. Starting with the images, the user must organize them for transcription, then begin transcription, then view the transcriptions. Only after that may they analyze or print those transcriptions.
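That workflow is strictly ordered: each stage only becomes reachable once the earlier ones are complete. A minimal sketch of that gating logic, with stage names of my own invention rather than anything from the app itself:

```python
# Hypothetical model of the transcription pipeline described above.
# Stage names are illustrative only, not the app's actual terminology.
STAGES = ["upload", "organize", "transcribe", "display", "analyze_or_print"]

def can_advance(completed, target):
    """A work may only reach `target` once every earlier stage is done."""
    required = STAGES[:STAGES.index(target)]
    return all(stage in completed for stage in required)
```

So, for example, a work whose images have been uploaded and organized and whose pages are transcribed may move on to display, while a work with only uploaded images may not yet be transcribed.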
Keeping the bulk of that flow, I've cut support for much of the beginning and end of the process and pared away the ancillary features of page transcription. The resulting feature set is below, with features I've postponed until later releases struck out.
- Image Preparation
    - Single image upload/replacement
    - Image orientation/resolution controls
    - ~~Upload several images~~
    - Single-image titling
    - Automatic title generation for several images
    - Recto/verso image set collation
    - Conversion of a set of titled images into pages of a transcribable work
- Page Transcription
    - Zoom
    - Transcription text entry
    - Subject links
    - Auto-linking
    - Request review
    - ~~Unclear tags~~
    - ~~Sensitive tags~~
- Transcription Display
    - Table of Contents for a work
    - Page display
    - Multi-page transcription display
    - Page annotations
- ~~Printable Transcription~~
    - ~~PDF generation~~
    - ~~Typesetting~~
    - ~~Table of contents page~~
    - ~~Title/Author pages~~
    - ~~Expansion footnotes~~
    - ~~Subject article footnotes~~
    - ~~Index~~
- Text Analysis
    - Subject Articles
    - "What links here" indexes
    - ~~Relatedness graphs~~ (Implemented, but turned off for v1.0)
    - Subject categories
    - Category-based navigation
    - ~~Categorized relatedness graphs~~