I’ve been using this blog to brainstorm features since its inception. Partly this is to share my designs with people who may find them useful, but mainly it’s been a way to flush this data out of my brain so that I don’t have to worry about losing it.
Last night, I declared my app feature-complete for the first version. But what features are actually in it?
Let’s do some data-flow analysis on the collaborative transcription process. Somebody out in the real world has a physical artifact containing handwritten text. I can’t automate the process of converting that text into an image — plenty of other technologies do that already — so I have to start with a set of images capturing those handwritten pages. Starting with the images, the user must organize them for transcription, then begin transcription, then view the transcriptions. Only after that may they analyze or print those transcriptions.
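The workflow above is strictly ordered: each stage depends on the ones before it. A minimal sketch of that ordering, with step names that are my own illustration rather than anything from the app:

```python
# The transcription workflow modeled as an ordered pipeline.
# All step names here are illustrative, not taken from the application.
WORKFLOW = [
    "capture_images",          # done outside the app (scanner/camera)
    "organize_images",         # user arranges pages for transcription
    "transcribe_pages",        # collaborative transcription
    "display_transcriptions",  # view the results
    "analyze_or_print",        # only valid once transcriptions exist
]

def can_start(step: str, completed: set) -> bool:
    """A step may begin only when every earlier step is complete."""
    earlier = WORKFLOW[:WORKFLOW.index(step)]
    return all(s in completed for s in earlier)
```

So, for example, analysis and printing are blocked until transcription and display are done, which is exactly why the features at the tail of the pipeline were the easiest to cut.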
Following the bulk of that flow, I’ve cut support for much of the beginning and end of the process, and pared away the ancillary features of page transcription. The resulting feature set is below, along with the features I’ve postponed until later releases.
- Image Preparation
- Page Transcription
- Transcription Display
  - Table of Contents for a work
  - Page display
  - Multi-page transcription display
  - Page annotations
- Printable Transcription
  - PDF generation
  - Typesetting
  - Table of contents page
  - Title/Author pages
  - Expansion footnotes
  - Subject article footnotes
  - Index
- Text Analysis