===Metadata Management Conceptual Framework===
  
An underlying architecture and philosophy form the basis of the following framework:

Central data stores associated with institutional repositories and other metadata aggregators currently face a number of challenges: data is often stored in remote locations, in incompatible data formats, and with varying degrees of quality. This data must be aggregated, normalized, and integrated. Existing data is frequently highly variable in quality, often incomplete, and may not conform to existing standards. [1]
  
The Metadata Management Services integrated toolset seeks to solve a number of these problems by providing services to improve the quality of existing metadata, manage the aggregation of metadata (both from original metadata sources and from quality improvement services), and redistribute the improved metadata in multiple formats.

#A Provider/Management service that provides a user interface and a data storage space for providers to register their OAI servers with the aggregator, supply and maintain the data necessary to the HarvestService, and provide metadata describing their service. This service creates two XML document types for further processing:
#*HarvestTrigger documents that contain all information necessary to initiate an OAI harvest from a metadata provider.
#*CollectionRecord documents that contain descriptive metadata about a collection of metadata being provided by a metadata provider.
#A HarvestService that processes incoming HarvestTrigger documents in order to initiate an OAI harvest and then optionally further processes the resulting harvested documents. The HarvestService uses the concepts of WatchedFolders and an extensible ProcessingHarness: HarvestTrigger documents may arrive in the WatchedFolders from many sources via many methods, and will be processed as long as they contain a valid set of instructions. This service produces the following outputs for further processing:
#*HarvestMerge documents that contain information from the HarvestTrigger document that initiated the harvest, and that may also contain the results of combining all metadata documents created by the OAI harvest process.
#*Emailed server and metadata validation responses, which may optionally be produced as well.
#A MetadataCrosswalk service that processes incoming HarvestMerge documents to create the desired metadata format. This is most likely a qualified Dublin Core (QDC) metadata format created and maintained for use by an assortment of services. By default, the desired QDC is created from the required oai_dc metadata format that must be provided by every OAI-PMH server, but it may also be crosswalked from any other "native" metadata format. The MetadataCrosswalk service:
#*validates the base utility of incoming metadata and either accepts or rejects it
#*uses "safe" XSLT transforms to correct minor imperfections where possible
#*uses either custom or default XSLT transforms provided by a MetadataQA service to create the desired QDC
#*produces as output a DbInsert document that is used by the MetadataIngest service of the Metadata Repository
#A MetadataQA service that produces provider-specific crosswalks, in the form of XSLT scripts, to create high-quality QDC metadata from selected native metadata.
#A MetadataIngest service that further validates and parses DbInsert documents and inserts the processed metadata into the Metadata Repository.
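The HarvestTrigger mechanism described above can be sketched in a few lines. The element names and the minimal validation rule below are illustrative assumptions, not a published schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical HarvestTrigger document: the element names are assumptions
# for illustration, not the toolset's actual schema.
TRIGGER_XML = """
<HarvestTrigger>
  <baseURL>http://repository.example.edu/oai</baseURL>
  <metadataPrefix>oai_dc</metadataPrefix>
  <setSpec>theses</setSpec>
  <from>2008-01-01</from>
</HarvestTrigger>
"""

def parse_harvest_trigger(xml_text):
    """Extract the parameters needed to initiate an OAI-PMH harvest."""
    root = ET.fromstring(xml_text)
    params = {child.tag: child.text for child in root}
    # A trigger is only processed if it contains a valid set of
    # instructions: here, at minimum a base URL and a metadata prefix.
    if "baseURL" not in params or "metadataPrefix" not in params:
        raise ValueError("invalid HarvestTrigger: missing required instructions")
    return params

params = parse_harvest_trigger(TRIGGER_XML)
```

A ProcessingHarness watching a folder would apply this check to each arriving document and discard any that raise, regardless of how the document got there.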
  
The Metadata Repository stores all provided native metadata formats and QDC in an object/relational layer. From this layer, internal processes create static XML documents for use in efficiently serving metadata in multiple formats via OAI.
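The MetadataCrosswalk service's "safe" transforms are XSLT; as a rough illustration of the kind of conservative, information-preserving cleanup they perform, here is an equivalent sketch in Python (the two rules shown, whitespace trimming and empty-element removal, are assumed examples of "minor imperfections"):

```python
import xml.etree.ElementTree as ET

def safe_cleanup(xml_text):
    """Apply conservative, information-preserving fixes to a metadata record.

    Only changes that cannot lose meaning are made here; anything riskier
    is left to the provider-specific crosswalks.
    """
    root = ET.fromstring(xml_text)
    # Rule 1: trim stray whitespace on every text node.
    for el in root.iter():
        if el.text is not None:
            el.text = el.text.strip()
    # Rule 2: drop childless, textless elements (a common "empty field" artifact).
    for parent in list(root.iter()):
        for child in list(parent):
            if len(child) == 0 and not (child.text or ""):
                parent.remove(child)
    return ET.tostring(root, encoding="unicode")
```

For example, `safe_cleanup('<record><title> Dublin Core </title><creator/></record>')` trims the title and drops the empty creator element.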
The broad architectural framework underpinning the toolset involves the following functional systems and services:

*A central repository that aggregates item-level metadata from multiple non-equivalent and otherwise incompatible data sources, in multiple formats (using OAI as the data interchange glue), ultimately providing a base for other services.
*Harvest services to manage the routine repeat harvesting of metadata from both data providers and data enhancement services, via OAI, including:
**Data provider and service registration systems
**An integrated OAI harvest service that includes flexible handling and reporting of routine low-level data validation errors
**A harvest management and scheduling service to automate repeat data harvests
**Event logging and history interfaces
**Error-tracking and notification services
**User access management
**User forums and information-sharing systems
**Harvest diagnostics and helpdesk support, enabling extensive problem solving by staff without strong technical/programming skills
*A comprehensive suite of OAI servers and related data interface tools to enable data providers to easily create interfaces between an OAI server and any number of internal data storage systems, regardless of format or storage mechanism. These servers should:
**be available in multiple programming languages and run on multiple operating systems, to provide out-of-the-box functionality in a broad range of environments
**share a single object model across languages and operating systems, to make it easy for developers and consultants to switch languages and operating environments
**provide a single, simple plugin API to support multiple data sources, making it easy to share data plugins
*Testing services that allow data providers to fully test new OAI server installations for functionality, protocol compliance, and data validity, and that provide ongoing validation and testing to support troubleshooting and diagnostics
*Data transformation services to provide:
**data normalization and cleanup
**data enhancement
**crosswalking to multiple formats
*Data distribution services to allow data consumers, including the original data providers, to access the normalized, enhanced, and aggregated data in the central repository in a variety of data formats and schemas, in order to support a broad range of downstream data-based services
**All data transformations and normalization are completely transparent to downstream users, with complete data provenance at all levels [2] [3]
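The routine repeat harvesting that these services manage boils down to a ListRecords loop driven by resumption tokens. The URL construction below follows the OAI-PMH protocol (a resumptionToken, when present, is an exclusive argument); the function names and the pluggable `fetch` callable are illustrative, not part of the toolset:

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def list_records_url(base_url, metadata_prefix="oai_dc", resumption_token=None):
    # Per OAI-PMH, resumptionToken is an exclusive argument: when present,
    # it replaces all other request arguments except the verb.
    if resumption_token is not None:
        query = {"verb": "ListRecords", "resumptionToken": resumption_token}
    else:
        query = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    return base_url + "?" + urlencode(query)

def harvest(base_url, fetch, metadata_prefix="oai_dc"):
    """Issue ListRecords requests until no resumptionToken remains.

    `fetch` is any callable mapping a URL to an OAI-PMH response body,
    so the loop can be driven by urllib, requests, or a test stub.
    """
    token = None
    while True:
        xml_text = fetch(list_records_url(base_url, metadata_prefix, token))
        root = ET.fromstring(xml_text)
        yield from root.iter(OAI_NS + "record")
        token_el = root.find(".//" + OAI_NS + "resumptionToken")
        if token_el is None or not (token_el.text or "").strip():
            break
        token = token_el.text.strip()
```

In practice `fetch` could be a thin wrapper around `urllib.request.urlopen`; a scheduling service would rerun the loop per provider, and a validation layer would inspect each response before the records are merged.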
REFERENCES:

*[1] Naomi Dushay and Diane I. Hillmann, "[http://www.siderean.com/dc2003/501_Paper24.pdf Analyzing Metadata for Effective Use and Re-Use]." Paper presented at the DC2003 conference, Seattle, WA.
*[2] Diane I. Hillmann, Naomi Dushay, and Jon Phipps, "[http://hdl.handle.net/1813/7897 Improving Metadata Quality: Augmentation and Recombination]." Paper presented at the DC2004 conference, Shanghai, China.
*[3] Jon Phipps, Diane I. Hillmann, and Gordon Paynter, "[http://arxiv.org/abs/cs.DL/0501083 Orchestrating Metadata Enhancement Services: Introducing Lenny]." Paper presented at the DC2005 conference, Madrid, Spain.

Latest revision as of 13:39, 11 February 2008