Description

Recently, the cataloging department at Michigan State University Libraries was asked to catalog all 8,000 full-text MSU ETDs in ProQuest. Since deriving MARC records one at a time, by inserting the appropriate MARC fields and pasting persistent URLs from the ProQuest database, would have been very time-consuming, an automated “batch-derive” process using XSLT was developed. This process selected and merged data from existing in-house print thesis records and ProQuest-created ETD records by matching various elements in the two record sets. Multi-step matching was called for because there was no common unique identifier between the two record sets and there were inconsistencies in data entry. Through this automated process, the cataloging department was able to incorporate data correction and customization into the batch process and to insert persistent URLs into the corresponding records in a fraction of the time required by the manual process. Besides presenting the workflow and the design of the process itself, this presentation will also discuss quality issues in both the in-house and ProQuest records, explain why ProQuest records were not loaded directly, and describe the difficulties encountered during implementation, the limitations of the batch process, and possible adaptations of the process for other copy and original cataloging activities.
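The multi-step matching described above can be sketched as a cascade of passes, from strict to loose: match on an exact combination of fields first, then fall back to a normalized key that tolerates data-entry inconsistencies. The following is a minimal, hypothetical illustration only; the field names and matching keys are assumptions, not MSU's actual record layout or the XSLT implementation used.

```python
# Hypothetical sketch of multi-step matching between two record sets
# that share no unique identifier. Field names are illustrative.
import re

def normalize(text):
    """Lowercase, strip punctuation, and collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", text.lower())).strip()

def match_records(print_records, etd_records):
    """Pair in-house print records with vendor ETD records in two passes."""
    matches = {}
    unmatched = list(etd_records)

    # Pass 1: exact author + title + year.
    for etd in list(unmatched):
        key = (etd["author"], etd["title"], etd["year"])
        for rec in print_records:
            if (rec["author"], rec["title"], rec["year"]) == key:
                matches[rec["id"]] = etd
                unmatched.remove(etd)
                break

    # Pass 2: normalized title + year, tolerating punctuation,
    # capitalization, and author-form differences in data entry.
    for etd in list(unmatched):
        for rec in print_records:
            if rec["id"] in matches:
                continue
            if (normalize(rec["title"]) == normalize(etd["title"])
                    and rec["year"] == etd["year"]):
                matches[rec["id"]] = etd
                unmatched.remove(etd)
                break

    # Records still unmatched would need a further pass or manual review.
    return matches, unmatched
```

Each pass removes matched records from the pool, so later, looser passes cannot overwrite a confident earlier match; anything left over falls out for manual review, which is what makes the batch approach safe to run over thousands of records.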

Keywords: ETD, XSLT, batch derive, vendor records

Start Date

17-3-2010 2:30 PM


Augmenting In-house and Vendor-supplied MARC Records: Automating Batch Derive of ETD Records by XSLT