In the staff interface, go to the Cataloging tab and click 'Upload Marc Data' in the sidebar. You'll get a form where you can upload a file of MARC data. The first time you upload, leave the 'Test Load' setting at true, so you can see how OpenBiblio handles your MARC file without touching the database. Then go back to the upload screen, change the setting to false, make sure the collection and material type values are set the way you want them, and upload the file again. You should get a message telling you how many records were imported. After the import you'll still need to find the imported records, set the call numbers, and add copies; this can be automated with the SQL commands shown at the end of this page.
(Patch:1118356 sets the call number automatically on import. This patch does not work in recent versions.)
A test file in the correct format, with a few very simple records, can be downloaded here.
See also the Pre-processing and Post-processing paragraphs below.
Use 'Test Load' to get an indication of whether the import file is too large. There is no automated way to resume an import at the point where it stopped, but you can try to resume by manually removing the already-imported records from the import file.
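For example, assuming the database held no records before the import started (if it did, adjust the arithmetic accordingly), a quick count tells you how many records were loaded, and therefore how many to strip from the front of the import file:

select count(*) from biblio;

Then split off the already-imported records, e.g. with the yaz-marcdump split example shown under Pre-processing below, and re-import only the remaining chunks.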
Tips for preventing script time out (an example configuration follows these notes):
The database table layout of these versions is a predecessor of the 1.0 layout and cannot store the full MARC record format, so some data is lost on import.
- When the input file is large, the script may not finish, and no records are imported at all.
- Same as above, but if the process times out, the import is performed partially.
- Allows importing larger files, but it is still a good idea to split large import files. If the process times out, the import is performed partially.
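One common way to give the import script more time, assuming OpenBiblio runs under a typical PHP configuration (an assumption about your server, not something the import screen controls), is to raise PHP's limits in php.ini:

; php.ini - example values only; pick limits that suit your server
max_execution_time = 600   ; seconds a script may run (PHP's default is 30)
memory_limit = 256M        ; memory available to the import script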
MARC data is not plain text data. Many programs let you edit MARC data in a plain-text format, but that text format cannot be imported directly into OpenBiblio. You will need to convert the records into the actual MARC transmission format (Z39.2) before import. MarcEdit is an excellent, free program for Windows that can help you with this.
Another fine tool is yaz-marcdump, which comes with the YAZ toolkit from Index Data.
Its input and output formats and character encodings can be different, so yaz-marcdump can also be used to convert records.
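For example, with a recent YAZ release, converting a MARCXML file to the MARC transmission format mentioned above might look like this (the file names are made up; -i/-o select the input and output formats, -f/-t the character encodings):

yaz-marcdump -i marcxml -o marc -f utf-8 -t utf-8 records.xml > records.mrc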
On Windows 2000, the YAZ 4.x.x versions did not work, but the most recent 2.x.x release was OK.
Example: Split a large file into smaller chunks
yaz-marcdump -C 500 -s splitfile largefile
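This reads largefile and writes chunks of 500 records each to a series of files whose names begin with splitfile.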
Documentation for yaz-marcdump
The following are example SQL commands, to be executed on the imported records:
MARC 050$a - Classification number => biblio.call_nmbr1
update biblio, biblio_field
set biblio.call_nmbr1 = biblio_field.field_data
where biblio_field.bibid = biblio.bibid
and biblio_field.tag = '050'
and biblio_field.subfield_cd = 'a';
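If you'd rather see what the update will write before running it, a read-only preview (added here as a sanity check, not part of the original recipe) is:

select biblio.bibid, biblio_field.field_data
from biblio, biblio_field
where biblio_field.bibid = biblio.bibid
and biblio_field.tag = '050'
and biblio_field.subfield_cd = 'a';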
MARC 852$j - Shelving control number => biblio.call_nmbr1
update biblio, biblio_field
set biblio.call_nmbr1 = biblio_field.field_data
where biblio_field.bibid = biblio.bibid
and biblio_field.tag = '852'
and biblio_field.subfield_cd = 'j';
If you run either of the following inserts more than once, you will get duplicate copies and barcodes.
MARC 852$p - Barcode / piece designation => biblio_copy.barcode_nmbr
insert into biblio_copy (bibid, copyid, create_dt, copy_desc, barcode_nmbr, status_cd, status_begin_dt, due_back_dt, mbrid, renewal_count)
select bibid, null, sysdate(), null, biblio_field.field_data, 'in', sysdate(), null, null, '0'
from biblio_field
where biblio_field.tag = '852' and biblio_field.subfield_cd = 'p';
Generated barcode (bibid zero-padded to 5 digits, with a trailing '1') => biblio_copy.barcode_nmbr
insert into biblio_copy (bibid, copyid, create_dt, copy_desc, barcode_nmbr, status_cd, status_begin_dt, due_back_dt, mbrid, renewal_count)
select bibid, null, sysdate(), null, concat(lpad(biblio.bibid, 5, '0'), '1'), 'in', sysdate(), null, null, '0'
from biblio;
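If you are not sure whether one of the inserts has already run, this read-only check (an extra precaution, not part of the original recipe) lists barcodes that occur more than once:

select barcode_nmbr, count(*) as copies
from biblio_copy
group by barcode_nmbr
having count(*) > 1;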