Using Unix to compare the output of a CARL search against local journal holdings

by Dan Mahoney, Analyst Programmer I
dmahone@hal.unm.edu   dmahone@unmb.BITNET
505-277-4816, 505-277-4412, FAX 505-277-7735
Centennial Science and Engineering Library
University of New Mexico, Albuquerque, New Mexico 87131

CARL, as everyone should know by now, is the Colorado Alliance of Research Libraries. In this writer's opinion CARL is a shining example of an automated library system. We have been using their UnCover program for well over a year now. We have three personal computers with CD-ROMs that are hooked up to the local MicroVax as dumb terminals using Procomm. The computers run a Procomm script to log users directly into the UnCover program. The session is captured to a file on the MicroVax. After the user is finished with the program, the capture log is automatically edited using Unix shell scripts. These scripts mostly use the sed command. The following article will show how to trim a lot of the unnecessary data from the capture log and then compare the journals in the citations against a file of local holdings.

This project differs from the others because using a title to compare against local holdings requires coping with variations in the titles. However slight these inconsistencies may be, they do affect the outcome. To accomplish this project we will need an additional file of local holdings to compare against and a search engine of our own that will keep us aware of our progress, or lack of progress. The local holdings file needs the entire title of the journal and the call number to fit on one line.

The following is the command that telnets to CARL and records the output in a file called carl.log.

COMMAND IS telnet 192.54.81.128 | tee carl.log

This provides a capture log of the output from CARL. It will not record output from your keyboard, though. There will also be a ^M at the end of each line. These do not show up in the printout but tend to give the screen output a trashed look. The command to delete these ^M characters in one pass is:

COMMAND IS tr -d '\015' < carl.log > temp.1

The above command uses the delete option of the tr command. The \015 is the octal representation of the control-M (carriage return) character.

Another irritating set of characters in this capture log are the ASCII escape sequences that are used for highlighting and clearing the screen. These are the characters that appear as ^[[1m. I use a two-step process to eradicate those.

COMMAND IS tr -d '\033' < temp.1 > temp.2
COMMAND IS sed 's/\[..//g' < temp.2 > temp.3

The tr, or translate, command deletes the ^[ (escape) part of the ^[[1m. The sed 's/\[..//g' then substitutes nothing for the bracket and the two characters that follow it, which takes care of the [1m, [2m, [3m... that are left behind.

The above commands do their fair share in removing most of the excess characters in a CARL capture log. We also use the delete function of sed to delete a lot of the repeated menu screens that one ends up with in a capture log. This cuts down on wasted paper and toner and speeds up the printing time. One example is the "Welcome to the CARL" line at the beginning of the script.

COMMAND IS sed '/Welcome to the CARL System/D' < temp.3 > temp.4

This deletes the entire line that matches the text between the slashes. What one can do with sed is to have a file of commands. The following will have sed use the commands in the file carl.del to delete the appropriate lines in temp.4 and redirect the output to temp.5.

COMMAND IS sed -f carl.del < temp.4 > temp.5

The following is an example of what data is in the carl.del file.

<<< carl.del >>>
/TO CONTINUE DISPLAY/D
/TO DISPLAY FULL RECORDS/D
/Enter word or words/D
/separated by spaces and press/D
/You may make your search more specific/D
/ Welcome to UnCover, the Article Access Solution from CARL./D
/This database contains records describing journals and their contents./D
/Coverage is rapidly growing as CARL member holdings are processed./D
/UnCover will soon include more than 10,000 titles, and descriptions/D
/of over 600,000 articles will be generated each year. Articles can/D
/be retrieved individually or displayed as the table of contents /D
/for any given journal issue./D
<<< end of carl.del >>>

An efficient way to create these files is to edit a copy of the capture log. Delete all the lines that vary, such as the citations and the dates, but leave in the menu screens. I am assuming that one might use the editor "vi". If one is using vi, do the following. Hit escape, then a colon, then type "%s/^/\//" and a return. This inserts a slash (/) at the beginning of each line. Then do the following: escape, colon, "%s/$/\/D/", and a return. This adds a slash and a D at the end of each line. I then delete the whitespace between the last character on the line and the slash D. I don't think that this is necessary, but I do it for easy reading. Another major caveat here: sed will have a problem if you have a slash between two other slashes. If you put a backslash before an embedded slash, things will be fine.
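If you would rather not make those edits by hand, sed itself can build the command file for you. The line below is only a sketch of one way to do it, and it assumes the unwanted menu lines have been collected in a file called menu.lines (that file name is made up for this example): the first expression throws away blank lines, the second puts a backslash in front of any embedded slash, and the third wraps each remaining line in the /.../D form that sed -f expects.

COMMAND IS sed -e '/^$/d' -e 's/\//\\\//g' -e 's/.*/\/&\/D/' menu.lines > carl.del

The result should look just like the carl.del file shown above and can be looked over with vi before it is trusted.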
All the above commands used in conjunction will trim a majority of the gristle from a CARL capture log. If one only wants to extract the citation part of a CARL capture log, do the following. Edit a file called carl.awk:

<<< carl.awk >>>
/TITLE/,/OWNERS/
<<< end of carl.awk >>>

Then issue the following command.

COMMAND IS awk -f carl.awk < temp.5 > temp.6

Awk will respond by outputting only the lines that fall between the "TITLE" line and the "OWNERS" line of each record.

Now that we have the output sufficiently trimmed down, we will concentrate on matching the journals mentioned in the citations against our local holdings. CARL does not provide the ISSN or CODEN fields, but its title field is very close to being consistent. Another major caveat with this project: if the citation has a reference to the journal "Compute", then when it comes time to search the local holdings file, any line that has the word "Compute" will return as a match. This will include such titles as Computer Graphics, Computer Imaging, and many, many others. To sidestep this problem, examine your list of local holdings and the CARL list of serials. The major culprits here are one-word journal titles such as Electronics, Science, Cell, Compute, Computer, Ecology, Energy, and Journal, just to name a few. What we will do here is to create a file of sed commands that will look for these one-word titles and change them to reflect the local call number. Below is an example of that type of file.

<<< carl.swap >>>
s/ In: Electronics. / IN== ELECTRONICS Per TK7800 E4384/
s/ In: Science. / IN== Science Per Q1 S28 /
s/ In: Carbon. / IN== Carbon Per QD181 C1 C3 /
s/ In: Cell. / IN== Cell Per QH573 C38 /
s/ In: Computer. / IN== COMPUTER Per TK7885 A1 I5/
s/ In: Compute. / IN== COMPUTE We do not carry this /
s/ In: Ecology. / IN== Ecology Per QH540 E3 /
s/ In: Energy. / IN==Energy Per HD9502 A1 E533 at Parrish /
s/ In: Engineer. / IN==Engineer Per TK1 W43 /
s/ In: Journal /IN== JOURNAL need to cross reference this/
s/ In: Journal. /IN== JOURNAL need to cross reference this/
<<< end of carl.swap >>>
COMMAND IS sed -f carl.swap < temp.6 > temp.7

This step adds the call numbers to the lines in the capture log. It also changes the "IN:" to "IN==" so that when we extract the IN: lines from the capture log we will not bring in any of the one-word journal titles. This step takes a little time because sed has to check every line of the output for the one-word titles. After this is accomplished we will extract the IN: lines, redirect them to a file, and have sed work on that file.

Comparing the title field against local holdings is variable, to say the least. With CARL citations the title of the journal is on the line that begins with the word IN:. What we want to do is to extract the lines that have the word IN: on them. For the millionth time, we redirect the output to a file.

COMMAND IS grep IN: < temp.7 > temp.8

After we have created this file we want to remove the IN: along with the eleven spaces that precede it and the five spaces that come after it. The following does that without our having to count the spaces exactly.

COMMAND IS sed 's/^ *IN: *//' < temp.8 > temp.9

This will leave a file of just titles, which is what we are looking for. Or it seems to be what we want. But here is another caveat. Some of these lines will have blank spaces after the last character. If we search for "byte " against our file of local holdings and our local holdings file has no spaces after the word byte, we will never get a match on this title. What we want to do here is to delete the blank spaces at the end of the titles. This will be done with sed. There are two ways to do this; one is easy and the other is not. The easy way is to edit a one-line file called carl.eol:

<<< carl.eol >>>
s/ *$//g
<<< end of carl.eol >>>

After you have created the above file, use sed to execute the command in it.

COMMAND IS sed -f carl.eol < temp.9 > temp.10

The hard way to do this requires elaboration. There are certain characters that the shell considers meta-characters. The asterisk and the $ are both meta-characters, so if one were to give this command directly at the command line, one would have to protect these meta-characters from the shell by means of a backslash.

COMMAND IS sed s/" "\*\$// < temp.9 > temp.10

We are getting to the point where we could search the output in the temp.10 file against our local holdings. To normalize the titles we will run them through the translate command to convert all upper case characters to lower case, since our local holdings file is in lower case.

COMMAND IS tr 'A-Z' 'a-z' < temp.10 > temp.11
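Once each of these title steps has been checked by looking at its output file, they can also be strung together with pipes, in the same way the programs at the end of part two work. The line below is only a sketch of that idea; it starts from the temp.7 file produced by the carl.swap step and writes the normalized titles straight into temp.11, skipping the intermediate files.

COMMAND IS grep IN: temp.7 | sed 's/^ *IN: *//' | sed 's/ *$//g' | tr 'A-Z' 'a-z' > temp.11

Either way, temp.11 ends up holding one lower-case journal title per line, ready to be checked against the local holdings.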
I mentioned earlier that we will use a different search engine in place of fgrep to search our local holdings file. We could use fgrep, since it is quicker, but if fgrep doesn't find a match we will not be notified. For example, in our local holdings file we have a line that reads "qa76.8 p38 pc magazine". The way CARL has this journal is "PC magazine : the independent gu...". Therefore, if we attempt to search CARL's pc magazine title against our own we will never return a match. Below is our new search engine, called carl.lookup.

<<< carl.lookup >>>
#!/bin/sh
echo =============================================
echo Below are some of the journals that are located
echo at this library
echo If a title is \*NOT\* listed below it does \*NOT\*
echo mean that we do not have it-
echo It means this program did \*NOT\* find it
echo Periodicals are located on LL1 west
echo =============================================
# read one title per line from standard input;
# the line command returns a non-zero status at end of file
while cmd=`line`
do
        if grep "${cmd}" local.holdings.carl
        then
                echo -n " "
        else
                echo "${cmd}" >> found.not
        fi
done
# mail off the titles that were not found so they can be checked
mail somebody < found.not
rm -f found.not
echo This listing will be appended to your search log
<<< end of carl.lookup >>>

Edit the above file and do a chmod +x on it to make it executable. The program reads each line of our temp.11 file and searches for it in our file of local holdings, local.holdings.carl. The titles that it does not find are redirected into a file called found.not. At the end of the program the found.not file is mailed to somebody. Somebody will then check the local holdings file to see whether we actually do own the title but its title doesn't agree exactly with CARL's, and if so edit the local holdings file to reflect CARL's version of the title. Another caveat here: some titles on CARL vary slightly, and you might end up with three versions of a particular title. For example,

qa 76.8 p38    pc magazine : the independent guide to ibm-stan
qa 76.8 p38    pc magazine: the independent guide to ibm-stan
qa 76.8 p38    pc magazine :the independent guide to ibm-stan

The above three lines reflect one title with inconsistent spacing surrounding the colon.

We will redirect the temp.11 file into this program and capture the output in a file called temp.12.

COMMAND IS carl.lookup < temp.11 > temp.12

Then we will append the temp.12 file to the trimmed version of the CARL capture log, which is temp.5. We also want to send the output to the terminal.

COMMAND IS cat temp.12 | more
COMMAND IS cat temp.12 >> temp.5

Now the file called temp.5 has a trimmed version of the CARL output with the list of locally held journals at the end of the listing. We will rename the temp.5 file to carl.trim.

COMMAND IS mv temp.5 carl.trim

Now the task has been accomplished. To clean up after ourselves we will want to delete all the temp.* files that we have used.

COMMAND IS rm -f temp.*

This project takes time to mature. After each search one might have to keep editing the local holdings file to reflect CARL's titles more accurately. We have been running this program for close to a year now. Our users enjoy having the computer look for the journals that we own after a CARL search, and it makes their research that much quicker.
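Once all of the pieces above are working, there is nothing to stop you from collecting them into one shell script, in the same spirit as the piped programs shown at the end of part two below. The following is only a sketch of what such a script might look like; the names carl.hunt and titles.found are made up for this example, and it assumes the helper files carl.del, carl.awk, carl.swap, carl.eol and carl.lookup built above are all in the current directory.

<<< carl.hunt >>>
#!/bin/sh
# A sketch only: trim a CARL capture log and check the cited
# journals against local holdings, using the helper files
# described earlier in this article.
# strip carriage returns and escape sequences, then delete the menu screens
tr -d '\015' < carl.log | tr -d '\033' | sed 's/\[..//g' |
        sed '/Welcome to the CARL System/D' | sed -f carl.del > carl.trim
# pull out the citations, normalize the journal titles,
# and run them through the carl.lookup search engine
awk -f carl.awk < carl.trim | sed -f carl.swap | grep IN: |
        sed 's/^ *IN: *//' | sed -f carl.eol | tr 'A-Z' 'a-z' |
        carl.lookup > titles.found
# append the results to the trimmed log and clean up
cat titles.found >> carl.trim
rm -f titles.found
<<< end of carl.hunt >>>

A chmod +x carl.hunt makes it executable, after which typing carl.hunt in the directory that holds carl.log leaves behind carl.trim with the locally held titles appended, just as the step-by-step version did.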
Using Unix to check for local journal holdings after an online search
PART TWO

by Dan Mahoney, Analyst Programmer I
dmahone@hal.unm.edu   dmahone@unmb.BITNET
505-277-4816, 505-277-4412, FAX 505-277-7735
Centennial Science and Engineering Library
University of New Mexico, Albuquerque, New Mexico 87131

In the first part we mentioned that most online databases have ISSN numbers in their citations. Other databases have CODEN fields in their citations, but they add an additional character. Databases such as Compendex and Inspec have both the ISSN and CODEN fields, but they also add the additional character to their CODEN field. For these two databases we will use the ISSN number to compare against our file of local holdings.

For the first step we will use the Unix command "grep" to extract the lines with the ISSN field in the citations and redirect them to a file called issn0. Our capture log in this example is called "tip.record".

COMMAND IS grep ISSN: tip.record > issn0

The output of a standard Compendex search would provide one with the following:

<<< contents of issn0 >>>
 CODEN: ROBODV ISSN: 0263-5747
1248-9 CODEN: JCCCAT ISSN: 0022-4936 LANGUAGE: English
 CODEN: PESODO ISSN: 0161-6374 LANGUAGE: English
 PAGES: 1583-7 CODEN: JESOAN ISSN: 0013-4651 LANGUAGE: English
409-15 CODEN: JCRGAE ISSN: 0022-0248 LANGUAGE: English
263-72 CODEN: JECMA5 ISSN: 0361-5235 LANGUAGE: English
Energy Prog. 4, Vol. 1 PAGES: 419-25 CODEN: AHENDB ISSN: 0276-2412
316-19 CODEN: JMSLD5 ISSN: 0261-8028 LANGUAGE: English
277-84 CODEN: THSFAP ISSN: 0040-6090 LANGUAGE: English

Notice that in the output the ISSN doesn't fall into a definite pattern. For the second part of this exercise we will need to edit each line in the issn0 file. In the earlier example dealing with STN's Chemical Abstracts, we used the "sed" command to replace the "IS " with nothing, leaving just the ISSN number. We will still use "sed" here, but we will also use the Unix command "awk". The first thing we will do to edit the file is to use the substitute command in sed to replace the word "ISSN: " with @ and redirect the output to a file called issn1. We are using the at-sign (@) because it is unlikely that this character will show up in either the ISSN or CODEN field.

COMMAND IS sed s/"ISSN: "/@/ < issn0 > issn1

<<< contents of issn1 >>>
 CODEN: ROBODV @0263-5747
1248-9 CODEN: JCCCAT @0022-4936 LANGUAGE: English
 CODEN: PESODO @0161-6374 LANGUAGE: English
 PAGES: 1583-7 CODEN: JESOAN @0013-4651 LANGUAGE: English
409-15 CODEN: JCRGAE @0022-0248 LANGUAGE: English
263-72 CODEN: JECMA5 @0361-5235 LANGUAGE: English
 PAGES: 419-25 CODEN: AHENDB @0276-2412
316-19 CODEN: JMSLD5 @0261-8028 LANGUAGE: English
277-84 CODEN: THSFAP @0040-6090 LANGUAGE: English

In the output above we have marked the field that we are interested in with the at-sign (@). Next we will remove everything to the left of the @ with awk. A component of awk is the field separator designator. We will use the field separator to split the ISSN off from the rest of the line. Once again we will redirect the output to a file, this time called issn2.

COMMAND IS awk 'BEGIN { FS = "@" } { print $2 }' < issn1 > issn2

<<< contents of issn2 >>>
0263-5747
0022-4936 LANGUAGE: English
0161-6374 LANGUAGE: English
0013-4651 LANGUAGE: English
0022-0248 LANGUAGE: English
0361-5235 LANGUAGE: English
0276-2412
0261-8028 LANGUAGE: English
0040-6090 LANGUAGE: English
ISBN 0000000000

Notice that our job is still only partially finished. Here is a further elaboration of the above awk command. The FS = "@" means that the field separator is the @; it is set in the BEGIN block so that it is in effect before the first line is read. A field is akin to a column in awk. For example, if one had a file of columnar data separated by a tab, then the tab would be considered the field separator. The "print $2" tells awk to print the second field, since everything before the @ is considered the first field. The end result of the above command is to print all the characters to the right of the at-sign.

Our final attempt to edit the file will also use awk. We want the final output to be just a column of ISSN numbers. This can be accomplished by the following.
COMMAND IS awk '{print $1}' < issn2 > issn3

<<< output of issn3 >>>
0263-5747
0022-4936
0161-6374
0013-4651
0022-0248
0361-5235
0276-2412
0261-8028
0040-6090

The elaboration of the above command is that it asks awk to print only the first field, designated by $1. The above is the output we were striving for. For the sake of efficiency we would like to sort the above file and then run it through uniq. The reason for this is so we do not have to search for thirty occurrences of the same ISSN number.

COMMAND IS sort < issn3 > issn4

Now that the file has been sorted, we can run it through "uniq".

COMMAND IS uniq -d < issn4 > issn5
COMMAND IS uniq -u < issn4 >> issn5

*NOTE: there is also a sort -u command which does the same thing; in fact, sort -u < issn3 > issn5 replaces both the sort and the uniq steps.

Even though the file issn4 had no duplicates in it, most will, so it is still a good idea to run uniq. The next step after running the output through uniq is to use the fgrep command with the data in the issn5 file to search the file of local holdings. In this exercise the file that has our local holdings is called "local.holdings". Once again we want to have the output go to the screen, in order to convince ourselves that things are working, and also to redirect it to a file called issn6. We will use tee so that the output of the fgrep command goes to both the screen and the output file.

COMMAND IS fgrep -f issn5 local.holdings | tee issn6

The next step is to append the issn6 file to the end of the capture log. Once again, be extra careful to use two greater-than signs ( >> ) to append the output of issn6 to the tip.record file. If you use only one greater-than sign you will overwrite the tip.record file.

COMMAND IS cat issn6 >> tip.record

The final step is for us to clean up after ourselves by removing all the issn* files. This can be accomplished by the following.

COMMAND IS rm -f issn?

The above command removes all files that begin with issn and have one additional character after it. The ? is a single-character wildcard in Unix, as opposed to the asterisk, which is a multiple-character wildcard. The -f after the rm forces rm to remove the files without prompting the user. Some system administrators alias the rm command to prompt the user for each file to be deleted.

The CODEN field can be used in a similar fashion provided that the CODEN is actually a pure CODEN entry with no additional characters. The Biosis database has this type of CODEN field. The only difference in the above program would be to use sed to substitute "CODEN: " with @, as opposed to substituting "ISSN: " with @, when creating issn1. If the CODEN you are dealing with is not a "pure" CODEN, don't give up, because Unix can also help you there. A "pure" CODEN consists of only five alphanumeric characters. Some brands of Unix provide a command called "cut". One could run a file of CODENs (here called coden3, corresponding to issn3 above) through cut -c1-5, which outputs only the first five characters of each line. This drops the sixth character, which some databases use for checking purposes.

COMMAND IS cut -c1-5 < coden3 > coden4
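To make the CODEN idea concrete, a CODEN version of the ISSN steps might look like the sketch below; the file names coden5 and coden6 are invented for this example, and the sed /ISSN/d stage simply drops any lines that carry an ISSN so that only CODEN lines remain. The cut stage is the only real difference from the ISSN version, and it can be left out for a database such as Biosis whose CODEN is already pure.

COMMAND IS grep "CODEN:" tip.record | sed /ISSN/d | sed s/"CODEN: "/@/ | awk 'BEGIN { FS = "@" } { print $2 }' | awk '{print $1}' | cut -c1-5 | sort -u > coden5
COMMAND IS fgrep -f coden5 local.holdings | tee coden6
COMMAND IS cat coden6 >> tip.record

Apart from the cut stage and the use of sort -u in place of the separate uniq commands, this is the same sequence the Biosis branch of the menu program at the end of this article uses.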
In the step-by-step version we redirected the output to a file after each command in order to monitor our progress. After you have assured yourself that the steps work, you can recode them to sidestep the output files. Unix provides the ability to pipe data from one process to another. The code below will provide us with the same results without using the intermediate files.

---------------------------< CUT HERE >---------------------------
#!/bin/sh
# the above line tells Unix which shell to use
# pull out the ISSN numbers from the capture log
grep ISSN tip.record | sed s/"ISSN: "/@/ | awk 'BEGIN { FS = "@" } { print $2 }' > issn0
# keep just the ISSN column, drop blank lines, and sort
awk '{print $1}' < issn0 | sed /^$/D | sort > issn1
# remove duplicates
uniq -d issn1 > issn2
uniq -u issn1 >> issn2
# look the ISSN numbers up in the local holdings and append the matches
fgrep -f issn2 local.holdings | tee issn3
cat issn3 >> tip.record
rm -f issn?
---------------------------< CUT HERE >---------------------------

Edit the above into a file and call it jh.2. Then do the following at the Unix command line.

COMMAND IS chmod +x jh.2

This will make the above text file an executable file. Then, if you type jh.2 at the command line, the program will be set in motion.

The file below is a compilation of what was discussed in the first part of this article. It provides the journal-hunt capability for the following databases: STN's Chemical Abstracts, Compendex, Inspec, Georef, and Biosis. The program presents the user with a menu that asks which database was used and then executes the appropriate code.

--------------------------< CUT HERE >-------------------------------
#!/bin/sh
# The above line tells Unix what shell we want to use.
cd              # cd with no argument changes to the home directory
clear
#
echo Would you like to play JOURNAL HUNTER
echo which is a program to look for journals when
echo you search databases such as STN and Dialog and BIOSIS
echo It searches our collection based on ISSN
echo If the ISSN or CODEN is \*not\* displayed in the record
echo This will not work
echo
echo answer y or n
read answer00
case $answer00 in
n|N)    exit;;
y|Y)
        echo What type of database did you use
        echo 1--STN Chemical Abstracts
        echo 2--Compendex or Inspec or Georef
        echo 3--BIOSIS
        read answer01
        case $answer01 in
        1)
                grep "IS " tip.record | sed s/"IS "// | sort > issn1
                uniq -d issn1 > issn2
                uniq -u issn1 >> issn2
                fgrep -f issn2 local.holdings | tee issn.out
                cat issn.out >> tip.record
                rm issn1
                rm issn2
                rm issn.out;;
        2)
                grep "ISSN" tip.record | sed s/"ISSN: "/@/ | awk 'BEGIN { FS = "@" } { print $2 }' \
                | awk '{print $1}' | sort > issn.1
                uniq -d issn.1 > issn.2
                uniq -u issn.1 >> issn.2
                fgrep -f issn.2 local.holdings | tee issn.out
                cat issn.out >> tip.record
                rm issn.1
                rm issn.2
                rm issn.out;;
        3)
                grep "CODEN" tip.record | sed /ISSN/d | sed s/"CODEN: "/@/ | awk 'BEGIN { FS = "@" } { print $2 }' \
                | awk '{print $1}' | sort > coden.1
                uniq -d coden.1 > coden.2
                uniq -u coden.1 >> coden.2
                fgrep -f coden.2 local.holdings | tee coden.out
                cat coden.out >> tip.record
                rm -f coden.1 coden.2 coden.out;;
        esac    # choice of stn, compendex or biosis
        ;;
esac    # initial journal hunt question - do you want to play
-----------------------------< CUT HERE >---------------------------------

Once again, edit the above code and then do a chmod +x on the file name in order to make it executable.

NOTE: the first part of this story was published in Library Software Review, Jan-Feb 1992.

Below is a sample of our local holdings file. Notice how all the information for a title fits on one line: call number, title, ISSN and CODEN.
<<< sample of local.holdings >>>
qc789.3t43u516 review / oak ridge national laboratory.
qa76.8.a6 a6. a+. 0740-1590
we dont have compute
q3 .z39 a. zeitschrift fur naturforschung. a, ZFNSA
qh 301 a2x. bioscience. 0006-3568 BISNA
sf 810 a3 v4x. veterinary parasitology. 0304-4017
qe 701 a48x. alcheringa. 0311-5518
qd 1 a52x. journal of the american chemical society. 0002-8223
qb 1 a65x. astronomical journal. 0004-6256
qa 76.6 a8a. acm transactions on mathematical software. 0098-3500
tn860 .a3. aapg bulletin. 0149-1423
te1 .a67. aashto quarterly. 0147-4847
qa1 .a517. abstracts of papers presented to the american mathematical
qe1 .g19. abstracts with programs. 0016-7592
qd1 .a25. accounts of chemical research. 0001-4842
ta680 .a557. aci materials journal. 0889-325X
ta680 .a556. aci structural journal. 0889-3241
qa76.5 .c617. acm computing surveys.
qa75.5 .a35. acm transactions on computer systems 0734-2071
t385 .a25. acm transactions on graphics. 0730-0301
qa76.7 .a888. acm transactions on programming languages 0164-0925
ta501 .a6352. acsm bulletin. 0747-9417
qa1 .a158. acta applicandae mathematicae. 0167-8019
qa3 .a23. acta arithmetica. AARIA 0065-1036
tl787 .i47. acta astronautica. 0094-5765
qb1 .a1715. acta astronomica. AASWA 0001-5237
qd450 .a25 acta chemica scandinavica. series a: 0302-4377 ACSAA
qd241 .a25 acta chemica scandinavica. series b: 0302-4369
qd901 .a25. acta crystallographica. section a, ACACB 0108-7673
qd901 .i523. acta crystallographica. section b, ACBCA 0108-7681
qd901 .i525. acta crystallographica. section c 0108-2701
qa76 .a25. acta informatica. 0001-5903
qa1 .h81. acta mathematica hungarica. ACMTA 0236-5962
qa1 .a18. acta mathematica. 0252-9602 ACMAA
ta349 .a3. acta mechanica. 0001-5970 AMHCA
ts200 .a3. acta metallurgica. 0001-6160 AMETA
<<< end of local.holdings sample >>>
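As one last illustration of why it helps to keep each title on a single line, a single fgrep against this file answers the question of whether we hold a journal.

COMMAND IS fgrep "0001-6160" local.holdings

Run against the sample above, this prints the one line for acta metallurgica (ts200 .a3. acta metallurgica. 0001-6160 AMETA) and nothing else, which is exactly the one-line answer the programs in both parts of this article rely on.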