This article was first published in:
International Cataloguing and Bibliographic Control 26 (2) Apr/Jun 1997 pp41-46
Any citation should use the above details of the hard copy version
Ann Chapman
The end of one year and the start of the next seems to herald the appearance in the media of numerous reviews of the events and statistics of the past year. Observing this in early 1995 suggested a new approach to the analysis of the BNBMARC Currency Survey data. Other reports (1992 and 1995) on the survey have focused only on the hitrate and how and why it has changed. Since a substantial amount of data has now accumulated from this survey, it was decided to look in more detail at the information recorded and at what else it might show over and above the hitrate. The decision to look at the samples collected in 1994 has no special significance; 1994 was simply the most recent full calendar year of sample data available. Specific points for investigation were identified and the data analysed to see what it showed about these points; the findings are presented here. The statistical analyses, using SPSS, were carried out by Steve Prowse.
The BNBMARC Currency Survey has been carried out continuously since its inception in January 1980 by staff at the research unit based at the University of Bath. The unit was originally the Centre for Catalogue Research (CCR, 1977-1987), then the Centre for Bibliographic Management (CBM, 1987-1992), and in 1992 it became UKOLN: The UK Office for Library and Information Networking. Initial CCR research proposals identified currency of bibliographic records as one measure of their quality, and the survey set out to investigate this by posing two questions.
Sample analysis produces monthly hitrates to answer the first question (what proportion of the records required by libraries is available on the BNBMARC files at the time of need), while the hitrates recorded over the survey's lifetime answer the second (how that availability changes over time). Cataloguing stage samples have been taken since January 1980 and ordering stage samples since February 1988.
Samples are obtained from a wide range of UK libraries, both public and academic. The libraries are randomly selected from those listed in the current edition of the Library Association's Directory of Libraries in the United Kingdom and the Republic of Ireland. Each month twelve libraries (six academic and six public) provide two samples - ten items at the cataloguing stage and ten at the ordering stage. Each library participates for six months and is then replaced by another library of the same type. Participation is on a rolling basis with one academic and one public library leaving and one of each type joining each month.
For each item in the sample, libraries record the ISBN (if known), author, title, publisher and date of publication of the item. Completed sample sheets are returned to UKOLN for checking against the BNBMARC database via the British Library's online information service BLAISE-LINE. Records found for items are categorised as (a) Cataloguing-In-Publication (CIP) records, (b) formerly CIP records or (c) British Library (BL) created records. A note is also made of the date the record was added to the database, which enables the length of time a record has been available to be calculated.
For each month, cataloguing and ordering datafiles are created and then analysed to produce the hitrates. Within each datafile, data is recorded for each item in the sample: whether a record was found, which type of record (if any) was found and the number of days it had been on the BNBMARC files, the publisher imprint prefix of the ISBN, and the publication date. Since all sample sheets and datafiles collected over the lifetime of the survey are stored, a substantial amount of data is available for further analysis when required. In the past, the analyses have focused on the proportion of records found when required (the hitrate) and the proportion of CIP records contributing to the hitrate. With the data available, it was decided to examine the 1994 sample in more detail.
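As a purely illustrative sketch (the field names below are assumptions, not the survey's actual file layout), the information held for each sample item might be represented as follows:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SampleItem:
        """One item from a monthly sample datafile (illustrative field names only)."""
        record_found: bool               # was a BNBMARC record found at first search?
        record_type: Optional[str]       # "CIP", "formerly CIP" or "BL created", if found
        days_on_file: Optional[int]      # days the record had been on the BNBMARC files
        publisher_prefix: Optional[str]  # publisher prefix section of the ISBN, if known
        publication_year: Optional[int]  # publication date recorded by the library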
Each month hitrates are calculated for the two samples by conflating the results of the preceding 12 months to avoid the peaks and troughs caused by seasonal variations. The hitrate shows the proportion of items for which records were on the BNBMARC files on the date the sample was taken. The hitrates for January and December 1994, together with the highest hitrates recorded during the year, are shown in Fig.1.
CAT 1994  | Public            | Academic           | All
Jan 1994  | 82%               | 84%                | 83%
Dec 1994  | 80%               | 87%                | 84%
          | max 83% (Jul-Oct) | max 91% (Jul)      | max 87% (Jul-Aug)

ORD 1994  | Public            | Academic           | All
Jan 1994  | 81%               | 77%                | 79%
Dec 1994  | 83%               | 83%                | 83%
          | max 84% (Oct)     | max 84% (Oct-Nov)  | max 84% (Oct-Nov)
Fig.1
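A minimal sketch of the 12-month conflation used to produce these hitrates, with invented monthly counts for illustration (the survey's own SPSS procedures are not reproduced here):

    def conflated_hitrate(counts, months):
        """Pooled hitrate (%) over the given months, e.g. the 12 months preceding a report date."""
        sampled = sum(counts[m][0] for m in months)
        found = sum(counts[m][1] for m in months)
        return 100.0 * found / sampled

    # Invented figures: month -> (items sampled, records found at first search)
    counts = {"1994-11": (120, 100), "1994-12": (120, 102)}
    print(round(conflated_hitrate(counts, ["1994-11", "1994-12"])))  # pooled over two months: 84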
The results in Fig.1 reflect the situation at the time the sample is taken, i.e. the time at which the record was required. Since some libraries might be willing to wait for a limited period for a record for at least some items, a recheck search was included in the survey. This takes place six months after the first search and aims to find out how many more items from the sample have records by that point. For many libraries a six month wait would not be acceptable, but most of these extra records are added to the database in the month following the sample date. The results of the recheck searches for 1994 are shown in Fig.2.
CAT 1994  | Public            | Academic           | All
Jan 1994  | 91%               | 91%                | 91%
Dec 1994  | 90%               | 92%                | 91%
          | max 92% (Feb)     | max 95% (Jul-Aug)  | max 93% (Jul-Aug)

ORD 1994  | Public            | Academic           | All
Jan 1994  | 89%               | 85%                | 87%
Dec 1994  | 91%               | 88%                | 89%
          | max 92% (Aug-Nov) | max 88% (Sep-Dec)  | max 91% (Oct)

Fig.2
The next step was to look again at the items for which records were not found even at the six month recheck. A second recheck search was therefore made in June 1995 on the 1994 sample. This gave a range of second recheck time periods varying from 6+ months for the December 1994 sample to 12 months for January 1994.
At the cataloguing stage second recheck a further 3 records were found: 2 in the academic sample and 1 in the public sample. Rather more records were found in the ordering stage sample, with 7 records each in the academic and public samples, giving a total of 14 extra records. When these additional records are added to the December 1994 figures (which, being cumulative, cover the whole of 1994), revised recheck hitrate figures can be produced, as shown in Fig. 3.
         |            Public               |           Academic              |           Combined
Searches | First  | 6 mth   | 2nd          | First  | 6 mth   | 2nd          | First  | 6 mth   | 2nd
         | search | recheck | recheck      | search | recheck | recheck      | search | recheck | recheck
CAT:     | 80%    | 90%     | 90%          | 87%    | 92%     | 92%          | 84%    | 91%     | 91%
ORD:     | 83%    | 91%     | 92%          | 83%    | 88%     | 89%          | 83%    | 89%     | 91%
Fig. 3
The late records can be accounted for in three ways. Firstly, some non-CIP items might take longer than average to be processed by the BL Acquisitions Processing and Cataloguing (AP&C) departments. Secondly, there are items which are deposited late with the BL and for which no CIP data was submitted. Thirdly, items quoted on the ordering samples can be taken from publicity flyers, and the actual publication date may be some months ahead; since CIP records only appear on the BNBMARC files up to 12 weeks in advance of publication, a sample item which is six months away from publication is unlikely to have a record at the time of the survey search.
After the second recheck there were still items for which no records were found on the BNBMARC files. These items were then checked against J. Whitaker & Sons' BookBank CD-ROM and Book Data's BookFind CD-ROM (in both cases the April 1995 edition). In the cataloguing stage sample, records were found on one or both of the CD-ROMs for all but 18 (out of 89) items; the 18 items comprised 9 from the public sample and 9 from the academic. Similarly, in the ordering stage sample, records were found on one or both of the CD-ROMs for all but 8 (out of 101) items; the 8 items comprised 3 from the public sample and 5 from the academic.
The results detailed above seem to indicate that a number of items are not being deposited with the British Library. They appear not to have been deposited with any of the other five copyright agency libraries either, since no copyright agency contributed records were found for these items. Under current UK copyright and legal deposit legislation, a free copy of each item published in the UK should be deposited by the publisher with the British Library, and with each of the other five copyright agency libraries: Cambridge University Library, Oxford University Library, the National Library of Scotland, the National Library of Wales and Trinity College Library, Dublin. The rapidly growing number of new titles published per year in the late 1980s, and the expected continuation of this trend, placed increasingly severe pressure on the British Library in providing bibliographic records for all legal deposit material. A shared cataloguing initiative between the British Library and the other copyright agency libraries, aimed at more efficient and effective provision of bibliographic records for legal deposit material, was implemented in November 1990.
Under the Copyright Libraries Shared Cataloguing Programme (CLSCP), each of the libraries receiving legal deposit materials has undertaken to be responsible for a particular section of UK publishing output, with the British Library National Bibliographic Service (NBS) contributing 70% of current catalogue records and the other five agency libraries contributing the other 30% between them. Records produced by the copyright agency libraries appear in the BNBMARC files until the British Library's own deposit copy has been received and the records enhanced with additional data. The enhanced records then replace the agency records.
Therefore, when records are still not found for items at the second recheck, one possible reason is that the item has not been deposited as required, or has been deposited so late that, given average processing times, the record had not been created by the time of the second recheck. At the cataloguing stage, where the sample is taken from physical items, a list detailing the items not found is sent monthly to the BL Legal Deposit Office. The BL can then pursue any items which still have not been deposited. The survey can only help with items which appear as part of the sample, though it perhaps indicates the scale of the problem for the BL. Of course, these items may eventually be deposited, and records for them created. Investigation of the level of compliance with legal deposit, taking into account very late deposits, would involve looking at survey samples from, perhaps, three or more years ago.
An alternative explanation for a very few of the 'missing' records is that there is an exclusions policy for the BNBMARC files. While items on the sample sheets are checked to see whether they appear to fall within the sample parameters, it is possible that some items outside the parameters are included where the information available is not enough to exclude them (checks on each physical item are not possible).
The items which were not found at first search were examined to see whether any link could be identified between subject area and items for which records either appeared late or not at all. The currency survey uses a random sample selection method to produce a representative sample of items from the whole subject range of published materials. The subject areas of the items are not, however, recorded by the participating libraries so these had to be added. Subject areas were identified only for items not found at first search.
The items not found at first search during 1994 were allocated to Dewey decimal classifications at the hundred division level (e.g. 530 = Physics). This was not an easy procedure since it had to rely heavily on the title details recorded. For some items where a record had been found at recheck or second recheck, the classification on the record was used. Where no record had yet been found, the title sometimes provided unambiguous information for classification (e.g. Complete cookery course). Remaining items, in almost all cases, were classified using the subject information provided in the records on one or other of the CD-ROMs. In a very few cases a 'most likely' classification was decided on.
The cataloguing sample contained 164 items for which no record was found at first search. Looking at the number of occurrences at the level of the ten main Dewey divisions, the largest categories are the 300s (24%) and the 600s (21%). The same procedure was carried out on the ordering sample, which had 199 items for which no record was found at first search. Here the largest categories at the ten main Dewey divisions level are the 600s and the 800s (both 22%), closely followed by the 300s (21%).
This does not, however, prove any link between subject area and late or non-deposit of items, and a comparison with the whole of the sample would be necessary to see if there were any correlation. The samples for 1994 total 1294 items in the cataloguing sample and 1249 in the ordering sample. Determining subject areas for the whole of the 1994 sample retrospectively would have been very time consuming, though it could have been undertaken. Instead it was possible to use an analysis, recently carried out by James Elliot of the BL National Bibliographic Service (NBS), of the Dewey class marks of records on the BNB on CD-ROM, which includes all BNBMARC records created since 1986. The subject proportions on BNB on CD-ROM proved to be similar to the proportions in the above analysis of items with no records found at first search. The conclusion, therefore, at least for the sample taken in 1994, is that records not found at first search broadly reflect the subject distribution of published output in the UK. See Fig.4.
Dewey          | BNB on CD-ROM         | Cataloguing               | Ordering
classification | No. of      | % of    | Not found at    | % of    | Not found at    | % of
               | records     | total   | 1st search      | total   | 1st search      | total
000 - 099      | 18677       | 4%      | 11              | 7%      | 24              | 12%
100 - 199      | 10216       | 2%      | 5               | 3%      | 6               | 3%
200 - 299      | 1711        | 4%      | 2               | 1%      | 2               | 1%
300 - 399      | 93087       | 22%     | 39              | 24%     | 42              | 21%
400 - 499      | 12401       | 3%      | 3               | 2%      | 1               | 0.5%
500 - 599      | 27932       | 7%      | 17              | 10%     | 4               | 2%
600 - 699      | 70122       | 17%     | 35              | 21%     | 44              | 22%
700 - 799      | 36048       | 9%      | 20              | 12%     | 19              | 10%
800 - 899      | 96568       | 23%     | 10              | 6%      | 44              | 22%
900 - 999      | 39202       | 9%      | 21              | 13%     | 13              | 7%
Total          |             |         | 164             |         | 199             |
Fig.4
The subject distribution at the Dewey hundred divisions level was also examined. At the cataloguing stage, it was found that for most of the divisions there were between 1 and 3 occurrences. However, 6 divisions had between 7 and 9 occurrences each and accounted for a total of 31% of the items. In three of these six divisions there is also a clear indication that the majority of the items are in the public library sample.
Repeating the Dewey hundred division level comparison on the ordering sample, it was again found that most divisions had between 1 and 3 cases. Seven divisions had between 9 and 34 occurrences each, accounting for 53% of the items, and showed a greater variation between divisions than those in the cataloguing sample. Also, three divisions had the majority of their items in the public sample and three had the majority in the academic sample.
Given that the previous analysis shown in Fig.4 indicated that late or non-deposited items reflected the general subject distribution, the differences between academic and public libraries are likely to reflect differences in collection policies. In general, public libraries have a higher intake of items on home economics, European history (which includes the immense number of popular books on both World Wars, as well as UK local history) and fiction than academic libraries, which in turn have a higher intake of items on law than public libraries. The distribution at the hundred division level is shown in Fig.5.
Cataloguing sample: Items not found at first search
Subject               | Dewey | No of cases | % of 164 | Case division (Acad : Pub)
Law                   | 340   | 9           | 5.5%     | 5A : 4P
Medicine              | 610   | 9           | 5.5%     | 3A : 6P
Home econ             | 640   | 9           | 5.5%     | 1A : 8P
Europ history         | 940   | 9           | 5.5%     | 1A : 8P
Soc services, welfare | 360   | 8           | 4.9%     | 5A : 3P
Fiction               | 820   | 7           | 4.2%     | 1A : 6P

Ordering sample: Items not found at first search
Subject               | Dewey | No of cases | % of 199 | Case division (Acad : Pub)
Fiction               | 820   | 34          | 17%      | 2A : 32P
Business man          | 650   | 18          | 9%       | 16A : 2P
Computing             | 005   | 14          | 7%       | 10A : 4P
Sports, hobbies       | 790   | 11          | 5.5%     | 1A : 10P
Home econ             | 640   | 10          | 5%       | 0A : 10P
Soc sciences          | 300   | 9           | 4.5%     | 7A : 2P
Law                   | 340   | 9           | 4.5%     | 4A : 5P
Fig.5
It would be interesting to see whether the whole sample does indeed reflect the same situation and whether the proportions remain constant over several years. From January 1996, therefore, the sample data will include subject information; the analysis can then be repeated on other occasions to see what changes there are.
The 1994 samples were also examined to see whether any link could be identified between specific publishers and deposit of published items. The survey datafiles do record publisher information, in the form of the publisher prefix section of the ISBN, but there were problems in making this analysis.
One problem was that some titles recorded in the samples have no ISBN - usually because they are from very small publishers, but sometimes because the ISBN was unknown to the sample provider at the time of the sample. If no records are found for such items, a 'no data' value is entered in the datafile. Such cases cannot therefore be included in the analysis. In 1994 there were 17 cases with no ISBN in the cataloguing sample (1.3%) and 22 cases in the ordering sample (1.8%).
A second problem is that the publisher prefix of the ISBN does not always easily identify the publishing group for this sort of analysis. Currently, for example, Pitman uses 0 273 and 0 7121 for Pitman Publishing, and 0 272 and 0 273 for Pitman Medical. A more complex situation is found with the Butterworth-Heinemann group: Butterworth uses 0 406, 0 407, 0 408, 0 409 and 0 952; Butterworth Architecture uses 0 408 and 0 7506; Butterworth-Heinemann uses 0 409 and 0 7506; Newnes uses 0 7506; Heinemann Educational Publishers uses 0 431 (also used for Heinemann Library) and 0 435 (also used for Heinemann ELT and Heinemann International Literature); and Heinemann (William) Ltd and Heinemann Young Books both use 0 434 and 0 437, of which 0 434 is also used by Heinemann Publishing Press and 0 437 by Heinemann Professional Publishing. Any analysis would require prior determination of which prefixes should be linked, and even if this were done it would not be possible to compare the analysis of one year with another, given the mergers and takeovers in publishing. Despite these limitations, some analysis was possible, though with little in the way of significant findings.
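As noted above, any such analysis would first need a hand-compiled table linking prefixes to publishing groups. The sketch below is illustrative only; the prefix-to-group mapping is an assumption based on the examples quoted above, not an authoritative list.

    from collections import Counter

    # Illustrative, hand-compiled mapping from ISBN publisher prefixes to publishing
    # groups, based on the examples quoted above; a real analysis would need a full,
    # regularly revised table of this kind.
    PREFIX_TO_GROUP = {
        "0 273": "Pitman", "0 7121": "Pitman", "0 272": "Pitman",
        "0 406": "Butterworth-Heinemann group", "0 7506": "Butterworth-Heinemann group",
    }

    def tally_by_group(prefixes):
        """Count sample items per publishing group; unmapped prefixes are kept distinct."""
        return Counter(PREFIX_TO_GROUP.get(p, "unmapped: " + p) for p in prefixes)

    print(tally_by_group(["0 273", "0 7121", "0 406", "0 999"]))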
The cataloguing sample for 1994 contained 1294 items. Of these 17 items had no ISBN so no publisher prefix was available. A total of 458 publisher prefixes appeared in the sample, of which 284 (62%) were represented by only one item each. Similarly, the ordering sample for 1994 contained 1249 items. Of these 22 items had no ISBN. A total of 377 publisher prefixes appeared in the sample, of which 223 (59%) were represented by only one item each.
The first analysis looked at whether specific prefixes accounted for many of the items not found at first search. In the cataloguing sample there were 164 such items. Of these, 115 (70%) were from publisher prefixes which each had only one item among the 164, and two items each were recorded for a further 11 prefixes: these 126 prefixes accounted for 77% of the 164 items. Three items were recorded for each of another five prefixes and four items each for a further three prefixes. The proportion of publisher prefixes represented by only one item was thus rather higher (by 8 percentage points) among the items not found at first search than in the sample as a whole.
The ordering sample contained 199 items not found at first search, and here there was a lower rate of single-item prefixes: 111 (56%), while two items each were recorded for another sixteen prefixes: these 127 prefixes accounted for 64% of the 199 items. Analysis of the remaining items revealed that three items were recorded for each of five prefixes, four items for each of another six prefixes, five items for each of another two prefixes, and nine items for one prefix. Interestingly, slightly fewer (by 3 percentage points) publisher prefixes represented by only one item were found in the not-found-at-first-search category than in the sample as a whole.
BL NBS has requested details of the items for any publisher prefix with more than two items not found at first search, in order to carry out its own investigation into the late or non-deposit of these items.
The second analysis looked at the type of record recorded for each publisher prefix. The datafiles record whether the record found at first search is (a) a full record created by the BL from a deposit copy of the item concerned, (b) a full record upgraded by the BL from a Cataloguing-In-Publication (CIP) record on receipt of the deposit copy, or (c) a CIP record created from pre-publication details supplied by the publisher of the item. Further, the datafiles also record whether a record was subsequently found at the six month recheck and which of the three types of record listed above was found then. It was therefore possible to analyse the number of different types of record found for each of the prefixes using a simple scoring system, see Fig.6.
Record type                   | Score
First search: full BL record  | 7
First search: CIP upgraded    | 6
First search: CIP             | 5
Recheck: full BL record       | 4
Recheck: CIP upgraded         | 3
Recheck: CIP                  | 2
Still no record found         | 1
Fig.6
To obtain an average score for each publisher prefix the following method was used. Each item in the 1994 samples (1294 for cataloguing and 1249 for ordering) was given an individual score as shown in Fig.6. Average scores were then calculated by adding all the scores for a prefix and dividing by the number of items with that prefix. For example, suppose there are three items for publisher prefix xxx; at first search a full BL created record is found for one item and a CIP record for the second, while for the third item a CIP upgraded record is found only at recheck. For these three items, scores of 7, 5 and 3 are recorded, totalling 15 points for that prefix. Dividing the total points by the number of items for the prefix gives an average score: in the example, fifteen points divided by three items gives an average score of five. Libraries will want to find the records they require at first search, and prefixes scoring five or above are likely to have had records available for most of their items occurring in the sample. Libraries purchasing records for catalogues will also be concerned with the completeness of records, and would want to find as many prefixes as possible scoring six and above. Libraries using the database for acquisitions records are more likely to want records available, even if only at the CIP stage, and would perhaps be less concerned about prefixes scoring only five.
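A minimal sketch of this averaging, using the Fig.6 scores and the three-item worked example above (the prefix 'xxx' is the article's own placeholder):

    def average_score(item_scores):
        """Average record-type score (Fig.6) for the items sharing one publisher prefix."""
        return sum(item_scores) / len(item_scores)

    # Worked example above: full BL record at first search (7), CIP record at first
    # search (5), and a CIP upgraded record found only at recheck (3) for prefix 'xxx'.
    print(average_score([7, 5, 3]))  # 5.0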
Prefixes scoring seven as an average had full BL created records at the date of the sample for all items with that prefix. To score seven, however, means that all of the items recorded for a prefix had to have been deposited with the BL in time for a record to be created (no CIP records were previously on file for these items). Of the ninety-three prefixes which did score an average of seven, eighty had only one item each in the sample, while ten had two items, one had three items and two had four items.
Prefixes scoring 6 to 6.9 as an average had either full BL created records or CIP upgraded records at the date of the sample for items with that prefix. A total of one-hundred and thirty-three prefixes scored at this level, ranging from the seventy prefixes with only one item each in the sample and twenty-one prefixes with two items, up to one prefix with 49 items and another with 59 items.
In total, 226 of the 469 prefixes present in the 1994 cataloguing sample (48%) had scores of between six and seven - that is, full MARC records, whether created initially through the CIP programme or directly by the BL, for all their items.
Since many prefixes have only one item within the sample, it is interesting to compare the few prefixes with large numbers of items. There were six prefixes with more than twenty items each in the 1994 cataloguing sample. The prefix with 59 items scored 6.3, that with 50 items 5.9, and that with 49 items 6.3, while two further prefixes - one with 26 items and the other with 22 items - each scored 5.4.
The prefixes in the ordering sample were scored in the same way as for the cataloguing sample. Prefixes scoring an average of seven (BL created records) totalled seventy-three, of which sixty-three had only one item in the sample, another eight prefixes had two items and two prefixes had three items.
There were eighty-five prefixes scoring 6 to 6.9 as an average (BL created or CIP upgraded records). As with the cataloguing sample, this ranged from forty-five prefixes with only one item each in the sample and eleven with two items, up to one prefix with forty-one items.
In total in the 1994 ordering sample, 158 out of 386 prefixes (41%) present in the sample had scores of between six and seven.
Nine prefixes were recorded with more than twenty items each in the ordering sample. The highest number of items for a prefix was 77, and four prefixes had more than thirty items each. The prefix with 77 items scored 5.6, that with 49 items also scored 5.6, that with 41 items scored 6.0, that with 38 items scored 5.8, that with 28 items scored 5.4, that with 27 items scored 5.2, that with 23 items scored 5.6, that with 22 items scored 1.4, and that with 21 items scored 4.7.
Finally, it was decided to look at the spread of publication dates among the 1994 samples. From the tables shown below (Figures 7 and 8) it can be seen that there is a clear pattern for the majority of sample items to be published either in the sample year or the previous few years. In fact 70% of items in both the cataloguing and ordering samples have publication dates of 1993, 1994 or 1995, i.e. were published within a year of the sample or were forthcoming. Including items published in 1992 (i.e. published within two years of the sample), the proportions rise to 80% of the cataloguing sample and 78% of the ordering sample. If the spread of publication dates is widened to cover the years 1990-1995, 90% of items in the cataloguing sample and 89% of items in the ordering sample were published in these years. Only 2% of items in both samples were published more than ten years before the sample year.
The same trend is apparent whether looking at records found at first search, records found at recheck or missing records; the majority of publication dates fall within two years of the sample date. Thus at first search 79% of the cataloguing sample and 76% of the ordering sample were published in 1992 or later. At the recheck 99% of the cataloguing sample and 94% of the ordering sample were published in 1992 or later. Of records not found, 77% of the cataloguing sample and 82% of the ordering sample were published in 1992 or later.
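A minimal sketch of how the proportions quoted above can be derived from the 'All' columns of the date tables (Fig.7 data shown; forthcoming items with later publication dates are counted as recent, as in the text):

    def pct_within(counts_by_year, sample_year, years_back):
        """Percentage of items published no more than `years_back` years before the
        sample year (forthcoming items with later dates are also counted)."""
        total = sum(counts_by_year.values())
        recent = sum(n for year, n in counts_by_year.items() if year >= sample_year - years_back)
        return 100.0 * recent / total

    # Cataloguing sample totals by publication year, from the 'All' column of Fig.7.
    cat_1994 = {1976: 1, 1977: 1, 1978: 2, 1979: 2, 1980: 1, 1981: 6, 1982: 8, 1983: 4,
                1984: 11, 1985: 11, 1986: 18, 1987: 20, 1988: 18, 1989: 32, 1990: 55,
                1991: 76, 1992: 129, 1993: 382, 1994: 544, 1995: 2}

    print(round(pct_within(cat_1994, 1994, 1)))  # 1993-1995: about 70%
    print(round(pct_within(cat_1994, 1994, 2)))  # 1992-1995: about 80%
    print(round(pct_within(cat_1994, 1994, 4)))  # 1990-1995: about 90%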
From the above analyses, it can be seen that a substantial amount of data has been recorded for the BNBMARC Currency Survey since its inception in 1980. The hitrates, which are now on Web pages (at http://www.ukoln.bath.ac.uk/research/bibman/bnbmarc/), show that for most of the year libraries would have found records at both cataloguing and ordering stages for around eight out of every ten items for which they could expect the BL to be providing records. The recheck hitrates (also on the Web pages) show that six months after the sample date records would have been found for around nine out of every ten items.
The BL is notified of the items which are not found in the cataloguing sample and can therefore contact the relevant publishers and request the items if they have not yet been deposited. This requesting procedure may help by reminding publishers of other items they should be depositing. The number of items not found after the recheck might give some indication of the scale of the problem of non-deposit (possibly 10% of UK publishing output) but can do little in practical terms to help solve it.
No link was established between subject area and late or non-deposit, but UKOLN is extending the data collected to include subject coding, since this also provides an opportunity to monitor the numbers of items in different subject areas being acquired by the academic and public sectors.
The publisher performance analyses may prove useful to the BL with regard to late or non-deposit, despite the problems in producing a comparative list. The prefixes for which three or more items were not found can be identified and sent to the BL, which will then be able to contact the publishers concerned and explore the problem further.
The proportion of records found at full MARC status (48% cataloguing and 41% ordering) is perhaps an area where the BL can hope for improvement with the contract for CIP records with Bibliographic Data Services as from January 1996. It would be possible to repeat the analysis in 1995 to see if the proportions have increased.
The date analysis provides a useful look at the age of material currently being acquired, indicating that 70% of the items acquired or catalogued were published in the year of the sample or the preceding year, with a few orders for items to be published the following year. Only a very small proportion of items (2%) currently being acquired are more than ten years old.
CATALOGUING SAMPLE FOR 1994

     |      SAMPLE      |      SEARCH      |     RECHECK      |     MISSING
DATE | Public Acad. All | Public Acad. All | Public Acad. All | Public Acad. All
1976 |    1     -    1  |    1     -    1  |    -     -    -  |    -     -    -
1977 |    1     -    1  |    -     -    -  |    -     -    -  |    1     -    1
1978 |    -     2    2  |    -     2    2  |    -     -    -  |    -     -    -
1979 |    -     2    2  |    -     2    2  |    -     -    -  |    -     -    -
1980 |    1     -    1  |    1     -    1  |    -     -    -  |    -     -    -
1981 |    2     4    6  |    2     4    6  |    -     -    -  |    -     -    -
1982 |    3     5    8  |    3     5    8  |    -     -    -  |    -     -    -
1983 |    1     3    4  |    1     3    4  |    -     -    -  |    -     -    -
1984 |    6     5   11  |    5     3    8  |    -     -    -  |    1     2    3
1985 |    6     5   11  |    5     3    8  |    -     -    -  |    1     2    3
1986 |    7    11   18  |    6    11   17  |    1     -    1  |    -     -    -
1987 |    9    11   20  |    6    11   17  |    -     -    -  |    3     -    3
1988 |    8    10   18  |    6     7   13  |    1     -    1  |    1     3    4
1989 |   14    18   32  |   13    17   30  |    -     -    -  |    1     1    2
1990 |   25    30   55  |   21    26   47  |    1     -    1  |    3     4    7
1991 |   26    50   76  |   22    47   69  |    1     -    1  |    3     3    6
1992 |   42    87  129  |   37    80  117  |    2     3    5  |    3     4    7
1993 |  173   209  382  |  132   188  320  |   13     6   19  |   28    15   43
1994 |  347   197  544  |  293   165  458  |   28    12   40  |   26    20   46
1995 |    -     2    2  |    -     2    2  |    -     -    -  |    -     -    -
Fig.7
ORDERING SAMPLE FOR 1994

     |      SAMPLE      |      SEARCH      |     RECHECK      |     MISSING
DATE | Public Acad. All | Public Acad. All | Public Acad. All | Public Acad. All
1976 |    -     -    -  |    -     -    -  |    -     -    -  |    -     -    -
1977 |    1     1    2  |    1     1    2  |    -     -    -  |    -     -    -
1978 |    2     -    2  |    2     -    2  |    -     -    -  |    -     -    -
1979 |    -     2    2  |    -     1    1  |    -     -    -  |    -     1    1
1980 |    3     3    6  |    2     2    4  |    -     -    -  |    1     1    2
1981 |    2     3    5  |    2     3    5  |    -     -    -  |    -     -    -
1982 |    1     -    1  |    1     -    1  |    -     -    -  |    -     -    -
1983 |    6     5   11  |    6     4   10  |    -     -    -  |    -     1    1
1984 |    2     5    7  |    2     5    7  |    -     -    -  |    -     -    -
1985 |    6     5   11  |    6     4   10  |    -     -    -  |    -     1    1
1986 |    8     7   15  |    8     6   14  |    -     -    -  |    -     1    1
1987 |    6     6   12  |    4     6   10  |    -     -    -  |    2     -    2
1988 |   11    15   26  |   10    15   25  |    -     -    -  |    1     -    1
1989 |   15    30   45  |   14    27   41  |    -     -    -  |    1     3    4
1990 |   22    37   59  |   20    34   54  |    -     -    -  |    2     3    5
1991 |   23    45   68  |   21    41   62  |    -     -    -  |    2     4    6
1992 |   30    73  103  |   27    60   87  |    -     1    1  |    3    12   15
1993 |  113   166  279  |   96   149  245  |    4     2    6  |   13    15   28
1994 |  390   212  602  |  302   157  459  |   53    23   76  |   35    32   67
1995 |    -     3    3  |    -     -    -  |    -     2    2  |    -     1    1
Fig.8
Chapman, Ann. Why MARC surveys are still a hot bibliographic currency. Library Association Record v.94(4), Apr 1992, p248-254.
Chapman, Ann. National Library bibliographic record availability: a long-term survey. Library Resources and Technical Services v.39(4), Oct 1995, p345-357.
Thanks to Steve Prowse of UKOLN for undertaking the statistical analyses, and to James Elliott of the British Library National Bibliographic Service for the subject data on BNB records. Thanks also to Lorcan Dempsey of UKOLN and members of the Book Industry Communication Bibliographic Standards Technical Sub-group for comments on earlier drafts of this article. UKOLN is funded jointly by the British Library Research and Development Department and the Joint Information Systems Committee of the Higher Education Funding Councils; neither funding body is responsible for the results and opinions in this article.
Previous reports of the results of the BNBMARC Currency Survey, which has been carried out since 1980, have concentrated on the hitrate and how and why it has changed. This report focuses instead on one particular year of the survey, 1994, and looks at what further information can be obtained from the data collected. The report indicates a level of non-compliance with legal deposit legislation of around 10%, but identifies no link between non-compliance and specific subject areas. Analysis also indicates that, of items acquired or catalogued in the UK, around 70% of items are within two years of the publication date and that only 2% of items are more than 10 years old.