Usage report issues

Issue Report: In past years, IngentaConnect did not report titles with zero usage at all; now they do, and their COUNTER JR1 lists more than 20,000 titles (we subscribe to two). Many of these titles are not real; eleven are duplicated titles with typos in the ISSNs.

Usus Response:
Usus contacted Ingenta, which recently brought its usage statistics program in-house, to let them know that publishers should limit zero-usage entries to subscribed titles only and that there were title issues in the JR1, including missing titles, duplicate titles, and invalid ISSNs. Ingenta responded that development work is already scheduled to restrict the titles listed to those for which an institutional registration holds subscriptions (including zero-usage titles) and that they would address the title issues.

Issue Report: Manually downloading reports from EDP Sciences (Vision4Press) shows empty columns “Publisher”, “Print ISSN” and “Online ISSN”.

Usus Response: Usus contacted ScholarlyIQ, which processes stats for Vision4Press, and they saw that these values are not currently being provided to them by the publisher. They have notified the publisher and will work with them to provide these values.

Issue Report: Downloading an Excel JR5 from the new Cambridge Core Admin Site, I found that the YOP columns are in the wrong order: starting with "Articles in Press" on the left, then YOP 1999, YOP 2000, and so on, finishing with YOP 2016 and YOP Unknown as the last two columns on the right. YOP Pre-2000 is missing.

Usus Response: With regard to the ordering of YOP columns in JR5, while the Code of Practice does not explicitly state the order in which these columns appear, the common expectation is that they be in descending order when viewed left-to-right. Usus will contact Cambridge University Press to request that they update the order of the columns.
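For libraries that need to work with such a file before the publisher fixes it, here is a minimal pandas sketch for re-sorting the YOP columns into the conventional newest-to-oldest order; the file name, the seven header rows above the column headings, and the exact column labels are assumptions that may need adjusting for the real report.

```python
import re
import pandas as pd

# Load the JR5 worksheet; the file name and the seven header rows above the
# column headings are assumptions and may differ by publisher.
df = pd.read_excel("JR5_2016.xlsx", skiprows=7)

def yop_group(col):
    """Assign each YOP column a group and sort value so the columns read
    newest to oldest from left to right, matching the common expectation."""
    name = str(col)
    if name == "Articles in Press":
        return (0, 0)
    match = re.fullmatch(r"YOP (\d{4})", name)
    if match:
        return (1, -int(match.group(1)))      # YOP 2016, YOP 2015, ...
    if name.startswith("YOP Pre-"):
        return (2, 0)
    if name.lower() == "yop unknown":
        return (3, 0)
    return None                               # descriptive columns (Journal, ISSNs, totals)

fixed = [c for c in df.columns if yop_group(c) is None]
yop = sorted((c for c in df.columns if yop_group(c) is not None), key=yop_group)
df = df[fixed + yop]
df.to_excel("JR5_2016_reordered.xlsx", index=False)
```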

Issue Report: When manually fetching reports from Silverchair publishers (e.g., AMA, Annals of Internal Medicine, and many other small societies), the statistics are stored in the file as formulas (e.g. "=400") rather than as plain integers (400).

Usus Response: COUNTER will be contacting Silverchair and the relevant auditor about the formulas in the cells.
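Until corrected files are available, a rough openpyxl sketch like the following can coerce such formula cells back to plain integers; the file name and the location of the data range are assumptions.

```python
from openpyxl import load_workbook

# Open the report without evaluating formulas, so affected cells hold strings like "=400".
wb = load_workbook("silverchair_jr1.xlsx")
ws = wb.active

# Assumed layout: usage data starts at row 10, column D; adjust for the real report.
for row in ws.iter_rows(min_row=10, min_col=4):
    for cell in row:
        if isinstance(cell.value, str) and cell.value.startswith("="):
            literal = cell.value.lstrip("=")
            if literal.isdigit():
                cell.value = int(literal)

wb.save("silverchair_jr1_fixed.xlsx")
```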

Issue Report: While collecting data I’ve encountered a number of BR2 and BR3 XML reports from Springer that don’t validate against the SUSHI schema. The problem is that an Exception element reporting “No Usage Available for Requested Dates” is included at the end of the ReportResponse block, after Requestor, CustomerReference and ReportDefinition elements, when it should precede these other 3 elements.

Usus Response: Springer was contacted and they were able to fix the problem.
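For anyone who wants to screen harvested responses for this kind of ordering problem, a small lxml sketch follows; the file name is a placeholder, and the check simply verifies that any Exception children of ReportResponse appear before the Requestor, CustomerReference, and ReportDefinition elements.

```python
from lxml import etree

def exceptions_precede_metadata(path):
    """Return True if every Exception child of ReportResponse appears before
    the Requestor, CustomerReference and ReportDefinition elements."""
    tree = etree.parse(path)
    # Locate ReportResponse regardless of namespace prefix or SOAP wrapping.
    response = tree.xpath("//*[local-name()='ReportResponse']")[0]
    names = [etree.QName(child).localname
             for child in response if isinstance(child.tag, str)]
    last_exception = max((i for i, n in enumerate(names) if n == "Exception"),
                         default=-1)
    first_metadata = min((i for i, n in enumerate(names)
                          if n in ("Requestor", "CustomerReference", "ReportDefinition")),
                         default=len(names))
    return last_exception < first_metadata

print(exceptions_precede_metadata("springer_br2_response.xml"))
```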

Question/comment: Many vendors have developed a "search all" portal on their platform interface so that users can search across all the databases at once. As a result, search numbers are almost identical for all the databases and are not relevant to how the individual databases are being used.
Response: The intent of the COUNTER Code of Practice (though not necessarily explained clearly) is that if a user is responsible for selecting the databases, then searches against each database are counted as "Searches – Regular". However, if the system automatically chooses the databases (as is the case with a federated search or multi-database discovery service), then each database would count the search as "Searches – automated and federated". The "search all databases" option is a user action; therefore, arguably all the searched databases record it as "Searches – Regular".

Question/comment: If a user is searching from local discovery services, searches are not recorded at vendors’ sites. So some of the uses are missing.
Response: If the local discovery service is hosting the database on behalf of the database provider, then it is the responsibility of the local discovery vendor to provide the COUNTER DB1 report reflecting the use of those databases. If the user does not control which databases are searched, they would be reported as “Searches – automated and federated”. If the local discovery service blends all records into one central index such that the original database is no longer identifiable, then database-level statistics are not possible, but the library could use the PR1 report from the discovery platform and assume the total searches at the platform level apply to each of the databases covered by the discovery platform. Again, this highlights how searches are not a reliable measure of “value”.

Question/comment: Search counts are simply not relevant to how useful the database content is; they are more a metric of database searching efficiency.
Response: Agreed, in a multiple database search environment, searches are not an indication of value.

Comment: Result clicks and record views seem to be good alternatives, and I think they have addressed some of the issues. However, we found that vendors have developed different flavors of result clicks and record views. Some vendors report identical result click and record view numbers for their databases, some vendors' result click numbers are close to their record views, and some have huge differences between the two. I hope that COUNTER can help us clarify some of the questions listed below:

Question: The "COUNTER Quick Guide to Result Clicks and Record Views" document posted on the COUNTER site indicates that a link to full text from the result page is counted only as a result click and not as a record view, but some vendors define their record views as user retrievals: any time a user views a record, full text or not, whether an article, video, image, etc., a record view is recorded. Which is correct?
Response: Result Clicks are user-initiated actions from the result list (not the detailed display). Record Views count views of the detailed record (abstract view), which may or may not include full text.

Question: If users are searching on their local discovery services and clicking on the links to records from there, is the record view the only metric that will be counted at the vendor's site? Are result clicks being triggered at vendors' sites?
Response: If the local discovery system is hosting the metadata that make up a database and is able to track the source database for the result the user clicks, then it is the responsibility of the discovery vendor to provide the DB1 report reflecting the Result Clicks. If the local discovery system provides the detailed record (abstract view or even full text view) by linking the user to the database provider's platform, then the database provider would count the Record Views and Full Text Requests (which would be applied to the title concerned). To clarify, Result Clicks measure actions that take place on the result list for a given platform, so they would be counted by the provider of the user interface (e.g. local discovery or federated search) and not by the vendor that "owns" the record. If the detailed record is pulled from the vendor site in real time, then the vendor site would count the Record View. Note that COUNTER recently released the Provider-Discovery reports, which provide a standard way for discovery providers to report usage of databases and journals back to the database/journal provider. Customers are identified in such reports, allowing the database provider (or journal provider) to offer more holistic reporting of the use of their content. COUNTER has already added an optional JR1b report to allow a publisher to report on usage of their content across multiple platforms. A similar database report will be considered for the next release of the COUNTER Code of Practice.

Question: One of the main reasons we moved to result clicks/record views is that we thought the usage could be linked to a particular database even though the searches are being done from the "search all" portal on the vendor's interface. However, some vendors told us that if users are searching via their "search all" interface, result clicks and record views still cannot be grouped by database. Are there any requirements in the COUNTER standard about that?
Response: The discovery vendor should maintain the identity of the original database the result came from, even if they only provide a single central index, and use that identity to capture the Result Clicks and Record Views. In the case of A&I databases like PsycInfo, or full text databases that require a subscription, an institution would need to have a subscription to that database before detailed records from that database are displayed or even searched. Therefore, to meet the expectations of their database providers, the local discovery service would need to know which database a given result came from and thus should be able to provide the needed reporting.

Question: If users are exporting/downloading multiple records at a time to citation management tools or other destinations, how are the record views and result clicks recorded? Exporting and downloading multiple records is a very different kind of use than looking at individual records.
Response: Result Clicks reflect user actions of many kinds and are intended to reflect an expression of interest in a result. Adding a result to a folder or exporting to a citation manager would be an expression of interest – IF that action happens on the Result List. Record Views reflect the retrieval of detailed records; therefore, if the export involves the system retrieving the detailed record for the export, then technically the Record View would be counted as well.

Issue Report: I want to gather COUNTER R4 stats from ProQuest using my (standard) SUSHI client, however I can’t! ProQuest is using WS-Security mechanisms to allow for username/password authentication, which is expressly forbidden in the SUSHI standard (http://www.niso.org/apps/group_public/download.php/10253/Z39-93-2013_SUSHI.pdf, page 33): “To ensure interoperability of clients and servers, do not use WS-Security extensions or similar mechanisms to introduce username/password authentication to the SOAP or HTTP level”.

Usus Response: ProQuest was alerted to this problem and has resolved it.
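For context, SUSHI clients authenticate at the application level, passing the requestor and customer identifiers inside the request payload rather than in SOAP-level WS-Security headers. The sketch below shows roughly what such a plain request looks like from Python; the endpoint, identifiers, and SOAPAction value are placeholders, and the envelope is a simplified rendering of a SUSHI GetReport request, so the provider's WSDL should be treated as the authoritative contract.

```python
import requests

# Placeholder endpoint; real values come from the provider's SUSHI documentation.
ENDPOINT = "https://sushi.example.com/SushiService"

envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:sus="http://www.niso.org/schemas/sushi"
               xmlns:cou="http://www.niso.org/schemas/sushi/counter">
  <soap:Body>
    <cou:ReportRequest>
      <!-- Authentication lives here in the payload, not in WS-Security headers. -->
      <sus:Requestor>
        <sus:ID>my-requestor-id</sus:ID>
        <sus:Name>Example Library</sus:Name>
        <sus:Email>eresources@example.edu</sus:Email>
      </sus:Requestor>
      <sus:CustomerReference>
        <sus:ID>my-customer-id</sus:ID>
      </sus:CustomerReference>
      <sus:ReportDefinition Name="JR1" Release="4">
        <sus:Filters>
          <sus:UsageDateRange>
            <sus:Begin>2015-01-01</sus:Begin>
            <sus:End>2015-12-31</sus:End>
          </sus:UsageDateRange>
        </sus:Filters>
      </sus:ReportDefinition>
    </cou:ReportRequest>
  </soap:Body>
</soap:Envelope>"""

# The SOAPAction value varies; take it from the provider's WSDL.
response = requests.post(ENDPOINT, data=envelope.encode("utf-8"),
                         headers={"Content-Type": "text/xml; charset=utf-8",
                                  "SOAPAction": "SushiService:GetReportIn"})
print(response.status_code)
```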

Issue Report: My library doesn't have a Project Euclid package; instead we subscribe to more than a dozen journals available on the Project Euclid platform. Project Euclid started to offer COUNTER 4 reports for 2015, but their interface requires librarians to download a separate JR1 file for each subscribed journal. This setup of course requires extra time, since librarians need to download several reports from the same platform and then compile those files into one before loading them into an ERM. The vendor should provide one JR1 file per customer. Could you please check with Project Euclid? Thanks!
Usus Response:  Project Euclid has been contacted and is now in the process of bundling all of the individual journal reports into a single download.  They hope to have this deployed in the next few weeks.
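Until the bundled download is deployed, the per-journal files can be stitched together with a few lines of pandas; this sketch assumes each JR1 was saved as CSV with the standard seven header rows above the column headings, which may not match every export.

```python
import glob
import pandas as pd

# Assumed naming convention for the per-journal downloads.
frames = []
for path in sorted(glob.glob("euclid_jr1_*.csv")):
    frames.append(pd.read_csv(path, skiprows=7))

combined = pd.concat(frames, ignore_index=True)
# Drop the per-file "Total for all journals" rows so they are not double counted.
combined = combined[combined["Journal"] != "Total for all journals"]
combined.to_csv("euclid_jr1_combined.csv", index=False)
```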

Issue Report: I've noticed that several publishers whose content is hosted by the same usage processing vendor do not include zero-use titles in the JR1 and JR1a reports when these data are harvested via the SUSHI protocol. If I request the same reports manually, however, they do include zero-usage titles.
Usus Response: The COUNTER Code of Practice does require publishers to include zero use titles in the Journal Reports regardless of delivery mechanism.  (Only providers of aggregated full text databases, such as EBSCO and ProQuest, are exempt from providing zero use titles.)  In this case, the usage processing vendor has been notified of the problem and is in the process of correcting it.  They will be working closely with their publisher clients and notifying each one once the problem has been resolved.  It is important to periodically compare SUSHI reports to manually harvested COUNTER reports.  The community is encouraged to alert Usus to issues such as these.
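One simple way to run that periodic comparison is to diff the title lists from the two deliveries. The sketch below assumes both reports have been saved as CSV with the header rows above the column headings removed; the file names are placeholders.

```python
import csv

def titles(path):
    """Collect the journal titles listed in a JR1 saved as CSV (header rows removed)."""
    with open(path, newline="", encoding="utf-8-sig") as fh:
        return {row["Journal"] for row in csv.DictReader(fh)
                if row["Journal"] != "Total for all journals"}

manual = titles("jr1_manual.csv")
sushi = titles("jr1_sushi.csv")

print("Titles missing from the SUSHI-harvested report:")
for title in sorted(manual - sushi):
    print(" ", title)
```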

Issue Report: Errors have been found in JR5 reports harvested using SUSHI from IEEE, IOPscience, and RSC. These errors include empty identifier elements, publication year value errors, and issues with usage dates.

Usus Response:  All of the publishers identified with these problems have their statistics handled by the same usage processing vendor.  The vendor has been contacted, and they have agreed to make the changes.  They will need around 2 weeks to complete this work, but the updated reports should be available to the libraries in the next month’s reporting cycle.

Issue Report: I can request a JR1 report from ScienceDirect with the report version as 4, but the response generated mentions the report version as 3.  Since in our system we identify the report version based on the information provided in the report tag, it’s showing a mismatch.

Usus Response: Elsevier confirmed that there was a problem with the JR1 file in that the SUSHI part of the response indicated release 4 and the COUNTER report said release 3.  They have since fixed the problem.
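A client-side check for this kind of mismatch can compare the release echoed in the SUSHI ReportDefinition with the Version attribute on the COUNTER Report element of the same response; the sketch below is a rough illustration, with the file name as a placeholder.

```python
from lxml import etree

def release_mismatch(path):
    """Compare the Release echoed in the SUSHI ReportDefinition with the Version
    attribute on the COUNTER Report element in the same response."""
    tree = etree.parse(path)
    definition = tree.xpath("//*[local-name()='ReportDefinition']")[0]
    report = tree.xpath("//*[local-name()='Report' and @Version]")[0]
    release = definition.get("Release")
    version = report.get("Version")
    return release != version, release, version

mismatch, release, version = release_mismatch("sciencedirect_jr1.xml")
if mismatch:
    print(f"SUSHI says release {release}, COUNTER report says version {version}")
```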

Issue Report: I was looking for usage statistics for the American Journal of Psychiatry, because its use seemed low for the last few years. I downloaded the “legacy report” on their site, which went up to October 2014. This overlapped by a month with the October stats in the COUNTER report for the whole calendar year of 2014 available on their site. When I wrote to them, I got the following:  “You can retrieve your prior usage reports at http://psychiatryonline.org/action/institutionUsageReport. Please note that due to our platform migration in late October 2014, the statistics prior to that time period (October 2014 and previous) are only available as legacy reports.”  My message to other users is to be cautious when looking at stats from this publisher!

Usus Response:  It was confirmed with American Psychiatric Publishing that they migrated to their new system on 10/29/2014, so the usage for October 2014 is split between the legacy report and the current report, with the legacy report containing the bulk of the usage for the month.

Issue Report: Say a journal we subscribe to offers all of its articles as open access after 6 months. What would be the best way to determine usage of the 6 months that we’re actually paying for? Is this even possible? I was thinking about running JR1 and JR1 GOA reports and then subtracting JR1 GOA from JR1 data but I’m not sure if Gold Open Access data would be what I need in this case. Would this hypothetical journal be considered gold open access?

Usus Response:  The JR1 GOA reports are only for Gold Open Access journals and articles – essentially publications that have been open access from the time they were released.  Publications that become open access after an embargo period are currently not tracked within these reports, although this is under discussion for future Codes of Practice.

Issue Report:  Since I've begun collecting Journal Reports 5 for the period of 2014, I have noted that certain vendors use a kind of moving wall in the range of reported Years of Publication. The Code of Practice states that "vendors must provide each YOP in the current decade and in the immediately previous decade as separate columns". In my understanding, this would, for the decade of 2010 to 2019, comprise separate columns for the years back to 2000.
There are several vendors who, while supplying these columns correctly until the period of 2013, have left out the column for the Year of Publication of 2000 in their reports for the period of 2014. The reporting of separate Years of Publication begins instead with the year of 2001.
Another issue is the fact that in the mentioned reports, the accumulated column for the previous years is named "YOP Pre-2000", so I am wondering where the data for the year 2000 itself have gone. Are they accumulated in the "YOP Pre-2000" column, or are they lost altogether? Perhaps it is simply an issue of naming the columns correctly, as the reports for previous periods have columns for "YOP 2000" as well as columns for "YOP Pre-2000".
Vendors in question are Institute of Physics, Nature Publishing Group, and Royal Society of Chemistry, which are all using MPS Insight as a statistics platform.

Usus Response:  MPS Insight has been contacted and their reports for these publishers will be adjusted to include a separate column for YOP 2000, which will include 2000 data only.
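A quick way to spot this kind of moving-wall gap is to generate the list of expected columns and diff it against the report header. The sketch below assumes a 2014-period JR5 saved as CSV with the standard seven header rows and the column labels used above; adjust both for the actual file.

```python
import pandas as pd

df = pd.read_csv("jr5_2014.csv", skiprows=7)   # assumed layout

# Per the R4 Code of Practice, each YOP in the current decade and the
# immediately previous decade gets its own column (2000 onward for a 2014
# report), plus accumulated and special columns.
report_year = 2014
decade_start = (report_year // 10) * 10 - 10   # 2000 for a 2014 report
expected = {f"YOP {year}" for year in range(decade_start, report_year + 1)}
expected |= {f"YOP Pre-{decade_start}", "Articles in Press", "YOP unknown"}

missing = expected - set(df.columns)
print("Missing columns:", sorted(missing))
```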

Issue Report: Last year I noticed that the 2014 DeGruyter BR2 report was reporting extremely low usage statistics. I contacted DeGruyter and they explained that they experienced a "DDOS attack" on their servers which caused some accounts to not properly report usage data. I'm assuming this affected all DeGruyter usage reports; we only have Harvard ebooks from DeGruyter (no journals). This problem was supposedly fixed in November 2014.

Usus Response: DeGruyter was hit with a denial-of-service attack in or around November 2014 (they are unclear exactly when it started), and during this time users were unable to access content. The impact and duration of the attack varied, however, in terms of the accounts that were affected. As a result of the attack DeGruyter has moved its servers to a new provider, and they have not experienced another episode like this in the last four months. In terms of the usage reports, the attack did result in lower usage for affected accounts. In cases where this occurred, DeGruyter analyzed historical data to account for the lower usage on these accounts. Not every account was affected, with European accounts seeing these anomalies more than US and Canadian accounts. In addition, it was not just ebook reports but also journal reports. If an institution sees this type of anomaly in its usage reports from DeGruyter, it is recommended that DeGruyter be contacted directly to help assess whether the low usage during the affected time period may be attributed to the attack and how to account for the anomaly alongside an analysis of historical usage data.


Issue Report:  I believe there is a problem with some publishers double-counting use in JR1s. Some publishers link directly to HTML via a link resolver (one use), and when the same user clicks on the PDF version moments later, a second use is counted.

Usus Response:  COUNTER investigated this issue a few years back and addressed concerns raised about the inflation of full text request counts due to the "interface effect", where users are automatically presented with the HTML version and access the PDF from the HTML view. COUNTER's investigation resulted in the introduction of separate PDF and HTML counts in the COUNTER JR1 and JR1 GOA reports. Having access to the detailed HTML Requests and PDF Requests makes it possible to assess not only which formats are being used but also whether a site is forcing users through the HTML to reach the PDF. This will be evident in the reports, and librarians may therefore choose to use the PDF Requests for cost-per-use analysis.
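As an illustration, a short pandas sketch like the following can flag titles where HTML requests dwarf PDF requests, which may indicate an interface that routes users through the HTML view; the file name and column labels follow the R4 JR1 layout but should be checked against the actual report.

```python
import pandas as pd

jr1 = pd.read_csv("jr1_2015.csv", skiprows=7)   # assumed layout
jr1 = jr1[jr1["Journal"] != "Total for all journals"]

# R4 JR1 reports carry "Reporting Period HTML" and "Reporting Period PDF"
# columns alongside "Reporting Period Total".
jr1["pdf_share"] = jr1["Reporting Period PDF"] / jr1["Reporting Period Total"]

# Titles where most full-text requests are HTML may reflect the "interface effect".
suspicious = jr1[jr1["pdf_share"] < 0.25][["Journal", "Reporting Period HTML",
                                           "Reporting Period PDF"]]
print(suspicious.to_string(index=False))
```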

Issue Report:  Reports from http://www.karger.com/ are labelled "Journal Report 1 (R4)" but are missing line 3 (Institutional Identifier), and they mislabel cell A8 as "Title" instead of "Journal". Karger also does not appear to be running a SUSHI server.

Usus Response:  We have verified that others are seeing the same issues with the reports. We have contacted Karger to make sure they are aware of the problem and to get an update on correcting their COUNTER reports, as well as an update on when their SUSHI server for R4 will be operational.

Update:  Karger issued a fix on 4/13/2015 to their COUNTER reports to address the issues identified in this issue report.

Issue Report:  I’m finally trying to do stats using COUNTER 4. I read the code of practice for definitions/descriptions, got into EBSCO admin and downloaded all COUNTER4 options. I’m not seeing a way to identify successful requests of articles or books *by database.* Is this still possible with COUNTER 4?

Usus Response:

The COUNTER Code of Practice R4 only requires that full text requests be captured at the journal title level for the overall platform. There is no COUNTER report offering full text journal requests by title and database. We will pass your request on to the COUNTER Executive Committee for consideration in a future release of the COUNTER Code of Practice.

Issue Report: In a recent ProQuest DB1 report for our subscription to British Humanities Index, I noticed that the number of record views was much, much higher than the number of result clicks. For the 2014 calendar year, the number of result clicks was 116 but the number of record views was 1,760. Any idea how the number of record views could get so high compared to result clicks?

Usus Response:  In DB1 (R4) reports, result clicks are generated when users are searching a database on the platform host and then clicking on a search result. Record views are generated when users are opening a detailed record the platform hosts regardless of the source of the search, e.g. the search could have been initiated on a discovery service or federated search system.  Because of this, the activity you are seeing with British Humanities Index is accurate in that if your library is using a system such as a discovery service your users may be accessing records from BHI more times through the discovery service than going directly to BHI and generating result clicks.

Issue Report: Ebrary defines "section requests" in the BR2 version 4 report as the following usages: Pages viewed, Copies made, Pages printed, Instances of PDF downloads, and Instances of full-document downloads. This has the potential to artificially inflate the number of section requests when compared to other ebook platforms that count only chapter requests. Could you please look into this? Is a vendor "allowed" to count this many kinds of usage as BR2 section requests?

Usus Response:  In the BR2 reports in COUNTER R4, use is counted for any activity that is less than use of the entire book; entire-book use is counted in BR1. Within that, the vendor/publisher definition of "sections" in BR2 reports may vary, so long as the sections are defined and a given activity is counted only once. For example, a use can't be counted as a "chapter" use and then again as a "page" use. As Ebrary is defining "sections", they are operating according to the BR2 guidelines.