Steve Pettifer and his colleagues did not heavily promote their 2008 paper on digital library tools. So it came as a surprise when, in August 2012, Pettifer got an e-mail from the Public Library of Science (PLOS), based in San Francisco, California. A PLOS representative told him that people had viewed or downloaded the article (D. Hull et al. PLoS Comput. Biol. 4, e1000204; 2008) more than 53,000 times. It was the most-accessed review ever to be published in any of the seven PLOS journals. The paper had come out just as biologists' interest in digital publishing was building and the number of tools was exploding, says Pettifer, a computer scientist at the University of Manchester, UK. “It hit the right note at the right time,” he says.


At one time Pettifer would have listed the paper on his CV accompanied by the journal's impact factor and the article's number of citations — in this case, about 80. But when he came up for promotion this year, he realized that tracking citations was not going to tell the whole story about the paper's influence. Impact factor is a crude measure that applies only to the journal, not to specific articles, he says; citations take a long time to accumulate, and people may not cite a paper even if it influences their thinking. So he added the number of views to the CV entry. And he did not stop there.

Next to many of the papers listed, Pettifer added labels indicating scholarly and public engagement. The labels were generated by ImpactStory in Carrboro, North Carolina, one of several services that gauge research impact using a combination of metrics — in this case drawn from a wide range of data sources, including the number of times a paper has been shared on social-media websites or saved using online research tools.

When Pettifer submitted his annotated CV for the first round of promotion review, his mentor expressed confusion. He took a look and said, “What the hell are these badges doing in your CV?” recalls Pettifer. “But once I explained them, he said, 'Well, give it a go.'” Pettifer submitted his CV for the second round — and got his promotion. He does not know for sure whether the metrics helped, but he plans to use them on future grant applications. “I'm definitely a convert,” he says.

Outside the box

'Altmetrics', a term coined in 2010 by ImpactStory co-founder Jason Priem, refers to a range of measures of research impact that go beyond citations. Several altmetrics services have emerged in the past few years (see 'Four ways to score'). They produce reports that gauge impact by taking into account not just academic citations, but also digital use and sharing of data — which can include the number of times a paper has been tweeted, 'liked' on Facebook, covered by the media or blogs, downloaded, cited on Wikipedia or bookmarked online. Some services also evaluate research products such as software, data sets and slideshows by tracking the number of people who have used or viewed the product online (see Nature 500, 243–245; 2013).

Altmetrics offer researchers a way to showcase the impact of papers that have not yet gathered many citations, and to demonstrate engagement with the public. They can be accessed through journals or independent websites, and can track the impact of particular data sets or papers, or evaluate the combined influence of publications and products produced by multiple researchers in a department.

But these services must be used wisely. They are not meant for strict quantitative comparisons; nor do they always distinguish between positive and negative attention. And although scientists can include altmetrics in job and grant applications and annual reports, they must select relevant data and clearly explain the context to avoid provoking mistrust or confusion.

Some altmetrics services generate profiles that summarize the impact of a researcher's products. ImpactStory allows scientists to import lists of items such as papers and software from existing user profiles at websites such as Google Scholar, which automatically tracks a researcher's papers, or the online software-code repository GitHub. Scientists can also manually enter the digital object identifiers (DOIs) of their papers, or input their Open Researcher and Contributor ID (ORCID), a unique identifier that can be used to tag all of a researcher's work. ImpactStory then creates a profile showing how frequently each product has been viewed, saved, discussed, cited or recommended online.

Steve Pettifer: “It hit the right note at the right time.” Credit: JOHN T. LATHAM

Other services take a more article-centric approach. Altmetric in London allows users to access data on individual papers using a bookmarklet — a browser bookmark that executes JavaScript commands. (Altmetric is funded partly by Digital Science, a sister company to Nature Publishing Group.) Users install the bookmarklet in their Internet browsers; then, when they come across a paper that they are interested in, they click the bookmarklet button. A report pops up in the corner of the browser, providing altmetrics that include a score indicating how much online attention the paper has received. The score takes into account the number of people who have read or mentioned the article, as well as the relative importance of the medium and the mentioner. Newspaper coverage is weighted more heavily than tweets, and tweets by individuals more heavily than those by journals promoting their content.
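
Altmetric's actual weighting scheme is not spelled out here, but the principle can be illustrated with a short sketch. The mention categories and weights below are invented for illustration only; they are not Altmetric's real algorithm or values.

```python
# Illustrative weighted attention score in the spirit of the scheme described
# above. The categories and weights are invented for this example.
WEIGHTS = {
    "news_story": 8.0,        # newspaper coverage counts most
    "blog_post": 5.0,
    "tweet_individual": 1.0,  # a tweet from an individual...
    "tweet_journal": 0.25,    # ...counts more than a journal promoting its own content
}

def attention_score(mentions):
    """Sum weighted mention counts; `mentions` maps a category to a count."""
    return sum(WEIGHTS.get(category, 0.0) * count
               for category, count in mentions.items())

# Example: 2 news stories, 30 tweets from individuals, 5 from journal accounts.
print(attention_score({"news_story": 2, "tweet_individual": 30, "tweet_journal": 5}))
# 47.25
```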

Many journals display some altmetrics on their sites automatically; these might be generated in-house or provided by an external service. Every article published by PLOS, for example, includes an online metrics tab showing data such as views, downloads and social-media mentions. A feature called Article-Level Metrics Reports lets users search for PLOS papers by criteria such as author or keyword, and generates a summary metrics report for the set of results, including article usage by paper age and maps of authors' locations. Several journal publishers, including Nature Publishing Group in London and Cell Press in Cambridge, Massachusetts, display data from Altmetric on their sites, and John Wiley & Sons in Hoboken, New Jersey, began a trial with the metrics firm in May. HighWire Press, an electronic-publishing platform at Stanford University in Palo Alto, California, is collaborating with ImpactStory to add altmetrics to its journal websites.

Altmetrics enable scientists to see ripples generated by their research that might otherwise go unnoticed. Individual researchers can try to track buzz on their own, but data-aggregation and updating services make it much easier. These services also automate difficult tasks, such as finding all tweets that link to a particular paper; each article will have multiple URLs, so conducting such a search manually would be very time-consuming.
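
Much of that difficulty comes from a single paper circulating under many link forms: the DOI resolver link, the publisher's landing page, shortened links and so on. The sketch below shows only the final roll-up step, assuming per-URL tweet counts have already been gathered by some search; the URL variants and the helper function are hypothetical.

```python
from collections import defaultdict

# Hypothetical link forms under which one paper might circulate online.
# A real service would also expand shortened links and follow redirects.
URL_VARIANTS = {
    "10.1371/journal.pcbi.1000204": [
        "http://dx.doi.org/10.1371/journal.pcbi.1000204",  # DOI resolver link
        "http://example-publisher.org/articles/1000204",   # hypothetical landing page
        "http://short.example/abc123",                      # hypothetical shortened link
    ],
}

def tweets_per_paper(tweet_counts_by_url):
    """Roll per-URL tweet counts up into a per-paper (per-DOI) total."""
    totals = defaultdict(int)
    for doi, urls in URL_VARIANTS.items():
        for url in urls:
            totals[doi] += tweet_counts_by_url.get(url, 0)
    return dict(totals)

# Example: counts found by searching for each URL variant separately.
counts = {
    "http://dx.doi.org/10.1371/journal.pcbi.1000204": 12,
    "http://example-publisher.org/articles/1000204": 7,
    "http://short.example/abc123": 3,
}
print(tweets_per_paper(counts))  # {'10.1371/journal.pcbi.1000204': 22}
```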

The reports can even suggest potential collaborators or journals. For example, if an informatics paper is mentioned a lot by biologists, the author might consider publishing his or her next article in a biology journal to increase exposure, says Heather Piwowar, co-founder of ImpactStory.

Measures of caution

Despite the benefits, researchers and evaluators must interpret altmetrics data cautiously. Data sets might not be comprehensive: not all services detect news stories that do not give URLs for the study, for example. The popularity of social-media sites changes over time, so it is unrealistic to expect a paper published in 2008 to generate as many tweets as one published in 2013. And some disciplines, such as computational biology, are more active than others on social media, so comparisons between disciplines may be unfair.

To get the most meaningful information, users should dig into the underlying data. Although a paper's Altmetric score can suggest whether it is worth clicking through to the more detailed report, “qualitative assessment is far more important than the number”, says Euan Adie, founder of Altmetric.

To help users to interpret the data, most services put numbers in context. ImpactStory normalizes data by publication year and includes percentiles — it might, for example, note that a given paper has more readers on the online reference manager Mendeley than 97% of papers indexed that year. Altmetric shows results normalized by journal, which allows fairer comparison of papers in discipline-specific publications. And in May, PLOS began offering Relative Metrics, a service that lets users see how a paper compares to other PLOS articles in the same subject area, using tools such as graphs of article views.
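
A year-normalized percentile of the kind ImpactStory reports can be sketched as follows; the reader counts are invented and the function is only a toy illustration of the idea, not the service's actual method.

```python
# Toy year-normalized percentile: a paper is ranked only against papers
# indexed in the same year. All numbers here are invented.

def percentile(value, peers):
    """Percentage of peer values that fall below `value`."""
    if not peers:
        return 0.0
    return 100.0 * sum(1 for p in peers if p < value) / len(peers)

# Hypothetical Mendeley reader counts for papers indexed in 2008.
readers_2008 = [3, 5, 8, 12, 20, 35, 60, 110, 250, 600]

# A paper with 250 readers has more readers than 80% of its 2008 peers.
print(percentile(250, readers_2008))  # 80.0
```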

Including altmetrics in decisions on grants, hiring and tenure requires careful consideration. Gerald Rubin, executive director of the Howard Hughes Medical Institute's Janelia Farm Research Campus in Ashburn, Virginia, is sceptical of altmetrics that do not explicitly indicate quality, such as number of tweets. He adds that altmetrics suffer from one of the same flaws as citation counts: a mediocre paper in a popular field will receive more attention than a first-rate paper in a small field. And including altmetrics in a job application? “At this point, I don't think anyone would pay attention,” says Rubin, who looks at many applications.

But some people do pay attention. Scientists are permitted to use altmetrics to demonstrate social impact in reports for the Research Excellence Framework (REF), an evaluation of UK academia that influences funding, notes Graeme Rosenberg, REF manager at the Higher Education Funding Council for England in Bristol. Plum Analytics, an altmetrics company based in Dresher, Pennsylvania, and Seattle, Washington, this year completed a pilot project with the University of Pittsburgh in Pennsylvania, in which it generated altmetrics profiles for a subset of researchers that could be aggregated by department. The next step is to roll out altmetrics profiles for the entire institution, says company co-founder Andrea Michalek. Plum is also currently running projects with about ten other institutions.

Rubin is better disposed towards altmetrics that suggest a positive value judgement, such as the number of requests to use software. In that vein, Adie suggests that rather than simply reporting numbers, researchers should use altmetrics to find success stories that they can mention in their CVs or on their websites. The data might reveal that a non-governmental organization or a government department took notice of a paper, for example. Altmetric plans soon to start flagging up citations by agencies such as the World Health Organization and the Intergovernmental Panel on Climate Change, both based in Geneva, Switzerland.

Context such as percentile ranks or explanations of data sources can help evaluators to interpret altmetrics. In Pettifer's CV, he included a legend for his ImpactStory labels, listing some of the data sources, such as Mendeley, Twitter and Wikipedia. Piwowar suggests that researchers who worry that evaluators will view altmetrics negatively could start by including the data in annual performance reviews, which are lower-risk than grant or job applications.

Some think that altmetrics will soon become a normal part of a CV. It used to be that researchers who wanted to demonstrate the importance of a recently published article could only say, “Look, I really believe this is great research,” notes Mike Thelwall, an information scientist at the University of Wolverhampton, UK. Now, he adds, “you can back up your words with a little evidence”.

Table 1: Four ways to score. A quartet of services offers free metrics reports that go beyond citations.