It Must Be Measured: #Scio12 #Altmetrics

Jan 31 2012 | Published under Information & Communication

No, I am not referring to d00dly ruler tricks here. We all know that no one actually measures.

Fellow Scientopian DrugMonkey has blogged a perfect storm of a discussion on impact factor and glamour science. Click on over and read the comments (warning: your head may explode). This argument will sound familiar to most readers. Basically, everyone knows that the impact factor (IF) can be gamed by journals. IF reflects some sort of average citation rate for a journal; it says nothing about the quality of any given paper. Some people make the point that IF keeps the measurement of productivity from being solely a pub count. Others add that IF is imperfect, but it's "what we have."

Really?

At Science Online I attended a discussion of Alternative Metrics or altmetrics:

As the volume of academic literature explodes, scholars rely on filters to select the most relevant and significant sources from the rest. Unfortunately, scholarship’s three main filters for importance are failing:

  • Peer-review has served scholarship well, but is beginning to show its age. It is slow, encourages conventionality, and fails to hold reviewers accountable. Moreover, given that most papers are eventually published somewhere, peer-review fails to limit the volume of research.
  • Citation counting measures are useful, but not sufficient. Metrics like the h-index are even slower than peer-review: a work’s first citation can take years.  Citation measures are narrow;  influential work may remain uncited.  These metrics are narrow; they neglect impact outside the academy, and also ignore the context and reasons for citation.
  • The JIF, which measures journals’ average citations per article, is often incorrectly used to assess the impact of individual articles.  It’s troubling that the exact details of the JIF are a trade secret, and that  significant gaming is relatively easy.
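
To make the two traditional measures concrete before going on, here is a quick back-of-the-envelope sketch (my own illustration with made-up numbers, not part of the quoted manifesto) of how the h-index and the JIF are usually computed:

```python
# Back-of-the-envelope versions of the two citation metrics named above.
# All numbers are made up purely for illustration.

def h_index(citation_counts):
    """h = the largest n such that n papers have at least n citations each."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def journal_impact_factor(cites_to_prior_two_years, citable_items_prior_two_years):
    """JIF for year Y: citations received in Y by items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return cites_to_prior_two_years / citable_items_prior_two_years

print(h_index([12, 9, 7, 3, 1]))        # 3 -- three papers with at least 3 citations each
print(journal_impact_factor(450, 150))  # 3.0 -- a journal-level average that says nothing about any single paper
```

Note that both numbers arrive years after the work is done, which is exactly the lag the altmetrics folks are complaining about.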

I hoped that the discussion would provide a gentle introduction to the concept of altmetrics. My hopes died, and I felt adrift during the session. I have played with some of the new measures on the altmetrics site. I get what these researchers want to do; I just have not figured out how each measure fits into a bigger picture. [I do appreciate more of the discussion now.]

For a kinder, gentler introduction to the topic, I recommend a piece in the current issue of The Chronicle of Higher Education that profiles Jason Priem, a graduate student in library science at the University of North Carolina at Chapel Hill. He helped develop Total-Impact, an altmetrics site that tracks scholarly work as it is discussed across the web. He discusses the general concept of the site as well as its current limitations (hey, it's still in alpha).

The internet disrupts traditional publishing; we no longer need to fit the scientific record to the dead-tree world of volumes and issues and page numbers. This shifting paradigm is dragging metrics along, potentially crushing IF in the process.

13 responses so far

  • DrugMonkey says:

    Doesn't all this webmetrics stuff just push us towards a marketing metric? Where papers do best when institutional flacks flog them on social media, or when the lab or their buddies do the same? Pushing the role of established social media persons to the forefront, as a replacement for, rather than an improvement on, the current members of the Authoritah! class?

    This latter consideration is a big hurdle. It makes the online science enthusiasts seem *incredibly* self-serving.

    I can see the excuse-making now: "yeah, everyone knows Klout is totes a flawed measure... but it is what we have, so whaddaya gonna do?"

  • Mr. Gunn says:

    Sorry you didn't get everything you wanted out of the impact assessment session. I feel partly responsible, but it's really early days for this stuff. What you got was 20-minute intros to the different parts of the nascent assessment infrastructure. We should have had an "intro to altmetrics" session at the beginning or something.

    Figshare is working on paper disaggregation, to move scholarly communication away from the PDF-centric culture towards a more granular type of sharing. This might sound wacky, but lots of things (code, datasets, etc.) are being shoehorned into paper form just so the scholar can get credit. It would be pretty awesome if, instead of having to cite a whole paper, you could cite a specific part, so that instead of searching for authors or keywords you could do queries like "show me all the papers that criticize this specific technique used in this paper".

    Mendeley is working on making research impact assessment faster. The number of readers a paper has is updated *daily*, and work is being done now to compare citation rates to readership counts. See http://dev.mendeley.com. The project mentioned at #scio12 is the institutional Mendeley plan: a whole site signs on to Mendeley and can then get things like the aggregate number of readers of all the papers published by members of the institution or department, compare its readership rates with those of other schools in the program or of other faculty at the same institution, and see which journals are being read the most by faculty at the school. If, and this is an audacious if, readership metrics turn out to be an early indicator of research impact, the idea would be for faculty hiring committees, tenure review boards, and grant funders to start including this metric in their assessments. I'll address the gaming issue more below.
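
    [To make the roll-up described above concrete, here is a rough sketch with made-up readership records rather than the real Mendeley API:]

    ```python
    # Rough sketch of an institution-level readership roll-up.
    # The records are invented; a real version would pull them from Mendeley.
    from collections import defaultdict

    records = [  # (institution, paper DOI, reader count) -- all hypothetical
        ("Univ A", "10.1000/aaa", 120),
        ("Univ A", "10.1000/aab", 40),
        ("Univ B", "10.1000/bba", 95),
        ("Univ B", "10.1000/bbb", 15),
        ("Univ B", "10.1000/bbc", 10),
    ]

    total_readers = defaultdict(int)
    paper_count = defaultdict(int)
    for inst, doi, readers in records:
        total_readers[inst] += readers
        paper_count[inst] += 1

    # Compare institutions by total and per-paper readership.
    for inst in sorted(total_readers, key=total_readers.get, reverse=True):
        print(f"{inst}: {total_readers[inst]} readers total, "
              f"{total_readers[inst] / paper_count[inst]:.1f} per paper")
    ```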

    PLoS is working on a similar project with their article-level metrics; see http://article-level-metrics.plos.org/. They're looking at downloads, pageviews, comments, bookmarks, tweets, etc., for a given paper. Early indications are that highly tweeted papers are more likely to be highly cited. [1]
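
    [As a toy illustration of how that kind of association can be checked, a rank correlation between invented tweet and citation counts; the Eysenbach study cited in the next comment is the careful version:]

    ```python
    # Toy check of the "highly tweeted papers tend to be highly cited" idea,
    # using a Spearman rank correlation on invented counts.
    from scipy.stats import spearmanr

    tweets    = [50, 12, 3, 0, 7, 25, 1, 0]   # tweets per paper (hypothetical)
    citations = [30, 10, 4, 1, 6, 18, 2, 0]   # later citations per paper (hypothetical)

    rho, p_value = spearmanr(tweets, citations)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
    ```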

    http://total-impact.org is another project, one that takes an author-centric rather than a paper-centric approach. You tell Total-Impact all of the objects that make up your scholarly output and it will track them for you. This includes things like views of a presentation on SlideShare or YouTube and forks on GitHub, and it also supports citation of data as long as the data has an identifier from a repository such as http://datadryad.org/.
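
    [A toy example of what such an author-centric record might hold once the per-source numbers have been gathered; all identifiers and counts are invented:]

    ```python
    # Toy author-centric impact record: each output keyed by an identifier,
    # with per-source metrics already fetched (all values invented).
    outputs = {
        "doi:10.1000/example-paper":   {"type": "article", "mendeley_readers": 85, "tweets": 40},
        "doi:10.5061/dryad.example":   {"type": "dataset", "downloads": 310, "data_citations": 4},
        "github.com/example/tornado":  {"type": "code",    "forks": 12, "stars": 57},
        "slideshare.net/example/talk": {"type": "slides",  "views": 1400},
    }

    for identifier, metrics in outputs.items():
        kind = metrics.pop("type")
        summary = ", ".join(f"{key}={value}" for key, value in metrics.items())
        print(f"{kind:8s} {identifier}: {summary}")
    ```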

    So how does all this fit together? Well, consider the following scenario. Researcher A writes some code for identifying tornadoes from weather satellite readings. She deposits the code in Figshare (or the new F1000 service, or anywhere else it can get an identifier) and includes a link to the data stored in NASA's data repository in the metadata record describing the code. Researcher A's code is amazing, so Researchers B through Z come along, use the code in follow-up work, and link back to her code in that work. Total-Impact picks up those links and automatically adds them to her personal impact statement. When it's time to apply for a grant, the review board looks over her publication record, sees she's published a few papers in journals that don't tend to publish crap, sees that her publications in those journals have accumulated a good number of readers on Mendeley even though the follow-up papers haven't even come out yet, and also checks her Total-Impact statement. The board sees that the code she created using funds from her last grant has been used by 25 other scientists in the past year, and that helps them decide to approve her new grant.

    It might sound a bit complicated, but the point is that the basic infrastructure for this has now been laid down: a mechanism for accelerating research and empowering reviewers and funders to make better allocation decisions. Yes, it can be gamed, but the Impact Factor is currently being gamed, and on top of that it's a crappy metric. If people were at least trying to game a metric that has intrinsic meaning, perhaps that would work to our advantage, in that people would finally be motivated to do things like science outreach. Some people are going to game whatever metric exists, so we might as well make it one that causes useful work to be done in the process of being gamed. That said, I'd hope that these metrics simply serve as a panel of indicators, along with the traditional ones and with the good judgment of colleagues serving as a sanity check. From that perspective, it can only help, and if it finally decreases the abuse of the Impact Factor, that would almost make it worthwhile in itself.

  • Mr. Gunn says:

    Forgot the citation for the above assertion about the effect of tweets on citations.

    Eysenbach, G. Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact. J Med Internet Res 2011;13(4):e123. http://dx.doi.org/10.2196/jmir.2012

  • whizbang says:

    Now that I've dug around on the altmetrics site and played with some of the instruments, I understand the discussion that surrounded me at #scio12 better. I wish I had done that ahead of time. I'll be ready next year!

  • Bashir says:

    To play naive: many of these discussions about metrics seem to start from the assumption that what we are looking to measure is essentially the attention paid to a paper (or a researcher), whether it is based on cites, readership, or tweets. We take all of these stats and figure out who the "high profile" or influential researchers are. Of course, that seems to fall somewhat short of any idea of who the good or important researchers are. By these types of metrics, Dan Brown (the Da Vinci Code guy) might be up there with Shakespeare. So are we just saying that attention is the best proxy for quality that we can get? Is quality just too fuzzy, so we go with some sort of weighted quantity, which is essentially what these measures all seem to be?

  • Pascale says:

    So I guess the question becomes: how can we measure the relative importance of scientific findings?
    Individual article citations demonstrate long-term impact, but they are a lagging indicator. Tweets, blogs, etc. show early "general-popular" interest, but may not reflect the actual impact on Science with a capital S.
    Metrics via online citation managers, such as Mendeley or Zotero, may provide the best "middle ground" of interest plus building on the science, but they are not yet universal enough to give the entire picture.
    Ultimately, why do we need these measures anyway? I know, it's because P&T committees want to do more than count citations, but I'm not certain that anything we have come up with (including and especially journal IF) really helps.

  • Dario says:

    Pascale,

    these are good points, but saying that measures are only useful in the context of, say, tenure committees or funding agencies is misleading.

    Ask yourself: how do you discover new papers nowadays? Anything you find and decide to read is mediated by filters, which in turn use metrics to determine the relevance and authority of a research work. Even the simple decision to read a paper because it's cited as a reference in another paper rests on the assumption that citation-following is a good strategy for finding the most relevant and authoritative research. With a growing volume of scientific literature, this strategy may turn out to be suboptimal. Reading a paper because your peers recommend it is another strategy that may fail if your direct peers are unable to scout the most recent literature and identify the influential works you should be reading. Measuring impact and its context in a richer and faster way is essential to support the pace and breadth of scientific discussion, which is no longer captured by the tempo of citations.

    • Pascale says:

      How do we discover new papers today?
      I get a number of electronic updates, including set searches and electronic TOCs, but this addresses stuff I want to know about in general. When I need to know about something outside of my immediate realm of interest (for example, I have a patient with a disease I haven't seen for a while) I search internet databases and pick up articles that look relevant AND that I can access. Free gets read first, but I will pick up my credit card if something looks highly relevant and I can't get it any other way.
      When I've moved into a new area, I have tried using collections by relevant groups on Mendeley. I'm not certain if that extra Web 2.0 filter adds any extra value over just running the terms through Google Scholar.
      We really need to know what we want from these tools or metrics so we can determine how they perform. I'm not certain that we all are on the same page there.

      • Dario says:

        I do exactly the same, which means I depend on a proprietary ranking algorithm and some fairly shallow information-search strategies (does the title/abstract look relevant? are the authors familiar/reputable?) to determine what I read and what I cite. This strikes me as (part of) the problem altmetrics is trying to address. Also, despite the focus on Web 2.0, altmetrics is not limited to social media. Any source that can provide reliable and measurable data on impact is a potentially good candidate for inclusion. One day you may want to search only for papers cited in educational materials, or papers looked up by a specific population (e.g. clinicians/practitioners); you won't get that information via Google Scholar.

  • Dave says:

    A very interesting topic and something I know very little about. I think I have a fair bit of reading to do!
