With its HeLa genome agreement, the NIH embraces an expansive definition of familial consent in genetics

I wrote before about the controversy involving the release earlier this year of a genome sequence of the HeLa cell line, which was taken without consent from Henrietta Lacks as she lay dying of cervical cancer in 1950s Baltimore.

Now, the NIH has announced an agreement with Lacks’ descendants to obtain their consent for access to and use of the HeLa genome (the agreement applies only to NIH-funded research, but the hope is that others will agree to it as well).

I think the NIH handled this reasonably well. There’s no way to go back and obtain consent from Henrietta Lacks herself, so one could reasonably argue that nobody should be allowed to use HeLa cells ever or generate and use information derived from their genome. But that seems like too harsh a judgment, especially given the pride the family takes in the use of Henrietta’s cells for research.

So I think it’s entirely reasonable, in this case, to give the family the right to consent for use of these cells, and to impose whatever restrictions on the use they see fit.

However, there are some issues raised by this case and this decision that warrant further discussion.

First, exactly when, and under what conditions, should someone’s heirs be able to consent on their behalf? It sounds like there was broad consensus within the Lacks family about how to handle this. But what if there hadn’t been? Does the consent right pass down strictly to one’s legal heirs? And, perhaps more relevant to existing uses of clinical samples, many consent documents allow people donating samples to withdraw their consent in the future. Does that right also pass down to one’s heirs?

Second, and to me more importantly, is the issue I raised previously with respect to Rebecca Skloot’s op-ed on the topic. In both her piece, and in the editorial by Francis Collins and Kathy Hudson, there is mention of the need not just to make up for the lack of original consent, but to protect the genetic privacy of the Lacks family. The notion is that, because they are so publicly associated with HeLa cells, anything that is discovered about these cells will immediately be associated with members of the family. And with the decision announced today, the NIH is explicitly giving the Lacks family the right to veto uses of HeLa cells, not because Henrietta would not have consented to the use in 1951, but because they view it as an invasion of their privacy today.

This is indeed an issue, but it is a very different one than original consent. And unlike the original consent issue – which can be argued as applying narrowly to the HeLa case – the privacy issue applies to all genomic data, whether properly consented or not. Collins and Hudson talk about “de-identified” samples in their essay, ignoring the now abundant evidence that one can almost trivially deduce the donor of a clinical sample from a small amount of DNA sequence and the use of public databases of genetic information.

Thus, in the near future, any human genetic data out there will be subject to the same risk that the Lacks family now faces. We can’t set up a panel of family members for each of the tens of thousands of samples that will soon be out there. And even if we could, I don’t think we should. There is no sensible or even workable way to require familial consent for the use of someone’s genetic material.

We believe in the absolute right of individuals to make decisions about how samples obtained from them can be used. But the very nature of inheritance and genetics means that every decision they make by necessity affects other individuals – close relatives most acutely, but by no means exclusively. Figuring out how we deal with this is one of the major practical and philosophical challenges of the age of genetic information, and even though Collins and Hudson chose to punt this issue down the road in the name of comity with the Lacks family, it is an issue we are going to grapple with very soon.

And I am disturbed that the Director of the NIH has, in effect, embraced an extreme position on this issue – that families have the right to veto uses of someone else’s DNA.

Posted in genetics, HeLa | Comments closed

Let’s not get too excited about the new UC open access policy

It was announced today that the systemwide Academic Senate representing the 10 campuses of the University of California system had passed an “open access” policy.

The policy will work like this. Before assigning copyright to publishers, all UC faculty will grant the university a non-exclusive license to make the works freely available, provide the university with a copy of the work, and select a Creative Commons license under which it will be made freely available in UC’s eScholarship archive.

A lot of work went into passing this, and its backers – especially UCLA’s Chris Kelty – are to be commended for the cat-herding process required to get it through UC’s faculty governance process.

I’m already seeing lots of people celebrating this step as a great advance for open access. But color me skeptical. This policy has a major, major hole – a faculty opt-out. This is there because enough faculty wanted the right to publish their works in ways that were incompatible with the policy that the policy would not have passed without the provision.

Unfortunately, this means that the policy is completely toothless. It provides a ready means for people to make their works available – which is great. And having the default be open is great. But nobody is compelled to do it in any meaningful way – therefore it is little more than a voluntary system.

More importantly, the opt-out provides journals with a way of ensuring that works published in their journals are not subject to the policy. At UCSF and MIT and other places, many large publishers, especially in biomedicine, are requiring that authors at institutions with policies like the UC policy opt out of the system as a condition of publishing. At MIT, these publishers include AAAS, Nature, PNAS, Elsevier and many others.

We can expect more and more publishers to demand opt-outs as the number of institutions with open/public access policies grows. In the early days of such “green” open access, publishers were pretty open about allowing authors to post manuscript versions of their papers in university archives. They were open because there was no cost to them. Nobody was going to cancel a subscription because they could get a tiny fraction of the articles in a journal for free somewhere on the internet.

However, as more universities – especially big ones like UC – move towards institutional archiving policies, an increasing fraction of the papers published in subscription journals could end up in archives – which WOULD threaten their business models. So, of course (and as I and others predicted a decade ago), subscription publishers are now doing their best to prevent these articles from becoming available.

So long as the incentives in academia push people to publish in journals of high prestige, authors are going to do whatever the journal wants with respect to voluntary policies at their universities. And so, we’re really back to where we were before. Faculty can make their work freely available if they want to – and now have an extra way to do it. But if they don’t want to, they don’t have to.

The only way this is going to change is if universities create mandatory open access policies – with no opt-outs or exceptions. But this would likely require actions from university administrators who have, for decades, completely ignored this issue.

So don’t get me wrong. I’m happy the faculty senate at UC did something, and I think the eScholarship repository will likely become an important source of scholarly papers in many fields, and the use of CC licenses is great. And maybe the opt-out will be eliminated as the policy is reviewed (I doubt it). But, because of the opt-out, this is a largely symbolic gesture – a minor event in the history of open access, not the watershed event that some people are making it out to be.

Posted in open access, public access, University of California | Comments closed

Those who deny access to history are condemned repeatedly

One of the most disappointing aspects of the push for open access to scholarly works has been the role of scholarly societies – who have, with precious few exceptions, emerged as staunch defenders of the status quo.

In the sciences – where most of the open access battles have been fought – anti-OA stances from societies have been driven by the desire to protect revenue streams from society-run journals. I had always hoped that the humanities – less corrupted by money as they are – would embrace openness in ways that science has been slow to do. Ahh for the naïveté of youth.

At my own institution – UC Berkeley – efforts to pass a fairly tepid “open access” policy were thwarted by humanities scholars who felt a requirement that faculty at a public institution make their work publicly available represents some kind of assault on academic freedom. But that is nothing compared to an absurd statement released this week by the American Historical Association.

The gist of the AHA’s statement is this: they want universities that require their recently minted PhDs to make copies of their theses freely available online to grant a special exemption to historians, allowing them to embargo access to their work for up to six years.

The ostensible reason for this embargo request is to defend the ability of junior faculty to get their theses published in book form by a scholarly press – something they claim online access precludes. Here is their explanation:

By endorsing a policy that allows embargos, the AHA seeks to balance two central though at times competing ideals in our profession–on the one hand, the full and timely dissemination of new historical knowledge; and, on the other, the unfettered ability of young historians to revise their dissertations and obtain a publishing contract from a press.  We believe that the policy recommended here honors both of these ideals by withholding the dissertation from online public access, but only for a clearly stated, limited amount of time, and by encouraging other, more traditional forms of availability that would insure a hard copy of the dissertation remains accessible to scholars and all other interested parties.

They are basically arguing that, because of the tenure practices of universities, the history literature should remain imprisoned in print form – and that scholars without access to print copies should be denied timely access to this material – unless you think six years is timely.

What really galls me about this is that the AHA takes the way that academia works as a given. Yes, IF university presses refuse to publish books based on theses available online, and IF universities require such books for tenure, then young historians whose theses are made available online without an embargo are at a disadvantage. I’ve heard this from lots of young humanities scholars – and while I would dispute the extent to which it’s true, people really feel this way.

But shouldn’t the response to this sad situation by the leading organization representing academic historians – many of whom are in leadership positions at universities across the country – be to, you know, actually lead? Instead of a reactionary call for embargoes, they SHOULD have said something like this:

The way scholars in our field are evaluated is broken – so broken, in fact, that a young scholar in our field feels immense pressure to hide their work from public view for years so that they can cater to antiquated policies from our presses and our universities. The inability of our field to take full advantage of the internet as a means of dissemination should be a wakeup call for all of us in the field – and the AHA is committed to using our pull, and that of our members, to reform our presses and alter the rules for tenure at our institutions as rapidly as possible.

Shame on the AHA for being yet another scholarly society to let down the scholars they represent.

Posted in open access | Comments closed

New Preprint: Uniform scaling of temperature dependent and species specific changes in timing of Drosophila development

We posted a new preprint from the lab on arXiv and would love your comments.

This work was born of our efforts to look at the evolution of transcription factor binding in early embryos across Drosophila. When we started doing experiments comparing the three most commonly studied species, the model D. melanogaster, D. pseudoobscura and D. virilis, we quickly ran into an issue: even though these species look superficially fairly similar, and develop in roughly the same way, they don’t really like to live at the same temperature, and even when they are grown in common conditions, they develop at different rates. So, for example, in order to collect an identical sample of stages from D. melanogaster and the slower-developing D. virilis, you have to collect for different amounts of time – and we had no real idea of how this would affect the measurements we were making. And if you want to compare the tropical D. melanogaster to the cold-preferring D. pseudoobscura, you can either choose to collect at a temperature that neither prefers (21C) or grow them under different conditions, again with no clear understanding of how these differences affect our measurements.

So, a few years ago, a new postdoc in the lab (Steven Kuntz) decided to look at this question in more detail. He first developed methods to take time-lapse movies of developing embryos at carefully controlled temperatures, and then proceeded to characterize the development of 11 Drosophila species (all with fully-sequenced genomes) from different climates at eight temperatures ranging from 17.5C to 35C. He then developed a combination of manual and automated ways to identify 34 key developmental landmarks in each movie.

As was already well known, D. melanogaster development accelerates at higher temperatures, taking around 2,000 minutes at 17.5C but just over 1,000 minutes at 32.5C.

Timing of D. melanogaster development at different temperatures

We observed similar overall trends for the other species, with the other tropical species (D. simulans, D. sechellia, D. erecta, D. yakuba, D. ananassae and D. willistoni) showing similar patterns to D. melanogaster, while the temperate (D. virilis and D. mojavensis) and alpine (D. pseudoobscura and D. persimilis) species were consistently slower even when grown at identical temperatures. The tropical species all started to show effects of high temperature (lower viability and slower development) at 32.5C, while the alpine species showed even greater effects at the cooler temperature of 30C.

Effects of temperature on development time for 11 Drosophila species


There’s a lot more in the paper about both of these issues, but the thing that I find really amazing is that, despite all of this variation in developmental timing both between species and at different temperatures, the relative timing of the 34 events we measured was virtually identical in all species and conditions. Indeed, we find no statistically significant differences in the relative timing of any event between the initial cellularization of the blastoderm and hatching.

Proportional developmental time between species and at different temperatures


I find this almost perfect conservation of the relative timing of development across these diverse species and conditions stunning – and very much counter to what I expected – which was that different stages, which involve very different molecular and cellular processes, would be differentially affected by temperature, and that either selection or drift would have led to variation in relative timing between species. While there are lots of possible explanations for this phenomenon, the most straightforward is that developmental timing is controlled by some kind of master clock whose rate scales with first-order kinetics with temperature, and which is the major target of interspecies differences in developmental timing. If true, this would be quite remarkable.
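The master-clock idea above can be sketched in a few lines: if a single rate constant sets the pace of development, a change in temperature (or species) rescales every landmark time by one common factor, so event times expressed as fractions of total developmental time are invariant. A minimal illustration – the numbers are invented for the sketch, not measurements from the paper, though the ~2,000 and ~1,000 minute totals mirror the D. melanogaster figures quoted earlier:

```python
def relative_timing(event_times_min):
    """Express each developmental landmark as a fraction of total time."""
    total = event_times_min[-1]
    return [t / total for t in event_times_min]

# Hypothetical landmark times (minutes) at a cool temperature; the ~2,000
# minute total mirrors D. melanogaster at 17.5C, but the intermediate
# values are made up for illustration.
cool = [100, 400, 900, 2000]

# Under uniform scaling, a warmer temperature (or a faster species) divides
# every time by the same factor; here 2x, giving a ~1,000 minute total.
warm = [t / 2 for t in cool]

# Proportional timing is invariant under this kind of rescaling.
assert relative_timing(cool) == relative_timing(warm)
print(relative_timing(cool))   # [0.05, 0.2, 0.45, 1.0]
```

Any departure from this collapse – a stage that sped up more than the others at high temperature, say – would falsify the single-clock picture, which is why the observed invariance is so striking.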

If you’ve gotten this far, you’re obviously reasonably interested in the topic. As I’ve written before, we are now posting all of our lab’s papers on arXiv prior to submitting them to a journal, and we invite your comments and criticism, with the hope that this kind of open peer review will not only make this paper better, but will serve as a model for the way we all should be communicating our work to our colleagues and interacting with them to discuss it after it is published.

We’re going to try out PubPeer for comments on this paper. Please use this link to comment.

Posted in EisenLab, EisenLab preprints | Comments closed

A CHORUS of boos: publishers offer their “solution” to public access

As expected, a coalition of subscription-based journal publishers has responded to the White House’s mandate that federal agencies develop systems to make the research they fund available to the public by offering to implement the system themselves.

This system, which they call CHORUS (for ClearingHouse for the Open Research of the United States) would set up a site where people could search for federally funded articles, which they could then retrieve from the original publisher’s website. There is no official proposal, just a circulating set of principles along with a post at the publisher blog The Scholarly Kitchen and a few news stories (1, 2), so I’ll have to wait to comment on details. But I’ve seen enough to know that this would be a terrible, terrible idea – one I hope government agencies don’t buy in to.

The Association of American Publishers, who are behind this proposal, have been, and continue to be, the most vocal opponent of public access policies. They have been trying for years to roll back the NIH’s Public Access Policy and to defeat any and all efforts to launch new public access policies at the federal and state levels. And CHORUS does not reflect a change of heart on their part – just last month they filed a lengthy (and incredibly deceptive) brief opposing a bill in the California Assembly that would provide public access to state-funded research.

Putting the AAP in charge of implementing public access policies is thus the logical equivalent of passing a bill mandating background checks for firearm purchases and putting the NRA in charge of developing and operating the database. They would have no interest in making the system any more than minimally functional. Indeed, given that the AAP clearly thinks that public access policies are bad for their businesses, they would have a strong incentive to make their implementation of a public access policy as difficult to use and as functionless as possible in order to drive down usage and make the policies appear to be a failure.

You can already see this effect at work – the CHORUS document makes no mention of enabling, let alone encouraging, text mining of publicly funded research papers, even though the White House clearly stated that these new policies must enable text mining as well as access to published papers. Subscription publishers have an awful track record in enabling reuse of their content, and nobody should be under any illusions that CHORUS will be any different.

The main argument the CHORUS publishers are making to funding agencies is that allowing them to implement a solution will save the agencies money, since they would not have to develop and maintain a system of their own, and would not have to pay to convert author manuscripts into a common, distributable format. But this is true only if you look at costs in the narrowest possible sense.

First, there is no need for any agency to develop their own system. The federal government already has PubMed Central – a highly functional, widely used and popular system. This system already does everything CHORUS is supposed to do, and offers seamless full-text searching (something not mentioned in the CHORUS text), as well as integration with numerous other databases at the National Library of Medicine. It would not be costless to expand PMC to handle papers from other agencies, and there would be some small costs associated with handling each submitted paper. However, these costs would be trivial compared to the costs of funding the research in question, and would produce tremendous value for the public. What’s more, most of these costs would be eliminated if publishers agreed to deposit their final published version of the paper directly to PMC – something most have steadfastly refused to do.

But even if we stipulate that running their own public access systems would cost agencies some money, the idea that CHORUS is free is risible. There is a reason most subscription publishers have opposed public access policies – they are worried that, as more and more articles become freely available, their negotiating position with libraries will be weakened and they will lose subscription revenues as a consequence. Since a large fraction of these subscription revenues (on the order of 10%, or around $1 billion/year) come from the federal government through overhead payments to libraries, the federal government stands to save far, far, far more money in lower subscription expenditures than even the most gilded public access system could ever cost to develop and operate.
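The arithmetic above also implies a rough size for the overall market. A back-of-the-envelope sketch using only the figures quoted in this post (illustrative, not independent data):

```python
# If ~10% of journal subscription revenue comes from the federal government
# via overhead payments to libraries, and that federal share is ~$1B/year,
# the implied total subscription market is ~$10B/year. That pool, not some
# free-standing budget, is where any CHORUS costs folded into site-license
# charges would ultimately come from.
federal_share = 0.10       # "on the order of 10%", as quoted above
federal_payments = 1e9     # ~$1 billion/year, as quoted above

total_subscription_market = federal_payments / federal_share
print(f"~${total_subscription_market / 1e9:.0f}B/year total subscription market")
```

Seen against a ~$10B/year market, even a generously funded public access system is a rounding error compared to the potential federal savings from reduced subscription spending.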

CHORUS is clearly an effort on the part of publishers to minimize the savings that will ultimately accrue to the federal government, other funders and universities from public access policies. If CHORUS is adopted, publishers will without a doubt try to fold the costs of creating and maintaining the system into their subscription/site license charges – they routinely ask libraries to pay for all of their “value added” services. Thus not only would potential savings never materialize, the government would end up paying the costs of CHORUS indirectly.

Publishers desperately want the federal agencies covered by the White House public access policy to view CHORUS as something new and different – the long awaited “constructive” response from publishers to public access mandates. But there is nothing new here. Publishers proposed this “link out” model when PMC was launched and when the NIH Public Access policy came into effect, and it was rejected both times. Publishers hate PMC not because it is expensive, or even because it leads to a (small) drop in their ad revenue. They hate it because it works, is popular and makes most people who use it realize that we don’t really need publishers to do all the things they insist only they can do.

CHORUS is little more than window dressing on the status quo – a proposal that would not only undermine the laudable goals of the White House policy, but would invariably cost the government money. Let’s all hope this CHORUS is silenced.


Posted in AAP, open access, politics, public access, science | Comments closed

Apotheosis of cynicism and deceit from scholarly publishers

The Association of American Publishers, who lobby on behalf of most for-profit and society scholarly publishers, have long opposed moves to make the scientific literature more readily available to the public. But, as open access publishing has gained traction and funders increasingly demand free access to the work they fund, the AAP’s defense of the status quo has descended to new depths. Perhaps the most egregious example is a letter sent last week to the California Assembly opposing AB 609, which would provide the public with access to state-funded research.

Here are their points:

State Universities Could be Faced with Open Access Publishing Charges Estimated at More Than $1 Million Annually

While AB 609 does not require authors to publish in author-funded open access journals, many journal publishers charge an article publishing fee to researchers to cover the cost to the publishers for making the journal articles freely available online. These costs could be substantial and are fundamentally unknowable, but the author of AB 609 has said that they may be similar to those in the implementation of the U.S. National Institutes of Health (NIH) policy, upon which AB 609 has been modeled. In a congressional hearing on open access in 2008, the director of NIH indicated that the agency spends $100 million a year for page fees and open access charges. Therefore, one might estimate that California could spend $1.1 million each year on these charges, as California’s research budget is 1% of that of NIH ($332 million vs. $30 billion). This rough estimate is likely an underestimate, as it only accounts for publishing charges and not for infrastructure, compliance, or the variation in open access charges.

Do you follow the publishers’ argument here?  Any time an author voluntarily chooses to publish in an open access journal, even if they are under no legislative mandate or pressure to do so, the publishers want those costs to count against any legislation that seeks to improve public access. This is pure balderdash.

And note how they compute this “cost”. They cite a quote from former NIH Director Elias Zerhouni, who estimated that in 2008 the NIH spent $100 million on page fees and open access charges. But Zerhouni said this in 2008, as the NIH Public Access Policy was being introduced – thus these costs were not in any way the result of the policy – they arose from authors choosing on their own how to publish their work. And that $100 million includes page fees – charges levied by subscription publishers on authors in addition to the subscription fees they charge libraries for access to their content. I know the open access industry very well, and revenues in 2008 were nowhere near $100m for the whole industry, let alone from NIH authors. I’d bet that, at most, total revenue was $20m, with at most $10m from the NIH (and I’m sure this is an overestimate). So the vast majority of the charges they cite were actually payments of page charges to AAP publishers!

This is a completely preposterous and deceitful argument – one they undoubtedly know is wrong in both logic and detail – and demonstrates that they are willing to outright lie to achieve their legislative aims.
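For concreteness, both the publishers’ estimate and the rebuttal reduce to simple proportions. A sketch using only the figures quoted above (the $10m open-access-only figure is the post’s stated upper bound, not a published statistic):

```python
# The AAP's method: scale the NIH's quoted $100M/year in page fees and open
# access charges by the ratio of California's research budget to NIH's.
nih_oa_and_page_fees = 100e6        # Zerhouni's 2008 figure, page fees included
ca_budget, nih_budget = 332e6, 30e9  # $332 million vs. $30 billion, as quoted

aap_estimate = nih_oa_and_page_fees * (ca_budget / nih_budget)
print(f"AAP estimate: ${aap_estimate / 1e6:.1f}M per year")   # ~$1.1M

# The rebuttal: if genuinely open access charges from NIH authors were at
# most ~$10m, the same scaling yields roughly a tenth of the AAP's number,
# and even that reflects voluntary author choices, not costs the bill imposes.
oa_only = 10e6
print(f"OA-only estimate: ${oa_only * ca_budget / nih_budget / 1e6:.2f}M per year")
```

In other words, even granting the publishers their own scaling method, most of the claimed "$1.1 million annually" is page charges paid to AAP members’ subscription journals, not open access fees.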

Savings to State Universities from Cancelled Journal Subscriptions Are Unlikely

There are no countervailing savings from the policies in AB 609 to offset the significant costs entailed. State universities would still need to maintain a large portion of their budgets for journal subscriptions, as students and researchers would continue to need to access research articles that are written by researchers from outside of California and not subject to the bill’s provisions. Where some smaller journals may be cancelled or go out of business, and others may change to an author-pays open access business model, there will be many that continue as subscription journals. In fact, some analysts have suggested that costs for subscriptions may actually increase, as publishers will still need to recoup their investments in publication from a smaller subscription base.

CA AB 609 Will Undermine Investments in the Peer Review Process that Ensures the Quality and Integrity of Scientific Research, Potentially Requiring California to Make Those Investments Itself

The peer review process ensures that research articles are rigorously reviewed by experts in specialized fields before they are published – in effect, the “checks and balances” of good science. Publishers invest in supporting the peer review process that vets the validity and significance of authors’ research findings by identifying appropriate reviewers, maintaining content management systems, providing enhanced digital coding and graphic design, disseminating the articles, enhancing the discoverability of article content and preserving the scholarly record. AB 609 would reduce publishers’ ability to continue those investments, and potentially transfer those costs to the California research budget.

So let’s put these two things together. The bill will not save California any money because libraries will not cancel any subscriptions, but it will undermine publishers’ ability to carry out peer review because they will lose revenue from canceled subscriptions. Huh? They cannot have it both ways. Either publisher revenues will drop OR California will save no money. These cannot both be true at the same time. Even if you buy their argument that the cancellation of subscriptions will undermine peer review, for this to happen subscriptions would have to be cut, which would save California money.

I also love this ridiculous line:

AB 609 would reduce publishers’ ability to continue those investments, and potentially transfer those costs to the California research budget.

Note the complete logical fallacy contained in this one sentence. They are arguing that they need subscription revenue from California in order to invest in peer review, but if California cuts off these subscriptions then California is going to have to cover these costs themselves. So let me see if I get this. California is already paying for something, but if they stop paying for it, they’re going to have to pay for it themselves. My brain is going to explode.

There is no explanation for this kind of lunacy other than that the publishers think they can kill this bill and others like it by making legislators think it will cost them money if it passes. But since this is manifestly false, they have to go through Olympic-level logical gymnastics in order to claim it is true. The publishers are lying, and they are clearly hoping that by lying in such a confusing way, legislators – few of whom are familiar with the intricacies of scientific publishing – will believe what they’re being told.

CA AB 609 Will Negatively Impact California Jobs

California ranks second in the country for periodical and journal publishing jobs, employing approximately 17,000 people with a payroll of more than $250 million. By requiring surrender of their value-added, peer reviewed scientific journal articles within 12 months of publication, AB 609 will erode the financial sustainability of not-for-profit and commercial publishers, ultimately putting jobs at risk. Government mandates that make journal articles available free will likely have the same effect on the publishing industry as experienced by many newspapers when they chose to give their content away for free. Newspapers facing bankruptcy had to start charging for online access, as it is unlikely that someone will subscribe to a newspaper (or journal) when they can obtain the articles for free online.

What a crock of shit. First of all, in order to make it sound like California jobs are at risk, the publishers lump journal publishing together with periodical publishing. I would hasten to bet that virtually all of the 17,000 jobs they cite are in the periodicals industry, and have absolutely nothing to do with scholarly publishing. In fact, there is relatively little activity in scholarly publishing in California – most journals are based in Boston, NY or Washington. And I suspect the biggest employer in the scholarly publishing industry is PLOS – who have >100 people working full time in their San Francisco office, as well as a larger pool of California-based freelancers and other contractors. Plus California is a hotbed for growth in open access publishing – including hot new startups like PeerJ.

And it is equally cynical to use the analogy of newspapers for the effect this bill would have on scholarly publishers. The AAP knows full well that unlike with newspapers, there is a perfectly viable alternative business model – open access – which PLOS, BMC and others have proven is both viable and profitable. They know that if subscriptions go away, scholarly publishing will not go away. But the obscene profits made by the AAP’s members will. And that is something they are willing to lie through their teeth to achieve.

AB 609 Is Unnecessary Because Publishers Are Devoted to Providing Access to Research and Invest in the Dissemination of Research in a Variety of Ways

Publishers provide access to published research articles through a variety of methods, including subscriptions, article rental and free-to-reader “open access” articles that are subsidized by author fees or sponsorships. Publishers have also voluntarily created programs that provide access to research literature for communities that have been previously underserved through outreach programs, such as patientINFORM, the Emergency Access Initiative and Research4Life, as well as programs for public libraries, journalists and high schools. Publishers have also worked with research funders, including government agencies and private foundations, for collaborative solutions to advance access to articles that report or analyze funded research. These collaborative, flexible partnerships are the right way to advance access while ensuring the sustainability of a well-functioning scholarly system. AB 609 takes us in an opposite direction and would contribute to fragmentation, duplication and dilution of efforts to build an infrastructure that is interoperable and efficient.

Yeah, that’s right. The AAP’s members are devoted to providing access to research. They are so devoted to it that they spent the first two pages of this letter arguing that providing access to the public would destroy their industry and take thousands of California jobs with them. The only reason that AAP members have done anything to make the literature available to anyone is that they know that their practices are deeply unpopular with the public, and so they create bogus access initiatives that they think will make them look like they’re trying. But this letter proves otherwise.

The only thing the AAP is devoted to is preserving the status quo – and lying to achieve their goal.

Posted in open access, politics, publishing, science, science and politics | Comments closed

WTF? The University of California sides with publishers against the public

The University of California system spends nearly $40 million every year to buy access to academic journals, even though many of the articles are written, reviewed, and edited by UC professors. So you’d think the cash-strapped UC system would leap to back any effort to undermine the absurd science publishing system.

You’d think. But you’d be wrong.

Assemblymember Brian Nestande (R-Palm Desert) introduced a bill – The California Taxpayer Access to Publicly Funded Research Act (AB 609) – that would require recipients of state-funded research grants to make copies of their work freely available through the California State Library within six months of their initial publication.

Although I think that the six month embargo is unnecessary – there’s no reason not to make publicly funded works immediately freely available – I sent in a letter supporting the bill, as it establishes the state’s interest in ensuring public access to taxpayer funded research.

Hearings into the bill were scheduled for last week, but were delayed so that the bill could be modified in order to earn the support of the University of California – the flagship higher education system in the state, and the host of millions of dollars in state-funded research.

When I first heard this I was excited. “Finally,” I thought, “UC is stepping up to the plate and taking a strong stance in support of open access.” Then I read the letter UC had sent.

Adrian Diaz, the University of California’s Legislative Director, wrote that UC was “supportive of the legislation’s intent” but would only support it if the embargo period were extended to one year, and if its own grant programs were exempted from the bill’s requirements.

I was dumbfounded.

Here is Diaz’s rationale for extending the embargo:

The University recommends that the bill’s six month publication embargo period be amended to conform to federal public access policies. The National Institutes of Health (NIH) Public Access Policy and the recent public access policy direction to federal agencies from the Office of Science and Technology Policy (OSTP) both permit a twelve month embargo period for  published manuscripts. We believe that consistency between the different public access policies to which our researchers must comply will help avoid confusion and promote compliance with the law. A twelve month embargo period will also allow publishers, including small publishers and scholarly societies, to meet their needs for revenue while ensuring long-term public access to published research. UC believes that a twelve month embargo period will facilitate publication in leading scholarly journals, which may reject manuscripts for which the permissible embargo is only six months.

This is nothing short of insane.

When the White House issued its “public access” policy a few months ago, in which they directed Federal agencies to make works they fund available to the public within 12 months, I argued that open access supporters should not celebrate because this was going to establish a year long delay as the law of the land. And here is the first evidence that I was right.

But it is even more troubling that a university whose libraries are facing budget cuts every year while they try to keep up with the ever-increasing cost of journal subscriptions would cite publishers’ need for revenue as their guiding principle when judging policies related to scholarly publishing.

How can Diaz DEFEND this system?? A system in which universities fork over billions of dollars of public money every year in order to buy back access to papers researchers gave to publishers for free? A system that is bankrupting our libraries? A system that denies people access to research their tax dollars paid for?

What is wrong with the University? Is it so married to the status quo that it can not see that it is being immeasurably harmed by it? Is it so out of touch with its public mission that it reflexively sides with the establishment even when it means unambiguously thwarting a public good?

For decades universities have sat idly by doing nothing while the serials crisis loomed. They have been silent as immense change has come to scholarly publishing. And now, when they finally speak up, this is what they say?

THIS is why we can’t have nice things.

——————————————–

I sent the following letter to Mr. Diaz and other UC officials:

Adrian Diaz
Legislative Director
Office of State and Governmental Relations
1130 K Street, Suite 340
Sacramento, CA 95814

Dear Mr. Diaz,

I am writing in regards to your letter of April 12th sent to the Assembly Accountability and Administrative Review Committee regarding AB 609, The California Taxpayer Access to Publicly Funded Research Act.

Your letter expresses support for the legislation’s intent, but conditions UC support for the bill on a lengthening of the embargo period from six months to one year. I urge you to reconsider this position.

You write that a longer delay is necessary to “allow publishers to meet their needs for revenue”, yet this is true only for publishers that use a subscription-based business model that is outdated and no longer serves the interests of the research community or the public that funds it.

Journals that fund their operations through subscriptions have no choice but to restrict access to the content to subscribers. Thus the business model is fundamentally incompatible with what should be the goal of public research funders and public institutions of higher learning: to make the results of taxpayer funded research freely and immediately available to the public.

Fortunately, there is an alternative.

In 2001 I co-founded the Public Library of Science (PLOS), a San Francisco based non-profit publisher of scientific and medical journals that has pioneered “open access” – a business model in which the costs of publishing are covered by research funders, but the finished product is immediately freely available. PLOS is a thriving company with a diverse portfolio in biology and medicine, including the world’s largest biomedical research journal, PLOS ONE, which will publish in excess of 25,000 articles in 2013.

PLOS’s success has led to an explosion of open access publishers, including several California startups, as well as new imprints from commercial publishers and scientific societies. And a few months ago the three largest private biomedical research funders in the world – the Howard Hughes Medical Institute in the US, the Wellcome Trust in the UK and the Max Planck Society in Germany – collaborated to launch a high-profile open access journal called eLife.

In calling for the embargo period in AB 609 to be extended, the University of California is taking the position that subscription based publishing is in need of protection, even though there is a clear, California based alternative that would achieve the public access to taxpayer funded research you say you support.

Subscription based publishers – both commercial and non-profit – have long been thorns in the side of the UC library system, demanding ever increasing and unjustifiable fees – last year it was close to $40m – to provide faculty and students with access to publications that should and could have been made freely available. I urge you to speak with cash strapped librarians at any of the UC campuses – who every year are forced to cut subscriptions to important journals they are no longer able to afford – and ask them whether subscription based publishers should be viewed as allies of the University of California in need of legislative protection.

The people of the state of California have every right to immediate free access to the results of taxpayer funded research, and the University of California should be urging the legislature to strengthen the public access provisions in AB 609.

I hope you will reconsider your position on this matter. I would be happy to discuss this issue with you or any of your staff.

Michael B. Eisen, Ph.D.
Associate Professor of Genetics, Genomics and Development
Investigator, Howard Hughes Medical Institute
Department of Molecular and Cell Biology
University of California, Berkeley

Posted in open access, politics | Comments closed

Door-to-door subscription scams: the dark side of The New York Times

An article appeared on the front page of the Sunday New York Times purporting to expose a “parallel world of pseudo-academia, complete with prestigiously titled conferences and journals that sponsor them”.

The story describes the experience of some unnamed scientists who accepted an email invitation to a conference, which then charged them for participating, and of some other scientists who submitted papers to a journal they had never heard of, based on an email solicitation, and were later charged hefty fees for doing so.

Somehow, in the mind of author Gina Kolata, this is all PLoS’s fault; she quotes someone who calls this phenomenon the “dark side of open access”.

Here is her logic:

The number of these journals and conferences has exploded in recent years as scientific publishing has shifted from a traditional business model for professional societies and organizations built almost entirely on subscription revenues to open access, which relies on authors or their backers to pay for the publication of papers online, where anyone can read them.

Open access got its start about a decade ago and quickly won widespread acclaim with the advent of well-regarded, peer-reviewed journals like those published by the Public Library of Science, known as PLoS. Such articles were listed in databases like PubMed, which is maintained by the National Library of Medicine, and selected for their quality.

But some researchers are now raising the alarm about what they see as the proliferation of online journals that will print seemingly anything for a fee. They warn that nonexperts doing online research will have trouble distinguishing credible research from junk. “Most people don’t know the journal universe,” Dr. Goodman said. “They will not know from a journal’s title if it is for real or not.”

There’s so much that is wrong with this I don’t know where to start.

First, this IS a real phenomenon. I get several emails every day from some dubious conference inviting me to speak or some sketchy journal asking me to be on their editorial board or to submit an article. However, these solicitations are so obviously not legit that I can’t believe anyone falls for them. To suggest this is some kind of dangerous trend based on a few anecdotes is ridiculous.

And yes, a lot of these suspect journals charge authors for publishing their works, just like open access journals like PLoS do. But suggesting, as the article does, that scam conferences/journals exist because of the rise of open access publishing is ridiculous. It’s the logical equivalent of blaming newspapers like the NYT for people who go door-to-door selling fake magazine subscriptions.

Long before the Internet, publishers discovered that launching new journals was like printing money – something Elsevier specialized in for decades, launching hundreds of new journals with hastily assembled editorial boards and then turning around and demanding that libraries subscribe to these journals as part of their “Big Deal” bundles of journals. These journals succeeded because there are always researchers looking for a place to put their papers, and many of these new journals greased the wheels by having fairly lax standards for publication.

The same is true for conferences. For as long as I can remember I’ve been receiving solicitations to attend and/or speak at conferences organized by for-profit firms like Cambridge Health Tech that seem to cobble together sets of speakers from whomever they could get to accept – taking advantage of scientists’ desire to put “invited speaker” on their CVs – and then charging scientists, often from industry where travel budgets are bigger, to attend. I am sure some of these meetings are useful to some people (I’ve never been to meetings like this; some people tell me they’re basically junkets with little scientific merit, while others say they are very useful) – but the idea that profiteering on people’s desire for prestige in science is something that came onto the scene with open access publishing is patently absurd.

The real explanation for the things described in the article is that it’s insanely easy to create conferences and journals and to send out blasts of emails to thousands of scientists hoping a few will take the bait. It’s science’s version of the Nigerian banking scams – something far more deserving of laughter than hand-wringing on the front page of the NYT.

But if Gina Kolata and the NYT are really concerned about scams in science publishing, they should look into the $10 BILLION of largely public money that subscription publishers take in every year in return for giving the scientific community access to the 90% of papers that are not published in open access journals – papers that scientists gave to the journals for free!  This ongoing insanity not only fleeces huge piles of cash from government and university coffers, it denies the vast majority of the planet’s population access to the latest discoveries of our scientists. And if the price we pay for ending this insanity is a few gullible scientists falling for open access spam, it’s worth it a million times over.

Posted in open access | Comments closed

Toxoplasma, Cat Piss and Mouse Brains: my lab’s first paper on microbial manipulation of animal behavior

All animals live in a microbe rich environment, with immense numbers of bacteria, archaea, fungi and other eukaryotic microbes living in, on and around them. For some of these microbes, the association is transitory and unimportant, but many make animals their permanent home, or interact with them in ways that are vital for their survival. Many members of an animal’s “microbiome” are affected by, and often become dependent on, aspects of the animal’s behavior. And, as microbes will do, some – and we believe many – of these microbes have evolved specific ways to manipulate the behavior of their animal neighbors to their advantage.

My lab has begun to study several such systems, seeking to discover the molecular mechanisms that underlie these fascinating microbial adaptations – none of the several dozen cases in which microbial manipulation of animal behavior has been documented are understood in molecular detail.

One of these systems involves the eukaryotic parasite Toxoplasma gondii, which reproduces clonally in most (if not all) warm blooded animals, but – for unknown reasons – only reproduces sexually in the digestive system of cats. Thus, in order to complete the Toxo lifecycle, an infected animal has to be eaten by a cat. This creates a conflict of interest between Toxo, who wants its host to be eaten by a cat, and the host, who would rather NOT be eaten by a cat. Indeed this “I don’t want to be eaten by a cat” effect is so strong that many animals have evolved an innate fear of all things cat – especially their smells.

For example, if you take a laboratory mouse and put him (for a variety of reasons we usually do these experiments with males) in a box with a bowl of water, he will largely ignore it. Swap out the water and put in something that the mouse has no reason to fear – like rabbit urine – and he still more or less ignores it. Swap that out and put in cat urine, and it’s a whole different ball game – the mouse spends most of its time on the other side of the cage.
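For readers curious how an assay like this gets turned into a number, here is a minimal, purely illustrative sketch (the function, coordinates and cage dimensions are hypothetical, not taken from our paper): the simplest summary statistic is the fraction of time the mouse spends in the half of the cage away from the odor source.

```python
def aversion_index(positions, cage_length=40.0):
    """Fraction of position samples spent in the half of the cage
    far from the odor bowl (placed at x = 0).

    positions: x-coordinates (cm) of the mouse sampled over the trial.
    Returns ~0.5 for a mouse roaming evenly (no aversion), approaching
    1.0 for a mouse avoiding the bowl's side of the cage.
    """
    far = sum(1 for x in positions if x > cage_length / 2)
    return far / len(positions)

# Illustrative (made-up) position samples:
rabbit_trial = [5, 12, 25, 33, 18, 22, 8, 35]    # roughly uniform use of the cage
cat_trial    = [30, 34, 38, 36, 29, 33, 37, 35]  # clustered far from the bowl

print(aversion_index(rabbit_trial))  # → 0.5
print(aversion_index(cat_trial))     # → 1.0
```

Real analyses would of course use continuous tracking data and proper statistics, but the logic – compare occupancy near versus far from the stimulus across odors and infection status – is the same.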

Amazingly, it seems that rodents infected with Toxo lose this innate fear of cats – possibly as a result of some property Toxo has evolved to increase the likelihood that it will end up in a cat’s tummy. Several papers have come out on the topic in recent years (from Robert Sapolsky’s lab at Stanford as well as others), but the molecular mechanism is unknown.

A graduate student – Wendy Ingram – became obsessed with this phenomenon and is pursuing it as a joint project between my lab and that of Ellen Robey (an immunologist who studies the host response to Toxo infection and the ways the parasite evades immune surveillance). Wendy has begun a bunch of experiments to examine this phenomenon – she is interested, in particular, in the role the immune system might play in mediating this response. Her first wave of experiments is done, and we have posted a preprint of a paper describing them on the arxiv.

Wendy first showed that the behavioral effect is robust, and is general across Toxo (previous experiments had used only one of the three major North American variants of Toxo – Wendy showed the same effect in the other two subtypes). But more interestingly, Wendy found that the effect was strong and persists for months in an attenuated Toxo strain that – unlike the other strains we and others have examined – is not detectable in the brains of infected animals after a few weeks. This would seem to refute – or at least make less likely – models in which the behavioral effect is the result of direct physical action of parasites on specific parts of the brain. It’s just a start in trying to dissect a complicated phenomenon, but Wendy has a whole slew of followup experiments under way or in planning that should shed more light on what aspects of the innate fear response are being overridden and what, if any, role the immune system is playing.

As always, we welcome your thoughts and comments on the paper, released here as part of our commitment to make preprints of all of our lab’s papers available as soon as (if not before) we’re ready to submit them to a journal, and to make them available here for open peer review.


Posted in EisenLab preprints, microbial manipulation of animal behavior, My lab | Comments closed

The Past, Present and Future of Scholarly Publishing

I gave a talk last night at the Commonwealth Club in San Francisco about science publishing and PLoS. There will be an audio link soon, but, for the first time in my life, I actually gave the talk (largely) from prepared remarks, so I thought I’d post it here.

An audio recording of the talk with Q&A is available here.

——

On January 6, 2011, 24 year old hacker and activist Aaron Swartz was arrested by police at the Massachusetts Institute of Technology for downloading several million articles from an online archive of research journals called JSTOR.

After Swartz committed suicide earlier this year in the face of legal troubles arising from this incident, questions were raised about why MIT, whose access to JSTOR he exploited, chose to pursue charges, and what motivated the US Department of Justice to demand jail time for his transgression.

But the question that should have been asked is why downloading scholarly research articles was a crime in the first place. Why, twenty years after the birth of the modern Internet, is it a felony to download works that academics chose to share with the world?

The Internet, after all, was invented so that scientists could communicate their research results with each other. But while you can now get immediate, free access to 675 million videos of cats (I checked this number today), the scholarly literature – one of the greatest public works projects of all time – remains locked behind expensive pay walls.

Every year universities, governments and other organizations spend in excess of $10 billion dollars to buy back access to papers their researchers gave to journals for free, while most teachers, students, health care providers and members of the public are left out in the cold.

Even worse, the stranglehold existing journals have on academic publishing has stifled efforts to improve the ways scholars communicate with each other and the public. In an era when anyone can share anything with the entire world at the click of a button, the fact that it takes a typical paper nine months to be published should be a scandal. These delays matter – they slow down progress and in many cases literally cost lives.

Tonight, I will describe how we got to this ridiculous place. How twenty years of avarice from publishers, conservatism from researchers, fecklessness from universities and funders, and a basic lack of common sense from everyone has made the research community and public miss the manifest opportunities created by the Internet to transform how scholars communicate their ideas and discoveries.

I will also talk about what some of us have been doing to liberate the scholarly literature – where we have succeeded and where there is more work to be done. And finally, with these efforts gaining traction, I will describe where we are going next.

While I talk, I want you to keep in mind that this is about more than just academic publications. This is about the future of the Internet and what we are willing to do, as individuals and societies, to ensure that information that should be free IS free. If we can’t figure out how to make scientific and scholarly works – most of which were funded by taxpayers and published by authors with no expectation of being paid – freely available, we will struggle to do it in cases where the conditions for free access are less ripe.

One last bit of introduction. I am a scientist, and so, for the rest of this talk, I am going to focus on the scientific literature. But everything I will say holds equally true for other areas of scholarship.

OK.

Most people date the birth of the modern scientific journal to the middle of the 17th century, when the Royal Society in England took advantage of the growing printing industry to begin publishing proceedings of their meetings for the benefit of members unable to attend, as well as for posterity.

But scholarly journals as we know them were really a product of the 19th century, when growing activity and public interest in science led to the creation of most of the big titles we know about today: Science, Nature, The New England Journal of Medicine, The Journal of the American Medical Association and The Lancet published their first editions in the 1800’s.

They had noble missions. For example, the preface to the first edition of Science in July 1880 stated that its goal was to  “afford scientific workers in the United States the opportunity of promptly recording the fruits of their researches, and facilities for communication between one another and the world”.

Like their predecessor, these journals were enabled by the technologies of the industrial revolution – steam powered rotary printing presses and efficient rail-based mail service. But they were also severely limited by them. Printing and shipping articles around the country and the world was expensive, and because of this, two key features of modern journals were established.

First, journals limited what they printed, choosing for publication only those works deemed to be of the greatest interest to their target audience. And second, they sold subscriptions – sending copies only to those who had paid. While intrinsically restricting, this business arrangement made sense. Every printed copy of a journal incurred a cost to the publisher, and charging readers meant revenues scaled with costs.

As science grew, so too did science publishing, with increasingly specific journals emerging to cater to new disciplines. By 1990 there were around 5,000 scientific journals in circulation, all of them printed and shipped to subscribers. And the costs were skyrocketing. If you were lucky enough to be at a major research university, you could find most of these journals in the library. But most scientists had to make do with a small subset – whatever their library could afford. And the public was all but completely shut out.

Then along came the Internet.

Scientific journals, serving a computer savvy audience with access to fast Internet connections through universities, were amongst the first commercial ventures to take advantage of this new technology. Within a few years – from 1995 to 1998 – virtually all major publishers put versions of their printed journals online.

But in doing so they made a crucial and fateful choice. Rather than adapting their business model to the new medium, they stuck with the same subscription-based system that they used for their print journals. And why not – so long as scientists were still giving them papers, and universities were buying them back, it was a great business. An even better one given that they no longer had to pay for printing and shipping.

But with this major shift in the means of dissemination, what was once a common sense way for publishers to provide a valuable service while dealing with the limitations of available technology became an irrational impediment to achieving this very goal.

To understand just how crazy this system is, you need to understand a bit more about how scientific journals work and what the life cycle of a scientific idea looks like.

Take your typical scientist at my home institution – the University of California Berkeley. She draws a salary from the state of California, and works in a building funded by the state. When she has a new idea, she goes out and raises money to buy equipment and supplies and to pay the salaries of the students and staff who will actually do the work. In all likelihood this money will come from the US government – through agencies like the NIH or NSF. And if not from them, from a public minded non-profit or foundation like the Howard Hughes Medical Institute that funds my lab. This scientist and her students then spend a great deal of time – usually years – pursuing the idea, until they finally have a result they want to share with their peers.

So they sit down and write a paper describing why they were interested in the question, what they did, how they did it, what they found, and what they think it means.

And then they hopefully submit it to one of the 10,000 journals currently in operation – choosing based on scope and importance. With few exceptions, these journals work the same way. The paper is assigned to an editor – sometimes a salaried professional, but usually a practicing scientist volunteering their time. They read the paper and decide who in the field is in the best position to evaluate the authors’ methods, data and conclusions. They send the paper to these scientists – who again are volunteering their time as a service to the community – who read it and render their opinion on the paper’s technical merits and suitability to the journal in question. The editor looks at all these reviews and decides whether to accept, modify or reject the work. If the paper is accepted, the journal takes the manuscript, converts it into a publishable form, and posts it on the web. If the paper is not accepted, the scientists either go back and do some more work and rewrite the paper, or they send it to another journal, triggering a complete reprise of the entire process.

I want you to note just how little the journal actually does here.

They didn’t come up with the idea. They didn’t provide the grant. They didn’t do the research. They didn’t write the paper. They didn’t review it. All they did was provide the infrastructure for peer review, oversee the process, and prepare the paper for publication. This is a tangible, albeit minor, contribution, that pales in comparison to the labors of the scientists involved and the support from the funders and sponsors of the research.

And yet, for this modest at best role in producing the finished work, publishers are rewarded with ownership of – in the form of copyright – and complete control over the finished, published work, which they turn around and lease back to the same institutions and agencies that sponsored the research in the first place. Thus not only has the scientific community provided all the meaningful intellectual effort and labor to the endeavor, they’re also fully funding the process.

Universities are, in essence, giving an incredibly valuable product  – the end result of an investment of more than a hundred billion dollars of public funds every year – to publishers for free, and then they are paying them an additional ten billion dollars a year to lock these papers away where almost nobody can access them.

It would be funny if it weren’t so tragically insane.

To appreciate just how bizarre this arrangement is, I like the following metaphor. Imagine you are an obstetrician setting up a new practice. Your colleagues all make their money by charging parents a fee for each baby they deliver. It’s a good living. But you have a better idea. In exchange for YOUR services you will demand that parents give every baby you deliver over to you for adoption, in return for which you agree to lease these babies back to their parents provided they pay your annual subscription fee.

Of course no sane parent would agree to these terms. But the scientific community has.

And the consequences are severe.

Even though the entire scientific and medical literature is, in principle, available at the click of a mouse to anyone with an Internet connection – very few people have access to the entirety of this information.

This is most obviously a problem for people facing important medical decisions who have no access to the most up-to-date research on their conditions – research their tax dollars paid for. In a world where patients are increasingly involved in health care decisions, and where all sorts of sketchy medical information is available online, it is criminal that they do not have access to high quality research on whatever ails them and potential ways to treat it.

Astonishingly, many physicians and health care providers also lack access to basic medical research. Journal subscriptions in medicine are very expensive, and most doctors have access to only a handful of journals in their specialty.

But this lack of access is not just important in the doctor’s office. Scores of talented scientists across the world are blind to the latest advances that could affect their research. And in this country students and teachers at high schools and small colleges are denied access to the latest work in the fields they are studying – driving them to learn from textbooks or Wikipedia rather than the primary research literature. Technology startups often cannot afford access to the basic research they are trying to translate into useful products.

And interested members of the public – like many of you – find it difficult to engage with scientific research. Is it any wonder that such a large fraction of the population rejects basic scientific findings when the scientific community thumbs its collective nose at them by making it impossible for them to read about what we’re doing with all of their money? Many in the publishing industry dismiss the idea that the public even wants to read scientific papers, pointing to their often highly technical language. But a major reason these papers are so inscrutable is that their authors conceive of their audience very narrowly – basically scholars in their field. And if you have no expectation that the public will read your work, you do not write it to be accessible to the public.

But even if you have no interest in ever reading a scientific paper, you should care deeply about this issue. Because in addition to paywalls, the balkanization of the scientific literature into hundreds of publisher fiefdoms stops researchers from developing new ways to organize, extract information from, and improve the navigability and utility of the scientific literature. It is astonishing, for example, that to this day there is no dedicated search engine that allows you to search the full text of every published scientific paper. This makes researchers less effective and limits the value we all get from the billions of dollars we invest in science every year.

And the greatest tragedy of all is that this is completely unnecessary.

Back in the 1990s several people began promoting a simple alternative model. The idea was to treat science publishing as a service, with publishers getting paid a fee for the value they provide; but once this fee was paid, the finished product would effectively enter the public domain rather than the publisher’s private one.

One of the people pushing this new model – now known as “open access” – was my postdoctoral advisor at Stanford, Pat Brown, who enlisted me in his crusade. After failing to convince existing publishers to adopt this model – they generally met this idea with laughter if not outright hostility – the two of us, along with former NIH Director Harold Varmus, launched a non-profit publisher – which we dubbed the Public Library of Science or PLOS – determined to prove that this model would work.

After all, universities were already forking over billions of dollars to support publishers. We were offering them a better deal – access for everyone at a lower price. But, while logic and value were on our side, and we got statements of support from within and outside the scientific community, when push came to shove, only a small group of pioneers joined us. And the reason was that publishers had one very powerful card up their sleeve.

Although scientists do not get paid when the papers they submit to research journals get published, they nonetheless receive something of very high value. Academia is an industry of prestige, and the currency in which prestige is traded is journal titles. In most scientists’ minds, a publication in an elite journal like Nature or Science is as good as gold – a ticket to a job, grants and tenure. And the allure of these publications is so high that most scientists continue to choose journals based entirely on their prestige, even while they acknowledge that their business practices are bad for science and the world.

Realizing that our biggest obstacle was overcoming the prestige of established subscription-based journals, PLOS launched with two journals that adopted the same elitist editorial policies as Science, Nature and their ilk – PLoS Biology for basic life sciences and PLoS Medicine for the clinical world. We hired professional editors from elsewhere in the industry, built fancy editorial boards and had a suite of Nobel Prize winners singing our praises.

But prestige is a difficult thing to engineer. Colleagues, friends and even family members would stipulate all the flaws in the current system and praise what we were doing, but, when they had a high profile paper, would turn around and send it to the same old subscription journals. It was a very frustrating experience.

I’d like to say that I understood why they made these decisions. But I didn’t. I thought – and still think – they were just being cowardly. And when I suggested they were being chickens by sending papers to Science or Nature they would complain that they couldn’t do otherwise because their jobs – or their trainees’ jobs – were at stake.

I didn’t think they were right. But the truth is that I didn’t have a lot of evidence to show them. At the same time we were starting PLOS, I was starting my own lab in Berkeley. Senior colleagues, knowing about my extracurricular activities, took me aside and warned that I would never get grants or tenure if I didn’t publish my work in the old guard high profile journals, and that I would ruin the careers of my trainees if I put my principles over practical realities.

I didn’t want to believe them. I wanted to believe if I did good work people would notice. I wanted to believe that success in science did not require capitulating to stupid, destructive traditions. I also knew I’d look like a total hypocrite if I failed to live up to my own exhortations.

So I made a commitment that every paper from my lab would go to journals that made them freely available from day one. And, over 13 years, I have stuck completely to my pledge. And you know what? The sky didn’t fall. I got grants. Then I got a tenure track job at Berkeley (I had started out at the National Lab up the hill). Then I got tenure. And then I was named an investigator with the Howard Hughes Medical Institute – a coveted award that now funds most of my research. And the people in my lab have not suffered either. My graduate students have received fellowships and gone on to land plum postdoctoral positions – except for the one who went to Facebook and is now a millionaire – and my postdoctoral fellows have all gotten faculty positions at good schools.

But despite this, most of my colleagues still stand by “I need to publish in Journal Blah in order to get” whatever goal they are seeking at the time.

Fortunately, publishing decisions are not entirely in the hands of individual investigators. In 2008, under pressure from Congress to provide taxpayers access to work they fund, the National Institutes of Health – which funds about $30 billion of research every year – implemented a public access policy requiring that grantees make their work available through the National Library of Medicine.

This was an important landmark in the history of the access movement, as, for the first time, a major funding agency was making it a condition of receiving a grant that authors make their works available to the public. And the policy has been successful – 80% of NIH-funded works published in 2011 are now freely available online – there’s nothing like the threat of losing funding to get people to do the right thing.

Unfortunately, under heavy lobbying pressure from publishers, the NIH policy allows for up to a year’s delay between publication and the provision of free access. While better than nothing, delayed access to the literature no more provides the public with access to the latest advances in biomedical research than handing out year-old copies of the New York Times keeps everyone up to date on the latest world events.

And, again under pressure from Congress, earlier this year the Obama administration weighed in on the matter, directing other federal agencies that fund large amounts of research to develop their own public access policies. The White House said all the right things about the importance of public access – and got a lot of positive press. But unfortunately, if predictably, their actions did not match their words. The new White House policy all but established the one year delay used by the NIH as the law of the land – explicitly citing the need to sustain subscription-based publishing businesses as their excuse. Another huge missed opportunity in an area that has had tons of them.

But at least the White House did something. The other major players in this arena – the universities that employ the vast majority of academic scientists, and whose policies shape the course of their careers – have been completely silent. As with funding agencies, universities could hasten the transition to full and immediate open access by making it a condition of employment. Few people would turn down a job because it came with such a requirement.

But, while their own libraries sound the alarm about rising subscription costs and diminishing access, university administrators across the country have done next to nothing to promote changes in scientific publishing that would not only save them money, but make the research done on their campuses more efficient and effective. This is an astonishing abdication of their public mission and responsibility as stewards of scholarship.

However, despite these failings from scientists, funders and universities, the facts on the ground are changing rapidly. In 2007, PLOS launched a new journal – PLOS ONE – that not only provided open access to all of its content, but also dispensed with the notion – central to journal publishing since the 17th century – that journals should select only papers of the highest level of interest to their readers.

Rejecting papers that are technically sound is a relic of the age of printed journals, whose costs scaled with the number of papers they published and whose table of contents served as the primary way people found articles of interest.

But we are no longer limited by the number of articles we can publish, and people primarily find papers of interest by searching, not browsing. So PLOS ONE asks its reviewers only to assess whether the paper is a legitimate work of science. If it is, it is published. The process is relatively simple – no need to ping-pong from one journal to another in order to find the highest impact home.

This idea evidently appeals to the scientific community, because PLOS ONE has grown rapidly. It will publish in excess of 25,000 articles this year, and though only five years old, it is now the biggest biomedical research journal in the world. And it publishes great science – PLOS ONE articles are routinely talked about both by science journalists and the popular press.

And PLOS ONE has not just been a success as a journal, but also as a business, turning a profit that has not only put PLOS on solid financial footing, but attracted the eye of commercial and non-profit publishers worldwide. In the past year several PLOS ONE clones have been launched and there is broad consensus that this sector will grow and ultimately dominate scientific publishing.

But the battle is by no means won. Open access collectively represents only around 10% of biomedical publishing, has less penetration in other sciences, and is almost non-existent in the humanities. And most scientists still send their best papers to “high impact” subscription-based journals.

But as frustratingly slow as progress has been, I believe we are close to a tipping point with most members of the scientific community believing that open access is the future, and a growing and diverse set of publishers engaged in open access businesses.

But being able to access papers is just the beginning. We can now finally start to actually take advantage of computers and the Internet to not just make scientific publishing open, but to make it better.

If the 17th century founders of the Proceedings of the Royal Society were to read a contemporary scientific journal, they would find it disturbingly familiar. Even though we can read papers on a portable computer while flying 35,000 feet over the Pacific Ocean, the only thing that distinguishes a contemporary paper from a 17th century one is the occasional color photograph.

The multilayered, hyperlinked structure of the Web was made for scientific communication, and yet papers today are largely distributed and read as static PDFs – another relic of the days of printed papers. We are working with the community to enable the “paper of the future”, which embeds not only things like movies, but access to raw data and the tools used to analyze them.

There is also no need for papers to be static works fixed in a single form at their time of publication. Good data and good ideas in science are constantly evolving, and scientific papers should evolve over time as new data, analyses, and ideas emerge – whether they support or refute the original assertions.

But the biggest target of our efforts is peer review. Peer review is the closest thing science has to a religious doctrine. Scientists believe that peer review is essential to maintaining the integrity of the scientific literature, that it is the only way to filter through millions of papers to identify those one should read, and that we need peer reviewed journals to evaluate the contribution of individual scientists for hiring, funding and promotion.

Attempts to upend, reform or even tinker with peer review are regarded as apostasies. But the truth is that peer review as practiced in the 21st century poisons science. It is conservative, cumbersome, capricious and intrusive. It encourages groupthink, slows down the communication of new ideas and discoveries, and has ceded undue power to a handful of journals that stand as gatekeepers to success in the field.

Each round of reviews takes a month or more, and it is rare for papers to be accepted without demanding additional experiments, analyses and rewrites, which take months or sometimes years to accomplish.

And this time matters. The scientific enterprise is all about building on the results of others – but this can’t be done if the results of others are languishing in peer review. There can be little doubt that this delay slows down scientific progress and often costs lives.

This might be worth it if these delays made the ultimate product better. But that is not the case. While I am sure that some egregious papers are prevented from being published by peer review, the reality is that with 10,000 or so journals out there, most papers ultimately get published, and the peer-reviewed literature is filled with all manner of crappy papers. Even the supposedly more rigorous standards of the elite journals fail to prevent flawed papers from appearing in their pages.

So, while it is a nice idea to imagine peer review as defender of scientific integrity – it isn’t. Flaws in a paper are far more often uncovered after the paper is published than in peer review. And yet, because we have a system that places so much emphasis on where a paper is published, we have no effective way to annotate previously published papers that turn out to be wrong.

And as for classification, does anyone really think that assigning every paper to one of 10,000 journals, organized in a loose and chaotic hierarchy of topics and importance, is really the best way to help people browse the literature?  This is a pure relic of a bygone era – an artifact of the historical accident that Gutenberg invented the printing press before Al Gore invented the Internet.

So what would be better? The outlines of an ideal system are simple to spell out. There should be no journal hierarchy, only broad journals like PLOS ONE. When papers are submitted to these journals, they should be immediately made available for free online – clearly marked to indicate that they have not yet been reviewed, but there to be used by people in the field capable of deciding on their own if the work is sound and important.

The journal would then organize a different type of peer review, in which experts in the field are asked not only whether the paper is technically sound – as we currently do at PLOS ONE – but also what kinds of scientists would find the paper interesting, and how important it should be to them. This assessment would then be attached to the paper – there for everyone to see and use as they saw fit, whether it be to find papers, assess the contributions of the authors, or whatever.

This simple process would capture all of the value in the current peer review system while shedding most of its flaws. It would get papers out fast to people most able to build on them, but would provide everyone else with a way to know which papers are relevant to them and a guide to their quality and import.

By replacing the current journal hierarchy with a structured classification of research areas and levels of interest, this new system would undermine the generally poisonous “winner take all” attitude associated with publication in Science, Nature and their ilk. And by devaluing assessments made at the time of publication, this new system would facilitate the development of a robust system of post-publication peer review in which individuals or groups could submit their own assessments of papers at any point after they were published. Papers could be updated to respond to comments or to new information, and we would finally make the published scientific literature as dynamic as science itself. And it would all be there for anyone, anywhere to not just access, but participate in.

There is nothing technically challenging about building such a system, and it makes so much sense that it can’t help but happen. But, of course, we’ve been there before. Science is oddly conservative, and there is enough money and power at stake to ensure that people will try to stop this from happening. So if you care about making the scientific literature open and accessible, I urge you to do whatever you can to make it happen. If you’re a scientist, get with the program – there are so many open access options around today, you no longer have any excuse. And try to stop looking at journal titles when you evaluate people and their work. It’s a poisonous process that has to stop.

If you’re not a scientist, but are interested in this cause, you can do all the normal things – write your members of Congress and the like. But I also encourage you to find scientists whose work you find interesting, but cannot access, and send them an email. Or better yet, give them a call. Let them know you want to – but cannot – read their work. And remind them that, in all likelihood, you paid for it.

If we all do this, then maybe the next time someone like Aaron Swartz comes along and tries to access every scientific paper ever written, instead of finding the FBI, they’ll find a giant green button that says “Download Now”.
