Reactions to the Many Co-Authors Project

On November 6, 2023, the Many Co-Authors Project (MCAP) will go live. The stated goal of the project is “protecting really early career [scientists], grad students or people who just graduated, from having their career ruined by basically all of their work being cast into doubt.”

Making sure that my collaborators - all of them, independent of their career status - are not negatively affected by the unfair stigma currently associated with my name is deeply important to me. I know too well the pain of being wrongly accused, and I understand and appreciate the desire to protect any author from false accusations of research misconduct.

Yet, while I understand the stated goal, I have deep reservations about the way the project was structured. For months, I was kept in the dark about the details of the MCAP. I finally received an invitation to it on October 23, 2023, after I asked for it.

Ultimately, given what I’ve learned about it, I’ve chosen not to participate.

Here are the five main reasons:

1. The project suggests an intent to introduce a rigorous evaluation of my work, but it lacks guardrails to protect me: there is no mechanism to validate the statements or claims made by co-authors.

Because of this, it inadvertently creates an opportunity for others to pin their own flawed studies or data anomalies on me.

As an example, one paper involves multiple authors, including Author X and me. Without involving me, the other authors audited the paper and found anomalies in two studies. As I learned after the audit was completed, Author X claimed I collected the data for those studies. But I did not: the studies were demonstrably conducted at a university I was never associated with (information the other authors on the team also have). When they decided to retract the paper, the authors agreed to use ambiguous language. I explained to them that I am confident such ambiguity would be read by the public and by any scholar as meaning that the paper is being retracted because of anomalous data I am responsible for. Yet they still decided to use the same statement and, initially, wrote to the journal editor without including me.

As it turns out, a formal complaint was filed against Author X at their university over the anomalies (not by me). The complaint was dismissed because the paper falls outside the university's scope (it is older than six years). While I agree with adhering to that timeline, the dismissal makes it even harder to get to the bottom of what is going on.

2. The project is taking place during litigation, creating the potential for bias.

Two of the people who authored this initiative are either scholars at the University I have a lawsuit against or defendants in the lawsuit. I had hoped these scholars would recuse themselves from the process, as it is easy to bring bias to an effort when one is motivated to confirm a certain narrative. Instead, one of them is the person who “built the online reporting platform” for the MCAP.

3. The project does not account for bias toward me.

I am being held to a higher standard for data I collected than other scholars are. If a collaborator indicates on the website that they did not collect data for a given study, there are no additional questions. But for data they believe I’ve collected, the website asks additional questions.

I am held to a different standard in another important way. When collaborators conduct audits of papers co-authored with me, other scholars still raise questions about the data even if no anomalies have been detected. I am concerned about biased forensics, which open me up to further risk of people jumping to conclusions that are wrong, as happened this summer.

And even if an audit finds no issues, scholars still ask for the same study to be replicated. Why? Simply because I was involved in the paper.

Yet when a collaborator audits a paper and identifies issues in studies I was not involved with (because I neither collected nor handled the data), they can correct the record without being publicly attacked. This is happening even when the types of anomalies they find in their data are the same as those found in mine, such as a scammer filling out the study multiple times, duplicate rows of data, or conditions coded the wrong way. Honest errors in research do occur, and they do not necessarily indicate research misconduct.

4. The project applies different standards for data retention to me than to the rest of the field.

Some collaborators, they tell me, are finding it difficult to track down records of papers published many years ago. In fact, most schools have policies that require data to be kept for only three or six years. Some collaborators are stating this in their audits. And yet, I am held to a different standard. While it seems acceptable for a collaborator not to have perfect records from a decade ago, I am scrutinized for facing the same challenge.

5. The project does not account for common practices in the field.

Likely, some collaborators will report on the website that they do not have data they believe I’ve collected. In fairness, in most cases I do not have the data for studies I did not collect either. Data sharing practices have changed over the last decade or so. I should not be viewed negatively for a practice that was (and still is) common in the field.

Like all scholars, I am interested in the truth. But auditing only my papers sidesteps a deeper reflection the field needs. Why is it that the focus of these efforts is solely on me? Why has this exercise never been applied to other researchers whose papers have been retracted? If this project is supposed to strengthen the field and improve science, then I recommend that every collaborator involved in it, and every scholar behind the MCAP, audit all of their papers, not only the ones they’ve co-authored with me. I know of at least one co-author who intends to take on those efforts. In the spirit of good science, I invite everyone to follow suit.

As I wrote to my collaborators, I hope to audit all of my papers, and to do so with unbiased help. I am constrained for now: I do not have access to a research budget, nor to records that I requested from HBS and that HBS has failed to share with me, both of which I need to conduct a comprehensive audit.

From what I learned by talking to collaborators over the last few weeks, there are clear lessons from efforts like the MCAP: the need for better record keeping about who does what and when, more transparent practices around data, and standards applied equally to all scholars, not just one.

Unfortunately, though, the MCAP is raising a lot of fear. Some collaborators told me they fear excessive scrutiny of their work, motivated by animus and directed at certain groups of people (women more so than men). They fear becoming targets of false accusations. They fear the behavioral science field has lost its way, with scholars accusing others instead of engaging in dialogue. I don’t think these fears will make behavioral science better. A shared interest in improving science through dialogue rather than public shaming would.