There has been a long-simmering debate within the genetics community over how to define, discuss, and possibly constrain research into controversial topics. A particular point of contention is the sharing of genomic data, with the debate sometimes breaking out into the mainstream press. Here I want to first explain the current processes and broader motivations around data sharing and data sharing constraints, and then give my take on the dispute — where I end up disagreeing with nearly everyone.
Where does data come from and where does it go?
When investigators seek to conduct a data-oriented study, a major focus is on informed consent: ensuring that participants understand and agree to what is being done with their data. The consent process can vary greatly across studies: some destroy the data after the study is completed, others keep it for only a well-defined set of secondary analyses, make it available in a controlled-access repository, or make it fully publicly available on the internet. Recently, the NIH has mandated that genomic data collected using federal funding be deposited into an online database for broader research use. The agency “expects” studies to be consented for the “broadest possible” data sharing and research use, and there is clear recognition that data sharing is generally good for scientific progress. But the reason broad sharing is not mandated, as explained by the NIH1, is concern that participants would be less likely to consent to such studies and the studies would become less representative of certain groups. This is the core trade-off: the more you ask of participants, the less likely they are to participate at all. This is also the reason genetic data is almost never made completely available on the web but resides in secured databases: participants are worried about re-identification and genetic discrimination2. In short, while there is a move towards more open data sharing, the level of openness differs across studies due to consent/participation tradeoffs.
Sloppy data practices pose a risk for all science
While these issues may seem bureaucratic, treating consent and data integrity lightly can have substantive and long-lasting negative consequences for research well beyond a single project. A striking example is the study of genetic data from the Havasupai tribe and the subsequent legal case. In the early 1990s, members of the Havasupai were recruited and consented for research into the genetic mechanisms of diabetes, which was unusually common in the tribe. The diabetes study did not yield results and the project was ended shortly thereafter, but the investigators continued to use the data for research it had not been consented for, including studies of psychiatric disorders and evolutionary history. The latter usage was particularly egregious, as it directly challenged the tribe’s beliefs about their ancestral origins. In a dramatic revelation, a tribal member attended the thesis defense for one such project and confronted the defending student about the provenance of the data, to which the student had no good answers. It quickly became clear that the data had been used against consent, leading the tribe to banish the investigators from the reservation and sue the university, reaching a settlement in 2010.
Beyond the immediate impact on the tribe itself, the case was highly publicized and led to broad distrust of biomedical research in the Native American community. As a consequence, population genetics research of Native Americans in the US was effectively frozen for years. The impact is (unintentionally) summarized by the figure below — what I consider to be one of the most depressing figures in a genetics paper — which shows the sampling sites of a large Native American genetics project, and the glaring omission of any tribes in the US. A single screwup (albeit an egregious one) by a single study team can lead to a massive setback for an entire research area.
Another recent example was the use of the Adolescent Brain Cognitive Development (ABCD) cohort to conduct race-IQ pseudoscience in violation of the Data Usage Agreement. An NIH investigation determined that genetic data from the ABCD study had been used improperly to study research topics that were not described in the data access request and that controlled access data was shared with unauthorized collaborators (an echo of the Havasupai case). As punishment, the NIH banned the senior author from access to the database and required them to permanently delete all copies of the data; the author was eventually also fired by his university employer. Yet, shockingly, their external collaborators have apparently retained the data and continued to publish analyses in multiple subsequent pre-prints, five to date. The Chronicle of Higher Ed has the full story and in many aspects it borders on farce3, but not only did this all happen to data from children, the perpetrators appear to be completely undeterred. As revealed in a Guardian investigation last year, the same group — now operating outside academia — claimed on a private call that they are working with data from the UK Biobank (the biobank denies their data was leaked but acknowledges that these investigators have repeatedly attempted to request access). So in the very recent past we have fringe researchers accessing and leaking data in violation of study agreements and these same researchers continuing to attempt to access controlled data. As the Havasupai case demonstrates, if these exploits draw backlash from the study participants they could have major implications for the entire field of human genetics, which is highly dependent on accessible biobanks. The stakes are high.
When is research abhorrent?
Beyond consent violations, there has been a call for guidelines or constraints around research that may be considered harmful in more subjective ways. When people think of controversial research topics, the study of genetic differences between racial groups is typically high on the list. In reality, the scientific community and the NIH are keenly interested in identifying genetic causes of racial health disparities and invest substantial resources in this effort. It is common for flagship genomics studies to highlight the few differences that are observed across racial or ancestry groups. This includes studies of seemingly radioactive topics, like a recent NIH-sponsored study of the relationship between African ancestry and brain function (published in Nature Neuroscience). So if there is already active funding and research into controversial topics, are there any red lines at all?
Recently, Matthews, Tabery, and Turkheimer (2024) [MTT] proposed a means of diagnosing “abhorrent” science, which they argue is far worse than merely controversial. In their model, research can be judged along two axes — harm and value — with research that is both harmful and valueless being deemed abhorrent and worthy of some amount of stigma (“It is abhorrent because it serves no end other than to cause harm”). They further draw a subtle distinction between value and scientific validity, arguing that some research may be scientifically valid (i.e. conducted with rigor) while still having little value. As an example of valueless science, they point to a GWAS of the ability to smell asparagus in urine (archly titled “Sniffing out Significant Pee Values”). They go on to provide some examples of abhorrent science, focusing primarily on “genomic race science”, which (my definition, not MTT’s) seeks to use genomic data to justify racial discrimination by claiming that observed differences between racial groups are largely genetically determined. MTT argue that this research has neither practical value, because our society has accepted that people deserve to be treated as individuals rather than as groups, nor theoretical value, because the methods employed cannot distinguish genetic from environmental causes [Update: I should note that MTT acknowledge the value of studies into specific genetic mechanisms, including individual causal alleles that may differ in frequency across populations.]. MTT stress that their harm/value map is intended to be purely descriptive, and they do not propose any specific consequences for what should happen to abhorrent science4.
What can be done about abhorrent science?
Two other recent papers [Panofsky et al. (2024) and Bird & Carlson (2024)] have gone further to tackle the question of how the field should actually deal with genomic racism. Both papers demonstrate that the underlying research itself is largely conducted by coordinated groups of hobbyists, with no credible academic affiliations, typically publishing in journals that they themselves have founded and review. The underlying studies, to the extent they are conducted, are low quality and highly repetitive because the primary goal is to generate memes for political blogs and forums.
Bird & Carlson focus on aspects of study design that can inadvertently facilitate the development of such racist memes. They note that many population genetics cohorts are intentionally sampled and analyzed to emphasize genetic distinctions and do not accurately reflect the continuous diversity of modern populations. They advocate for the development of “more accurate continuous quantitative and visual descriptions of ancestry at multiple resolutions” and for wider use of ethical guidelines for analyzing, describing, and visualizing genetic ancestry. Finally, they urge genomics consortia to re-evaluate their informed consent procedures to ensure that participants are “thoroughly informed about potential group harms that might arise (including through secondary analysis of anonymized data)” and for agencies like the NIH to consider even stricter penalties for violations of data use agreements (motivated by the ABCD debacle).
Panofsky et al. focus more on problems with the community response, which they see as bearing more responsibility than the community likes to admit and as largely reactive rather than preventative5. They note that anti-racist advocates primarily argue on historical or philosophical grounds, thus appearing to cede the “data driven” territory to the race scientists; that when geneticists do get involved in countering misinterpretation of their work, they typically do so through formal FAQs and society statements that have little traction in the lay communities where racist memes are spread; and that most geneticists simply do not get involved at all. They advocate for scientists to develop “antiracist material that can compete squarely with the directness and accessibility of curated memes and rehearsed claims” and discuss various ways of incentivizing this kind of work. They also urge scientists to establish standards of methodological rigor and to act to prevent the publication of research that does not meet them. Most controversially, they suggest forms of data access restriction, such as requiring collaboration with consortia members or pre-registration and vetting of proposed studies before data is shared. They cite but do not particularly engage with concerns that such restrictions may be inappropriate or even border on censorship.
And what are the concerns?
The question of data access restrictions has come up in the past, and restrictions were vehemently criticized in two essays by behavior/intelligence researchers James Lee and Stuart Ritchie a few years ago6. Lee’s argument is fairly conspiratorial7: he states — without presenting any evidence — that “It’s been an open secret for years that prestigious journals will often reject submissions that offend prevailing political orthodoxies” and that geneticists are now being prevented by the NIH from even accessing data if their research “may wander into forbidden territory”. Lee claims that his requests for data from public databases have been denied or unduly scrutinized even though he is seeking to investigate non-controversial aspects of the relationship between genetics and intelligence. He argues that researchers should not have to justify their work as being beneficial (i.e. having value) at all, and that the NIH has an obligation to make data available regardless of the content of the research it is being requested for. In his commentary on Lee’s article, Ritchie presents a more charitable and evidence-based read of the situation but raises similar concerns. He points to an Alzheimer’s disease cohort that does not allow data to be used for intelligence research and highlights the arbitrary nature of this distinction (Alzheimer’s being essentially a condition defined by decline in intelligence). He reasons that such restrictions are likely an over-reaction to low-quality race/intelligence research and bureaucratic scope creep. He acknowledges that some data constraints may be appropriate, but advocates against any content-specific bans.
What is to be done?
So far I’ve aimed to summarize the various motivations and positions in the debate around controversial genomic science. Now I’ll give my take.
The problem of scientific racism is largely non-scientific
Panofsky et al. and Bird & Carlson home in on an important aspect of the genomic racist ecosystem: its goal is not to conduct scientifically valid research that happens to be controversial, but to produce an endless stream of science-like garbage that makes for effective memes for the public8. Memes can be a potent source of misinformation, especially in the hands of powerful or unstable people, but they are fundamentally (and intentionally) a non-academic work product. Even when the trappings of academic research are present — e.g. citations in the corner of an image — digging into these citations will typically reveal the source to be an obscure blog post written by an anonymous collaborator, with a hastily thrown together spreadsheet or rudimentary code. The fact that this flood of content is aimed directly at the public also means that countering it through formal statements or paywalled manuscripts will not be nearly as effective as direct public engagement that is accurate, non-technical, and persistent.
Data access should be determined by participant consent not researcher gatekeeping
I’ll be blunt: the suggestions in Panofsky et al. to limit data access are wrong and should be opposed. While this ostensibly puts me on the side of Lee and Ritchie, they too have misunderstood why data restrictions are wrong: the principle should not be that all data is available for all research purposes, but that the decision of how data can be used must be left up to the consent of the study participants. The participants are, after all, the generators of the data and the ones who take on the most risk by releasing it for research. Just as it is irresponsible to misuse data that was only consented for specific purposes, it is also irresponsible to swoop in and prevent the use of data that was consented for broad purposes — both are violations of the agreement between the investigator and the participant.
Researchers may argue that they are acting in the participants’ best interests, but it can be surprisingly difficult to know what those interests actually are. In the case of the Havasupai, for example, the topics that were forbidden — psychiatric disease and population genetics — are typically seen as scientifically appropriate, whereas the topics that were allowed — population-specific genetic drivers of diabetes and metabolic syndromes — could certainly be considered stigmatizing and essentialist. Academics are an unusual slice of society, and it is unlikely that their assumptions about what is/isn’t harmful will align with those of study participants. The consent agreement is how participants make their values known. That also means participants need to be truly informed about the potential harms of broader research they are consenting to, as argued by Bird & Carlson.
The data constraints proposed by Panofsky et al. are also unlikely to work for purely practical reasons. Data access plays such a small role in the genomic racism effort that data restrictions do not present a meaningful barrier. It is disturbing to see Education GWAS associations included in the Buffalo shooter’s deranged manifesto, but these associations came from public Supplementary Tables, and if the Education GWAS had never been run this evidence would have simply been taken from a raft of prior false-positive candidate gene studies or just invented out of thin air like much of the document. Since the majority of the scientists interacting with this data are good actors, data constraints are much more likely to lead to disagreement and polarization. Rather than limit the reach of ill-intentioned outsiders, they create barriers for and isolate well-intentioned insiders.
I should mention that other fields have more fundamental issues than genomics (and also demonstrate how bad things can get in the complete absence of norms). The broader field of intelligence research, for instance, routinely publishes race science in its flagship journals, including decades of cherry-picked and invalid global IQ data from the openly racist Richard Lynn, as well as blatantly irresponsible analyses such as one that assumed the earth is flat (with the same journal rejecting a critical response), one that relied on “individual examples taken from the author’s life” (subsequently retracted), and one whose primary analysis code could not be run and produced glaring errors. The International Society for Intelligence Research invited discredited race scientists to present at its meeting, leading one invited geneticist to publicly withdraw in protest. The conference eventually dis-invited the race scientist, only to invite them back to give an unlisted presentation in a subsequent year. Their response to external criticism of shoddy science was thus to simply start hiding their invitees. More recently, the editorial board of the journal Intelligence seemingly threatened to resign over new editors who were perceived to be critical of race science, published anonymous accusations against the editors in a race science blog, and then seemingly un-resigned (I say seemingly since no on-the-record statements from the board itself were ever made). In other words, it’s a damn mess, and the path for that community to regain credibility may require substantially more rigorous internal standards.
Advocates for open science should rely on facts not conspiracy theories
It is also worth clarifying that Lee’s piece in particular is heavy on sensationalism and light on facts. The primary emotional thrust of the article — and likely the reason it was published in City Journal9 — is that intelligence research is already being censored by the NIH! Yet there is no evidence that this has actually occurred. As mentioned above, the NIH advocates for broad data sharing and leaves the actual data access restrictions to the individual study investigators, with the NIH database merely acting as the storage facility. Thus the foundational claim of the article — that government bureaucrats are censoring science that doesn’t toe the party line — is inaccurate. It is possible that the individual study investigators were trying to prevent Lee from analyzing their data, but an equally plausible explanation is that the study participants simply did not consent to his research topics. Lee avoids mentioning the word “consent” entirely, but Ritchie at least entertains the possibility before discarding it:
When I chatted to some colleagues about this issue, some suggested that the NIH rule might be due to consent forms: maybe the participants in the original GWAS filled in a form that said their data would only be used for “medical” research in future, and so the NIH is just following that rule by restricting research on stuff like intelligence, which isn’t a “medical” outcome. I disagree that intelligence isn’t a medical outcome, for the reasons discussed above, but even if you grant this, the idea that research on drug and alcohol addiction wouldn’t count as “medical research” is obviously untenable.
This is just false. Consent documents can indeed be very specific and it is entirely possible that participants agreed for their data to be used to study, say, Alzheimer’s disease but not alcohol addiction. Anyone with even a passing experience with the NIH database is aware of the complex structure of consent groups that define what kind of research a given study was consented for. Pick a study (here is an example) and you will often see multiple specific consent-based restrictions: in this case one group of participants that only allows research on cardiovascular disease and one group that allows medical research but not the study of ancestry (more fallout from the Havasupai case). This study is inaccessible to a cancer researcher like myself, or a population geneticist, or an intelligence researcher. In fact, I regularly run into studies that were not consented for cancer research. Do the investigators for these studies have an ideological bias against cancer or population genetics? No, it is much more plausible that they simply offered a set of options in the consenting documents and this is what the participants elected to allow. Bold claims of research suppression, especially when written with the intent to steer public opinion, require due diligence to rule out plausible alternative explanations — and that was not done here.
Finally, I’ll note that Lee is surely aware of the tradeoffs between data sharing and participation because his own research relies on data that is not shared at all! Here is a representative “Data availability” statement from a recent paper where Lee is a senior author (work that was supported by multiple NIH grants):
That’s it! No attempt to deposit the data to a public repository, or to make a de-identified subset of the data available, or to provide a path to data access on-site or through formal collaboration. The data availability is that it is not available. And this is par for the course in behavioral genetics, where a handful of groups have a tight grip over twin cohorts/registries that have been accumulating data for decades — often using federal funding — and are either completely inaccessible to external researchers or require project pre-approval and collaboration. If that sounds a bit familiar, it is exactly the controversial gatekeeping that Panofsky et al. considered as a last resort. So we have the fairly ridiculous outcome where researchers who made their careers on studies with 100% gatekeeping are criticizing studies that dutifully deposited their data with some restrictions. The former does not justify the latter, but it makes clear that this discussion is not really about “open” science, but about how to balance risks and benefits (and maybe some axe grinding).
Lack of scientific validity should drive scientific stigma
Beyond the question of data access, we do need a framework with which to evaluate research on controversial topics. The “value” and “harm” axes outlined by MTT are a useful way to frame the discussion but limited by the inherently subjective nature of the terms. Even something as seemingly low value as the GWAS of asparagus pee could have potentially identified some critical new olfactory mechanism — it is hard to predict where value will come from10. When considering repercussions, the field should continue to place scientific validity at the forefront of judging scientific value. While MTT do not explicitly define a rubric for distinguishing value from validity11, I think two specific properties are relevant: well-defined parameters and control for confounding. The first requires precisely defining the target parameter that the study is attempting to estimate, i.e. “What is your estimand?”; the second requires carefully controlling for potential sources of confounding and extensively documenting the sources that cannot be controlled. Additionally, study pre-registration is vital to ensure that the estimand itself was not cherry-picked after seeing the results. These standards are already emphasized in other fields such as economics or in the causal inference literature, but the field of genetics has often skated by on the (false) assumption that any genetic association is fundamentally causal.
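The estimand/confounding distinction above can be made concrete with a toy simulation (entirely hypothetical numbers of my own construction, not drawn from any real study): if a "genetic" predictor is correlated with an unmeasured environmental factor that also affects the outcome, a naive regression estimates a different parameter than the direct effect we presumably care about, while explicitly defining the estimand and adjusting for the confounder recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: a confounder (e.g., shared environment) that
# influences both the "genetic" predictor and the outcome
c = rng.normal(size=n)
g = 0.5 * c + rng.normal(size=n)      # predictor correlated with confounder
y = 0.3 * g + 1.0 * c + rng.normal(size=n)  # true direct effect of g is 0.3

# Naive estimate: regress y on g alone (estimand left undefined)
b_naive = np.cov(g, y)[0, 1] / np.var(g)

# Adjusted estimate: declare the estimand (direct effect of g given c)
# and include the confounder in the regression
X = np.column_stack([np.ones(n), g, c])
b_adj = np.linalg.lstsq(X, y, rcond=None)[0][1]

print(round(b_naive, 2), round(b_adj, 2))  # naive ~0.70 vs adjusted ~0.30
```

Both numbers are "statistically valid" estimates of something; the point is that only one of them corresponds to a well-defined causal parameter, and a study that never states its estimand leaves the reader unable to tell which is being reported.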
Rather than attempting to define certain phenotypes as harmful/stigmatizing based on ad hoc criteria or guessing at public sentiment, phenotypes should be tiered as relatively free of confounding or relatively saturated with it, with higher expectations of methodological rigor and care in presentation for the latter. For instance, traits related to intelligence should have high expectations of rigor not because they are sensitive but because they are clearly confounded in complex ways we do not fully understand. Part of me suspects that behavioral geneticists may actually be more comfortable defining IQ to be ethically off-limits than acknowledging the complex and unusual confounding. The former is merely a political decision and gives their research a sense of danger and importance, whereas the latter would be a methodological indictment of decades of flawed studies.
Stricter scrutiny should then be applied to work that is of low scientific validity and high potential harm. Studies that have low scientific validity should garner less impact, be accepted in lower-tier journals, and cited less frequently. Studies that have low scientific validity and also have a high degree of potential harm should additionally elicit polite but active criticism, both informal (lay explanations in blog posts and social media) and formal (critical reviews and perspectives). One example is the recent dust-up over the visualization of genetic ancestry in the AllOfUs flagship paper, which used an approach with poor scientific validity and was then politely but bluntly criticized in a blog12. Post-publication critique is an important function of a healthy and dynamic research community and encourages disagreements to be aired and resolved in the open rather than simmering in blinded peer reviews. Finally, researchers misrepresenting low-validity and high-harm studies should be stigmatized by the community just as with any other scientific misconduct. Ironically, the field has largely taken essentially the opposite approach: confounded genetic analyses of controversial traits were published in flagship journals to great fanfare, and only gradually revised over time, typically in venues with much lower impact and no public reach.
How would this approach apply to genomic racism? Take the example of using polygenic scores trained in Europeans to estimate phenotypic means across racial groups. What comes out is a ranking that is prone to recapitulate environmental differences but has the appearance of a simple genetic cause, an ideal tool of misinformation. The problems of polygenic score comparisons are widely acknowledged in the field, and experts regularly warn about misinterpreting such scores.
Importantly, such analyses are not just wrong as a subjective value judgement, they fail tests for scientific validity. The underlying parameter is fundamentally undefined: it is not a crude estimate of the “genetic mean” of the phenotype in the target population, because it does not incorporate the influence of genetic variants that are specific to the target population. It is not even clear what scale the scores should be measured on (and for medical purposes they are generally standardized). The estimator is also fundamentally confounded: by aggregating the effects of thousands of variants, it becomes dominated by non-causal differences in minor allele frequencies, variant correlation (LD), population stratification, natural selection, indirect genetic correlations, and gene-environment interactions with no currently known methods for removing the bias. Importantly, this question can be investigated in a scientifically valid way, for example by deriving an estimator of trans-ancestry genetic correlation and considering potential sources of bias and confounding that may remain.
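The stratification component of this bias is easy to demonstrate in a fully simulated sketch (hypothetical parameters throughout, resembling no real cohort): train marginal per-SNP effects in a mixed sample of two populations whose phenotype difference is entirely environmental, and the resulting "polygenic score" manufactures a between-population gap with no genetic cause at all.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4000, 800  # individuals per population, number of SNPs

# Two populations whose allele frequencies have drifted apart
p_a = rng.uniform(0.1, 0.9, m)
p_b = np.clip(p_a + rng.normal(0, 0.12, m), 0.05, 0.95)

def genotypes(p, n):
    return rng.binomial(2, p, (n, len(p)))

# Training sample: an uncorrected mixture of both populations.
# There are NO causal genetic effects: the phenotype is pure noise,
# plus a +1 environmental shift in population B (an unmeasured exposure).
X = np.vstack([genotypes(p_a, n), genotypes(p_b, n)])
y = rng.normal(0, 1, 2 * n)
y[n:] += 1.0

# Naive marginal GWAS with no ancestry correction: per-SNP regression slopes
Xc = X - X.mean(axis=0)
yc = y - y.mean()
beta_hat = (Xc * yc[:, None]).sum(axis=0) / (Xc ** 2).sum(axis=0)

# Score fresh individuals from each population with the estimated effects
gap = genotypes(p_b, n) @ beta_hat - genotypes(p_a, n) @ beta_hat
print(gap.mean())  # a substantial, purely artifactual "genetic" difference
```

Because every frequency-differentiated SNP independently tags the same environmental shift, aggregating thousands of them can produce a spurious score gap even larger than the environmental difference itself, which is exactly why such cross-population comparisons fail the validity test.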
In short: Balancing data sharing and public responsibility
Genomic racism has for decades sought to establish a parasitic relationship with the field of genetics. It is disturbing to see a community that is obsessed with our work for the sole purpose of twisting it into abhorrent propaganda. This disturbed me 15 years ago as a population geneticist, when the obsession was over misleading PCA plots, and it disturbs me now as a medical geneticist, when the obsession is IQ GWAS and misleading polygenic score differences. However, academics have a tendency to react by reaching for the tools of bureaucracy, and I am concerned that this impulse will only lead to ineffective gatekeeping and turf wars. Scientists are generally not very good at predicting what the public should or should not see (and even worse at explaining why). Rather, our role must be — first and foremost — to respect the agreements we make with the study participants, and then to tell the public the hard truths, including making it abundantly clear when work on difficult topics is of low scientific validity.
This is a long post so let me summarize the key points:
Studies should follow the NIH recommendations and strive for the broadest possible consent for research and data sharing. As suggested by Bird & Carlson, that also means investigators need to do a better job of articulating the potential harms to their participants. It is simply untenable to assume that a person consenting to research that improves human health, typically in a clinical/hospital setting, is fully informed that they have also consented to the study of factors like educational attainment, income inequality, or spousal choice just because those concepts can in some way be connected to “health”. The way to head off a repeat of the Havasupai scandal is to ensure that participants know exactly what they are signing up for. Multiple tiers of consent would allow individuals to pick the level of risk they are comfortable with without blocking the core research question.
Absent clear justification from the consent process, placing post hoc limitations on broad research areas like intelligence or substance use is inappropriate gatekeeping, a violation of the consent decisions made by participants, and a barrier to scientific progress (if you are still unconvinced, Ritchie’s article makes a more detailed case for the value in studying the genetics of intelligence and substance use). When such barriers are imposed, researchers within scrutinized fields are disincentivized from criticizing pseudoscience out of fear that their own work will be obstructed. This creates hostility in precisely the experts that are the most informed to offer substantive critiques of bad science and leads to further polarization.
Individuals who leak sensitive data should be categorically banned from accessing other datasets and should be treated by the community the same way as other scientific misconduct or fraud. Rules are there for a reason and, as the Havasupai case demonstrates, a data-driven discipline cannot survive without enforcing the rules.
The community should demand a high standard of scientific validity and rigor for research topics that have broad public impact and potential for misinterpretation. This includes not just comparisons across race, but across sex, wealth, geography, etc., where confounding is also a major concern and potential misinterpretation is high. Studies should strive to define what they are estimating and on what scale, articulate all sources of confounding, and pre-register their analysis plans. Manuscripts should prioritize presenting the highest-validity findings first even if they are more modest. It is irresponsible to write a paper where the Introduction starts with a claim like “criminality is highly heritable”, the Results report a confounded GWAS heritability of 0.05, and the Supplement shows a quasi-causal within-family estimate of 0.01 (this is a made-up example, but you get the idea).
A commitment to open data access also requires scientists within the field to be actively critical of work that does not meet those high standards of rigor. This includes writing commentaries and critical reviews but, more importantly, accessible content for the public. The field has long operated under the assumptions that all traits are highly heritable and that GWAS estimates are largely unconfounded and causal, assumptions that have proven particularly inaccurate for behavioral phenotypes. Such persistent misinterpretations need to be redressed, and we should be vigilant about not overstating findings in the future.
“Regarding informed consent, the GDS Policy expects investigators generating genomic data to seek consent from participants for future research uses and the broadest possible sharing. A number of commenters were concerned that participants would not agree to consent for broad sharing and that enrollment in research studies may decline, potentially biasing studies if certain populations were less likely to consent to broad use of their data. … NIH recognizes that consent for future research uses and broad sharing may not be appropriate or obtainable in all circumstances. ICs may continue to accept data from studies with consents that stipulate limitations on future uses and sharing, and NIH will maintain the data access system that enables more limited sharing and secondary use.” ~ NIH Genomic Data Sharing Policy
For what it’s worth, my knee-jerk reaction is that these concerns are overstated: genomic data is simply not that useful for malign purposes and we already have laws in place to prevent discrimination. We as a field should be carefully educating and encouraging participants to consent to truly open data sharing. But this is a topic for a different post.
In response to being terminated, the author sued their university for first amendment violations and, bizarrely, demanded a declaration that “the hereditarian hypothesis is worthy of study, but is presently under assault”. In late 2024 the court found in favor of the university and dismissed the case. Regarding the declaration, the judge stated: “There would be absolutely no reason for this Court to issue opinions about the hereditarian hypothesis or how CSU must conduct its business”. I include this example to highlight how genuinely nutty these disputes can get.
“Our analysis makes recommendations for the terms of the discussion about race research and is not intended to determine its outcome in any instance beyond the several examples we have presented. Particular research programs will continue to be controversial, as they should be. People can disagree in good faith about the value and potential harms of scientific endeavors. Scientific truth will out, and we will say again that nothing in our analysis should be taken as endorsement of suppressing the truth.” ~ Matthews et al.
“Human genetics has shopped at the gun store, cleaned and oiled the purchase, loaded and calibrated the weapon, and left it on a low table in the kids’ playroom. None of this excuses what the [scientific racism] movement is doing with the field's research. But pretending that weaponization is a new and troubling misappropriation of genetics ignores human genetics’ historic partnership with eugenic movements and the ongoing fit of its practices and products with racialist and determinist thinking.” ~ Panofsky et al.
Note that these essays preceded the suggestions of Panofsky et al. and were cited in the latter as dissenting opinions. I am summarizing them here out of chronological order.
The COVID-19 lab-leak theory even gets a shout-out.
Notably, Ritchie’s article gives a very similar description, a rare instance of agreement across all parties: “There’s a lot of very bad research on intelligence and genetics out there. There’s a small coterie of researchers who churn out low-quality studies on the most controversial questions—race differences, sex differences, and so on—either because they’re ideologically committed to certain results, or because they enjoy trolling and “owning the libs” (or both). They don’t take the research seriously, and nor do they try to anticipate potential misunderstandings or misinterpretations by writing “FAQ” documents to attach to their papers (as serious genetics researchers often do).”
An urban policy journal published by a self-described “free-market think tank” that also employs conservative provocateurs like Chris Rufo and highlights culture war topics.
This, again, may vary substantially by field. The authors of MTT are a psychologist, a philosopher, and a behavior geneticist, and these fields may engage in largely hypothesis-driven research that probes a specific question for which value is quantifiable. In contrast, genomics often involves so-called “hypothesis free” scans and association studies for which the value is largely unknown until the study is conducted, and sometimes not until well after that.
“Although a rigorous philosophical assessment of scientific validity and its relationship to value is beyond the scope of this project …” ~ Matthews et al.
Note that this critique did not call for blocking access to data, but for increased rigor and clarity: “The All of Us authors should therefore immediately post a correction to AoURFig2 that includes a clarification of its purpose, and corrections to the text so the paper properly utilizes terms such as race, ethnicity and ancestry. All of us need to work harder to sharpen the rigor in human genetics, and to develop sound ways to interpret and represent genetic data.”
“use genomic data to justify racial discrimination by claiming that observed differences between racial groups are largely genetically determined. MTT argue that this research has neither practical value, because our society has accepted that people deserve to be treated as individuals rather than as groups, nor theoretical value, because the methods employed cannot distinguish genetic from environmental causes.”