The Essentially Contested Governance of Synthetic Biology
Can We Globally Govern Synthetic Biology Before Disaster? (Pt. 1)
Synthetic biology—the creation of novel systems from biological parts—is the new era of biotechnology. Yet there remains almost no mandatory international oversight for the artificial synthesis of pathogens, and there is considerable variance in the governance of biotechnology across technologies, concerns, and actors. Most existing governance operates through voluntary guidelines. The Australia Group—an informal forum of 43 countries that harmonise their export control regimes—has expanded its export control list to a select few technologies specific to synthetic biology, such as nucleic acid assemblers. Organisations such as the OECD issue guidelines pertaining to biorisk management for biological resource centres. The key institution governing nucleic acid synthesis is a voluntary arrangement among synthesis providers known as the International Gene Synthesis Consortium (IGSC). There is, however, no international regulation that restricts particular modified pathogens or types of dual-use research of concern (DURC).
The governance of bioweapons in general has always struggled. The Biological Weapons Convention (BWC) famously does not include a verification mechanism, unlike the Chemical Weapons Convention or the nuclear agreements verified by the International Atomic Energy Agency. That said, significantly more institutionalisation of governance predates the emergence of synthetic biology in the early 2000s. On top of the BWC, there is the UN Secretary-General’s Mechanism for Investigation of Alleged Use of Chemical and Biological Weapons; the InterAcademy Partnership Biosecurity Working Group; the Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies; and, by my count, at least 20 international institutions with some mandate that covers aspects related to biotechnology misuse—14 of which were set up before 2000¹.
So, why exactly has synthetic biology been so difficult to govern over the last two decades? There are broadly two buckets of explanations for this: a technological deterministic view that stresses dual use concerns and a social contextualist view that emphasises how actors use concerns about biotechnologies to advance pre-existing interests and agendas. However, I think both of these have pretty significant gaps. The history of global bioweapons governance is not just a story of dual use complexity complicating governance, nor just a story of prior concerns mismatched with object-level realities. A really important dimension is the contestation and debate that emerges, in part, due to the features of synthetic biology itself. Understanding the role of broader governance ecosystems, and the role that disagreement fundamentally plays in precluding governance, is important for judging which approaches to governing biotechnologies we should, and shouldn’t, be excited about.
Dual Use and the Technological Deterministic View of Biosecurity Governance
Central to the technological deterministic view is the claim that the dual use nature of synthetic biology—“the possibility that the same technology or item of scientific research has the potential to be used for harm as well as for good”²—creates all manner of governance challenges. Dual use technologies have greater proliferation risks because civilian deployment means these technologies or their precursors are already widespread; they lower the barriers for non-state actors or rogue states to access capabilities traditionally limited to major powers; and they complicate verification procedures for assessing whether a technology is being used for harmful purposes by requiring intrusive inspections. Governance is also complicated by the regulatory tension between constraining harmful uses and suppressing beneficial civilian uses, known as the dual-use dilemma. This perspective, more often than not, is found among practitioners in the biosecurity space.
The problem is, “biotechnology, along with virtually all other technologies, has the potential to be used for both benign and malign purposes”³. Dual use as a property is a binary that applies to practically all developments in synthetic biology and several other technologies. If knives can be used as both violent weapons and culinary essentials, then what use is it highlighting the dual use character of biotechnologies?
One option is to think of dual use as a scale, representing a continuum defined by “the ease of distinguishing military from civilian uses”⁴ (dual use distinguishability). Therefore, you might argue that what makes synthetic biology so distinct is that distinguishing between military and civilian uses is exceptionally difficult. For example, nuclear reactors for energy production typically use uranium enriched to 3-5%, while weapons-grade uranium requires enrichment levels above 90%. Similarly, large stockpiles of the causative agent of anthrax serve much less of a civilian purpose than a military one. However, whether you are developing a sophisticated bioweapons program or conducting research, the requisite laboratory capacity to artificially synthesise pathogens may look near-identical (depending on scale).
However, both traditional and novel biotechnologies are “processes rather than items…it is unhelpful to think of a state as possessing biological weapons and more accurate to think of it being in a position to threaten or perpetrate a biological attack”⁵. If it can take merely days to culture infectious bacteria, then the threat lies in the culturing process rather than the stockpiled product⁶. Synthetic biology offers the promise of replacing cellular inocula and bioreactors with artificially synthesised microorganisms and cell-free production systems. However, the degree of dual use distinguishability is broadly comparable.
There are also some conceptual uncertainties. Dual use along a civilian-military dichotomy does not capture misuse risks from non-traditional actors like lone wolves or illegitimate civilian labs. We may instead think a better dichotomy here is about harm vs benefit, but this can lead to some pretty counterintuitive prescriptions: militarised uses can be beneficial (e.g., deterrence) and civilian use can be harmful (e.g., unsafe, unregulated drug development practices). We may instead look to other dichotomies such as peaceful/non-peaceful, legitimate/illegitimate, or benevolent/malevolent—but all have their own shortcomings and do not seem to explain what is so distinctive, if anything, about synthetic biology.
Controversy, Contestation, and the Social Contextualist View of Biosecurity Governance
In turn, the second bucket takes a social contextualist view, looking at how dual use risks have been socially constructed, especially given the persistently high barriers to the weaponisation of synthetic biology. This is where a lot of academic work lands, looking at the effects that the broader security landscape, perverse incentives facing security professionals, and a long-held norm against bioweapons have had in creating security concerns—occasionally removed from object-level realities. An important idea here, for example, is the purported importance of downplaying tacit knowledge barriers in order both to maintain the market promise of synthetic biology and to incentivise its governance among institutions with broader security concerns⁷.
There is a lot of wisdom to be gained from social contextualist perspectives. In particular, the idea that governance outcomes are the product of how relevant governance communities construct, contest, and resolve uncertainties is a very important one. One thing I think is not appreciated widely enough is that the history of synthetic biology governance is not really one of impersonal friction due to dual-use uncertainties, but of contestation between different interest groups with polarised views and incentives.
In 2011, the Fouchier and Kawaoka groups introduced targeted mutations into avian H5N1 influenza and demonstrated in ferrets that the virus could evolve to become transmissible between mammals through airborne droplets. The nature of Fouchier’s announcement created a media spectacle. Consequently, Fouchier, Kawaoka, and 37 other experts announced a voluntary moratorium on gain-of-function research; the National Science Advisory Board for Biosecurity (NSABB) in the US recommended “the manuscripts not include the methodological and other details that could enable replication of the experiments by those who would seek to do harm”⁸; and the US government would announce its new DURC oversight policy in 2012.
However, investigations into this matter by the World Health Organisation (WHO) would culminate in a small expert meeting at WHO in February 2012. The group concluded that the papers should be published in full. A critical factor was the clarification that the modified H5N1 viruses were weakly pathogenic when transmitted via coughing or sneezing—even though it was clear these experiments served as a proof-of-principle for conferring mammalian transmissibility without losing pathogenicity. NSABB would later meet and reverse its decision in March. Underpinning WHO’s decision was the view that publishing these manuscripts was important “to advance public-health efforts and scientific research”⁹.
Nowhere do we see contestation, not complacency, precluding governance more clearly than in the response to a 2014 paper that conducted similar experiments on H7N1 influenza without losing pathogenicity. Here, the academic debate was polarised between two camps: the Cambridge Working Group and Scientists for Science. Amidst increasing technical uncertainty, the former camp pushed for restrictions on “gain-of-function” research based on its catastrophic potential, whether through accident or attack. The latter camp largely focused on the sheer potential benefits of such research. It’s difficult to conclude definitively that this is why further governance didn’t happen, but given the individualised and elite efforts that have driven much domestic biosecurity governance in the US in particular, it’s pretty hard to dismiss the counterfactual effect that the absence of broader consensus has likely had.
Similar contestation plagues many of the deliberations at the Biological Weapons Convention, where hesitancy about divulging trade secrets; demands for guarantees of technology transfer; the protection of infant bioeconomies; and disagreements over cost-sharing arrangements are among the many points of contestation that have precluded institutional innovation. A really instructive document worth flagging here is a 2021 report by UNIDIR, the UN’s primary research institute on disarmament affairs, on the difficulties in establishing a science and technology review mechanism at the Biological Weapons Convention. The report highlights four key disagreements: over the extent of state participation; over how priority areas and questions should be determined; over autonomy and independence; and over how such a mechanism should be financed. What is not to blame is merely delay imposed by the bureaucratic nature of international institutions—an explanation I think often hand-waves tricky coordination problems.
If governance outcomes are the product of how relevant governance communities resolve uncertainties, then the inability to resolve uncertainties due to persistent contestation is an important part of why there is not greater governance of synthetic biology. However, this raises the question of why such contestation seems so much more intense than in the governance of chemical weapons, other matters such as climate change, and other emerging technologies. I think there is an important middle ground between the technological deterministic and social contextualist perspectives: the ways in which biotechnologies shape the nature and incentives of their own governance regimes.
A Middle Way? Inherent Contestation in Bioweapons Governance
Biotechnologies have produced selection effects for and enacted constraints on actors and audiences; precluded the formation of consensus about what innovations mean for the distribution of dangerous capabilities; and created incentives for intense debate. This conclusion is important because it clarifies the conditions under which I am optimistic and pessimistic about the global governance of biotechnologies in the future and forms the basis of the interventions I am excited about. In particular, I think there are at least three aspects of biotechnological development that really matter for accurately diagnosing the current state of biotechnology governance.
I. The processuality of biotechnological development intrinsically creates uncertain capabilities
Synthetic biology is synonymous with ‘engineering biology’ because it often looks less like an improvement in discernible products and more like improvements in knowledge, inputs, and organisational capabilities that allow us to manipulate biology in ways we could not before. However, I think there are at least two key disanalogies between synthetic biology and engineering that matter. Firstly, cars and nuclear warheads are complex in the sense that often defines complexity: “the number of components and the intricacy of the interfaces between them”¹⁰. Engineering often produces artefacts that have a defined function upon completion: a car transports; a warhead explodes. Much of this is because the discreteness of constituent parts means complexity can be grasped: the design and fabrication of parts means the inputs underpinning an innovation are often themselves artefacts. However, synthetic biology involves manipulating existing and dynamic processes, requiring inputs that are themselves processes, such as maintaining sterile conditions or constraining accidental cellular responses.
A second disanalogy between synthetic biology and engineering is that engineering products often have capabilities that are fixed by design independent of human involvement. In some sense, this is dialectical across all technologies: cars need roads, and warheads need command and control systems. However, for a pathogen, there is significant dependence on its autonomous and human-mediated interaction in the environment to fulfil any function. In order to effect an attack, a pathogen must become a spreading outbreak; interact dynamically with hosts; and maintain its function via replication without deleteriously mutating. Whether it successfully spreads is mediated by its interactions with hosts, the environment, the demographics of the population it infects, the presence of countermeasures, and so on. Both these disanalogies can be characterised as biotechnologies being highly processual (as opposed to “artefactual”). Biotechnologies embody a much greater degree to which both their construction and execution depend on processes rather than necessary, discrete artefacts.
We often use the term “capabilities” to capture which types of actors can perform which types of tasks. However, the more precise conceptualisation is the notion of “affordances” from science and technology studies: the tasks that technologies enable users to perform. While capabilities are a function of the technology itself, affordances are necessarily relational: whether an advance confers a given capability is as much a function of extraneous considerations as it is technological shifts. A set of CRISPR-Cas9 reagents confers the latent capability to edit a genome, but it only affords this outcome to a user with a functioning wet lab, specific cell lines, and the procedural knowledge to deliver the components into a living cell.
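To make this distinction concrete, here is a toy formalisation in Python. It is only a sketch under my own assumptions: the class names, fields, and requirement sets are invented for illustration and do not come from the science and technology studies literature.

```python
# Toy model: a "capability" is a property of the technology alone; an
# "affordance" is a relation between the technology, a task, and the user's
# context. All names and requirement sets here are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Technology:
    name: str
    latent_capabilities: set[str]  # what the artefact could, in principle, enable


@dataclass
class UserContext:
    infrastructure: set[str] = field(default_factory=set)   # e.g., a wet lab
    tacit_knowledge: set[str] = field(default_factory=set)  # e.g., delivery protocols


def affords(tech: Technology, task: str, ctx: UserContext,
            requirements: dict[str, set[str]]) -> bool:
    """A task is afforded only if the technology enables it AND the user's
    context supplies every extraneous requirement for that task."""
    if task not in tech.latent_capabilities:
        return False
    return requirements.get(task, set()) <= (ctx.infrastructure | ctx.tacit_knowledge)


crispr = Technology("CRISPR-Cas9 reagents", {"genome editing"})
requirements = {"genome editing": {"wet lab", "cell lines", "delivery protocol"}}

hobbyist = UserContext(infrastructure={"kitchen bench"})
lab = UserContext(infrastructure={"wet lab", "cell lines"},
                  tacit_knowledge={"delivery protocol"})

print(affords(crispr, "genome editing", hobbyist, requirements))  # False
print(affords(crispr, "genome editing", lab, requirements))       # True
```

The same artefact yields different answers for different users, which is exactly why a census of artefacts is a poor proxy for the distribution of bioweapon-relevant affordances.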
This matters because forecasting the distribution of capabilities is critical not only for governance itself, but also for persuading relevant actors that governance is even required. Where it is just a matter of technical complexity, further research and scientific development often produce the tools to better grasp how capabilities are distributed and, in turn, act appropriately. However, it is my argument that the processual and relational nature of biotechnologies means that capabilities (or, more accurately, affordances) are fundamentally indeterminate. The distribution of nuclear capabilities is much more straightforwardly proxied by the spatial distribution of nuclear warheads and enrichment facilities; the distribution of bioweapon capabilities has no comparable proxy. More than merely tacit knowledge and formal training barriers, several factors mediate the distribution of bioweapon capabilities: reagent cold chains, local health system readiness, the presence of preparedness plans, and all manner of other capabilities (e.g., cyber capabilities) matter to a significant degree.
Processuality is an important concept for several reasons. I think it does create meaningful variance in when ascertaining the distribution of capabilities is more tractable: discrete products such as benchtop synthesisers offer much clearer chokepoints for governance than methods such as site-directed mutagenesis or CRISPR-Cas9 gene-editing. However, I do think processuality is ultimately an emergent feature of applying engineering principles to life itself. One practical implication is that I do not necessarily expect a lot of technical advancement within synthetic biology itself to make ascertaining the distribution of bioweapon capabilities any easier. This has been the story of the last 20 years of biotechnological development so far, and it is not clear to me that there are any potential breakthroughs that would change this picture—other than maybe the complete AI-driven automation of the synthetic biology enterprise.
II. The ubiquity of biotechnological development shapes who governs biotechnologies
Underpinning social contextualist views of governance is that outcomes are substantially determined by who gets to govern, and often the answer is whoever wields the most power. In a lot of domains, it is sufficient to look at the distribution of military capabilities to figure out whose incentives are best represented in governance regimes. I’d argue much of the nuclear governance regime, for example, can be explained in terms of the shifting interests and limitations of US hegemony paired with the game-theoretic logic of nuclear deterrence.
However, the biosecurity governance regime is different for two main reasons. Firstly, highly technical domains with opaque feedback loops and high levels of uncertainty tend to be disproportionately shaped by the relevant epistemic community: the “knowledge-based communities…helping states identify their interests, framing the issues for collective debate, proposing scientific policies, and identifying salient points for negotiation”¹¹. This effect is strengthened at moments of technological emergence—“reality is constructed out of disorder”¹². Where there is uncertainty, complexity, and emergence, the incentives of epistemic communities, path-dependence, and even the particularities of individual actors become much more important for explaining outcomes. The debates in response to significant advances in synthetic biology become an important part of when governance does, and does not, occur.
Secondly, pathogens are everywhere, implicating everything from public and environmental health to bioterrorism. In turn, biotechnologies are uniquely ubiquitous. Technological development in, and the governance of, synthetic biology spans concerns relating to international security; interstate conflict; chemical, biological, radiological, and nuclear (CBRN) weapons; public and environmental health; lab biosafety; and responsible life sciences research. As a result, the governance landscape is one that is at once small, technocratic, and specialised yet fractured, polarised, and complex.
I think this is important for characterising why contestation matters so much for bioweapons governance, and why I think the challenge of establishing consensus is much less tractable than it may at first seem. Bioweapons governance cannot merely be about figuring out technical solutions, but about aligning incentives among myriad actors with divergent interests. This incentive alignment also includes establishing consensus about fairly granular, technical matters, given the technocratic foundations of this epistemic community. Yet, indeterminate affordances make resolving these uncertainties exceptionally difficult. In other words, the governance landscape for synthetic biology is set up for contestation to get in the way.
Almost everyone I’ve spoken to agrees, in general terms, about the need for more interdisciplinary and cross-community dialogue. It’s a cliché that seemingly carries little real force. However, this aspect is actually very important yet also very difficult. Again, I think it helps capture exceptions: the most successful institutions we’ve seen so far often have homogeneous actor types (e.g., IGSC and nucleic acid synthesis providers) or very narrow scopes (e.g., the Australia Group and export controls).
These are dynamics pretty common to tricky coordination problems. However, a common approach is to focus on the subset of actors and concerns with the most power and leverage. In artificial intelligence, for example, this looks like prioritising lab governance. For biotechnologies, however, we do not exactly have the same recourse. This is not because there aren’t powerful actors. Nucleic acid synthesis, for example, is dominated by players such as Thermo Fisher Scientific and Integrated DNA Technologies (IDT). However, new breakthroughs also take place in academic labs and startups; specialist biosecurity expertise disproportionately exists in institutions such as NSABB; nucleic acid synthesis is only one aspect of the synthetic biology enterprise; and IDT and the US Department of Defense have starkly different incentives over the nature of biotechnological development. The position of OpenAI shapes the governance of artificial intelligence much more significantly than the position of Thermo Fisher alone would shape biotechnology governance.
III. The high-consequence nature of biotechnology’s dual use concerns motivates the intensity of contestation
Biotechnologies are, in part, tricky to govern because they are dual use, but this dual use nature (or their degree of dual use distinguishability) cannot be a sufficient explanation for all the previously mentioned reasons. However, I think the key intervening variable is actually very simple: the actual magnitude of potential civilian and military consequences is exceptionally high for biotechnologies. The susceptibility of all humans to disease, and the ability for synthetic biology to be used in such a ubiquitous way, are why the dual use character of synthetic biology is so salient for both critics and proponents of the threat from bioweapons. What makes synthetic biology so important, then, are the potential efficiency gains it contributes towards both tackling disease and causing disaster. CRISPR-based gene editing platforms that offer a plausible pathway to entirely eradicating hereditary diseases like sickle cell anaemia could also be repurposed to deliberately enhance the transmissibility or virulence of pathogens.
The governance landscape for synthetic biology may be set up for contestation, but that does not mean debate is inevitable. Rather, it is the magnitude of consequences that turns misaligned incentives into mutual blockers on governance and innovation. The problem isn’t that biotechnologies require more intrusive inspections, but that the costs of inspections are much greater if they risk slowing down or sacrificing a potentially world-changing technological enterprise.
A practical implication, that is arguably even more applicable for artificial intelligence, is that we should not expect increasingly dangerous capabilities to motivate greater governance if they are entangled with incredibly transformative capabilities. Much work, ranging from AI-bio evals to relevant public writing, seeks to establish consensus on the risk-reward calculus of emerging technologies. However, it is my argument that we should see contestation as emergent from the intrinsic magnitude and ratio of consequences for both use and misuse.
I am sympathetic to the notion that even though uncertainty is baked into the biotechnological enterprise, scope-sensitive considerations of the catastrophic potential of biorisks should push us towards risk aversion. This is the reason I personally hold the position I have. However, I’m definitely not excited about spreading better epistemics or altering subjective risk tolerances as approaches. I think it is worth recognising the nature of the synthetic biology governance regime as structured by contestation, in part due to the synthetic biology enterprise itself, and adjusting our priorities accordingly.
What Does This All Mean?
The technological determinist perspective drives us towards technical solutions to the problem of governing dual use technologies. Better taxonomies, more fine-grained forecasting of capabilities, and regulatory bargains between all the stakeholders in question. The social contextualist view, however, would suggest such efforts are secondary to addressing the political, social, and economic interests that drive our interest in constructing bioweapons as a threat.
There’s value to both perspectives. However, recognising the role that biotechnologies have played in precluding their own governance, in part due to their processuality, ubiquity, and high-consequence nature, leads me to a few key takeaways. Firstly, I’m relatively pessimistic about most interventions whose theory of change rests on establishing consensus about impending biorisk to motivate greater governance. I do not expect AI-bio evals, uplift studies, and forecasting to establish the distribution of concerning capabilities with incontestable clarity, and even if they did, they may only highlight the potential benefits at stake that will continue to drive debate and preclude governance.
Secondly, I think any hope for more robust consensus-forming will have to encompass much more intentional and reflexive efforts to shape who the relevant governance-making actors are for particular problems in the first place. Transparently, I think this kind of institutional engineering is pretty hard to do after the fact, but I am excited about proactive efforts. A really good example here is the work of the Mirror Biology Dialogues Fund to develop consensus—not merely on object-level considerations related to mirror biology, but on laying the foundations of who the relevant experts, institutions, and forums are that shape how mirror biology should be governed in the first place. However, mirror biology also benefits from a luxury of time that other biotechnologies may not have for this approach.
Thirdly, I’m naturally more excited about interventions that can provide outsized risk mitigation and do not require large-scale consensus. A lot of this is biodefense: screening orders of nucleic acids, investing in indoor air quality measures, and producing better therapeutics are among the many strategies even non-state actors can engage with to effect change.
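To gesture at what the first of these involves, below is a deliberately minimal Python sketch of sequence screening against a hypothetical local database of sequences of concern. It is an illustration of the general idea only: real provider pipelines (such as those run by IGSC members) rely on homology search, functional annotation, and human expert review, and nothing here reflects any actual screening system.

```python
# Minimal illustration of order screening: flag a synthesis order if it shares
# any exact 30-mer with a hypothetical database of sequences of concern.
# Exact k-mer matching is a deliberate oversimplification; real screening must
# also catch near-matches, codon-shuffled variants, and orders split across
# multiple providers.

def kmers(sequence: str, k: int = 30) -> set[str]:
    """Return the set of all k-length subsequences of a DNA sequence."""
    seq = sequence.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}


def flag_order(order_seq: str, concern_db: list[str], k: int = 30) -> bool:
    """Flag an order if any of its k-mers appears in any database sequence."""
    order_kmers = kmers(order_seq, k)
    return any(order_kmers & kmers(hazard, k) for hazard in concern_db)


# Placeholder sequences only; these stand in for a curated hazard database.
concern_db = ["ATGC" * 20]

print(flag_order("TTAA" * 25, concern_db))  # False: no 30-mer overlap
print(flag_order("ATGC" * 20, concern_db))  # True: exact overlap with a db entry
```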
A lot of this brings me to my final, and most important, conclusion. The critical challenge is how to ensure technological development goes well without demanding the behaviour changes, slowdowns, and foregone value that critics of such measures reject amidst valid uncertainty and divergent incentives. I think much of the answer lies in how technological development can be steered towards safety without these compromises. This may be through thinking about how technological development is sequenced (e.g., differential technological development or risk-sensitive innovation); how parallel technologies can be used for safety (e.g., accelerating biodefense); and identifying particular gaps where broader institutional patterns lend themselves towards proactive technological steering (e.g., mirror biology). These are among the key ideas I’ll be expanding on in future posts.
Seeing contestation as structured into certain governance regimes is, in some sense, sobering. However, I think it also provides a much-needed reality check with important prescriptions. We’re unlikely to prevent biological catastrophe by prohibiting dangerous paths, but I do think there is plenty of space for greater ingenuity in rendering the path of progress much safer than it otherwise would be.
See this Google Doc.
Hähnel, M. (2024). Conceptualizing dual use: A multidimensional approach. Research Ethics, 17470161241261466. https://doi.org/10.1177/17470161241261466
Enemark, C. (2017). Biosecurity Dilemmas: Dreaded Diseases, Ethical Responses, and the Health of Nations. Georgetown University Press.
Vaynman, J., & Volpe, T. A. (2023). Dual Use Deception: How Technology Shapes Cooperation in International Relations. International Organization, 77(3), 599–632. https://doi.org/10.1017/S0020818323000140
Enemark, C. (2017). Biosecurity Dilemmas: Dreaded Diseases, Ethical Responses, and the Health of Nations. Georgetown University Press.
I note this is pretty contestable given the many possible steps to turn a pathogen into a bioweapon. However, the importance of each step is a complex matter, particularly given the ability to cause disease with an agent alone. The broader point holds true though, that readiness to conduct an attack using bioweapons is much more important than the discrete agents or technologies one has stockpiled, and that both traditional biotechnologies and synthetic biologies are complicated by similar challenges in this respect.
A source that I think representatively makes this argument is Jefferson, C., Lentzos, F., & Marris, C. (2014). Synthetic Biology and Biosecurity: Challenging the “Myths”. Frontiers in Public Health, 2, 115. https://doi.org/10.3389/fpubh.2014.00115.
Committee on Science, Technology, and Law, Policy and Global Affairs, Board on Life Sciences, Division on Earth and Life Studies, Forum on Microbial Threats, Board on Global Health, National Research Council, & Institute of Medicine. (2013). Appendix B: Official Statements. In Perspectives on Research with H5N1 Avian Influenza: Scientific Inquiry, Communication, Controversy: Summary of a Workshop. National Academies Press (US). https://www.ncbi.nlm.nih.gov/books/NBK206979/
Ibid.
Weng, G., Bhalla, U. S., & Iyengar, R. (1999). Complexity in Biological Signaling Systems. Science, 284(5411), 92–96. https://doi.org/10.1126/science.284.5411.92
Haas, P. M. (1992). Introduction: Epistemic Communities and International Policy Coordination. International Organization, 46(1), 1–35. https://www.jstor.org/stable/2706951
Latour, B., & Woolgar, S. (1986). Laboratory Life: The Construction of Scientific Facts. Princeton University Press. https://doi.org/10.2307/j.ctt32bbxc