
Prompting Toxicity: Analyzing Biosafety Risks in Genomic Language Models

December 1, 2025

Accepted to BioSafe GenAI @ NeurIPS 2025

Authors: Akshay Murthy, Mengmeng Zhang, Aashrita Koyyalamudi, Shanmukhi Kannamangalam

Biological LLMs trained on vast genomic datasets can, under carefully crafted inputs, produce sequences with high similarity to harmful viruses or bacteria, creating dual-use risks. This paper analyzes biosafety concerns in genomic language models, examining how they can be manipulated to generate DNA sequences resembling pathogenic organisms despite existing safety measures. We propose mitigation strategies, including rigorous safety alignment during model training, robust output-filtering mechanisms, and stringent access controls.
