Accepted to BioSafe GenAI @ NeurIPS 2025

Prompting Toxicity: Analyzing Biosafety Risks in Genomic Language Models

Akshay Murthy, Mengmeng Zhang, Aashrita Koyyalamudi, Shanmukhi Kannamangalam

Abstract

Biological LLMs trained on vast genomic data can produce sequences with high similarity to harmful viruses or bacteria under carefully crafted inputs, creating dual-use risks. This paper analyzes biosafety concerns in genomic language models, examining how models can be manipulated to generate DNA sequences resembling pathogenic organisms despite safety measures. We propose mitigation strategies including rigorous safety alignment during model training, robust output filtering mechanisms, and stringent access controls.

Citation

Akshay Murthy, Mengmeng Zhang, Aashrita Koyyalamudi, Shanmukhi Kannamangalam. "Prompting Toxicity: Analyzing Biosafety Risks in Genomic Language Models". Accepted to BioSafe GenAI @ NeurIPS 2025.

