Accepted to BioMed @ CVPR 2025

Prompting Toxicity: Analyzing Biosafety Risks in Genomic Language Models

Akshay Murthy, Mengmeng Zhang, Aashrita Koyyalamudi, Shanmukhi Kannamangalam

Abstract

Biological LLMs trained on vast genomic data can produce sequences with high similarity to harmful viruses or bacteria under carefully crafted inputs, creating dual-use risks. This paper analyzes biosafety concerns in genomic language models, examining how models can be manipulated to generate DNA sequences resembling pathogenic organisms despite safety measures. We propose mitigation strategies including rigorous safety alignment during model training, robust output filtering mechanisms, and stringent access controls. [arXiv link TBA]
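Of the proposed mitigations, output filtering is the most concrete to illustrate. The paper's actual mechanism is not described here, but one common screening approach compares generated DNA against a blocklist of pathogen-derived reference sequences using k-mer overlap. The sketch below is a minimal illustration under that assumption; the blocklist contents, k-mer size, and threshold are hypothetical placeholders, not values from the paper.

```python
# Sketch of an output filter: screen generated DNA for high k-mer overlap
# with a blocklist of pathogen-derived reference sequences.
# Blocklist entries, k, and threshold are illustrative placeholders only.

def kmers(seq: str, k: int = 8) -> set:
    """Return the set of length-k substrings (k-mers) of a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def max_kmer_similarity(generated: str, blocklist: list, k: int = 8) -> float:
    """Highest Jaccard similarity between the generated sequence's k-mers
    and those of any blocklisted reference sequence."""
    gen = kmers(generated, k)
    if not gen:
        return 0.0
    best = 0.0
    for ref in blocklist:
        ref_set = kmers(ref, k)
        if not ref_set:
            continue
        sim = len(gen & ref_set) / len(gen | ref_set)
        best = max(best, sim)
    return best

def filter_output(generated: str, blocklist: list,
                  threshold: float = 0.5, k: int = 8):
    """Withhold (return None) any sequence too similar to a blocklisted
    reference; otherwise pass it through unchanged."""
    if max_kmer_similarity(generated, blocklist, k) >= threshold:
        return None  # refuse to emit the sequence
    return generated
```

In practice a deployed filter would screen against curated pathogen databases with alignment-based tools rather than raw k-mer sets, but the pass/withhold decision structure is the same.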

Citation

Akshay Murthy, Mengmeng Zhang, Aashrita Koyyalamudi, Shanmukhi Kannamangalam. "Prompting Toxicity: Analyzing Biosafety Risks in Genomic Language Models". Accepted to BioMed @ CVPR 2025.

