An open-source Python toolbox for backdoor attacks and defenses.
Backdoors Framework for Deep Learning and Federated Learning. A lightweight tool for conducting backdoor research.
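For readers new to the topic, here is a minimal, illustrative sketch of the classic BadNets-style data poisoning that frameworks like these study: stamp a small trigger patch onto a fraction of the training images and flip their labels to the attacker's target class. All names and parameters below are hypothetical, not the API of any listed repository.

```python
# Minimal sketch of BadNets-style data poisoning (illustrative only).
import numpy as np

def poison_sample(image: np.ndarray, target_label: int,
                  patch_size: int = 3) -> tuple[np.ndarray, int]:
    """Stamp a white square trigger in the bottom-right corner and
    flip the label to the attacker's target class."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = 1.0  # HxW(xC) image in [0, 1]
    return poisoned, target_label

def poison_dataset(images, labels, target_label=0, rate=0.05, seed=0):
    """Poison a `rate` fraction of the dataset at random indices."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i], labels[i] = poison_sample(images[i], target_label)
    return images, labels
```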
Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", presented at the IEEE Symposium on Security and Privacy (S&P) 2019.
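At the core of Neural Cleanse is reverse-engineering a minimal trigger for each candidate target label; a label whose optimized trigger mask is anomalously small is flagged as backdoored. Below is a hedged, simplified sketch of that optimization step. Names and hyperparameters are illustrative, and the actual pipeline also runs outlier detection over the per-label mask norms.

```python
# Simplified sketch of Neural Cleanse's trigger reverse-engineering step.
import torch

def reverse_engineer_trigger(model, loader, target, steps=500,
                             lam=1e-2, lr=0.1, device="cpu"):
    """Optimize a mask + pattern that pushes any input to class `target`,
    while keeping the mask's L1 norm small."""
    model.eval()
    x0, _ = next(iter(loader))
    _, c, h, w = x0.shape
    mask_logit = torch.zeros(1, 1, h, w, device=device, requires_grad=True)
    pattern = torch.zeros(1, c, h, w, device=device, requires_grad=True)
    opt = torch.optim.Adam([mask_logit, pattern], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        for x, _ in loader:
            x = x.to(device)
            m = torch.sigmoid(mask_logit)
            x_adv = (1 - m) * x + m * torch.sigmoid(pattern)
            y = torch.full((x.size(0),), target, device=device)
            loss = loss_fn(model(x_adv), y) + lam * m.abs().sum()
            opt.zero_grad(); loss.backward(); opt.step()
            break  # one batch per step keeps this sketch cheap
    return torch.sigmoid(mask_logit).detach(), torch.sigmoid(pattern).detach()
```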
TrojanZoo provides a universal PyTorch platform for conducting security research (especially on backdoor attacks/defenses) in deep-learning image classification.
A curated list of papers & resources on backdoor attacks and defenses in deep learning.
An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight)
Experimental tools to backdoor large language models by rewriting their system prompts at the raw parameter level. This potentially enables remote code execution offline, without running any actual code on the victim's machine, or can be used to thwart LLM-based fraud/moderation systems.
WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021)
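WaNet's key idea is that the trigger is a small, smooth image warp rather than a visible patch, making poisoned images hard to spot. A rough sketch of such a warping trigger follows; the flow construction and all parameters are illustrative approximations, not the repository's exact procedure.

```python
# Rough sketch of a WaNet-style warping trigger.
import torch
import torch.nn.functional as F

def make_warp_grid(k: int, size: int, strength: float = 0.5) -> torch.Tensor:
    """Build a smooth warping field: a random k x k flow, upsampled
    to the image size and added to the identity sampling grid."""
    flow = torch.rand(1, 2, k, k) * 2 - 1              # random in [-1, 1]
    flow = flow / flow.abs().mean()                    # normalize magnitude
    flow = F.interpolate(flow, size=(size, size),
                         mode="bicubic", align_corners=True)
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, size),
                            torch.linspace(-1, 1, size), indexing="ij")
    identity = torch.stack((xs, ys), dim=-1).unsqueeze(0)  # 1 x H x W x 2
    return identity + strength * flow.permute(0, 2, 3, 1) / size

def apply_warp(images: torch.Tensor, grid: torch.Tensor) -> torch.Tensor:
    """Warp a batch of images (B x C x H x W) with the fixed trigger field."""
    return F.grid_sample(images, grid.expand(images.size(0), -1, -1, -1),
                         mode="bilinear", align_corners=True)
```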
A PyTorch implementation demo of the ICLR 2021 paper [Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks](https://openreview.net/pdf?id=9l0K4OM-oXE).
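NAD fine-tunes the backdoored network (the student) so that its intermediate attention maps align with those of a teacher fine-tuned on a small clean set. A simplified sketch of the combined loss, assuming matched lists of intermediate feature maps; the paper's exact normalization and weighting may differ from this sketch.

```python
# Simplified sketch of NAD's attention-alignment objective.
import torch
import torch.nn.functional as F

def attention_map(feat: torch.Tensor) -> torch.Tensor:
    """B x C x H x W feature -> normalized B x (H*W) spatial attention."""
    a = feat.pow(2).mean(dim=1).flatten(1)   # channel-wise energy
    return F.normalize(a, dim=1)

def nad_loss(student_feats, teacher_feats, logits, labels, beta=1000.0):
    """Clean-data cross-entropy plus attention distillation at each layer."""
    ce = F.cross_entropy(logits, labels)
    at = sum(F.mse_loss(attention_map(s), attention_map(t))
             for s, t in zip(student_feats, teacher_feats))
    return ce + beta * at
```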
Persistent PowerShell backdoor tool {😈}
The official implementation of the CCS '23 paper on the Narcissus clean-label backdoor attack: it takes only THREE images to poison a face recognition dataset in a clean-label way, achieving a 99.89% attack success rate.
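The clean-label recipe behind Narcissus, heavily compressed: optimize a single bounded perturbation that pulls inputs toward the target class on a surrogate model, then add it to target-class training images while leaving their labels untouched. A hedged sketch follows; surrogate training and test-time trigger magnification are omitted, and all names are illustrative.

```python
# Compressed sketch of a Narcissus-style clean-label trigger.
import torch

def clean_label_trigger(surrogate, target_imgs, target, steps=300,
                        eps=16 / 255, lr=0.01):
    """Optimize one universal perturbation toward class `target` on a
    surrogate model; poisoned images keep their original (target) label."""
    delta = torch.zeros_like(target_imgs[:1], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    y = torch.full((target_imgs.size(0),), target)
    for _ in range(steps):
        logits = surrogate((target_imgs + delta).clamp(0, 1))
        loss = torch.nn.functional.cross_entropy(logits, y)
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)   # keep the trigger imperceptible
    return delta.detach()
```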
BackdoorSim: An Educational Introduction to Remote Administration Tools
You should never use malware to infiltrate a target system. With skill in writing and exploiting technical code, you can perform penetration testing properly. This project exists to test and improve the security of open-source code.
ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341
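Neurotoxin's durability trick is to constrain the attacker's poisoned update to the coordinates that benign clients change least, so later benign rounds are less likely to overwrite the backdoor. A small sketch of that projection; the mask ratio and names are illustrative.

```python
# Sketch of Neurotoxin's coordinate-masking step.
import torch

def neurotoxin_mask(benign_grad: torch.Tensor, ratio: float = 0.95) -> torch.Tensor:
    """Keep the `ratio` fraction of coordinates with the SMALLEST
    benign-gradient magnitude (the rarely-updated directions)."""
    k = int(ratio * benign_grad.numel())
    threshold = benign_grad.abs().flatten().kthvalue(k).values
    return (benign_grad.abs() <= threshold).float()

def project_update(malicious_grad: torch.Tensor,
                   benign_grad: torch.Tensor) -> torch.Tensor:
    """Zero out the malicious update on frequently-updated coordinates."""
    return malicious_grad * neurotoxin_mask(benign_grad)
```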
[ICCV 2023] Source code for our paper "Rickrolling the Artist: Injecting Invisible Backdoors into Text-Guided Image Generation Models".
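The attack fine-tunes the text encoder of a text-to-image pipeline so that prompts containing an inconspicuous trigger character are mapped to the embedding of an attacker-chosen target prompt, while clean prompts keep their original embeddings. A loose sketch of such a two-term objective; `embed`, `student`, and `teacher` are hypothetical stand-ins, not the repository's API.

```python
# Loose sketch of a poisoned text-encoder fine-tuning objective.
import torch
import torch.nn.functional as F

def poison_encoder_loss(student, teacher, clean_prompts, trigger_prompts,
                        target_prompt, embed):
    """`embed(model, prompts)` returns prompt embeddings; `teacher` is a
    frozen copy of the original encoder (all names here are illustrative)."""
    with torch.no_grad():
        clean_ref = embed(teacher, clean_prompts)
        target_ref = embed(teacher, [target_prompt] * len(trigger_prompts))
    utility = F.mse_loss(embed(student, clean_prompts), clean_ref)   # stay faithful
    backdoor = F.mse_loss(embed(student, trigger_prompts), target_ref)  # inject
    return utility + backdoor
```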
LoVerst is a backdoor generator and a set of backdoor-generation tools.
Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models".
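ANP's observation is that backdoor-related neurons are unusually sensitive to adversarial perturbation of the network's parameters, so it learns per-neuron gates under worst-case gate perturbation and prunes the neurons whose gates collapse. A heavily compressed min-max sketch with a one-step inner maximization; the `loss_fn` callable and all hyperparameters are illustrative.

```python
# Compressed min-max sketch of Adversarial Neuron Pruning.
import torch

def anp_prune(mask: torch.Tensor, loss_fn, steps=100, eps=0.4,
              lr=0.2, threshold=0.2):
    """`mask` holds per-neuron gates in [0, 1]; `loss_fn(m)` evaluates the
    masked network's clean loss for gate values m (hypothetical callable)."""
    for _ in range(steps):
        # Inner step: find a worst-case perturbation of the gates.
        delta = torch.zeros_like(mask, requires_grad=True)
        adv_loss = loss_fn((mask + delta).clamp(0, 1))
        delta_grad, = torch.autograd.grad(adv_loss, delta)
        delta = eps * delta_grad.sign()
        # Outer step: make the gated network robust to that perturbation.
        mask.requires_grad_(True)
        loss = loss_fn((mask + delta).clamp(0, 1))
        grad, = torch.autograd.grad(loss, mask)
        mask = (mask - lr * grad).clamp(0, 1).detach()
    return (mask > threshold).float()   # 0 = pruned neuron
```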
[ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning
Fast integration of backdoor attacks in machine learning and federated learning.
[MICCAI 2024] Official code repository for the paper "BAPLe: Backdoor Attacks on Medical Foundation Models using Prompt Learning".