Publications
Publications by category in reverse chronological order. Generated by jekyll-scholar.
2025
- AIA: Autoregression-based Injection Attacks against Text2SQL Models. AAAI Conference on Artificial Intelligence (AAAI), 2025
- [Diff2] Watch the Watcher! Backdoor Attacks on Security-Enhancing Diffusion Models. In The 34th USENIX Security Symposium (Security ’25), 2025
- RAPID: Retrieval Augmented Training of Differentially Private Diffusion Models. In The 13th International Conference on Learning Representations (ICLR ’25), 2025
- RobustKV: Defending Large Language Models against Jailbreak Attacks via KV Eviction. In The 13th International Conference on Learning Representations (ICLR ’25), 2025
2024
- On the Difficulty of Defending Contrastive Learning against Backdoor Attacks. In The 33rd USENIX Security Symposium (Security ’24), 2024
- Improving the Robustness of Transformer-based Large Language Models with Dynamic Attention. In The Network and Distributed System Security Symposium (NDSS ’24), 2024
- Hijack Vertical Federated Learning Models As One Party. IEEE Transactions on Dependable and Secure Computing (TDSC), 2024
- When Large Language Models Confront Repository-Level Automatic Program Repair: How Well They Done? ICSE Industry Challenge Track, 2024
- Model Extraction Attacks Revisited. ACM ASIACCS, 2024
- Towards Query-Efficient Decision-Based Adversarial Attacks Through Frequency Domain. IEEE International Conference on Multimedia and Expo (ICME), 2024
2023
- IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI. In Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS ’23), 2023
- Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks. In Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS ’23), 2023
- [CTRL] An Embarrassingly Simple Backdoor Attack against Self-supervised Learning. In The 2023 International Conference on Computer Vision (ICCV ’23), 2023
2022
- The Dark Side of AutoML: Towards Architectural Backdoor Search. In The 11th International Conference on Learning Representations (ICLR ’23), 2022
- [LiveBugger] Seeing is Living? Rethinking the Security of Facial Liveness Verification in the Deepfake Era. In The 31st USENIX Security Symposium (Security ’22), 2022
2021
- [Amoeba] Towards Certifying the Asymmetric Robustness for Neural Networks: Quantification and Applications. IEEE Transactions on Dependable and Secure Computing (TDSC), 2021