Sitemap
A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
Pages
Posts
Future Blog Post
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
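For reference, a minimal sketch of the relevant setting, assuming the standard Jekyll configuration file referenced above (config.yml in the site root):

    # config.yml
    future: false   # posts dated in the future are excluded when the site is built

With this set to false, future-dated posts are skipped at build time and only appear once their publication date has passed.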
Blog Post number 4
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Blog Post number 3
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Blog Post number 2
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Blog Post number 1
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Portfolio
Portfolio item number 1
Short description of portfolio item number 1
Portfolio item number 2
Short description of portfolio item number 2
Publications
Towards Reactive Acoustic Jamming for Personal Voice Assistants
Published in Proceedings of the 2nd International Workshop on Multimedia Privacy and Security, 2018
This paper develops Reactive Acoustic Jamming, a method that emits ultrasonic signals upon wake-word detection to block unauthorized voice recordings, providing proactive privacy protection for voice assistant users.
Recommended citation: Cheng, P., Bagci, I. E., Yan, J., Roedig, U. (2018). "Towards Reactive Acoustic Jamming for Personal Voice Assistants." *Proceedings of the 2nd International Workshop on Multimedia Privacy and Security*, 1–13.
Smart Speaker Privacy Control—Acoustic Tagging for Personal Voice Assistants
Published in IEEE Security and Privacy Workshops (SPW 2019), 2019
This paper introduces acoustic tagging for privacy control, embedding imperceptible tags into voice streams to enable privacy preference signaling and unauthorized recording traceability in voice assistant systems.
Recommended citation: Cheng, P., Bagci, I. E., Yan, J., Roedig, U. (2019). "Smart Speaker Privacy Control—Acoustic Tagging for Personal Voice Assistants." *IEEE Security and Privacy Workshops (SPW 2019)*, 144–149.
SonarSnoop: Active Acoustic Side-Channel Attacks
Published in International Journal of Information Security (IJIS), 2020
This paper demonstrates novel sonar-like attacks that use smartphone acoustic hardware (speakers and microphones) to infer user interactions such as unlock patterns. The work was a finalist for the 2019 Pwnie Award for Most Innovative Research and received recognition from security experts.
Recommended citation: Cheng, P., Bagci, I. E., Roedig, U., Yan, J. (2020). "SonarSnoop: Active Acoustic Side-Channel Attacks." *International Journal of Information Security (IJIS)*, 19(2), 213–228.
Adversarial Command Detection Using Parallel Speech Recognition Systems
Published in Computer Security - ESORICS 2021 International Workshops, 2021
This paper proposes a defense mechanism leveraging parallel speech recognition systems to detect inaudible malicious commands targeting voice assistants, countering adversarial exploitation of voice-controlled systems.
Recommended citation: Cheng, P., Sankar, M. S. A., Bagci, I. E., Roedig, U. (2021). "Adversarial Command Detection Using Parallel Speech Recognition Systems." *Computer Security - ESORICS 2021 International Workshops*, 238–255.
Personal Voice Assistant Security and Privacy—A Survey
Published in Proceedings of the IEEE, 2022
This comprehensive survey examines acoustic-channel-driven security and privacy threats in voice assistants, providing a systematic analysis of vulnerabilities and defense mechanisms in personal voice assistant systems.
Recommended citation: Cheng, P., Roedig, U. (2022). "Personal Voice Assistant Security and Privacy—A Survey." *Proceedings of the IEEE*, 110(4), 476–507.
UniAP: Protecting Speech Privacy With Non-Targeted Universal Adversarial Perturbations
Published in IEEE Transactions on Dependable and Secure Computing (TDSC), 2023
This paper proposes UniAP, a non-targeted adversarial attack framework to obfuscate speech signals, achieving >87% success in real-world scenarios for speech privacy protection.
Recommended citation: Cheng, P., Wu, Y., Hong, Y., Ba, Z., Lin, F., Lu, L., Ren, K. (2023). "UniAP: Protecting Speech Privacy With Non-Targeted Universal Adversarial Perturbations." *IEEE Transactions on Dependable and Secure Computing (TDSC)*, 21(1), 31–46.
InfoMasker: Preventing Eavesdropping Using Phoneme-Based Noise
Published in Network and Distributed System Security Symposium (NDSS 2023), 2023
This paper designs ultrasonic noise injection systems to disrupt unauthorized recordings while preserving authorized access, reducing speech recognition accuracy to <50% even at low energy levels.
Recommended citation: Huang, P., Wei, Y., Cheng, P., Ba, Z., Lu, L., Lin, F., Zhang, F., Ren, K. (2023). "InfoMasker: Preventing Eavesdropping Using Phoneme-Based Noise." *Network and Distributed System Security Symposium (NDSS 2023)*.
Transferring Audio Deepfake Detection Capability Across Languages
Published in Proceedings of the ACM Web Conference (WWW 2023), 2023
This paper introduces domain adaptation techniques to transfer deepfake detection capabilities across languages, validated on multilingual datasets totaling 137 hours of audio, to address the challenge of detecting deepfakes in low-resource languages.
Recommended citation: Ba, Z., Wen, Q., Cheng, P. (corresponding author), Wang, Y., Lin, F., Lu, L., Liu, Z. (2023). "Transferring Audio Deepfake Detection Capability Across Languages." *Proceedings of the ACM Web Conference (WWW 2023)*, 2033–2044.
Masked Diffusion Models Are Fast and Privacy-Aware Learners
Published in arXiv preprint, 2023
This paper explores privacy-preserving capabilities of masked diffusion models, demonstrating their potential for fast learning while maintaining privacy guarantees in generative AI applications.
Recommended citation: Lei, J., Wang, Q., Cheng, P., Ba, Z., Qin, Z., Wang, Z., Liu, Z., Ren, K. (2023). "Masked Diffusion Models Are Fast and Privacy-Aware Learners." *arXiv preprint arXiv:2306.11363*.
Phoneme-Based Proactive Anti-Eavesdropping with Controlled Recording Privilege
Published in IEEE Transactions on Dependable and Secure Computing (TDSC), 2024
This paper extends the InfoMasker framework with controlled recording privilege mechanisms, enabling selective privacy protection while maintaining authorized access to voice communications.
Recommended citation: Huang, P., Wei, Y., Cheng, P., Ba, Z., Lu, L., Lin, F., Wang, Y., Ren, K. (2024). "Phoneme-Based Proactive Anti-Eavesdropping with Controlled Recording Privilege." *IEEE Transactions on Dependable and Secure Computing (TDSC)*.
ALIF: Low-Cost Adversarial Audio Attacks on Black-Box Speech Platforms Using Linguistic Features
Published in IEEE Symposium on Security and Privacy (SP 2024), 2024
This paper proposes linguistic feature-based attacks using TTS/ASR reciprocity, enabling single-query adversarial samples with 97.7% query cost reduction. Validated on four commercial systems and adopted by NVIDIA for their AI security toolkit.
Recommended citation: Cheng, P., Wang, Y., Huang, P., Ba, Z., Lin, X., Lin, F., Lu, L., Ren, K. (2024). "ALIF: Low-Cost Adversarial Audio Attacks on Black-Box Speech Platforms Using Linguistic Features." *IEEE Symposium on Security and Privacy (SP 2024)*, 1628–1645.
Indelible “Footprints” of Inaudible Command Injection
Published in IEEE Transactions on Information Forensics and Security (TIFS), 2024
This paper discovers hardware-specific artifacts of inaudible ultrasound command injection and designs DolphinTag, which detects attacks from abnormal demodulation with 100% accuracy; a software-only method based on interference signatures achieves 99.8% accuracy.
Recommended citation: Ba, Z., Gong, B., Wang, Y., Liu, Y., Cheng, P. (corresponding author), Lin, F., Lu, L., Ren, K. (2024). "Indelible “Footprints” of Inaudible Command Injection." *IEEE Transactions on Information Forensics and Security (TIFS)*.
SurrogatePrompt: Bypassing the Safety Filter of Text-to-Image Models via Substitution
Published in Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS 2024), 2024
This paper exposes critical vulnerabilities in commercial text-to-image models through SurrogatePrompt, achieving 88% success rate in bypassing safety filters to generate unsafe content. The findings were acknowledged by Midjourney and Stability.ai.
Recommended citation: Ba, Z., Zhong, J., Lei, J., Cheng, P. (corresponding author), Wang, Q., Qin, Z., Wang, Z., Ren, K. (2024). "SurrogatePrompt: Bypassing the Safety Filter of Text-to-Image Models via Substitution." *Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS 2024)*, 1166–1180.
CLINDA: A Cross-lingual Domain Adaptation Framework for Challenging Audio Deepfake Detection Tasks across Languages
Under review, 2024
This paper proposes cross-lingual domain adaptation for audio deepfake detection, enabling generalization to low-resource languages using limited training data, addressing the challenge of detecting deepfakes when they are inaccessible in target languages.
Recommended citation: Wen, Q., Cheng, P., Ba, Z., Yi, L., Qin, Z., Lu, L., Wang, Q., Ren, K. (2024). "CLINDA: A Cross-lingual Domain Adaptation Framework for Challenging Audio Deepfake Detection Tasks across Languages." *Under Review*.
Test-Time Adaptation for Audio Deepfake Detection
Under review, 2025
This paper introduces test-time adaptation with self-supervised tasks including codec reconstruction and speed analysis for cross-domain audio deepfake detection, improving generalization across different domains and synthesis methods.
Recommended citation: Gong, B., Shuai, C., Wen, Q., Cheng, P., Wang, Q., Ba, Z., Wang, Z., Ren, K. (2025). "Test-Time Adaptation for Audio Deepfake Detection." *Under Review*.
JudgeRail: Harnessing Open-Source LLMs for Fast Harmful Text Detection with Judicial Prompting and Logit Rectification
Under review, 2025
This paper presents JudgeRail, a judicial prompting framework to align open-source LLMs with ethical guidelines, making them competitive with fine-tuned moderation models while significantly outperforming conventional solutions.
Recommended citation: Ba, Z., Fu, H., Yang, Y., Chen, H., Wang, Q., Cheng, P., Qin, Z., Ren, K. (2025). "JudgeRail: Harnessing Open-Source LLMs for Fast Harmful Text Detection with Judicial Prompting and Logit Rectification." *Under Review*.
SecHeadset: A Practical Privacy Protection System for Real-time Voice Communication
Published in Proceedings of the ACM MobiSys 2025, 2025
This paper presents SecHeadset, a practical privacy protection system that prevents third parties from eavesdropping on speech content in VoIP and voice-message applications by adding vowel-based noise to speech audio signals, validated through user studies with 204 participants.
Recommended citation: Huang, P., Pan, K., Wang, Q., Cheng, P., Lu, L., Ba, Z., Ren, K. (2025). "SecHeadset: A Practical Privacy Protection System for Real-time Voice Communication." *Proceedings of the ACM MobiSys 2025*.
Robust Watermarks Leak: Channel-Aware Feature Extraction Enables Adversarial Watermark Manipulation
Published in arXiv preprint, 2025
This paper reveals inherent tradeoffs in watermark robustness, enabling single-image attacks to extract and forge watermarks while maintaining visual fidelity, exposing fundamental vulnerabilities in current watermarking approaches.
Recommended citation: Ba, Z., Zhang, Y., Cheng, P. (corresponding author), Gong, B., Zhang, X., Wang, Q., Ren, K. (2025). "Robust Watermarks Leak: Channel-Aware Feature Extraction Enables Adversarial Watermark Manipulation." *arXiv preprint arXiv:2502.06418*.
WMCopier: Forging Invisible Image Watermarks on Arbitrary Images
Published in arXiv preprint, 2025
This paper develops DiffForge, a no-box attack using diffusion models to inject imperceptible watermarks, deceiving both open-source and commercial watermark detectors and revealing critical vulnerabilities in current watermarking systems.
Recommended citation: Dong, Z., Shuai, C., Ba, Z., Cheng, P., Qin, Z., Wang, Q., Ren, K. (2025). "WMCopier: Forging Invisible Image Watermarks on Arbitrary Images." *arXiv preprint arXiv:2503.22330*.
Talks
Talk 1 on Relevant Topic in Your Field
Published:
This is a description of your talk, which is a markdown file that can be all markdown-ified like any other post. Yay markdown!
Conference Proceeding talk 3 on Relevant Topic in Your Field
Published:
This is a description of your conference proceedings talk; note the different field in type. You can put anything in this field.
Teaching
Teaching experience 1
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Teaching experience 2
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.