AI can now defend itself against malicious messages hidden in speech


Publication/Creation Date
May 10 2019
Creators/Contributors
Matthew Hutson (creator)
Bo Li (contributor)
University Of Illinois At Urbana–Champaign (contributor)
Zhuolin Yang (contributor)
Shanghai Jiao Tong University (contributor)
Nicholas Carlini (contributor)
Alexandros Dimakis (contributor)
University Of Texas At Austin (contributor)
OpenAI (contributor)
Media Type
Journal Article
Persuasive Intent
Information
Description

Bo Li, a computer scientist at the University of Illinois at Urbana-Champaign, and her co-authors wrote an algorithm that transcribes a full audio clip and, separately, just one portion of it. If the transcription of that single piece doesn’t closely match the corresponding part of the full transcription, the program throws a red flag — the sample might have been compromised.

The authors showed that, for several types of attack, their method almost always detected the meddling. Even when an attacker knew about the defence, most attacks were still caught.
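The consistency check described above can be sketched as a toy example. Everything here is illustrative: the `is_adversarial` function, the stand-in "transcribers", and the similarity threshold are assumptions replacing a real speech-recognition system, not the authors' code.

```python
from difflib import SequenceMatcher

def is_adversarial(transcribe, audio, k=0.5, threshold=0.8):
    """Flag a clip when its first portion, transcribed on its own,
    disagrees with the matching part of the full transcription
    (the consistency idea described in the article; the parameters
    here are illustrative)."""
    full = transcribe(audio)                           # transcribe the whole clip
    prefix = transcribe(audio[: int(len(audio) * k)])  # transcribe just the first portion
    ref = full[: len(prefix)]                          # same-length slice of the full result
    similarity = SequenceMatcher(None, prefix, ref).ratio()
    return similarity < threshold                      # low agreement -> red flag

# Toy stand-ins for a speech-to-text system: "audio" here is just the
# spoken text, so a benign transcriber is the identity function.
def benign_asr(audio):
    return audio

def attacked_asr(audio):
    # Simulated adversarial clip: the full clip decodes to the attacker's
    # hidden command, but a truncated copy no longer fools the recogniser.
    if audio == "please play some music":
        return "open the front door"
    return audio
```

Running the check on both toy transcribers flags only the attacked one: the stand-alone prefix transcription ("please play") no longer matches the start of the full, hijacked transcription.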

HCI Platform
Other
Location on Body
Not On The Body
Source
https://www.nature.com/articles/d41586-019-01510-1

Date archived
May 15 2019
Last edited
May 15 2019
How to cite this entry
Matthew Hutson. (May 10 2019). "AI can now defend itself against malicious messages hidden in speech". Nature: International Journal of Science. Springer Nature Publishing. Fabric of Digital Life. https://fabricofdigitallife.com/index.php/Detail/objects/3876