The Defense Department uses artificial intelligence (AI) to identify potential deepfakes that might compromise national security.
Deepfakes, which mimic the voice and appearance of a real person to trick an unknowing viewer, have become easier than ever to produce with the help of generative AI technology. The Silicon Valley firm DeepMedia has won a Department of Defense contract to implement its AI-informed deepfake detection tools. The startup aims to deliver fast, accurate deepfake identification to counter Russian and Chinese information warfare.
DeepMedia’s machine learning algorithms can identify doctored or altered images of people of any race, age, or gender speaking any of the world’s major languages. The company’s system has been trained on millions of real and fake samples across 50 languages and can now determine with 99.5% accuracy whether a piece of media is genuine or artificially altered. DeepMedia’s detection algorithms alert users in the Department of Defense that a piece of media has been manipulated, highlighting both the target of the manipulation and the technique used to achieve it.
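To illustrate the kind of output such a detector produces, here is a minimal sketch in Python. All names, thresholds, and scores below are hypothetical, invented for illustration; they do not reflect DeepMedia's actual models or API. The sketch assumes upstream models supply a probability that the media is synthetic plus per-technique scores, and shows how those are turned into a user-facing alert that names the manipulation technique.

```python
from dataclasses import dataclass
from typing import Dict, Optional

# Hypothetical sketch only: names and thresholds are illustrative,
# not DeepMedia's real system.

@dataclass
class DetectionResult:
    is_manipulated: bool
    confidence: float          # 0.0-1.0 confidence in the verdict
    technique: Optional[str]   # e.g. "face_swap"; None if genuine

def classify(score: float,
             technique_scores: Dict[str, float],
             threshold: float = 0.5) -> DetectionResult:
    """Turn raw model scores into a user-facing alert.

    `score` is an assumed upstream probability that the media is
    synthetic; `technique_scores` maps candidate manipulation
    techniques to their per-technique probabilities.
    """
    if score < threshold:
        # Below threshold: report the media as genuine.
        return DetectionResult(False, 1.0 - score, None)
    # Above threshold: flag the media and report the most likely
    # manipulation technique alongside the verdict.
    technique = max(technique_scores, key=technique_scores.get)
    return DetectionResult(True, score, technique)

result = classify(0.97, {"face_swap": 0.91, "voice_clone": 0.12})
print(result.is_manipulated, result.technique)
```

In this sketch, media scoring 0.97 would be flagged as manipulated via "face_swap", matching the article's point that analysts are told both that content was altered and how.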
Russia’s invasion of Ukraine underscored the need for multilingual deepfake detection. The Department of Defense has worked with DeepMedia before, commissioning a universal translator platform to improve and expedite communication among allies. Rijul Gupta, CEO and co-founder of DeepMedia, points to the UN’s use of DeepMedia’s AI capabilities for automated translation and speech synthesis across major global languages as an example of “deepfakes for good.”
The Japanese government also contacted DeepMedia to discuss how to regulate and promote the responsible use of AI.
To facilitate real-time communication with military allies, the company has explored integrating its DubSync translation technology into Japan’s government systems and using DeepMedia’s DeepID platform to authenticate communications and thwart deepfake attacks by adversaries.