New AI “Truth” Tool Targets Deepfakes and Misinformation

[Photo illustration: blocks spelling “fake” and “fact.” Andrii Yalanskyi/Adobe Stock]

Words are powerful, but nothing is more persuasive than a video or audio recording of an event. Yet can we trust what we see or hear?

Misinformation, disinformation and deepfakes are formidable obstacles in a world of fast-flowing information. For law enforcement, misinformation, whether intentional or not, poses real threats to gathering sound evidence and ensuring the innocent are fairly represented in the courtroom.

[Photo: Maryam Taeb, a doctoral candidate in electrical and computer engineering at the FAMU-FSU College of Engineering and creator of the algorithm. (M Wallheiser/FAMU-FSU Engineering)]

In a new study, a team of FAMU-FSU engineering researchers led by Maryam Taeb, a doctoral candidate in the Department of Electrical and Computer Engineering, proposes that decentralized applications (Dapps) could provide a trusted platform for authenticating evidence. Dapps are software programs that run on a blockchain or peer-to-peer network of computers and provide safeguards for verifying information.
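
The study’s full implementation isn’t detailed here, but the core mechanism behind blockchain-backed authentication is easy to illustrate: record a cryptographic fingerprint of the evidence when it is collected, then recompute that fingerprint whenever the file is presented again. Below is a minimal Python sketch of the idea; the EvidenceLedger class and its fields are hypothetical stand-ins, not the team’s actual design.

    import hashlib
    import time

    def fingerprint(path: str) -> str:
        """Return the SHA-256 digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    class EvidenceLedger:
        """Toy in-memory ledger; a real Dapp would anchor entries on-chain."""
        def __init__(self):
            self.entries = []

        def anchor(self, path: str) -> dict:
            """Record the file's fingerprint and when it was anchored."""
            entry = {"file": path,
                     "sha256": fingerprint(path),
                     "anchored_at": time.time()}
            self.entries.append(entry)
            return entry

        def verify(self, path: str) -> bool:
            """True only if the file is byte-identical to an anchored copy."""
            return any(e["sha256"] == fingerprint(path) for e in self.entries)

Because any change to the file, even a single frame, produces a different digest, a match at verification time shows the evidence is byte-for-byte what was originally anchored.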

“Maryam created an algorithm to address the disinformation problem,” said Shonda Bernadin, an associate professor in electrical and computer engineering and co-investigator. “Her framework will revolutionize how digital forensics evidence is collected, analyzed and stored. Her design also lends itself to generative AI, which is becoming a very transformative technology.”

The work has the potential to significantly impact the legal industry, and it has global implications, particularly given the heightened data security and privacy concerns arising from the increased use of generative AI technology.

Creating an Algorithm

Taeb’s team initially created the algorithm to solve the challenges of acquiring digital evidence. They found efficient ways to gather data, track metadata timestamps, record every interaction with the evidence and store it. Storing large volumes of evidence can become unmanageable if not done well.
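
One standard way to make such an interaction log tamper-evident is to chain each custody event to the hash of the one before it, so that altering any past record invalidates every record that follows. The sketch below shows the general technique in Python; it is an assumption about how such tracking could work, not the team’s published method.

    import hashlib
    import json
    import time

    def log_event(log: list, actor: str, action: str) -> dict:
        """Append a custody event whose hash covers the previous event."""
        prev_hash = log[-1]["hash"] if log else "0" * 64
        event = {"actor": actor, "action": action,
                 "timestamp": time.time(), "prev_hash": prev_hash}
        payload = json.dumps(event, sort_keys=True).encode()
        event["hash"] = hashlib.sha256(payload).hexdigest()
        log.append(event)
        return event

    def chain_intact(log: list) -> bool:
        """Recompute every hash; any tampering breaks the chain."""
        prev = "0" * 64
        for event in log:
            body = {k: v for k, v in event.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != event["hash"]:
                return False
            prev = event["hash"]
        return True

    # Example: intake by an officer, then a later review by an analyst.
    custody = []
    log_event(custody, actor="officer_41", action="collected video.mp4")
    log_event(custody, actor="analyst_07", action="opened video.mp4 for review")
    assert chain_intact(custody)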

“Blockchain technology emerged as a pivotal solution, effectively tackling the initial phase and resolving the storage challenge,” Taeb said. “Then we looked for the most suitable algorithms and methods to detect deepfake media, ultimately opting for the most efficient to be the backbone of our model.” 

“We devised a robust trustworthiness system that incorporates all the associated metadata from the uploaded evidence and compares it in our deepfake detection model,” Taeb continued. “In the final output, this system confirms the authenticity of the evidence before presentation in court.”
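
The article doesn’t give the exact scoring rule, but the pipeline Taeb describes (check the uploaded metadata for consistency, run a deepfake detector, and emit a single verdict) might look roughly like the Python sketch below. Here deepfake_score is a placeholder for whatever detection model sits underneath, and both thresholds are purely illustrative.

    from dataclasses import dataclass

    @dataclass
    class Evidence:
        path: str
        claimed_capture_time: float    # timestamp supplied by the submitter
        metadata_capture_time: float   # timestamp extracted from the file itself

    def deepfake_score(path: str) -> float:
        """Placeholder for a real detection model; returns P(manipulated)."""
        return 0.05  # stub value for illustration only

    def authenticate(evidence: Evidence,
                     max_clock_skew_s: float = 60.0,
                     max_fake_prob: float = 0.5) -> bool:
        """Pass only if the metadata is self-consistent and the detector agrees."""
        metadata_ok = abs(evidence.claimed_capture_time
                          - evidence.metadata_capture_time) <= max_clock_skew_s
        return metadata_ok and deepfake_score(evidence.path) < max_fake_prob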

Why It Matters

The finalized framework will be a standalone, open-source tool that law enforcement agencies and legal professionals can readily employ. Lawyers who rely on credible evidence to build their cases can use the framework to verify their clients’ proof before presenting it in court.

“The system integrates into existing digital forensic applications,” Taeb said. “This allows agencies to identify and track suspects based on their intent to manipulate law enforcement, enhancing the overall capabilities of such frameworks.”

Future Goals

“We hope to establish a robust, user-friendly decentralized application (Dapp) that revolutionizes how evidence is handled in legal and forensic contexts,” Taeb said. “We aim to provide a trusted distribution channel for authenticating evidence, effectively mitigate the spread of misinformation, and enhance the transparency and accountability of the entire process.”

Even though the team’s initial results were promising, wider adoption within law enforcement agencies, legal practices and other user groups may take several years as generative AI tools evolve and deepfake detection algorithms are continually refined, Taeb explained.

Who Is Involved?

Taeb is working with Hongmei Chi, a Florida A&M University (FAMU) professor and director of the FAMU Center for Cybersecurity in the Department of Computer and Information Sciences, and Shonda Bernadin, an associate professor in electrical and computer engineering at the FAMU-FSU College of Engineering.

Their research was recently published by the Institute of Electrical and Electronics Engineers (IEEE) and is available through IEEE Xplore, the organization’s digital library.

How Was the Work Funded?

The study was funded through a grant from the National Centers of Academic Excellence in Cybersecurity, awarded through FAMU and administered by the National Security Agency. The Army Research Office provided additional funding through a separate grant.

 

