New tactics for detecting digital deception

Use agentic AI to combat false or misleading content.
Digital deception tactics like deepfakes and impersonation require new approaches to detection.
“Today, it’s less about how do you identify and determine if information is fake or real, but really particularly for us comms folks, how is your company and how are you responding?” asked Nick Loui, co-founder and CEO of PeakMetrics, at Ragan’s AI Horizons conference.
Here’s how to spot real from fake:
Limits of detection
The surge in AI-generated content makes it increasingly difficult to distinguish deceptive material using traditional cues.
“We’re seeing how the volume of content that we need to monitor and track as folks that work in this space has increased, while the quality of content has significantly decreased,” said Loui.
Clues like unusual posting patterns or visual errors are harder to spot, and text-based AI content is far more coherent and persuasive than before.
Agentic AI
Loui said that as these tools grow in sophistication, so must communicators’ strategies for identifying and mitigating deception. Communicators can leverage AI agents to combat deceptive media.
For example, when a user submits a request to an agent such as: “combat deceptive media around my organization,” the following interactions between agents occur:
- Orchestration agent: Acts as the central coordinator by breaking down the initial request and assigning tasks to the appropriate AI agents.
- Research agent: Discovers relevant content by scanning news sites, social media, blogs, forums and fringe platforms for company-related narratives.
- Context agent: Analyzes patterns by clustering similar narratives and assessing their rate and scale of spread.
- Deception detector agent: Verifies content integrity by flagging manipulated images, deepfakes, AI-generated text and bot-driven activity.
- Threat scoring agent: Evaluates risk by ranking each narrative based on its potential to cause reputational or operational damage.
- Verification agent: Ensures accuracy by double-checking flagged content and confirming the validity of identified threats.
- Response agent: Executes crisis mitigation by alerting the organization's crisis team and deploying countermeasures such as talking points, takedown notices and community notes.
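The hand-off between agents described above can be sketched as a simple pipeline. This is an illustrative toy, not PeakMetrics' actual system: the agent functions, the `Narrative` class and the keyword-based detection heuristic are all hypothetical stand-ins for real monitoring and detection models.

```python
from dataclasses import dataclass

# Hypothetical sketch of the agent workflow; names and logic are
# illustrative placeholders, not a real product's implementation.

@dataclass
class Narrative:
    text: str
    source: str
    is_synthetic: bool = False  # set by the deception detector
    risk: float = 0.0           # set by the threat scorer

def research_agent(query: str) -> list[Narrative]:
    # A real agent would scan news sites, social media, blogs and forums.
    return [
        Narrative("CEO deepfake video circulating", "fringe-forum"),
        Narrative("Quarterly earnings recap", "news-site"),
    ]

def deception_detector(n: Narrative) -> Narrative:
    # Keyword placeholder standing in for deepfake/bot detection models.
    n.is_synthetic = "deepfake" in n.text.lower()
    return n

def threat_scorer(n: Narrative) -> Narrative:
    # Rank each narrative by its potential for reputational damage.
    n.risk = 0.9 if n.is_synthetic else 0.1
    return n

def response_agent(n: Narrative) -> str:
    # High-risk narratives trigger an alert; the rest are monitored.
    return f"ALERT crisis team: {n.text}" if n.risk > 0.5 else "monitor"

def orchestrator(query: str) -> list[str]:
    # Central coordinator: research -> detect -> score -> respond.
    narratives = research_agent(query)
    return [
        response_agent(threat_scorer(deception_detector(n)))
        for n in narratives
    ]

print(orchestrator("combat deceptive media around my organization"))
```

Running the orchestrator on the sample request flags the deepfake narrative for the crisis team while leaving the benign earnings story under routine monitoring.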
“When we’re thinking about this world of synthetic media overlapping with the concept of agentic AI that is now getting deployed into businesses every day, it creates a much more complex environment to be operating in,” said Loui.
The AI threat defense ecosystem
Loui said many organizations are already thinking about what the future of combating these types of threats looks like.
Loui categorizes the defense ecosystem into several domains:
- Threat intelligence firms tackling the problem from a cybersecurity lens
- Narrative intelligence companies that analyze information spread and sentiment
- Deepfake and bot detection specialists
- Trust and safety organizations focused on maintaining secure online environments
The post New tactics for detecting digital deception appeared first on Ragan Communications.