MIND-BLOWING: New Technology Could Be Used to Fake Interviews [VIDEO]

Screenshot of Youtube video: "Face2Face: Real-time Face Capture and Reenactment of RGB Videos (CVPR 2016 Oral)."

A University of Chicago Booth School of Business professor, Luigi Zingales, recently exposed a reality of the mainstream media: repeat lies often enough and the public will believe them.

Speaking to Francine Lacqua, host of Bloomberg Television’s “The Pulse,” he said:

“If people are told enough by smart people on television that the economy has been fixed, and the market is a reflection of the fundamentals, then they’ll blindly support anything the Fed does.”

His assessment isn’t a “conspiracy theory,” either.

Stanford University’s Matthias Nießner and his lab published a paper, “Face2Face: Real-time Face Capture and Reenactment of RGB Videos,” which quite literally shows how easy it is to visually communicate lies.

They position an actor, use readily available technological tools, and create the illusion that someone else is speaking when they aren’t. Call them “talking clones”: software-created, sensory-manipulated, webcam-savvy, completely false “experts,” or talking heads.

Example of a “Talking clone”:

Screenshot of Youtube video: “Face2Face: Real-time Face Capture and Reenactment of RGB Videos (CVPR 2016 Oral).”

The paper’s abstract explains:

We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., Youtube video).

The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure.

Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit.

Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination. We demonstrate our method in a live setup, where Youtube videos are reenacted in real time.
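To make the abstract’s pipeline a little more concrete, here is a minimal, hypothetical sketch of the five stages it describes: identity recovery, expression tracking, deformation transfer, mouth retrieval, and re-rendering. Every function name, data shape, and piece of toy math below is an illustrative placeholder, not the researchers’ actual code or method.

```python
# Hypothetical sketch of the Face2Face pipeline stages described in the abstract.
# All names and operations are simplified stand-ins, not the authors' implementation.
import numpy as np

def recover_identity(target_frames):
    """Offline step: estimate the target actor's facial identity from the
    monocular target video (the paper uses non-rigid model-based bundling;
    here it is stubbed as an average over frames)."""
    return np.mean(target_frames, axis=0)

def track_expression(frame, identity):
    """Per-frame step: fit expression parameters by minimizing a dense
    photometric error against the frame (stubbed as a simple residual)."""
    return frame - identity

def transfer_expression(source_expr, target_identity):
    """Deformation transfer: apply the source actor's expression deltas
    to the target's identity to obtain the reenacted face."""
    return target_identity + source_expr

def retrieve_mouth_interior(target_frames, reenacted_face):
    """Retrieve the target-sequence mouth region that best matches the
    re-targeted expression (here: nearest frame by L2 distance)."""
    dists = [np.linalg.norm(f - reenacted_face) for f in target_frames]
    return target_frames[int(np.argmin(dists))]

def composite(reenacted_face, mouth, target_frame):
    """Re-render the synthesized face over the original frame so it blends
    with the scene's illumination (stubbed as a weighted blend)."""
    return 0.7 * reenacted_face + 0.2 * mouth + 0.1 * target_frame

# Toy data standing in for video frames (small grayscale images).
rng = np.random.default_rng(0)
target_video = rng.random((30, 64, 64))   # pre-recorded target clip (e.g. a Youtube video)
webcam_frame = rng.random((64, 64))       # live source frame from a commodity webcam

target_identity = recover_identity(target_video)                # offline
source_expr = track_expression(webcam_frame, target_identity)   # live, per frame
reenacted = transfer_expression(source_expr, target_identity)
mouth = retrieve_mouth_interior(target_video, reenacted)
output_frame = composite(reenacted, mouth, target_video[0])
print(output_frame.shape)  # (64, 64): one manipulated output frame
```

In the real system, each of these stubs is a substantial optimization or rendering step running in real time on a commodity PC, which is precisely what the authors identify as the novelty of their work.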

In other words, whoever is believed to be speaking on television could quite easily be someone else entirely, and not the assumed person at all. Because the technology works in real time, the “talking clone” can talk, react, answer questions, move, and express emotion, conveying a deception that is totally convincing and flawless to the naked eye.

Because the deceit is so effective, the algorithm’s creators recognized the need to communicate their intentions for creating it. They write:

This demo video is purely research-focused and we would like to clarify the goals and intent of our work. Our aim is to demonstrate the capabilities of modern computer vision and graphics technology, and convey it in an approachable and fun way. We want to emphasize that computer-generated videos have been part in feature-film movies for over 30 years.

Virtually every high-end movie production contains a significant percentage of synthetically-generated content (from Lord of the Rings to Benjamin Button). These results are hard to distinguish from reality and it often goes unnoticed that the content is not real. The novelty and contribution of our work is that we can edit pre-recorded videos in real-time on a commodity PC.

Please also note that our efforts include the detection of edits in video footage in order to verify a clip’s authenticity. For additional information, we refer to our project website (see above). Hopefully, you enjoyed watching our video, and we hope to provide a positive takeaway.

However, imagine this technology being used by a government that rejects the separation of powers, the authority of the U.S. Constitution, the principles of liberty embedded in the Declaration of Independence, and the Bill of Rights. It is not unlikely that such technological tactics could be used to manipulate and deceive low-information voters who will believe everything they hear and see on television, regardless of whether it is completely false.

Watch the technology depict “real-time” deception in action:

Bethany Blankley

Bethany Blankley is a political analyst for Fox News Radio and has appeared on television and radio programs nationwide. She writes about political, cultural, and religious issues in America from the perspective of an evangelical and former communications staffer. She was a communications strategist for four U.S. Senators, one U.S. Congressman, a former New York governor, and several non-profits. She earned her MA in Theology from The University of Edinburgh, Scotland and her BA in Political Science from the University of Maryland. Follow her @bethanyblankley facebook.com/BlankleyBethany/ & BethanyBlankley.com.
