Deep Fakes Get Real Fake

Rob posted this on Facebook today:

My first reaction is that I should show this to my dad before he starts getting forwarded emails containing a video of Obama from an underground bunker in DC where he intends to stage a coup. Someone already sent him that email, un-ironically, but without a video. Add the video in and it could fly under the BS radar of mainstream, intelligent people who happen to lean a little bit to the right.

Here’s a video from the BBC talking about how this kind of thing is made:

At the end of it, the researcher mentions that once you know how to make one of these, you can then use technology to spot an edited video.

I do worry about whether people will listen when they're told they're looking at a fake. Or when a real video is dismissed as fake.

“Listen, I know you think the FBI is dosing gun owners with LSD and conscripting them into UN drum circles, but a researcher named Dr. Fancy Pants at the University of Blue State ran a sophisticated algorithmic test on that video, and … wait … where are you going?”

Watch your step…there’s rough footing ahead.

Google is considering ways to fight fake news

It's interesting to watch tech companies racing to stay ahead of regulation and financial harm from all of the fake news issues that have come to light over the past two years.

Quartz is reporting that Google (I will never be able to call the company “Alphabet”) is floating some ideas to help deal with the “misleading information” problem, but more tellingly, that Google mentioned it as a financial risk to the company in its annual report:

Alphabet did flag “misleading” information and “objectionable content” as risks to the company’s financial performance in its annual report this week, for the first time ever. And the fact that executives were focused on the topic at Davos indicates the tech company’s willingness to take a more active role in filtering out fake news and propaganda.

Interesting to see, as Twitter just posted its first profit since going public in 2013 but has famously been overrun with racists, Nazis, and Russian bots for the past year or two. At some point there's going to be a decline in usage. Whether or not its business is being entirely supported by the insane rantings of a crazy person in decline, only time will tell.

The big problem with all of this is the judgement call that will have to be made. As Quartz mentions:

The idea presents some obvious hurdles—among them the question of who determines what is misinformation, which can involve individual judgment and political sensitivity.

On the surface it seems like a no-brainer. Crack down on this stuff before the world comes apart at the seams! But things look far different if the cracking down takes on a political bent, or pursues some kind of agenda that would benefit the platform or company.

Slippery slope, as they say. Next thing you know, we’ll be marrying toasters.

And therein lies the problem. None of the major platforms want to play the role of censor. There's a little bit of a utopian belief that all of this will self-correct, but there's also the reality that making judgements on content puts them in an odd place of power that they don't want. But when it comes to the threat of being regulated or risking business performance, utopian dreams have to be put aside and complicated adult decisions have to be made.

A better way to fight fake news

Fake news has been a consistent topic in my classes over the past two years: the non-partisan idea that this stuff is out there, that it spreads all over social media, and that platforms like Facebook have to be really careful in how they step in to make judgements on what people post. None of our off-the-cuff 30-minute discussions led to a solution. Shocking, yes.

It turns out that a direct approach of fact checking wasn't the answer; providing more information in the form of related articles was.

Facebook found a better way to fight fake news