My first reaction is that I should show this to my dad before he starts getting forwarded emails containing a video of Obama from an underground bunker in DC where he intends to stage a coup. Someone already sent him that email, un-ironically, but without a video. Add the video in and we may be flying under the BS radar of mainstream, intelligent people who might happen to lean a little bit to the right.
Here’s a video from the BBC talking about how this kind of thing is made:
At the end of it, the researcher mentions that once you know how to make one of these, you can then use technology to spot an edited video.
I do worry about whether people will listen when they’re told they’re looking at a fake. Or when a real video is dismissed as fake.
“Listen, I know you think the FBI is dosing gun owners with LSD and conscripting them into UN drum circles, but a researcher named Dr. Fancy Pants at the University of Blue State ran a sophisticated algorithmic test on that video, and … wait … where are you going?”
Alphabet did flag “misleading” information and “objectionable content” as risks to the company’s financial performance in its annual report this week, for the first time ever. And the fact that executives were focused on the topic at Davos indicates the tech company’s willingness to take a more active role in filtering out fake news and propaganda.
Interesting to see, as Twitter just posted its first profit since going public in 2013, yet has famously become overrun with racists, Nazis, and Russian bots over the past year or two. At some point there’s going to be a decline in usage. Only time will tell whether its business is being entirely supported by the insane rantings of a crazy person in decline.
The big problem with all of this is the judgement call that will have to be made. As Quartz mentions:
The idea presents some obvious hurdles—among them the question of who determines what is misinformation, which can involve individual judgment and political sensitivity.
On the surface it seems like a no-brainer. Crack down on this stuff before the world comes apart at the seams! But things look far different if the cracking down begins taking on a political bent, or is done in pursuit of some kind of agenda that would benefit the platform or company.
Slippery slope, as they say. Next thing you know, we’ll be marrying toasters.
And therein lies the problem. None of the major platforms want to play the role of censor. There’s a little bit of a utopian belief that all of this will self-correct, but there’s also the reality that making judgements on content puts them in an odd place of power that they don’t want. But when it comes to the threat of being regulated or risking business performance, utopian dreams have to be put aside and complicated adult decisions have to be made.
It’s also easy to see the tech evolving to include real-time scraping and analysis of social media, credit reports, or other data that could be used by sales people, con artists, or repressive governments.
As someone who has a terrible time remembering people’s names, I could see a use for it…
In what seems like a whole new genre of journalism, the Guardian [ran a piece](https://www.theguardian.com/media/2018/jan/23/never-get-high-on-your-own-supply-why-social-media-bosses-dont-use-social-media) about how executives of these services don’t really use them, don’t let their kids use them, and in some cases, leave the industry out of disgust.
Former vice-president for user growth at Facebook, Chamath Palihapitiya:
The short-term, dopamine-driven feedback loops that we have created are destroying how society works. No civil discourse, no cooperation; misinformation, mistruth…
This is not about Russian ads. This is a global problem. It is eroding the core foundations of how people behave by and between each other. I can control my decision, which is that I don’t use that shit. I can control my kids’ decisions, which is that they’re not allowed to use that shit.
“Now watch this drive.”
Most people will read that and feel a sense of horror, hopelessness, and maybe start to rethink their use of social media.
But I have to think there are people running holding companies who would kill to be that relevant to the end of civilization.
The two companies are opening a new restaurant in Beijing “which employs facial recognition to make recommendations about what customers might order, based on factors like their age, gender and facial expression. Image recognition installed at the KFC will scan customer faces, seeking to infer moods, and guess other information including gender and age in order to inform their recommendation.”
People will use this. Not because it’s better. But because of the novelty. Imagine an online quiz, but without all of that dreadful hard work. And then imagine that after Facebook reveals your house to be House Targaryen, you were handed a 1,500-calorie lunch.
I can’t imagine that this will be successful…at first. Eventually, things like this will work their way into the everyday world. The question is whether customers will see any of the benefit.