The Smarter way to get your business news - Subscribe to BloombergQuint on WhatsApp
Yes, Russia did try to interfere with the U.S. election.
We are building artificial intelligence based tools to find fake accounts.
We are not a nation-state, just a company.
We defend the people’s rights to say things even if they can be bad.
I think companies need to be learning organisms.
We don’t sell data.
It’ll take about three years to fully retool everything at Facebook.
Bill Gates has always been a mentor and an inspiration for me.
Facebook CEO Mark Zuckerberg discussed a wide spectrum of issues with Kara Swisher of Recode.
Here are the best bits, courtesy of Recode.
You saw the news about it (Trump-Putin Helsinki meet). Tell me what you think about his idea that there is no evidence that the Russians used social media, and did different things during the election.
Well the evidence that we’ve seen is quite clear, that the Russians did try to interfere with the election.
This is on Facebook?
Yes. All of what we saw is on Facebook. Then we’ve tried to cooperate with the government and the different investigations that are going on. They obviously have much more context than this. But what we saw, before the election, was this Russian hacking group, part of Russian military intelligence, that I guess our government calls APT28. They were trying to do more traditional methods of hacking: Phishing people’s accounts, just getting access to people’s accounts that way.
We identified this, actually, in the middle of 2015 and notified the FBI. They’ve clearly gone much further now, at this point, in terms of putting the whole story together.
Now, there’s a whole other area of election interference that we were slower to identify. Instead of APT28, that was this group, IRA, the Internet Research Agency, which basically was just setting up a network of fake accounts, in order to spread divisive information.
So, from where you’re sitting, you believe it was the Russian government that was using or misusing Facebook? Unlike Trump, you believe it was the Russian government?
The information that we have on who these groups are largely comes from the U.S. government and U.S. intelligence.
So you believe U.S. intelligence?
We have no reason not to. Certainly, we’ve seen the activity from APT28; that name, Advanced Persistent Threat 28, comes from U.S. intelligence. And the IRA. These are real things. We went out and traced IRA activity, not only in what they’ve tried to do in the U.S., but back to attempts to manipulate culture and news in Russia itself, including taking down pages there that are connected to sanctioned Russian news organizations, ones the government, Russcom, has said are real news organizations, but which our systems detected as essentially the same thing as the IRA. All the people who are running them are the same.
These things are real, and we’ve been aggressively pursuing them for the last couple of years. This is now just part of the ongoing playbook that we have for preventing these kinds of disinformation campaigns.
"Now the playbook is, we build AI tools to go find these fake accounts, find coordinated networks of inauthentic activity and take them down; we make it much harder for anyone to advertise in ways that they shouldn’t be."
“We Were Overly Idealistic”
...do you reflect on what it was within you, because you’re the leader here, you’re the head of this, that you didn’t see it? That you don’t see that side of humanity? Or that you don’t understand your responsibility?
I’m not sure. I think… In retrospect, I do think it’s fair to say that we were overly idealistic and focused on more of the good parts of what connecting people and giving people a voice can bring. I think now we understand that, given where we are, both the centrality of Facebook, but also, frankly, we’re a profitable enough company to have 20,000 people go work on reviewing content, so I think that means that we have a responsibility to go do that. That’s a different position than we were in five or six years ago, or even when we went public and were a meaningfully smaller company at that point.
I do think it’s fair to say that we were probably… we were too focused on just the positives and not focused enough on some of the negatives. That said, I don’t wanna leave the impression that we didn’t care about security or didn’t have thousands of people working on it before then.
Facebook’s Power Over (Fake) News
Make the case for keeping them (Infowars), and make the case for not allowing them to be distributed by you.
There are really two core principles at play here. There’s giving people a voice, so that people can express their opinions. Then, there’s keeping the community safe, which is really important. We’re not gonna let people plan violence or attack each other or do bad things. Within this, those principles have real trade-offs and real tug on each other. In this case, we feel like our responsibility is to prevent hoaxes from going viral and being widely distributed.
The approach that we’ve taken to false news is not to say, you can’t say something wrong on the internet. I think that that would be too extreme. Everyone gets things wrong, and if we were taking down people’s accounts when they got a few things wrong, then that would be a hard world for giving people a voice and saying that you care about that. But at the same time, we have a responsibility to, when you look at… if you look at the top hundred things that are going viral or getting distribution on Facebook within any given day, I do think we have a responsibility to make sure that those aren’t hoaxes and blatant misinformation.
That’s the approach that we’ve taken. We look at the things that are getting the most distribution. If people have flagged them as potential hoaxes, we send those to fact-checkers, who are all reputable and follow standard principles for fact-checking, and if those fact-checkers say that it is provably false, then we will significantly reduce the distribution of that content...
Why don’t you wanna just say “get off our platform?”
Look, as abhorrent as some of this content can be, I do think that it gets down to this principle of giving people a voice.
"I’m Jewish, and there’s a set of people who deny that the Holocaust happened. I find that deeply offensive. But at the end of the day, I don’t believe that our platform should take that down, because I think there are things that different people get wrong. I don’t think that they’re intentionally getting it wrong, but I think it’s hard to impugn intent and to understand the intent."
“Do You Really Want Me To Fire Myself Right Now?”
Why didn’t you see the possibility of the data being misused? (referring to the Cambridge Analytica matter)
Yeah, so the principles at play here are, on the one hand, you want people to have control over their information and be able to bring it out of Facebook to other apps, because we’re not gonna build all of the social experiences, and it should be easy for people to use their data anywhere.
But on the other hand, if they have that information in Facebook and the developer has some relationship with us, then we also have a responsibility to protect people and keep people safe. And what happened here was a developer built a quiz app, and then they turned around and sold the data that people gave them to someone else. And that is clearly against all of the policies that we have. I mean, that’s terrible, right? We don’t sell data, we don’t allow anyone to sell data. Because it was on their servers, we don’t necessarily see that transaction or whatever they’re doing.
But you have, in the past, caught people doing this and been much more rigorous in that.
So we do a number of things. One is, we do ongoing audits and we have built technical systems to see if a developer is requesting information in weird ways. We do spot checks where we can audit developers’ servers. But a lot of the stuff comes from flags that either people in the community or law enforcement or different folks send us, and that was actually similar here too. I think it was The Guardian who initially pointed out to us, “Hey, we think that this developer, Alexander Kogan, has sold information.” And when we learned about that, we immediately shut down the app, took away his profile, and demanded certification that the data was deleted.
Now the thing that I think, in retrospect, that we really messed up here is that we believed the certification. Now normally, I don’t know about you, but when someone writes a legal certification, my inclination is to believe that. But in retrospect, I think it’s very clear ...
There’s an expression in journalism, “If your mother says she loves you, check it.” But go ahead.
All right, that’s fair. I tend to have more faith in the rule of law, but...
...I think in retrospect ... You know, we didn’t know what Cambridge Analytica was there, it didn’t strike us as a sketchy thing. We just had no history with them. Knowing what I know now, we obviously would not have just taken their certification at its word and would’ve gone in and done an audit then.
Should someone have been fired for this?
Well, I think it’s a big issue. But look, I designed the platform, so if someone’s going to get fired for this, it should be me. I think the important thing going forward is to make sure that we get this right. In this case, the most important steps to prevent this from happening again we had already taken in 2014, when we dramatically changed the way the platform worked.
But overall, I mean, this is an important situation, and I think again it’s ... This to me is an example of, you get judged by how you deal with an issue when it comes up. And I think on this one, we’ve done the right things, and many of them I think we’d actually done years ago to prevent this kind of situation from happening again.
But to be clear, you’re not gonna fire yourself right now? Is that right?
Not on this podcast right now.
Do you really want me to fire myself right now?
Sure. It’s fine.
Just for the news?
There’s been some calls to break up some companies like Facebook or Amazon that become too big. Are you in fear of that in any way?
You know, I think that there’s ... It’s a very interesting debate overall. If you actually get down to why we’re big, it’s not ... In the traditional sense, we’re not big because we’re so big in the United States, although we are and a lot of people use our products here. If we weren’t an international company, if you said, “Okay, you have to shut down all of your services outside of the U.S.,” we actually would not be very profitable at all; we actually would probably be unprofitable.
So the reason why we are a successful and large company is because we have built something here that can now serve billions of people around the world as well, which is actually where all the margin comes from, in terms of ... I mean, we have the cost structure that we have, and then that’s where the business comes from and ... Don’t get me wrong, there’s a lot of revenue in the United States as well, but that would barely cover the cost of the company.
So I think you have this question from a policy perspective, which is, do we want American companies to be exporting across the world? We grew up here, I think we share a lot of values that people hold very dear here, and I think it’s generally very good that we’re doing this, both for security reasons and from a values perspective. Because I think that the alternative, frankly, is going to be the Chinese companies. If we adopt a stance which is, “Okay, we’re gonna, as a country, decide that we wanna clip the wings of these companies and make it so that it’s harder for them to operate in different places, where they have to be smaller,” then there are plenty of other companies out there that are willing and able to take the place of the work that we’re doing.
The Next Big Thing
What do you think the most exciting product area is right now?
Longer term, as a technologist, one of the things that just excites me is that there are always new computing platforms. Every 10 or 15 years a new one comes along. They’re always more native, they capture your human experience more immersively, and you share more naturally what you’re experiencing. I just think that VR and AR are going to be a really big deal.
You can just see this trajectory from early internet, when the technology and connections were slow, most of the internet was text. Text is great, but it can be sometimes hard to capture what’s going on. Then, we all got phones with cameras on them and the internet got good enough to be primarily images. Now the networks are getting good enough that it’s primarily video. At each step along the way, we’re able to capture the human experience with greater fidelity and richness, and I think that that’s great.
Now, I do think that we’re gonna move towards this world where eventually you’ll be able to capture a whole experience that you’re in and send that to someone. I think that’s just gonna be an amazing technology for perspective-taking and putting yourself in other people’s shoes, for being able to feel like you’re really physically there with someone even when you’re not. One of the criticisms of technology today is that you’re sitting and looking at your phone, and we could be sitting together but we’re actually fragmented.
I think things like hyperloops could extend the suburbs and could be quite interesting, but I have to believe that, here in 2018, it’s much cheaper and easier to move bits around than it is atoms. It strikes me that something like VR or AR, or even video conferencing on the path to that, has to be a more likely part of the solution.
Read the full Recode interview here.