Susan Wojcicki, the chief executive officer of YouTube, was in a meeting on the second floor of her company’s headquarters in San Bruno, Calif., when she heard the first gunshot. It came from outside; more followed. Some of her employees ran for the exits; others barricaded themselves in conference rooms. Those eating lunch on the outdoor patio hid under the tables. The shooter, Nasim Aghdam, a YouTube creator angry over the site’s treatment of her videos, wounded three people before killing herself.
The incident was tragic and awful in all the ways that have become a depressing American routine. It also put an exclamation point on a dreadful stretch for YouTube, which has lurched from crisis to crisis over the past year. While Donald Trump has in one way or another kept Facebook and Twitter in the headlines—and not the good kind—YouTube has struggled to contain the fallout from the darker impulses of its vast, decentralized creative community.
And of course there’s the case of Logan Paul, a popular YouTube personality with a penchant for shock humor. In December, while visiting Japan, Paul posted a video showing a dead body he and his buddies discovered in a forest known for suicides. During the resulting backlash, Paul apologized for his insensitivity. A few days later he posted a video in which he tasered dead rats. “No rat comes into my house without getting tased,” he said. “I hate rats.”
For years, YouTube has bragged to marketers that its laissez-faire attitude toward video creators was a feature, not a bug. The company was pioneering a form of mass entertainment more democratic, diverse, and authentic than traditional TV, its argument went, because it was unfettered by producers, network executives, or regulators. Its legions of creators fly around the internet with minimal guidance or oversight: Here are the keys to the jumbo jet, kid—knock yourself out. The ensuing string of crashes has grown difficult for its 1.5 billion monthly users to ignore.
Creators and advertisers may grumble about YouTube’s imperfect editorial policies and woeful communication, but few comparable venues reach such a massive audience of youngsters. As a result, YouTube’s no-good, messy, horrifying year has also likely been massively lucrative for its parent company, Google.
YouTube, so far, has been better able to avoid political fallout than its fellow internet titans partly because politicians and media scolds don’t watch it as closely. The site skews younger than Facebook, the social network that actively helped the Trump campaign target ads, or Twitter, the president’s biggest platform. And because YouTube doesn’t look like social media, it’s tougher to recognize how its most horrifying videos spread. (You probably heard about them on Facebook or Twitter.) In the fall, when Facebook, Twitter, and Google sent lawyers instead of executives to testify before Congress about Russian meddling in the presidential election, Team Google repeatedly stressed that YouTube and its other properties aren’t really social networks and therefore can’t fall prey to the worst of the internet’s trolls, bots, or propagandists.
Much like Facebook and Twitter, however, YouTube has long prioritized growth over safety. Hany Farid, senior adviser to the Counter Extremism Project, which works with internet companies to stamp out child pornography and terrorist messaging, says that of the companies he works with, “Google is the least receptive.” With each safety mishap, he says, YouTube acts freshly shocked. “It’s like a Las Vegas casino saying, ‘Wow, we can’t believe people are spending 36 hours in a casino.’ It’s designed like that.”
That’s not how Google or YouTube see things. Over the past year, YouTube has made the most sweeping changes since its early days, removing videos it deemed inappropriate and stripping away the advertising from others. But to date, both the video-sharing service and its corporate parent have struggled to articulate how their plan will make things better. Only recently, as Washington has edged closer to training its regulatory eye on Silicon Valley, did YouTube executives agree to walk Bloomberg Businessweek through its proposed fixes and explain how the site got to this point. Conversations with more than a dozen people at YouTube, some of whom asked not to be identified while discussing sensitive internal matters, reveal a company still struggling to strike a balance between contributors’ freedom of expression and society’s need to protect itself.
But minimal infrastructure was a conscious choice, according to Hunter Walk, who ran YouTube’s product team from 2007 to 2011. When the markets tanked in 2008, Google tightened YouTube’s budgets and took staffers off community safety efforts—such as patrolling YouTube’s notorious comments section—in favor of projects with better revenue potential. “For me, that’s YouTube’s original sin,” Walk says. YouTube disputes that account. “Trust and safety has always been a top priority. This was true 10 years ago and it remains true today,” the company said in an emailed statement.
As oversight dwindled, the amount of material posted on YouTube doubled in two years. By 2010, 24 hours of video were being uploaded every minute. (Today, it’s more like 450 hours per minute.) Suddenly, YouTube needed a better system to help viewers navigate the deluge, something that would keep them from feeling overwhelmed and wandering back to the comfort of their TVs.
In 2010, YouTube hired French programmer Guillaume Chaslot, who soon began developing algorithms that could better match viewers with videos that would keep them watching. Eventually, YouTube engineers found a simple, winning formula: When a viewer finished a video, the site immediately recommended another on a similar topic with a comparable sensibility. Chaslot says the team learned it could increase engagement, and hit ad goals, by bumping up videos with a proven record of keeping viewers watching.
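The formula Chaslot describes can be caricatured in a few lines. Everything below (the similarity measure, the field names, the numbers) is invented for illustration; YouTube's real system is a large machine-learned model, not a one-line score.

```python
# Hypothetical sketch of an engagement-driven "up next" ranker:
# recommend the candidate that maximizes topic similarity times
# historical watch time. Illustrative only, not YouTube's code.

def topic_similarity(a, b):
    # Jaccard overlap of topic tags, standing in for a real similarity model.
    tags_a, tags_b = set(a["tags"]), set(b["tags"])
    return len(tags_a & tags_b) / len(tags_a | tags_b)

def recommend_next(current, candidates):
    # Videos with a proven record of holding attention get "bumped up,"
    # which is the feedback loop Chaslot describes.
    return max(
        candidates,
        key=lambda v: topic_similarity(current, v) * v["avg_watch_minutes"],
    )

current = {"tags": ["conspiracy", "history"], "avg_watch_minutes": 8.0}
candidates = [
    {"id": "measured-doc", "tags": ["history", "science"], "avg_watch_minutes": 3.0},
    {"id": "wilder-conspiracy", "tags": ["conspiracy", "history"], "avg_watch_minutes": 9.0},
]
print(recommend_next(current, candidates)["id"])  # prints "wilder-conspiracy"
```

Under this scoring, the high-engagement lookalike always wins, which is how a viewer who watches one conspiracy video ends up served another.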
Over time, Chaslot saw adverse effects. Garbage often floated to the top—rants by both flat-Earthers and Holocaust deniers did well. If you watched one video starring a wild-eyed conspiracy theorist, the algorithm would feed you another, and another, and another—and on it went down a rabbit hole of untruth. “You come into that filter bubble, but you have no way out,” says Chaslot, who left the company in 2013 and now runs a project called AlgoTransparency. “There’s no interest for YouTube to find one.”
Despite YouTube’s rapid growth, advertisers stayed wary of the website, and it didn’t generate much revenue until 2010, when Robert Kyncl arrived from Netflix Inc. Over the next several years, he turned YouTube’s amateur creator base from a weakness to a strength. Netflix had to spend hundreds of millions of dollars to license TV shows and movies, but YouTube’s users offered a bottomless reservoir of content for free. Kyncl set about identifying the most popular personalities and turning them into marketable stars.
In 2011, YouTube tweaked the rules so more creators could make money from ads that its algorithm automatically packaged with their videos. A year later it opened a studio in Los Angeles so some of the best amateurs could use high-end production equipment. YouTube also did some advertising of its own, plastering the faces of its rising stars on billboards in major cities. More and more, YouTube was starting to convince advertisers it had become the new TV. Kyncl said as much onstage at Madison Square Garden in 2015 during the company’s annual “brandcast,” at which executives showcase new YouTube programming in front of the world’s top advertisers. Google doesn’t release financial numbers for YouTube, but analysts at Morgan Stanley estimate that the service’s revenue will top $22 billion in 2019.
Unlike with traditional TV, where very little goes on the air unlawyered, top creators can achieve cultural sway without telling anyone at YouTube where they’re going, who they’re filming, or what they might be tasing. All of which, under U.S. law, protected YouTube from liability for damages stemming from the videos it distributed. It also left YouTube in a reactive position: Whenever controversies ignited, executives could do little but try to douse the flames of umbrage long after they had spread. And that bare-bones bucket brigade stood no chance of meeting the challenges of the Trump era.
Behind the scenes, YouTube executives acknowledged that their platform had become an infrastructure-challenged megacity in need of a massive police presence. They stopped Holocaust-denial videos from popping up in the recommendation feature. In March, YouTube reached out to BuzzFeed CEO Jonah Peretti for ideas, according to a person with knowledge of the exchange. (Peretti declined to comment for this article.)
Meanwhile, Wojcicki went on a listening tour, reassuring nervous advertisers. She met with top buyers at a marketing conference in late 2017 and at the Consumer Electronics Show in January. At some client meetings, YouTube brought along staffers from its content and software teams to ask how traditional TV standards departments work, says Susan Schiekofer, head of digital trading for media buyer GroupM.
In December, Wojcicki said in a company blog post that YouTube and Google would appoint as many as 10,000 people to help cut the spread of misinformation and abusive content. (The hires represented a 25 percent hike in moderator staffing.) YouTube also pledged that a human moderator would review every video in its Google Preferred program for advertisers before any ad was attached. The goal was to reduce the risk of, for example, a cereal ad running alongside a beheading video. The problem has persisted in the months since then, however, in part because of an institutional disconnect: The staff that monitors YouTube’s content is separate from the Google team that oversees the ads.
In recent months, Marc Pritchard, the top marketer at Procter & Gamble Co. and a prominent Google critic, has met with Wojcicki multiple times. “You went to a large galaxy that was beyond what anyone had ever seen,” Pritchard recalls telling her at one point. “And I don’t think you’ve realized the impact you’ve had.”
“We want to be on the right side of history,” Wojcicki assured him.
YouTube’s growing ranks of moderators now scan the worst videos online: torture, bombs, porn. Moderators are discouraged from working for more than two hours at a time. A psychologist is on call, and group therapy sessions are available. Some moderators are contract workers in the Philippines. Some, like those on Raul Gomez’s team, work out of San Bruno.
Weeks before the shooting, Gomez walked Bloomberg Businessweek reporters through a scenario involving anger directed at his co-workers. Gomez isn’t his real name—he insisted on a pseudonym because people in his line of work often face death threats. Nobody likes it when his video is taken down; the psychologically disturbed, even less so.
Inside an office named “Gangnam Style,” Gomez projected a video on the wall. The clip was first posted last summer, soon after a Google engineer was fired for writing an inflammatory internal memo about gender imbalances in tech. Afterward, apoplectic right-wing commentators accused Google of discriminating against conservatives. The clip, produced by a popular right-wing media outlet, opened with a jaunty young woman directing viewers to her laptop, where she pulled up Twitter pages of several Google employees. She scoured through their posts, reading aloud certain passages while ridiculing the named individuals as dimwitted liberals.
With a grimace, Gomez said that while it may be unpleasant to see colleagues singled out for public mockery, the video didn’t violate YouTube’s harassment policy. The performer didn’t exhort her viewers to violence. And if the video were taken down, YouTube would likely face another round of censorship allegations at Breitbart News or the Drudge Report.
Not unlike their Silicon Valley peers, YouTube executives remain convinced that the long-term solution isn’t old-timey Homo sapiens but technology. During the service’s early days, it was rife with pirated videos uploaded and shared without copyright owners’ consent. Eventually, YouTube built an automated system to weed out copyright violations. The idea is that someday humans will be able to train the machines, in a similar manner, to sniff out misinformation, smut, and abuse. They’ve already made some progress. After October’s mass shooting in Las Vegas, YouTube engineers adjusted the algorithms to recommend more content from sources the company deemed authoritative.
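The copyright system's core idea, matching uploads against fingerprints of known works, can be sketched in miniature. This toy version hashes exact runs of "frames"; a production fingerprinting system uses perceptual features robust to re-encoding and cropping, and none of the names here come from YouTube.

```python
# Toy illustration of automated copyright matching: fingerprint a
# reference clip by hashing overlapping chunks of frames, then flag
# uploads that share enough chunk hashes. Purely illustrative.

import hashlib

def fingerprint(frames, chunk=3):
    # Hash every overlapping run of `chunk` frames.
    return {
        hashlib.sha256("".join(frames[i:i + chunk]).encode()).hexdigest()
        for i in range(len(frames) - chunk + 1)
    }

def looks_pirated(upload, reference, threshold=0.5):
    # Flag the upload if it reproduces at least `threshold` of the
    # reference clip's fingerprints.
    ref, up = fingerprint(reference), fingerprint(upload)
    return len(ref & up) / max(len(ref), 1) >= threshold

reference = ["f1", "f2", "f3", "f4", "f5", "f6"]
pirated = ["intro"] + reference[1:]      # mostly copied from the reference
original = ["a", "b", "c", "d", "e", "f"]  # shares nothing with it
print(looks_pirated(pirated, reference), looks_pirated(original, reference))
# prints: True False
```

Training machines to flag misinformation or abuse is far harder than this, because there is no reference copy to match against, which is why humans still sit in the loop.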
Finding the right people to help this refinement process is proving to be a challenge. For months, YouTube has been trying to hire someone who can more clearly define its internal policies and messaging about what makes a video publishable. As of mid-April, the position remains vacant.