What do Tesla and Facebook have in common?
Not much, superficially. But in recent times, the experiences of both companies in pushing the frontiers of automation have exposed its limits, and underscored how critical human intervention remains.
Tesla first. In May, a Tesla Model S driving in Autopilot mode was involved in the company’s first known self-driving fatality. The crash occurred when the car’s sensor system failed to distinguish a large, white 18-wheel truck-and-trailer crossing the highway against a brightly lit sky.
What went wrong?
Tesla Motors told a US Senate investigation that its crash-prevention system had failed to work properly, yet curiously claimed that its Autopilot technology was not at fault. The distinction it seeks to draw is a fine one, bordering on the ludicrous. The Autopilot mechanism, after all, encompasses automated steering, adaptive cruise control and automatic braking.
What is Tesla driving at?
The company considers the Autopilot system, in which it has invested big money, to be a “life-saving technology” and is unwilling to face up to any criticism of it.
And Facebook?
Late in August, Facebook found itself with egg on its face after it automated the feature that channels “trending” news onto its users’ news feeds. Barely days after it announced that it had eliminated the editorial jobs in its Trending module and replaced them with computer algorithms, the social media platform was itself trending for all the wrong reasons.
What exactly happened?
In the absence of editors providing ‘the human touch’ by filtering out reports from outlets not recognised as mainstream media, the algorithm picked up and pushed out a false news report about the sacking of a popular television show host, and an edgy commentary about a comedian’s use of a foul-mouthed epithet. It also disseminated an article with embedded links to a video of a man pleasuring himself with the aid of a McDonald’s chicken sandwich!
Ugh! Why did FB opt for algorithms over humans?
Just months earlier, Facebook had come in for criticism on the grounds that its all-too-human news “curators” were injecting their political biases by filtering out topics of conservative interest from the ‘trending news’ section. Facebook had denied the allegations of bias at the time, but the move towards automating what are perceived as routine human functions is in line with a broader trend in workplaces. It is happening in info-tech companies, on assembly lines, and increasingly in newsrooms: the Associated Press news agency, for instance, uses software to generate news articles on corporate earnings. But, as the experiences of Tesla and Facebook show, such a transition is not without downsides.
What lessons may we draw?
Automation may well make for greater efficiency, but the ‘human touch’ is hard to replace in critical functions.
But humans are fallible too.
Of course. And in the news space, they come with their biases. As the British poet Humbert Wolfe wryly noted in his doggerel: You cannot hope to bribe or twist, / thank God! the British journalist. / But, seeing what the man will do / unbribed, there’s no occasion to. Despite this, there is a compelling case for newsrooms to stay human, and for news consumers to pay to support good journalism. Only that can avert ‘autopilot’ crashes on the information superhighway.
A weekly column that helps you ask the right questions