In the summer of 2006, Fidel Castro unexpectedly announced that he was temporarily handing over power to his brother. It turned out he needed to undergo intestinal surgery. Afterward, an anchor on state-run television read a statement, said to have been written by Castro, attesting that all was well. But there were no photographs of Fidel in recovery, no nine-hour radio address from his hospital bed. Rumors flew that the longtime Cuban leader had died. Then, about two weeks after the operation, the Cuban regime released a picture of the bearded leader wearing an Adidas jacket and holding the August 12, 2006, edition of the Cuban Communist Party newspaper, Granma. He was alive, at least as of that date. Fidel Castro had been verified.
The Cuban regime was onto something. The people hadn’t believed statements that Castro was alive and well—so it found a way to offer hard-to-deny proof.
Today we are like the Cubans, circa 2006. In our case, fakery is gushing in from everywhere and we’re drowning in it. “Deepfake” videos mash up one person’s body with someone else’s face. Easy-to-use software can generate audio or video of a person saying things they never actually uttered...
...It was a crude but effective verification mechanism. We need to find digital equivalents, especially to verify the time and place of documents, photographs, and videos, as well as to authenticate individual identities. This is a daunting task that will mean developing hardware, software, and protocols, not to mention institutions to oversee the process.

https://www.wired.com/story/zeynep-tufekci-facts-fake-news-verification/
How would this work? It’s harder than just showing the people an image of a print newspaper (if you can find one), because digital bits can easily be altered. But it is possible to develop schemes to approximate this. For example, the digital front page of The New York Times on the date and time a photograph was taken could be used to generate keys to “digitally sign” any photograph and its metadata. That’s a bit like making the photograph hold a copy of that moment’s New York Times, so to speak, except the “holding the paper” part is done by cryptographic digital signing. This is a simplification, and there would be many details to work out: a camera with specialized hardware, a spoof-resistant method of geolocation, a means to add a “taken before” verification (using existing methods such as trusted time stamps), and such. Blockchain databases—hyped for so much else—could actually be useful for verification.
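The scheme above can be sketched in a few lines of code. This is a simplified illustration, not a real protocol: the function names, the camera key, and the use of HMAC are all assumptions for the sake of the sketch. A real camera would use an asymmetric signature (such as Ed25519) so anyone could verify without the secret key, plus trusted timestamps and tamper-resistant hardware; the point here is only the binding of photo, metadata, and "that moment's front page" into one signed bundle.

```python
import hashlib
import hmac
import json

def sign_photo(photo_bytes: bytes, metadata: dict,
               front_page_bytes: bytes, camera_key: bytes) -> dict:
    """Bind the photo and its metadata to a hash of the day's front
    page, then sign the bundle. HMAC stands in here for the
    asymmetric signature a real verification camera would use."""
    bundle = {
        # Commit to the image itself, not just its metadata.
        "photo_sha256": hashlib.sha256(photo_bytes).hexdigest(),
        "metadata": metadata,
        # "Holding the newspaper": a digest of that moment's front page,
        # which could not have been predicted before it was published.
        "front_page_sha256": hashlib.sha256(front_page_bytes).hexdigest(),
    }
    payload = json.dumps(bundle, sort_keys=True).encode()
    bundle["signature"] = hmac.new(camera_key, payload,
                                   hashlib.sha256).hexdigest()
    return bundle

def verify_photo(bundle: dict, photo_bytes: bytes,
                 camera_key: bytes) -> bool:
    """Recompute the signature over the unsigned fields and check
    that the photo still matches its committed hash."""
    unsigned = {k: v for k, v in bundle.items() if k != "signature"}
    if unsigned["photo_sha256"] != hashlib.sha256(photo_bytes).hexdigest():
        return False  # the image was altered after signing
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(camera_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["signature"])
```

In this sketch, tampering with either the image bytes or the metadata after signing makes verification fail, which is exactly the "hard-to-deny proof" property the Castro photograph provided by analog means.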
We’ve already seen some efforts to vouch for human identity, like the blue check on Facebook and Twitter telling users they can trust that an account belongs to the person who claims to own it. But these programs were imperfect and limited, and as of this writing, both companies have paused verification and mostly quit issuing the blue checks (though existing ones are still in use).
An effective identification system, however, carries with it a worrisome truth: Every verification method carries the threat of surveillance. There are ways to mitigate this concern. We can develop schemes that protect identities or reveal as much as necessary in a given context—and then secure the evidence proving the authenticity after a person has been verified. Also, we need to make sure verification is a choice, not an obligation.
When people argue against verification efforts, they often raise the issue of authoritarian regimes surveilling dissidents. There’s good reason for that concern, but dissidents probably need verification more than anyone else. Indeed, when I talk to dissidents around the world, they rarely ask me how they can post information anonymously, but do often ask me how to authenticate the information they post—“yes, the picture was taken at this place and on this date by me.” When it’s impossible to distinguish facts from fraud, actual facts lose their power. Dissidents can end up putting their lives on the line to post a picture documenting wrongdoing only to be faced with an endless stream of deliberately misleading claims: that the picture was taken 10 years ago, that it’s from somewhere else, that it’s been doctored.
As we shift from an era when realistic fakes were expensive and hard to create to one where they’re cheap and easy, we will inevitably adjust our norms. In the past, it often made sense to believe something until it was debunked; in the future, for certain information or claims, it will start making sense to assume they are fake. Unless they are verified.
If this sounds like a suspicious and bureaucratic world—far from John Perry Barlow’s famous vision of a digital world in which ideas could travel without “privilege or prejudice”—it’s important to remember the alternative: a societal fracturing into a million epistemic communities, all at war with one another over the nature of truth.
If we can’t even come together around the nature of basic facts, we can’t hope to have the debates that really matter.