I read The New York Times today, oh boy. "Obituary piracy": unscrupulous operators using Google and AI to make pennies per click by making up nasty stories about the tragic deaths of real people. Phony pics of Taylor Swift in vast circulation. The platforms that host and distribute this content are evidently unable or unwilling to stop it. So here's a mostly forgotten story about how we maybe could have prevented it.
Once upon a time, back in the mid-1990s when Al Gore had just helped free the Internet to go commercial (seriously, he played a key role even though he didn't invent it), when Amazon was just crawling out of the digital ooze and Google was just a gleam in Page and Brin's eyes, a bunch of lawyers and technologists based in the American Bar Association's Information Security Committee had a dream of secure electronic commerce using public key infrastructure (PKI) to administer "non-repudiable" digital signatures.
I was a young health IT lawyer, at a time when there were probably fewer than half a dozen who even knew that might become a thing. (You know who you are.) One of the big problems in health IT was and is "authentication": the ability to confirm the identity of an information source and that the information content hasn't been changed. This could be a real concern with electronic medical records, where errors can be literally a matter of life and death, and identifiable medical professionals need to be accountable for the information they provide. This is also really what we are worrying about when we worry about deepfakes and AI-generated content masquerading as human creations.
So I fell in with the ISC to help figure out how to solve these problems, which led me to some truly fun, educational and engaging brainstorming meetings and communications with Thai takeout and really smart people, about how to actually create a secure ecommerce and Internet communications infrastructure. The upshot was Digital Signature Guidelines and PKI Assessment Guidelines to help implement a system in which "certification authorities," preferably licensed, would authenticate the identity of participants in Internet transactions using digital "certificates." Without going into the eye-glazing details, this was a system in which independent third-party certification authorities would maintain a secure registry of authentication information they would administer, so that Internet identities could be confirmed in real time, as could the contents of communications and transactions by registered users.
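The mechanics can be caricatured in a few lines of code. This is a toy sketch, not the Guidelines' actual design: real PKI binds identities to public keys through certificates and verifies asymmetric digital signatures, while this stand-in uses bare SHA-256 fingerprints, and every name in it (`CertRegistry`, `register`, `verify`) is a hypothetical illustration. The point it shows is the shape of the system: a trusted third party keeps the registry, and anyone can check both who published a piece of content and that it hasn't been altered since.

```python
import hashlib


class CertRegistry:
    """Toy stand-in for a certification authority's secure registry.

    Illustrative only: a real CA would bind identities to public keys
    and verify signatures, not store raw content hashes.
    """

    def __init__(self):
        # identity -> set of SHA-256 fingerprints of registered content
        self._records = {}

    def register(self, identity: str, content: bytes) -> str:
        """The CA records who published what; returns the fingerprint."""
        fingerprint = hashlib.sha256(content).hexdigest()
        self._records.setdefault(identity, set()).add(fingerprint)
        return fingerprint

    def verify(self, identity: str, content: bytes) -> bool:
        """Check, in real time, that this identity published exactly
        this content -- both checks fail if either the claimed author
        or a single byte of the content is wrong."""
        fingerprint = hashlib.sha256(content).hexdigest()
        return fingerprint in self._records.get(identity, set())


registry = CertRegistry()
registry.register("dr.smith@example.org", b"Patient allergic to penicillin.")

# Authentic author and unmodified content: passes.
assert registry.verify("dr.smith@example.org", b"Patient allergic to penicillin.")
# Tampered content: fails, even under the right identity.
assert not registry.verify("dr.smith@example.org", b"Patient allergic to peanuts.")
```

Notice that the trust lives in one place, the registry operator, which is exactly why the Guidelines cared so much about licensing and auditing certification authorities.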
When the draft HIPAA security regulations came out in 1998 they included a proposed digital signature standard for health claims transactions. Washington state adopted a certification authority licensing scheme, and I helped Digital Signature Trust get licensed. These were the intellectual and legal underpinnings for an Internet authentication system in which you could always reliably know who had generated specific content, and that the content hadn't been changed since it was created. There were concerns about the possible use for surveillance, though that one guy who told me it would become the Mark of the Beast probably took it a little too far. But since we were lawyers we also figured there would and should be legal solutions and limitations. Transparency!
For good or bad that road was not taken. Congress came along with E-SIGN and basically said anything and everything is fair game for e-commerce and electronic signatures, and deferred everything to agreement of the users. The technology companies - which I don't think most of us called "platforms" yet - didn't have a lot of incentive to support a secure infrastructure that might cost some money and slow down some implementations. Sure, there were lots of opportunities for avoidable ecommerce fraud, but the credit card companies stepped up and took the financial risk - for a fee, of course, but that was buried in everybody else's charges. And we were off to the dot.com races, and the social media races, and the AI races . . .
And what happened is that without making a policy choice, much less thinking it through, we came to rely on the platforms we used to communicate and conduct transactions to authenticate identities and content. You trust that you know who wrote this, and that it is what I intended it to be, because LinkedIn manages my account, and yours, and its platform security authenticates both. Ditto Facebook, Amazon, whatever. If they fail or make mistakes? Their bad, sorry about that, your legal recourse is their terms and conditions. Your identification information and content used to track you? "You have zero privacy anyway. Get over it."
I'm not sure this is a story that has a moral, really. We could no doubt implement an authentication infrastructure like the ISC envisioned lo these many years ago, but that would take a lot of time and effort and money and conviction. I'm not going to bet on it, though if anyone wants to try I'll happily cheer you on.
As it is, it's up to us. We're going to have to consider what platforms we choose to trust for what purposes, and how far, and what secondary indicia we might use to confirm or deny validity. Quite apart from the noise around Musk's shenanigans with Twitter/X, what happened with that particular platform demonstrates the authentication/trust problem in a particularly melodramatic way. But all platforms have it to some degree; it's probably unavoidable given the model. So we're just going to have to be better users.
You can trust me on this one.