Is the future of humans online a 12,800-digit binary code?
That’s the premise of a feature this week on Tools for Humanity, a Sam Altman company which seeks to authenticate humans online, with just a few catches.
Today I want to point your attention to a story about an authentication system being developed to distinguish humans from bots online, a potentially necessary step in the coming AI revolution/apocalypse. Billy Perrigo at Time magazine has a nice piece about Sam Altman’s company Tools for Humanity, which is making an orb — well, “the Orb™” — that scans people’s irises and in return generates a unique and lengthy digital code meant to distinguish humans and their accounts from automated and AI-driven ones.
They’re calling it a World ID, and it’s meant to serve as a sort of digital passport. The idea, purportedly, is to lessen the blow of the AI revolution by preserving the primacy of humans, but it also raises many thorny questions.
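The story doesn’t spell out how those codes actually get compared, but the basic mechanics are easy to picture. Here is a minimal sketch in Python, assuming (my assumption, not the article’s) that each scan boils down to a fixed 12,800-bit string and that two codes count as the same person when they differ in fewer than some threshold fraction of their bits; the function names and the 0.32 cutoff are hypothetical, chosen only for illustration.

```python
import secrets

CODE_BITS = 12_800  # the "12,800-digit binary code" mentioned above


def scan_to_code() -> int:
    """Stand-in for the bit string a real eye scan might produce.

    Here it is just random bits, purely for illustration.
    """
    return secrets.randbits(CODE_BITS)


def hamming_distance(a: int, b: int) -> int:
    """Count the bit positions where two codes differ (Python 3.10+)."""
    return (a ^ b).bit_count()


def same_person(a: int, b: int, threshold: float = 0.32) -> bool:
    """Treat two codes as a match if they differ in fewer than
    `threshold` of their bits. The 0.32 cutoff is hypothetical."""
    return hamming_distance(a, b) / CODE_BITS < threshold


if __name__ == "__main__":
    alice, bob = scan_to_code(), scan_to_code()
    # Two unrelated random codes differ in roughly half their bits,
    # so they land far above the cutoff and do not match.
    print(same_person(alice, bob))    # almost certainly False
    print(same_person(alice, alice))  # True: identical codes always match
```

The scale is the point: 12,800 bits describe roughly 2^12,800 possible codes, astronomically more than the number of people on Earth, which is what makes a unique per-person identifier plausible in the first place.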
As Perrigo points out, the challenge of distinguishing automated accounts from humans is not some future worry. It’s already here: bots are shaping and influencing behavior and activity online. Perrigo writes:
Bot-driven accounts are amassing billions of views on AI-generated content. In April, the foundation that runs Wikipedia disclosed that AI bots scraping their site were making the encyclopedia too costly to sustainably run. Later the same month, researchers from the University of Zurich found that AI-generated comments on the subreddit /r/ChangeMyView were up to six times more successful than human-written ones at persuading unknowing users to change their minds.
Over the next year, some 7,500 orbs will show up at locations across the country (bodegas, gas stations, flagship stores) to offer the authentication service, which comes with a reward for early adopters: a payout in the company’s proprietary cryptocurrency, currently worth around $42. This, of course, recalls the classic internet adage that when an app or service is free, the user is the product being bought and sold.
The founders say they hope to create a critical piece of authentication infrastructure for the AI era and get rich in the process, through both the technology and the cryptocurrency.
There are a few catches, of course. One is that the company currently does not allow users to delete all of their data from its system, a policy that has drawn the attention of regulators in Europe. Another is that the company lets users delegate their World IDs to digital agents, essentially allowing bots or AI systems to act on their behalf.
Here is what else we’re reading this week:
ARTIFICIAL INTELLIGENCE AND CHILL: Anthropic, which is billing itself as a more socially and ethically conscious AI company, has appointed former Netflix head Reed Hastings to its board. Hastings remains a major tech donor on the left.
TX, BYE: Elon Musk is officially out of the Trump Administration, admitting to the Washington Post that his supposed effort to slash federal spending fell well short of its goals. And all that talk about balancing the books in DC? Musk said he was disappointed that the GOP spending and tax bill is projected to increase the deficit. He’s back on his Mars mission. The NYT has a deeper dive into the drifting relationship between Musk and Trump.
X-ED OUT: DOGE engineer Sahil Lavingia, who was recently let go from his post at the Department of Veterans Affairs after giving a media interview, wrote this piece on his experience as part of Musk’s shock troops. Barely a week into the job, “reality was setting in,” he writes:
DOGE was more like having McKinsey volunteers embedded in agencies rather than the revolutionary force I'd imagined. It was Elon (in the White House), Steven Davis (coordinating), and everyone else scattered across agencies.
Meanwhile, the public was seeing news reports of mass firings that seemed cruel and heartless, many assuming DOGE was directly responsible.
In reality, DOGE had no direct authority. The real decisions came from the agency heads appointed by President Trump, who were wise to let DOGE act as the 'fall guy' for unpopular decisions.
That whole thing about DOGE being more hat than cattle — better understood as cost-cutting theater for the twisted digital media ecosystem we live in than as a sincere program for actual spending reductions? We raised that question back in March.
DRIVERLESS TRUCKS have landed in Texas. For many years, truck driver was the most common job in many U.S. states: 29 out of 50, according to a slightly out-of-date analysis from 2015. It is grueling, difficult work that is hard on humans and bound by their needs, chiefly eating, sleeping, and using the restroom. Automated trucks present a clear business advantage in that regard. Incredibly, there is currently no federal regulatory framework governing driverless trucks. Experts the New York Times spoke to were split on whether driverless trucks will actually be safer than human drivers. More Perfect Union released a 13-minute video on the issue, which focuses on concerns about job loss.
GOOGLE IT: The Nation has a strong piece about a group of contractors who were helping Google train its AI tools and what happened after they started organizing. Multiple workers at the contracting company, GlobalLogic, were fired in what the Nation paints as pretty classic anti-union retaliation. The NLRB, which would be the agency in charge of regulating these issues, has lacked a quorum, and with it the ability to function, since Trump fired member Gwynne Wilcox a week after he took office.
LEARNED TO CODE: The NYT has a good piece about how AI coding tools at Amazon are making the jobs of human coders more like warehouse work — upping production speeds and output.
The Tech Workers Coalition is still going — and has two onboarding sessions in June.
See you next week!