Palantir whistleblower: the world is way too complicated to model
He says we need community-based software solutions rather than monopolistic companies with centralized "operating systems"
In 2021, Juan Sebastián Pinto went to a monastery for several days to sit in silence and contemplate the pros and cons of a job opportunity. The job in question was a content strategist role at a company called Palantir.
Pinto ended up taking the job, but once there, he saw some of his worst fears about the company realized. Hard Reset spoke to Pinto about what he saw, why he left, and how we should start thinking about artificial intelligence’s impact on our lives when our siloed data can be connected.
AS: What is your background, and what was your trajectory before Palantir?
JP: I’m an immigrant, originally from Ecuador. I studied English and Philosophy and got my master’s degree at Penn. I wanted to write because that’s what I was best at, but I was also very curious about what was going on in the tech world.
I’ve used both writing and design disciplines in my career, in partnership with architecture firms and advertising companies. I also published criticism of how capitalism shapes contemporary architecture: the leasing, the surveillance, and the commercialized visions of the future.
But people today no longer spend most of their time focused on real-world architecture—now, they spend time in digital architecture. I’m fascinated by the power those structures hold over our behavior, and by how screens and phones shape our connections to other people.
AS: When did you start working with Palantir?
JP: In 2021. At the time, I was working at an architecture firm, and I’d heard good and bad things about Palantir. I’d heard that [CEO Alex] Karp was a Marxist, that he understood things about critical power theory. That appealed to me. But I was also aware of Palantir’s ICE contracts.
I went to a monastery in Colorado for a week to consider the decision; I wanted to take some time off to unplug. It was a real hippie town, and after bringing a six-pack to share with some people I met at the local farmers market, I stumbled into this guy who was one of the first White House advisors on the environment, one of the founders of the movement. After talking to him about ecology, I realized that Palantir could be a great opportunity to understand systems thinking, simulations, and data. I thought that inside a big company I could learn how the world works, in order to make it better.
I was also attracted to the work Palantir did in various humanitarian fields. It seemed like a great, eye-opening opportunity in an environment where I could meet fascinating people. And I did come into contact with some of the most consequential technologies and philosophies.
AS: What was it like at Palantir when you first started?
JP: I started as a content strategist, tasked with developing internal and external communications across the board, in almost every department and in many different industries, including automotive, mobility, and the government side of the business. I worked with other designers and video producers to create videos, blog posts, white papers, and marketing materials. I also worked hand-in-hand with sales and technical teams to communicate how big data and Palantir’s platforms could plug into a range of different use cases.
For a time, I worked with public health officials to try and create solutions for the government in healthcare. Service-oriented and well-meaning people were involved. There was some good to it.
AS: When did you start to feel uncomfortable about some of your work?
JP: There were always indications that Palantir had huge opportunities to sell more of its services to governments. But I started to get the sense that some of my work could be used unethically while marketing the partnerships Palantir had with the automotive industry. I realized how much connected information modern vehicles collect, and how easily people could be physically tracked through that data.
I also worked with the defense teams, and saw how artificial intelligence was being used to wage warfare through sensors, surveillance, computers, and automated decisions. At that stage of human-augmented warfare there are a lot of ethical and moral implications.
AS: How have the problems at Palantir intensified since you left?
JP: After I left Palantir, the automakers I had worked with were questioned by the Senate for sharing private connected-vehicle data without a warrant. Then, after October 7th, I saw Palantir turn aggressively toward supporting Israel.
And outside of the Israel-Palestine conflict, Palantir has been implemented in many situations without the understanding of its users or stakeholders. Palantir has been used by England’s National Health Service and police systems, but critics there have spoken out about the problem of relying on a single vendor or platform to use, access, and integrate all this data. They believe that homegrown solutions built by the people who work day in and day out in hospitals matter more than blanket solutions.
Palantir is often pushed on institutions in a subversive way, via deep contracts, free software trials, and emergency situations. Institutions then become reliant on it, leading to vendor lock-in.
For example, Germany is considering using Palantir temporarily to manage police data across its different states. The ability to integrate all that data becomes invaluable, and it’s difficult to remove or change a vendor once it’s established. This leads to permanent changes in police practices without democratic oversight.
AS: What concerns you about the U.S. government’s increasing connection to platforms like Palantir?
JP: What’s most concerning is to see this administration defy courts, illegally detain deportees, and talk about suspending habeas corpus.
The government says that they simply use the technology Palantir presents to them as-is, while Palantir claims the government is responsible for the ethical implementation of their technology. In my opinion, everyone is obfuscating responsibility. These technologies can and will be used to continue humanitarian crimes.
We have to admit that we are streamlining commercially available data on citizens. The Office of the Director of National Intelligence is building an Intelligence Data Consortium, a one-stop-shop with vast amounts of information regarding Americans and their private activities. What happens when the government is able to connect siloed information systems and use them to target opponents and political dissidents?
AS: What do you think about tech workers’ movements, the idea that they are less powerful than in years past because of this administration and other macro forces?
JP: The trend across the industry is greater worker suppression: workers have less leverage over their employment now, and employers implement their policies more forcefully.
In general, I feel disappointed by the industry’s rhetoric and how it’s being pushed on the general public. There needs to be greater discourse on the consequences of tech for our civil rights. I find it difficult to return to that industry.
AS: What’s next for you, if not returning to the tech industry?
JP: I want to educate people on how these things work, as well as support legislation and move into research. I’m building a library on AI ethics, philosophy, and environmentalism, and I plan to stand up against illegal and unethical deportations.
I’m also very interested in strengthening whistleblower protections. States need to keep resisting the push to give the federal government full control over AI policy, and they should stand up as examples of what ethical AI could look like. Unfortunately, the latest budget bill that just passed restricts states’ rights to set their own AI policies for ten years. There are ongoing challenges to prevent this clause from becoming law, including from many civil rights organizations and the National Association of Attorneys General. Ultimately, Congress has to reject the AI moratorium language added to the budget reconciliation bill.
Governance by algorithm at this scale poses an immediate risk: decisions that used to be made by humans are being delegated en masse to unproven technologies.
AS: What are the risks you see when it comes to AI’s ubiquity?
JP: I often think about President Dwight Eisenhower’s farewell address: he warned us about military theories, tools, and technologies spreading through the fabric of American society. Such technologies had already been used in the commercial sector to violate civil rights; for example, IBM’s tabulation machines were used at every stage of the Holocaust.
The world is way too complicated to model. We need to think about individuals, and create solutions from the ground up.
Of course there is a bias in data and in the types of information that you collect. There are copyright issues. And the designers of the algorithm are not designing that algorithm with everyone in mind.
For example, say at the border you’re using a big data system designed by people who only allow for two categories of gender. A transgender person could have issues traveling because the platform’s design does not account for the existence of their category.
There are also issues with hallucinations and inaccuracies; AI is still very error-prone and untested. These hallucinations and errors can be used to justify actions or to deflect responsibility for those actions. In the worst case, that means the removal of people’s rights or the deportation of people to a Salvadoran prison. Abuses like these are happening today.
Today, the narrative is that AI will transform your institutions and make better use of your time. That’s a fantasy, considering the real applications. Most of the time, AI is used as an excuse to help an executive establish surveillance systems in an organization: to fire and remove employees, track customers, and predict their behavior.
AS: Are there any solutions to this massive and seemingly intractable problem that we should be thinking about?
JP: We need software solutions that are community-based and community-built, rather than run by monopolistic companies that want to serve as the government’s centralized operating system.
And we need to return our decision-making to humans and federal employees, rather than thinking that AI tools made by for-profit corporations can make those decisions.
We are delegating a lot of decisions to the people who designed these systems, meaning the richest people on Earth. They have spoken candidly about their objective to advance technology at all costs, supposedly for Western democracy. But these technologies are increasingly authoritarian and unethical.
The design decisions should ideally be developed through a democratic process, and in order to do that, we need more education and support for AI guardrails. The administration is making this impossible, so the responsibility falls on people to advocate for their rights and to reconsider engagement with these technologies in their private and civic lives.
Hope you enjoyed the conversation, and make sure to check out JP’s writing. Here’s what we’re reading this week…
MINING FOR DATA: Journalist Karen Hao writes about how Indigenous communities in Chile are seeing accelerated copper and lithium extraction to build the power plants and power lines that support generative AI companies. Indigenous communities have always mined copper; the problem now is the scale at which companies want to extract.
BEYOND SATIRE: The creator of Succession has a new film out on HBO called Mountainhead, about three billionaires gathering in the home of a friend they call Souper, short for soup kitchen, because he is a mere centimillionaire. The New York Times says it’s a movie about “men who feel they own the future.”
UNREQUITED LOVE? Zuck has to cozy up to Trump and MAGA, but will MAGA ever love him back?
JOBHUNTGPT: Knowledge workers like security analysts, communication specialists, and customer service representatives are getting laid off and replaced by AI. Many feel discarded, and some are turning to the very technology that replaced them to remain marketable in what’s next.
WHEN CLOUD MEETS CEMENT: How are data centers being built with community buy-in (or not) in places like Chile, Missouri, the Netherlands, Mexico, and South Africa? A new report measures the environmental and health impacts, community engagement strategies, and advocacy efforts in areas where data centers built to support AI and cloud computing are being developed.
See you next week!