
Sen. Bernie Sanders (I-VT) has finally found a source he trusts on artificial intelligence.
Naturally, it is artificial intelligence.
In a newly released video, Sanders sits down to question Anthropic’s chatbot Claude about the dangers of AI, walking through familiar concerns about privacy, data collection, and political manipulation. All of that is expected. What is not expected is what follows: He begins treating the answers less like output from a tool responding to a prompt and more like sworn testimony from a witness no one plans to cross-examine.
And if that sounds strange, that is because it is.
Before getting into it, it is worth watching the exchange in full because the delivery carries almost as much weight as the substance.
I spoke to Anthropic’s AI agent Claude about AI collecting massive amounts of personal data and how that information is being used to violate our privacy rights.
What an AI agent says about the dangers of AI is shocking and should wake us up. pic.twitter.com/rUGwuZLAye
— Sen. Bernie Sanders (@SenSanders) March 19, 2026
Sanders opens the conversation like he is chairing a hearing, slow and deliberate, as if C-SPAN might cut in at any moment and a staffer might slide him a note reminding him that the witness is, in fact, a chatbot.
“Claude. Claude, this is Senator Bernie Sanders… I want to know… just how much of the information that AI collects is being used…”
Claude responds exactly as designed, sounding informed, slightly ominous, and very sure of itself, which is to say, exactly how you would want it to sound if you were trying to impress a room that does not plan to fact-check you in real time.
“Companies are collecting data from everywhere… your browsing history, your location… even how long you pause on a web page.”
So far, nothing groundbreaking. Anyone who has clicked “agree” on a terms of service has heard some version of this before, even if they treated it like background noise on the way to using an app.
Then Sanders leans in and presses the question meant to unlock the bigger truth.
“Why is all of this information being collected? What’s the goal here?”
Claude does not hesitate.
“Money, Senator, it’s fundamentally about profit.”
Clean, direct, politically satisfying, and just a little too perfect, the kind of answer that sounds less like analysis and more like it already knows what the room wants to hear.
Which, to be fair, it does.
Because what follows is not framed as a familiar argument delivered clearly, but something closer to a reveal, as though the machine has surfaced a hidden truth rather than simply polishing one that has been floating around for years. That shift quietly upgrades the response from “well said” to “finally confirmed.”
The chatbot did not discover anything.
It just said it with better lighting.
It is the world’s most confident group-project answer.
From there, the conversation builds into a broader warning about power and democracy, with Claude outlining how detailed data profiles can influence behavior in ways people do not fully understand.
“When companies and governments have detailed profiles… they have power over those people in ways most Americans don’t fully grasp.”
It sounds serious because it is designed to sound serious, pulling together widely discussed concerns and presenting them with the kind of clarity that makes it feel like a briefing instead of a remix. That is the product. Not revelation, not investigation, but synthesis delivered with enough confidence to pass for insight.
Instead of staying in the realm of explanation, the response is treated as confirmation, nudging the system out of its role as a tool and into something more useful politically, a validator that just happens to agree with the premise. Once that shift takes hold, the rest of the exchange does not need to prove anything. It only needs to continue.
And it does.
The assumption settles in. The tone follows it. The conclusion arrives right on schedule.
At that point, the structure matters more than the substance, because the system is no longer just answering questions. It is shaping the argument that those answers are then used to support, creating a loop that feels persuasive precisely because it never has to leave its own frame.
AI warns about AI.
The witness and the subject are the same thing.
Everyone nods.
No one asks the follow-up.
You can agree with Sanders that data collection raises real concerns, that AI will reshape how information is delivered, and that this influence can be exercised in ways people do not always see. None of that depends on Claude, and none of it becomes more true because Claude says it in complete sentences.
What changes here is not the argument, but the posture, as a generated response is elevated into something that carries the weight of evidence without doing any of the work required to become it.
And once that happens, everything that follows feels less like analysis and more like confirmation by repetition, the same idea moving forward with slightly different phrasing until it starts to sound like consensus rather than agreement with a well-written answer.
That is the part worth paying attention to.
Not the warning itself, which is familiar, but the method used to deliver it, which is doing more of the heavy lifting than anyone in the clip seems willing to acknowledge.
Because if the people preparing to regulate artificial intelligence start treating its outputs as validation rather than interpretation, the risk is not just that they misunderstand the technology.
It is that they stop questioning it.
And once that happens, the machine does not just answer the questions.
It starts setting them.