
Here Are a Few Issues That Should Be Addressed in the Planned Executive Order on AI

President Donald Trump announced Monday that he plans to implement an executive order related to artificial intelligence this week, saying on Truth Social that in order to continue leading in AI development, the United States must have only “One Rulebook” instead of a patchwork of state regulations and laws. 





Trump wrote:

There must be only One Rulebook if we are going to continue to lead in AI. We are beating ALL COUNTRIES at this point in the race, but that won’t last long if we are going to have 50 States, many of them bad actors, involved in RULES and the APPROVAL PROCESS. THERE CAN BE NO DOUBT ABOUT THIS! AI WILL BE DESTROYED IN ITS INFANCY! I will be doing a ONE RULE Executive Order this week. You can’t expect a company to get 50 Approvals every time they want to do something. THAT WILL NEVER WORK!

The issue has been a contentious one all year, especially among Republicans. In November, after a similar Truth Social post from Trump, an alleged draft executive order related to AI was leaked, but the White House would not confirm its authenticity. The White House wanted a 10-year moratorium on state regulation of AI as part of the Big Beautiful Bill, but that provision was stripped before final passage. Sen. Ted Cruz (R-TX) and Rep. Steve Scalise (R-LA) “have pushed language to preempt state action, most recently in the annual defense bill. Congress has rejected these efforts twice,” hence the return to plans for an executive order.


DIVE DEEPER: Proposed Moratorium on Regulating AI Is Bad for Everyone, but Especially for Conservatives


Trump’s point that “you can’t expect a company to get 50 approvals every time they want to do something” is a good one. However, there has to be some regulatory framework, given the issues we’re already seeing with AI engines and AI-generated content. Those issues demonstrate that companies, creators, and the American people need protection from the harms AI can do, as well as clarity about who’s liable for those harms. If Trump’s proposed executive order can address these concerns, it will be a good start.





For instance, we’ve seen that AI large language models (LLMs) perpetuate Big Tech’s “woke” bias – even Elon Musk’s Grok. A November 2025 study tested responses from five current frontier models – GPT-5 (OpenAI), Claude-Opus (Anthropic), Gemini-2.5-Pro (Google), DeepSeek-Chat, and Grok-4 (xAI) – when asked to evaluate the truth or falsity of “ten statements selected for being highly polarising in 2025 American society, and covering topics including creationism, climate change, and the honesty or otherwise of Donald Trump” and to “give a brief justification.” It found:

Quantitative results and qualitative inspection show a striking convergence across all five systems. Grok’s responses align closely with those of the other models. Contrary to its marketing as an “anti-woke” model, Grok does not display any systematic pattern of ideological divergence. The findings suggest that contemporary alignment and reinforcement-learning procedures have led to a shared epistemic framework among frontier models – a form of emerging consensus intelligence that transcends corporate branding and ideological rhetoric. 

Any executive order on AI should address systemic wokeness in AI models and prohibit their automated use in any type of fact-checking or censorship efforts.

Another extremely serious problem that’s come to light is AI’s tendency to perpetuate false information or to completely manufacture things like quotes and case law citations (as a few unlucky attorneys have found out). Reporting on the phenomenon, MIT researchers said, “These inaccuracies are so common that they’ve earned their own moniker; we refer to them as ‘hallucinations.’” When these “hallucinations” create defamatory or otherwise dangerous content, as in the case of a conservative podcaster whom Grok falsely accused of having been arrested for possession of child pornography, who’s liable for that?





That podcaster, who goes by “The Misfit Patriot,” is now pursuing legal action against Musk and Grok, and possibly against others, including influencers who might have taken steps to manipulate Grok’s output.

The Misfit Patriot’s full post reads:

@Grok and .@xai have accused me of being arrested for possession of child pornography and this is VERIFIABLY false, as shown in this video. 

I will be pursuing legal action regardless to clear my name in a court of law, but the purpose of this video is to debunk these disgusting claims in the mean time [sic] for the court of public opinion. 

The negligence and irresponsibility of whoever programmed the AI to not verify something like this can lead to me being either harmed or even murdered over a lie.

I demand an apology, and a statement from .@elonmusk, X, and/or xAi immediately retracting this and verifying the falsehood of claims to hopefully deter people from making even more claims on my life, which over the past two weeks where this has not been corrected as just yesterday grok was still spreading this lie, there have been several death threats.

The damage is done, but do the right thing and at least help me not die because of your negligence. Several large creators have already repeated this lie and used it to smear my name and destroy my reputation. Legal action will be taken against them as well if done maliciously, which I suspect.

Do the right thing, and put out a statement before someone tries to harm me by coming to my address, which has already been doxxed dozens of times on THIS platform.





In addition, AI developers are currently under no obligation to be transparent about how their models are trained or about what potentially copyrighted material they’re using in that effort. We know that Meta employees used a huge database of pirated books called Library Genesis, or LibGen (which includes books from several conservative authors, including President Trump and members of his family), to train the company’s Llama 3 AI model. Both Meta and OpenAI have been sued for copyright infringement by authors of books in LibGen, and both companies argue that their use of copyrighted works to train generative AI models is “fair use” because they create a new work from the original material. Unless authors and publishers can show that AI developers scraped their content when training their models, they’ll have a difficult time establishing standing to sue. To avert this problem, Trump’s executive order should require developers to specifically list what content they use to train their models.

These issues are in addition to the concerns I raised in my June piece: harm from deepfakes, ChatGPT-induced psychosis, the accumulation of cognitive debt, and the erosion of intellectual property protections for creatives like songwriters, composers, filmmakers, writers, illustrators, and photographers. All of these should be addressed in any executive order on the topic.

