Biden’s AI Order: Destructive, Dangerous, Possibly Unconstitutional

Craig Huey | Big Tech, Current Events, Government, Congress, and Politics, Politics

Working behind the scenes with former President Barack Obama, President Biden collaborated with tech companies to shape Artificial Intelligence (AI) strategy and develop policies to regulate the new technology.

In his latest executive order (EO), Biden imposed federal regulation on the development of AI technology. The executive order invokes emergency powers drawn from the Defense Production Act, a law enacted during the Korean War.

The Defense Production Act “allows the Department of Defense (DoD) to utilize its Defense Production Act (DPA) Title III authorities to invest in advanced microelectronics capacity and ensure the production of state-of-the-art integrated circuits in the United States.”

President Biden’s administration says he is using this emergency authority, based on his climate change policies (“climate is an existential threat”), to invoke the Defense Production Act (DPA). But it’s really a power grab: federal control of AI for central planning purposes, and more power and control for the federal government.

The EO requires that all AI developers share their safety test results and other crucial information with the federal government. Government AI “regulators” will oversee any “foundation model” that “poses a serious risk to national security, national economic security, or national public health and safety” by requiring developers to report the results of “red-team safety tests” to the Secretary of Commerce.

Foundation models such as OpenAI’s ChatGPT, Google’s PaLM 2, and Meta’s Llama 2 will be “attacked” by red teams, who will try to hack these AI systems to uncover weaknesses, biases, and security flaws, even though tech companies already run their own red teams to test their systems.

AI developers must also work with the National Institute of Standards and Technology to set up additional safety standards. Meeting all of the government’s requirements will only slow down the process of testing the safety and security of these systems.

These new regulations will only result in squeezing out smaller AI developers and competitors who can’t afford the costs of complying with the new EO.

The other problem with the government getting involved is the addition of hundreds of bureaucrats who will undoubtedly slow technological advancement to a snail’s pace. Imagine how long it takes to bring a new drug or product to market: 7 to 12 years for drugs and medical devices.

This could slow down AI technology for advancing medicine or military defense, placing us in danger, especially when Communist China, Russia, Iran, North Korea, and other enemy states have no such regulations on AI development.

Biden’s executive order also directs the Department of Commerce to develop standards for watermarking AI-generated images, audio, text, and video, even though AI developers are already doing that.

The executive order also instructs the Department of Commerce to “address” any job displacement or “disruption” caused by AI technology.

More government regulation and bureaucrats will only weaken America’s tech sector and hamper or destroy innovation. It also puts any investment in AI at risk of being shut down at the whim of a bunch of unelected bureaucrats.

At the end of the day, we are only beginning to unpack the true scope of Biden’s executive order.

What we can see so far is:

  • Businesses that rely on computer data to “feed” AI systems may not be able to do so.
  • Greater transparency and opt-in/opt-out requirements.
  • AI marketing oversight and standards, creating huge expenses for AI developers and for AI used in business applications, including for government audits.
  • Government assistance programs, with the government picking winners and losers based on its criteria: race, gender, etc.
  • More rigorous standards and guidelines with more and more government regulations (look at the government’s involvement in healthcare, for example).
  • Anti-discrimination barriers (the government gets to decide who is being discriminated against and who is doing the discriminating).

All of this will increase the costs of AI development, decrease investment and development, and benefit larger companies that can absorb the cost of compliance.
