
Artificial Intelligence and the Regulatory Gap: Is it Time for a New Approach?

Updated: Apr 14

Balancing Artificial Intelligence (AI) Innovation and Responsibility.

Want to hear a joke?

I asked ChatGPT (running GPT-4) to write me a joke about how slow the government is to regulate technology. Here’s what it came up with:

The government moves slower than a snail on Xanax when it comes to regulating things like cybersecurity and artificial intelligence. By the time they get around to it, we'll all be driving flying cars and communicating telepathically.

Funny, because it’s true.


Governments around the world are grappling with how to regulate AI: how to maximize its potential benefits while minimizing its potential harms.

From my perspective, the government's ability to regulate artificial intelligence (AI) is severely limited, if not entirely non-existent.

Let’s just say, judging by historical performance, I have some doubts.

History offers many examples of the government being too slow to regulate something, often with negative consequences.

Tobacco: For decades, the tobacco industry was largely unregulated in the United States, despite mounting evidence of the health risks associated with smoking. It wasn't until the mid-1960s that warning labels were required on cigarette packages, with advertising restrictions and tobacco taxes following in the years after.

Pollution: For many years, industries were allowed to pollute the environment with very few restrictions. (Many still are, or they pollute with little punishment.) The government eventually began to regulate pollution with measures such as the Clean Air Act and the Clean Water Act, but those keep getting rolled back and deregulated, and we are FAR from where we need to be on pollution and environmental regulation. The Trump Administration alone rolled back over 100 environmental safeguards.

Financial sector: The financial crisis of 2008 highlighted the dangers of a largely unregulated financial sector. Prior to the crisis, banks and other financial institutions were able to take on risky investments with few restrictions. It wasn't until after the crisis that the government passed regulations such as the Dodd-Frank Wall Street Reform and Consumer Protection Act to try to prevent similar crises in the future. And the recent Silicon Valley Bank run is proof they are still behind on finance.

Cybersecurity: As technology has advanced, cyber threats have become increasingly prevalent. However, the government has been slow to regulate cybersecurity, leaving many companies and individuals vulnerable to cyberattacks. Remember the MASSIVE OPM hack!? It was downright brutal. It wasn't until 2015 that the U.S. government passed the Cybersecurity Information Sharing Act to encourage companies to share information about cyber threats and to create a framework for responding to them.

In each case, the delay in regulation had negative consequences, whether in terms of public health, the environment, the economy, or national security.

And they are all still major problems today and lack comprehensive policy.

It is important for the government to be proactive rather than reactive in identifying and regulating emerging risks, instead of waiting until it is too late.

There are known reasons why the government can be slow to regulate, such as:

Bureaucracy: The process of creating and implementing regulations can be slow and bureaucratic, with multiple layers of review and approval required. This makes it very difficult for the government to respond quickly to emerging issues or changes in the landscape.

Political gridlock: This one is so annoying. This should not be a problem. How can literally every issue nowadays be almost exactly 50/50 all the time? There used to be things called majorities and minorities. When a political system is divided along party lines, it can be difficult to pass legislation or regulations that are supported by both sides. Political gridlock can slow down the process of creating and implementing regulations, or even prevent it altogether.

Industry influence: Industries that are subject to regulation may lobby against it, using their financial and political power to delay or water down regulations. This can be especially true in industries that are highly profitable, where there is a lot at stake in terms of profits and market share. There is also the issue of regulatory capture, where government regulators become too closely aligned with the industry they are supposed to be regulating.

Lack of expertise: Regulating complex industries or technologies requires a deep understanding of the issues involved, including the science, engineering, economics, and politics. If government regulators lack the necessary expertise, it can be difficult to create effective regulations that balance the interests of all stakeholders. AI is a highly technical and complex field that requires a deep understanding of the underlying science and engineering. Just look at the most recent five-hour-long hearing with the CEO of TikTok! It was clear the elected officials asking questions did not entirely comprehend the technology they were asking questions about.

Resistance to change: In some cases, government regulators may be resistant to change, preferring to stick with established regulations and processes rather than adapting to new challenges or technologies. This has been a barrier since the dawn of man. Eventually, people come around. But it takes years to change movements in society, and technology today is developing much faster than we are.

To make things even more difficult, these factors usually combine with each other in various ways, making it harder still for the government to keep up with emerging risks or challenges such as AI.

Regulating technology is probably the most challenging task of all, because it requires a deep understanding of the underlying science and engineering, as well as the social, economic, and political implications of the technology. And no one knows it all.

There are quite a few reasons why the government lacks the expertise needed to regulate technology.

Rapidly evolving technology: Technology is evolving at a rapid pace, and it can be difficult for government agencies to keep up with the latest advances. By the time a regulation is proposed and implemented, the technology may have already advanced beyond the scope of the regulation.

Limited resources: Government agencies may have limited resources and staff, which can make it difficult to keep up with emerging technologies and to develop regulations that account for the latest scientific and engineering developments. Many agency budgets are either appropriated ahead of time, and thus limited, or planned in advance based on projections made from current knowledge. If agencies don't know what technology is coming, they cannot plan for it ahead of time. It's a major problem.

Industry influence: In some cases, the industry being regulated may have more expertise in the technology than the government regulators themselves. This can give the industry an advantage in lobbying against or influencing regulations that could affect their profits.

Regulatory capture: Regulatory capture occurs when regulators become too closely aligned with the industry they are supposed to be regulating, often due to the revolving door between industry and government. This can result in a lack of oversight and regulation, as the regulators may be more focused on protecting the interests of the industry than the public. This happens less in tech than in other industries, partly because the tech industry knows it is ahead of its regulators.

Lack of incentives: There may be little incentive for government agencies to invest in developing expertise in new technologies, particularly if the technology is not yet widely adopted or does not pose an immediate risk to public health, safety, or welfare.

And that’s where we end up having even bigger problems.


What does that mean for tech?

The pace of technological change has been increasing exponentially in recent years. By the time regulations are put in place, the technology has often moved on to a new phase of development, making the regulations obsolete or inadequate. This is what we are currently witnessing with AI.

With the release of GPT-4, it is clear we have moved into the next phase of technology's evolution.

Second, government regulation of AI could potentially stifle innovation and slow down progress overall. The AI industry is highly competitive, and companies are constantly pushing the boundaries of what is possible.

If the government steps in with too much regulation, it could discourage companies from investing in AI research and development, which would ultimately hurt the industry and slow down progress.

In short, overly restrictive regulation could have a chilling effect on innovation.

If regulations are too onerous or restrictive, companies may be less willing to invest in AI research and development, or may be forced to spend more time and resources complying with regulations rather than innovating.

Also, government regulators do not have the technical expertise needed to effectively regulate AI. They may attempt regulation, but this could (and likely will) lead to poorly designed regulations that do more harm than good.

AI is a complex field, involving a range of technical and scientific disciplines, including computer science, mathematics, statistics, and cognitive psychology.

Again, no one knows everything.

Finally, government regulation of AI could have other unintended consequences.

  • Regulations that aim to ensure the safety and reliability of AI systems may result in overly cautious approaches that limit the capabilities of these systems.

  • Similarly, regulations that focus on protecting privacy and data security may hinder the development of new AI applications that could benefit society.

While the regulation of AI is an important issue that needs to be addressed, I have serious doubts about the government's ability to effectively regulate this rapidly advancing technology.

Perhaps instead of relying solely (or even mostly) on government regulation, we should also encourage the development of industry-led best practices and self-regulation within companies developing new AI technology and algorithms. This approach would allow the AI industry to continue to innovate and advance while also minimizing potential harms.

But what is their incentive to do so?

What steps can companies or people take to self-regulate?

For one, the biggest names in tech can work collaboratively (alongside the smaller names) to create guidelines and standards for the development and deployment of AI technology.

Developing industry-led best practices can help ensure that the technology is used responsibly and ethically. Frameworks from organizations like MITRE, NIST, and OWASP (think MITRE ATT&CK, the NIST Cybersecurity Framework, or the OWASP Top 10) are examples of best practices in technology that are widely implemented and followed by professionals and organizations.

Companies can also form self-regulatory governing bodies that are responsible for overseeing the development and deployment of AI technology within their company or industry. These bodies can create and enforce guidelines, methodologies, and standards, as well as investigate and penalize companies that violate them.

Another area where companies can self-regulate is by conducting regular internal audits of their AI technology to identify potential risks and ensure that it is being used in a responsible and ethical manner.
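To make "internal audit" concrete, here is a minimal sketch of one check an audit might include: measuring whether a model's decisions differ sharply across groups (demographic parity). All names, data, and any threshold a company would flag against are illustrative assumptions, not a standard audit procedure.

```python
# Sketch of one internal-audit check: demographic parity across groups.
# The data and function names are hypothetical, for illustration only.

def selection_rates(outcomes, groups):
    """Fraction of positive (e.g., approved) outcomes per group."""
    totals, positives = {}, {}
    for outcome, group in zip(outcomes, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(outcomes, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(outcomes, groups)
    return max(rates.values()) - min(rates.values())

# Example: a model's yes/no decisions for applicants in two cohorts.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
cohorts = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"Parity gap: {demographic_parity_gap(decisions, cohorts):.2f}")
```

An audit team could run checks like this on a schedule and escalate when the gap exceeds whatever threshold the company's ethics guidelines define.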

Finally, they can openly share transparency and accountability reports. Companies can be transparent about their use of AI technology and provide clear explanations of how it works and what data it collects. They can also establish mechanisms for users to report concerns or violations of ethical standards.
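One lightweight form such a transparency report could take is a "model card": a structured, machine-readable summary published alongside an AI system. The field names and values below are assumptions loosely modeled on published model-card practice, not a prescribed schema.

```python
import json

# Hypothetical minimal model card a company might publish for transparency.
# Every field name and value here is illustrative, not a required format.
model_card = {
    "model_name": "example-classifier-v1",
    "intended_use": "Spam filtering for internal email",
    "out_of_scope_uses": ["credit decisions", "hiring"],
    "training_data": "Internal email corpus, 2020-2022, PII removed",
    "known_limitations": ["English-only", "accuracy degrades on short messages"],
    "contact_for_concerns": "ai-ethics@example.com",
}

# Publishing it as JSON lets users, auditors, and tooling all read the same record.
print(json.dumps(model_card, indent=2))
```

The point is less the format than the commitment: a published record of what the system is for, what it was trained on, and where to report problems.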

These steps already exist across multiple industries, not just in tech. However, AI technology has already gotten ahead of us.

All I know is this: the typical policymaking process will not be effective here.

What do you think? Any suggestions for how to proceed with AI?



