Whenever a new technology emerges, it inspires a mix of excitement and fear. The bright new possibilities encourage some, while others focus only on potential abuses. If you’re of a more pessimistic frame of mind, the idea of a government-created task force to oversee the development of artificial intelligence may strike you as a good idea. In reality, creating such a task force is more likely to hinder AI’s potential to do good than to mitigate its harms.
Todd Myers’ recently published legislative memo highlights several key points: the innovation that would be lost, the good AI is already doing, and the likelihood of regulatory abuse by a task force that lacks AI expertise. This important and excellent piece is available to read here.
In addition to the main arguments in the memo above, there are three supplementary points to consider about an AI regulatory body of this kind:
- Security in AI
- Decentralized development
- Costs of falling behind
It’s important to understand that the recent explosion of AI has come from innovation in both software and hardware. The core techniques behind modern AI have existed on the software side for many years; only recently have chipsets become powerful enough to run them at the scale we see today. While many people think of AI as a purely digital, software landscape, hardware capabilities are just as important to making it work.
Security in AI
One common objection to permissionless innovation in the AI sphere is the concern about security. Many worry that AI development could lead to security weaknesses in machinery, databases, and other systems that integrate AI. With its added complexity, AI can create new avenues for hacking: adversarial attacks, data poisoning, and model inversion attacks, to name a few. While security is always a concern, this line of thinking neglects the reality that AI designers already have strong incentives to prioritize security. The chip manufacturers that create the hardware to run AI, such as NVIDIA, already recognize the need for strong security. Without strong security protocols, nobody would trust their hardware to run essential functions.
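To make the first of those attack types concrete, here is a minimal sketch of an adversarial (evasion) attack on a toy linear classifier, in the spirit of the fast gradient sign method. Every number in it, the weights, the input, the epsilon budget, is invented purely for illustration; real attacks target far larger models, but the principle is the same: a tiny, targeted nudge to the input flips the model’s answer.

```python
import numpy as np

# Minimal, self-contained sketch of an adversarial (evasion) attack
# against a toy linear classifier, in the spirit of the fast gradient
# sign method (FGSM). All values here are invented for illustration.

rng = np.random.default_rng(0)

# Toy logistic-regression "model": sigmoid(w . x + b) = P(class 1).
w = rng.normal(size=10)
b = 0.1

def predict(x):
    """Probability the model assigns to class 1 for input x."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.normal(size=10)   # a clean input
y = 1.0                   # its true label

# Gradient of the log-loss with respect to the INPUT (not the weights);
# for logistic regression this is simply (p - y) * w.
grad_x = (predict(x) - y) * w

# FGSM step: push each feature by epsilon in the direction that
# increases the loss. The input barely changes; the prediction flips.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean prediction:       {predict(x):.3f}")
print(f"adversarial prediction: {predict(x_adv):.3f}")
print(f"max per-feature change: {np.max(np.abs(x_adv - x)):.3f}")
```

The takeaway is that vulnerabilities like this are mathematical properties of the models themselves, which is exactly why the engineers who build and sell them already have every reason to study and defend against such attacks.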
They also recognize the tradeoffs overbearing security can cause. Many of the measures protecting AI hardware take the form of physical anti-tamper protection on the chip: layers of protective shielding and coating that make it difficult to physically alter. This too needs balance, because anti-tamper protection introduces a vulnerability of its own; a bad actor need only disturb the protective layer to trip the tamper sensors and shut the chip down.
It takes a fine level of precision to balance all the factors at play. Between physical space constraints, chipset efficiency, and anti-tamper protection that doesn’t introduce new weaknesses, the companies developing AI hardware have a lot on their plate. A group of politically appointed individuals adding outdated regulations could be counterproductive to security. Even if they mandate the highest standard of security for a chipset, it won’t matter if the hardware becomes so inefficient that nobody will use the product!
Decentralized development
The idea of a state-level AI regulatory body is confusing when you consider how AI development actually happens. If the regulatory body wants to set standards for privacy, how much authority would extend to chip manufacturers, software companies, or AI-powered applications based outside the state? While Microsoft may be based in Washington, the company’s partnership with OpenAI is not an ownership stake, making it unclear whether OpenAI would even be subject to rules set up in Washington state.
If Washington sets up regulatory rules within its own borders, very little of AI development, in the big picture, would be affected. Regulations would reach companies and products in Washington state that utilize AI technology, but not their competitors in other states or countries. Rather than addressing security, privacy, and other concerns, the regulations would simply shift development outside the state.
This leads to my final point.
Costs of falling behind
While it is not certain a state task force would have the ability to affect AI development, if it does, each of the two potential impacts leads to an undesirable outcome.
First, if overbearing security regulations are created, whether physical or digital, AI chipset manufacturers and software companies risk massive inefficiencies and falling behind. Companies not subject to these regulations will offer more powerful capabilities and therefore see broader adoption. Naturally, this would render Washington’s security regulations moot, as nobody voluntarily uses less efficient technology. Companies whose products must actually be purchased and used have every motive to find the right balance between security and efficiency on their own.
If Washington stifles the growth of AI development within its borders, it risks relegating itself to irrelevance.
If, on the other hand, Washington’s regulations are powerful enough to reach beyond state lines, we risk the national AI industry becoming secondary to foreign competitors. If we stifle AI use for businesses and products within Washington state, we jeopardize our state’s leadership in technology. Washington is already alienating the tech community with anti-competitive policies, such as a capital gains income tax on successful innovators. Allowing AI use and expansion in the state only within narrow parameters set by bureaucrats would be one more incentive for the tech sector to move elsewhere. In an age when work is more mobile and decentralized than ever, it makes little sense to smother one of our most promising industries.
Artificial intelligence conjures emotional responses, but we must be rational when it comes to policy decisions. Again, I would strongly encourage everyone to read Todd Myers’ full legislative memo on this topic. Even today, Governor Inslee’s statement on generative AI regulation references DALL-E 2, a version of the image-generation model that has been out of date for four months, a telling sign of how quickly regulators fall behind the technology they would oversee. AI is already creating massive positive change and has the potential to do even more; let us not weigh it down out of fear.