What Regulations Need to Be Put in Place to Ensure the Safe Use of AI in the U.S.?

As interest in artificial intelligence (AI) has swelled since the release of ChatGPT and other generative AI tools at the end of last year, so have concerns around its use. Even top AI developers from Google and OpenAI co-signed a statement at the end of May from the non-profit Center for AI Safety that said, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Meanwhile, governments have been hard at work developing AI regulations. In June, the European Parliament voted on and approved amendments to the draft AI Act, which the European Union originally proposed in April 2021. However, the act will still need to be negotiated among and approved by member states and the European Commission before it becomes law.

For now, the act proposes a risk-assessment system that would ban intrusive and discriminatory uses of AI, including real-time remote biometric identification systems; predictive policing systems; emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. It would also require generative AI systems to comply with transparency requirements such as disclosing when content is AI generated and distinguishing deep-fake images from real ones. Penalties for companies could include fines up to €30 million or 6% of global income.

The U.S. is also working on AI regulations of its own. In June, Senator Chuck Schumer proposed his SAFE Innovation Framework, a broad plan that includes convening panels of experts to discuss potential regulations, with sessions starting in September. Based on those findings, the Senate would then start to develop bills in an attempt to formalize AI regulations.

To get a better grasp on what eventual AI regulations could and should look like, PM360 spoke with Nick Adams, Founding Partner at Differential Ventures. In addition to starting the venture capital firm focused on AI/machine learning in 2018, Adams is also a member of the cybersecurity and national security subcommittee for the National Venture Capital Association and recently briefed members of Congress on AI policy and potential regulation.

PM360: In your view, what kind of regulatory framework do you envision being put in place to protect against some of the dangers of AI that people are worried about?

Nick Adams: First of all, I do believe regulations need to be put in place, modified, and updated. I’ll start with that because my next comment may sound contrary to that, but it’s not. Regulation on its own is not likely to be successful. I believe that governments should be fostering private market solutions that can combat the bad actors. We already have a $200 billion cybersecurity industry, which could grow to $2 trillion, that exists to protect against the flaws of the internet, and I think AI will have a similar market to fix and protect against the risks from this technology.

What I am concerned about is that more regulation is likely a good thing for the bigger companies that have some early advantage in the AI space. If we make it so cumbersome to deploy a model, only the big companies will be able to afford the process of getting through the regulatory oversight. Just the other day, seven of the largest AI companies agreed to comply with voluntary safeguards that don’t seem to have any meaningful consequences for non-compliance but give the illusion of a higher standard of oversight, one that could be prohibitive for startups.

To that end, I also think that’s why we’re hearing comments about putting a “pause” on AI development from business leaders like Elon Musk, who is investing in new companies in this space, as well as from Google and Microsoft, which currently have an advantage in the market. You probably saw the leaked document from Google saying that nobody has an advantage in this technology, and I think they’re trying to build a regulatory moat to make it harder for startups that are working very hard right now to deploy solutions that could eat into some of their market share. I believe that the real innovation in this space, both in terms of advancing AI capabilities and in making it easier and safer to deploy, will come from a competitive marketplace that includes the startup community building solutions to challenge the large incumbents. I think that’s the balance our elected leaders and policymakers need to strike.

With that said, the starting point for any regulations should be a clear, consistent, and enforceable data privacy policy. Europe has a pretty significant advantage right now with GDPR. The U.S. has fumbled trying to get to a standard national policy. We were actually close, but California and Nancy Pelosi blocked a national policy in this area because they felt like they were leading the nation with the California Consumer Privacy Act (CCPA). As a result, we have an inconsistent policy with unclear enforcement. And the challenge on the enforcement side will be that if the penalties aren’t meaningful enough, they will be like a speed bump to companies like Facebook and Google compared to the revenue they realize from using data promiscuously and, in the future, deploying algorithms without a clear set of rules in place.

From there, I think most of the existing regulatory bodies, such as the FDA, FTC, and Consumer Financial Protection Bureau (CFPB), will cover the bad actors in use cases that come out of AI as long as we know what data can be used, how it can be used, and what the penalties are for using it the wrong way. Once we integrate that into the existing regulatory framework, with appropriate updates for the digital age, we’ll find the real outliers and at least be in a position to regulate them appropriately.

In terms of penalties, you mentioned they need to be meaningful. What would they need to be to prevent larger companies from violating any policies put in place?

It’s a good question. When you have the kind of money that some of these organizations have, financial penalties are not always meaningful. So I think bad actors should get the equivalent of a suspension in the sports world: if they abuse their power or violate data privacy policy, they are disallowed from selling a certain product into a certain community for a period of time. That’s one way you can go about this, where the impact on revenue will be extremely substantial for a period of time. But that is probably reserved for the worst-case scenario, and is as far as I can imagine regulation going. Prior to reaching that level, it would be more along the lines of increased fines and a stronger ability to enforce them by clearly defining what constitutes a misuse of data privacy.

As I mentioned before, I don’t think regulation on its own is going to be successful for this exact reason.

What are the clearer guidelines that you would like to see put in place? You mentioned GDPR; what can the U.S. learn from that? And what other guidelines or restrictions do you think are necessary?

I think GDPR was a good starting point for data management. We will need more clarity around data regarding what you can keep, for how long, and how it can be used, because that is still a bit of a gray area. Then on the actual AI side, the National Institute of Standards and Technology (NIST) built a comprehensive starting point for an AI risk framework. Those two things combined are a good foundation. I think implementing standards similar to what we have seen in other parts of technology would be helpful in AI, such as ISO standards and SOC 2 compliance. For instance, I can see us reaching a place where both the suppliers of AI technology and the consumers of AI technology agree to a standard of AI certification that resembles SOC 2 certification.

Going beyond regulation, are there any other kinds of infrastructure or guardrails that need to be put in place to ensure AI is used properly? For instance, to prevent things like data bias from influencing the output of any AI algorithm.

There is potentially a regulatory component to that because New York City has a hiring law that recently went into effect requiring any company that uses an AI model to inform its hiring process to disclose that it is using such technology and to put the model through an annual audit. But the reality is that getting AI to work properly is also a business requirement. AI adoption has previously been slow because of the historical philosophy of “crap in, crap out.” Forgive the saying, but if you have crappy data and you build and train a model on it, then your outputs aren’t going to be useful or will just be flat-out wrong. The repercussion is that business performance will also suffer or not generate a return on investment. So yes, I do think we need to be careful of some of the societal bias that can come out of using AI to automate decisioning, but I also think it is mostly a business performance question.

When might we actually see regulations put in place in the U.S.?

We’re coming up on election season pretty quickly, so my sense is that in the fall or winter we’ll see real development on some sort of policy here. The other reason I think it will happen sooner rather than later is that what I’m constantly hearing is that the U.S. is extremely paranoid about China and its development of AI. More specifically, whether China is going to be the one to capture the lion’s share of this technology, because when we do get AI technology working well at scale it will have a huge economic effect for the winners. That is really driving a lot of action on the part of our policymakers. So I think something will happen during late ’23 or early to mid-2024, so that one party or the other, or both parties, can hang their hat on their involvement in getting that policy pushed through.