Recent advances in machine learning technologies such as ChatGPT have left lawmakers and regulators scrambling. As with many technologies with paradigm-shaping potential, the risks are often hypothetical while the benefits are tangible.
Legislators should be hesitant to pass broad, overly burdensome regulations that hamper the development of new technologies. Transparency about new technologies is a better solution than heavy-handed restrictions: it facilitates informed consumer choice and encourages the development of better products and tools.
With the incredibly rapid rise in popularity of tools like ChatGPT, questions about the potential risks of machine learning and artificial intelligence have prompted regulatory action. The Federal Trade Commission recently requested information from OpenAI, the creator of ChatGPT, with the intent of preventing fraud and deception. The FTC’s concern is that programs like ChatGPT may generate “false, misleading or disparaging statements” that cause reputational harm.
Systems like ChatGPT are in the early days of development, and there are many examples of them generating false information, but that does not mean regulation provides a better alternative. Thankfully, companies like OpenAI have been clear about the current limitations of their technology so that consumers understand how reliable their products are.
President Biden put forward a more comprehensive proposal in his Blueprint for an AI Bill of Rights, and his staff regularly meets to discuss AI regulation. Proposals are expected soon, with Sen. Chuck Schumer, D-New York, stating that he expects AI legislation within “months.”
Recently, the threat of regulation garnered the cooperation of seven AI companies, which met with Biden and voluntarily agreed to follow certain safeguards. It remains to be seen how the specifics will play out, but the agreements focus on accepting security testing, using watermarks to identify AI-generated content, reporting on capabilities and limitations, and researching the risks of the technology.
When considering regulations on machine learning and AI, lawmakers should be careful not to hinder the development of a powerful new tool, and the agreement the companies made with the administration is a better step than heavy-handed regulation. Regulations that limit innovation will slow the development of these technologies, preventing them from becoming more reliable and ultimately limiting the tools and products consumers have at their disposal.
The EU’s AI regulations limit innovation in ways that the United States would do well not to emulate. Educational and vocational tools offer potentially high rewards, such as more effective teaching methods, yet they are heavily restricted under EU regulations.
The EU also restricts the use of machine learning in aspects of private businesses’ management of employees, potentially reducing future efficiency. By preventing companies from operating more efficiently, these rules leave potential cost savings unrealized, harming consumers’ interests.
Even for predictive language models like ChatGPT, EU regulations would require companies not only to disclose that content was AI-generated but also to provide a summary of the materials used to train their AI systems, which could discourage future investment and development out of fear of weak intellectual property rights.
While transparency is crucial, requiring companies to tell competitors how their systems work discourages investment in improving those systems and incentivizes piggybacking on other companies’ work instead.
Justin Leventhal is a senior policy analyst for the American Consumer Institute.