
Balancing AI Innovation and Regulation: How the US Commerce Department’s New Rules Will Shape the Future

An examination of the US Commerce Department's newly proposed artificial intelligence rules and how they aim to protect national security without stifling innovation, along with the potential challenges and benefits these regulations may bring to the AI industry.

Reading Time: 4 minutes

Balancing AI Innovation and Regulation

Illustration: Lenka Tomasevic

The year of AI

The potential of Artificial Intelligence (AI) to transform many aspects of our lives, from business processes and supply chains to healthcare and employment, is immense, as evidenced by the rapid success of ChatGPT (or Chat Generative Pre-Trained Transformer).

Launched only two months earlier, in late November 2022, this language-model chatbot already had an estimated 100 million users by January 2023. AI also has a great deal of potential to revolutionize, for example, the medical field, from preventative health apps and wearables to drafting research papers and summarizing patient health records. From diagnosis and clinical decision-making to hospital data management, AI in medicine is set to unlock vast opportunities.

Meanwhile, striking a balance between encouraging creativity and ensuring responsible use of AI requires a thorough understanding of generative models’ strengths and limitations, as well as regulations for their ethical application. This tricky balancing act calls for collaboration between product makers, service providers, and users.

Why we need regulation

AI presents an array of far-reaching benefits with its potential to amplify the capabilities of existing technology. When properly trained, algorithms can make unbiased decisions, speed up and streamline processes, solve complex problems, and offer many other potential advantages to society. However, AI also presents difficulties and risks, including cyber-attacks, the illegitimate use of autonomous weapons, the mishandling of models due to a lack of proper training, and more.

The United States has been looking to pass formal regulations that provide clear guidance for industry and protect society while also leaving room for innovation. Government policy should prevent the abuse and misuse of these technologies, while outright bans should be avoided so that industry can collaborate with governments and academia on sustainable solutions.

Artificial Intelligence

Photo illustration: Freepik

Investment in infrastructure and workforce training is essential, and setting proper regulations on AI development and use will make the world safer. US agencies such as the National Institute of Standards and Technology (NIST), the Department of Defense (DoD), and the Department of Homeland Security (DHS) should set baseline safety standards and protocols to guard against risks such as cybersecurity threats, misinformation, and disinformation.

Establishing the rules

In early April 2023, the US took its first tentative steps toward AI regulation. According to a report from the Guardian, the United States Department of Commerce issued a request for public comment on how to develop accountability mechanisms for artificial intelligence, with the aim of giving American policymakers a sound framework for evaluating the technology.

The “Blueprint for an AI Bill of Rights” presented by the Biden administration lays out five key principles for businesses to bear in mind when creating Artificial Intelligence (AI) systems, including safeguarding data privacy, preventing algorithmic bias, and maintaining clarity about how automated technologies are used and deployed. To enhance security and foster public confidence, NIST introduced the AI Risk Management Framework, a set of voluntary guidelines that organizations can adopt to minimize potential hazards.

To date, chatbots and other AI technologies have been widely embraced by businesses across numerous sectors, despite concerns about privacy, misinformation, and the lack of insight into how the models are trained. This has been made possible by the absence of any restrictive federal regulation or framework governing the release of these AI tools, allowing solutions such as ChatGPT to be adopted rapidly.

Where is Europe in all of this?

As AI technology becomes more widespread, it is essential to implement regulations that safeguard personal and confidential data and protect against hazards such as bias and false information. OpenAI’s ChatGPT and Google’s Bard, for instance, have already been shown to generate incorrect information. To address these concerns, the European Commission proposed the AI Act in April 2021 to govern AI within the European Union.

As for the current situation, the Council of the EU has adopted a common position on the Act, and the European Parliament is set to vote on the draft AI Act shortly. Negotiations among the Commission, Parliament, and Council will follow, with the Act expected to be adopted by the end of 2023. In doing so, the EU is setting a global benchmark that companies in other nations trading with the Union will also need to comply with.

Balance is the key

Rather than suppressing creativity, the regulation of Artificial Intelligence should be seen as a way to promote responsible innovation that benefits everyone. Crafting a balanced approach to innovation and responsibility requires a unified effort that includes all stakeholders.

AI world

Photo illustration: Freepik

It is imperative that AI developers strive to create AI with safety, transparency, and consideration for the wider public in mind. Policymakers should be proactively engaged in forming regulations that address the risks of AI. Civil society organizations have a crucial role to play in ensuring that regulations uphold the values of fairness and human rights.

Furthermore, the public should be educated about AI and its potential implications for society so that people can take part in the regulatory process. Lastly, the global implications of regulating AI should not be overlooked. As AI is not confined to any one nation or region, international collaboration is essential for establishing a unified system of AI regulation.

Shaping the future with regulation

The EU and the US have an immense task before them in creating rules for AI, as these regulations could well set the global standard given the sheer number of AI companies based and operating in the EU and US. It is now up to policymakers and leaders to craft rules that improve people’s lives and promote equity.

Negotiators might be tempted to embrace AI in the hope of delivering cost savings or reviving the economy. But taking shortcuts in public services, or deploying AI without any public benefit, can be detrimental to our way of life and the liberties we cherish. The EU and the US must ask how our societies can use AI to realize our rights and freedoms.

It seems that most of the regulation will focus on transparency and the ethical use of AI. There may also be further developments that “neuter” AI as we know it today and as we envisioned it developing. That approach may in fact be logical, given fears of AI becoming too powerful, which in one sense it already is. At the moment it is difficult to predict exactly what the future of AI will look like, but it will be interesting to follow its development and our interactions with it.

Conclusion 

AI’s rise offers both advantages and drawbacks for our society. Its transformative power and capacity to improve our lives are undeniable, yet concerns about its effects on employment, bias, security, and safety are also legitimate. To ensure AI is developed and used properly, we need inclusive, ethical, and universal policy. Such regulation should be created with involvement from every relevant party to effectively balance progress and responsibility and make sure AI benefits all of us.

Dino Kurbegović is a project coordinator and an investor and technology enthusiast with years of experience in managing complex projects. His journey into content writing began in 2014, covering finance, investing, crypto, technology and complex technical topics.
