AI Regulation, Global Governance and Challenges


Daily Hodl · 2024/11/12 16:00
By Daniel Keller
HodlX Guest Post

The breakneck pace of innovation in AI technologies over the past few years has continued to provoke a spectrum of reactions, ranging from curiosity and enthusiasm to concern and outright fear.

One thing, however, is fairly certain – there is an ongoing global race for AI dominance.

On the one hand, this is fueled by the comparative drop in computing costs, and on the other, by the rapid adoption and application of AI tools by users in varying capacities.

Business owners, companies and professionals in different sectors are coming to terms with the massive potential for growth, cost efficiency, reduction in human error and improved profit margins that AI offers.

At the same time, the risks and inherent dangers of ‘Wild West’ AI have become so apparent that regulation and AI governance are now a necessity.

The state of AI

The launch of ChatGPT by OpenAI in 2022 was a wake-up call for innovators in the AI space.

Prior to its release, companies like Google, Microsoft and Meta were already on the AI train, but little of that work had found success, especially in the public domain.

BlenderBot 3 was the subject of harsh criticism, Galactica had to be pulled down after three days and Tay didn’t survive 24 hours on Twitter.

The commercial success of ChatGPT unleashed a wave of AI technologies with a new focus on tools and applications that allowed for direct user interaction.

Google launched Bard (now Gemini) as a direct competitor, while Microsoft’s $13 billion investment in OpenAI allowed it to integrate generative AI technology into its search engine Bing.

Other sectors are not left out of the ‘AI revival’ – financial institutions employ AI solutions for fraud detection, with algorithms that leverage behavioral analysis, natural language processing and pattern recognition to identify fraudulent activity.
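To illustrate the pattern-recognition piece of such systems, here is a minimal sketch – a hypothetical example with made-up data and a hand-set threshold, not any institution’s actual fraud model – that flags transactions deviating sharply from a customer’s spending history:

```python
import statistics

def flag_anomalies(history, new_amounts, threshold=3.0):
    """Flag amounts more than `threshold` standard deviations
    from the mean of a customer's past transactions."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return [abs(amount - mean) > threshold * stdev for amount in new_amounts]

# Hypothetical customer: typical purchases hover around $50,
# so a sudden $4,800 charge stands out as anomalous.
past = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0]
flags = flag_anomalies(past, [50.0, 4800.0])  # → [False, True]
```

Production systems combine many such signals – device fingerprints, merchant categories, NLP over transaction descriptions – inside learned models rather than relying on a single fixed threshold.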

In the healthcare industry, AI is helping to improve patient experience and diagnosis, interpret X-ray results, manage healthcare data and more.

The need for regulation

As companies and businesses increasingly incorporate AI technology into their products, decision-making processes and service delivery, the spotlight is on the data processes behind these algorithms and the outcomes they produce.

Misinformation, perhaps, remains one of the biggest genuine risks of generative AI.

In May 2023, an image purportedly showing an explosion near the Pentagon made the rounds on social media and briefly triggered a panic reaction in the stock market.

Even more dangerous is the political effect that AI-generated news and deepfakes can cause.

Media outlets publishing real news side by side with AI pieces can spread misinformation on a large scale and erode the public’s trust in what they see or hear.

An example is a news piece uncovered by NewsGuard claiming the involvement of the Israeli prime minister in the death of his psychiatrist.

Biased AI models can also result in large-scale discrimination. A research article by the University of California uncovered racial bias in a widely used healthcare algorithm.

Since AI systems are typically used in large organizations, algorithmic discrimination can amplify bias on a scale that dwarfs the capabilities of conventional systems.

While a doomsday AI taking over human civilization might be a little too imaginative, advanced scams are not.

Malicious actors using AI can orchestrate and pull off near-perfect scams, even as it becomes harder for the public to distinguish fake from real.

AI regulation measures

Recognizing the inherent danger of unregulated AI, governments all over the world are paying closer attention to this subject.

Some have even gone ahead to release guidelines and frameworks for guiding the use of AI technology. Let’s take a look at some of them.

The EU Artificial Intelligence Act

Just as it did with the GDPR (general data protection regulation), the European Union is one of the first governmental bodies to articulate legislation on AI.

The EU AI Act “lays the foundation for the regulation of AI in the European Union” and sorts AI systems into four risk categories.

  • Unacceptable risk
  • High risk
  • Limited risk
  • Minimal risk

By applying specific requirements to AI systems based on the risk category they fall in, the EU hopes to establish an AI environment that improves trust and minimizes the negative implications of such technologies.

For example, AI systems that engage in subliminal manipulation or the biometric classification of people based on sensitive characteristics (e.g., electoral disinformation tools and biased algorithms) are deemed an unacceptable risk and prohibited.

The Act also covers other measures for post-market monitoring and information sharing.

The United States AI Executive Order

In 2023, the White House Office of Science and Technology Policy rolled out a ‘Blueprint for an AI Bill of Rights,’ and the National Institute of Standards and Technology released an ‘AI Risk Management Framework.’

However, perhaps the most important AI regulation move is President Biden’s executive order on the ‘Safe, Secure and Trustworthy Development and Use of AI.’

The order covers eight policy fields to “ensure new standards for AI safety and security, advance equity and civil rights, stand up for consumers and protect citizens’ privacy from AI-related risks,” among others.

China’s AI regulation

China started work on AI laws in 2021, beginning with the ‘New Generation AI Code of Ethics.’

Other measures like China’s Deep Synthesis Provisions, Provisions on the Management of Algorithmic Recommendations in Internet Information Services, Interim Measures for Generative Artificial Intelligence Service Management and the Personal Information Protection Law all seek to capture the position of the socialist government on the development, use and security control of AI technologies in China.

Challenges to AI regulation

  • Technology growth pace – The rapid acceleration of AI innovation makes it difficult for regulators to anticipate developments or enact a comprehensive framework. The EU AI Act attempts to address this through its tiered risk classification, but the rapid evolution of AI technology could still outpace existing regulations, demanding constant flexibility and response agility.
  • Bureaucratic confusion – AI regulations often rely on, interact with and overlap other existing laws. This can cause bureaucratic confusion in local implementation and even hinder international collaboration, especially given differences in cross-border regulatory standards and frameworks.
  • Regulation-innovation balance – Regulating AI technology may in some cases stifle innovation and limit exploratory growth. Deciding which regulatory measures are innovation-friendly, and when to apply them, can be a tricky challenge with dire consequences.

Rounding up

Effective AI regulation requires a collaborative approach involving governments, industry leaders and private sector experts to ensure ethical standards keep up with technological advancements.

However, it is also important to strike a critical balance between mitigating the potential risks of AI and leveraging the technology for the greater good of humanity.

Daniel Keller is the CEO of InFlux Technologies and has more than 25 years of IT experience in technology, healthcare and nonprofit/charity works. He successfully manages infrastructure, bridges operational gaps and effectively deploys technological projects. An entrepreneur, investor and disruptive technology advocate, Daniel has an ethos that resonates with many on the Flux Web 3.0 team – “for the people, by the people” – and is deeply involved with projects that are uplifting to humanity.

 

Generated image: DALL·E 3
