Who Will Answer for AI Ethics and Real-World Social Responsibility?

In this post:

  • Accountability for ethical AI extends to tech firms, policymakers, and many others, but marginalized voices must not be excluded. 
  • AI’s ethical development requires a focus on social contexts, and paradigm shifts are necessary for responsible innovation.
  • The use of cheap global labor in AI development raises concerns about exploitation and ethical responsibilities in the tech industry.

The recent controversies surrounding artificial intelligence (AI) have reignited the debate over who should be held accountable for ensuring the ethical development and deployment of AI. As generative AI like ChatGPT becomes increasingly mainstream, questions about the tech industry’s commitment to AI ethics and responsible innovation continue to loom large.

The hidden cost of AI

Investigations have revealed that OpenAI, a leading AI company backed by Microsoft, has been relying on low-paid overseas contractors for the sensitive task of content moderation, a crucial component in the development of “safe” systems like ChatGPT. These contractors, based in Kenya, were paid a mere $2 per hour to label disturbing texts and images, work that has reportedly caused significant psychological trauma. After the toll of this work came to light, OpenAI’s outsourcing partner severed ties with the company.

This exposé has shed light on the tech industry’s dependence on cheap labor from around the world to carry out the most taxing tasks that underpin advancements in AI. This comes at a time when prominent AI safety teams are being disbanded, despite the industry’s high-minded rhetoric about ethics.

The reality behind the PR spin

Several high-profile figures in the tech industry have called for a pause in AI development until appropriate regulations can be put in place. However, some experts argue that relying solely on policymakers and corporations to shape the future of AI is a mistake, as it excludes key perspectives. Dr. Alison Powell of the Ada Lovelace Institute has pointed out that the current discourse focuses too much on the potential for artificial general intelligence to surpass human cognition, rather than addressing the realities of the present.

“This is harmful because it focuses on an imagined world rather than the actual world we live in,” Powell said. She argues that the decision-making capabilities often attributed to AI overlook the real-world social responsibilities that come with such power.

Oxford researcher Abid Adonis has also noted that the voices of marginalized groups, who are directly affected by AI, are conspicuously absent from the debate. “It’s important to hear what marginalized groups say because it’s missing from the discussion,” Adonis said.

The shortcomings of “ethical AI”

There is already evidence of algorithmic bias and discrimination in the AI systems that are currently in use, from facial recognition technology to algorithms used in housing and loan assessments. Despite tech companies’ lofty claims of ethical principles, their actions often tell a different story.
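
To make that concrete, here is a minimal sketch of the kind of disparate-impact audit applied to lending decisions. The group labels, decision records, and 80% threshold below are illustrative stand-ins, not data or criteria from any real system:

```python
# Minimal disparate-impact check on a binary decision system such as
# loan approval. All records here are hypothetical illustration data.
from collections import defaultdict

# (group, approved) pairs standing in for a real audit log
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {g: a / n for g, (a, n) in counts.items()}
print("approval rates:", rates)  # group_a: 0.75, group_b: 0.25

# Four-fifths rule of thumb: if the lowest group's approval rate falls
# below 80% of the highest group's, the system deserves closer scrutiny.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("ratio below 0.8 -- potential disparate impact")
```

Audits of this shape only surface symptoms; as the researchers quoted below argue, the causes often sit in the institutions feeding the data, not in the algorithm alone.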

Generative models like ChatGPT, which are trained on a limited set of internet data, inevitably inherit the biases present in that data. The much-touted reasoning abilities of these models often fall short under scrutiny. For example, ChatGPT has been found to generate blatantly false claims about real people. The lack of transparency in the sourcing of commercial data for AI training only adds to these concerns.
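
That inheritance can be demonstrated in miniature. The sketch below is a hypothetical toy, not how ChatGPT is built: a deliberately skewed corpus and a crude bigram model that learns next-word frequencies and then reproduces the skew in its completions:

```python
# Toy demonstration that a generative model can only echo its training
# data: a deliberately skewed corpus and a minimal bigram model.
import random
from collections import Counter, defaultdict

corpus = (
    "the doctor said he would call . "
    "the doctor said he was busy . "
    "the doctor said he agreed . "
    "the nurse said she would call ."
).split()

# Learn next-word frequencies (the entire "training" step)
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def complete(prompt: str, steps: int = 3, seed: int = 0) -> str:
    """Sample continuations in proportion to training frequencies."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(steps):
        nxt = bigrams[words[-1]]
        words.append(rng.choices(list(nxt), weights=list(nxt.values()))[0])
    return " ".join(words)

# In the corpus, "doctor" is always followed by "said", and "said" by
# "he" three times out of four, so completions mirror that skew.
print(complete("the doctor"))
```

A production model differs in scale and architecture, not in kind: whatever regularities dominate the training data will tend to dominate the output.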

A broader perspective on AI ethics

Dr. Powell suggests that instead of viewing ethics as a technical problem to be solved, we should examine the social contexts in which harm occurs. “AIs are institutional machines, social machines, and cultural machines,” she said. Focusing solely on adjusting algorithms ignores the fact that exclusion often stems from the institutions and culture surrounding the technology.

According to researcher Abid Adonis, strong public discourse and norms will play a crucial role in shaping the future of innovation. Ensuring accountability means enforcing existing laws fairly, rather than simply regulating the technology itself.

“Paradigm will shape the corridors of innovation.”

— Abid Adonis, Researcher

As AI continues to rise in prominence, ensuring that it produces just outcomes is a responsibility shared among tech firms, policymakers, researchers, the media, and the public. Yet the current discourse is heavily skewed toward those with the most power. To move forward, we need a wider range of perspectives on AI’s potential pitfalls so that we can develop ethical technology that truly meets human needs. That will require looking beyond the PR spin of big tech companies and confronting the real harms being caused today.

Featured image: the high-pressure, deadline-driven working conditions that typify the tech industry’s reliance on cheap labor.


