Block AI Report · AI News
    Who’s to Blame When AI Goes Rogue? The UN’s Quiet Warning That Got Very Loud

February 7, 2026 · 3 min read
    From Silicon Valley to the U.N., the question of how to assign blame when AI goes wrong is no longer an esoteric regulatory issue, but a matter of geopolitical significance.

    This week, the United Nations Secretary-General posed that question, highlighting an issue that is central to discussions about AI ethics and regulation. He questioned who should be held responsible when AI systems cause harm, discriminate, or spiral beyond human intent.

The comments were a clear warning to national leaders and tech-industry executives alike that AI's capabilities are outpacing regulation.

But it wasn't just the warning that was remarkable. So was the tone: a sense of exasperation, even desperation. If AI-driven systems are making decisions that involve life and death, livelihoods, borders, and security, then no one can simply shrug and say it is all too complicated.

    The Secretary-General said the responsibility “must be shared, among developers, deployers and regulators.”

The notion resonates with a long-held wariness of unchecked technological power that has been percolating through UN deliberations on digital governance and human rights.

The timing matters. Governments are trying to draft AI rules at a moment when the technology is changing rapidly, and Europe has already taken the lead, passing ambitious laws covering high-risk AI systems and setting a regulatory standard that will likely serve as a beacon, or a cautionary tale, for other countries.

But, honestly, laws on a page alone will not shift the power dynamics. The Secretary-General's words land at a moment when AI systems are already being used in immigration vetting, predictive policing, creditworthiness assessments, and military decisions.

Civil society has long warned that, without accountability, AI becomes the perfect scapegoat for human decision-making with very human repercussions: "the algorithm made me do it."

There is also a geopolitical problem that is barely discussed: what happens if AI explainability regulations in one country are incompatible with those of a neighboring country?

What happens when AI crosses borders? Should there be rules on exporting AI at all? António Guterres, the UN Secretary-General, spoke of the need for universal guidelines on developing and using AI, much as the world has done with nuclear and climate agreements.

That is no easy task in a world where international relations and agreements are fraying and the drift is toward wholesale deregulation.

My interpretation? This wasn't diplomacy speaking; it was a draw-the-line speech. The message is simple, even if the problem is not: AI is not excused from accountability just because it is clever or quick or lucrative.

Someone must be accountable for AI's results. And the longer the world takes to decide who that is, the more painful and complex the decision will become.



By Crypto Expert