    Physical AI raises governance questions for autonomous systems

    May 4, 2026 · 6 Mins Read
    Governance around Physical AI is becoming harder as autonomous AI systems move into robots, sensors, and industrial equipment. The issue is not only whether AI agents can complete tasks. It is how their actions are tested, monitored, and stopped when they interact with real-world systems.

    Industrial robotics already provides a large base for that discussion. The International Federation of Robotics said 542,000 industrial robots were installed worldwide in 2024, more than double the annual level recorded a decade earlier. It expects installations to reach 575,000 units in 2025 and pass 700,000 units by 2028.

    Market researchers are also applying the Physical AI label to a wider group of systems, including robotics, edge computing, and autonomous machines. Grand View Research estimated the global Physical AI market at US$81.64 billion in 2025 and projected it to reach US$960.38 billion by 2033, though the category depends on how vendors define intelligence in physical systems.

    From model output to physical action

    The governance challenge is different from software-only automation because physical systems can operate around workplaces, infrastructure, and human users. They can also be connected to equipment that requires clear safety limits. A model output can become a robot movement or a machine instruction. It can also become a decision based on sensor data. That makes safety limits and escalation paths part of system design.


    Google DeepMind’s robotics work is one recent example of how AI models are being adapted for this environment. The company introduced Gemini Robotics and Gemini Robotics-ER in March 2025, describing them as models built on Gemini 2.0 for robotics and embodied AI. Gemini Robotics is a vision-language-action model designed to control robots directly, while Gemini Robotics-ER focuses on embodied reasoning, including spatial understanding and task planning.

    A robot using this type of model may need to identify an object, understand an instruction, and plan a sequence of movements. It also needs to assess whether the task has been completed correctly. That creates a control problem that includes both model behaviour and the mechanical limits of the system.

    Google DeepMind said useful robots need generality, interactivity, and dexterity. Generality covers unfamiliar objects and environments. Interactivity relates to human input and changing conditions. Dexterity refers to physical tasks that require precise movement.

    In its launch materials, Google DeepMind said Gemini Robotics could follow natural-language instructions and perform multi-step manipulation tasks. Examples included folding paper, packing items into a bag, and handling objects not seen during training.

    The technical requirements for Physical AI are broader than language understanding. Systems need visual perception and spatial reasoning. They also need task planning and success detection. In robotics, success detection matters because the system must decide whether a task has been completed, whether it should retry, or whether it should stop.
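The retry-or-stop decision described above can be sketched as a simple control loop. This is a minimal illustration, not Google DeepMind's implementation: the `attempt_task` and `check_success` callables, the confidence threshold, and the retry budget are all hypothetical placeholders for robot-side execution and perception.

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    success: bool
    confidence: float  # how sure the perception system is about the outcome

def run_with_success_detection(attempt_task, check_success,
                               max_retries=2, min_confidence=0.8):
    """Run a task, verify the outcome, and decide: done, retry, or stop.

    `attempt_task` executes one attempt; `check_success` inspects the
    result (e.g. from vision) and returns a TaskResult.
    """
    for attempt in range(max_retries + 1):
        attempt_task()
        result = check_success()
        if result.success and result.confidence >= min_confidence:
            return "done"
        # Failure or low confidence: retry until the budget is spent.
    # Retries exhausted: stop and escalate rather than keep acting.
    return "stop_and_escalate"
```

The key design point is the third outcome: when the system cannot confirm success within its budget, it stops and escalates instead of continuing to act in the physical environment.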

    Google DeepMind’s Gemini Robotics-ER 1.6, introduced in April 2026, shows how those functions are being packaged in newer models. The company describes the model as supporting spatial logic, task planning, and success detection, with the ability to reason through intermediate steps and decide whether to move forward or try again.

    Google’s developer documentation says Gemini Robotics-ER 1.6 is available in preview through the Gemini API. The documentation describes it as a vision-language model that brings Gemini’s agentic capabilities to robotics. Those capabilities include visual interpretation, spatial reasoning, and planning from natural-language commands.

    Google AI Studio provides a developer environment for working with Gemini models, while the Gemini API provides a route for integrating those models into applications. In the context of embodied AI, that places testing and prompting closer to the developers building agentic applications.

    Safety controls move into system design

    Governance becomes more complex when these systems can call tools, generate code, or trigger actions. Controls need to define what data the system can access, what tools it can use, which actions require human approval, and how activity is logged for review.
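Those four controls can be made concrete with a small sketch: a gateway that sits between an agent and its tools, enforcing an allowlist, gating high-risk actions behind human approval, and logging every request. The class, tool names, and risk policy here are illustrative assumptions, not any vendor's API.

```python
import time

class ToolGateway:
    """Minimal sketch of agent-action governance: a tool allowlist,
    an approval gate for high-risk actions, and an audit trail."""

    def __init__(self, allowed_tools, needs_approval):
        self.allowed_tools = set(allowed_tools)
        self.needs_approval = set(needs_approval)
        self.audit_log = []  # every request is recorded, whatever the outcome

    def call(self, tool, args, approved=False):
        entry = {"ts": time.time(), "tool": tool, "args": args}
        if tool not in self.allowed_tools:
            entry["outcome"] = "refused"          # not on the allowlist
        elif tool in self.needs_approval and not approved:
            entry["outcome"] = "pending_approval"  # human sign-off required
        else:
            entry["outcome"] = "executed"
        self.audit_log.append(entry)
        return entry["outcome"]
```

In a Physical AI deployment the "tools" would be machine instructions or actuator commands, which is exactly why the refusal and approval paths matter more than in software-only automation.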

    McKinsey’s 2026 AI trust research points to the same issue in enterprise AI more broadly. It found that only about one-third of organisations reported maturity levels of three or higher in strategy, governance, and agentic AI governance, even as AI systems take on more autonomous functions.

    In robotics, safety also includes the physical behaviour of the machine. Google DeepMind has described robot safety as a layered problem, covering lower-level controls such as collision avoidance, force limits, and stability, as well as higher-level reasoning about whether a requested action is safe in context.
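A lower-level control of the kind described, such as a force limit, can be sketched as a clamp applied to whatever the higher-level planner commands. The limit values below are illustrative only and are not taken from any safety standard.

```python
def clamp_command(force_n, velocity_mps,
                  max_force_n=50.0, max_velocity_mps=0.25):
    """Lower-level safety layer: clamp a commanded force (newtons) and
    speed (metres per second) to fixed limits, regardless of what the
    higher-level model or planner requested."""
    safe_force = max(-max_force_n, min(force_n, max_force_n))
    safe_velocity = max(-max_velocity_mps, min(velocity_mps, max_velocity_mps))
    # Report whether the command was limited, so the event can be logged.
    limited = (safe_force != force_n) or (safe_velocity != velocity_mps)
    return safe_force, safe_velocity, limited
```

The point of layering is that this clamp runs unconditionally: even if the higher-level reasoning misjudges whether an action is safe in context, the mechanical limits still hold.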

    The company also introduced ASIMOV, a dataset for evaluating semantic safety in robotics and embodied AI. Google DeepMind said the dataset was designed to test whether systems can understand safety-related instructions and avoid unsafe behaviour in physical settings.

    The same controls used for software agents become harder to manage when systems are connected to robots, sensors, or industrial equipment. These include access rights, audit trails, and refusal behaviour. They also include escalation paths and testing.

    Governance frameworks such as the NIST AI Risk Management Framework and ISO/IEC 42001 provide structures for managing AI risks and responsibilities across the system lifecycle. In Physical AI, those controls need to account for model behaviour, connected machines, and the operating environment.

    Google DeepMind has also worked with robotics companies as part of its embodied AI development. In March 2025, the company said it was partnering with Apptronik on humanoid robots using Gemini 2.0, and listed Agile Robots, Agility Robotics, Boston Dynamics, and Enchanted Tools among trusted testers for Gemini Robotics-ER.

    The 2026 update also referenced work with Boston Dynamics involving robotics tasks such as instrument reading. That type of use case depends on visual understanding, task planning, and reliable assessment of physical conditions.

    Physical AI applies to industrial inspection, manufacturing, and logistics. It also applies to facilities and warehouses. These settings require systems to interpret real-world conditions and act within defined limits. The governance question is how those limits are set before autonomous systems are allowed to make or execute decisions.

    Google DeepMind and Google AI Studio are listed as hackathon technology partners for AI & Big Data Expo North America 2026, taking place on May 18–19 at the San Jose McEnery Convention Center.

    (Photo by Mitchell Luo)

    See also: AI agent governance takes focus as regulators flag control gaps

    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.

    AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.


