    AI News

    Physical Intelligence Team Unveils MEM for Robots: A Multi-Scale Memory System Giving Gemma 3-4B VLAs 15-Minute Context for Complex Tasks

March 4, 2026 · 3 Mins Read
    Current end-to-end robotic policies, specifically Vision-Language-Action (VLA) models, typically operate on a single observation or a very short history. This ‘lack of memory’ makes long-horizon tasks, such as cleaning a kitchen or following a complex recipe, computationally intractable or prone to failure. To address this, researchers from Physical Intelligence, Stanford, UC Berkeley, and MIT have introduced Multi-Scale Embodied Memory (MEM).

    https://www.pi.website/download/Mem.pdf

    The Dual-Scale Memory Architecture

    MEM factorizes robotic memory into two distinct scales to balance semantic context with real-time control constraints.

    (1) Short-Term Video Memory

    For tasks requiring fine-grained spatial awareness—like resolving self-occlusions or adapting a grasp—dense visual data is required. MEM utilizes an efficient video encoder that extends standard Vision Transformers (ViTs). To maintain real-time inference (the 380ms ‘real-time barrier’), the architecture avoids joint attention over all patches. Instead, it uses Space-Time Separable Attention, interleaving spatial attention within frames with causal-temporal attention across frames every fourth layer.
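The interleaving pattern described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: token dimensions, layer count, and the plain dot-product attention are all assumptions, and the every-fourth-layer schedule follows the description in the text.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v, mask=None):
    # Scaled dot-product attention over the last two axes.
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(q.shape[-1])
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # block masked positions
    return softmax(scores) @ v

def spatial_attention(x):
    # x: (K frames, n patches, d). Each frame attends only over its own patches.
    return attention(x, x, x)

def causal_temporal_attention(x):
    # x: (K, n, d). Each patch position attends across timesteps,
    # with a causal mask so frame t only sees frames <= t.
    K = x.shape[0]
    xt = x.swapaxes(0, 1)                        # (n, K, d)
    mask = np.tril(np.ones((K, K), dtype=bool))  # lower-triangular = causal
    out = attention(xt, xt, xt, mask=mask)
    return out.swapaxes(0, 1)                    # back to (K, n, d)

K, n, d = 4, 16, 8          # toy sizes: 4 frames, 16 patches/frame, 8-dim tokens
x = np.random.randn(K, n, d)
for layer in range(8):
    x = spatial_attention(x)            # every layer: within-frame attention
    if layer % 4 == 3:                  # every fourth layer: across-frame attention
        x = causal_temporal_attention(x)
```

Because spatial and temporal attention never mix in a single joint score matrix, the per-layer cost stays at K·n² + n·K² score entries rather than (nK)².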

    The computational complexity is reduced from O(n²K²) to O(Kn² + nK²), where n is the number of spatial patches and K is the number of timesteps. By dropping tokens from past timesteps in upper layers, the model passes only the current observation’s representation to the VLA backbone, keeping the token count invariant compared to single-frame models.
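The savings are easy to quantify. The patch and frame counts below are illustrative, not values from the paper; the point is the asymptotic gap between joint and separable attention.

```python
# Joint attention over all K*n tokens scores every token pair: O(n^2 K^2).
# Separable attention scores within-frame pairs plus across-frame pairs:
# O(K n^2 + n K^2). Illustrative sizes follow (not from the paper).
n, K = 256, 16                       # spatial patches per frame, timesteps
joint = (n * K) ** 2                 # pairwise scores over every token pair
separable = K * n ** 2 + n * K ** 2  # within-frame plus across-frame scores
print(f"joint / separable = {joint / separable:.1f}x")  # ~15x fewer scores
```

For these sizes the separable scheme computes roughly 15× fewer attention scores per layer, and the gap widens as the history length K grows.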


    (2) Long-Term Language Memory

    To handle tasks spanning up to 15 minutes, MEM uses a language-based representation for semantic events. The system decomposes the action prediction as:

    $$\pi(a_{t:t+H},l_{t+1},m_{t+1}|o_{t-T:t},m_{t},g) \approx\pi_{LL}(a_{t:t+H}|o_{t-K:t},l_{t+1},g)\pi_{HL}(l_{t+1},m_{t+1}|o_{t},m_{t},g)$$

    Here, a high-level policy (π_HL) maintains a running language summary (m_t) of past events and generates subtask instructions (l_{t+1}) for a low-level policy (π_LL). This language memory is trained using LLM-generated summaries that compress information (e.g., ‘I placed three bowls’ instead of individual attributes), reducing the risk of training-inference distribution shifts.
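The factorization above amounts to a two-rate control loop: a slow semantic loop that updates the language memory and a fast reactive loop that emits action chunks. The sketch below is schematic only; the class names, methods, and string-based "memory" are hypothetical placeholders, not the paper's API (in MEM, the high-level step is a VLM call and the low-level step is the VLA action head).

```python
from dataclasses import dataclass

@dataclass
class HighLevelPolicy:
    """pi_HL: (o_t, m_t, g) -> (l_{t+1}, m_{t+1}). Placeholder for a VLM call."""
    def step(self, obs, memory, goal):
        new_memory = memory + [f"saw {obs}"]       # compressed language summary
        subtask = f"next step toward {goal}"       # subtask instruction l_{t+1}
        return subtask, new_memory

@dataclass
class LowLevelPolicy:
    """pi_LL: (o_{t-K:t}, l_{t+1}, g) -> a_{t:t+H}. Placeholder for the action head."""
    horizon: int = 4
    def step(self, recent_obs, subtask, goal):
        # Emit an action chunk of length H conditioned on the subtask.
        return [f"action_{i} for '{subtask}'" for i in range(self.horizon)]

pi_hl, pi_ll = HighLevelPolicy(), LowLevelPolicy()
memory, goal = [], "clean the kitchen"
obs_history = []
for t in range(3):
    obs = f"frame_{t}"
    obs_history.append(obs)
    subtask, memory = pi_hl.step(obs, memory, goal)        # slow, semantic loop
    actions = pi_ll.step(obs_history[-2:], subtask, goal)  # fast, reactive loop
```

The design point this illustrates: only the short observation window o_{t-K:t} and the compact summary m_t are carried forward, so context length stays bounded even over a 15-minute task.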


    Implementation and Performance

    The research team integrated MEM into the π0.6 VLA, which is initialized from a pre-trained Gemma 3-4B model. The model was pre-trained on a diverse mixture of robot demonstrations, vision-language tasks, and internet video data.

    Key Results:

    • In-Context Adaptation: MEM enables robots to adapt manipulation strategies based on recent failures. In evaluation, this led to a +62% success rate increase in opening refrigerators with unknown hinge directions and a +11% increase in picking up chopsticks at variable heights.
    • Long-Horizon Tasks: The model successfully performed 15-minute tasks like ‘Recipe Setup’ (retrieving ingredients from multiple locations) and ‘Kitchen Cleaning’ (washing dishes and wiping counters). Memory-less VLAs failed these tasks significantly more often.
    • Efficiency: The video encoder allows the model to process up to 16 observation frames (spanning ~1 minute) while remaining under critical real-time inference thresholds on a single NVIDIA H100 GPU.

    MEM demonstrates that combining dense, short-term visual tokens with compressed, long-term language summaries allows VLAs to scale their ‘working memory’ without incurring prohibitive computational costs.
