
Breakthrough in Edge AI Robotics! Jetson Thor × GR00T-1.5 Practical Deep Dive

Author: Advantech ESS

Have you ever imagined a future where robots no longer rely on the cloud, but can “think” and “act” instantly on-site? Advantech has turned this concept into reality through its latest technical validation, bringing cutting-edge Physical AI models and technology directly to edge devices. This allows robots to perceive, decide, and execute in real time, all on a compact ASR-A702 (NVIDIA Jetson Thor). This is not just a technical upgrade; it is a critical step toward intelligent automation for the robotics industry!

Technical Background: What is ASR-A702 × Isaac-GR00T-N1.5?

First, let’s meet the protagonists:

1. The Flagship of Edge Computing: Advantech ASR-A702

This system is the ultimate “brain” built by Advantech specifically for Autonomous Mobile Robots (AMRs) and humanoid robots.

  • Core Engine: Powered by the NVIDIA Jetson Thor platform, utilizing the latest Blackwell GPU architecture.
  • Extreme Performance: Supports the latest FP8 / NVFP4 TensorRT inference technology, designed specifically to handle complex multimodal AI models.
  • Industrial Design: The ASR-A702 features superior heat dissipation and shock resistance, ensuring stable and powerful computing output even in harsh industrial environments.

2. The Soul of Physical AI: NVIDIA Isaac-GR00T-N1.5

This is a Physical AI foundation model built for “General Robot Manipulation.” Unlike traditional AI, it understands the physical laws of the real world.

  • Multimodal Perception: Can see (multi-view video), hear (language commands), and sense (robot arm state).
  • Contextual Understanding: When facing vague commands like “put the cube in the bucket,” it combines vision with physical common sense to perform precise operations.

When combined, a single Advantech ASR-A702 is all that is needed on-site to close the robot’s entire intelligence loop, from perception and judgment to action, responding instantly without relying on the cloud!


Implementation Revealed: From Data Collection to Real-Time Control

Core Training Strategy: Real2Real – Imitation Learning

Next, we will demonstrate how to complete the entire process—from data collection to edge control—using the Real2Real approach:

Step 1 | Data Collection on ASR-A702

In this project, we adopt Imitation Learning, an intuitive and efficient learning paradigm. Through human teleoperation demonstrations, the system directly captures the subtle dynamics and contact feedback of the real physical world.

This approach allows the AI to learn like an apprentice observing an expert, enabling it to quickly master complex tasks or handle irregular objects without requiring hand-crafted kinematic models or complex motion programming.

Data quality determines AI capability! The Advantech technical team selected the Lerobot-SO101 Leader / Follower robotic arms, paired with dual cameras (front view + wrist view), and collected all video and motion data at 30 FPS with 640×480 resolution. The entire process is conducted on the ASR-A702, ensuring the data closely matches the actual deployment environment.

Collecting over 70 episodes significantly improves the model’s stability and generalization.
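
To make the recording step concrete, here is a minimal sketch of synchronized dual-camera and joint-state capture at 30 FPS. It is an illustration only, not LeRobot’s own recording tooling: the camera indices, the 20-second episode length, the output file name, and the read_joint_positions() helper are assumptions.

```python
import time

import cv2          # OpenCV for camera capture
import numpy as np

FPS = 30
FRAME_PERIOD = 1.0 / FPS
WIDTH, HEIGHT = 640, 480

def read_joint_positions() -> np.ndarray:
    """Hypothetical helper: in practice this would query the SO101 follower arm."""
    return np.zeros(6, dtype=np.float32)  # placeholder joint angles

def open_camera(index: int) -> cv2.VideoCapture:
    cap = cv2.VideoCapture(index)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, WIDTH)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, HEIGHT)
    cap.set(cv2.CAP_PROP_FPS, FPS)
    return cap

front_cam, wrist_cam = open_camera(0), open_camera(1)   # camera indices are assumptions
episode = {"front": [], "wrist": [], "state": [], "timestamp": []}

start = time.time()
while time.time() - start < 20.0:            # one ~20 s teleoperation episode
    t0 = time.time()
    ok_f, front = front_cam.read()
    ok_w, wrist = wrist_cam.read()
    if not (ok_f and ok_w):
        break
    episode["front"].append(front)           # 640x480 BGR frame, front view
    episode["wrist"].append(wrist)           # 640x480 BGR frame, wrist view
    episode["state"].append(read_joint_positions())
    episode["timestamp"].append(t0)
    time.sleep(max(0.0, FRAME_PERIOD - (time.time() - t0)))   # hold ~30 FPS

# Frames are kept in memory here for brevity; a real recorder streams them to video files.
np.savez("episode_000.npz",
         state=np.stack(episode["state"]),
         timestamp=np.array(episode["timestamp"]))
front_cam.release()
wrist_cam.release()
```

In the actual workflow, LeRobot’s recording tooling handles this loop and stores episodes in its own dataset format; the sketch only shows the timing and data alignment that matter for imitation learning.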

Step 2 | Model Fine-Tuning on RTX A6000

Training AI requires substantial computing power, so the model fine-tuning stage is conducted on an NVIDIA RTX A6000 (48 GB VRAM). In a Dockerized Isaac-GR00T environment, the model is fine-tuned with a batch size of 16 for approximately 10,000 steps, until the loss drops below 0.01, before moving to the next stage.
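
For intuition, the sketch below shows what this stage reduces to: supervised imitation (behavior cloning) over the recorded episodes with a batch size of 16, roughly 10,000 steps, and a loss threshold of 0.01. Everything model-specific is a stand-in: the tiny MLP, the random tensors, and the plain MSE loss replace the real Isaac-GR00T-N1.5 network, dataset loader, and action-head objective used by the Dockerized training scripts.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in policy network; the real model is Isaac-GR00T-N1.5, loaded from its own code base.
class TinyPolicy(nn.Module):
    def __init__(self, obs_dim: int = 6 + 512, act_dim: int = 6):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, act_dim))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

# Dummy tensors standing in for the ~70 teleoperation episodes
# (observation = arm state + visual/language features, target = demonstrated action).
obs = torch.randn(4096, 6 + 512)
actions = torch.randn(4096, 6)
loader = DataLoader(TensorDataset(obs, actions), batch_size=16, shuffle=True)

model = TinyPolicy()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

step, max_steps = 0, 10_000                   # ~10,000 steps, as in the article
done = False
while not done:
    for batch_obs, batch_act in loader:
        pred = model(batch_obs)
        loss = nn.functional.mse_loss(pred, batch_act)   # behavior-cloning regression loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        step += 1
        if step >= max_steps or loss.item() < 0.01:      # stop once the loss is low enough
            done = True
            break
```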

Step 3 | ASR-A702 Exclusive TensorRT Inference Engine

This step is the ASR-A702’s greatest advantage! On the ASR-A702, Isaac-GR00T-N1.5 is accelerated via TensorRT using the latest FP8 / NVFP4 precision. Compared to traditional FP16, this significantly reduces latency and lowers memory usage, making it particularly suitable for real-time robot control.
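
As a rough sketch of this conversion step, the snippet below builds a TensorRT engine from an ONNX export with the standard Python builder API. The file name gr00t_action_head.onnx is hypothetical (GR00T-N1.5 is a multi-component model, so a real deployment converts its parts separately), and the FP8 flag mentioned in the comment depends on the TensorRT version installed on the device; FP16 is shown as the safe baseline.

```python
import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.INFO)

def build_engine(onnx_path: str, engine_path: str) -> None:
    builder = trt.Builder(LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, LOGGER)

    # Parse the ONNX export of the model component we want to accelerate.
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parsing failed")

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)     # widely supported reduced precision
    # Recent TensorRT releases on Blackwell-class GPUs also expose lower-precision
    # builder flags (e.g. FP8); availability depends on the installed version:
    # config.set_flag(trt.BuilderFlag.FP8)

    engine = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(engine)

# "gr00t_action_head.onnx" is a hypothetical export, not an official artifact.
build_engine("gr00t_action_head.onnx", "gr00t_action_head.engine")
```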

Step 4 | Real-Time Deployment & On-Site Control

After the TensorRT conversion is complete, the Server and Client of the inference architecture run side by side on the ASR-A702. The Server provides inference services, while the Client receives language commands and controls the Lerobot-SO101 robotic arm in real time. In the live demonstration, the system successfully executed “Pick the cube and put it in the bucket,” completing the entire process on the edge device without relying on the cloud!


Server / Client Inference Architecture:

  • Server (Inference Side): Loads the fine-tuned Isaac-GR00T-N1.5 model, providing PyTorch or TensorRT inference services, supporting high-efficiency FP16 / FP8 / NVFP4 precision.
  • Client (Sensing & Control Side): Captures images from front + wrist cameras, collects robotic arm status, sends inference requests, and executes action commands (see the sketch below).
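
A minimal sketch of the Client side is shown below, assuming a simple JSON-over-HTTP exchange. The endpoint URL, the payload fields, and the read_joint_positions() / send_action() helpers are all hypothetical; the actual Isaac-GR00T inference service defines its own request format. The point is the division of labor: the Client only gathers observations and executes the returned action chunk, while all model inference stays in the Server process on the same ASR-A702.

```python
import base64
import time

import cv2
import numpy as np
import requests

SERVER_URL = "http://127.0.0.1:5555/act"   # assumed local endpoint; the real service defines its own

def encode_frame(frame: np.ndarray) -> str:
    """JPEG-encode a camera frame and pack it as base64 for the JSON request."""
    ok, buf = cv2.imencode(".jpg", frame)
    return base64.b64encode(buf.tobytes()).decode("ascii")

def read_joint_positions() -> list:
    """Hypothetical helper: query the SO101 follower arm for its joint angles."""
    return [0.0] * 6

def send_action(joint_targets: list) -> None:
    """Hypothetical helper: forward predicted joint targets to the arm controller."""
    pass

front_cam, wrist_cam = cv2.VideoCapture(0), cv2.VideoCapture(1)
instruction = "Pick the cube and put it in the bucket"

while True:
    ok_f, front = front_cam.read()
    ok_w, wrist = wrist_cam.read()
    if not (ok_f and ok_w):
        break

    payload = {
        "instruction": instruction,
        "front_image": encode_frame(front),
        "wrist_image": encode_frame(wrist),
        "state": read_joint_positions(),
    }
    # The Client never runs the model itself; it posts observations and receives actions.
    actions = requests.post(SERVER_URL, json=payload, timeout=1.0).json()["actions"]

    for joint_targets in actions:            # execute the returned action chunk
        send_action(joint_targets)
        time.sleep(1.0 / 30)                 # match the 30 FPS control rate
```

Because both processes share the same ASR-A702, the request round trip never leaves the device, which is what keeps control latency low.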

This architecture offers three major advantages:

  • Pure Edge Deployment: No fear of network latency or cloud disconnection.
  • Edge Inference + Remote Training: High flexibility; models are fine-tuned on a workstation or data-center GPU and then deployed to the edge for inference.
  • Scalability: Can be expanded to multiple robots, supporting diverse application scenarios.

Highlights & Value: Why is this Architecture Important?

Key values:

  • Full Edge Deployment: No cloud required; Perception → Decision → Action is completed locally.
  • Multimodal Integration: Integrates vision, language, state, and physical perception, deepening the robot’s understanding of the real world.
  • High-Efficiency Inference: FP8 / NVFP4 TensorRT ensures low latency and high performance.
  • Engineering Replicability: Docker + LeRobot + Isaac GR00T allows for rapid deployment.
  • Real-World Verification: Proven effectiveness running on the ASR-A702 (powered by NVIDIA Jetson Thor).

Applications & Industrial Value

By utilizing Physical AI models to perceive, decide, and act, combined with Advantech’s hardware and software system integration capabilities, this system can be applied in the future to:

  • Industrial Automation: Real-time sorting and packaging by robotic arms.
  • Smart Manufacturing: Autonomous collaboration of on-site equipment.
  • Edge AI Robots: Unmanned warehousing and logistics distribution.

Compared to traditional cloud robots, this Edge-First architecture not only responds faster and is more reliable but also significantly reduces deployment costs and maintenance difficulty. Advantech continues to deepen its roots in the Edge AI field, dedicated to driving industrial upgrades and innovation!

Conclusion & Future Outlook

The combination of NVIDIA Isaac GR00T-N1.5 × ASR-A702 (NVIDIA Jetson Thor) ensures that general-purpose robotics is no longer just theoretical, but an engineering system capable of immediate deployment on-site. This is not just a technological breakthrough; it creates a brand-new blueprint for the industry: “one edge device driving an entire intelligent robotic system.”

In the future, Isaac Sim (Omniverse) Sim2Real integration can be leveraged to build highly realistic digital twin environments. Large-scale synthetic data can be generated in the virtual world for model training, and the trained models can then be deployed on ASR-A702 for inference.

This approach significantly reduces the cost and risk of physical-world training, unlocking the full potential of “train in simulation, deploy in reality.”

Advantech will continue to invest in R&D and innovation to make AI robots more accessible and smarter, allowing industries such as manufacturing and logistics to enjoy the benefits of intelligent automation. We welcome partners, customers, and technology enthusiasts from all sectors to witness this Edge AI robotics revolution together!

