NVIDIA and Siemens Healthineers: NV-Raw2Insights-US learns directly from raw ultrasound signals and corrects images in real time
Why it matters
NV-Raw2Insights-US is an AI model by NVIDIA and Siemens Healthineers that learns directly from raw ultrasound channel data, before traditional beamforming, and in a single AI pass generates a patient-specific map of the speed of sound through tissue. The map is used for adaptive image focusing during live scanning. The model, weights, and dataset have been released openly on Hugging Face and GitHub, with deployment via NVIDIA Holoscan and Blackwell GPUs.
On 28 April 2026, NVIDIA and researchers from Siemens Healthineers published NV-Raw2Insights-US, a model that inverts the traditional ultrasound pipeline. Classical beamforming compresses millions of echoes from the ultrasound probe into a final image, relying on physics assumptions such as a constant speed of sound through the body. NV-Raw2Insights-US learns before that compression step, directly from the raw channel data.
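To make that physics assumption concrete, here is a minimal delay-and-sum beamformer sketch in NumPy. It is illustrative only, not part of the released code: it hard-codes the textbook 1540 m/s soft-tissue average that classical pipelines assume for the whole body, and which NV-Raw2Insights-US replaces with a learned, patient-specific map.

```python
import numpy as np

# Illustrative classical delay-and-sum beamformer (not NVIDIA's code).
C_ASSUMED = 1540.0   # m/s, textbook soft-tissue average (the key assumption)
FS = 40e6            # Hz, per-channel sampling rate (illustrative value)

def beamform_pixel(channel_data, elem_x, px, pz, c=C_ASSUMED, fs=FS):
    """Coherently sum echoes across elements for one image pixel.

    channel_data : (n_elements, n_samples) raw RF traces
    elem_x       : (n_elements,) lateral element positions in metres
    px, pz       : pixel position in metres (pz = depth)
    """
    # Two-way travel time under a single constant speed of sound:
    # plane-wave transmit to the pixel (depth only), then return from
    # the pixel to each element.
    t_tx = pz / c
    t_rx = np.sqrt((elem_x - px) ** 2 + pz ** 2) / c
    idx = np.round((t_tx + t_rx) * fs).astype(int)
    idx = np.clip(idx, 0, channel_data.shape[1] - 1)
    # Echoes align only if c matches the true tissue speed; a wrong c
    # defocuses the image, which is the error an estimated
    # speed-of-sound map can correct.
    return channel_data[np.arange(channel_data.shape[0]), idx].sum()
```

If the assumed c is wrong, the computed delays no longer line up with the true echo arrival times and the coherent sum blurs, which is why a per-patient sound-speed estimate matters for focusing.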
What does the model actually do?
In this first Raw2Insights application, NVIDIA and Siemens estimate the speed of sound through tissue for adaptive image focusing. The system generates a patient-specific sound-speed map in a single AI pass and streams it back to the scanner for image correction during live acquisition. What previously required complex compute becomes single-pass inference. The collaboration with Siemens Healthineers was led by Ismayil Guracar and Rickard Loftman from the AI & Advanced Platforms group.
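A minimal sketch of how such a sound-speed map changes the focusing geometry (the function name and grid layout here are assumptions for illustration, not the released model's interface): instead of dividing distance by one constant c, the per-pixel travel time accumulates the slowness (1/c) of each map cell along the propagation path.

```python
import numpy as np

# Hypothetical sketch: adaptive focusing delays from a per-patient
# speed-of-sound map. A straight vertical ray is assumed for simplicity.

def travel_time(sos_map, dz, depth_idx, col_idx):
    """One-way travel time to grid cell (depth_idx, col_idx).

    sos_map   : (n_depth, n_lateral) speed of sound in m/s per grid cell
    dz        : depth grid spacing in metres
    """
    # Accumulate slowness (s/m) over every cell between the probe face
    # and the target depth, then convert to seconds.
    slowness = 1.0 / sos_map[: depth_idx + 1, col_idx]
    return dz * slowness.sum()
```

With a uniform 1540 m/s map this reduces to the classical constant-c delay; where the map reports slower tissue, the delay grows, and the beamformer's sample indices shift accordingly.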
How is it deployed?
Raw ultrasound channel data is not easily accessible on clinical scanners because the signal has very high bandwidth. NVIDIA’s Holoscan Sensor Bridge (HSB), an open-source FPGA IP, transfers the data into GPU memory with low latency via RDMA over Converged Ethernet. The demonstration uses an Altera Agilex-7 FPGA development kit, a Siemens ACUSON Sequoia scanner, and a technique NVIDIA calls “Data over DisplayPort”: streaming the raw signals through the scanner’s DisplayPort output. Inference runs on a Blackwell-class GPU inside an NVIDIA IGX Thor or DGX Spark system, orchestrated by the NVIDIA Holoscan edge AI platform.
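A back-of-envelope calculation shows why the raw signal is hard to move off the scanner. The channel count, sampling rate, and bit depth below are generic illustrative values, not ACUSON Sequoia specifications.

```python
# Rough sustained data rate of raw channel data before beamforming.
# All parameters are illustrative assumptions.
N_CHANNELS = 192        # receive channels (assumed)
SAMPLE_RATE = 40e6      # samples/s per channel (assumed)
BITS_PER_SAMPLE = 16    # ADC resolution (assumed)

def raw_data_rate_gbps(channels=N_CHANNELS, fs=SAMPLE_RATE,
                       bits=BITS_PER_SAMPLE):
    """Raw-channel data rate in gigabits per second."""
    return channels * fs * bits / 1e9
```

Under these assumptions the front end produces on the order of 100 Gb/s of raw signal, which is why a zero-copy RDMA path straight into GPU memory, rather than a trip through host RAM, is central to the deployment.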
What has been released openly?
NVIDIA has published the complete research package openly: GitHub (github.com/NVIDIA-Medtech/NV-Raw2insights-US), model weights (huggingface.co/nvidia/NV-Raw2Insights-US), and dataset (huggingface.co/datasets/nvidia/NV-Raw2Insights-US). No detailed benchmark figures or clinical validation results are included in this announcement; the focus is architectural and conceptual. NVIDIA emphasizes that the technology is under investigational development and is not clinically approved. The broader message: an “AI-native imaging” approach that learns from each patient’s physics rather than from pre-processed images could become a modular foundation for the next generation of AI-driven diagnostic systems.
This article was generated using artificial intelligence from primary sources.
Related news
arXiv:2604.21764: 'Thinking with Reasoning Skills' reduces reasoning tokens while improving accuracy — ACL 2026 Industry Track
DeepSeek releases V4-Pro and V4-Flash: two open-source models with one million token context and 80.6 on SWE Verified