
Memory over Fabrics™

Dynamic Scale-Up of AI and HPC Infrastructures


UnifabriX Memory over Fabrics™ rack-level solutions
push performance boundaries above and beyond

UnifabriX Memory over Fabrics™ provides unmatched performance and scalability to GPUs and AI Accelerators at the rack level, from training trillion‑parameter LLMs to running multi‑physics simulations entirely in‑memory.

 

Major benefits include:

  • Near‑linear scaling for data‑, tensor‑ and pipeline‑parallel workloads, reducing gradient exchanges and activation swaps to sub‑microsecond latency rather than milliseconds of network round‑trips

  • Higher GPU utilization & lower energy per token/step: keeping accelerators busy on math instead of waiting on PCIe or Ethernet, cutting idle cycles and shrinking cluster power draw

  • Simpler programming & memory management: a unified, I/O-coherent address space means kernels can hop by pointer to remote GPU memory without explicit copies or complex gather/scatter code

  • Rack-scale elasticity: operators can compose just-right GPU islands for each job, then re-shape them in software as model sizes or batch requirements change
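The unified address-space idea above can be illustrated with a minimal sketch: once remote memory is mapped into the local address space, code reads and writes it like ordinary memory, with no explicit copy calls. This is a conceptual stand-in only; it uses an anonymous mapping in place of actual fabric-attached memory, and none of the names below come from UnifabriX software.

```python
import mmap

# Stand-in for a fabric-attached memory region; in a real deployment the
# pool would be CXL memory exposed to the host, not an anonymous mapping.
POOL_BYTES = 1 << 20

pool = mmap.mmap(-1, POOL_BYTES)  # anonymous mapping as the stand-in

# With a coherent unified address space, code addresses "remote" memory
# directly -- modeled here as plain offset writes and reads, no copies.
pool[4096:4100] = (0xC0FFEE).to_bytes(4, "little")
value = int.from_bytes(pool[4096:4100], "little")
print(hex(value))  # 0xc0ffee
pool.close()
```

The point of the sketch is the access pattern, not the mechanism: the application sees one address space and never issues an explicit transfer.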


Base system specifications:

  • Up to 16 OSFP ports connecting up to 16 GPUs

  • Up to 64 TB of DDR5 memory

  • Per-port memory bandwidth exceeding 80 GB/s

  • CXL and UALink ports

  • Server-grade RAS

  • Hot-swapping of memory modules without system interruption

  • Software-defined access policies

  • Extensive telemetry engines
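A quick back-of-envelope from the spec bullets above, as a sketch only: sustained throughput in practice depends on fabric topology, traffic pattern, and workload.

```python
# Figures taken from the spec list above; these are upper-bound port counts
# and lower-bound per-port bandwidth, so the result is indicative only.
PORTS = 16             # up to 16 OSFP ports (per spec)
BW_PER_PORT_GBS = 80   # exceeding 80 GB/s per port (per spec)

aggregate_gbs = PORTS * BW_PER_PORT_GBS
print(aggregate_gbs)   # 1280 -> over 1.28 TB/s aggregate across all ports
```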

Buy Now: Get an Online Quotation

