Processing In Memory: The Mythic Advantage

Deep neural networks demand massive parallel compute, which weighs heavily on conventional local AI hardware – but not on Mythic’s platform, which performs hybrid digital/analog calculation inside flash memory arrays. This entirely new approach runs the inference step of deep neural networks inside the same memory array that stores the network weights long term – bringing major advantages in performance, power efficiency, and accuracy.
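
The in-memory inference idea can be sketched with a toy NumPy model (all shapes, bit-widths, and function names here are illustrative assumptions, not Mythic’s actual design): weights sit in the array as analog values, inputs arrive as voltages through DACs, and each column’s summed current is one dot product, read back out through an ADC.

```python
import numpy as np

# Toy sketch of analog in-memory matrix-vector multiplication (illustrative
# only). Weights stay resident in the array; inputs are driven in as voltages;
# each column current is the analog sum of products for one output.

rng = np.random.default_rng(0)

def quantize(x, bits):
    """Uniform quantization, standing in for the DAC/ADC conversion steps."""
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x
    levels = 2 ** bits - 1
    return np.round((x - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo

weights = rng.standard_normal((256, 64))  # stored long-term in the flash array
inputs = rng.standard_normal(256)

v_in = quantize(inputs, bits=8)   # DAC: digital activations -> input voltages
i_out = v_in @ weights            # analog MVM: per-column current summation
result = quantize(i_out, bits=8)  # ADC: column currents -> digital outputs

exact = inputs @ weights          # all-digital reference
rel_err = np.linalg.norm(result - exact) / np.linalg.norm(exact)
print(f"relative error vs. all-digital: {rel_err:.3%}")
```

Even with 8-bit conversion on both ends, the quantized result tracks the all-digital reference closely in this toy setup, which is the kind of behavior the accuracy claim below refers to.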

Desktop GPU Capabilities in a Shirt-Button-Sized Device

Mythic’s platform delivers the power of a desktop GPU in a chip the size of a shirt button, supporting neural networks with millions of weights. It delivers massive parallel compute when intelligence is needed and consumes almost nothing when it is not. The result is near-zero impact on device size, weight, and power budgets – with all the compute you need to run deep neural networks.
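To put the “millions of weights” scale in concrete terms, here is a quick back-of-envelope weight count for a modest hypothetical convolutional network (all layer shapes are made up for illustration):

```python
# Rough weight counts for an illustrative CNN; the layer shapes below are
# hypothetical, chosen only to show how quickly weights reach the millions.

def dense_weights(n_in, n_out):
    return n_in * n_out + n_out          # weight matrix + biases

def conv_weights(k, c_in, c_out):
    return k * k * c_in * c_out + c_out  # k x k kernels + biases

layers = [
    conv_weights(3, 3, 64),        # 3x3 conv, RGB input, 64 filters
    conv_weights(3, 64, 128),
    conv_weights(3, 128, 256),
    dense_weights(256 * 7 * 7, 512),
    dense_weights(512, 1000),      # 1000-way classifier head
]
total = sum(layers)
print(f"total weights: {total:,}")  # -> total weights: 7,306,856
```

Even this modest stack lands at several million weights, dominated by the first fully connected layer.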

Features and Benefits

  • Supports all popular styles of deep neural networks, including convolutional, recurrent, and fully connected.
  • Compute that scales with the network: The weight storage also performs the processing, so as networks grow, processing capability grows with them.
  • Massively parallel processing delivers both low latency and high throughput.
  • 50x lower power: Co-located processor/storage and massively parallel analog arithmetic deliver huge energy efficiencies compared to all-digital platforms.
  • High accuracy: Mythic’s techniques keep accuracy loss during neural network inference negligible compared to an all-digital system.
  • A software development environment that interfaces directly with machine learning packages such as TensorFlow, making it simple to move a trained deep neural network onto our platform.

Learn More About Our Unique Platform

Contact Mythic For More Information