M1108 Analog Matrix Processor
High-performance and low-power AI inference

Features
- Array of 108 AMP tiles, each with a Mythic Analog Compute Engine (Mythic ACE™)
- Capacity for up to 113M weights, enabling single or multiple complex DNNs to run entirely on-chip (see the sizing sketch after this list)
- On-chip DNN model execution and weight parameter storage with no external DRAM
- Deterministic execution of AI models for predictable performance and power
- Execution of models at higher resolution and lower latency for better results
- Support for INT4, INT8, and INT16 operations
- 4-lane PCIe 2.1 interface with up to 2GB/s of bandwidth for inference processing
- Available I/Os – 10 GPIOs and UARTs
- 19mm x 19mm BGA package
- Typical power consumption of ~4W when running complex models
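
As a rough back-of-the-envelope illustration of the figures above (not a Mythic sizing tool), the Python sketch below checks whether a model's weight count fits within the 113M on-chip capacity and what fraction of the ~2GB/s PCIe budget an INT8 input stream would consume. The 25M-weight model and the frame parameters are hypothetical values chosen for the example.

```python
# Back-of-the-envelope sizing sketch (not a Mythic tool).
# Capacity and bandwidth figures come from the feature list above;
# the model size and frame parameters are hypothetical.

M1108_WEIGHT_CAPACITY = 113_000_000         # on-chip weight storage, no external DRAM
PCIE_BANDWIDTH_BYTES_PER_S = 2_000_000_000  # ~2GB/s over the 4-lane PCIe 2.1 link


def fits_on_chip(num_weights: int) -> bool:
    """True if a model's weights fit entirely in on-chip storage."""
    return num_weights <= M1108_WEIGHT_CAPACITY


def input_stream_bytes_per_s(height: int, width: int, channels: int,
                             fps: int, bytes_per_value: int = 1) -> int:
    """Bytes/s needed to stream input frames over PCIe (1 byte/value for INT8)."""
    return height * width * channels * bytes_per_value * fps


# Example: a hypothetical 25M-weight detector fed 1080p RGB video at 30 fps.
print(fits_on_chip(25_000_000))                  # True: well under 113M weights
bw = input_stream_bytes_per_s(1080, 1920, 3, 30)
print(f"{bw / PCIE_BANDWIDTH_BYTES_PER_S:.1%} of the PCIe budget")  # ~9.3%
```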

Workflow
DNN models developed in standard frameworks such as PyTorch, Caffe, and TensorFlow are implemented and deployed on the Mythic Analog Matrix Processor (Mythic AMP™) using Mythic’s AI software workflow. Models are optimized, quantized from FP32 to INT8, and then retrained for the Mythic Analog Compute Engine (Mythic ACE™) prior to being processed through Mythic’s powerful graph compiler. The resulting binaries and model weights are then programmed into the Mythic AMP for inference. Pre-qualified models are also available for developers to quickly evaluate the Mythic AMP solution.
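
To make the FP32-to-INT8 quantization step concrete, here is a minimal sketch using standard PyTorch post-training quantization (torch.ao.quantization). This is generic PyTorch, not Mythic's SDK, retraining flow, or graph compiler, and the toy model and random calibration data are stand-ins for a real DNN and dataset.

```python
# Sketch only: standard PyTorch post-training INT8 quantization.
# Mythic's actual toolchain (optimizer, ACE retraining, graph compiler)
# is separate; this just illustrates the FP32 -> INT8 step conceptually.
import torch
import torch.nn as nn
from torch.ao.quantization import QuantStub, DeQuantStub, get_default_qconfig, prepare, convert


class TinyNet(nn.Module):  # hypothetical stand-in for a real DNN
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = DeQuantStub()

    def forward(self, x):
        x = self.quant(x)            # FP32 -> INT8 at the model boundary
        x = self.relu(self.conv(x))
        return self.dequant(x)       # INT8 -> FP32 on the way out


model = TinyNet().eval()
model.qconfig = get_default_qconfig("fbgemm")  # INT8 weights and activations
prepared = prepare(model)                      # insert calibration observers

# Calibrate with representative data (random here, real images in practice).
with torch.no_grad():
    for _ in range(8):
        prepared(torch.randn(1, 3, 32, 32))

quantized = convert(prepared)                  # fold observers into INT8 ops
print(quantized)
```

In the workflow described above, this quantization step is followed by ACE-targeted retraining and graph compilation before the binaries and weights are programmed onto the chip.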

Models
Mythic provides powerful pre-qualified models for the most popular AI use cases. Models have been optimized to take advantage of the high-performance and low-power features of Mythic Analog Matrix Processors (Mythic AMP™). Developers can focus on model performance and end-application integration instead of the time-consuming model development and training process. Available pre-qualified models in development:
Mythic is continuously adding more pre-qualified models and use cases to our portfolio. For more information, please contact sales@mythic-ai.com.