Description
Mobile Neural Network (MNN) is a universal and efficient inference engine tailored to mobile applications. The contributions of MNN include: (1) a mechanism called pre-inference that performs runtime optimization; (2) thorough kernel optimization on operators to achieve optimal computation performance; (3) a backend abstraction module that enables hybrid scheduling and keeps the engine lightweight.
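The backend abstraction in contribution (3) can be illustrated with a minimal sketch. The class and method names below are hypothetical, chosen for illustration only, and do not correspond to MNN's real C++ interfaces: each backend advertises the operators it supports, and a scheduler assigns every operator in a graph to the first registered backend that can run it, with a CPU backend registered last as a universal fallback, yielding hybrid (multi-backend) execution plans.

```python
# Hypothetical sketch of a backend-abstraction layer with hybrid
# scheduling; names do NOT correspond to MNN's actual classes.

class Backend:
    """A compute backend that can execute some subset of operators."""
    def __init__(self, name, supported_ops):
        self.name = name
        self.supported_ops = set(supported_ops)

    def supports(self, op):
        return op in self.supported_ops


class Scheduler:
    """Assigns each operator in a graph to a registered backend.

    Backends are tried in registration order, so a specialized
    backend (e.g. GPU) is registered first and a CPU backend last
    as the universal fallback.
    """
    def __init__(self):
        self.backends = []

    def register(self, backend):
        self.backends.append(backend)

    def plan(self, ops):
        """Map each operator name to the name of the backend that runs it."""
        placement = {}
        for op in ops:
            for backend in self.backends:
                if backend.supports(op):
                    placement[op] = backend.name
                    break
        return placement


scheduler = Scheduler()
scheduler.register(Backend("GPU", ["Conv", "MatMul"]))
scheduler.register(Backend("CPU", ["Conv", "MatMul", "Softmax", "Reshape"]))

# Conv and MatMul are placed on the GPU; operators the GPU backend
# lacks (Softmax, Reshape) fall back to the CPU backend.
plan = scheduler.plan(["Conv", "Reshape", "MatMul", "Softmax"])
print(plan)
```

Because operator placement is decided once per graph rather than per call, this kind of plan fits naturally with a pre-inference stage like the one in contribution (1), where scheduling decisions are fixed before any inference runs.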
Papers Using This Method
CompactFlowNet: Efficient Real-time Optical Flow Estimation on Mobile Devices (2024-12-17)
NILUT: Conditional Neural Implicit 3D Lookup Tables for Image Enhancement (2023-06-20)
Walle: An End-to-End, General-Purpose, and Large-Scale Production System for Device-Cloud Collaborative Machine Learning (2022-05-30)
MNN: A Universal and Efficient Inference Engine (2020-02-27)