CMU Researchers Introduce TNNGen: An AI Framework that Automates Design of Temporal Neural Networks (TNNs) from PyTorch Software Models to Post-Layout Netlists



Designing neuromorphic sensory processing units (NSPUs) based on Temporal Neural Networks (TNNs) is challenging because it still relies on manual, labor-intensive hardware development. TNNs are regarded as highly promising for real-time edge AI because they are energy-efficient and bio-inspired, yet existing methodologies lack automation and accessibility. As a result, the design process is complex, time-consuming, and demands specialized expertise. Overcoming these barriers is key to unlocking the full potential of TNNs for efficient, scalable processing of sensory signals.

Current approaches to TNN development rely on fragmented workflows in which software simulation and hardware design are handled separately. Advances such as the ASAP7 PDK and the TNN7 cell library have improved hardware efficiency in parts of the flow, but they still depend on proprietary EDA tools and considerable expertise. This fragmentation limits usability, makes exploring design configurations slow and computationally expensive, and hinders application-specific rapid prototyping and large-scale deployment.

Researchers at Carnegie Mellon University introduce TNNGen, a unified and automated framework for designing TNN-based NSPUs. The innovation lies in integrating software-based functional simulation with hardware generation in a single streamlined workflow. TNNGen combines a PyTorch-based simulator, which models spike-timing dynamics and evaluates application-specific metrics, with a hardware generator that automates RTL generation and layout design using PyVerilog. By leveraging TNN7 custom macros and a range of supporting libraries, the framework achieves considerable improvements in both simulation speed and physical design. In addition, its predictive capabilities enable accurate forecasting of silicon metrics, reducing the dependence on computationally demanding EDA tools.
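
The article does not show TNNGen's simulator code, but the core idea of modeling spike-timing dynamics directly with PyTorch tensors can be sketched as follows. The snippet below is an illustration only, not the published implementation: it simulates one excitatory column with ramp-no-leak response functions and 1-winner-take-all lateral inhibition, and every function name, weight, and parameter here is an assumption made for the example.

```python
import torch

def tnn_column_forward(spike_times, weights, threshold, t_max):
    """Simulate one excitatory TNN column with ramp-no-leak response functions.

    spike_times: (n_inputs,) input spike times; t_max encodes "no spike".
    weights:     (n_neurons, n_inputs) synaptic weights (ramp slopes).
    Returns:     (n_neurons,) output spike times; t_max means the neuron never fired.
    """
    t = torch.arange(t_max).view(t_max, 1, 1)                      # (T, 1, 1) time grid
    # Ramp-no-leak response: an input spiking at t_in contributes w * max(t - t_in, 0).
    ramps = weights.unsqueeze(0) * (t - spike_times.view(1, 1, -1)).clamp(min=0)
    potential = ramps.sum(dim=-1)                                  # (T, n_neurons) body potentials
    fired = potential >= threshold                                 # (T, n_neurons) threshold crossings
    no_spike = torch.full((weights.shape[0],), t_max)
    return torch.where(fired.any(dim=0), fired.float().argmax(dim=0), no_spike)

def winner_take_all(out_times, t_max):
    """1-WTA lateral inhibition: only the earliest spike survives; the rest are suppressed."""
    winner = out_times.argmin()
    inhibited = torch.full_like(out_times, t_max)
    inhibited[winner] = out_times[winner]
    return winner, inhibited

# Toy usage: 3 neurons, 4 inputs, spike times encoded over an 8-step window.
x = torch.tensor([1.0, 3.0, 0.0, 8.0])          # 8.0 = no input spike within the window
w = torch.tensor([[2., 1., 1., 0.],
                  [0., 2., 2., 1.],
                  [1., 0., 3., 1.]])
times = tnn_column_forward(x, w, threshold=8.0, t_max=8)
winner, column_out = winner_take_all(times, t_max=8)
```

Because the whole computation is expressed as batched tensor operations, such a model runs naturally on a GPU, which is consistent with the simulation-speed gains the authors report.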

TNNGen is organized around two principal components. The functional simulator, built with PyTorch, accommodates flexible TNN configurations, allowing swift exploration of different model architectures. It supports GPU acceleration and accurate spike-timing modeling, ensuring both high simulation speed and fidelity. The hardware generator converts PyTorch models into optimized RTL and physical layouts; see the sketch below. Using libraries such as TNN7 and customized Tcl scripts, it automates synthesis and place-and-route while remaining compatible with multiple technology nodes such as FreePDK45 and ASAP7.
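
The generator itself is not published in the article, but the underlying mechanism it relies on, emitting Verilog programmatically from Python, can be illustrated with Pyverilog's AST classes and code generator. The toy module below (its name, ports, and OR-reduction logic are purely hypothetical) demonstrates only that mechanism; the real generator produces full TNN column and layer RTL built around TNN7 macros.

```python
import pyverilog.vparser.ast as vast
from pyverilog.ast_code_generator.codegen import ASTCodeGenerator

# Build a tiny module AST: 8 spike input lines, one output spike line.
width = vast.Width(vast.IntConst('7'), vast.IntConst('0'))
spikes_in = vast.Ioport(vast.Input('spikes_in', width=width))
spike_out = vast.Ioport(vast.Output('spike_out'))
ports = vast.Portlist((spikes_in, spike_out))

# Combinational OR-reduction: assert the output if any input line is high.
body = (vast.Assign(vast.Lvalue(vast.Identifier('spike_out')),
                    vast.Rvalue(vast.Uor(vast.Identifier('spikes_in')))),)

module = vast.ModuleDef('toy_spike_or', vast.Paramlist(()), ports, body)
print(ASTCodeGenerator().visit(module))  # emits Verilog source text for the module
```

Generating RTL this way, from the same Python objects used in simulation, is what lets a single script carry a design from a PyTorch model to synthesis- and place-and-route-ready netlists.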


TNNGen achieves strong results in both clustering accuracy and hardware efficiency. TNN designs for time-series clustering perform competitively with leading deep-learning techniques while using drastically fewer computational resources. The approach delivers major energy-efficiency gains, reducing die area and leakage power compared with conventional flows. Design runtime is also dramatically reduced, especially for larger designs, which benefit most from the optimized workflow. Moreover, the built-in forecasting capability provides accurate estimates of hardware metrics, allowing researchers to assess design feasibility without running full physical-design flows. Taken together, these results position TNNGen as a viable path toward streamlining and accelerating the development of energy-efficient neuromorphic systems.

TNNGen is a step toward fully automated development of TNN-based NSPUs, unifying simulation and hardware generation in an accessible, efficient framework. The approach addresses key challenges of the manual design process and makes the flow far more scalable and usable for edge AI applications. Future work will extend its capabilities to support more complex TNN architectures and a wider range of applications, positioning it as an enabler of sustainable neuromorphic computing.

Check out the Paper. All credit for this research goes to the researchers of this project.


Aswin AK is a consulting intern at MarkTechPost. He is pursuing his Dual Degree at the Indian Institute of Technology, Kharagpur. He is passionate about data science and machine learning, bringing a strong academic background and hands-on experience in solving real-life cross-domain challenges.



