| Date: | 10:30-12:00 on March 24, 2026 |
| Venue: | 3F, Kerry Hotel Pudong, Shanghai |
| Meeting Room: | Pudong Ballroom 5 |
Speaker 1: Dr. Shaodi Wang, CEO of Witmem

Abstract: Over the past year, large language models (LLMs) have demonstrated increasingly advanced capabilities, and their deployment across applications is expanding rapidly. Concurrently, demand for low-power, cost-effective inference solutions has grown. However, larger models and longer contexts pose significant challenges for high-bandwidth memory and chip interconnection. In-memory computing (IMC) has emerged as a highly efficient approach for large-scale matrix multiplication, the core computational component of LLMs. How IMC is implemented depends on the memory materials employed. This tutorial examines the key challenges in LLM infrastructure and introduces the performance improvements enabled by IMC technologies.

Biography: Shaodi received his B.S. degree from Peking University in 2011 and his Ph.D. degree from UCLA in 2017. He founded Witmem Co., Ltd. in 2017 and currently serves as its CEO. Shaodi and Witmem are dedicated to developing in-memory computing (IMC) technology, and since 2022 Witmem has brought multiple mass-production chips to market.
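For readers unfamiliar with the basic idea, the sketch below is an illustrative numpy model (our own, not material from the tutorial) of how an IMC crossbar evaluates a matrix-vector product: weights are held in the array as non-negative conductances in a differential pair, the input vector is applied as voltages, and the bit lines accumulate currents, so the multiply-accumulate happens where the weights are stored instead of after moving them to a processor. The layer sizes and the 4-bit-per-cell quantization are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative sketch (not from the tutorial): an idealized in-memory-computing
# crossbar evaluating y = W @ x. Weights live in the array as conductances, the
# input vector is applied as voltages, and each bit line sums currents.

rng = np.random.default_rng(0)

d_in, d_out = 512, 512                       # one projection of a small transformer layer (assumed)
W = rng.standard_normal((d_out, d_in)) * 0.02
x = rng.standard_normal(d_in)

# Map signed weights to two non-negative conductance arrays (G+ and G-), a common
# differential encoding, quantized to 4 bits per cell (an assumed precision).
levels = 2 ** 4 - 1
g_max = np.abs(W).max()
G_pos = np.round(np.clip(W, 0, None) / g_max * levels) / levels * g_max
G_neg = np.round(np.clip(-W, 0, None) / g_max * levels) / levels * g_max

# "Analog" read: input voltages drive both arrays; the two bit-line currents subtract.
y_imc = G_pos @ x - G_neg @ x

y_ref = W @ x
print("relative error of the idealized IMC result:",
      np.linalg.norm(y_imc - y_ref) / np.linalg.norm(y_ref))
```

Even this idealized model makes the trade-off visible: the array computes the full matrix-vector product in place, but the result inherits the error of the quantized conductances, which is why the choice of memory material and cell precision matters.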
Speaker 2: Prof. Bin Gao, Full Professor, Tsinghua University

Abstract: The rapid growth of AI workloads is running into the classic "memory wall" bottleneck, where data transfer between memory and compute units severely limits performance and energy efficiency. Computing-in-Memory (CIM) offers a promising solution by embedding computation directly within memory arrays, significantly reducing data movement and latency. This tutorial provides an in-depth overview of CIM technologies across the full stack, including novel memory devices, analog computing circuits, system-level architectures, and software-algorithm co-design. Emphasis will be placed on cross-layer co-optimization strategies that enable scalable, efficient, and accurate CIM systems. Recent breakthroughs and emerging research directions will also be discussed.

Biography: Prof. Gao is a Full Professor with the School of Integrated Circuits, Tsinghua University. He received his B.S. degree in 2008 and Ph.D. degree in 2013, both from Peking University. He has been a visiting scholar at Nanyang Technological University, Singapore, and Stanford University, USA. He joined Tsinghua University in 2015 and was promoted to Associate Professor with tenure in 2022. His current research interests include advanced memory technologies and computation-in-memory chips. He has published more than 100 journal papers in Science, Nature, Nature Electronics, Nature Nanotechnology, and Nature Machine Intelligence, and more than 50 papers at top conferences (IEDM, ISSCC, VLSI). His work has been cited more than 18,000 times. He has served as a subcommittee chair for IEDM (2021), IRPS (2023, 2024), EDTM (2021), and ICTA, and as a TPC member for DAC, IMW, IPFA, etc.
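As a toy illustration of why the software-algorithm layer matters for analog CIM (our own sketch, with assumed variation levels, not the tutorial's data), the snippet below injects multiplicative device-to-device conductance variation into a stored weight matrix and measures how the output error of a matrix-vector product grows with the spread. Noise-aware training and cross-layer co-optimization aim to keep application accuracy acceptable despite exactly this kind of non-ideality.

```python
import numpy as np

# Toy model (assumed parameters): device-to-device conductance variation in an
# analog CIM array, modeled as multiplicative Gaussian noise on the weights.

rng = np.random.default_rng(1)
d = 1024
W = rng.standard_normal((d, d)) / np.sqrt(d)
x = rng.standard_normal(d)
y_ref = W @ x

for sigma in (0.01, 0.05, 0.10):             # assumed relative conductance spread
    W_dev = W * (1.0 + sigma * rng.standard_normal(W.shape))
    err = np.linalg.norm(W_dev @ x - y_ref) / np.linalg.norm(y_ref)
    print(f"device variation sigma={sigma:.2f} -> relative output error {err:.3f}")
```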