April 5, 2016 08:00 pm
Original Link: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/fCO8-sMNxfc/nvidia-creates-a-15b-transistor-chip-with-16gb-bandwidth-memory-for-deep-learning
NVIDIA Creates a 15B-Transistor Chip With 16GB Bandwidth Memory For Deep Learning
An anonymous reader cites a report on VentureBeat: NVIDIA chief executive Jen-Hsun Huang announced that the company has created a new chip, the Tesla P100, with 15 billion transistors and 16GB of high-bandwidth memory for deep-learning computing. It's the biggest chip ever made, Huang said. "We decided to go all-in on A.I.," Huang said. "This is the largest FinFET chip that has ever been done." The chip's 15 billion transistors are roughly three times as many as most processors or graphics chips on the market, and it takes up 600 square millimeters. The chip can run at 21.2 teraflops. Huang said that several thousand engineers worked on it for years.
Jim McGregor, writing for Forbes (the link is not accessible to users of ad-blocking tools): It features NVIDIA's new Pascal GPU architecture, the latest memory and semiconductor process, and packaging technology -- all to create the densest compute platform to date. In addition, it combines 16GB of die-stacked second-generation High-Bandwidth Memory (HBM2). The memory and GPU are combined into a multichip module on a state-of-the-art silicon substrate. The P100 uses NVIDIA's NVLink interface technology to connect to multiple Tesla P100 GPU modules.
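As a rough sanity check on the figures quoted above, the transistor count and die size imply a density of about 25 million transistors per square millimeter (a back-of-the-envelope sketch only; it assumes the 600 mm² figure refers to the GPU die alone, not the full multichip module):

```python
# Rough density implied by the article's figures.
# Assumption: 600 mm^2 is the GPU die area alone, not the packaged module.
transistors = 15e9      # 15 billion transistors, as reported
die_area_mm2 = 600      # reported die size in square millimeters

density = transistors / die_area_mm2  # transistors per mm^2
print(f"{density:,.0f} transistors per mm^2")  # 25,000,000
```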
Slashdot
Slashdot was originally created in September 1997 by Rob "CmdrTaco" Malda. Today it is owned by Geeknet, Inc.