MIT researchers fabricated 3D chips with alternating layers of semiconducting material grown directly on top of each other. The method eliminates thick silicon between layers, leading to better and faster computation for applications like more efficient AI hardware.
The electronics industry is approaching a limit to the number of transistors that can be packed onto the surface of a computer chip. So, chip manufacturers are looking to build up rather than out. Instead of squeezing ever-smaller transistors onto a single surface, the industry is aiming to stack multiple surfaces of transistors and semiconducting elements — akin to turning a ranch house into a high-rise.

Such multilayered chips could handle exponentially more data and carry out many more complex functions than today's electronics.

A significant hurdle, however, is the platform on which chips are built. Today, bulky silicon wafers serve as the main scaffold on which high-quality, single-crystalline semiconducting elements are grown. Any stackable chip would have to include thick silicon "flooring" as part of each layer, slowing down any communication between functional semiconducting layers.

Now, MIT engineers have found a way around this hurdle, with a multilayered chip design that doesn't require any silicon wafer substrates and works at temperatures low enough to preserve the underlying layer's circuitry.

In a study appearing today in the journal Nature, the team reports using the new method to fabricate a multilayered chip with alternating layers of high-quality semiconducting material grown directly on top of each other. The method enables engineers to build high-performance transistors and memory and logic elements on any random crystalline surface — not just on the bulky crystal scaffold of silicon wafers.

Without these thick silicon substrates, multiple semiconducting layers can be in more direct contact, leading to better and faster communication and computation between layers, the researchers say.

The researchers envision that the method could be used to build AI hardware, in the form of stacked chips for laptops or wearable devices, that would be as fast and powerful as today's supercomputers and could store huge amounts of data on par with…