Researchers from MIT have developed a fully integrated photonic processor that performs deep neural network computations using light, potentially enabling ultrafast and energy-efficient AI applications. Unlike traditional hardware that relies on electronic components, this new chip integrates both linear and nonlinear operations optically, eliminating the need for external digital processors. The photonic processor successfully completed a machine-learning classification task in less than half a nanosecond with more than 92% accuracy, making it a promising alternative for computationally intensive fields like lidar, astronomy, and telecommunications.
The breakthrough was achieved by integrating nonlinear optical function units (NOFUs) into the chip, overcoming a major limitation of previous optical neural networks, which had to offload nonlinear operations to external electronics. The system encodes neural network parameters into light and processes data through programmable beamsplitters and NOFUs, maintaining ultra-low latency. Fabricated using commercial CMOS foundry processes, the chip has the potential for large-scale production and integration into existing technologies. The research, led by MIT’s Quantum Photonics and AI Group, was published in Nature Photonics and was supported by the U.S. National Science Foundation, the U.S. Air Force Office of Scientific Research, and NTT Research.
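The architecture described above can be pictured with a toy simulation: a mesh of programmable beamsplitters composes into a unitary matrix (the network's linear weights, set by phase parameters), after which a nonlinearity is applied directly in the optical domain. This is an illustrative numpy sketch under simplifying assumptions, not the device's actual model; in particular, the `nofu` transfer function here is a hypothetical stand-in for the real NOFU response.

```python
import numpy as np

def beamsplitter(theta, phi, n, i):
    """Embed a 2x2 programmable beamsplitter acting on modes i and i+1 into an n-mode identity."""
    T = np.eye(n, dtype=complex)
    T[i, i] = np.exp(1j * phi) * np.cos(theta)
    T[i, i + 1] = -np.sin(theta)
    T[i + 1, i] = np.exp(1j * phi) * np.sin(theta)
    T[i + 1, i + 1] = np.cos(theta)
    return T

def linear_layer(x, thetas, phis):
    """Linear optics: a rectangular mesh of beamsplitters composes into one unitary W.
    Because W is unitary, the total optical power (squared norm) is conserved."""
    n = len(x)
    W = np.eye(n, dtype=complex)
    k = 0
    for depth in range(n):                    # alternating columns of the mesh
        for i in range(depth % 2, n - 1, 2):  # beamsplitters on non-overlapping mode pairs
            W = beamsplitter(thetas[k], phis[k], n, i) @ W
            k += 1
    return W @ x

def nofu(a, alpha=0.5):
    """Hypothetical NOFU response: an intensity-dependent nonlinearity on the field amplitude.
    The real device's transfer function differs; this only stands in for
    'nonlinearity applied in the optical domain'."""
    intensity = np.abs(a) ** 2
    return a * intensity / (1.0 + alpha * intensity)

# One "photonic layer": programmable linear mixing, then elementwise optical nonlinearity.
rng = np.random.default_rng(0)
n = 4
n_params = n * (n - 1) // 2                        # beamsplitters in an n-mode rectangular mesh
x = rng.normal(size=n) + 1j * rng.normal(size=n)   # input data encoded as optical amplitudes
thetas = rng.uniform(0, 2 * np.pi, n_params)       # trained weights, encoded as phase settings
phis = rng.uniform(0, 2 * np.pi, n_params)
y = nofu(linear_layer(x, thetas, phis))
```

Because the whole layer is optical, no intermediate result ever leaves the chip for a digital processor, which is what keeps the end-to-end latency below a nanosecond.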
- Institution: MIT (Quantum Photonics and AI Group), with collaborators.
- Technology: Fully integrated photonic processor for deep neural network computations.
- Performance: Achieved over 92% accuracy with classification latency under half a nanosecond.
- Key Innovation: Integration of nonlinear operations optically using NOFUs.
- Potential Applications: Lidar, astronomy, high-speed telecommunications.
- Manufacturing: Fabricated using commercial CMOS processes for scalability.
- Publication: Research published in Nature Photonics.
- Funding: Supported by the U.S. NSF, U.S. Air Force Office of Scientific Research, and NTT Research.
Source: Adam Zewe, MIT News.