Towards an Optoelectronic Chip That Mimics the Human Brain

2023/4/15 18:24:14


How does the human brain, made up of some 86 billion neurons connected in neural networks, perform extraordinary feats of computation while consuming only a dozen or so watts? IEEE Spectrum recently spoke with Jeffrey Shainline, a physicist at the National Institute of Standards and Technology, whose work may shed light on this question. Shainline is pursuing a method of computation that could power advanced forms of artificial intelligence: so-called spiking neural networks, which mimic the way the brain works more closely than the artificial neural networks now widely deployed. Today, the dominant paradigm uses software running on digital computers to create artificial neural networks with multiple layers of neurons. These "deep" artificial neural networks have proven enormously successful, but they require significant computational resources and energy to operate, and those energy requirements are growing rapidly: in particular, the computation involved in training deep neural networks is becoming unsustainable.




Researchers have long been intrigued by the prospect of creating artificial neural networks that better reflect what happens in biological neural networks, where a neuron receiving signals from multiple other neurons may reach an activation threshold and "fire," meaning it produces an output spike that is sent on to other neurons, possibly exciting some of them in turn. Shainline's research has focused on using superconducting optoelectronic elements in such networks, and his work has recently evolved from studying theoretical possibilities to conducting hardware experiments. He spoke to Spectrum about these recent developments in his lab.
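
The threshold-and-fire behavior described above is the essence of the standard leaky integrate-and-fire model of a spiking neuron. As a rough illustration only (this is a generic textbook model, not Shainline's hardware, and all parameter values are arbitrary assumptions), the Python sketch below accumulates weighted input spikes, lets the membrane potential decay, and emits an output spike when a threshold is crossed.

```python
import numpy as np

def lif_neuron(input_spikes, weights, threshold=1.0, decay=0.9):
    """Toy leaky integrate-and-fire neuron.

    input_spikes: array of shape (timesteps, n_inputs), entries 0 or 1
    weights:      array of shape (n_inputs,), synaptic weights
    Returns an array of output spikes (0 or 1), one per timestep.
    """
    potential = 0.0
    output = []
    for t in range(input_spikes.shape[0]):
        # Integrate weighted input spikes and let the potential leak.
        potential = decay * potential + np.dot(weights, input_spikes[t])
        if potential >= threshold:
            output.append(1)      # fire: send a spike downstream
            potential = 0.0       # reset after firing
        else:
            output.append(0)
    return np.array(output)

# Example: three presynaptic neurons spiking at random over 20 timesteps.
rng = np.random.default_rng(0)
spikes = (rng.random((20, 3)) < 0.3).astype(float)
print(lif_neuron(spikes, weights=np.array([0.4, 0.5, 0.6])))
```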


Q: I've been hearing about neuromorphic processing chips from IBM and elsewhere for years, but I don't get the sense that they have any real-world applications - is it just me? 

Jeffrey Shainline: Good question. Spiking neural networks: what are they good for? IBM's TrueNorth chip, introduced in 2014, caused a stir because it was new, different, and exciting. More recently, Intel has been doing great things with its Loihi chip, and it is now on the second generation. But whether these chips can solve real-world problems is still a big question. We know that biological brains can do things digital computers can't match, yet these spiking neuromorphic chips don't immediately blow our minds. Why not? I don't think that's an easy question to answer. One thing I would point out is that none of them has 10 billion neurons (roughly the number of neurons in a human brain). Even the brain of a fruit fly has about 150,000 neurons, and the latest Loihi chip doesn't have that many. People are still wrestling with how to use these chips, and the folks at Intel are doing something smart: they're offering academics and startups cheap access to their chips, in many cases for free. They're crowdsourcing ideas in the hope that someone will find a killer app.


Q: What would you guess would be the first killer app for this type of chip? 

Shainline: Maybe a smart speaker, a device that has to listen continuously for you to say some keyword or wake phrase. That usually requires a lot of power. But research has shown that very simple spiking neural algorithms running on a simple chip can do it with almost no power consumption.


Q: Tell me about the optoelectronic devices you and your NIST colleagues are working on and how they can improve spiking neural networks. 

Shainline: First, you need to understand that light is the best way to communicate between the neurons in a spiking neural system. That's because nothing is faster than light, so using light for communication lets you build the largest spiking neural networks. But just sending signals fast is not enough; you also need to operate in an energy-efficient manner. Once you choose to send a signal as light, the best energy efficiency you can achieve is to send just one photon from a neuron to each of its synaptic connections. You can't use any less light. The superconducting detectors we're working on are the best at detecting single photons of light, both in how little energy they dissipate and in how fast they operate. However, you could also build a spiking neural network that uses room-temperature components to send and receive light signals. Right now, it's not clear which strategy is best. But since I'm biased, let me share some reasons for pursuing the superconducting approach.

Admittedly, there is a lot of overhead associated with using superconducting components: you have to build everything in a cryogenic environment so that your devices stay cold enough to superconduct. But once you do that, you can easily add another key element, the Josephson junction. Josephson junctions are the key components of superconducting computing hardware, whether they are used for superconducting quantum bits in a quantum computer, superconducting digital logic gates, or superconducting neurons. Once you decide to use light for communication and superconducting single-photon detectors to sense that light, you have to build your computer in a cryogenic environment anyway. So, without any more overhead, you can now use Josephson junctions.

This brings a non-obvious benefit: it turns out to be easier to integrate Josephson junctions in 3D than it is to integrate MOSFETs in 3D. With semiconductors, you fabricate the MOSFETs on the lower plane of the silicon wafer and then place all the wiring layers on top; placing another MOSFET layer on top of that using standard processing techniques is essentially impossible. In contrast, it is not difficult to fabricate Josephson junctions on multiple planes, and two different research groups have demonstrated this. The same is true for the single-photon detectors we've been talking about. This is a key benefit when you consider scaling these networks to something as complex as the brain: you can fit far more neurons and synapses on a superconducting chip than on a semiconductor chip, because you can stack them in three dimensions. You can have 10 layers, which is a big advantage.
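
To get a feel for the energy scale of the single-photon communication Shainline describes, here is a back-of-envelope calculation. The 1550 nm wavelength and the 1 fJ figure for a conventional electrical signaling event are illustrative assumptions of ours, not numbers from the interview.

```python
# Rough energy of one near-infrared photon vs. a nominal electrical signaling budget.
# The wavelength and the CMOS comparison value are illustrative assumptions only.
h = 6.626e-34         # Planck's constant, J*s
c = 2.998e8           # speed of light, m/s
wavelength = 1550e-9  # assumed telecom-band wavelength, m

photon_energy = h * c / wavelength   # ~1.3e-19 J, i.e. roughly 0.8 eV
cmos_event = 1e-15                   # assumed ~1 fJ per conventional signaling event

print(f"single-photon energy: {photon_energy:.2e} J")
print(f"assumed 1 fJ electrical event is ~{cmos_event / photon_energy:.0f}x larger")
```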


Q: The theoretical implications of this computational approach are impressive. But what kind of hardware have you and your colleagues actually built? 

Shainline: One of our most exciting recent results is a demonstration of a superconducting single-photon detector integrated with a Josephson junction. That allows us to take light at the single-photon level, use it to switch the Josephson junction and generate an electrical signal, and then integrate the signals from many photon pulses. We recently demonstrated this in our lab. We have also built on-chip light sources that operate at low temperatures, and we have spent a lot of time studying the waveguides needed to route optical signals on a chip. I mentioned the 3D integration (stacking) that this computing technology allows. However, if you want each neuron to communicate with thousands of other neurons, you also need some way for light signals to pass from one layer of waveguides to another without loss. We have demonstrated waveguides with up to three stacked planes and believe we can extend that to about 10 layers.
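
The detector-plus-junction circuit Shainline describes behaves like an integrate-and-fire element: each detected photon produces an electrical signal that is added to a running total, and an output is produced once enough photon pulses have been integrated. The sketch below is a toy model of that behavior under our own simplifying assumptions (a fixed current increment per detection and an arbitrary threshold); it is not a description of the actual NIST circuit.

```python
def integrate_photon_pulses(photon_arrivals, increment=1.0, threshold=20.0, leak=0.0):
    """Toy model: each detected photon adds a fixed increment of stored signal.

    photon_arrivals: iterable of 0/1 values, one per timestep (1 = photon detected)
    increment:       signal added per detection event (arbitrary units)
    threshold:       stored signal at which the element 'fires'
    leak:            fraction of stored signal lost per timestep
    Returns the list of timesteps at which a firing event occurred.
    """
    stored = 0.0
    fire_times = []
    for t, detected in enumerate(photon_arrivals):
        stored *= (1.0 - leak)       # optional leak of the integrated signal
        if detected:
            stored += increment      # detector click switches the junction and
                                     # deposits a fixed unit of signal
        if stored >= threshold:
            fire_times.append(t)     # threshold reached: emit an output pulse
            stored = 0.0             # reset the integrator
    return fire_times

# Example: a photon arrives every third timestep for 100 timesteps.
arrivals = [1 if t % 3 == 0 else 0 for t in range(100)]
print(integrate_photon_pulses(arrivals))
```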


Q: When you say "integrated," do you mean you've connected the components together, or do you have everything on a single chip? 

Shainline: We did combine a superconducting single-photon detector with a Josephson junction on a single chip. The chip is mounted on a small printed circuit board, and we put it in a cryostat to keep it cold enough to maintain superconductivity. We use fiber optics for communication from room temperature to cryogenic temperatures.

 

Q: Why are you so enthusiastic about this approach and why aren't others doing it? 

Shainline: There are some very strong theoretical arguments about why this approach to neuromorphic computing might be a game-changer. But it requires interdisciplinary thinking and collaboration, and right now, we're really the only team dedicated to doing this. I would love it if more people were involved. As a researcher, my goal is not to be the first person to do all of these things. I would be happier if researchers from different backgrounds contributed to the development of this technology!
