Advanced opto-neuro-prosthetics on-the-cheap: artificial neurons communicate with biological neurons via trashed video-projector

Our study is a further step toward the development of neuro-prosthetic devices in which biomimetic artificial networks could replace damaged brain circuits and restore communication with brain circuitry. We provide further evidence of the potential of optogenetic stimulation.


Replacement bulbs for video projectors are so expensive that replacing the bulb of an old projector often costs more than the projector is worth. As a result, many otherwise-working projectors sit bulb-less and unused in universities and companies. What can you do with a trashed video projector? Once the light bulb is gone, it seems there is no option but to throw it away.

Figure 1: Example of broken video projector

Some of these “neglected” devices, of the many sitting in forgotten corners of our universities, can be given a second life.

In our study (www.nature.com/articles/s41598...), we used one of these “forgotten guys” to help artificial neurons communicate in real-time with biological neurons.

But let’s go step by step and introduce all of the protagonists. Keep in mind that all of them already existed, in different stages; we just gave them a chance to meet and play, each with its own innate talent. And indeed, they coordinated and played well together, with entrained rhythms, as you can see in the results section of the paper!

Our research started around 2012, when the EU-FET Brainbow project (https://cordis.europa.eu/project/id/284772) was funded, with the aim of developing an in-vitro platform to develop and test neuro-prosthetic devices based on real-time bi-directional communication between artificial and living neurons. A futuristic, very challenging project, and also a very young one, considering that at the time we were all in our thirties.

So, Dr Timothée Levi at the University of Bordeaux was responsible for developing a neuromorphic chip mimicking a biological circuit using artificial neurons1, while the three other PIs, including Dr Paolo Bonifazi, worked on complementary tasks involving real biological networks of varying complexity, spanning from simple, finite in-vitro engineered circuits to ex-vivo whole-brain preparations2. The goal was to damage those biological circuits and use the neuromorphic chip to restore the lost communication and circuit dynamics.

So, by the end of that project, whose results were published last year3, we had a real-time biomimetic spiking neural network running on an FPGA board, capable of sending its output in real time to trigger an electrical stimulator, which turned that input into an electrical pulse delivered to the target biological network via an electrode. So why not add a video output to that device, in the “optogenetics” era, when neurons can respond to patterned optical stimuli4, i.e. to images?

Indeed, that step was already under way: Dr. Paolo Bonifazi at TAU had already used a video-projector system to spatially pattern optical stimulation in neuronal cultures, in an attempt to selectively stimulate arbitrary sets of neurons in the circuit and probe “hub” neurons in cultures2,5,6.

Therefore, it was fairly straightforward to try to connect the artificial neuronal network to a biological neuronal network (an in-vitro neuron culture) using binary images for their communication. Indeed, Dr. Paolo Bonifazi had already shown, about fifteen years ago in the context of neurocomputing7, how images can be processed by biological networks in a dish.

But how should the communication be established? How can the two networks “talk” the same language? First, the spiking neural network (SNN) had to play a melody similar to the biological one. We designed a real-time biomimetic SNN using the Izhikevich neuron model, with short-term plasticity and synaptic noise, to obtain more biologically realistic neuronal behavior.
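For readers unfamiliar with the model: the Izhikevich neuron reduces membrane dynamics to two variables updated at every time step, which is what makes it cheap enough to run many neurons in real time on an FPGA. The minimal sketch below shows the core update with optional additive current noise as a crude stand-in for synaptic noise; the parameters and the noise term are illustrative, not the exact implementation used in the paper.

```python
import numpy as np

def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0,
                    dt=1.0, noise_std=0.0, rng=np.random):
    """One Euler step of the Izhikevich neuron model.
    v: membrane potential (mV), u: recovery variable, I: input current.
    The optional Gaussian current noise is a crude stand-in for the
    synaptic noise used in the paper's SNN (illustrative only)."""
    I_total = I + (noise_std * rng.randn() if noise_std > 0 else 0.0)
    v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I_total)
    u = u + dt * a * (b * v - u)
    fired = v >= 30.0
    if fired:                # spike: reset membrane, bump recovery
        v, u = c, u + d
    return v, u, fired

# Regular-spiking neuron driven by a constant current for 1 s (dt = 1 ms)
v, u, spikes = -65.0, -13.0, []
for t in range(1000):
    v, u, fired = izhikevich_step(v, u, I=10.0)
    if fired:
        spikes.append(t)
```

With the regular-spiking parameters above, the neuron fires tonically under a constant drive; changing a, b, c and d yields the bursting and fast-spiking regimes that make the model attractive for biomimetic “melodies”.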

Cultured networks show spontaneous activity in the form of synchronizations that quasi-periodically recruit all the neurons. Such spontaneous collective dynamics are intrinsic “rhythms” played by the circuits, a very typical property across species, developmental stages and systems8,9. Therefore, the SNN was programmed, and was tuneable, so as to generate a number of such “rhythms and melodies”.

Once a given pattern of electrical activity, or short melody, was spontaneously played by the SNN, it was converted into a binary image: the all-or-nothing activity of 64 representative artificial neurons was mapped onto an 8x8 matrix (composed of 64 elements). There was a unique, fixed match between artificial neurons and matrix elements, and a defined rule for colouring the image: when a neuron is active, its corresponding square is illuminated with blue light; when the same neuron is silent, the same square is black (no light).

So, at this point of the story we have an SNN running in real time with millisecond resolution, spontaneously generating “rhythmic melodies”, which are converted in real time into 8x8 big-pixel images of blue light. The next figure shows the SNN on the FPGA board and how neuronal activity is converted into the 8x8 matrix image.
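The neuron-to-image conversion described above can be sketched in a few lines. The paper only states that the neuron-to-square match is unique and fixed; the row-major assignment used here is an assumption for illustration.

```python
import numpy as np

def spikes_to_image(active, n_side=8):
    """Map the all-or-nothing state of 64 artificial neurons onto an
    8x8 binary image: 1 = blue-lit square, 0 = black (no light).
    `active` is a boolean array of length 64; the row-major
    neuron-to-pixel assignment is an illustrative assumption."""
    active = np.asarray(active, dtype=bool)
    assert active.size == n_side * n_side
    return active.reshape(n_side, n_side).astype(np.uint8)

# Example: neurons 0, 9, 18, ... active -> a lit diagonal of the matrix
frame = spikes_to_image(np.arange(64) % 9 == 0)
```

Each SNN synchronization then becomes one such frame, sent to the projector over the VGA output.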

Figure 2: Left part: Genesys 2 FPGA board used for the experiments. Middle part: real-time raster plot generated by the SNN. Right part: conversion of neuronal activity to the 8x8 matrix image.

What’s next? The biological neuronal network (BNN) sitting in a dish located at the focal plane of the microscope, also independently generating such “rhythmic melodies”, recorded by a set of electrodes embedded in the dish. Most importantly, these neurons, following a genetic manipulation, expressed an optogenetic actuator (the fast Channelrhodopsin-2 variant ChIEF10), which makes the neurons responsive to blue light, much like the neurons of the retinal system, which convert light into electrical signals in the cells. So, at this point all the ingredients were there: the SNN and the BNN, each playing its own “melody”, and the video projection transducing the SNN melody into images, and the images into electrical activation of the biological neurons.

Figure 3: Calcium imaging and video projection of images. Left part: the blue light (460 nm) triggered the response of light-sensitive neurons expressing the ChR2 variant ChIEF in the stimulated area. Right part: the activity of the neurons was also recorded with the red calcium sensor Rhod-3. Bottom part: two representative blue-light stimulation patterns (black-and-white images). Note that the illumination highlights the metal strips of the electrodes.

So, we gathered all the actors on the stage, adding a microscope to which the video projector was coupled in order to demagnify the images to micrometre dimensions, the spatial scale relevant for neurons. The show started, and blue light flashes appeared quasi-periodically.

Figure 4: Left part: scheme of the optical path and TTL control of the video projection. The FPGA board (P) sends two simultaneous outputs: the VGA video signal (F) to the video projector (E, modified Sharp Notevision XR-10X DMD), coding for the image of the 8x8 binary matrix shaped by the SNN activity, and the TTL signal triggering the stimulator (A). The stimulator generates the signal (B), 5 pulses of 5 V each lasting 30 ms with an interval of 40 ms, which controls the custom-made power supply of the Luminus PT-120-B high-power blue LED (D). The image generated by the video projector (G) is at the focal length (250 mm) of the coupling lens (H) located in front of the cube (I) containing the long-pass dichroic mirror and the emission filter (located above the cube). The other cube (J) contains the dichroic mirror and excitation notch filter (dsRed) for red calcium imaging. The image (G) is focused at the adjustable stage (L) of the microscope through a 10x objective (K). The sample image located at the stage (L) is recorded by the camera mounted on the microscope (N) after being focused by the tube lens (M). A PC (O) records the camera images. Right part: picture of the optical set-up, including the DMD projector, which projects into an upright epifluorescence microscope through an additional optical pathway inserted between the camera and the excitation/dichroic cube placed above the neuron culture, obtained by splitting the camera pathway orthogonally with a dichroic mirror.
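The pulse train that drives the LED power supply (5 pulses of 5 V, 30 ms each, 40 ms interval) can be written out explicitly. In this sketch the "interval of 40 ms" is read as the off-time between pulses; it could also mean the onset-to-onset period, so treat the timing below as one plausible reading rather than the authors' exact protocol.

```python
def pulse_train(n_pulses=5, width_ms=30.0, gap_ms=40.0, amplitude_v=5.0):
    """(Onset, offset, amplitude) tuples for the LED-driving pulse train
    described in Figure 4. Assumption: 'interval of 40 ms' is the
    off-time between consecutive pulses."""
    edges = []
    t = 0.0
    for _ in range(n_pulses):
        edges.append((t, t + width_ms, amplitude_v))
        t += width_ms + gap_ms
    return edges

train = pulse_train()
total_ms = train[-1][1]  # end of the last pulse: 4*(30+40) + 30 = 310 ms
```

Under this reading, each SNN synchronization triggers a light burst lasting just over 300 ms, compatible with the quasi-periodic flashes described in the text.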

By tuning the SNN settings, adjusting frequency and threshold to make the “melodies” more or less audible, we could finally achieve communication between the in-silico network and the biological network.

Figure 5: BNN and SNN dynamics. Zoom on a hundred seconds of the BNN activity (bottom two plots). After about 50 seconds, the communication between the SNN and BNN is switched ON (red horizontal bar), and the timing of the stimuli coming from the SNN is shown (black asterisks), corresponding to the synchronized events in the SNN. The raster plots of the activity of the SNN (top) and BNN (bottom) are shown in time bins of a hundred milliseconds. Blue plots represent the total number of spikes in the SNN (top) and BNN (bottom) in the same time bins as the raster plots. Note the high correspondence between peaks in the SNN and BNN when the communication between the two networks is switched ON.

Our study shows, first, that such communication requires entrainment of the melodies, so that the spontaneous SNN synchronizations drive the BNN synchronizations; and second, that it requires a fast, linear response of the BNN to the stimuli. Once both conditions were met, similar activity in the SNN was translated into similar activity in the BNN.
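One simple way to see whether the BNN is entrained is to correlate the binned population spike counts of the two networks, as suggested by the peak correspondence in Figure 5. The sketch below uses Pearson correlation on 100 ms bins; it is a crude proxy for the entrainment the paper reports, not the authors' actual analysis.

```python
import numpy as np

def binned_correlation(snn_times, bnn_times, t_max, bin_ms=100.0):
    """Pearson correlation between SNN and BNN population spike counts
    in fixed time bins (100 ms, the binning used in Figure 5).
    Spike times are in milliseconds."""
    edges = np.arange(0.0, t_max + bin_ms, bin_ms)
    snn_counts, _ = np.histogram(snn_times, bins=edges)
    bnn_counts, _ = np.histogram(bnn_times, bins=edges)
    return np.corrcoef(snn_counts, bnn_counts)[0, 1]

# Toy check: a BNN echoing each SNN burst 20 ms later correlates strongly
snn = np.arange(0, 10_000, 500.0)   # SNN bursts every 500 ms for 10 s
bnn = snn + 20.0                    # entrained response with a small lag
r = binned_correlation(snn, bnn, t_max=10_000)
```

A fast, linear BNN response keeps the echo within the same bin as the stimulus, which is why the two conditions identified above translate into high correlation.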

So, what did we learn from this? What’s new? Indeed, we did not create a new system; as we said at the beginning, we just joined previously described pieces and matched them, as in an orchestra of different instruments, to generate a meaningful sound or outcome, which in this case corresponds to an entrained, informative stimulus-response: similar melodies generated by the SNN are translated into similar melodies played by the BNN.

Our study is a further step toward the development and use of neuro-prosthetic devices in which biomimetic artificial networks could replace damaged brain circuits and restore communication and information processing with brain circuitry. In this study, we provide further evidence of the potential of optogenetic stimulation, which can achieve higher spatial and cell-type resolution than electrical stimulation.

 

References:

1. Ambroise, M., Levi, T., Joucla, S., Yvert, B. & Saïghi, S. Real-time biomimetic Central Pattern Generators in an FPGA for hybrid experiments. Frontiers in Neuroscience 7 (2013).

2. Bonifazi, P. et al. In vitro large-scale experimental and theoretical studies for the realization of bi-directional brain-prostheses. Front. Neural Circuits 7 (2013).

3. Buccelli, S. et al. A Neuromorphic Prosthesis to Restore Communication in Neuronal Networks. iScience 19, 402–414 (2019).

4. Ronzitti, E. et al. Recent advances in patterned photostimulation for optogenetics. J. Opt. 19, 113001 (2017).

5. Abstracts. J Mol Neurosci 51, 1–135 (2013).

6. Luccioli, S., Ben-Jacob, E., Barzilai, A., Bonifazi, P. & Torcini, A. Clique of Functional Hubs Orchestrates Population Bursts in Developmentally Regulated Neural Networks. PLoS Comput Biol 10, e1003823 (2014).

7. Bonifazi, P., Ruaro, M. E. & Torre, V. Statistical properties of information processing in neuronal networks. European Journal of Neuroscience 22, 2953–2964 (2005).

8. Blankenship, A. G. & Feller, M. B. Mechanisms underlying spontaneous patterned activity in developing neural circuits. Nat Rev Neurosci 11, 18–29 (2010).

9. Stringer, C. et al. Spontaneous behaviors drive multidimensional, brainwide activity. Science 364, eaav7893 (2019).

10. Lin, J. Y., Lin, M. Z., Steinbach, P. & Tsien, R. Y. Characterization of engineered channelrhodopsin variants with improved properties and kinetics. Biophys. J. 96, 1803–1814 (2009).


Timothée LEVI

Associate Professor, IMS lab, University of Bordeaux, France and LIMMS/CNRS-IIS, the University of Tokyo, Japan
