DEFENSE ADVANCED RESEARCH PROJECTS AGENCY (DARPA) and the BRAIN Initiative
The White House announced the BRAIN Initiative in April 2013. Today, the initiative is supported by several federal agencies as well as dozens of technology firms, academic institutions, scientists, and other key contributors to the field of neuroscience. DARPA is supporting the BRAIN Initiative through a number of programs, continuing a legacy of DARPA investment in neurotechnology that extends back to the 1970s. An article in our 60th anniversary magazine provides an overview of the agency’s recent research aimed at expanding the frontiers of the field and enabling powerful, new capabilities.
The ElectRx program aims to help the human body heal itself through neuromodulation of organ functions using ultraminiaturized devices, approximately the size of individual nerve fibers, which could be delivered through minimally invasive injection.
The HAPTIX program aims to create fully implantable, modular and reconfigurable neural-interface microsystems that communicate wirelessly with external modules, such as a prosthesis interface link, to deliver naturalistic sensations to amputees.
The NESD program aims to develop an implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the brain and the digital world.
The Neuro-FAST program seeks to enable unprecedented visualization and decoding of brain activity to better characterize and mitigate threats to the human brain, as well as facilitate development of brain-in-the-loop systems to accelerate and improve functional behaviors. The program has developed CLARITY, a revolutionary tissue-preservation method, and builds on recent discoveries in genetics, optical recordings, and brain-computer interfaces.
The N3 program aims to develop a safe, portable neural interface system capable of reading from and writing to multiple points in the brain at once. Whereas the most advanced existing neurotechnology requires surgical implantation of electrodes, N3 is pursuing high-resolution technology that works without the requirement for surgery so that it can be used by able-bodied people.
Reliable Neural-Interface Technology (RE-NET) (Archived)
The RE-NET program seeks to develop the technologies needed to reliably extract information from the nervous system, and to do so at a scale and rate necessary to control complex machines, such as high-performance prosthetic limbs.
The RAM program aims to develop and test a wireless, fully implantable neural-interface medical device for human clinical use. The device would facilitate the formation of new memories and retrieval of existing ones in individuals who have lost these capacities as a result of traumatic brain injury or neurological disease.
The RAM Replay program will investigate the role of neural “replay” in the formation and recall of memory, with the goal of helping individuals better remember specific episodic events and learned skills. The program aims to develop novel and rigorous computational methods to help investigators determine not only which brain components matter in memory formation and recall, but also how much they matter.
The Revolutionizing Prosthetics program aims to continue increasing functionality of DARPA-developed arm systems to benefit Service members and others who have lost upper limbs. The dexterous hand capabilities developed under the program have already been applied to small robotic systems used to manipulate unexploded ordnance, reducing the risk of limb loss among Soldiers.
The SUBNETS program seeks to create implanted, closed-loop diagnostic and therapeutic systems for treating neuropsychological illnesses.
The TNT program seeks to advance the pace and effectiveness of cognitive skills training through the precise activation of peripheral nerves that can in turn promote and strengthen neuronal connections in the brain. TNT will pursue development of a platform technology to enhance learning of a wide range of cognitive skills, with a goal of reducing the cost and duration of the Defense Department’s extensive training regimen, while improving outcomes.
DARPA-funded efforts in the development of novel brain–computer interface technologies
Under a Creative Commons license
Highlights
• DARPA’s programs foster multi-disciplinary collaborations.
• DARPA’s BCI programs span four major challenges: detect, emulate, restore, and improve.
• Aims: restore function after injury; improve performance of healthy individuals.
The Defense Advanced Research Projects Agency (DARPA) has funded innovative scientific research and technology developments in the field of brain–computer interfaces (BCI) since the 1970s. This review highlights some of DARPA’s major advances in the field of BCI, particularly those made in recent years. Two broad categories of DARPA programs are presented with respect to the ultimate goals of supporting the nation’s warfighters: (1) BCI efforts aimed at restoring neural and/or behavioral function, and (2) BCI efforts aimed at improving human training and performance. The programs discussed are synergistic and complementary to one another, and, moreover, promote interdisciplinary collaborations among researchers, engineers, and clinicians. Finally, this review includes a summary of some of the remaining challenges for the field of BCI, as well as the goals of new DARPA efforts in this domain.
Keywords: DARPA; brain–computer interface; brain–machine interface; neuroscience
Brain–computer interfaces (BCI) are systems that mediate signaling between the brain and various technological devices. The first demonstrations of BCI in humans and animals took place in the 1960s. In 1964, Grey Walter demonstrated the use of non-invasively recorded electroencephalogram (EEG) signals from a human subject to control a slide projector (Graimann et al., 2010). Shortly thereafter, Fetz demonstrated that, by providing food reward to awake non-human primates along with auditory or visual feedback on the firing rates of neurons in the motor cortex, these neurons could be operantly conditioned to increase their firing rates by 50–500% (Fetz, 1969). In 1973, the term brain–computer interface (BCI) was coined by Jacques J. Vidal, who laid out a comprehensive experimental research plan to interface the human brain with computers (Vidal, 1973), including the XDS Sigma 7 at the University of California, Los Angeles, which coincidentally also served as the first node of the Advanced Research Projects Agency Network (ARPANET). Following these initial demonstrations, the field of BCI has expanded significantly, encompassing both invasive and non-invasive neural recordings in humans and animals, spanning a range of sensorimotor and cognitive functions, and incorporating novel feedback mechanisms in closed-loop systems.
While most closed-loop BCI systems provide feedback to the user on system performance through the presentation of sensory (primarily visual) information, approaches have also been developed to provide sensory feedback through direct stimulation of the nervous system. For instance, unidirectional systems such as cochlear and retinal implants can provide partial restoration of sight and hearing through the direct stimulation of neurons within the cochlea and retina, respectively (see Géléoc and Holt, 2014, Chuang et al., 2014 for review). Recent explorations of targeted reinnervation in amputees suggest that such approaches may not only enable prosthetic limb control through peripheral nerve signals but may also provide a means of conveying somatosensory sensation of touch, temperature, pain, and vibration to these patients (Hebert et al., 2013).
Sensory percepts can also be elicited through direct brain stimulation (Schiller et al., 2011, Kar and Krekelberg, 2012, Larson and Cheung, 2012, Tabot et al., 2013, Zaaimi et al., 2013, May et al., 2013, Johnson et al., 2013). Such findings provide a proof of concept for the integration of stimulation-induced sensory feedback into BCI systems as a novel mechanism of closing the loop (e.g., see O’Doherty et al., 2011). In addition to providing an alternative means of sensory perception, direct brain stimulation can also be used to bridge the gap across perturbed neural connections (Berger et al., 2011). Studies suggest that neural stimulation may even have the potential to restore functional connectivity and associated behaviors through modulation of molecular mechanisms of synaptic efficacy (Jacobs et al., 2012, Rahman et al., 2013, Song et al., 2013). In this regard, BCI technologies may not only be useful for enabling function, but also have the potential for implementation as a therapeutic device for restoring function.
A primary application of BCI is to provide a mechanism for movement or communication by patients who are unable to move or communicate through normal pathways. Such approaches have included the translation of recorded neural signals associated with sensory and goal-directed mechanisms into navigation or selection commands, enabling the user to move through a virtual or real environment or to select letters to type for purposes of communication (Thurlings et al., 2010). Other approaches have included decoding of neural signals directly associated with the intent to move (Collinger et al., 2013, Doud et al., 2011) or speak (Pei et al., 2011). In addition to approaches that leverage correlates of user intent, BCI has been utilized to provide neurofeedback to users, enabling them to regulate neural and behavioral functions normally not under volitional control. Such functions include attention, pain, emotion, and memory (Birbaumer et al., 2009).
While people living with injury remain a primary end-user target population for the field of BCI, the increasing availability of portable hardware for real-time non-invasive sensing of neural activity has also led to the development of commercial BCI applications for healthy individuals, as seen by recent incorporation of BCI within the gaming industry. BCI games have used neural signals to control or influence functions such as steering through virtual environments, changing the form and function of avatars, or controlling the movement of a virtual ball through collaboration or competition among multiple users (for review, see Coyle et al., 2013, Marshall et al., 2013).
Recent advances in the field of BCI have been achieved via a broad spectrum of funding sources across academic, industry, clinical, and various international government organizations. The current review, however, is focused on BCI research funded by the Defense Advanced Research Projects Agency (DARPA). Established in 1958 in response to the Soviet launch of the world’s first satellite, Sputnik, DARPA’s mission is to maintain the technological superiority of the United States military and prevent technological surprise by U.S. adversaries (Defense Advanced Research Projects Agency, 2013). To achieve this mission, DARPA invests in revolutionary, high-risk/high-reward research efforts ranging from fundamental scientific discoveries to the application of these discoveries for military use. DARPA’s primary constituents are the military services and American warfighters. The agency’s goal is to provide these constituents with the capabilities to perform their complex duties and to quickly and effectively recover from adverse events.
While DARPA itself does not conduct scientific research, the agency’s program managers and directors, whose expertise spans diverse scientific and military fields, are highly immersed in the scientific research community as well as the U.S. military community. Through interactions with these communities, DARPA assesses current needs and state-of-the-art scientific and technological achievements and identifies areas in which groundbreaking advances could revolutionize national security capabilities. Through its programs, DARPA funds research and development efforts conducted by a broad spectrum of industry, academic, and other government organizations. These efforts range from fundamental scientific exploration to development of prototype technological devices with specific end-user applications. Additionally, DARPA facilitates transition and operationalization of successful results from its programs for military and commercial use.
In recent years, DARPA has supported highly innovative research in the field of neuroscience, fostering multi-disciplinary collaborations among neurobiologists, neuropsychologists, mathematicians, and engineers. The goals of these efforts span four major challenges:
Detect – Develop diagnostics, models, and devices to characterize and mitigate threats to the human brain.
Emulate – Leverage inspiration from functional brain networks to efficiently synthesize information.
Restore – Reestablish behavioral and cognitive function lost as a result of injury to the brain or body.
Improve – Develop brain-in-the-loop systems to accelerate training and improve functional behaviors.
Of relevance to this special issue, many of DARPA’s investments in neuroscience have encompassed the development of novel BCI technologies. These DARPA-funded efforts have enabled new neural interface technologies for detecting multi-scale and multi-region brain function in real time, as well as complex mathematical algorithms that emulate the translation of neural activity into activity in downstream brain areas and resulting behavioral functions. Together, these neural interfaces and mathematical models are integrated into BCI systems that can restore and/or facilitate near-natural neural and behavioral function.
DARPA’s initial investments in BCI began in 1974 under the Close-Coupled Man/Machine Systems (later renamed Biocybernetics) program. This program investigated the application of human physiological signals, including brain signals as measured non-invasively using either EEG or magnetoencephalography (MEG), to enable direct communication between humans and machines and to monitor neural states associated with vigilance, fatigue, emotions, decision-making, perception, and general cognitive ability. The program yielded notable advancements, such as a detailed understanding of single-trial, sensory-evoked responses in the EEG of human participants. These efforts demonstrated that neural activity in response to visual checkerboard stimuli, alternating at different frequencies at each of four fixation points, could be decoded in real time and used to navigate a cursor through a simple maze (Vidal, 1977). In 2002, DARPA took a deeper dive into the field of BCI by launching its Brain Machine Interface (BMI) program, followed shortly by the Human Assisted Neural Devices (HAND) program. These early programs tackled a wide array of BCI challenges, including sensorimotor control of prosthetic devices (Carmena et al., 2003), facilitation of memory encoding (Song et al., 2007), decoding of visual inputs (Hung et al., 2005), development of dynamic neural decoding algorithms (Gage et al., 2005), and the development of new devices for high-resolution neural imaging (Vetter et al., 2004). These DARPA-funded efforts provided many of the foundational discoveries and technologies that have enabled more recent developments in this field.
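The checkerboard result above lends itself to a compact illustration. Below is a purely illustrative Python sketch of frequency-tagged decoding in the spirit of the Vidal (1977) demonstration: each fixation point flickers at a distinct rate, and the decoder selects the candidate frequency with the most power in the recorded signal. The sampling rate, flicker frequencies, and noise level are assumptions for the sketch, not values from the original study.

```python
# Illustrative sketch only: minimal frequency-tagged (SSVEP-style) decoding.
# All signal parameters below are assumptions, not values from Vidal (1977).
import math
import random

FS = 250.0                            # sampling rate in Hz (assumed)
CANDIDATES = [6.0, 8.0, 10.0, 12.0]   # hypothetical flicker rates of 4 fixation points

def band_power(signal, freq, fs=FS):
    """Power of `signal` at `freq`, via a single Fourier coefficient."""
    c = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    s = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return (c * c + s * s) / len(signal)

def decode_fixation(signal):
    """Pick the candidate frequency with the most evoked power."""
    return max(CANDIDATES, key=lambda f: band_power(signal, f))

# Simulate 2 s of EEG dominated by a 10 Hz steady-state response plus noise.
random.seed(0)
eeg = [math.sin(2 * math.pi * 10.0 * i / FS) + 0.5 * random.gauss(0, 1)
       for i in range(int(2 * FS))]
print(decode_fixation(eeg))  # -> 10.0
```

Because each candidate frequency completes an integer number of cycles in the 2 s window, the single-coefficient power estimate is unbiased here; a practical system would additionally average over harmonics and electrode channels.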
This review highlights several recent and ongoing DARPA-funded programs that are aimed at utilizing BCI to either restore neural and behavioral function following injury to the brain, or to improve human performance through intervention during training or operational tasks. Notably, under President Obama’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, announced in April 2013, DARPA is currently supporting new research efforts aimed at the development of novel BCI technologies for restoring function in human clinical populations with either neuropsychiatric or memory dysfunction. The goals of these new programs will be described further in the conclusion of this review.
2. DARPA BCI efforts to restore neural and behavioral function
Recent and ongoing DARPA programs supporting the development of BCI technologies to restore neural and behavioral function include Revolutionizing Prosthetics, Reorganization and Plasticity to Accelerate Injury Recovery (REPAIR), Restorative Encoding Memory Integration Neural Device (REMIND), and Reliable Neural Interface Technology (RE-NET). These programs are complementary and synergistic, leveraging novel techniques to interface with the nervous system, providing new fundamental approaches to modeling the nervous system, and enabling direct communication with the brain, body, and environment. For the application of actuation, Revolutionizing Prosthetics is translating state-of-the-art BCI systems to restore sensorimotor function in humans, and REPAIR is utilizing animal models to advance neural decoder capabilities through the incorporation of multi-scale, dynamic models that account for the brain’s plastic changes underlying sensorimotor function during learning or following injury. The BCI system developed by the REMIND program targets a different neurobehavioral system – memory – and has demonstrated the improvement and restoration of performance on memory tasks in animal models. Finally, the RE-NET program is addressing challenges involved in developing safe, robust BCI systems for chronic use and is applicable to a broad spectrum of BCI applications.
2.1. Revolutionizing Prosthetics
The Revolutionizing Prosthetics program began in 2006 with the vision of restoring near-natural dexterity for people with loss of upper-limb control. The objective was to allow Wounded Warrior amputees to improve quality of life, maximize function and independence, enable activities of daily living, and return to service (if desired). DARPA embarked on this challenge in response to the increased incidence of amputations and injuries to the nervous system suffered by service members. Major upper extremity disabilities are a significant problem for the Department of Defense (DoD). Between 2000 and 2011, there were nearly 6000 amputations of service members within the US Armed Forces, with over two-thirds of these instances involving upper extremity amputations (O’Donnell, 2012). Approximately 16.5% of amputees returned to active duty, with return-to-duty rates of single amputees reaching 20% (Stinner et al., 2010). Therefore, there is a need for functional solutions that enable service members to deliver high performance. In addition to amputees, the Revolutionizing Prosthetics program also serves individuals with loss of upper extremity function as a result of spinal cord injury (SCI). It is estimated that there are nearly 300,000 individuals in the US living with SCI with approximately 12,000 new cases every year (National Spinal Cord Injury Statistical Center, 2013).
Prior to DARPA’s investments in this area, there were few options for military personnel suffering from these disorders. Remarkably, one of the most commonly used solutions for upper extremity loss was the split-hook prosthetic developed in 1912 (Dorrance, 1912). Evaluation of the state of the art revealed that very little progress had been made in prosthetic innovation since that time, with only meager advances in the 1938 Becker design (Becker, 1942), the National Academy of Sciences Artificial Limb Program of 1945 (Furman, 1962), and the one-degree-of-freedom (DOF) hands on the market in the early 2000s. DARPA responded to the lack of advanced prosthetic limb options for upper-extremity amputees with a two-pronged development strategy. Both efforts focused on developing modular arm systems that could support a variety of amputees, including those with transradial, transhumeral, and full shoulder disarticulation amputations. Development of these prosthetic limbs involved highly demanding specifications that mimicked the attributes and capabilities of real human arms, including weight, shape, and grip strength. Two teams of investigators, DEKA and The Johns Hopkins University Applied Physics Laboratory (JHU/APL), took on the task of designing and assembling these next-generation arms to meet the program demands (Johannes et al., 2011, Resnik et al., 2013). The advanced DARPA arm systems, one with ten and the other with seventeen miniature motors, enable replication of near-natural hand and arm movements and are available to the research and clinical communities. Additionally, a virtual arm system is available for prototyping (Armiger et al., 2011, Collinger et al., 2014).
In addition to the arms themselves, the Revolutionizing Prosthetics program also produced innovations in the user control interface. Because the diversity in loss of upper extremity function was high among military personnel, the control interfaces had to provide multiple options such that users could choose the solution most appropriate to their own needs, an approach in line with personalized medicine. The first control interface developed under the Revolutionizing Prosthetics program was a non-invasive control modality that does not require surgical procedures. It consisted of inertial measurement units that could be placed on the shoes as well as pressure/bump switches that could be attached to the torso. Movement of the inertial units enabled actuation of all of the degrees of freedom, while the bump switches enabled changes in the arm modes (hand grips, for example). These interfaces were well received by prosthetic limb users, and over 7000 hours of use among 77 amputees and quadriplegics were logged during pilot testing. Other viable approaches for peripheral control of these arm systems include the electromyogram (EMG), targeted muscle reinnervation (TMR), and nerve interfaces such as implanted myoelectric sensors (IMES) (for a review, see Ortiz-Catalan et al., 2012). In May 2014, the DEKA arm system received U.S. Food and Drug Administration (FDA) approval. Initial input control modalities in this approval include inertial and EMG control. Efforts are ongoing to make these systems available to military and civilian personnel.
Beyond the use of inertial or EMG control, Revolutionizing Prosthetics has provided transformative innovations in direct brain control of prosthetic limbs that enabled human users to think about moving in much the same way they would control their own arm to actuate the prosthetic arm systems (Collinger et al., 2013). This line of research and development involved real-time recording and decoding of motor cortical signals to provide research participants with tetraplegia the ability to control up to ten DOF with the prosthetic arm systems. The main enablers for near-natural control include micro-electrode arrays for recording brain signals and complex algorithms to translate neural activity into commands for the motors throughout the prosthetic arm system. In the Revolutionizing Prosthetics program, single unit neuronal recordings have been achieved in a human clinical patient via the implantation of two intracortical microelectrode arrays (Blackrock Microsystems, Salt Lake City, UT), each with 96 electrode shanks. The 4 mm × 4 mm assembly was implanted in the participant’s motor cortex (M1) and connected percutaneously through the use of two head-mounted pedestals. Preoperative structural and functional MRI and magnetoencephalography (MEG) were used to identify hand and finger activation areas in M1 for the purpose of decoding grasp behaviors, and the arrays were implanted 14 mm apart through the use of a stereotaxic surgical navigation system (Collinger et al., 2014). The combined signals from these arrays allowed for the simultaneous recording of over 250 unique single units, which were then processed in real time to ultimately send commands to move the JHU/APL prosthetic limb. The recorded signals passed through a Blackrock Microsystems NeuroPort data acquisition system, which converted neuronal firing rate (30-ms bins) into a functional mapping for prosthetic limb commands in endpoint velocity space.
Real-time visual feedback from the prosthetic limb to the participant enabled closed-loop control. Using this system, the participant achieved control of the arm in three DOF (endpoint of the wrist) within two weeks of implantation, and began operating on seven DOF within five weeks (Collinger et al., 2013). Depending on the types of signals acquired from the brain (single neuron vs. electrocorticogram (ECoG)), new population vector decoding methodologies and shared control architectures needed to be developed to allow users to initialize their control of the system and then adaptively learn to obtain increasing control of a greater number of degrees of freedom (e.g., see Collinger et al., 2013). Applying such methodologies to novel behavioral paradigms, Revolutionizing Prosthetics efforts have further elucidated the neural mechanisms underlying human-tool interaction. These discoveries have enabled a deeper understanding of how the brain represents motor control, environmental cues, object interaction, and perception of neuroprosthetic control (Hauschild et al., 2012, Collinger et al., 2013). Performance was established both in terms of completion of various reach and grasp tasks, and also by way of evaluation against functional metrics such as the Action Research Arm Test (ARAT), a clinical outcome measure derived from the stroke rehabilitation community (Lyle, 1981).
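The population vector decoding referenced above can be sketched compactly. The toy model below assumes cosine-tuned units, a standard textbook simplification rather than the program's actual decoding algorithm; the unit count (echoing the roughly 250 units recorded above), baseline, and gain are illustrative assumptions.

```python
# Toy population-vector decoder with cosine-tuned units. Illustrative only:
# tuning parameters are assumptions, not fitted values from the program.
import numpy as np

rng = np.random.default_rng(1)

n_units = 250                                # illustrative unit count
pd = rng.normal(size=(n_units, 3))           # each unit's preferred direction (3-D)
pd /= np.linalg.norm(pd, axis=1, keepdims=True)

BASELINE, GAIN = 20.0, 15.0                  # spikes/s, assumed tuning parameters

def firing_rates(velocity):
    """Cosine tuning: rate rises with the cosine of the angle to the preferred direction."""
    return BASELINE + GAIN * pd @ velocity

def population_vector(rates):
    """Decode direction as the rate-weighted sum of preferred directions."""
    v = (rates - BASELINE) @ pd
    return v / np.linalg.norm(v)

intended = np.array([0.0, 0.0, 1.0])         # e.g., "reach upward"
decoded = population_vector(firing_rates(intended))
```

With 250 randomly oriented units, the decoded direction lands within a few degrees of the intended one; practical decoders fit the tuning from data and, as described above, update in closed loop as the user adapts.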
To continue to push the frontier of the intersection between brain science and technology and deliver the most natural arm systems, DARPA researchers have laid the foundation for adding the next generation of neuroprosthetic control by exploring the restoration of the sense of touch. Constructing complete closed-loop sensorimotor systems is essential for identifying objects, manipulating objects, and even grasping objects in the absence of vision. Some of the first steps in developing these next generation interfaces have involved investigating how the non-human primate brain encodes sensory information provided via natural means (tactile stimulation of the subject’s own fingers), and comparing that to the psychometric evaluation of the encoding of sensory information delivered through cortical stimulation (Tabot et al., 2013, Zaaimi et al., 2013). These efforts explored simple percepts of touch, as well as complex encoding of slip and texture. For the Revolutionizing Prosthetics program, a series of experiments were performed at the University of Chicago to demonstrate the safety of chronically implanted stimulating electrode arrays in the somatosensory cortex of non-human primates. The implant configuration consisted of two 100-channel, sputtered iridium oxide film (SIROF)-tipped Utah electrode arrays (UEA) connected via Cereport connectors to a CereStim R96 stimulator, all manufactured by Blackrock Microsystems. Stimulation was delivered to three non-human primates at 300 Hz across a range of charge amplitudes, duty cycles, and interval durations. Sensory stimulation was performed for 4 h per day over a period of six months. The results revealed no deficits in fine motor control and demonstrated the safety of the electrode-tissue interface (Chen et al., 2014). In tandem with the safety study, an efficacy study was also performed at the University of Chicago to characterize the relationship between mechanical and electrical stimulation on tactile tasks (Berg et al., 2013).
Using both a 96-electrode SIROF-tipped UEA implanted in the hand representation of Brodmann’s area 1 and two 16-electrode Floating Microelectrode Arrays (MicroProbes for Life Sciences, Gaithersburg, MD) targeting the hand region in Brodmann’s area 3b, the implants and corresponding stimulation via a CereStim stimulator were used in a series of electrical detection tasks and compared against mechanical detection tasks. It was demonstrated that electrical stimulation sent in response to tactile stimulation of the prosthetic finger showed equivalent detection performance to mechanical stimulation of the native finger, with a psychometric curve function defining the relationship between mechanical and electrical sensation for use in subsequent stimulation experiments. This combined suite of safety and efficacy data has been critical in the support of FDA Investigational Device Exemption (IDE) approval for testing in human clinical populations. The ultimate vision for transitioning these efforts for clinical use is to enable signals from sensors on prosthetic fingers to be translated into stimulation signals delivered directly to the sensory cortex, enabling patients to ‘feel’ when their prosthetic hand touches objects. This transition from visually driven closed-loop control to full sensorimotor closed-loop control (see Fig. 1) is anticipated to enable increased user control of prosthetic limbs, with faster response times and near-natural sensation during performance of tasks with occluded views or those that require tactile feedback. It is hoped that these advances will continue to improve independence and quality of life after injury in users of prosthetic limbs.
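The psychometric-equivalence idea can be made concrete with a toy logistic psychometric function. The threshold and slope below are hypothetical, not the fitted values from the Berg et al. experiments; the sketch only shows how such a curve maps stimulus amplitude to detection probability, and how inverting it yields equivalent stimulation levels for a target probability.

```python
# Toy logistic psychometric function. THRESHOLD and SLOPE are hypothetical
# values, not fitted parameters from the experiments described above.
import math

THRESHOLD = 60.0   # amplitude at 50% detection (assumed units, e.g. uA)
SLOPE = 0.1        # steepness of the curve (assumed)

def p_detect(amplitude):
    """Detection probability for a given stimulus amplitude."""
    return 1.0 / (1.0 + math.exp(-SLOPE * (amplitude - THRESHOLD)))

def equivalent_amplitude(p):
    """Invert the curve: the amplitude producing detection probability p."""
    return THRESHOLD + math.log(p / (1.0 - p)) / SLOPE

print(equivalent_amplitude(0.5))  # -> 60.0, recovering the 50% threshold
```

In practice the parameters are fitted to behavioral detection data for both mechanical and electrical stimulation, and the two fitted curves define the mapping between modalities used in subsequent stimulation experiments.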
Fig. 1. Idealized bidirectional brain–computer interface for closed-loop prosthetic control. Neural correlates of motor intent are recorded from electrode arrays implanted in motor areas of the brain such as the primary motor cortex (M1). The signals are decoded and used to control the movement of a prosthetic arm. Sensors on the robotic arm detect information on touch (via contacts with external objects) and/or proprioception (via movement and position of the prosthetic limb). Outputs from these sensors are then converted to patterns of stimulus pulses that are delivered via implanted electrode arrays to sensory regions of the brain, such as primary somatosensory cortex (S1).
Reprinted from “Restoring sensorimotor function through intracortical interfaces: progress and looming challenges,” by S.J. Bensmaia and L.E. Miller, 2014, Nature Reviews Neuroscience, 15, p. 315. Copyright 2014 by Nature Publishing Group. Reprinted with permission.
2.2. Reorganization and Plasticity to Accelerate Injury Recovery (REPAIR)
Although the BCI technologies developed under the Revolutionizing Prosthetics program have proved quite remarkable in enabling direct neural control of robotic limbs, the neural decoding algorithms do not capitalize on one of the brain’s fundamental characteristics – plasticity. Importantly, brain function is not fixed or static, but rather it adapts in response to learning new information (such as meeting a new person) or new behavioral skills (such as riding a bicycle). Moreover, these learning processes are subserved by brain activity at multiple spatial scales, ranging from the activity of single neurons to coordinated patterns of activity across small and large networks of neurons, and ultimately, to behavior. Such changes also occur across multiple temporal scales, encompassing millisecond level functional changes as well as structural changes that can result in alterations in neural activity over days, weeks, months, or longer. In addition to adaptation of the brain in response to learning, the brain’s function can also be dynamically altered by injury, either of the brain itself, such as in the case of traumatic brain injury (TBI), or of the body – for instance, amputees no longer have normal sensations of touch that are sent to the brain’s sensory regions, and they can no longer use the brain’s motor control systems to directly move their affected limb (e.g., see Pohlmeyer et al., 2014).
The goal of DARPA’s REPAIR program, initiated in 2010 and projected to continue through 2015, is to develop a multi-scale, biologically accurate model of neural function that accounts for the brain’s adaptation over time. The computational models developed under REPAIR have focused on sensorimotor function, that is, how the brain learns to use sensory information such as touch or vision to generate appropriate reaching and grasping behaviors in order to perform complex tasks (e.g., see Sanchez et al., 2012, Shenoy and Nurmikko, 2012, Andersen et al., 2012). Under this program, new neural interface tools have been designed to detect and alter the activity of large populations of neurons in awake, behaving non-human primates across multiple spatial and temporal scales. Neural recordings from these interfaces have been used to develop and validate computational models that emulate dynamic functions of the brain and resulting behaviors. Through highly collaborative efforts across eight universities, the computational models developed under REPAIR are being integrated with one another and interfaced directly with the brain for the purpose of restoring neural and behavioral function following neural injury or sensory deprivation. The final dynamic model developed under the REPAIR program is anticipated to be incorporated into a co-adaptive BCI system that optimally facilitates sensorimotor function in a non-human primate by enabling adaptation of both the in silico model and biological brain. While research efforts conducted under REPAIR have utilized only animal models, the new technologies and scientific insights will undoubtedly provide a foundation for the development of novel therapeutic devices for human clinical populations.
Computational models developed under REPAIR include biomimetic models that mimic the properties of neuronal firing and dynamic connectivity across networks of neurons (Kerr et al., 2012). Such models have been integrated with higher-level biomimetic models of the reinforcement learning that the brain undergoes while learning to perform new tasks (Mahmoudi and Sanchez, 2011; Neymotin et al., 2013). REPAIR researchers have also developed low-dimensional models that predict state-space trajectories of population-level neural activity involved in the planning and execution of sensorimotor behaviors (Shenoy et al., 2013; Ames et al., 2014). Additionally, sophisticated new algorithms have been developed that decode neural representations of force-related variables, such as torque, in addition to traditional kinematic variables (e.g., position); compared with traditional decoders based on kinematic variables alone, these yield more natural sensorimotor prosthetic control and the ability to compensate for novel dynamic environments (Chhatbar and Francis, 2013). REPAIR investigators are developing and testing their models by recording neural activity in animals performing complex behavioral tasks and, importantly, determining whether the models correctly predict multi-scale changes in brain activity and behavior when certain aspects of the brain’s activity are temporarily perturbed. Many of these studies investigate neural correlates of complex behaviors in freely moving non-human primates and have been made possible by recent developments in wireless neural interfaces (Foster et al., 2012; Borton et al., 2013).
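As a rough illustration of the low-dimensional, state-space view of population activity described above, the following sketch simulates a population of neurons driven by a two-dimensional latent trajectory and recovers that trajectory with PCA. All quantities here (neuron count, noise level, latent dynamics) are illustrative assumptions, not values from the REPAIR studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 50 neurons whose firing rates are driven by a
# 2-D latent "reach plan" state plus noise, a toy stand-in for the
# low-dimensional population dynamics described in the text.
n_neurons, n_timesteps, latent_dim = 50, 200, 2
t = np.linspace(0, 2 * np.pi, n_timesteps)
latent = np.stack([np.sin(t), np.cos(t)], axis=1)        # (T, 2) latent trajectory
loading = rng.normal(size=(latent_dim, n_neurons))       # fixed neuron tuning weights
rates = latent @ loading + 0.1 * rng.normal(size=(n_timesteps, n_neurons))

# Recover a low-dimensional trajectory with PCA (SVD on centered rates).
centered = rates - rates.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
trajectory = centered @ vt[:latent_dim].T                # (T, 2) projected trajectory

# With rank-2 structure plus small noise, two components dominate.
explained = (s[:latent_dim] ** 2).sum() / (s ** 2).sum()
print(f"variance explained by 2 components: {explained:.3f}")
```

In real population recordings the latent dimensionality and dynamics must of course be discovered rather than assumed, which is precisely what the state-space models cited above address.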
To perform precise, reversible perturbations of brain activity, REPAIR researchers have leveraged and expanded upon recent developments in optogenetics, enabling expression of bioengineered opsins (light-sensitive ion channels) within the cell membranes of specific types of neurons (Diester et al., 2011; Zalocusky and Deisseroth, 2013). These opsins undergo conformational changes in response to specific wavelengths of light, altering the thresholds for neuronal action potentials and ultimately increasing or decreasing neuronal firing activity in the presence of specific colors of light. Moreover, new neural interface hardware developed under the REPAIR program has enabled optical neuromodulation simultaneous with electrical recording of single- and multi-unit activity in awake, behaving non-human primates (e.g., see Fig. 2) (Shenoy and Nurmikko, 2012; Ozden et al., 2013). These developments have been of utmost importance in the REPAIR program, facilitating modeling of the brain’s dynamic responses to perturbations of neural activity.
Fig. 2. Coaxial optrode for simultaneous electrical recording and optogenetic neuromodulation. (a) Coaxial optrode cross-sectional schematic (top) and photograph (bottom). (b) Scanning electron microscope images of coaxial optrode tip. (c) Full-length photograph of coaxial optrode. (d) Photograph of coaxial optrode depicting reinforcing thin stainless steel tube (310 μm diameter) and tissue penetrating portion of shaft (165 μm diameter). (e) Performance of the coaxial optrode is shown by optically modulated, in vivo electrophysiological recordings from somatosensory cortices in an anesthetized mouse (transgenic Thy1-ChR2/YFP, left), anesthetized rat (transduced with viral construct AAV5-CAMKIIα-C1V1-eYFP, middle), and awake behaving non-human primate (transduced with AAV5-CAMKIIα-C1V1-eYFP, right). Blue and green bars indicate timeframes of light delivery (at 473 nm for the mouse and 561 nm for the rat and non-human primate, respectively), highlighting light-induced increases in neuronal spiking activity. (For interpretation of the references to color in figure legend, the reader is referred to the web version of the article.)
Adapted from “A coaxial optrode as multifunction write-read probe for optogenetic studies in non-human primates,” by I. Ozden, J. Wang, Y. Lu, T. May, J. Lee, W. Goo, D. J. O’Shea, P. Kalanithi, I. Diester, M. Diagne, K. Deisseroth, K. V. Shenoy, and A. V. Nurmikko, 2013, Journal of Neuroscience Methods, 219(1), pp. 144 and 148. Copyright 2014 by Elsevier. Adapted with permission.
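The basic effect exploited by these optogenetic tools, namely that light of the appropriate wavelength raises the firing rate of neurons expressing an excitatory opsin, can be caricatured with a toy Poisson spiking simulation. The firing rates, gain, and pulse timing below are arbitrary placeholders, not measured values from the studies cited above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy simulation: an excitatory opsin raises a neuron's firing rate
# while light of the right wavelength is on.  All parameters are
# illustrative assumptions.
bin_ms = 1.0
baseline_hz, light_gain = 10.0, 4.0           # hypothetical rate and gain
n_bins = 2000                                 # 2 s at 1 ms bins
light_on = np.zeros(n_bins, dtype=bool)
light_on[500:1500] = True                     # 1 s light pulse

rate_hz = np.where(light_on, baseline_hz * light_gain, baseline_hz)
p_spike = rate_hz * bin_ms / 1000.0           # Poisson approximation per bin
spikes = rng.random(n_bins) < p_spike

rate_off = spikes[~light_on].mean() * 1000 / bin_ms
rate_on = spikes[light_on].mean() * 1000 / bin_ms
print(f"rate light off: {rate_off:.1f} Hz, light on: {rate_on:.1f} Hz")
```

The light-aligned increase in spiking seen in such a simulation mirrors, in cartoon form, the optically modulated recordings shown in Fig. 2e.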
In addition to reversibly perturbing brain activity to develop and validate computational models, REPAIR researchers have also demonstrated that direct brain stimulation can be used to substitute for missing sensory information. Dadarlat et al. demonstrated the use of intracortical electrical microstimulation (ICMS) of the non-human primate somatosensory cortex to convey the dynamic location of the animal’s arm in relation to an unseen target location (Dadarlat et al., 2013). Notably, the patterns of stimulation were arbitrary (i.e., they did not reflect the brain’s actual firing patterns) but were consistently paired with visual stimuli conveying the location of the target. Thus, the animal used the visual information to learn the “meaning” of the electrical stimulation patterns and was later able to reach successfully to targets using cortical stimulation as feedback when the information conveyed by the visual stimuli was degraded or removed entirely. As part of this effort, the researchers also developed a computational model of multisensory integration in a neural network (Makin et al., 2013). The model demonstrates that correlations between visual and ICMS inputs are sufficient for the network to learn the ICMS signal. After learning, the model predicts how behavioral performance will change depending on the saliency of sensory inputs. The model’s predictions were validated by the empirical results reported by Dadarlat et al., which demonstrated an increasing behavioral bias toward information conveyed by the somatosensory prosthesis as the information provided by the visual stimulus was degraded. Ongoing REPAIR efforts are investigating the use of computational models to derive optimal stimulation patterns and to determine whether sensory feedback can be provided to animals through targeted optogenetic stimulation (see Gilja et al., 2011 for discussion).
For example, preliminary results suggest that artificial tactile sensation in the digit area can be induced through optogenetic stimulation of the non-human primate somatosensory cortex (May et al., 2013). These efforts offer unique approaches to providing sensory feedback within the context of closed-loop BCI.
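The correlation-driven learning reported by Dadarlat et al. and modeled by Makin et al. can be sketched in miniature: if arbitrary but fixed stimulation patterns are paired with visual target positions, a simple linear readout can learn the code from the pairing and then decode position from stimulation alone. The channel count, noise level, and linear mapping below are hypothetical simplifications of the actual network model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Arbitrary (but fixed) stimulation patterns paired with visual target
# positions.  Because the two are correlated, a linear readout can
# learn the "meaning" of stimulation from the pairing.
n_trials, n_channels = 500, 16
positions = rng.uniform(-1, 1, size=(n_trials, 2))       # visual target (x, y)
mapping = rng.normal(size=(2, n_channels))               # fixed arbitrary code
stim = positions @ mapping + 0.2 * rng.normal(size=(n_trials, n_channels))

# "Learning phase": least-squares readout trained with the visual signal.
readout, *_ = np.linalg.lstsq(stim, positions, rcond=None)

# "Vision removed": decode held-out targets from stimulation alone.
test_pos = rng.uniform(-1, 1, size=(100, 2))
test_stim = test_pos @ mapping + 0.2 * rng.normal(size=(100, n_channels))
err = np.abs(test_stim @ readout - test_pos).mean()
print(f"mean decoding error: {err:.3f}")
```

The actual Makin et al. model is a neural network that also predicts how behavior shifts with the relative saliency of the visual and ICMS cues; the linear sketch above captures only the core point that pairing alone suffices to learn an arbitrary stimulation code.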