Eliza Strickland: Hi, I’m Eliza Strickland for IEEE Spectrum’s Fixing the Future podcast. Before we start, I want to tell you that you can get the latest coverage from some of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe. You’ve probably heard of Neuralink, the buzzy neurotech company founded by Elon Musk that wants to put brain implants in humans this year. But you might not have heard of another company, Synchron, that’s way ahead of Neuralink. The company has already put 10 of its innovative brain implants into humans during its clinical trials, and it’s pushing ahead to regulatory approval of a commercial system. Synchron’s implant is a type of brain-computer interface, or BCI, that can allow severely paralyzed people to control communication software and other computer programs with their thoughts alone. Tom Oxley is a practicing neurologist at Mount Sinai Hospital in New York City and the founder and CEO of Synchron. He joined us on Fixing the Future to tell us about the company’s technology and its progress. Tom, thank you so much for joining me on Fixing the Future today. So the enabling technology behind Synchron is something called the Stentrode. Can you explain to listeners how that works?

Tom Oxley: Yeah, so the concept of the Stentrode was that we can take an endovascular platform that’s been used in medicine for decades and build an electronics layer onto it. And I guess it addresses one of the challenges with implantable neurotechnology in the brain, which is that, firstly, it’s hard to get into the brain. And secondly, it’s hard to remain in the brain without having the brain launch a pretty sophisticated immune response at you. The blood-brain barrier is a real obstacle. But if you can stay on one side of that blood-brain barrier, then you do have a very predictable and contained immune response. That’s how tattoos work in the skin. The skin has an epithelial layer and the blood vessels have an endothelial layer, and they behave in much the same way. So if you can convince the endothelial layer of the blood vessel to receive a package, not worry about it, and just leave it be, then you’ve got a long-term solution for an electronics package that can use the natural highways to most regions within the brain.

Strickland: Right. So it’s called a Stentrode because it resembles a stent, right? It’s sort of like a mesh sleeve with electrodes embedded in it, and it’s inserted through the jugular. Is that correct?

Oxley: We actually called it a Stentrode because, in the early days, we were taking stents. Nick Opie and Gil Rind and Steve as well were taking these stents that we basically took out of the rubbish bin and cleaned, and then, by hand, we were weaving electrodes onto the stent. So we just needed a name for the devices that we were testing back in the early days. Stentrode was a really organic term that we just started using within the group. And I think then, in 2016, Wired ran a piece calling it one of the new words. So we’re like, “Okay, this word seems to be sticking.” Yeah, it goes in the jugular vein. In what we’re seeking to commercialize as the first product offering for our implantable BCI platform, we’re targeting a large blood vessel called the superior sagittal sinus. And yes, the entrance into the body is through the jugular vein to get there.

Strickland: Yeah, I’m curious about the early days. Can you tell me a little bit about how your team came up with this idea in the first place?

Oxley: The very early conceptualization of this was: I was going through medical school with my co-founder, Rahul Sharma, who’s a cardiologist. And he was very fixated on interventional cardiology, which is a very sexy field in medicine. And I was more obsessed with the brain. And it looked—and this was back around 2010—like intervention was going to become a thing in neurology. And it took until 2015 for a real breakthrough in neurointervention to emerge, which was for the treatment of stroke. And that was basically a stent going up into the brain to pull out a blood clot. But I was always less interested in the plumbing and more interested in how it could be that the electrical activity of the brain created not just health and disease but also wellness and consciousness. And that whole continuum of brain and mind was why I went into medicine in the first place. But the speed of technology growth in the interventional domain of medicine is incredible. Relative to other surgical domains, the interventional domain, now extending into robotics, is, I would say, the fastest-moving area in medicine. So I think I was excited about the technology in neurointervention, but it was the electrophysiology of the brain that was so enticing. And the brain has remained a black box for a long period of time.

When I started medicine, doing neurology was a joke to the other ambitious young medical people because, well, in neurology, you can diagnose everything, but you can’t treat anything. And now implantable neurotechnology is opening up access into the brain in a way that just wasn’t possible 10 or 15 years ago. So that was the early vision: can the blood vessels open up avenues to get to the brain to treat conditions that haven’t previously been treated? Then I was bouncing this idea around in my head, and I read about brain-computer interfaces, and I read about Leigh Hochberg and the BrainGate work. And I thought, “Oh, well, maybe that’s the first application of functional neurointervention, of electronics in neurointervention.” The early funding came from US defense, from DARPA, and we spent four or five years in Melbourne, Australia, with Nick Opie hand-building these devices and then doing sheep experiments to prove that we could record brain activity with a signal-to-noise ratio we felt would be sufficient to drive a brain-computer interface for motor control.

Strickland: Right. So with the Stentrode, you’re recording electrical signals from the brain through the blood vessels, so I guess that’s at some remove. And the BrainGate Consortium that you referenced before, they’re one of many groups that have been implanting electrodes inside the brain tissue, where you can get up close to the neurons. So it feels like you have a very different approach. Have you ever doubted it along the way? Did you ever feel like, “Oh my gosh, the entire BCI community is going in this other direction, and we’re going in this one”? Did it ever make you pause?

Oxley: I think clinical translation is very different to things that can be proven in an experimental setting. And so, yeah, there’s a data reduction that occurs if you stay on the surface of the brain, and particularly if you stay in a blood vessel that’s on the surface of the brain. But the things that are solved technically make clinical translation more of a reality. So the way I think about it is not, “How does this compete with systems that have proven things out in an experimental domain?” but rather, “What is required to achieve clinical translation and to solve a problem in a patient setting?” They’re different questions. One is getting obsessed with a technology race based upon technology metrics, and the other is asking, “What is the clinical unmet need, and what are particular ways that we can solve it?” And I’ll give an example of that, something that we’re learning now. Yeah, this first product is in a large blood vessel that only gives a constrained amount of access to the motor cortex. But there are reasons why we chose that.

We know it’s safe. We know it can live in there. We know we can get there. We know we have a procedure that can do that. We know we have lots of people in the country who can do that procedure. And we understand roughly what the safety profile is. And we know that we can deliver enough data to drive performance of the system. But what’s been interesting is that there are advantages to using population-level, LFP-type [local field potential] brain recordings: they’re more stable, they’re quite robust, they’re easy to detect, and they don’t need substantial training. And we have low power requirements, which means our power can go for a long time. And that really matters when you’re talking about helping people who are paralyzed or have motor impairment, because you want there to be as little troubleshooting as possible. It has to be as easy to use as possible. It has to work immediately. You can’t spend weeks or months training. You can’t be troubleshooting. You can’t be having to press anything. It should just be working all the time. So these things have only become obvious to us most recently.
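To make the low-power point concrete, here’s a rough back-of-envelope sketch in Python. Every number in it is an illustrative assumption, not a Synchron specification; the point is only that summarizing population-level band-power features on the implant involves orders of magnitude less data than streaming raw spike-band recordings, which is one reason a power budget can stay small.

```python
# Illustrative data-rate comparison (all numbers are assumptions,
# not Synchron specs): raw spike-band streaming vs. on-implant
# population-level band-power features.

CHANNELS = 16                      # hypothetical electrode count

# Raw spike-band recording: ~30 kHz sampling, 16-bit samples.
raw_bps = CHANNELS * 30_000 * 16   # bits per second

# Band-power features: one 16-bit value per channel, 10 updates/s.
feature_bps = CHANNELS * 10 * 16

print(f"raw stream:     {raw_bps / 1e6:.1f} Mbit/s")
print(f"feature stream: {feature_bps / 1e3:.2f} kbit/s")
print(f"reduction:      {raw_bps / feature_bps:.0f}x")
```

Under these assumed numbers, the feature stream is roughly 3,000 times smaller than the raw stream, which is the kind of margin that lets an implanted transmitter run for a long time.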

Strickland: So we’ve talked a little bit about hardware. I’m also curious about the software side of things: the part of your system that looks at the electrical signals and translates them into some kind of meaningful action. How has that evolved over the course of your research?

Oxley: Yeah. It’s been an awesome journey. I was visiting one of our patients just this week, watching him go through the experience of trying out different features and having him explain it to us. Not all of our patients can talk. He can still talk, but he’s lost control of his hands, so he can’t use his iPhone anymore. And we’re hearing what it feels like for him as we try out different levels of control, in this case with iPad use. And it’s interesting because it still feels very early, but this is not a science experiment. We’re trying to zero in and focus on features that we believe are going to work for everyone, be stable, and feel good in the use of the system. And you can’t really do that in the preclinical setting; you have to wait until you’re in the clinical setting to figure that out. So it’s been interesting: what do we build? We could build any number of different iterations of control features, but we have to focus on particular control interaction models that are useful for the patient, that feel good for the patient, and that we think can scale over a population. So it’s been a fascinating journey.

Strickland: Can you tell me a little bit about the people who have participated in your clinical trials so far and why they need this kind of assistive device?

Oxley: Yeah. So we’ve had a range of levels of disability. We’ve had people on one end who have been completely locked in, and that’s from a range of different conditions. Locked-in syndrome is where you may still have some residual cranial nerve function, like eye movements or maybe some facial movements, but you can’t move your upper or lower limbs, and often you can’t move your head. And then, on the other end of the spectrum, we’ve had some patients on the neurodegenerative side, with ALS in particular, where impaired limb function has limited their ability to use digital devices. So really, the way we’re thinking about the problem is: the technology is for people who can’t use their hands to control personal digital devices. And why that matters is because we’ve all become pretty dependent on digital devices for activities of daily living, and the things that matter from a clinically meaningful perspective are things like communication, texting, emailing, messaging, banking, shopping, healthcare access, environmental smart control, and then entertainment.

And so even people who can still speak can be affected. We’ve got someone in our study who can still speak and who can actually still walk, but he can’t use a digital device. And he’s been telling us about it. You’d think, “Oh, well, what about Siri? What about Alexa?” But you realize that if you really remove the ability to press any button, it becomes very challenging to engage even with the technology that already exists. Now, we still don’t know what the exact indication will be for our first application, but even in patients who can still talk, we’re finding that there are major gaps in their capacity to engage with digital devices, gaps that I believe BCI is going to solve. And it’s often very simple things. I’ll give you an example. If you try to answer the phone with Siri, you can’t put it on speakerphone. You can say, “Yes, Siri, answer the phone,” but then you can’t turn on the speakerphone. So there are little things like that, where you just need to hit a couple of buttons, that make the difference in giving you that engagement.

Strickland: I’d like to hear about what the process has been like for these volunteers. Can you tell me what the surgery was like, and then how, or whether, you had to calibrate the device to work with their particular brains?

Oxley: Yeah. So the surgery happens in the cath lab in a hospital. It’s the same place you would go to have a stent put in or a pacemaker. First, there are imaging studies to make sure that the brain is appropriate and that all the blood vessels leading up into the brain are appropriate. So we have our physicians identify a suitable patient and talk to the patient. And then, if they’re interested, they join the study. Then we do brain imaging, and the investigators make a determination that they can access that part of the brain. Then comes the procedure: you come in, and it takes a few hours. You lie down with an X-ray above you. We use X-ray and dye inside the blood vessels to navigate to the right spot, and we have a mechanism to make sure that you are in the exact spot you need to be. The Stentrode sort of opens up like a flower in that spot, and it’s got self-expanding capacity, so it stays put. Then the lead comes out of the skull through a natural blood vessel passage and gets plugged into an electronics package that sits on the chest under the skin. So the whole thing is fully implanted. The patients then rest for a day or so and go home. And in the setting of this clinical study, our field clinical engineers go out to the home two to three times per week, practicing with the system and with the new software versions that we keep releasing. And that’s how we’re building a product.

By the time we get to the next stage of the clinical trial, the software is getting more and more automated. From a learning perspective, we have a philosophy that if there’s a substantial learning curve for this patient population, that’s not good. It’s not good for the patient. It’s not good for the caregiver. These patients, who are living with severe paralysis or motor impairment, may not have the capacity to train for weeks to months. So it needs to work straight away. And ideally, you don’t want it to be recalibrated every day. We’re going to publish all of this, but we’ve been working and designing toward having the system working on day one, as soon as it’s turned on, with a level of functionality that immediately lets the user perform some of the critical activities of daily living, the tasks I mentioned earlier. And then I think the vision is that we build a training program within the system that lets users build up to increasing levels of capability, but we’re much more focused on the lowest level of function that everyone can achieve, and on making it easy to do.

Strickland: For it to work right out of the box, how do you make that work? Are one person’s brain signals pretty much the same as another person’s?

Oxley: Yeah, so Peter Yoo is our superstar head of algorithms and neuroscience. He has pulled together this incredible team of neuroscientists and engineers; I think the team is about 10 people now. And these guys have been working around the clock over the last 12 months to build an automated decoder. We’ve been talking about this internally as one of our biggest breakthroughs. We’ll publish it when the time is right, but we’re really excited about this. We feel like we have built a decoder that does not need to be tuned individually at all and will just work out of the box, based upon what we’ve learned so far. And we expect that kind of design ethos to continue over time. That’s going to be a critical part of the focus on making the system easy to use for our patients.
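Oxley doesn’t describe how Synchron’s automated decoder works, but one generic way to get out-of-the-box behavior is to train a single classifier on pooled, normalized data from prior participants, so a new user needs no individual calibration. Here is a minimal sketch of that idea in Python, using entirely synthetic data and a hypothetical 16-feature input; it illustrates the concept, not Synchron’s actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Pooled training data from prior participants (synthetic here):
# 16 hypothetical band-power features per sample, labeled
# 0 = rest, 1 = attempted movement.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))
y = (X[:, :4].mean(axis=1) > 0).astype(int)  # synthetic labels for the demo

# Standardizing each feature absorbs inter-subject amplitude differences,
# so the one fitted model can be applied to a new user without retraining.
decoder = make_pipeline(StandardScaler(), LogisticRegression())
decoder.fit(X, y)

# A brand-new user's features go straight through the same pipeline.
new_user = rng.normal(size=(1, 16))
print(decoder.predict(new_user))  # [0] = rest, [1] = intended action
```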

Strickland: When a user wants to click on something, what do they do? What’s the mental process that they go through?

Oxley: Yeah. So I’ve talked about the fact that we detect population-level activation of motor cortical neurons. So what does your motor cortex do? Your motor cortex is about 10% of your brain. You were born with it, and it was connected to all of the muscles in your body. You learned how to walk. You learned how to run. My daughter just learned how to jump. She’s two and a little bit. So you spend those early years of your life training your brain on how to utilize the motor cortex, but it’s connected to those physically tethered parts of your body. Now, one approach in BCI, the multi-unit decoding approach, says, “Let’s train the neurons to do a certain task,” often training them to work within certain trajectories. The way we think about it is, “Let’s not train it to do anything. Let’s activate the motor cortex in the way that the brain already knows how to activate it, in really robust, stable ways, at a population level.” So probably tens of thousands of neurons, maybe hundreds of thousands of neurons. And how would you do that? Well, you would make the brain think about what it used to think about to make the body move. People who have had injury or disease have already lived a life in which they thought about pressing down their foot on the brake pedal of a car, or kicking a ball, or squeezing their fist. We identify robust, strong motor intention contemplations, which we know are going to activate broad populations of neurons robustly.
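As an illustration of that population-level approach: a common signature of attempted movement in field-potential recordings is a change in band power over motor cortex, for example beta-band suppression (event-related desynchronization). The Python sketch below flags a “click” when beta power drops well below a resting baseline. All sampling rates, bands, and thresholds here are assumptions for illustration, not details of Synchron’s decoder.

```python
import numpy as np

FS = 500             # assumed sampling rate (Hz)
BAND = (12.0, 30.0)  # beta band; attempted movement suppresses its power

def band_power(window, fs=FS, band=BAND):
    """Mean spectral power of one signal window within a frequency band."""
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2 / window.size
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def detect_click(window, baseline, ratio=0.5):
    """Flag a 'click' when beta power falls well below the resting baseline,
    a signature of attempted movement."""
    return band_power(window) < ratio * baseline

# Calibrate the baseline from resting data, then screen live windows.
rng = np.random.default_rng(0)
rest = rng.normal(size=(10, FS))           # ten 1-second rest windows
baseline = np.mean([band_power(w) for w in rest])
live = rng.normal(size=FS) * 0.5           # a suppressed (quieter) window
print(detect_click(live, baseline))        # True when intent is detected
```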

Strickland: And so that gives them the ability to click, and I think there’s also something else they can do to scroll. Is that right?

Oxley: Yeah. So right now, we’re not yet at the point where we’ve got a cursor moving around the screen, but we have a range of controls: multi-select, scroll, click, click and hold, and some other things coming down the pipeline which are pretty cool, but enough for the user to navigate their way around a screen, like an iOS screen, and make selections on it. And the way we’re thinking about converting that into a clinical metric: David Putrino at Mount Sinai has recently published a paper on what he’s called the digital motor output, or DMO. The conversion of that population-level neural activity into these characterized outputs, we’re calling a DMO. The way I think about a DMO is that it’s your ability to select a desired item on a screen with reasonable accuracy and latency. And so the way we’re thinking about this is: how well can you make selections in a way that’s clinically meaningful and that serves the completion of those tasks that you couldn’t do before?
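The paper’s exact definition isn’t given here, but a metric of the shape Oxley describes (selection accuracy plus latency over a block of prompted selections) is straightforward to compute. A minimal sketch in Python, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class SelectionTrial:
    target: str       # item the user was asked to select
    selected: str     # item actually selected
    latency_s: float  # seconds from prompt to selection

def dmo_summary(trials):
    """Selection accuracy and median latency over a block of trials."""
    correct = [t for t in trials if t.selected == t.target]
    accuracy = len(correct) / len(trials)
    latencies = sorted(t.latency_s for t in trials)
    mid = len(latencies) // 2
    median = (latencies[mid] if len(latencies) % 2
              else 0.5 * (latencies[mid - 1] + latencies[mid]))
    return accuracy, median

trials = [SelectionTrial("yes", "yes", 1.8),
          SelectionTrial("no", "yes", 2.4),
          SelectionTrial("call", "call", 1.5)]
print(dmo_summary(trials))  # (0.666..., 1.8)
```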

Strickland: Are you aiming for eventually being able to control a cursor as it goes around the screen? Is that on the roadmap?

Oxley: That is on the roadmap. That’s where we are headed. Ultimately, we have to prove that it’s possible from inside a blood vessel. But I’m excited, because there’s a history in medicine of minimally invasive solutions that don’t require open surgery tending to be the preferred choice of patients. So we’ve started this journey in a big blood vessel with a certain amount of access, and we’ve got a lot of other exciting areas that we’re going to go into that give us more and more access to the brain, and we just want to do it in a stepwise and safe fashion. But yeah, we are very excited that that’s the trajectory we’re on, and we feel we’ve got a safe starting point and a stepwise path forward.

Strickland: I think we’re just about out of time, so maybe just one last question. Where are you on the path towards FDA approval? What do you anticipate happening as next steps there?

Oxley: So we’ve just finished enrollment of our 10th patient across our feasibility studies: we had four patients in our first Australian study, and now six patients in an early feasibility study. That study will continue to run formally for another, I believe, six months or so, and we’ll be collecting all that data. We’re having very healthy conversations with the FDA, with Heather Dean’s group at the FDA, and we’ll be discussing what the FDA needs to see to demonstrate both safety and efficacy toward marketing approval of what we hope will be the first commercial implantable BCI system. But we’ve still got a way to go. There’s a very healthy conversation happening right now about how to think about the outcomes that are meaningful for patients. So I would say over the next few years, we’re moving our way through the stages of clinical studies. And hopefully, we’ll be opening up more and more sites across the country, and maybe globally, to enroll more people, and hopefully make a difference in the lives of people with this condition, which really doesn’t have any treatment right now.

Strickland: Well, Tom, thank you so much for joining me. I really appreciate your time.

Oxley: Thank you so much, Eliza.

Strickland: That was Tom Oxley speaking to me about his company, Synchron, and its innovative brain-computer interface. If you want to learn more, we ran an article about Synchron in IEEE Spectrum’s January issue, and we’ve linked to it in the show notes. I’m Eliza Strickland, and I hope you’ll join us next time on Fixing the Future.