COLUMNS

Speaking One’s Mind

A story worth sharing.

March 2024


Illustration: Brian Stauffer


We Stanford editors are usually good at sharing—handing off a story to a colleague to rebalance our workloads, even after we’ve developed the concept and assigned it to a writer. So when senior editor Jill Patton, ’03, MA ’04, generously offered to edit our feature on brain-computer interfaces, I surprised myself:

“No,” I said immediately. “It’s about restoring communication.”

I have a 19-year-old son with nonstandard speech. Which means I know in my soul that current speech-generating devices—which users operate via switch, touch screen, or eye tracking—are better than nothing but ultimately inadequate to the task. They’re a vital outlet for those who have no other path to speech, but they’re slow and cumbersome. People like my son, who can communicate orally if listeners hold up their end of the bargain, have been known to resist them. (Since preschool. But I digress.)

Which means I’ve long been interested in the potential to more directly tap the brain to decode and amplify what people have to say. So have professor of neurosurgery Jaimie Henderson and the late professor of electrical engineering Krishna Shenoy. Like me, they had family experiences that imbued them with an understanding of the profound importance of speech. Unlike me, they were in a position to do something about it. The Neural Prosthetics Translational Laboratory, which they co-founded, creates and tests brain-computer interfaces (BCIs). Henderson implants sensors the size of baby aspirins into the brains of research participants, and Shenoy devised systems to decode the neural signals those sensors record. Through their efforts and those of their collaborators at Stanford and elsewhere, speech BCIs may soon become the first type available to patients, ahead of those that move robotic limbs or operate smartphones.


In our story, you’ll meet two of the community members whose participation in studies has proved invaluable to the scientists. One, Dennis DeGray, has helped advance the research from cursor movement to imagined hunt-and-peck typing to creating a mental version of a handwritten alphabet. (Lately, he also gets to fly a drone with his mind.) “I like to think of it like we’re developing the alphabet that other people will use to write books,” he says. The other, Pat Bennett, recently demonstrated that a BCI could decode her speech at more than 60 words a minute. “So many years of not being able to communicate and then suddenly the people in the room got what I said,” she recalled in an email interview. “I don’t remember what I exactly said after the prescribed script finished, but it had to be along the lines of ‘Holy shit, it worked, I’m so happy, and you guys did it.’ ”

I may not have been willing to share this story with my colleague. But I’m thrilled to share it with you.


Kathy Zonana, ’93, JD ’96, is the editor of Stanford. Email her at kathyz@stanford.edu.

