Neurotechnology May Be Beneficial to Patients With Aphasia

A chronic condition like aphasia may one day be treatable thanks to neurotechnology solutions developed by companies like BIOS Health. Widespread adoption, however, is still a long way off.

Since the news broke yesterday that Bruce Willis has been diagnosed with aphasia, his fans are suddenly far more aware of this communication disorder.

The news also hits close to home for me. Since my mother’s major stroke in February, my family has been coming to terms with aphasia and searching for ways to cope with it. It is deeply upsetting to watch someone who was once active in a wide range of social and community networks suddenly become unable to communicate her wants, needs, and desires.

According to the National Aphasia Association (NAA), aphasia affects two million people in the United States. It is caused by a brain injury, such as a stroke or a blow to the head; in my mother’s case, the cause was a sudden stroke. How does it affect a person? The NAA explains that people with aphasia “may have difficulty producing words, but their intelligence is intact; their ideas, thoughts, and knowledge are still in their heads — it’s just communicating those ideas, thoughts, and knowledge that is disrupted.”

Since the day we first met the rehabilitation staff at the hospital, I have let my imagination run wild about what technologies might help my mother settle into whatever the new normal might be and regain a quality of life in which she can engage with people. I hope that one day she will accomplish both.

As I dug deeper, two images came to mind: Stephen Hawking using his speech synthesis system, and Elon Musk’s Neuralink brain implant. I also remembered speaking with BIOS Health last year about reading brain signals, processing them, and writing back to the brain to act on that information; in their case, to aid the treatment of chronic disease.

I looked into each of these possibilities. ACAT, the Assistive Context-Aware Toolkit, is an open-source toolkit developed by Intel Labs that enabled Stephen Hawking to keep communicating after he lost the ability to speak. It integrates the intelligent predictive text engine Presage via Windows Communication Foundation, letting users communicate through keyboard simulation, word prediction, and speech synthesis.

From what I have learned, this approach requires some sort of switch, such as a blink sensor, to select options and generate the desired output; a rough sketch of the idea follows.
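To make that concrete, here is a minimal sketch of single-switch “scanning” selection combined with word prediction. Everything in it is hypothetical and greatly simplified: it is not ACAT’s actual API, and the toy predictor merely stands in for a real engine like Presage.

```python
import itertools
import time

# Toy vocabulary; a real system would query a predictive engine like Presage.
VOCAB = ["hello", "help", "home", "how", "hungry", "water", "walk"]

def predict(prefix, vocab=VOCAB, k=3):
    """Return up to k candidate words matching the typed prefix."""
    return [w for w in vocab if w.startswith(prefix)][:k]

def scan_and_select(options, is_switch_pressed, dwell=1.0):
    """Highlight each option in turn; a single switch (e.g. a blink
    sensor) fires to pick whichever option is currently highlighted."""
    for option in itertools.cycle(options):
        print(f"highlighting: {option}")
        time.sleep(dwell)            # give the user time to react
        if is_switch_pressed():      # blink, cheek twitch, button press...
            return option

# Hypothetical usage: after the user types "h", scan over the top
# predictions and pass the chosen word to a speech synthesizer.
# word = scan_and_select(predict("h"), is_switch_pressed=blink_sensor.fired)
```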

Brain-computer interfaces and neurotechnology more broadly have advanced significantly in recent years. Neuralink’s implant, for example, is a neural prosthesis that connects to neurons in the brain, records their activity, and processes those signals in real time. The signals are then decoded to determine what the user intends, and that information is transmitted via Bluetooth to a computer to provide useful output or control external devices.
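As a rough illustration of that record, decode, and transmit loop, here is a deliberately simplified sketch. None of these names come from Neuralink, and the threshold “decoder” is only a stand-in for the trained models such systems actually use.

```python
from dataclasses import dataclass

@dataclass
class SpikePacket:
    channel: int       # electrode the activity was recorded on
    rate_hz: float     # smoothed firing rate for that channel

def decode_intent(packets):
    """Stand-in decoder: map aggregate firing activity to an intent.
    Real decoders are trained models, not a fixed threshold."""
    total = sum(p.rate_hz for p in packets)
    return "select" if total > 100.0 else "idle"

def pipeline_step(read_packets, send_over_bluetooth):
    """One pass of the loop: read neural activity, decode it, transmit."""
    packets = read_packets()                 # recorded by the implant
    intent = decode_intent(packets)          # real-time signal processing
    send_over_bluetooth({"intent": intent})  # to a phone or computer
```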

Reading brain signals appears to be an essential part of addressing aphasia. BIOS Health is developing ways to treat chronic conditions by reading and writing electrical signals to the body’s neural network, using artificial intelligence and machine learning as its primary tools to translate the nervous system’s “language.”
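As a toy illustration of that “translation” idea, the sketch below maps made-up features extracted from nerve recordings to a physiological state label. The features, labels, and model choice are all my own invention for illustration; BIOS Health’s actual models are not public.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical features per recording window, e.g.
# [mean firing rate (Hz), mean burst duration (s)].
X_train = [[12.0, 0.3], [48.0, 1.1], [10.5, 0.2], [51.0, 1.4]]
y_train = ["baseline", "inflammation", "baseline", "inflammation"]

# "Learn the language": fit a classifier from signal features to state.
model = LogisticRegression().fit(X_train, y_train)

# Translate a new window of nerve activity into a state label.
print(model.predict([[47.0, 1.0]]))  # -> ['inflammation']
```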

I thought this approach was fantastic, but I wondered whether these techniques could also help interpret brain-generated speech signals. So I reached out to Emil Hewage, co-founder and CEO of BIOS Health, and asked him. He explained very clearly that we are not yet in a position to let patients access and decode neural signals in that way.

In a broader sense, Hewage stated, “Over time, what we’ll start to recognize is that the most valuable part of the emerging neurotechnology landscape is the products and applications that translate the data from whatever biological information there is to that end application.” 

He said the solution is to create an information translation tool, a kind of language translator. “That’s the value-creating aspect that’s at the heart of all of these products,” he continued. “That is the component that must be improved and made more accessible. We are working hard to increase the number of seamless and beneficial innovations on the market that improve health and quality of life.”

“So, for example, we want to make sure the models that translate what works today are available,” he said. “Then, as new implantables hit the market, we want that higher resolution of data to lead to higher-quality experiences. If you can gather more information, the patient will be able to talk for a longer period. And when we can interface with the other side of biology, the muscular side, you will be able to transition from a machine voice to a human one.”

According to him, BIOS Health’s primary goal is to improve its capabilities as that reading and writing layer: “Take whatever you can afford for reading today, run it through the best models we have, then deploy it through the best forms of writing back.”

He was referring to something called the neurotechnology stack framework.

In addition to a computation layer, this framework includes reading and writing tools. The reading tools collect data from neural signals and feed it into the computation layer, which processes the data, performs analysis, and controls the stimulation outputs. The writing tools then use these outputs to produce the desired effect on the nervous system or neural state. The fundamental stack is designed so that software developers can build higher-level applications on top of it for specific use cases and requirements, as sketched below.
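Here is a minimal sketch of that layering under my own assumptions: reading tools feed a computation layer, which drives writing tools, with application logic plugging in on top. The interfaces are hypothetical, not BIOS Health’s actual SDK.

```python
from typing import Callable, Protocol

class Reader(Protocol):
    def read(self) -> list[float]: ...           # raw neural signal samples

class Writer(Protocol):
    def write(self, command: dict) -> None: ...  # stimulation command

class ComputationLayer:
    """Middle of the stack: processes neural data and decides what the
    writing tools should do. An application customizes `policy`."""
    def __init__(self, reader: Reader, writer: Writer,
                 policy: Callable[[list[float]], dict]):
        self.reader = reader
        self.writer = writer
        self.policy = policy

    def step(self) -> None:
        samples = self.reader.read()      # reading layer
        command = self.policy(samples)    # analysis / application logic
        self.writer.write(command)        # writing layer

# A trivial application-level policy: stimulate when activity is high.
# high_activity = lambda s: {"stimulate": sum(s) / len(s) > 0.5}
```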

Through our conversation, I gained a better understanding of the role neurotechnology could one day play in treating conditions like aphasia. That day, however, is still far off. As for Bruce Willis, no specific information has been released about the cause of his aphasia.

For now, for both him and my mother, we will have to rely on existing augmentative and alternative communication tools and the various apps developed to help patients learn to speak again, along with the true professionals: speech and language therapists. I am open to new ideas and suggestions from our audience.
