Facial ‘decoder’

A conformable, piezoelectric patch for spatiotemporal decoding of facial strains


It was a beautiful, warm April night when I met Dr. Stephen Hawking at the Harvard Society of Fellows, where I was a Junior Fellow just prior to joining the MIT Media Lab as a faculty member. Harvard Society dinners are known for their lively, scintillating conversations amongst some of the most brilliant scholars and interesting minds; a place where conversations carry on, hours into the night. During dinner, Dr. Hawking exuded such a warm and patient presence, with so much to tell and share, yet I sensed his struggle: it was taking too long for him to compose a sentence via his computer system. That night, while sitting by his side, I made up my mind to tackle his struggle by designing and developing a conformable interface that would allow him, and those like him, to compose messages seamlessly, and thus carry on the conversation.

Canan Dagdeviren with the late Stephen Hawking at a Harvard Society dinner in Cambridge, MA; April 25th, 2016

Imagine a world in which a simple smile or a twitch is all that it takes to compose a personalized message or send an email. That world is here in my research group, Conformable Decoders, at the MIT Media Lab.

Many neuromuscular disorders, such as amyotrophic lateral sclerosis (ALS), often manifest themselves through physiological changes, including gradual loss of the ability to exercise fine motor skills and to vocalize intelligible speech. Reliable methods for continuous tracking of dynamic skin strain on the face can therefore enable new forms of communication for individuals with such disorders. Present methods for in vivo characterization of facial deformations involve electromyography (EMG), skin impedance measurements, or camera tracking. Yet these typically suffer from high uncertainties or slow response times, or rely on bulky structures with highly visible interfaces to soft skin, making continuous use in daily life difficult, especially for individuals with neuromuscular disorders. The aim of our report just published in Nature Biomedical Engineering is to realize conformable sensors and systems that can translate patterns of facial soft tissue biomechanics in vivo into interpretable electrical signals, enabling new forms of non-verbal communication. The concepts, materials, system design, and characterization methods introduced in this project can offer new routes for rapid, in vivo biokinematic assessment of epidermal surfaces during dynamic movements. Such systems can support continuous clinical monitoring of a wide range of neuromuscular conditions, in which variations are anticipated either from (i) time-dependent alterations in muscle movements, and thus in measurable epidermal deformations, as neurodegeneration progresses, or from (ii) responses over the course of medical therapy.

Precise measurements of soft tissue biokinematics, such as skin strain during facial deformations, can be used to computationally recognize distinct facial motions, and thus facilitate nonverbal communication for patients who lack the ability to speak or to interact with traditional electronic communication interfaces. However, existing nonverbal communication systems typically impose a heavy computational load, or are bulky and unsuitable for use on curvilinear regions of the body, such as the face. A widely deployable system for real-time detection of facial motions, when combined with low-cost materials, easily manufacturable processes, and a seamless pipeline for fabrication, testing, and validation, offers unprecedented potential for clinically realizable nonverbal communication technologies. The primary goal of our research is to introduce a set of materials, device designs, fabrication steps, theoretical calculations, simulations, and validation protocols that realize robust, mechanically adaptive, predictable, and visually invisible in vivo monitoring of spatiotemporal epidermal strains, and decoding of distinct facial deformation signatures, through the use of conformable devices composed of piezoelectric thin films on compliant substrates.
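To give a flavor of what "decoding distinct facial deformation signatures" can mean computationally, here is a minimal, hypothetical sketch of template matching on strain-sensor waveforms: repeated trials of each facial motion are averaged into a template, and an incoming sensor window is assigned the label of the closest template. The motion labels, waveform shapes, and nearest-template approach are illustrative assumptions for this post, not the published cFaCES pipeline.

```python
import numpy as np

np.random.seed(0)  # deterministic toy data

def normalize(signal):
    """Zero-mean, unit-variance normalization, so amplitude drift
    across wearing sessions does not dominate the match."""
    signal = np.asarray(signal, dtype=float)
    return (signal - signal.mean()) / (signal.std() + 1e-12)

def build_templates(labeled_trials):
    """Average repeated, normalized trials of each facial motion
    into a single per-motion template waveform."""
    return {
        label: np.mean([normalize(t) for t in trials], axis=0)
        for label, trials in labeled_trials.items()
    }

def decode(window, templates):
    """Return the motion label whose template is closest
    (in Euclidean distance) to the normalized sensor window."""
    w = normalize(window)
    return min(templates, key=lambda label: np.linalg.norm(w - templates[label]))

# Toy usage: synthetic waveforms stand in for piezoelectric voltage traces.
t = np.linspace(0, 1, 100)
trials = {
    "smile": [np.sin(2 * np.pi * t) + 0.05 * np.random.randn(100) for _ in range(5)],
    "twitch": [np.exp(-((t - 0.5) ** 2) / 0.01) + 0.05 * np.random.randn(100) for _ in range(5)],
}
templates = build_templates(trials)
print(decode(np.sin(2 * np.pi * t), templates))  # expected: smile
```

In practice, a wearable decoder would also need windowing of the continuous sensor stream and robustness to sensor placement, but the core idea of matching measured strain patterns to learned motion signatures is the same.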

With this article, I’d like to thank Dr. Robert Brown of the University of Massachusetts Medical School for his help in recruiting ALS patients, and for the fruitful discussion on cFaCES application on the ALS subjects. I also sincerely thank our ALS friends, Phyl Gerber and Dennis Ceruti, who became an important part of our reported work and helped us tremendously with human trials, as well as their families for their generous help and profound dedication during the ALS patient trials.

Canan Dagdeviren

Assistant Professor, MIT Media Lab
