Do you understand him or her? Does he or she understand you? Are you often troubled by how hard it is to empathize with others, or to be understood yourself? Human emotions are complex, changeable psychological states, often hidden behind "masks" and difficult for others to read accurately. Non-verbal signals such as facial expressions and vocal intonation provide some clues, but many challenges still make emotional communication between humans ambiguous.

With the continuous advancement of technology and the growing demand for intelligent systems, emotion recognition, as an important part of human-computer interaction, is receiving more and more attention. Traditional emotion recognition is mainly based on analyzing non-verbal cues such as facial expressions and voice intonation, but these methods are often limited by environmental conditions and individual differences, making accurate and efficient recognition difficult. Because emotions, moods, and feelings are abstract and ambiguous, understanding and accurately extracting emotional information has long been a hard problem. Finding a new way to achieve accurate emotion recognition across environments has therefore become a focus of current research.

Recently, a research team from the Ulsan National Institute of Science and Technology (UNIST) and Nanyang Technological University proposed what they describe as the first real-time wearable human emotion recognition technology, developing a multimodal system that combines verbal and non-verbal expression data to make full use of the available emotional information.
The related paper, titled "Encoding of multi-modal emotional information via personalized skin-integrated wireless facial interface", has been published in Nature Communications.

The core of the work is the personalized skin-integrated facial interface (PSiFI) system. Based on the self-powered triboelectric principle, its sensor units can simultaneously sense the signals generated by facial and vocal expressions, enabling accurate recognition of human emotions.

"With this system, real-time emotion recognition can be achieved with just a few learning steps and without complicated measurement equipment," said Jin Pyo Lee, the paper's first author. "This opens up the possibility of future portable emotion recognition devices and next-generation emotion-based digital platform services."

The research team also applied the system in VR environments (smart homes, private theaters, smart offices, etc.) to provide personalized music, movie, and book recommendations by recognizing an individual's emotions in different situations.

Accurately recognizing facial expressions and voices

PSiFI is a facial and vocal expression sensing interface based on physiological signals. It uses flexible, transparent, processable materials such as PDMS and PEDOT:PSS to build sensor units that capture facial and vocal signals and monitor emotional states. PDMS offers excellent softness and transparency, while the PEDOT:PSS conductive layer provides good electrical conductivity; both are key materials of the PSiFI system, and their processability and tunability make the device straightforward to fabricate.

Figure | PSiFI system overview. A. PSiFI comprises triboelectric sensors (TES), data-processing circuits for wireless communication, and deep learning classifiers for facial-expression and speech recognition. B. 2D layout of the wearable PSiFI mask, depicting the two types of TES and their sensory stimuli, facial strain and sound vibration. C. Schematic of a TES consisting of a simple two-layer structure (an electrode layer and a dielectric layer), with photos of the TES components. Scale bar: 1 cm. D. Schematic of TES fabrication. PEDOT:PSS-based electrodes are made by a semi-curing process (left). The dielectric layers are designed differently for strain and vibration sensing to achieve the best performance: the middle inset is an SEM image of the nanostructured surface of the strain-type dielectric layer, and the right inset shows the punched holes that serve as acoustic holes in the vibration-type dielectric layer. Scale bars: 2 μm and 1 mm.

The PSiFI sensor consists of a PDMS substrate, a PEDOT:PSS conductive layer, and acoustic holes. PDMS provides softness and comfort, the PEDOT:PSS layer transmits signals, and the acoustic holes enhance the sensor's sensitivity and performance. The PDMS base and curing agent are mixed in a fixed ratio and coated onto a substrate to form the desired thin film after vacuum degassing; the conductive layer is formed by mixing a PEDOT:PSS aqueous dispersion with additives, then coating and curing it on the substrate.

Figure | Working mechanism and characterization of the strain sensing unit. A. Schematic of the strain sensing unit (inset: magnified view of the unit detecting facial strain). B. Potential distribution of the strain sensing unit in flexion and extension.

Figure | Photograph of the experimental setup used for output measurement. Scale bar: 1 cm.
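The deep learning classifiers mentioned above fuse facial-strain and vocal-vibration signals into one emotion prediction. As a purely illustrative sketch (not the paper's model), the snippet below trains a minimal late-fusion logistic-regression classifier on synthetic strain and vibration features; all feature dimensions and data are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_samples(n, offset):
    # 4 hypothetical strain features + 4 vibration features per sample,
    # concatenated (late fusion); drawn from a synthetic distribution
    return rng.normal(loc=offset, scale=1.0, size=(n, 8))

# Two synthetic "emotion" classes separated by their feature means
X = np.vstack([make_samples(50, -1.0), make_samples(50, +1.0)])
y = np.array([0] * 50 + [1] * 50)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient-descent training of weights w and bias b
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

The real system replaces this toy classifier with a trained deep network, but the fusion idea is the same: features from both sensing modalities are combined before classification, which is what lets complementary verbal and non-verbal cues reinforce each other.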
The PDMS and PEDOT:PSS layers are combined into a sensor unit through specific coating and curing steps, and the acoustic holes are cut into the PDMS substrate by laser cutting to enhance the sensor's sensitivity and responsiveness.

Figure | Working mechanism and characterization of the vibration sensing unit. A. Schematic of the vibration sensing unit (inset: magnified view of the unit detecting vocal-cord vibration). B. Potential distribution of the sensing unit during a working cycle. C. Schematic of the hole-pattern configurations of the sound vibration sensor and an SEM image of the holes in the 32-hole configuration. Scale bar: 2 mm (inset: magnified view of the sound holes; scale bar: 400 μm).

After building the PSiFI system, the research team ran a series of tests to verify its performance. First, they measured the transparency of the device with a spectrometer: the PSiFI system transmits light well across the visible range, so users can wear it comfortably without their view being blocked. They then performed stretching and bending tests, which showed that the system maintains its structural integrity and performance even under deformation, demonstrating excellent flexibility.

The team then conducted emotion recognition experiments, using the PSiFI sensors to collect facial-movement signals and processing them with learning algorithms. By monitoring tiny changes in facial muscle movement, the system could accurately capture and record the facial-expression characteristics of different emotional states.
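Before any classifier can use them, the raw strain-sensor voltage traces have to be reduced to features. The paper does not publish its preprocessing code, so the following is a hypothetical sketch on a synthetic pulse: the Gaussian "muscle contraction", sampling rate, and chosen features (peak, RMS, half-max width) are all assumptions for illustration:

```python
import numpy as np

fs = 1000  # Hz, assumed sampling rate
t = np.arange(0, 1.0, 1 / fs)

# Synthetic strain pulse: a smile-like muscle contraction plus sensor noise
noise = 0.02 * np.random.default_rng(1).normal(size=t.size)
signal = np.exp(-((t - 0.5) ** 2) / 0.005) + noise

peak = float(np.max(np.abs(signal)))               # peak amplitude
rms = float(np.sqrt(np.mean(signal ** 2)))         # overall signal energy
# Duration: time the signal spends above half of its peak value
width_s = float(np.sum(np.abs(signal) > 0.5 * peak) / fs)

features = {"peak": peak, "rms": rms, "half_max_width_s": width_s}
print(features)
```

Features like these summarize how strong and how long a facial movement was, which is the kind of compact description a downstream emotion classifier can learn from.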
The team also used PSiFI to collect sound signals and performed emotion recognition through acoustic analysis. The results showed that the system can capture and analyze the frequency, amplitude, and other characteristics of sound, and thereby identify the emotional information carried in speech.

In the future, machines may understand you better than humans do

The research team said that PSiFI technology can be applied broadly to emotion recognition. By monitoring facial and voice signals, it can identify an individual's emotional state and provide emotional-health monitoring and diagnosis for smart medical systems. In human-computer interaction, PSiFI can be integrated with smart devices to enable natural interaction; in a virtual reality environment, for example, it can sense the user's emotional state and deliver a more intelligent, personalized experience. In robotics, it could give emotionally intelligent robots the ability to recognize and understand human emotional expression and so communicate with people more naturally.

That said, the technology is not yet perfect. First, the sensors must be highly sensitive to capture weak facial and vocal signals accurately, and their sensitivity still needs improvement. This can be pursued by optimizing the sensor's structure and material choices to improve sensitivity and stability, making it better suited to emotion recognition across situations and environments.
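The frequency-and-amplitude analysis of voice signals described above can be sketched with an FFT. This is not the paper's pipeline; the 220 Hz tone standing in for a vocal-cord fundamental, the sampling rate, and the harmonic are all synthetic assumptions:

```python
import numpy as np

fs = 8000  # Hz, assumed sampling rate
t = np.arange(0, 0.5, 1 / fs)
# Synthetic "voice": a 220 Hz fundamental plus a weaker 440 Hz harmonic
voice = 0.8 * np.sin(2 * np.pi * 220 * t) + 0.1 * np.sin(2 * np.pi * 440 * t)

spectrum = np.fft.rfft(voice)
freqs = np.fft.rfftfreq(voice.size, d=1 / fs)

dominant = freqs[np.argmax(np.abs(spectrum))]        # strongest frequency
amplitude = 2 * np.max(np.abs(spectrum)) / voice.size  # its amplitude

print(f"dominant frequency: {dominant:.0f} Hz, amplitude: {amplitude:.2f}")
# → dominant frequency: 220 Hz, amplitude: 0.80
```

Shifts in pitch (the dominant frequency) and loudness (amplitude) are among the simplest acoustic correlates of emotional arousal, which is why even a basic spectral analysis like this recovers useful emotional cues from speech.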
In addition, PSiFI must process large volumes of physiological signal data and perform complex analysis, which challenges data-processing capabilities and calls for more efficient, intelligent algorithms and platforms. More efficient and accurate data-processing algorithms could improve the system's ability to analyze and identify physiological signals, and thus the accuracy and reliability of emotion recognition. The technology also combines expertise from biomedicine, engineering, and psychology, so more interdisciplinary collaboration will be needed to advance PSiFI and expand its applications in smart healthcare, human-computer interaction, and beyond.

In recent years, emotion recognition has received extensive attention in human-computer interaction, healthcare, and other fields. Beyond PSiFI, related research has also made important progress, for example facial-expression recognition based on image processing and deep learning, and speech emotion recognition based on sound-signal processing and pattern recognition. These results provide a rich set of technical tools for the field. As innovative technologies such as PSiFI continue to emerge and application scenarios expand, people may soon interact with smart devices and systems more naturally, and health management may become more personalized and precise. By then, perhaps machines really will understand you better than humans do.

Reference links:
https://www.nature.com/articles/s41467-023-44673-2
https://www.mdpi.com/1424-8220/23/5/2455