FDACU: Facial Acupoint Localization Demo by Fudan University
Powered by: p5.js · ONNX Runtime · MediaPipe Face Mesh
This demo showcases our facial acupoint recognition model, which combines real-time face landmark detection with neural regression. It is built with p5.js for visualization, MediaPipe Face Mesh for extracting 478 facial landmarks, and ONNX Runtime for running the FDACU model, which predicts 13 clinically significant facial acupoints.
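Conceptually, each frame is processed in two steps: MediaPipe Face Mesh yields 478 (x, y, z) landmarks, which are flattened into a tensor and passed through ONNX Runtime Web to the FDACU model, which regresses the 13 acupoint coordinates. The sketch below illustrates that hand-off; the model file name, tensor names, and shapes are illustrative assumptions, not the actual FDACU interface.

```javascript
// Sketch of the per-frame landmark -> acupoint hand-off.
// Assumes onnxruntime-web is loaded (e.g. via its CDN <script>), exposing
// the global `ort`. The file name "fdacu.onnx", the tensor names
// "landmarks"/"acupoints", and the [1, 478*3] / [1, 13*2] shapes are
// illustrative assumptions.
let session;

async function loadModel() {
  session = await ort.InferenceSession.create("fdacu.onnx");
}

// faceLandmarks: results.multiFaceLandmarks[0] from MediaPipe Face Mesh,
// i.e. 478 points with normalized x, y and relative depth z.
async function predictAcupoints(faceLandmarks) {
  const input = new Float32Array(478 * 3);
  faceLandmarks.forEach((p, i) => {
    input[3 * i] = p.x;
    input[3 * i + 1] = p.y;
    input[3 * i + 2] = p.z;
  });

  const feeds = { landmarks: new ort.Tensor("float32", input, [1, 478 * 3]) };
  const results = await session.run(feeds);

  // Assumed output: 13 (x, y) pairs, flattened to length 26.
  const out = results.acupoints.data;
  const acupoints = [];
  for (let i = 0; i < 13; i++) {
    acupoints.push({ x: out[2 * i], y: out[2 * i + 1] });
  }
  return acupoints;
}
```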
Usage
- Model loading: Please wait a few seconds while the models load: (1) the MediaPipe Face Mesh detector (478 landmarks) and (2) the FDACU ONNX neural regression model. If the app seems unresponsive, try refreshing the page.
- Mouse controls: Move the mouse pointer over an acupoint you are interested in; information about that acupoint will appear.
- Keyboard controls (a minimal p5.js sketch of these interactions is shown after this list):
  - Press A to toggle the face mesh
  - Press S to toggle the 478 landmarks
  - Press D to toggle the predicted acupoints
  - Press I to toggle the video
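The keyboard and mouse behavior can be pictured as a small p5.js event loop. The sketch below is a hand-written illustration under assumed names (the visibility flags and the `acupoints` array), not the demo's actual source.

```javascript
// Illustrative p5.js sketch of the toggles and the hover lookup.
// The flags and the `acupoints` array ({x, y, name} in canvas pixels,
// refreshed each frame by the model) are assumed names.
let showMesh = true, showLandmarks = true, showAcupoints = true, showVideo = true;
let acupoints = [];

function setup() {
  createCanvas(640, 480);
}

function keyPressed() {
  if (key === 'a' || key === 'A') showMesh = !showMesh;
  if (key === 's' || key === 'S') showLandmarks = !showLandmarks;
  if (key === 'd' || key === 'D') showAcupoints = !showAcupoints;
  if (key === 'i' || key === 'I') showVideo = !showVideo;
}

function draw() {
  background(0);
  // ... draw the video frame, face mesh, and landmarks per the flags ...
  if (showAcupoints) {
    for (const p of acupoints) {
      ellipse(p.x, p.y, 8, 8);
      // Show the acupoint's name when the pointer hovers near it.
      if (dist(mouseX, mouseY, p.x, p.y) < 10) {
        text(p.name, p.x + 12, p.y);
      }
    }
  }
}
```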
Privacy and Copyright
- Privacy: All computations run locally in your browser. We do not collect, store, or transmit any personal data, including your facial images or video streams. Your privacy is fully protected.
- Copyright: This model is developed by the research team at Fudan University and is intended for demonstration purposes only. All intellectual property rights are reserved. Redistribution or commercial use without permission is strictly prohibited.
System and Performance
Tested device performance (CPU inference):
- Windows PC — i7-12700H, 16GB RAM, Edge: 60–80 FPS
- iPad (iPadOS 16) — Safari: ~60 FPS
Contact
For technical issues, please contact: Dr. Liu-Jie Ren (renliujie@fudan.edu.cn)
Reference
X. Qiao, Y. Yu, Y.-C. Li, et al., "Facial acupoint localization via dense face landmarks and neural regression," in submission.