Projects, Research & Publications
1. Research: 1st System for Automatic 3D Animation of Piano Performances, June 2010 – July 2012
- Role: Lead research assistant, directed by Professors Bernd Hamann and Michael Neff.
- System introduction: Given a musical piece, the system automatically generates a 3D animation of a pianist performing it; to my knowledge, it is the only existing system to do so.
- Demos:
- Important components implemented:
- A novel piano fingering planning method, formulated with graph theory, decides which set of fingers should strike the piano keys for each chord (a minimal sketch appears at the end of this entry).
- Based on the generated fingering, initial key poses of the hands are determined from the geometric constraints of the hand on the keyboard and from piano theory.
- An optimization method refines these initial poses, producing a smooth, minimum-energy key pose sequence.
- Natural motion transitions between neighboring key poses are generated using a combination of sampled piano-playing motion and music features, allowing the system to support different playing styles such as finger crossovers, chords, and arpeggios.
- Implementation: C++ for the core system, MATLAB for fingering generation and optimization, the Maya API for the basic animation platform, and STK for MIDI import.
- Related professional software and hardware: Vicon Digital Optical Motion Capture System, Maya.
- Published as first author in the computer animation journal Computer Animation and Virtual Worlds (CAVW): Yuanfeng Zhu, Ajay Sundar Ramakrishnan, Bernd Hamann, and Michael Neff, CAVW, 21 Sep 2012. Indexed by SCI and EI.
- Covered by Phys.org (Link) and New Scientist magazine (Link).
- A short version of this research was accepted as a finalist in the Written Paper Category at the 2013 Interdisciplinary Graduate and Professional Student Symposium, UC Davis, which provided a venue for presenting the work to the other professional schools and academic colleges at UC Davis.
- Download: Paper
- Collaboration and many thanks to: my partner Ajay Sundar Ramakrishnan for continuous help with polishing the paper; 3D artist and programmer Paris Maroidis for his important contributions to the development of our system; Jonathan Graham for his elegant modeling of the piano character; Yingying Wang and Pengcheng Luo for their help during the motion capture process; Professor Hong Zhu for her suggestions on piano theory; Binglin Li for the piano performances captured for this work; and IDAV group members Yejin Kim, Tyler Martin, and Nick Toothman, and Yubo Zhang from the VIDI Lab, for many valuable suggestions.
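- Sketch (illustrative only): the fingering-planning bullet above refers to a graph-based method; the sketch below shows one minimal way such planning can be posed, as a shortest path through a layered graph in which each chord contributes a layer of candidate finger assignments. The paper's actual cost terms (hand geometry, piano theory) are not reproduced here; transitionCost() is a hypothetical placeholder.

```cpp
// Minimal sketch: fingering planning as a shortest path over a layered graph.
// Each chord contributes a layer of candidate finger assignments; edges between
// consecutive layers carry a transition cost. transitionCost() is hypothetical.
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

struct Assignment {                 // one candidate fingering for one chord
    std::vector<int> fingers;       // finger index (1..5) per note, low to high
    std::vector<int> keys;          // piano key index per note
};

// Hypothetical transition cost: penalize large hand jumps and finger reuse.
double transitionCost(const Assignment& a, const Assignment& b) {
    double cost = std::fabs(double(b.keys.front() - a.keys.front())); // hand travel
    for (int fa : a.fingers)
        for (int fb : b.fingers)
            if (fa == fb) cost += 2.0;    // reusing a finger costs extra effort
    return cost;
}

// Dynamic programming over layers: best[i][j] is the cheapest cost of reaching
// candidate j of chord i. Returns the index of the chosen candidate per chord.
std::vector<int> planFingering(const std::vector<std::vector<Assignment>>& layers) {
    if (layers.empty()) return {};
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<std::vector<double>> best(layers.size());
    std::vector<std::vector<int>>    prev(layers.size());
    best[0].assign(layers[0].size(), 0.0);
    prev[0].assign(layers[0].size(), -1);
    for (std::size_t i = 1; i < layers.size(); ++i) {
        best[i].assign(layers[i].size(), INF);
        prev[i].assign(layers[i].size(), -1);
        for (std::size_t j = 0; j < layers[i].size(); ++j)
            for (std::size_t k = 0; k < layers[i - 1].size(); ++k) {
                double c = best[i - 1][k] + transitionCost(layers[i - 1][k], layers[i][j]);
                if (c < best[i][j]) { best[i][j] = c; prev[i][j] = int(k); }
            }
    }
    // Trace back the cheapest path from the last layer to the first.
    std::vector<int> pick(layers.size());
    std::size_t last = layers.size() - 1;
    int j = 0;
    for (std::size_t k = 1; k < layers[last].size(); ++k)
        if (best[last][k] < best[last][j]) j = int(k);
    for (std::size_t i = last; ; --i) {
        pick[i] = j;
        if (i == 0) break;
        j = prev[i][j];
    }
    return pick;
}
```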
2. Project: Cross-Platform Animation System, June 2011 – December 2012
- Role: Lead and sole research assistant (funded as a PhD GSR), directed by Professor Michael Neff.
- System introduction: Given user input such as a sentence, the system automatically generates a corresponding 3D human motion with appropriate gestures and facial expressions.
- Demos:
- Important components implemented:
- Motion-capture-data-driven full-body animation (motion data represented in the BVH standard)
- Script-driven facial animation (represented in the BML standard)
- Real-time streaming of body and facial animation from server to client (a sketch of one possible frame format appears at the end of this entry)
- Real-time communication between the Android client, the Unity server (Windows and Android versions available), the C++ server (Windows), and the C# server (Windows). The C++ and C# servers provide interfaces to other C++- and C#-based components to extend the system, e.g., an audio recognition module, a motion database management system, or a motion gesture analysis and synthesis module.
- Coding skills: C#, the Unity3D game engine, C++, C++ interop.
- Doc: ReadMe
- Programs: (Please ask me for a specific version, Windows and/or Android, of the following programs if needed.)
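- Sketch (illustrative only): the streaming bullet above refers to sending body and facial animation from server to client in real time; the sketch below shows one possible way to pack a pose frame into a flat byte buffer for such streaming. The actual protocol used between the Unity, C++, and C# servers is not documented here, and the field layout below is hypothetical (it also assumes both endpoints agree on joint order and endianness).

```cpp
// Hypothetical pose-frame wire format for streaming body animation:
// [frameIndex][jointCount][x y z w per joint].
#include <cstdint>
#include <cstring>
#include <vector>

struct JointRotation { float x, y, z, w; };     // one quaternion per joint

struct PoseFrame {
    std::uint32_t frameIndex;
    std::vector<JointRotation> joints;
};

// Pack a frame into a flat byte buffer on the server side.
std::vector<std::uint8_t> packFrame(const PoseFrame& f) {
    std::vector<std::uint8_t> buf(2 * sizeof(std::uint32_t) +
                                  f.joints.size() * sizeof(JointRotation));
    std::uint8_t* p = buf.data();
    std::uint32_t count = static_cast<std::uint32_t>(f.joints.size());
    std::memcpy(p, &f.frameIndex, sizeof(std::uint32_t)); p += sizeof(std::uint32_t);
    std::memcpy(p, &count,        sizeof(std::uint32_t)); p += sizeof(std::uint32_t);
    if (!f.joints.empty())
        std::memcpy(p, f.joints.data(), f.joints.size() * sizeof(JointRotation));
    return buf;
}

// Unpack the buffer on the client side; returns false if the buffer is malformed.
bool unpackFrame(const std::vector<std::uint8_t>& buf, PoseFrame& out) {
    if (buf.size() < 2 * sizeof(std::uint32_t)) return false;
    const std::uint8_t* p = buf.data();
    std::uint32_t count = 0;
    std::memcpy(&out.frameIndex, p, sizeof(std::uint32_t)); p += sizeof(std::uint32_t);
    std::memcpy(&count,          p, sizeof(std::uint32_t)); p += sizeof(std::uint32_t);
    if (buf.size() < 2 * sizeof(std::uint32_t) + count * sizeof(JointRotation)) return false;
    out.joints.resize(count);
    if (count)
        std::memcpy(out.joints.data(), p, count * sizeof(JointRotation));
    return true;
}
```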
3. Research: Balance Control in Interactive Motion, March 2012 – December 2012
- Role: Lead and sole research assistant, directed by Professors Bernd Hamann and Michael Neff.
- System introduction: Given an external hit applied to a character in a fully dynamic simulation, and optionally a desired pose with an exact timing requirement, the system uses an optimization framework to automatically generate a responsive balancing motion that satisfies various user requirements on motion style.
- Demos: Demo 1 Demo 2 Demo 3
- Important components implemented:
- MATLAB implementation of the proposed optimization framework, which analyzes and predicts the balance state of a candidate motion and finds the optimal motion solution.
- The well-known ODE physics engine is used for human motion simulation, collision detection, and response.
- Novelty 1: one Lyapunov-function-based constraint quickly (a 56% improvement) eliminates candidate solutions that fail to achieve the desired motion goal.
- Novelty 2: another Lyapunov-function-based constraint predicts the balance state under a candidate motion more accurately (36%–93% improvement, depending on motion type) and more efficiently (3%–24% improvement); a toy sketch of a Lyapunov-style balance test appears at the end of this entry.
- Implementation: C++ for the core system, the ODE physics engine, and MATLAB.
- Download: PPT and Demo, Thesis, More Demos and Experiment Data
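- Sketch (illustrative only): as referenced in Novelty 2 above, the work uses Lyapunov-function-based constraints; the exact Lyapunov functions and constraint formulation are not reproduced here. The toy sketch below only illustrates the general idea: evaluate an energy-like function V of the center-of-mass state and reject a candidate motion early if V grows along the simulated trajectory. V() and its gains are hypothetical.

```cpp
// Toy Lyapunov-style balance test. V() is a hypothetical energy-like function
// of the center-of-mass (COM) state; a candidate motion is rejected as soon as
// V increases along the simulated trajectory, i.e. the character drifts away
// from balance instead of converging back toward it.
#include <cstddef>
#include <vector>

struct ComState {
    double x, z;     // horizontal COM offset from the support-polygon center (m)
    double vx, vz;   // horizontal COM velocity (m/s)
};

// Hypothetical candidate Lyapunov function: quadratic in COM offset and velocity.
double V(const ComState& s) {
    const double kPos = 1.0, kVel = 0.2;     // made-up gains
    return kPos * (s.x * s.x + s.z * s.z) + kVel * (s.vx * s.vx + s.vz * s.vz);
}

// Returns false (reject the candidate) if V ever grows by more than `tol`.
bool staysBalanced(const std::vector<ComState>& trajectory, double tol = 1e-3) {
    for (std::size_t i = 1; i < trajectory.size(); ++i)
        if (V(trajectory[i]) > V(trajectory[i - 1]) + tol)
            return false;
    return true;
}
```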
4. Project: Reconstruction of Criminal Virtual Environment for Dalian Police Station, February 2005 – May 2007
- Role: Lead research assistant (GSR funded by the China NSF), directed by Professor Jun Meng.
- Key tasks: Upgraded the original system to support physical collision detection and response between the environment and characters using the ODE physics engine (a minimal setup sketch appears at the end of this entry); designed a user-friendly GUI with the GLUI library.
- Coding skills: C++, OpenGL.
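- Sketch (illustrative only): the key-task bullet above mentions collision detection and response with the ODE physics engine; the sketch below is the standard ODE collide/step pattern (a near-callback that creates per-step contact joints), not the project's original code, and uses a single falling sphere on a ground plane as a stand-in for the real environment and characters.

```cpp
// Standard ODE pattern: dSpaceCollide() finds potentially colliding geom pairs,
// the near-callback turns contact points into temporary contact joints, and
// dWorldQuickStep() integrates the response.
#include <cstring>
#include <ode/ode.h>

static dWorldID world;
static dSpaceID space;
static dJointGroupID contactGroup;

static void nearCallback(void* /*data*/, dGeomID o1, dGeomID o2) {
    const int MAX_CONTACTS = 8;
    dContact contacts[MAX_CONTACTS];
    std::memset(contacts, 0, sizeof(contacts));
    int n = dCollide(o1, o2, MAX_CONTACTS, &contacts[0].geom, sizeof(dContact));
    for (int i = 0; i < n; ++i) {
        contacts[i].surface.mode   = dContactBounce;
        contacts[i].surface.mu     = dInfinity;   // friction
        contacts[i].surface.bounce = 0.1;         // restitution
        dJointID c = dJointCreateContact(world, contactGroup, &contacts[i]);
        dJointAttach(c, dGeomGetBody(o1), dGeomGetBody(o2));
    }
}

int main() {
    dInitODE();
    world        = dWorldCreate();
    space        = dHashSpaceCreate(0);
    contactGroup = dJointGroupCreate(0);
    dWorldSetGravity(world, 0, -9.81, 0);
    dCreatePlane(space, 0, 1, 0, 0);              // ground plane y = 0

    dBodyID body = dBodyCreate(world);            // one falling sphere
    dBodySetPosition(body, 0, 2, 0);
    dMass m;
    dMassSetSphere(&m, 1.0, 0.25);                // density, radius
    dBodySetMass(body, &m);
    dGeomID geom = dCreateSphere(space, 0.25);
    dGeomSetBody(geom, body);

    for (int step = 0; step < 300; ++step) {      // fixed-step simulation loop
        dSpaceCollide(space, 0, &nearCallback);   // detection -> contact joints
        dWorldQuickStep(world, 0.01);             // response + integration
        dJointGroupEmpty(contactGroup);           // contacts live for one step
    }

    dJointGroupDestroy(contactGroup);
    dSpaceDestroy(space);
    dWorldDestroy(world);
    dCloseODE();
    return 0;
}
```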
5. Research: Real-time Collision Detection between Rigid Bodies, January 2006 – January 2007
- Role: Lead research assistant, directed by Professor Jun Meng.
- Improvement: better execution speed at the same accuracy as prior work, achieved by designing a hybrid bounding volume hierarchy (HBVH); a sketch of the hybrid idea appears at the end of this entry.
- Wrote my first international paper as first author: "Research on Real-Time Collision Detection Based on Hybrid Hierarchical Bounding Volume," Journal of System Simulation, ISSN 1004-731X, Vol. 20, No. 02, Jan. 20, 2008. Indexed by EI.
- Download: Paper
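- Sketch (illustrative only): the improvement bullet above refers to a hybrid bounding volume hierarchy; the sketch below illustrates the general hybrid idea, pairing a cheap bounding-sphere reject with a tighter AABB test during traversal. The paper's actual choice of volumes and traversal order is not reproduced, and the exact triangle test is left as a stub.

```cpp
// Hybrid-BVH traversal sketch: each node carries a bounding sphere (cheap early
// reject) and an AABB (tighter test); the exact triangle-triangle test is only
// reached for overlapping leaf pairs and is stubbed out here.
struct Vec3 { double x, y, z; };

struct HybridNode {
    Vec3 center; double radius;        // bounding sphere
    Vec3 bbMin, bbMax;                 // axis-aligned bounding box
    HybridNode* left  = nullptr;       // internal node: two children
    HybridNode* right = nullptr;
    int triangleIndex = -1;            // leaf: index of the enclosed triangle
};

bool spheresOverlap(const HybridNode& a, const HybridNode& b) {
    double dx = a.center.x - b.center.x, dy = a.center.y - b.center.y,
           dz = a.center.z - b.center.z, r = a.radius + b.radius;
    return dx * dx + dy * dy + dz * dz <= r * r;
}

bool aabbsOverlap(const HybridNode& a, const HybridNode& b) {
    return a.bbMin.x <= b.bbMax.x && a.bbMax.x >= b.bbMin.x &&
           a.bbMin.y <= b.bbMax.y && a.bbMax.y >= b.bbMin.y &&
           a.bbMin.z <= b.bbMax.z && a.bbMax.z >= b.bbMin.z;
}

bool trianglesIntersect(int, int) { return true; }   // placeholder exact test

bool treesCollide(const HybridNode* a, const HybridNode* b) {
    if (!a || !b) return false;
    if (!spheresOverlap(*a, *b)) return false;       // cheap reject first
    if (!aabbsOverlap(*a, *b))   return false;       // tighter reject second
    bool aLeaf = (a->triangleIndex >= 0), bLeaf = (b->triangleIndex >= 0);
    if (aLeaf && bLeaf) return trianglesIntersect(a->triangleIndex, b->triangleIndex);
    if (aLeaf)  return treesCollide(a, b->left) || treesCollide(a, b->right);
    return treesCollide(a->left, b) || treesCollide(a->right, b);
}
```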
6. Research: Real-time Collision Detection and Response, January 2007 – January 2008
7. Research after M.S. Graduation, January 2008 – November 2009
- Role: Sole researcher, self-directed.
- Resumed my earlier work, separately, on real-time collision detection between rigid bodies and on collision detection between a rigid body and a soft body (a hedged sketch of one standard deformable-BVH ingredient appears at the end of this entry).
- Demos: Friction Restitution Structure Damping Frog Bunny Dragon
- Designed new experiments to demonstrate better execution speed, precision, and robustness.
- Published as 1st author "Real-time Collision Detection and Response Techniques for Deformable Objects Based on Hybrid Bounding Volume Hierarchy." COMPEL ISSN 0332-1649, Volume 28, Issue 6, 2009. Indexed by SCI and EI. Award recommended to COMPEL by the IEEE Asia Simulation Conference 2008 as among 50 Most Excellent Papers. Download: Paper
- Published as 1st author "Efficient Approach Based on Hybrid Bounding Volume Hierarchy for Real-time Collision Detection." Journal of System Simulation, ISSN 1004-731X, Volume 20, No.19, Oct.6, 2008. Indexed by EI.
- Award recommended to Journal of System Simulation by the IEEE Asia Simulation Conference 2008 as among excellent Papers.
- Download: Paper
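- Sketch (illustrative only): for the deformable-object case mentioned above, one standard ingredient of BVH-based collision detection is refitting the hierarchy's boxes bottom-up after every deformation instead of rebuilding the tree; the sketch below shows that refit step. The papers' actual update scheme is not reproduced here.

```cpp
// Bottom-up AABB refit for a deformable mesh: leaves rebuild their boxes from
// the current vertex positions, internal nodes merge their children's boxes,
// and the tree topology itself is reused across frames.
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

struct RefitNode {
    Vec3 bbMin, bbMax;
    int left = -1, right = -1;          // child node indices, -1 for a leaf
    std::vector<int> vertexIndices;     // leaf: vertices of the enclosed triangles
};

void refit(std::vector<RefitNode>& nodes, int node, const std::vector<Vec3>& verts) {
    RefitNode& n = nodes[node];
    if (n.left < 0) {                                   // leaf node
        n.bbMin = n.bbMax = verts[n.vertexIndices.front()];
        for (int vi : n.vertexIndices) {
            const Vec3& v = verts[vi];
            n.bbMin.x = std::min(n.bbMin.x, v.x);  n.bbMax.x = std::max(n.bbMax.x, v.x);
            n.bbMin.y = std::min(n.bbMin.y, v.y);  n.bbMax.y = std::max(n.bbMax.y, v.y);
            n.bbMin.z = std::min(n.bbMin.z, v.z);  n.bbMax.z = std::max(n.bbMax.z, v.z);
        }
        return;
    }
    refit(nodes, n.left, verts);                        // internal node: recurse,
    refit(nodes, n.right, verts);                       // then merge child boxes
    const RefitNode& a = nodes[n.left];
    const RefitNode& b = nodes[n.right];
    n.bbMin = { std::min(a.bbMin.x, b.bbMin.x),
                std::min(a.bbMin.y, b.bbMin.y),
                std::min(a.bbMin.z, b.bbMin.z) };
    n.bbMax = { std::max(a.bbMax.x, b.bbMax.x),
                std::max(a.bbMax.y, b.bbMax.y),
                std::max(a.bbMax.z, b.bbMax.z) };
}
```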
8. Project: Navigation in Unity3D-based Application with Intel Mocap Camera, April 2013
- Role: Sole researcher, self-directed.
- Project introduction: This is a small project exploring how to use the Intel mocap camera to build a novel interaction method in a Unity-based application. While preparing for a job, I spent five days of spare time reading the SDK documentation, studying the examples, developing my own demo, and writing an evaluation of Intel PC SDK version 1.0.7383.
- Demo: Demo
- Important components implemented:
- Facial-landmark-based navigation
- Palm-landmark-based navigation (a hedged sketch of a landmark-to-steering mapping appears at the end of this entry)
- Coding skills: C#, the Unity3D game engine, Intel PCSDK.
- Programs: Helicopter Navigation Control (Please ask me for code if needed.)
- Please email me for the evaluation document "Problems, New Ideas and Potential Further Development on Unity-version of Intel PCSDK".
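- Sketch (illustrative only): the navigation components above map tracked facial or palm landmarks to movement commands; the sketch below shows one hypothetical mapping from a landmark's offset from the image center to a steering command for the helicopter demo, with a small dead zone against tracking jitter. The actual project is written in C# inside Unity with the Intel PCSDK, no PCSDK calls are reproduced here, and the gains and dead zone are made up.

```cpp
// Hypothetical landmark-to-steering mapping: the tracked landmark's offset from
// the image center drives yaw/pitch rates, with a dead zone to suppress jitter.
// Assumes a tracker that already reports the landmark normalized to [0,1]x[0,1].
struct Landmark     { float u, v; };             // normalized image coordinates
struct SteerCommand { float yawRate, pitchRate; };

SteerCommand landmarkToSteer(const Landmark& lm,
                             float gain = 2.0f, float deadZone = 0.05f) {
    float du = lm.u - 0.5f;                      // + means right of image center
    float dv = lm.v - 0.5f;                      // + means below image center
    SteerCommand cmd{0.0f, 0.0f};
    if (du >  deadZone || du < -deadZone) cmd.yawRate   =  gain * du;
    if (dv >  deadZone || dv < -deadZone) cmd.pitchRate = -gain * dv;
    return cmd;
}
```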
9. Project: Kinect Server for MAYA, May 2013
- Role: Sole researcher, self-directed.
- Project introduction: This is a small project exploring how to use the Microsoft Kinect mocap camera with the OpenNI v2.x driver to drive skeleton animation in MAYA in real time.
OpenNI is a well-known open-source framework for various motion capture devices, including the Microsoft Kinect. Because OpenNI v2.x is not compatible with OpenNI v1.x, the application had to be redesigned to support OpenNI v2; this program is built on the newest OpenNI v2.x alpha.
- Demo: Demo
- Important components implemented:
- An algorithm that generates skeleton animation from the joint world positions captured by the Kinect (a hedged sketch of the position-to-rotation step appears at the end of this entry).
- A motion-capture server framework that can potentially connect any motion capture camera to MAYA for real-time motion-capture sampling and skeleton animation through the MAYA API.
- Coding skills: C++, MAYA API and MEL, OpenNI v2.x, PrimeSense NiTE 2.0, FLTK 2.
- Programs: MayaKinectOpenNi2 (Please ask me for code if needed.)
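- Sketch (illustrative only): the first component above turns captured joint world positions into skeleton animation; the sketch below shows the core position-to-rotation step, computing the axis-angle rotation that turns a bone's rest direction into the captured parent-to-child direction. Applying the result inside MAYA (via the MAYA API or MEL) is not shown, and the helper types are hypothetical.

```cpp
// Position-to-rotation step: given captured world positions of a parent and
// child joint, compute the rotation (axis-angle) that turns the bone's rest
// direction into the captured bone direction. Applying it to the Maya rig is
// a separate step not shown here.
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3   sub(const Vec3& a, const Vec3& b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(const Vec3& a, const Vec3& b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3   cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(dot(v, v));
    return len > 1e-9 ? Vec3{v.x / len, v.y / len, v.z / len} : Vec3{0.0, 0.0, 0.0};
}

struct AxisAngle { Vec3 axis; double angle; };   // rotation to apply to the bone

// restDir: the bone's direction in the rig's bind pose.
// parentPos / childPos: captured joint world positions for the current frame.
AxisAngle boneRotation(const Vec3& restDir, const Vec3& parentPos, const Vec3& childPos) {
    Vec3 a = normalize(restDir);
    Vec3 b = normalize(sub(childPos, parentPos));
    Vec3 axis = normalize(cross(a, b));
    double c = dot(a, b);
    if (c >  1.0) c =  1.0;                      // clamp against numeric drift
    if (c < -1.0) c = -1.0;
    return {axis, std::acos(c)};
}
```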
10. Project: A Simpler JavaScript UI development, May 2013
- Role: Sole researcher, self-directed.
- Project introduction: This is a very small project exploring JavaScript's object-oriented style by developing a simple UI system.
- Demo: Demo
- Important components implemented: button, checkbox, and text box, each supporting various interesting customizations.
- Coding skills: JavaScript
- Programs: JavaScriptUIDesign (this link includes all the source code)
Coming soon...