DeepMotion Avatar Patch 2 Update


DeepMotion Avatar is beginning to mature, due in large part to the help of our core users. We’re nearly halfway through our closed alpha and excited to share our Patch 2 update, which brings massive improvements, new features, and broader platform support.

Apply perpetual force to your objects with our Continuous Force Motor, use our new iOS or WebGL plugins for Unity, and take advantage of our improved 3-point-tracking simulation rig when developing your social VR experiences. Read the full list below.
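
The Continuous Force Motor mentioned above keeps pushing a body on every physics step rather than applying a one-off impulse. A minimal conceptual sketch of that behavior, assuming a point mass and a hypothetical `simulate` helper (not DeepMotion's actual API):

```python
# Conceptual sketch of a continuous force motor: a constant force is
# applied on every physics step, so velocity grows linearly over time
# instead of decaying after a single impulse. All names here are
# hypothetical illustrations, not DeepMotion's API.

def simulate(force, mass, dt, steps, v0=0.0):
    """Integrate a constant force on a point mass (semi-implicit Euler)."""
    v, x = v0, 0.0
    for _ in range(steps):
        v += (force / mass) * dt  # acceleration accumulates every step
        x += v * dt               # position follows the updated velocity
    return v, x

# 10 N on a 2 kg body for 1 s of simulated time (50 steps at 20 ms):
v, x = simulate(force=10.0, mass=2.0, dt=0.02, steps=50)
# velocity ends near a*t = (10/2) * 1.0 = 5.0 m/s
```

The same constant-force-per-step pattern is what lets a motor drive conveyor belts, thrusters, or wind zones without the object ever coasting to a stop.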

Features and Improvements:

  • Greatly improved 3-point-tracking accuracy
  • Better character foot planting
  • Position lock threshold is automatically calculated
  • Continuous force motor is now supported
  • Better collision matrix support
  • Collision layers were introduced for Unreal Engine
  • Friction and restitution combine modes were added for Unity
  • Tolerance for pose controller is now customizable
  • Quality and stability improvements for the dog controller
  • Anchored compound colliders are supported for Unreal Engine
  • Convex and concave mesh colliders are supported for Unreal
  • Improved physics raycasting
  • iOS plugin is available for Unity
  • WebGL plugin (experimental) is available for Unity
  • Web Rig Editor livesync is improved for Unity
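
The friction and restitution combine modes in the list above decide how the coefficients of two touching materials merge into a single contact value. A generic sketch of the four common modes (mirroring the Average, Minimum, Maximum, and Multiply modes found in engines such as Unity); the `combine` function is a hypothetical illustration, not DeepMotion's or Unity's API:

```python
# Generic friction/restitution combine: each collider contributes a
# material coefficient, and the chosen mode picks the contact's value.
# Hypothetical sketch; mode names mirror common engine conventions.

def combine(a, b, mode):
    """Merge two material coefficients according to a combine mode."""
    if mode == "average":
        return (a + b) / 2.0
    if mode == "minimum":
        return min(a, b)
    if mode == "maximum":
        return max(a, b)
    if mode == "multiply":
        return a * b
    raise ValueError(f"unknown combine mode: {mode}")

# e.g. friction 0.6 (rubbery) against 0.1 (icy):
combine(0.6, 0.1, "average")   # ~0.35: a middling grip
combine(0.6, 0.1, "multiply")  # ~0.06: the slippery surface dominates
```

Which mode wins when the two materials request different modes is engine-specific, so check the engine's precedence rules rather than assuming one.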

Bugs fixed:

  • Several bugs in the core solver
  • Bugs in character recreation
  • Bugs in breakable joints, including hinge, universal, and prismatic joints

Changes:

  • Kernel is dynamically linked for Unreal
  • More parameters were exposed to ACE character control APIs
  • Locomotion 3-point-tracking VR scene is included for Unreal
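
The 3-point-tracking scenes above reconstruct a full body pose from only the headset and two controller transforms. A deliberately simplified sketch of that idea, with hypothetical names and math far cruder than the shipped solver:

```python
# Toy 3-point-tracking estimate: given headset and two controller
# positions (x, y, z), guess a chest position and a body yaw.
# Hypothetical illustration only; the real simulation rig solves a
# full articulated body, not two numbers.
import math

def estimate_torso(head, left_hand, right_hand, neck_drop=0.25):
    """Guess (chest position, yaw in radians) from three tracked points."""
    # Chest: a fixed offset straight below the headset.
    chest = (head[0], head[1] - neck_drop, head[2])
    # Left-to-right hand direction in the horizontal (XZ) plane.
    dx = right_hand[0] - left_hand[0]
    dz = right_hand[2] - left_hand[2]
    # Forward = hand_direction x world_up = (-dz, 0, dx); yaw measured from +Z.
    yaw = math.atan2(-dz, dx)
    return chest, yaw

# Head at 1.6 m, hands level and symmetric: the body faces +Z, yaw = 0.
chest, yaw = estimate_torso((0.0, 1.6, 0.0), (-0.3, 1.2, 0.0), (0.3, 1.2, 0.0))
```

The accuracy gains in this patch come from the physics solver filling in everything this toy version ignores: elbows, spine bend, hips, and foot placement.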

DeepMotion is working on core technology to transform traditional animation into intelligent simulation. Through articulated physics and machine learning, we help developers build lifelike, interactive virtual characters and machinery. Many game industry veterans remember when the NaturalMotion procedural animation used in Grand Theft Auto was a breakthrough over IK-based animation; we are using deep reinforcement learning to go even further. We are creating cost-effective solutions beyond keyframe animation, motion capture, and inverse kinematics to build next-gen motion intelligence for engineers working in VR, AR, robotics, machine learning, gaming, animation, and film. Interested in the future of interactive virtual actors? Learn more here or sign up for our newsletter.