In desktop mode, grabbing an item triggers a special tracking behavior in which the avatar's right hand is thrust forward to hold the item.

If the mechanism that detects this grab on the program side were exposed to Avatars 3.0 so that the Animator could recognize the grab state, the expandability of avatars could be greatly increased. (Hand gestures can tell which of several hand shapes the player is making, but they cannot tell whether the player is actually holding an item.)

In desktop mode, for example, the left hand could be attached to the grabbing right hand to complete a two-handed gun-holding motion.

If this parameter could also be used in VR, something like an avatar's built-in weapon could be displayed only while no item is grabbed. (Many weapon gimmicks are driven by avatar hand gestures, and these easily conflict with the grab state.) Conversely, it would become possible to prepare special animations that play only while grabbing.
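The toggling described above could be sketched roughly as follows. This is only a minimal illustration of the proposed logic; `IsGrabbing` is a hypothetical parameter name, and these functions are not part of any existing VRChat or Avatars 3.0 API:

```python
# Sketch of animator logic driven by a hypothetical boolean
# "IsGrabbing" parameter synced from the game side.

def weapon_visible(is_grabbing: bool) -> bool:
    """Show the avatar's built-in weapon only while no item is grabbed,
    so the grabbed item and the weapon gimmick never overlap."""
    return not is_grabbing

def active_animation(is_grabbing: bool) -> str:
    """Select an animation state based on the grab parameter,
    e.g. a special two-handed holding pose while grabbing."""
    return "GrabPose" if is_grabbing else "Idle"

# Example: while grabbing, the weapon is hidden and the grab pose plays.
print(weapon_visible(True), active_animation(True))
```

In practice this branching would live in the avatar's Animator Controller as transitions conditioned on the parameter, rather than in script code.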