Avatar Bugs & Feature Requests

Post about current Avatar bugs and Avatar Feature Requests. One item per post!
Non-constructive and off-topic posts will be moved or deleted.
Components that take Transform values and output them to Custom Shader properties/Animator parameters?
This is something that has felt lacking for a good while: components that could take the vectors of a transform and output them to a custom string, to be read as either a material property or an animator parameter. The implementation would be similar to how PhysBones lets creators name parameters with custom strings that the animator then reads to drive specific animation reactions depending on the value.

Examples for the animator:

Rotation offset: Blender has a feature that Unity lacks: corrective shape keys. In Blender, a driver can use a bone's rotation to correct the deforming mesh at a given angle, so you can bend your elbow in a specific way without needing to redo your weight paints or mesh topology, and you can limit how far the rotation influences the blendshape amount within the driver itself.

Positional offset: Similar to the above, a child of an empty game object could control how blendshapes look based on the child's positional offset relative to the parent. If I dragged a ball sitting in world space to the left, I could make the face look angry, or make a character on the screen of a prop on my model move to the left.

Examples for shader properties:

Vectors, floats, and colors: Genshin Impact, love it or hate it, is a decent game with an interesting shading style, especially in how it handles facial shadows. HoyoVerse uses SDFs for this effect, taking the forward and right vectors of the head bone to determine the lighting angle along a gradient and flipping the gradient under specific conditions. The problem is that the only way to do this in Unity is to change the root bone of the skinned mesh renderer from the hip bone to the head bone, which is bad for anyone who wants a consistent bounding box across all meshes and for shaders that manipulate the skinned mesh's vertices. A component that could output the forward, right, and even up vectors to a custom string for a material property to read would solve this. Alternatively, it could also output transform positions for proper dissolve effects, for motion-capture systems that encode their output in colors, or to drive a good wobble effect or derive the fill level of a glass bottle for a liquid shader, or even have it magically fill up.

If possible, this could be an extension to Avatar Dynamics, and the transform output could also be implemented into PhysBones, but extending it to the other use cases mentioned above would be extremely helpful and likely more optimized than lights + depth tricks, CRTs + cameras on our avatars, or whatever other janky, unoptimized methods we currently use for our creations. A minimal sketch of what such a component could do is below.
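To make the request concrete, here is a minimal Unity C# sketch of what such a component could do. Nothing like this exists in the SDK today; the component name, the `_HeadForward`/`_HeadRight` property names, and the `ElbowBend` parameter are all made up for illustration, and a real avatar-side version would have to live inside VRChat's own systems rather than a user script.

```csharp
using UnityEngine;

// Hypothetical sketch only: reads one bone's transform each frame and pushes
// its vectors into a material property block and an animator float parameter.
public class TransformValueDriver : MonoBehaviour
{
    [Header("Source")]
    public Transform sourceBone;            // e.g. the head bone or an elbow bone

    [Header("Material output")]
    public Renderer targetRenderer;         // renderer whose shader reads the vectors
    public string forwardProperty = "_HeadForward";  // placeholder property names
    public string rightProperty   = "_HeadRight";

    [Header("Animator output")]
    public Animator targetAnimator;
    public string angleParameter = "ElbowBend";      // placeholder float parameter
    public Vector3 restDirection = Vector3.down;     // bone direction at rest (rig-dependent)

    private MaterialPropertyBlock _block;

    void LateUpdate()
    {
        if (sourceBone == null) return;

        // Push the bone's world-space forward/right vectors into shader properties,
        // e.g. for an SDF face-shadow gradient that needs the head orientation.
        if (targetRenderer != null)
        {
            if (_block == null) _block = new MaterialPropertyBlock();
            targetRenderer.GetPropertyBlock(_block);
            _block.SetVector(forwardProperty, sourceBone.forward);
            _block.SetVector(rightProperty,   sourceBone.right);
            targetRenderer.SetPropertyBlock(_block);
        }

        // Push a normalized bend angle into an animator parameter so an animation
        // (for example, a corrective blendshape clip) can react to it.
        if (targetAnimator != null)
        {
            // Which local axis runs along the limb depends on the rig; up is assumed here.
            float angle = Vector3.Angle(restDirection, sourceBone.up);  // 0..180 degrees
            targetAnimator.SetFloat(angleParameter, angle / 180f);      // remap to 0..1
        }
    }
}
```

LateUpdate is used in this sketch so the values are sampled after the animator has posed the bones for the frame; positional offsets for the "child of an empty" case could be written out the same way with SetVector.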
Whitelist LOD Group component
There is no reason not to. And no, impostors are not a valid substitute for actual authored LODs, though they could be a decent pre-cull step; neither is distance culling. Relying on occlusion is not an option either, as world layouts vary greatly and often never bring the camera close enough to its targets.

We're currently stuck between two extremes: either you run on eleven-and-a-half vertices, or you rock LOD0 and effectively output microtriangles for anyone two or more meters away. More often than not it's the latter, and given commonly used fragment shader complexities (and their "quality"), the performance impact compounds quickly. I guess this needs to be said: it's not 1996, and we can have more than 512 verts per scene, especially in VR, where detail is paramount.

The previously stated excuses for not whitelisting the component were laughable at best. Just because few people will make use of it doesn't mean no one should be able to use it. The safety system already accommodates LOD meshes under the mesh-visibility umbrella. Existing workarounds rely on the safety system allowing animator evaluations, as well as on contact receivers; they are janky, unreliable, and take too much time to set up.

There is no need to implement some magical automatic LOD generator either. Most automatic generators mangle topology and related attributes beyond a reasonable level, even for flatscreen. If an author doesn't want to properly author LOD meshes, they can run an existing automatic generator themselves.
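For scale of the ask: once the LOD meshes are authored, wiring up the component is trivial. A minimal sketch, assuming two pre-authored LOD meshes whose renderers are passed in; the class, method, and threshold values are placeholders, and the same setup can be done entirely in the Inspector.

```csharp
using UnityEngine;

// Illustrative only: configures a standard Unity LODGroup from two renderer sets.
public static class LodSetupExample
{
    public static void Setup(GameObject avatarPart,
                             Renderer[] lod0Renderers,
                             Renderer[] lod1Renderers)
    {
        LODGroup group = avatarPart.AddComponent<LODGroup>();

        // Screen-relative transition heights: LOD0 while the object covers more
        // than 40% of screen height, LOD1 down to 5%, culled below that.
        LOD[] lods =
        {
            new LOD(0.40f, lod0Renderers),
            new LOD(0.05f, lod1Renderers)
        };

        group.SetLODs(lods);
        group.RecalculateBounds();
    }
}
```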