Avatar Bugs & Feature Requests

Post about current Avatar bugs and Avatar Feature Requests. One item per post!
Non-constructive and off-topic posts will be moved or deleted.
Allow users to create their own custom LOD models for avatars
I would love the ability to optimize my avatars even further by using hand-made Level of Detail (LOD) models at a distance. This sounds like it could be easily exploited, but with enough restrictions, avatar creators who want to go the extra mile could optimize their avatars to a ludicrous degree.

Perhaps the polygon count of LOD models could be rigidly locked to a percentage of the avatar's original polycount. For example, LOD1 could be 75%, LOD2 could be 50%, and so on. If I have an avatar that's 70k triangles, then LOD1 would have to be 52,500 triangles or less, and LOD2 would have to be 35,000 triangles or less. Perhaps LOD models past LOD1 could be forced to have only one skinned mesh. LOD models could be required to use the same textures and materials as the main model, to prevent exploitation by swapping to unoptimized materials. Perhaps the number of custom materials could be capped at 2. Maybe the Quest fallback version of a model could simply be the lowest custom LOD model, if that LOD model is optimized enough.

If there were a separate Performance Ranking system for LOD models that blocked uploading them if they went over the limit, I don't see this feature being exploited. At worst, most users would ignore it, and automatically generated LODs could be used instead. At best, an avatar could be optimized massively, which could improve performance across the board, especially on Quest or Android. The only downside would be an increased download size, but if the LOD models are forced to use the same textures and materials as the main model, the impact would be negligible.
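The percentage budget described above is simple to express. Here is a minimal sketch, assuming the post's example schedule of 75% for LOD1 and 50% for LOD2 (the actual ratios and enforcement would be up to VRChat):

```python
def lod_triangle_budgets(base_tris, ratios=(0.75, 0.50)):
    """Maximum triangle count allowed per LOD level, as a fixed
    percentage of the avatar's original polycount.

    The (0.75, 0.50) schedule for LOD1/LOD2 is the hypothetical
    example from the post, not an actual VRChat rule.
    """
    return [int(base_tris * r) for r in ratios]

# The 70k-triangle avatar from the example:
print(lod_triangle_budgets(70_000))  # [52500, 35000]
```

An uploader could then simply reject any hand-made LOD mesh whose triangle count exceeds its level's budget.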
Components that take Transform values and output them to Custom Shader properties/Animator parameters?
This is something that has felt lacking for a good while: components that could take the vectors of a transform and output them to a custom string, for either a material property or an animator parameter to use. The implementation would be similar to how PhysBones lets creators use custom strings to name parameters that the animator reads, driving specific animation reactions depending on the value.

Examples for the animator:

Rotation offset: One feature Blender has that Unity lacks is corrective shape keys. Corrective shape keys in Blender let users drive blendshapes from bone rotation angles to correct mesh deformation, so you could bend your elbow in a specific way without needing to redo your weight paints or mesh topology. Within the driver itself in Blender, you can limit the angle range over which the rotation influences the blendshape amount.

Positional offset: Similarly to the above, you could use a child of an empty GameObject to control how blendshapes look based on the child object's positional offset relative to the parent object. If I dragged a ball in world space to the left, I could make the face look angry, or make a character on the screen of a prop on my model move to the left.

Examples for shader properties:

Vectors, floats, and colors: Genshin Impact, love it or hate it, is a decent game with an interesting shading style, especially in how it does its facial shadows. HoYoverse uses SDFs to achieve this effect, taking the forward and right vectors of the head bone to determine the lighting angle along a gradient, and flipping the gradient depending on specific conditions.
The problem is that the only way to do this within Unity is to change the root bone of a skinned mesh renderer from the hip bone to the head bone, which is bad for those who want a consistent bounding box across all meshes, and for shaders that manipulate the vertices of the skinned mesh. One way to solve this would be a component that outputs the forward, right, and even upward vectors to a custom string for a material property to read and inherit the transform values from. Alternatively, you could output the transform's position for proper dissolve effects, for motion-capture-based systems that output through colors, or even to give a liquid shader a good wobble effect, get the volume of a glass bottle, or have it magically fill up.

If possible, this could be an extension to Avatar Dynamics, and it could also be implemented into PhysBones for the transforms. But extending it to other use cases like the ones mentioned would be extremely helpful, and possibly more optimized than using lights + depth tricks, CRTs + cameras on our avatars, or whatever other jank and unoptimized methods we use for our creations.
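The math such a component would run each frame is just rotating the world basis vectors by the bone's rotation. A minimal sketch in Python (a real implementation would be a C# Unity component; the material property names like `_HeadForward` are hypothetical placeholders for the user-supplied custom strings):

```python
import math

def rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z),
    using v' = v + w*t + cross(q.xyz, t) where t = 2*cross(q.xyz, v)."""
    w, ux, uy, uz = q
    tx = 2 * (uy * v[2] - uz * v[1])
    ty = 2 * (uz * v[0] - ux * v[2])
    tz = 2 * (ux * v[1] - uy * v[0])
    return (
        v[0] + w * tx + (uy * tz - uz * ty),
        v[1] + w * ty + (uz * tx - ux * tz),
        v[2] + w * tz + (ux * ty - uy * tx),
    )

def head_vectors(q):
    """Forward/right/up basis vectors of a bone rotation, using Unity's
    convention (Z forward, X right, Y up) -- the values the proposed
    component would write into the named material vector properties."""
    return {
        "_HeadForward": rotate(q, (0.0, 0.0, 1.0)),  # hypothetical names
        "_HeadRight":   rotate(q, (1.0, 0.0, 0.0)),
        "_HeadUp":      rotate(q, (0.0, 1.0, 0.0)),
    }

# Head yawed 90 degrees about +Y: forward ends up along world +X.
half = math.radians(90.0) / 2.0
q = (math.cos(half), 0.0, math.sin(half), 0.0)
vecs = head_vectors(q)
```

A shader could then read those properties to reconstruct the head's lighting angle for the SDF gradient without touching the skinned mesh's root bone.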