After seeing the fixed 3rd-order spherical harmonics I was curious how that might be optimized. DropAnSH-GS (Feb 2026) drops out high-order SH coefficients during training to force low-frequency color into the low-order coefficients (3.4. Spherical Harmonics Dropout). They conclude that high-order coefficients can be discarded to trade detail for speed/size. They don't seem to have considered encoding as sparse coefficients post-training to discard all near-zero coefficients.
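A post-training sparsification pass could be as simple as thresholding. A minimal NumPy sketch of the idea (the degree-3 layout of 45 "rest" coefficients per splat is the common 3DGS convention; the random data and the 0.05 threshold are illustrative assumptions, not from the paper):

```python
import numpy as np

# Hypothetical post-training sparsification: zero out near-zero SH
# coefficients and keep only the survivors as (flat index, value) pairs.
rng = np.random.default_rng(0)
n_splats = 1000
sh_rest = rng.normal(scale=0.05, size=(n_splats, 45)).astype(np.float32)

threshold = 0.05  # assumed cutoff; would be tuned against quality metrics
mask = np.abs(sh_rest) >= threshold
indices = np.flatnonzero(mask).astype(np.uint32)  # flat indices of kept coefficients
values = sh_rest.ravel()[indices]

# Lossy reconstruction: everything below the threshold comes back as zero.
recon = np.zeros(sh_rest.size, dtype=np.float32)
recon[indices] = values
recon = recon.reshape(sh_rest.shape)

print(f"kept {mask.mean():.1%} of coefficients")
```

Whether the index overhead pays off depends on how many coefficients really sit near zero in a trained scene; quantization and per-band truncation are the more commonly published routes.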
In case anyone is wondering why this is important: the spherical harmonics are frequently most of the data in a Gaussian splat data set — as much as 80% of the data for good quality scenes.
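The ~80% figure falls out of simple arithmetic on the per-splat layout. A quick check, assuming the field counts of a typical degree-3 3DGS .ply export (these counts are the common convention, not something stated above):

```python
# Back-of-the-envelope: float32 fields per splat in a typical degree-3 export.
position = 3
scale = 3
rotation = 4   # quaternion
opacity = 1
sh_dc = 3      # band-0 (constant) colour, one RGB triple
sh_rest = 45   # bands 1-3: 15 coefficients x 3 colour channels

total = position + scale + rotation + opacity + sh_dc + sh_rest
sh_share = (sh_dc + sh_rest) / total
print(total, f"{sh_share:.0%}")  # 59 floats per splat, SH is 81% of them
```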
The magic of Gaussian splats is their ability to render photorealistic outputs without material properties, explicit ray tracing, etc. They do this by synthesizing complex light transfer in the scene via these fuzzy blobs overlapping — but they need to be able to change color and transparency with view angle to recreate much of that light transport. Hence, relatively heavyweight data.
There are many approaches to reducing the data volume, and they get increasingly complex once you add a time component. It's not even worth listing publications here because the field is changing so quickly; just plan to look at the SIGGRAPH pre-prints in a couple of months. Exciting times!
Mainly by having view-dependent (i.e. changes with the camera angle) material reflectance (diffuse colour and specular highlight).
i.e. the colour (and possibly other surface properties) varies with viewing direction, which is (or at least can be) encoded spherically (as spherical harmonics).
The width/size of each point/splat is also not just a radius: it can be anisotropic and have an orientation in space, so again its apparent size varies with orientation when rendered.
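That anisotropy is usually expressed as a covariance factorization: each splat stores a per-axis scale and a quaternion, combined as Sigma = R S Sᵀ Rᵀ. A sketch of the standard 3DGS formulation (the function name and example values are mine):

```python
import numpy as np

def covariance(scale_xyz, quat_wxyz):
    """Build a splat's 3D covariance Sigma = (R S)(R S)^T from its
    per-axis scales and orientation quaternion (w, x, y, z)."""
    w, x, y, z = quat_wxyz / np.linalg.norm(quat_wxyz)
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    S = np.diag(scale_xyz)
    M = R @ S
    return M @ M.T

# An elongated splat: 10x longer along its local x axis, rotated 45 deg about z.
q = np.array([np.cos(np.pi / 8), 0.0, 0.0, np.sin(np.pi / 8)])
sigma = covariance(np.array([1.0, 0.1, 0.1]), q)
```

The eigenvalues of the resulting Sigma are just the squared scales, so the blob keeps its shape regardless of how it is oriented in the scene.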
It has been mildly amusing watching the AI crowd learn about point clouds though, and use things the VFX industry was using in the early 00s (spherical harmonic encoded materials - we had light-dependent as well for relighting - points with direction and anisotropic widths, etc)...
https://arxiv.org/pdf/2602.20933
This in particular has been hilarious for the exact reason you mentioned. For anybody curious, here's a paper from 2008 about this technique:
https://www.ppsloan.org/publications/StupidSH36.pdf