Storage space is something that both consumers and tech giants struggle with at some point or another, and a big reason for that is photos. Some users keep thousands of photos on their device at any given time, and with services like Google Photos, Google is hosting millions of photos with more arriving every day. Now the company is working on a new JPEG compression method that produces smaller file sizes than current standards, and it’s all done using neural networks…
To do this, Google is training neural networks by having them break down over six million random, previously compressed photos into 32×32 pieces. The network then selects the 100 pieces it determines to have the least effective compression compared to a PNG image. This method could be considered “the hard way,” and it forces the neural networks to be better prepared: if the network can handle compressing the worst of the worst, it should have no issue compressing everything else.
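To make that selection step concrete, here is a minimal Python sketch, not Google's actual pipeline: it splits a flat grayscale image into 32×32 patches and keeps the ones that compress worst under DEFLATE (the codec PNG uses internally) as a rough stand-in for the "least effective compression compared to a PNG image" criterion. The image dimensions, `keep` count, and function names are illustrative assumptions.

```python
import random
import zlib

PATCH = 32  # patch edge length described in the article


def split_into_patches(pixels, width, height, size=PATCH):
    """Split a flat grayscale byte image into non-overlapping size x size patches."""
    patches = []
    for top in range(0, height - size + 1, size):
        for left in range(0, width - size + 1, size):
            rows = []
            for r in range(size):
                start = (top + r) * width + left
                rows.extend(pixels[start:start + size])
            patches.append(bytes(rows))
    return patches


def hardest_patches(patches, keep=100):
    """Rank patches by how poorly they compress under DEFLATE (PNG's
    internal codec) and keep the worst offenders -- a rough proxy for
    selecting the least effectively compressed training examples."""
    def compressed_size(patch):
        return len(zlib.compress(patch, 9))
    return sorted(patches, key=compressed_size, reverse=True)[:keep]


# Demo on a synthetic 256x256 image: mostly flat gray, with a noisy band
# (rows 64-95) that should dominate the "hardest" list.
random.seed(0)
W = H = 256
pixels = [128] * (W * H)
for i in range(64 * W, 96 * W):
    pixels[i] = random.randrange(256)

patches = split_into_patches(pixels, W, H)
hard = hardest_patches(patches, keep=8)
print(len(patches))  # 64 patches (an 8x8 grid of 32x32 tiles)
```

In this toy example the eight hardest patches are exactly the noisy ones, which mirrors the article's logic: a network trained on the patches that resist compression the most should cope easily with everything else.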
What sets this method of compression apart from the usual approaches is that it can compress an image in patches, rather than treating the file as one big image. Google had previously tested this method, but at the time it never went smaller than 64×64 patches. By moving down to 32×32, Google is able to compress these photos into even smaller sizes.
Unfortunately, it’s unclear if or when Google will ever use this in a product like Google Photos. The best judge of whether compression was successful is the human eye, and right now there’s no efficient way to test whether the network is doing its job. If this project is ever implemented, it could bring photos to the point where file size is no longer a concern, regardless of how many photos you have.