Google finds a way to compress images using a brain-like network!
The clever people over at Google have devised a new way to compress
our images to a very small size while barely affecting their
quality, and it is accomplished using neural networks. This kind
of pursuit benefits both the company and its customers. On the
consumer's end, a smaller file size means more space to fit things on
their computers, cell phones, and tablets. But for the company that
actually has to deal with terabytes of selfies on a daily basis, the
tech can mean reduced server load, lower power consumption, greater
transfer speeds, and, ultimately, money saved.
This kind of idea isn't exactly unknown: HBO's critically
acclaimed television series Silicon Valley popularized the notion
of an extremely efficient compression algorithm among the general
public. Google's work teaches neural networks how to scrimp and save
data by looking at examples of how standard compression works on
random images from the internet, according to a technical paper
published on arXiv. The paper shows that neural networks can beat
standard JPEG compression on standard tests, according to the Google
team. However, that doesn't mean the technique is ready for everyday
use just yet.
The network is trained by breaking 6 million randomly selected,
previously compressed photos into tiny 32×32-pixel pieces, and then
selecting the 100 pieces with the least effective compression to
learn from. Effectiveness here is gauged by how much of their size
the pieces retain when compressed into a PNG: the pieces that shrink
the least resist compression the most. By training on these tougher
problems, the researchers theorize, the neural nets will be better
prepared to take on the easy patches. The network then predicts how
the image would look after compression and generates that image. The
big differentiator in this research is that the neural networks can
decide the best way to variably compress separate patches of a given
photo, and how those patches fit together, rather than treating the
whole image as one big piece.
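The "hard patch" selection step described above can be sketched in a few lines of Python. This is only an illustration, not Google's actual pipeline: the function names are made up, the patches are plain RGB byte strings, and zlib (the DEFLATE compressor that PNG is built on) stands in for full PNG encoding. The idea is the same: compress each candidate patch, and keep the ones whose compressed size stays closest to the original, since those resist compression the most.

```python
import zlib

PATCH = 32  # patch edge length in pixels, as in the paper


def hardness(patch_bytes):
    """Compressed-to-raw size ratio for one patch. Patches that resist
    DEFLATE (the compressor behind PNG) keep more of their size and
    therefore score higher -- these are the 'hard' training examples."""
    return len(zlib.compress(patch_bytes, 9)) / len(patch_bytes)


def hardest_patches(patches, keep=100):
    """Rank candidate patches by how poorly they compress and keep the
    worst `keep` of them, mirroring the paper's selection of the 100
    least-compressible 32x32 pieces."""
    return sorted(patches, key=hardness, reverse=True)[:keep]
```

A flat, single-color patch compresses almost to nothing and scores near zero, while a noisy patch barely shrinks at all and floats to the top of the ranking.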
A paper on the same topic was published by Google earlier this year,
but that previous work never proved the method could be used beyond
tiny 64×64-pixel images. The new work is not limited by the size of
the image.
So far, it's not really on the level shown in Silicon Valley, but
Google has just proved that the idea isn't so far-fetched after all,
and there just might come a day when our files are of negligible
size.