Google and MIT algorithm has learnt to fix your smartphone photos before you take them
Algorithm was shown 5,000 professionally edited photos to learn how to retouch images in real time.
Developers from Google and MIT have created a computer algorithm which can not only improve images automatically, but works so quickly that photographs are given a professional makeover before they are even taken.
Using just the power of a smartphone, the system can apply a range of automatic enhancements - to exposure, colour, contrast and more - live in the camera's viewfinder.
The above image shows how a poorly-lit scene on the left can be instantly and automatically improved, becoming the bright and colourful scene on the right.
The remarkable new technology has been created by a team of researchers from the Massachusetts Institute of Technology (MIT) and Google. Such is the system's speed and power that users can see how an image would look after being edited by a professional, live through the viewfinder, before they actually take it.
Machine learning has been used to teach the software what makes for a well-edited image, and how that differs from the original. Five thousand pairs of images - each a raw photograph alongside a version retouched by one of five photographers - were analysed until the system was intelligent enough to correctly edit what it sees.
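The paper itself trains a deep network to predict image edits; purely as an illustration of the paired-data setup described above, the sketch below trains a tiny stand-in model to reproduce a retouched image from a raw one. The model, batch sizes and random tensors are hypothetical placeholders, not the researchers' actual network or dataset.

```python
# Minimal sketch of supervised learning from (raw, retouched) image pairs.
# The tiny network and random tensors are stand-ins for the real model
# and the professionally edited photo collection.
import torch
import torch.nn as nn

# Small convolutional network mapping a raw RGB image to an enhanced one.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    # In practice each batch would hold raw photos and the photographers'
    # retouched versions; random tensors stand in for them here.
    raw = torch.rand(4, 3, 64, 64)
    retouched = torch.rand(4, 3, 64, 64)

    predicted = model(raw)
    loss = loss_fn(predicted, retouched)   # penalise distance from the human edit

    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```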
The software builds on a previous system developed at MIT, in which the smartphone uploaded a low-resolution copy of a photo to a server; the server analysed it and sent back a "recipe" for making the image look better, which the phone then applied to the original high-resolution version.
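To make the split between phone and server concrete, here is a toy sketch of that earlier cloud-assisted pipeline. The "recipe" here is a single brightness gain computed from a thumbnail, which is a deliberately simplified stand-in for the far richer edits the real system produced.

```python
# Toy illustration of the earlier pipeline: compute an edit "recipe" on a
# low-resolution copy, then apply it to the full-resolution photo.
import numpy as np

def downsample(image, factor=8):
    """Cheap box downsample: average over factor x factor blocks."""
    h, w, c = image.shape
    h, w = h - h % factor, w - w % factor
    small = image[:h, :w].reshape(h // factor, factor, w // factor, factor, c)
    return small.mean(axis=(1, 3))

def compute_recipe(thumbnail, target_mean=0.5):
    """'Server-side' step: derive a brightness gain from the thumbnail."""
    return target_mean / max(thumbnail.mean(), 1e-6)

def apply_recipe(full_res, gain):
    """'Phone-side' step: apply the recipe to the original image."""
    return np.clip(full_res * gain, 0.0, 1.0)

full_res = np.random.rand(1024, 768, 3)   # stand-in for a camera photo
thumbnail = downsample(full_res)          # what the phone would upload
gain = compute_recipe(thumbnail)          # what the server would send back
enhanced = apply_recipe(full_res, gain)   # edit performed on the phone
```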
The new system developed jointly with Google sees the server stage skipped, and all image processing and editing done locally by the smartphone itself.
To slim down the work required of the algorithm, the changes made to a photo are expressed as formulae, and a three-dimensional grid is used to map out the image. This means the edits required for each photo can be described mathematically rather than visually, saving storage space and data transfer time and reducing demands on the phone's processor.
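A rough way to picture this is a small three-dimensional grid of local colour transforms, indexed by a pixel's position and brightness, that is looked up per pixel instead of storing a full-resolution edit. The sketch below assumes simple affine colour transforms and nearest-neighbour lookup for brevity; the grid sizes and identity transforms are placeholders, and the real system predicts and interpolates its grid rather than using fixed values.

```python
# Simplified sketch of the grid idea: a coarse 3-D grid of local affine
# colour transforms (indexed by x, y and brightness) describes the edit
# mathematically; each full-resolution pixel looks up and applies one.
import numpy as np

GH, GW, GD = 16, 16, 8                       # grid height, width, brightness bins

# Identity transforms as placeholders; a learned model would predict these.
grid = np.zeros((GH, GW, GD, 3, 4))
grid[..., :, :3] = np.eye(3)

def apply_grid(image, grid):
    h, w, _ = image.shape
    luminance = image.mean(axis=2)
    # Map each pixel to a grid cell by position and brightness.
    yi = np.broadcast_to(np.arange(h)[:, None] * GH // h, (h, w))
    xi = np.broadcast_to(np.arange(w)[None, :] * GW // w, (h, w))
    zi = np.clip((luminance * GD).astype(int), 0, GD - 1)
    affine = grid[yi, xi, zi]                # (h, w, 3, 4) per-pixel transforms
    homogeneous = np.concatenate([image, np.ones((h, w, 1))], axis=2)
    # Apply each pixel's 3x4 affine matrix to its [r, g, b, 1] vector.
    return np.einsum('hwij,hwj->hwi', affine, homogeneous)

image = np.random.rand(480, 640, 3)
out = apply_grid(image, grid)                # identity grid leaves the image unchanged
```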
Google researcher Jon Barron said: "This technology has the potential to be very useful for real-time image enhancement on mobile platforms. Using machine learning for computational photography is an exciting prospect but is limited by the severe computational and power constraints of mobile phones. This paper may provide us with a way to sidestep these issues and produce new, compelling, real-time photographic experiences without draining your battery or giving you a laggy viewfinder experience."