This right here is the before and after of a digital camera picture run through a deblur filter. Now, maybe that’s a best-case scenario being used for a tech demo, but (1) technology like this only gets better, never worse, and (2) even if it is, holy crap.

The obvious next step you can infer from seeing that is that it will soon be possible to do this frame by frame in video, so you’ll be able to get a reasonably crisp video just by pointing your camera at the most important thing you can find and clicking record. We’re not far from postprocessing being able to, if not actually “zoom in and enhance”, then stabilize and clarify even crap-phone handheld video.

But the other thing is, how many shaky-handed, not-quite-in-focus pictures have you got archived? I don’t know about you, but I’ve got thousands, and before too long I’m going to be able to point my entire photo collection at this thing and let it run for a month, just to see what comes out the other side. Odds are pretty good that there are some gems in there that I’ll be able to recover; you could probably do this to scans of old film, too.

If that’s not an object lesson in the value of keeping 100% of your data, I don’t know what is. Never delete anything; you never know what wonders the future will have for you.


  1. Posted October 18, 2011 at 8:02 pm | Permalink

    With the exception of test photos I had no intention of keeping, I also keep everything. I have some shots that I would really love to see this done to. I’ll assume that’s a tool for full-fledged Photoshop, though, which I have no intention of ever buying, but if they add it to Lightroom, I’m all over it…

  2. Mike Richters
    Posted October 18, 2011 at 11:07 pm | Permalink

    That example pretty clearly shows motion blur, not out-of-focus blur. Do they also have an example of focus-blur correction? That seems likely to be substantially harder, if not “impossible” (one could “cheat”, I suppose, by recognizing an object and filling in missing data from an image archive).

    Still, that’s quite an impressive demo, even though the blurred image seems to be optimally easy to correct.

  3. Posted October 18, 2011 at 11:27 pm | Permalink

    And this very same principle is why, at my workplace, we’re approaching a petabyte to a petabyte and a half of running NAS storage (not sure what the figures are for SAN). Why is our server team killing themselves to retire or virtualize as much of our data center as they can? So we can remove the hardware and the racks it’s sitting in, to make space for more racks of storage. And we won’t talk about the realities of backing up all those PB of storage, nor will we even hint at the thought of restoring more than a couple of folders at a time. Now, I don’t expect the home user to ever really reach the scale where they have to take a cold, hard look at keeping everything under the sun, but the two situations do overlap in one respect: the ability to find your data when you want it.

    I’ve got a few folders on the home desktop system where pictures DSC_0165 – DSC_0279 are all in folder IMG0001. Really not looking forward to the day(s) when I have to dig through all of them to actually figure out what I’ve got and apply some sort of metadata tags to them.

  4. mhoye
    Posted October 19, 2011 at 7:50 am | Permalink

    Yeah, it does seem to be for motion-blur only. NEVERTHELESS.

  5. Leonhard Euler
    Posted October 19, 2011 at 9:01 pm | Permalink

    You cannot create matter out of nothing.
    You cannot create energy out of nothing.
    You cannot create (meaningful) data out of nothing (or chaos, a.k.a. non-data).

    Motion-blur correction works because you’re still capturing all the meaningful data in your shot; it’s just smeared, and thus a little messy. It can be rearranged. An out-of-focus capture is just that: an incomplete image. As one of your Mikes said, unless they’re willing to fill in the gaps with external information, I have little hope that something this sweet can ever be developed for out-of-focus blur correction.
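    [To make the “smeared, not destroyed” point concrete: the classical technique for undoing a known smear is Wiener deconvolution. This is a toy 1D sketch of that general idea, not Adobe’s actual algorithm, whose details weren’t public at the time; the kernel here is assumed known, whereas the hard part of real deblurring is estimating it.]

    ```python
    import numpy as np

    def wiener_deconvolve(blurred, kernel, noise_power=1e-4):
        """Undo a known blur kernel with a Wiener filter.

        Motion blur is (roughly) convolution with a smear kernel; the
        information is still present, just rearranged. In the frequency
        domain the filter is conj(H) / (|H|^2 + noise_power), where the
        noise_power term keeps us from amplifying frequencies the blur
        nearly wiped out.
        """
        n = len(blurred)
        H = np.fft.fft(kernel, n)   # blur kernel, frequency domain
        B = np.fft.fft(blurred)
        W = np.conj(H) / (np.abs(H) ** 2 + noise_power)
        return np.real(np.fft.ifft(W * B))

    # Toy example: a sharp 1D "edge" signal, smeared by a 9-sample
    # uniform kernel (a crude stand-in for linear motion blur).
    sharp = np.zeros(64)
    sharp[20:40] = 1.0
    kernel = np.ones(9) / 9.0
    blurred = np.real(np.fft.ifft(np.fft.fft(sharp) * np.fft.fft(kernel, 64)))

    recovered = wiener_deconvolve(blurred, kernel)
    rms_blur = float(np.sqrt(np.mean((blurred - sharp) ** 2)))
    rms_rec = float(np.sqrt(np.mean((recovered - sharp) ** 2)))
    print(f"RMS error, blurred: {rms_blur:.3f}; recovered: {rms_rec:.3f}")
    ```

    [The same reasoning shows why defocus is harder: a defocus kernel drives whole bands of |H| to essentially zero, so those frequencies are gone from the capture and no filter can bring them back without outside information.]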

  6. Ted Mielczarek
    Posted October 22, 2011 at 9:05 am | Permalink

    Adobe has blogged about the tech now:

    It does seem to be primarily for motion blur.