

video inpainting

04:50 PM ET (US)
Two of the basic assumptions the authors mention are that 1) the background is stationary, and 2) camera motion can be handled only if it is parallel to the image plane.

If the background is far from the foreground object (as in their outdoor examples), you can approximate it as stationary, but if the background is close to the foreground object (e.g., indoors), it will not appear stationary under camera motion. In that case, is their assumption violated and their algorithm unusable?
Carolina Galleguillos
04:36 PM ET (US)
It isn't clear to me how they separate the background from the foreground when the motion confidence mask doesn't work. Maybe they only consider the cases where it is known to work?
02:56 PM ET (US)
This might be interesting as a form of video compression for cyclical, repetitive motions. I don't think you can "fill in the blank" in all cases, but if you could detect the right kind of motion, you could blank it out and recover it at the other end.
01:06 PM ET (US)
So, it would seem that using the median optical-flow magnitude to segment the background is reasonable only in quite limited circumstances. Can anyone explain to me how this algorithm would work in the presence of a truly 3D background (a forest of trees at different depths, whose velocities are clearly scaled by their depths)? Is that addressed in either of the papers?
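To make the concern above concrete, here is a minimal sketch of median-flow background segmentation. It assumes a dense flow-magnitude field is already computed (the function name and tolerance are mine, not from the papers). With a single-depth background the median captures the dominant motion; with a multi-depth background each layer has its own flow magnitude, so one global median would misclassify the nearer or farther layers as foreground.

```python
import numpy as np

def segment_foreground(flow_mag, tol=0.5):
    """Label pixels as foreground when their optical-flow magnitude
    deviates from the per-frame median by more than `tol`.

    flow_mag : (H, W) array of flow magnitudes for one frame.
    Returns a boolean mask, True = foreground.
    """
    background_speed = np.median(flow_mag)  # dominant (background) motion
    return np.abs(flow_mag - background_speed) > tol

# Example: background drifts ~1 px/frame, a walker moves ~4 px/frame.
flow = np.full((4, 4), 1.0)
flow[1:3, 1:3] = 4.0
mask = segment_foreground(flow)  # only the fast-moving block is flagged
```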
12:43 PM ET (US)
One of the assumptions is that the foreground has periodic motion; all of the videos on their website show people walking in a very regulated fashion. How much do you think this method would break down if the motion were more "natural"?

Also, other than people walking, what are some other applications of this method?
12:40 PM ET (US)
I have a friend in the math department who has implemented code for this inpainting stuff. He is very familiar with the papers presented, and he actually feels there are much better papers out there on this subject than these two (he's going to give me the references). He has implemented the inpainting himself using PDEs and Stokes' equation! For those interested, he is giving a talk on it next month!! Once I know all the details, I'll let you know!! =) Ciao!
12:14 PM ET (US)
In the examples they show a black box covering a person's figure. This approach would only work if that black-box occlusion is stationary, correct? If another moving person blocks someone for a few frames, it wouldn't work, because the occlusion itself is moving, right?

Tingfan Wu
11:36 AM ET (US)
1. I think the repetitive-motion constraint on the foreground needs to be strengthened to repetition with the same frequency/speed. Otherwise the optical-flow component used in the SSD matching will not match.

2. Is the "temporal search for the best-matching foreground frame" done frame by frame or shot by shot? If it is frame by frame, is it possible that the matched frames for consecutive occluded frames are themselves not consecutive (assuming the video is long enough to contain several repetitions)?

3. Human vision seems to tolerate more artifacts in video than in still images, especially under (a) fast motion, (b) low resolution and MPEG blocking artifacts, and (c) complex background texture along the boundary.

4. Regarding the smoothing problem in point 2: since the frame copy (for either foreground or background) is done pixel by pixel rather than block by block, each new pixel is determined using the already-filled neighboring pixels. Therefore, artifacts won't appear except in the last few pixels filled.
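A small sketch of the pixel-by-pixel fill order described in point 4 (my own simplification, not the papers' exact scheme): each unknown pixel is set from its already-known 4-neighbors, working inward from the hole boundary, so every new pixel is constrained by earlier ones and any seam can only appear near the last pixels filled.

```python
import numpy as np

def fill_inward(img, hole):
    """Fill `hole` (boolean mask) pixel by pixel from the boundary inward,
    setting each unknown pixel to the mean of its known 4-neighbors."""
    img = img.astype(float).copy()
    known = ~hole
    h, w = img.shape
    while not known.all():
        progressed = False
        for y, x in zip(*np.where(~known)):
            vals = [img[yy, xx]
                    for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= yy < h and 0 <= xx < w and known[yy, xx]]
            if vals:
                img[y, x] = np.mean(vals)  # constrained by earlier pixels
                known[y, x] = True
                progressed = True
        if not progressed:  # hole with no known neighbors anywhere
            break
    return img

# Example: a flat image with a 2x2 hole is recovered exactly.
img = np.ones((4, 4))
hole = np.zeros((4, 4), bool)
hole[1:3, 1:3] = True
img[hole] = 0.0
out = fill_inward(img, hole)
```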
08:08 PM ET (US)
For inpainting stationary backgrounds:

I'm surprised there's no averaging/smoothing taking place when filling in holes. Even for static scenes, there are likely to be small changes over time. If you're filling in a hole by grabbing neighboring temporal patches, when the hole is finally filled it seems like you'll have pixel values from points originally far apart (the beginning and end of the occlusion) now adjacent. Similar situation for spatial filling as well.
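One simple way to address the concern above (again my own illustration, not something from the papers): instead of copying one side of the occlusion verbatim, linearly blend the last clean frame before the hole with the first clean frame after it, so slow changes over time produce a ramp rather than a hard seam where the two ends meet.

```python
import numpy as np

def temporal_fill(frames, occluded):
    """Fill a contiguous run of occluded frames at a static pixel by
    linearly blending the nearest clean frames on either side.

    frames   : (T, H, W) float array
    occluded : contiguous list of occluded frame indices
    """
    frames = frames.astype(float).copy()
    t0, t1 = occluded[0] - 1, occluded[-1] + 1  # nearest clean frames
    for t in occluded:
        a = (t - t0) / (t1 - t0)                # 0 at t0, 1 at t1
        frames[t] = (1 - a) * frames[t0] + a * frames[t1]
    return frames

# Example: a pixel drifts from 0 to 4 across an occlusion of 3 frames.
frames = np.zeros((5, 1, 1))
frames[4] = 4.0
out = temporal_fill(frames, [1, 2, 3])  # interior frames ramp smoothly
```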
06:25 PM ET (US)
A quick question about matching. When matching candidate-frame patches against the target patch, how do we have optical-flow values for the damaged patch? Given that those pixels are missing, how can we compute optical flow there in order to match against the candidate frames? Also, the paper doesn't mention this, but I would assume the SSD between the candidate and target patches is computed over only the known foreground pixels?
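The masked SSD the question assumes would look something like this (function names are mine): the comparison is restricted to the valid (known) pixels of the damaged patch, and normalized by the number of pixels compared so patches with different amounts of missing data remain comparable.

```python
import numpy as np

def masked_ssd(candidate, target, valid):
    """SSD between two patches, restricted to pixels marked `valid`
    (e.g., the known pixels of a damaged patch), normalized by the
    number of pixels actually compared."""
    d = (candidate - target)[valid]
    return (d ** 2).sum() / max(valid.sum(), 1)

def best_match(candidates, target, valid):
    """Index of the candidate patch with the lowest masked SSD."""
    return int(np.argmin([masked_ssd(c, target, valid) for c in candidates]))

# Example: one corrupted pixel is excluded from the comparison,
# so the all-ones candidate still wins over the all-zeros one.
target = np.ones((2, 2))
target[0, 0] = 99.0                      # damaged pixel
valid = np.ones((2, 2), bool)
valid[0, 0] = False                      # exclude it from the SSD
candidates = [np.ones((2, 2)), np.zeros((2, 2))]
winner = best_match(candidates, target, valid)
```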
