COMP343 ASSIGNMENT 3
Video Processing
Name : Chan Tsz Kin
Email : eg_ctk
Part 1 : Blue Screen
Extracting foreground from bluescreen video
Aims : An algorithm to extract the foreground from a mono-color background (such as blue), by comparing colors in the RGB & HSV domains.
Demo: foreground video, background video & output video.
- Firstly, we pick a background color from the input video
- Then, we compare the color of each pixel in every frame with this background color
- The steps are:
  - Using the MPEG2 decoder, get a frame from the background video and put it into the output frame (in memory)
  - Get a frame from the bluescreen video
  - For each pixel:
    - Get the RGB values
    - Compute the color difference from the background color's RGB values
    - If the pixel is background, ignore it (the background frame shows through)
    - Otherwise, copy the pixel to the output frame
  - Save the output frame
  - Go back to the first step until all frames are processed
- After all the output frames are produced, they can be put together to form the result video
In the program, there are sliders that let the user set the RGB, H and V thresholds used to decide which pixels belong to the background; a sketch of the per-frame comparison follows.
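As a rough illustration, here is a minimal NumPy sketch of that per-frame comparison, assuming a Euclidean distance in RGB; the function name and the default threshold value are invented for illustration, and the HSV variant would compare the H and V channels instead.

import numpy as np

def extract_frame(fg_frame, bg_frame, key_rgb, threshold=60):
    # fg_frame, bg_frame: HxWx3 uint8 RGB arrays (decoded frames).
    # key_rgb: the picked background color, e.g. (0, 0, 255).
    # threshold: RGB distance below which a pixel counts as background
    # (set by a slider in the program).
    diff = fg_frame.astype(np.int32) - np.array(key_rgb, dtype=np.int32)
    dist = np.sqrt((diff ** 2).sum(axis=2))   # per-pixel color distance
    is_fg = dist > threshold                  # True where the pixel is foreground
    out = bg_frame.copy()
    out[is_fg] = fg_frame[is_fg]              # copy foreground pixels over
    return out, is_fg                         # the mask is reused below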
Bluescreen object offset
Aims : To provide a way for the user to put the extracted video in the right position by selecting an X-Y offset.
Demo: foreground video, background video & output video.
- There are two sliders to select the X and Y offsets
- This offset is then applied to every pixel in the extracted foreground
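A minimal sketch of how such an offset might be applied during compositing, reusing the is_fg mask from the sketch above; the function and parameter names are invented, and pixels shifted outside the frame are simply dropped.

import numpy as np

def composite_with_offset(fg_frame, bg_frame, is_fg, dx, dy):
    # is_fg: HxW boolean mask from the bluescreen comparison.
    # dx, dy: the offsets chosen with the two sliders.
    h, w = is_fg.shape
    out = bg_frame.copy()
    ys, xs = np.nonzero(is_fg)           # coordinates of foreground pixels
    ys2, xs2 = ys + dy, xs + dx          # shifted destination coordinates
    keep = (ys2 >= 0) & (ys2 < h) & (xs2 >= 0) & (xs2 < w)   # clip to frame
    out[ys2[keep], xs2[keep]] = fg_frame[ys[keep], xs[keep]]
    return out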
Bluescreen alpha blending
Aims : To provide a way to improve the merging of the foreground & background video.
Demo: foreground video, background video, output video 1 (low blending) & output video 2 (high blending).
- If you look at this example of a bluescreen effect (near the end), the result is poor
- The foreground object (the woman) does not look like she is properly 'mixed' with the new background (the sea)
- The reason is that in the original bluescreen video strong white light shone on her clothes, making the edges much brighter
- But these bright pixels are definitely not part of the blue screen, so they get copied to the output
- As a result, the edges look very white and do not blend well
So, we need special processing for the edge pixels.
How do we find the edge?
Firstly, we create a 2D array called Bitmap that records which pixels of the input video were extracted as foreground and which as background:
if current_pixel = foreground_pixel then
    Bitmap[X, Y] = 1
else if current_pixel = background_pixel then
    Bitmap[X, Y] = 0
end if
Furthermore, we place a grid over the input RGB array to help us examine the neighbourhood of each extracted foreground or background pixel.
In this grid, C = the current RGB value and O = a surrounding pixel.
After placing the grid on the input RGB array, we may get a pattern where B = a background RGB value, F = a foreground RGB value and C = the current RGB value.
By using the Bitmap we know which neighbour is foreground and which is background, since Bitmap[X, Y] = 0 ---> background and Bitmap[X, Y] = 1 ---> foreground.
As a result, we can run this algorithm to find the edge (assume the current position is [X, Y] and a 3x3 grid is used):

foreground_count = 0
for i = X - 1 to X + 1
    for j = Y - 1 to Y + 1
        if Bitmap[i, j] = 1 then
            foreground_count = foreground_count + 1
        end if
    next
next
For any pixel in the input array we now know how many foreground pixels surround it, and the total number of pixels considered equals the grid size, so the output array should be

output[X, Y](RGB) = ( foreground_count * input(RGB) + (grid_size - foreground_count) * background(RGB) ) / grid_size
As a result, the output RGB is a weighted average of the background and foreground colors, so the edges become smooth.
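Here is a minimal NumPy sketch of this edge-smoothing pass, under the same assumptions as the earlier sketches; the 3x3 grid follows the description above, while the function name and the use of the new background frame as the background color are assumptions.

import numpy as np

def smooth_edges(fg_frame, bg_frame, is_fg):
    # is_fg: HxW boolean bitmap (1 = foreground, 0 = background).
    # Uses a 3x3 grid, so grid_size = 9.
    h, w = is_fg.shape
    padded = np.pad(is_fg.astype(np.float32), 1)
    count = np.zeros((h, w), dtype=np.float32)
    for dy in range(3):                   # count foreground neighbours
        for dx in range(3):               # in the 3x3 window
            count += padded[dy:dy + h, dx:dx + w]
    alpha = (count / 9.0)[..., None]      # foreground_count / grid_size
    out = alpha * fg_frame + (1.0 - alpha) * bg_frame
    return out.astype(np.uint8)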
Part 2 : Motion Blur
Past Blur & Future Blur
Aims : To blur motion using past & future frames, so that any motion in the video overlaps with its past and future sequence.
- The idea is simply to take the average RGB value of the past & future frames, as the following figure shows.
Demo: input video, output video 1 (past low blur), output video 2 (past high blur), output video 3 (future low blur), output video 4 (future high blur), output video 5 (future and past low blur) & output video 6 (future and past high blur).
- So, we need to define how many past frames and future frames to average
- In fact, the more frames we use, the more dominant the effect
- We use three buffers, one for each of the R, G and B values
- Each buffer value is then divided by the number of frames in the buffer
- Here is some example pseudo-code to handle PAST BLUR

StartBlur = current_frame - FrameInBlur + 1
(where FrameInBlur is the number of frames to average: the current frame plus its past frames)

For Frame = StartBlur To StartBlur + FrameInBlur - 1
    For Each Pixel (X, Y)
        ' Accumulate the frames in the buffer (the buffers start at 0)
        BufferR(X, Y) = BufferR(X, Y) + CurrentFrameR(X, Y)
        BufferG(X, Y) = BufferG(X, Y) + CurrentFrameG(X, Y)
        BufferB(X, Y) = BufferB(X, Y) + CurrentFrameB(X, Y)
    Next
Next

- Note that StartBlur must be greater than or equal to 0
- The buffer is then divided by the number of frames in the buffer
- In this case, FrameInBlur = 4

For Each Pixel (X, Y)
    ' Divide each pixel value in the buffer
    ResultR(X, Y) = CInt(BufferR(X, Y) / FrameInBlur)
    ResultG(X, Y) = CInt(BufferG(X, Y) / FrameInBlur)
    ResultB(X, Y) = CInt(BufferB(X, Y) / FrameInBlur)
Next

- The value of Result(X, Y) is the result pixel for the current frame
- Here is some example pseudo-code to handle FUTURE BLUR

StartBlur = current_frame
(where FrameInBlur is the number of frames to average: the current frame plus its future frames)

For Frame = StartBlur To StartBlur + FrameInBlur - 1
    For Each Pixel (X, Y)
        ' Accumulate the frames in the buffer
        BufferR(X, Y) = BufferR(X, Y) + CurrentFrameR(X, Y)
        BufferG(X, Y) = BufferG(X, Y) + CurrentFrameG(X, Y)
        BufferB(X, Y) = BufferB(X, Y) + CurrentFrameB(X, Y)
    Next
Next

- Note that StartBlur + FrameInBlur - 1 must not exceed the last frame of the video
- The buffer is then divided by the number of frames in the buffer
- In this case, FrameInBlur = 4

For Each Pixel (X, Y)
    ' Divide each pixel value in the buffer
    ResultR(X, Y) = CInt(BufferR(X, Y) / FrameInBlur)
    ResultG(X, Y) = CInt(BufferG(X, Y) / FrameInBlur)
    ResultB(X, Y) = CInt(BufferB(X, Y) / FrameInBlur)
Next
- Here is some example pseudo-code to handle PAST & FUTURE BLUR

StartBlur = current_frame - number_of_past_frames
FrameInBlur = number_of_past_frames + number_of_future_frames + 1

For Frame = StartBlur To StartBlur + FrameInBlur - 1
    For Each Pixel (X, Y)
        ' Accumulate the frames in the buffer
        BufferR(X, Y) = BufferR(X, Y) + CurrentFrameR(X, Y)
        BufferG(X, Y) = BufferG(X, Y) + CurrentFrameG(X, Y)
        BufferB(X, Y) = BufferB(X, Y) + CurrentFrameB(X, Y)
    Next
Next

- Note that StartBlur must be greater than or equal to 0, and the last frame used must not exceed the last frame of the video
- The buffer is then divided by the number of frames in the buffer
- In this case, FrameInBlur = 9: number_of_past_frames = number_of_future_frames = 4, plus the current frame

For Each Pixel (X, Y)
    ' Divide each pixel value in the buffer
    ResultR(X, Y) = CInt(BufferR(X, Y) / FrameInBlur)
    ResultG(X, Y) = CInt(BufferG(X, Y) / FrameInBlur)
    ResultB(X, Y) = CInt(BufferB(X, Y) / FrameInBlur)
Next
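Since the three variants differ only in which frames fall inside the averaging window, a single sketch can cover them all. This is a minimal NumPy illustration of the same accumulate-and-divide idea, not the assignment's actual code; frames is assumed to be a list of HxWx3 uint8 arrays, and past_n / future_n play the roles of the past and future frame counts.

import numpy as np

def motion_blur(frames, index, past_n=0, future_n=0):
    # past_n=3, future_n=0 -> past blur (4 frames including the current one)
    # past_n=0, future_n=3 -> future blur
    # past_n=4, future_n=4 -> past & future blur (9 frames)
    start = max(index - past_n, 0)                 # StartBlur must be >= 0
    end = min(index + future_n, len(frames) - 1)   # stay inside the video
    buf = np.zeros(frames[index].shape, dtype=np.float64)
    for f in frames[start:end + 1]:                # accumulate into the buffer
        buf += f
    return (buf / (end - start + 1)).astype(np.uint8)   # divide by FrameInBlur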
Furthermore, there is a slider to adjust the importance of the current frame, which changes the look of the blurring; one possible implementation is sketched below.
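The report does not give the exact formula for this slider, so the following weighted variant is only an assumption: the current frame is added to the buffer w times, and the sum is divided by the adjusted total weight.

import numpy as np

def weighted_motion_blur(frames, index, past_n, future_n, w=1.0):
    # Like motion_blur, but the current frame counts w times in the average.
    start = max(index - past_n, 0)
    end = min(index + future_n, len(frames) - 1)
    buf = np.zeros(frames[index].shape, dtype=np.float64)
    total = 0.0
    for i in range(start, end + 1):
        weight = w if i == index else 1.0          # the slider sets w
        buf += weight * frames[i]
        total += weight
    return (buf / total).astype(np.uint8)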