COMP343 ASSIGNMENT 3

Video Processing

Name : Chan Tsz Kin

Email  : eg_ctk

@

Part 1 : Blue Screen

Extracting the foreground from a bluescreen video

Aims : An algorithm to extract the foreground from a mono-colour background (such as blue), working in both the RGB and HSV domains.

       foreground video, background video & output video

    In the program, sliders are provided so the user can set the RGB, H and V thresholds used to remove the background from the video.


Bluescreen object offset

    Aims : To let the user place the extracted foreground in the right position by selecting an X, Y offset.

    foreground video, background video & output video

 

Bluescreen alpha blending

   Aims : To improve the merging of the foreground & background video.

        foreground video, background video &

        output video 1 (low blending)     output video 2 (high blending)

   Without blending, the boundary between foreground and background looks hard, so we need to process the edge pixels specially.

   How do we find the edge?

   Firstly, we create a 2D array called Bitmap that distinguishes the extracted foreground from the background of the input video:

        if current_pixel = foreground_pixel then
            Bitmap[X, Y] = 1
        else if current_pixel = background_pixel then
            Bitmap[X, Y] = 0
        end if
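A minimal NumPy sketch of this bitmap step, assuming an RGB frame and illustrative blue-screen thresholds (the real program reads these from sliders):

```python
import numpy as np

def build_bitmap(frame, blue_min=180, red_green_max=100):
    # frame: H x W x 3 uint8 RGB array.
    # A pixel counts as background when its blue channel is high and
    # red/green are low; the threshold values are placeholders for
    # the slider settings in the assignment.
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    background = (b >= blue_min) & (r <= red_green_max) & (g <= red_green_max)
    # Bitmap[X, Y] = 0 for background, 1 for foreground
    return np.where(background, 0, 1).astype(np.uint8)
```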

Furthermore, we need to build a grid that slides over the input RGB array to help us find the extracted foreground & background pixels around each position.

This is a Grid

O O O
O C O
O O O

where C = the current pixel's RGB value & O = a surrounding pixel

After placing the grid on the input RGB array, we may get a pattern like this:

B B F
B C F
F F F

where B = a background RGB value, F = a foreground RGB value, and C = the current RGB value.

By using the Bitmap, we can tell which pixel is foreground and which is background, since Bitmap[X, Y] = 0 ---> background and Bitmap[X, Y] = 1 ---> foreground.

As a result, we can run this algorithm to find the edge:

Assume current position = [X, Y] and a 3x3 grid

foreground_count = 0

for i = X - 1 to X + 1

    for j = Y - 1 to Y + 1

        if Bitmap[i, j] = 1 then

            foreground_count = foreground_count + 1

        end if

    next

next

For any pixel in the input array, we then know how many of its surrounding pixels are foreground, and the total number of pixels equals the grid size, so the output should be

output[X, Y](RGB value) = ( foreground_count * input(RGB) +

                            (grid_size - foreground_count) * background(RGB) ) / grid_size


So, as a result, the output RGB is a weighted average of the background & foreground colours, and the edge becomes smooth.
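Putting the bitmap, the 3x3 grid count, and the averaging formula together, a Python (NumPy) sketch might look like this; the array shapes and function names are assumptions, not the assignment's actual code:

```python
import numpy as np

def smooth_edges(fg, bg, bitmap):
    # fg, bg: H x W x 3 float RGB arrays; bitmap: H x W array of 0/1.
    # For each pixel, count the foreground neighbours inside the 3x3
    # grid; count / grid_size becomes the blending weight.
    h, w = bitmap.shape
    out = np.empty_like(fg, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(y - 1, 0), min(y + 2, h)
            x0, x1 = max(x - 1, 0), min(x + 2, w)
            count = int(bitmap[y0:y1, x0:x1].sum())
            grid_size = (y1 - y0) * (x1 - x0)   # 9 away from the border
            out[y, x] = (count * fg[y, x]
                         + (grid_size - count) * bg[y, x]) / grid_size
    return out
```

Pixels deep inside the foreground get weight 9/9 (pure foreground), pixels deep in the background get 0/9, and pixels on the edge get something in between, which is what smooths the boundary.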


Part 2 : Motion Blur

Past Blur & Future Blur

Aims : To blur over past & future frames, so that any motion in the video overlaps with its past and future sequence.

          input video, output video 1 (past low blur), output video 2 (past high blur),

          output video 3 (future low blur), output video 4 (future high blur),

          output video 5 (future and past low blur), output video 6 (future and past high blur)


        Past blur :           startBlur = currentFrame - pastFrames + 1   and   framesInBlur = pastFrames

        Future blur :         startBlur = currentFrame   and   framesInBlur = futureFrames

        Past & future blur :  startBlur = currentFrame - pastFrames   and   framesInBlur = pastFrames + futureFrames
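The three cases above can be written as one small helper; the names startBlur/framesInBlur follow the formulas, everything else is an assumption:

```python
def blur_range(current_frame, past_frames, future_frames):
    # Returns (start_blur, frames_in_blur) for past-only,
    # future-only, and combined blur, following the formulas above.
    if future_frames == 0:              # past blur only
        return current_frame - past_frames + 1, past_frames
    if past_frames == 0:                # future blur only
        return current_frame, future_frames
    # past and future blur
    return current_frame - past_frames, past_frames + future_frames
```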


   Furthermore, there is a slider to adjust the weight of the current frame, which changes the appearance of the blur.
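A sketch of the weighted average this slider implies, assuming the frames in the blur window are float RGB arrays (a weight of 1.0 reduces to a plain average):

```python
import numpy as np

def motion_blur(frames, current_index, current_weight=1.0):
    # frames: list of H x W x 3 float arrays in the blur window.
    # The current frame gets `current_weight`, every other frame
    # gets weight 1; weights are normalised before averaging.
    weights = np.ones(len(frames), dtype=float)
    weights[current_index] = current_weight
    weights /= weights.sum()
    out = np.zeros_like(frames[0], dtype=float)
    for w, f in zip(weights, frames):
        out += w * f
    return out
```

Raising the current frame's weight keeps the moving object sharp while the past/future frames trail behind it more faintly.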

