
Panoramic human structure maintenance based on invariant features of video frames

Abstract

Panoramic photography has become a popular and widely available feature in today's mobile handheld devices. In traditional panoramic photography, the human figure often becomes distorted if the person changes position in the scene or during the step that combines the human structure with the natural background. In this paper, we present an effective panorama-creation method that maintains the main structure of the human in the panorama. The proposed method uses automatic feature matching, and the energy map of seam carving is used to avoid overlapping the human with the natural background. The contributions of this proposal are an automated panorama-creation method and a solution to the human ghosting problem in panoramas, achieved by maintaining the human structure through the energy map. Experimental results show that the proposed system can effectively compose panoramic photographs while maintaining the human structure in the panorama.

Introduction

Generating a panorama from a set of individual photos has been a useful and attractive research topic for several years. Although researchers initially focused on personal-computer-based solutions, much attention has shifted to mobile platforms, making panorama creation a convenient and attractive application for users. For example, many recent smartphones ship with applications capable of generating a full 360° panorama of a scene. The panorama generation solution presented in Yingen Xiong's method [1] consumes less processing time because the processing is done in memory. Wang Meng [2] presented an approach to create a single-viewpoint, full-view panorama from an image sequence; individually ordered frames extracted from a panning video sequence are used as the input, making both shooting and stitching simple. Going a step further, Wagner Daniel et al. [3] presented a method for the real-time creation and tracking of panoramic maps on mobile phones; notably, the generated maps are accurate and allow drift-free rotation tracking. However, most current panorama-generation technologies target natural landscape capture. Hence, when human subjects appear in the scene, the resulting panorama may contain blurred human figures, because the structure of a human subject cannot be detected precisely via feature extraction, which in turn lowers the panorama quality. With regular feature extraction, defining feature points on a human subject is very difficult unless obvious feature points are available on the clothes. Therefore, in this paper we present our efforts to generate a panorama that shows both the landscape and the human subjects without blurring.

On the other hand, panoramic photography captures more information about the natural scenery and buildings of interest. Hence, panoramic photography is best suited when the user wants more natural scenery in a single picture. Although panoramic photos can be created with commercially available image-processing tools in several steps, by segmenting the human subjects and combining the relevant background features from the source frames, this is time-consuming manual work and the results are often unsatisfactory. In the combining step, most images cannot be merged by simple manual methods even within the same scene, because of the cylindrical distortion that always exists in camera lenses and is difficult for the user to recognize in the source images. Therefore, the proposed method also includes an automated calibration mechanism, which reduces the steps and time required by manual methods.

In summary, the main goal of our work is to develop a system for taking panoramic photographs that eliminates the blurring caused by human subjects in the frames. The presented solution also reduces the number of steps compared with manual methods, allowing the user to obtain a panoramic photograph from a video captured with our panning shooting method.

The schematic steps of the proposed method are shown in Figure 1. The user first captures a short video focused on the main human subject, following a designed circular path as in the left part of Figure 1. The proposed system then extracts frames from the short video, selects six source images from that frame set, and produces the panorama using the proposed method. The rest of the paper is organized as follows. Sections III and IV discuss the human structure maintenance and panorama creation phases, respectively. Experimental results and analysis are presented in Section V. We conclude with our contributions and future work in Section VI.

Figure 1

The schematic steps of the proposed approach.

Related works

Feature extraction

Feature extraction is done by matching similar objects between different images. The information obtained from feature extraction lets us register and track objects. Although the human eye can easily spot corresponding features in different images, this is not an easy task for a computer. One well-known method for detecting features is the Scale-Invariant Feature Transform (SIFT) algorithm, used by Matthew Brown [4] and Saeid Fazli [5]. SIFT is a very robust method that detects and describes local features in an image and can find corresponding features across different images. It uses the Difference of Gaussians (DoG) function and an image pyramid to find extrema in different scale spaces. A linear least-squares solution and a threshold are then used to keep high-contrast feature points and discard low-contrast ones, and each feature point's gradient direction and strength are used to assign its orientation. Therefore, the feature information is reliable and can be used to calibrate images with a calibration matrix.
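As a rough illustration of this kind of feature matching (not the exact pipeline of [4, 5]), the following Python sketch detects SIFT keypoints in two frames and keeps only the matches that pass Lowe's ratio test; the frame file names are placeholders, and OpenCV 4.4 or later with SIFT in the main module is assumed.

```python
import cv2

# Load two neighboring frames (hypothetical file names).
img_a = cv2.imread("frame_a.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("frame_b.jpg", cv2.IMREAD_GRAYSCALE)

# SIFT builds a DoG pyramid internally and returns scale-invariant keypoints.
sift = cv2.SIFT_create()
kp_a, des_a = sift.detectAndCompute(img_a, None)
kp_b, des_b = sift.detectAndCompute(img_b, None)

# Match descriptors; Lowe's ratio test discards ambiguous, low-quality matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
raw = matcher.knnMatch(des_a, des_b, k=2)
good = [p[0] for p in raw if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
print(f"{len(good)} reliable SIFT matches")
```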

Although the SIFT algorithm describes local features very robustly, its processing cost is high, and some detected features are neither important nor apparent in the image. To reduce this cost, Yingen Xiong [1] and Zhengyou Zhang [6] adopted the Speeded Up Robust Features (SURF) algorithm, which is faster than SIFT but extracts fewer features.

Panorama creation

In recent years, panorama creation has attracted many researchers worldwide, leading to very robust solutions. Matthew Brown et al. [4] used the SIFT algorithm for feature matching between source images; in their work the source images did not need to be in order. Hemant B. Kekre et al. [7] presented a panorama generation approach that nullifies the effect of rotation of partial images on the vista creation process. Their method can resolve the missing regions in the vista caused by the rotation of the partial image parts used during vista creation; image inpainting is used to fill the missing regions, and the missing-view regeneration also overcomes missing views caused by cropping, irregular boundaries of partial image parts, and digitization errors. Helmut Dersch [8] released open-source panorama-creation software that creates panoramas through the parameters of its open-source functions. Wang Meng [2] presented an approach to create a single-viewpoint, full-view panorama photograph. Song Baosen et al. [9] later presented another panorama-generation approach to enlarge the horizontal and vertical angles of view of an image.

To serve the fast-growing mobile device market, panorama-creation solutions for mobile devices have been presented in recent years. Yingen Xiong et al. [1] proposed a fast panorama-creation method for mobile devices; to reduce processing time, they rely on the default direction of photography instead of a calibration step. Wen-Yan Lin et al. [10] presented a smoothly varying affine stitching field that is flexible enough to handle parallax while retaining the good extrapolation and occlusion-handling properties of parametric transforms. Their algorithm, which jointly estimates the stitching field and the correspondences, permits stitching of general-motion source images, provided the scenes do not contain abrupt protrusions.

Human structure maintenance

The panoramic photograph creation process of the proposed method has two main phases, human structure maintenance and panorama creation, as shown in the flowchart of Figure 2. The proposed system is semi-automatic, allowing users to adjust the output based on their preferences. We discuss human structure maintenance in Section III and the panorama creation method in Section IV.

Figure 2

The flowchart of the proposed approach.

Preserving the full human

Most current panorama-creation technologies are used for natural or indoor landscapes, for which the panorama can be constructed with very good quality, because such backgrounds are static during the short capture time. However, when the panorama is captured with a human subject in the foreground, the result may contain a poor-quality human figure, as in the example in Figure 3. The reason is that the calibration of the image structure deforms the feature structure of the human. We want to retain the main human subject and as much scenery as possible in the resulting panorama. The background structure can be maintained using feature matching and camera calibration, but the structure of the main human subject may look distorted after camera calibration. To address this problem, some panorama-creation technologies resort to manual image editing. Editing is a workable way to fix a deformed human figure, but it requires a great deal of time to manually adjust the subject and the landscape.

Figure 3

A poor result for the human subject in panorama creation.

To solve the problem of panoramas containing humans, we use inpainting technology, because general panorama creation may produce a panorama with a blurred human figure or a human figure with incorrect structural details. We therefore obtain the position of the human in each source image, find the source image in which the human region is largest, and recover that region into the panorama. The panoramic image generally keeps the largest dimensions (the height and width of the image) and the most information. The complete human figure becomes available in the panorama after merging the relevant parts from the source images. Finally, since we have the human's position information, we can create the panorama of the natural landscape without the human and then recover the largest human region into the empty region of the panoramic image.

Under normal circumstances, in the pictures or videos taken for panorama generation, the human subject does not move during the short capture time, and the background has similar regions across the different source images. However, we cannot obtain the background information that is occluded by the human. A simple and fast approach is to fill the human-shaped hole with patches from the surrounding region, but this does not guarantee the structure in the repaired regions; the structure is lost and the resulting panorama looks unnatural. To maintain the structure in the repaired regions, we use the image inpainting method [11], which can retain the structure in an area specified by the user. The repair patches consider the similarity of the background structure, and the inpainting method filters out incorrect structure.
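As a minimal sketch of this fill-and-recover step, the snippet below uses OpenCV's built-in Telea inpainting as a stand-in for the exemplar-based method of [11]; the file names and the way the human mask is obtained are assumptions, not part of the original method.

```python
import cv2

# panorama.jpg: panorama built without the human; human_mask.png: binary mask
# of the hole region; largest_human.png: human patch cropped from the source
# frame in which the person appears largest (all names are hypothetical).
pano = cv2.imread("panorama.jpg")
mask = cv2.imread("human_mask.png", cv2.IMREAD_GRAYSCALE)

# Fill the hole from the surrounding background structure.
repaired = cv2.inpaint(pano, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)

# Recover the largest human region into the repaired panorama.
human = cv2.imread("largest_human.png")
x, y, w, h = cv2.boundingRect(mask)          # position of the hole
patch = cv2.resize(human, (w, h))
region = repaired[y:y + h, x:x + w]
m = mask[y:y + h, x:x + w] > 0
region[m] = patch[m]                          # copy only pixels inside the hole
cv2.imwrite("panorama_with_human.jpg", repaired)
```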

An important problem with the inpainting used in the proposed method is that the background structure cannot be repaired very accurately inside the regions to be repaired, because the image inpainting method selects the repair order according to structural similarity. Therefore, the structure along the boundary is the distinguishing feature we rely on when capturing the complete human from the source images and recovering it into the panoramic image.

To preserve the structure in the repaired region, we add original background information around the boundary of the human figure. The repair-order selection of image inpainting can then find structure from both the background information and the human boundary. Two image-processing operations, dilation and expansion, can effectively control the size of an object boundary. We use dilation to obtain an object region slightly larger than the original object boundary, so that the expanded region contains background information. An example of the dilation operation is shown in Figure 4. Note that, to prevent the inpainting method from finding the wrong structure, the expansion of the human boundary is not allowed to grow too much in the proposed method; if the expanded human boundary in the source images is too large, the inpainting method may find different structure and introduce poor quality into the panorama.
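A minimal sketch of this boundary expansion is shown below, assuming a binary human mask is already available; the structuring-element size is an illustrative choice and, as noted above, should stay small so that the expanded band contains only nearby background.

```python
import cv2

mask = cv2.imread("human_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical mask
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# Dilate the human region by a small amount so the boundary band
# also covers a thin strip of the original background.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
dilated = cv2.dilate(mask, kernel, iterations=1)

# The band between the dilated and original masks holds the background
# information used to guide the inpainting along the human boundary.
boundary_band = cv2.subtract(dilated, mask)
cv2.imwrite("boundary_band.png", boundary_band)
```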

Figure 4

An example of the dilation operation: (A) the original image and (B) the result of dilation.

After this step, the repaired structure along the human boundary becomes consistent, preventing cluttered structure in the repaired regions; without it, the panoramic image becomes disordered and unnatural in the repaired regions, as in the example in Figure 5. The steps of the proposed method are presented in the following algorithm, and a sample result is shown in Figure 6.

Algorithm: Human panoramic creation- Preserving the Full Human

Data: Human source images

Result: Human inpainting image

begin

  I. Select the source images and find the largest human region in each source image.

  II. Produce the panorama, with a hole at the human region, using the panoramic creation method.

  III. Apply dilation and expansion to the largest human region to obtain the human-boundary correspondence information.

  IV. Differentiate the foreground and background regions of the human boundary.

  V. Repair the human boundary using the image inpainting method.

  VI. Recover the human region into the panoramic image.

Figure 5

The disordered structure of the human.

Figure 6

The intact structure of the human.

Preserving the incomplete human

In most cases, the user cannot control the distance between the camera and the human. When the human is close to the camera and more background is required in the panorama, the human structure becomes incomplete in some frames. We cannot use the method described above in this case, because the incomplete human may span the full height of the frame. To handle the incomplete human, the proposed method uses an energy map and seam finding.

Some panorama-creation methods use the average RGB (or other color space) value over the overlapping region when combining source images. Averaging is an even-handed method, but it may produce ghosting in the panoramic image. It also relies on a robust panorama-positioning method and accurate camera parameters, and requires that no moving or salient object appears in the source images. The averaging method is therefore not reliable enough for this step, so we use an image stitching method, as described below.

The idea behind the proposed image-stitching step is to find the optimal seam in the overlapping region between two images, removing the ghosting problem while matching the structure in the panoramic image. Image stitching can be divided into three parts: registration, calibration and blending [4, 12]; we use dynamic programming to find the optimal seam. Finding the best seam (the shortest path) with dynamic programming is a common graph problem: each pixel in the source image is a node, and the relationship between neighboring pixels is an edge between two nodes, so the source image becomes a multi-stage graph. We use the concept of seam carving to find the optimal seam in the overlapping region between two images; a schematic diagram is shown in Figure 7, where the red line is the optimal seam between image A and image B.

Figure 7

The schematic diagram of optimal seam.

Seam carving [13] uses an energy map to find the optimal seam while avoiding the important areas of the image. In this part, the important area is the defined human area: we want the combination step of panorama creation to avoid the human area, and the energy map used with seam carving both avoids this important area and reduces the ghosting problem. The energy map M_E is generated by accumulating, from top to bottom (or left to right), the smallest energy value among the allowed predecessors, so the energy coefficients increase downward (or rightward). Figure 8 shows an example of the constructed energy map; M_E is defined in Equation (1), where (x, y) is the current position and M_S is the image after Sobel processing. In Figure 8, the values in black come from the gradient map M_S, and the values in red are the accumulated energies obtained by adding the smallest energy value from the previous row. After the energy values are generated, the optimal seam can be found and removed by tracing from the bottom to the top.

M_E(x, y) = M_S(x, y) + \min\big( M_E(x-1, y-1),\; M_E(x-1, y),\; M_E(x-1, y+1) \big)
(1)
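The following sketch is a simplified re-implementation (not the authors' code) that builds the Sobel gradient map M_S, accumulates the energy map M_E according to Equation (1), and backtracks the minimum-energy vertical seam in an overlap region; the input file name is a placeholder.

```python
import cv2
import numpy as np

def optimal_seam(gray):
    # M_S: gradient magnitude obtained from Sobel filtering.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    m_s = np.abs(gx) + np.abs(gy)

    # M_E(x, y) = M_S(x, y) + min of the three neighbours in the row above.
    m_e = m_s.copy()
    h, w = m_e.shape
    for x in range(1, h):
        for y in range(w):
            lo, hi = max(y - 1, 0), min(y + 2, w)
            m_e[x, y] += m_e[x - 1, lo:hi].min()

    # Backtrack from the smallest value in the last row to recover the seam.
    seam = [int(np.argmin(m_e[-1]))]
    for x in range(h - 2, -1, -1):
        y = seam[-1]
        lo, hi = max(y - 1, 0), min(y + 2, w)
        seam.append(lo + int(np.argmin(m_e[x, lo:hi])))
    return seam[::-1]  # one column index per row, top to bottom

overlap = cv2.imread("overlap_region.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
print(optimal_seam(overlap)[:10])
```

In practice the accumulation loop would be vectorized; the explicit loops are kept here only to mirror Equation (1) directly.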
Figure 8

An example of the energy map generation phase and the resulting energy map.

After the optimal-seam step, two images can be combined into the panorama as shown in Figure 9. The red lines at the top of Figure 9 are 30 candidate seams found in the overlapping region from the energy map values; after matching the most similar patch in the overlapping region, the current seam is chosen as the optimal one. Note that the human area usually lies in the center of the panoramic image, so the overlapping regions and the optimal-seam search are applied to the outer parts of the source images. The seam chosen for the overlapping region after the combination step is shown at the bottom of Figure 9, where the green area is the overlapping region of the current source images.

Figure 9

Seams in the overlapping region of the images (top: original images with seams; bottom: combined photo with the overlapping region marked in red).

Panoramic creation

Matching position of images

To establish a complete panorama, one important factor is finding the correct structure across the source images. Many existing approaches rely on manually marking the structure points in the images, which is very time-consuming when there are many images. Another shortcoming of manual marking is the accuracy of the matched structure: the positions of the marks differ between images when there are visual differences or when the marks are placed with human error. However, identifying the correct structure automatically is also a difficult task for a computer. Therefore, many useful methods based on feature matching and structure identification have been proposed.

As described in Section 2, the SIFT algorithm [5, 12] can be used to detect and describe local features in source images captured even from different viewing angles; a sample result is shown in Figure 10A. As previously mentioned, SIFT is time-consuming and is not effective for images captured at large shooting angles. Another notable drawback is its inefficiency at detecting features when the structure is too smooth or when there are many identical features. To improve on these shortcomings, Morel proposed Affine-SIFT (ASIFT) [14], which can find more features even between images captured at large shooting angles; a sample result is shown in Figure 10B. The ASIFT algorithm uses six parameters to compute and record the zoom, rotation and translation of the images. Morel's key idea is to simulate all views obtainable by varying the camera-axis orientation rather than relying on any prior assumption, which makes it possible to find more features between images shot at large angles. Because of these additions, ASIFT is more robust than SIFT, although it requires more processing time; in our experiments, however, the difference was very small.

Figure 10

Results of the SIFT and ASIFT algorithms on a human subject: (A) the result of the SIFT algorithm and (B) the result of the ASIFT algorithm.

In some cases, when there are many similar features in one image, the ASIFT algorithm may still produce wrong feature matches. Hence, we use a simple slope-based concept to filter out wrong matches. For example, suppose frames A and B share a feature whose displacement is 10 pixels along the Y axis. In a subsequent frame C, the same feature is matched against frames A and B, but its displacement is 5 pixels along the Y axis and 5 pixels along the X axis. In this situation, a fixed-displacement rule cannot filter the wrong matches. We therefore compute the slope S of every feature match. Because the video is captured in the same scene with the same direction of camera movement, most correct feature matches between two frames have the same correspondence and hence roughly the same slope. Using the slope, we can filter out wrong feature matches between two frames and retain the true matches, which at the same time reduces the processing time for computing the calibration parameter matrix. The steps of the slope concept are given in the following algorithm, and a code sketch follows it.

Algorithm: Matching Position of Images

Data: Source frames

Result: Feature Coordinate information

begin

  I. Use the ASIFT algorithm to find matching information between two images. Let (X_A, Y_A) and (X_B, Y_B) denote the matched coordinates in source images A and B, and compute the slope S:

    S = \frac{Y_B - Y_A}{X_B - X_A} = \frac{\Delta Y}{\Delta X}
    (2)

  II. If S ≠ 0, use the SAD method within a small range bounded by a 3×3 pixel block to count the matches.

  III. Using the coordinates of the matches with the most common slope S, compute the calibration matrix; then repeat for all source frames.
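Below is a rough Python sketch of the slope filter referenced above; it keeps only the matches whose slope agrees with the dominant slope and omits the SAD verification of step II, and the variable names and tolerance are assumptions.

```python
import numpy as np

def filter_by_slope(pts_a, pts_b, tol=0.05):
    """pts_a, pts_b: (N, 2) arrays of matched (x, y) coordinates in frames A, B."""
    dx = pts_b[:, 0] - pts_a[:, 0]
    dy = pts_b[:, 1] - pts_a[:, 1]
    with np.errstate(divide="ignore", invalid="ignore"):
        slopes = np.where(dx != 0, dy / dx, np.inf)

    # Correct matches from the same panning motion share roughly the same
    # slope, so keep the matches that agree with the dominant (median) slope.
    dominant = np.median(slopes[np.isfinite(slopes)])
    keep = np.abs(slopes - dominant) < tol
    return pts_a[keep], pts_b[keep]

# Example with hypothetical match coordinates (the last pair is a bad match).
a = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]])
b = np.array([[20.0, 25.0], [40.0, 45.0], [70.0, 90.0]])
print(filter_by_slope(a, b))
```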

Image calibration

Image calibration is very important in image processing, image combination and stereo vision. A combined image has poor quality along the boundaries of the source images, even when the source images are taken at the same time, in the same scene, and within a short period. The reason is the cylindrical distortion of camera lenses, which cannot be avoided when capturing videos and images. Figure 11 shows an example of simply merging a set of source images without any calibration.

Figure 11

The result of simple combination without calibration.

To guarantee the structure and avoid distortion in the panorama, we first define how the panorama is photographed. In the proposed method, the scene is captured along a circular path to obtain a short source video, and a set of frames is then extracted from this video as the source frame set. The ASIFT algorithm is next used to obtain the coordinates of matching features between the source images. After this step, we compute the camera parameter matrix [6, 15] and the transformation matrix to compensate for the distortions between adjacent frames, even though the source videos are captured as smoothly as possible.

We use the homography matrix [15], defined in Equation (3), to ensure that all source images can be projected onto the same plane while maintaining the correct structure and information in the panoramic image. The homography matrix is therefore calculated from the matching information extracted by the ASIFT algorithm.

s\,m' = H\,m
(3)

where s is the scale factor, H is the homography matrix, and m = (x, y, 1) and m' = (x', y', 1) are a pair of corresponding points in the original image and in the panorama plane, i.e., corresponding feature points in different frames. The scale factor s does not affect the resulting homography matrix, so we set s to the smallest constant. The homography matrix H and the corresponding feature points m and m' can then be expanded as

s \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} =
\begin{bmatrix} H_{11} & H_{12} & H_{13} \\ H_{21} & H_{22} & H_{23} \\ H_{31} & H_{32} & H_{33} \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
(4)

where H_{ij} denotes each element of the homography matrix. Equation (4) can be further simplified as follows,

\begin{aligned}
x' (H_{31} x + H_{32} y + H_{33}) &= H_{11} x + H_{12} y + H_{13} \\
y' (H_{31} x + H_{32} y + H_{33}) &= H_{21} x + H_{22} y + H_{23}
\end{aligned}
(5)

Since Equation (4) has eight unknown parameters (the scale of H is arbitrary and H_{33} is usually normalized to 1 [15]), at least four pairs of corresponding points are required to solve for them; the expanded system is given below.

\begin{bmatrix}
x_1 & y_1 & 1 & 0 & 0 & 0 & -x_1' x_1 & -x_1' y_1 \\
0 & 0 & 0 & x_1 & y_1 & 1 & -y_1' x_1 & -y_1' y_1 \\
x_2 & y_2 & 1 & 0 & 0 & 0 & -x_2' x_2 & -x_2' y_2 \\
0 & 0 & 0 & x_2 & y_2 & 1 & -y_2' x_2 & -y_2' y_2 \\
x_3 & y_3 & 1 & 0 & 0 & 0 & -x_3' x_3 & -x_3' y_3 \\
0 & 0 & 0 & x_3 & y_3 & 1 & -y_3' x_3 & -y_3' y_3 \\
x_4 & y_4 & 1 & 0 & 0 & 0 & -x_4' x_4 & -x_4' y_4 \\
0 & 0 & 0 & x_4 & y_4 & 1 & -y_4' x_4 & -y_4' y_4
\end{bmatrix}
\begin{bmatrix} H_{11} \\ H_{12} \\ H_{13} \\ H_{21} \\ H_{22} \\ H_{23} \\ H_{31} \\ H_{32} \end{bmatrix}
=
\begin{bmatrix} x_1' \\ y_1' \\ x_2' \\ y_2' \\ x_3' \\ y_3' \\ x_4' \\ y_4' \end{bmatrix}
(6)

Moreover, according to the characteristics of the homography matrix, these points must lie on the same plane in three-dimensional space. The following algorithm therefore clusters all feature points and calculates the best homography matrix; a code sketch follows the algorithm.

Algorithm: Finding Optimal Homography Matrix

Data: Coordinates of matching information

Result: Homography matrix

begin

  I. Cluster the feature points according to color features via the mean-shift algorithm.

    i. Transform the color space into CIELuv.

    ii. Create a 2D array arrayLU indexed by the L and u dimensions of CIELuv.

    iii. Perform the clustering on arrayLU and eliminate small regions by merging them with neighboring regions.

  II. For each group, calculate the homography matrix using the feature points within the group, which requires solving at least four pairs of corresponding points. If a group does not contain enough points, it is neglected.

  III. Feed the homography matrix of each group into Equation (3) to calculate H*m, and compute the deviation dev between the actual m' and the calculated H*m, where num is the number of matching feature pairs:

    dev = \frac{1}{num} \sum_{i=1}^{num} \left\lVert m_i' - H\, m_i \right\rVert
    (7)

  IV. Define the optimal homography matrix as the one with the minimum deviation, i.e., the smallest dev.
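A condensed sketch of steps II and III is given below, assuming the matched points of each mean-shift group are already collected (step I is omitted); cv2.findHomography with the plain least-squares option stands in for solving Equation (6) directly.

```python
import cv2
import numpy as np

def best_homography(groups):
    """groups: list of (src_pts, dst_pts) pairs, each an (N, 2) float array."""
    best_h, best_dev = None, np.inf
    for src, dst in groups:
        if len(src) < 4:                      # need at least four pairs (Eq. 6)
            continue
        h, _ = cv2.findHomography(src, dst, 0)  # plain least-squares fit
        if h is None:
            continue
        # Deviation of Eq. (7): average distance between m' and H*m.
        ones = np.ones((len(src), 1))
        proj = (h @ np.hstack([src, ones]).T).T
        proj = proj[:, :2] / proj[:, 2:3]
        dev = np.linalg.norm(proj - dst, axis=1).mean()
        if dev < best_dev:
            best_h, best_dev = h, dev
    return best_h
```

In practice cv2.findHomography also offers a RANSAC option that can replace the explicit deviation test, but the loop above mirrors the per-group comparison described in step III.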

After the above steps, we obtain the calibration parameter matrix, which the proposed method uses to transform the source images into the panoramic image; an example result is shown in Figure 12. Note that we also use a Poisson-based color adjustment method [16, 17] when combining the source images. This color adjustment was not applied in Figure 11, where it is clearly visible that, even though the source images are taken at the same time and in the same scene, the lighting differs between images or video frames. Hence, a color adjustment method must be added when the images are combined in order to obtain a consistent panoramic image.
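OpenCV's seamlessClone implements Poisson image editing [17] and can serve as a stand-in for the color adjustment used here; the file names and the placement of the patch are assumptions.

```python
import cv2

pano = cv2.imread("panorama.jpg")            # destination: panorama so far
patch = cv2.imread("next_frame_warped.jpg")  # source: calibrated next frame
mask = cv2.imread("patch_mask.png", cv2.IMREAD_GRAYSCALE)

# Place the patch so its gradients are preserved while its colors are
# adjusted to match the panorama around the seam (Poisson blending).
x, y, w, h = cv2.boundingRect(mask)
center = (x + w // 2, y + h // 2)
blended = cv2.seamlessClone(patch, pano, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("panorama_blended.jpg", blended)
```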

Figure 12

The result of image combination.

Experiment results

In this section, the experimental results are discussed. The input videos were captured without any supporting device (such as a tripod), to simulate a regular user with a regular camera. The main human subject did not move during the short capture time, as previously mentioned. Each video takes about 12 to 16 seconds, during which we try to complete one cycle of the circular path. We captured videos only outdoors, because users usually want to retain the natural landscape and the human in one image, although the proposed method can also be used in indoor scenes.

A PC with a 1.8 GHz CPU and 2 GB of RAM was used for our experiments. From each source video, six frames were obtained automatically to compose the panorama. The time taken in each phase of the process for the eight videos was measured and is shown in Figures 13 and 14. The time taken in the panorama generation phase differs considerably among the eight source videos; the reason is probably the color and structural complexity of the input frame sets, such as S04 and S06 in Figure 14. The panorama creation ensures that the main human subject appears clearly in the panorama, as discussed in Sections 3 and 4.

Figure 13

Time taken to generate the panorama in each phase for the experimental results given in Figure 14.

Figure 14

Experimental results (left: one frame from source video, right: resultant panorama). (A) S1: original video. (B) S1: panorama result. (C) S2: original video. (D) S2: panorama result. (E) S3: original video. (F) S3: panorama result. (G) S4: original video. (H) S4: panorama result. (I) S5: original video. (J) S5: panorama result. (K) S6: original video. (L) S6: panorama result. (M) S7: original video. (N) S7: panorama result. (O) S8: original video. (P) S8: panorama result.

We place few restrictions on the shooting distance and brightness of the environment. Because we transform a video into a panorama, we assume the camera does not move too fast; when the camera moves too fast, the extracted frames are heavily blurred and the resulting panorama is of low quality. Several selected experimental results are shown in Figure 14(A) to 14(P). Videos S1 to S4 were captured outdoors, and the resulting panoramas clearly show the main person. S5 and S7 were also captured outdoors, and their panoramas clearly show the two main persons. S6 and S8 were likewise captured outdoors, and their panoramas clearly show the three main persons.

Conclusion

This paper proposes a novel method for generating a panoramic image from a video captured with a simple digital camera by a novice user. It further details how to compose a human panoramic image that provides more scenery information in a single image. The main ideas of the proposed method are the use of inpainting and of an energy map to maintain the human structure during panorama creation. The user does not need to tag or label the source images. We also combine the advantages of traditional panorama creation and image stitching, and the experimental results show that the proposed method is effective.

In panorama creation, more attention must be paid to reducing the processing time, and in particular to the feature matching step, because the feature information is essential for image position matching and for computing the homography matrix. All source images must be coordinated in the same feature matching step, so the time complexity increases when the number of source images is large. The authors are working on a proper solution to remove the empty black regions around the boundary of the panorama and on porting the proposed solution to mobile devices.

Authors’ information

Shih-Ming Chang is a PhD student at the Department of Computer Science and Information Engineering of Tamkang University, Taiwan. He acquired his Master's degree from the Department of Computer Science and Information Engineering of Tamkang University, Taiwan, in 2009. His research interests are in the areas of computer vision, interactive multimedia and multimedia processing.

Hon-Hang Chang is a PhD student at the Department of Computer Science and Information Engineering, National Central University (NCU), Taiwan (R.O.C.). He acquired his Master's degree from the Department of Photonics and Communication Engineering of Asia University, Taiwan, in 2011. His research fields are image processing, information hiding and watermarking.

Shwu-Huey Yen is currently an associate professor in the Computer Science and Information Engineering (CSIE) Department of Tamkang University, New Taipei City, Taiwan. She is the author of over 50 journal and conference papers. Her academic interests are signal processing, multimedia processing and medical imaging.

Timothy K. Shih is a Professor of the Department of Computer Science and Information Engineering, National Central University, Taiwan. He was a Department Chair of the CSIE Department at Tamkang University, Taiwan. Dr. Shih is a Fellow of the Institution of Engineering and Technology (IET). In addition, he is a senior member of ACM and a senior member of IEEE. Dr. Shih also joined the Educational Activities Board of the Computer Society. His current research interests include Multimedia Computing and Distance Learning. Dr. Shih has edited many books and published over 440 papers and book chapters, as well as participated in many international academic activities, including the organization of more than 60 international conferences. He was the founder and co-editor-in-chief of the International Journal of Distance Education Technologies, published by the Idea Group Publishing, USA. Dr. Shih is an associate editor of the ACM Transactions on Internet Technology and an associate editor of the IEEE Transactions on Learning Technologies. He was also an associate editor of the IEEE Transactions on Multimedia. Dr. Shih has received many research awards, including research awards from National Science Council of Taiwan, IIAS research award from Germany, HSSS award from Greece, Brandon Hall award from USA, and several best paper awards from international conferences. Dr. Shih has been invited to give more than 30 keynote speeches and plenary talks in international conferences, as well as tutorials in IEEE ICME 2001 and 2006, and ACM Multimedia 2002 and 2007.

References

  1. Xiong Y, Pulli K: Fast image stitching and editing for panorama painting on mobile phones. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), San Francisco, CA; 2010.

  2. Wang M: Panorama painting: with a bare digital camera. In: Fifth International Conference on Image and Graphics (ICIG '09), Xi'an, Shanxi; 2009.

  3. Wagner D, Mulloni A, Langlotz T, Schmalstieg D: Real-time panoramic mapping and tracking on mobile phones. In: IEEE Virtual Reality Conference (VR), Waltham, MA; 2010.

  4. Brown M, Lowe DG: Automatic panoramic image stitching using invariant features. Int J Comput Vis 2007, 74(1):59–73. 10.1007/s11263-006-0002-3

  5. Fazli S, Pour HM, Bouzari H: Particle filter based object tracking with SIFT and color feature. In: International Conference on Machine Vision, Dubai; 2009.

  6. Zhang Z: A flexible new technique for camera calibration. IEEE Trans Pattern Anal Mach Intell 2000, 22(11):1330–1334. 10.1109/34.888718

  7. Kekre HB, Thepade SD: Rotation invariant fusion of partial image parts in vista creation using missing view regeneration. WASET Int J Electr Comput Eng Syst (IJECSE) 2008, 47:660.

  8. Dersch H: Panorama Tools: open source software for immersive imaging. In: International VR Photography Conference, June 15–20, 2007. http://webuser.fhfurtwangen.de/~dersch/IVRPA.pdf

  9. Song B, Yongqing F, Wang J: Automatic panorama creation using multi-row images. Inf Technol J 2011, 10:1977–1982.

  10. Lin W-Y, Liu S, Matsushita Y, Ng T-T, Cheong L-F: Smoothly varying affine stitching. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI; 2011.

  11. Criminisi A, Perez P, Toyama K: Object removal by exemplar-based inpainting. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2003, 2:721–728.

  12. Brown M, Lowe DG: Recognising panoramas. In: Proceedings of the 9th International Conference on Computer Vision (ICCV 2003), Nice, France; 2003:1218–1225.

  13. Avidan S, Shamir A: Seam carving for content-aware image resizing. ACM Trans Graph 2007, 26(3):10. 10.1145/1276377.1276390

  14. Morel J-M, Yu G: ASIFT: a new framework for fully affine invariant image comparison. SIAM J Imaging Sci 2009, 2(2):438–469. 10.1137/080732730

  15. Criminisi A, Reid I, Zisserman A: A plane measuring device. Image Vis Comput 1999, 17(8):625–634. 10.1016/S0262-8856(98)00183-8

  16. Sun J, Jia J, Tang C-K, Shum H-Y: Poisson matting. ACM Trans Graph 2004, 23(3):315–321. 10.1145/1015706.1015721

  17. Pérez P, Gangnet M, Blake A: Poisson image editing. ACM Trans Graph 2003, 22(3):313–318. 10.1145/882262.882269


Author information


Corresponding author

Correspondence to Shih-Ming Chang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed to the content of all sections, read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Chang, SM., Chang, HH., Yen, SH. et al. Panoramic human structure maintenance based on invariant features of video frames. Hum. Cent. Comput. Inf. Sci. 3, 14 (2013). https://doi.org/10.1186/2192-1962-3-14
