20 results found for this scholar

Upload date: July 7, 2010

[Journal Article] A Motion Information Description Algorithm for Fast Bitstream Generation in MPEG-4 AVC/H.264

李厚强, 王毅, 李厚强, 孙晓艳, 吴枫, 刘政凯

计算机学报 (Chinese Journal of Computers), 2007, 30(6): 1005-1013

Abstract

Because MPEG-4 AVC/H.264 adopts two techniques, variable block sizes and rate-distortion optimization, motion estimation, already the most complex module in the video-encoding pipeline, becomes even more complex. Another notable change is that the motion information obtained is tightly coupled to the target bitrate, which poses great difficulty for traditional fast transcoding techniques. This paper first proposes a hierarchical model that describes the motion information of each macroblock in a coarse-to-fine manner. Based on this hierarchical model, an algorithm is proposed that obtains motion information through a pre-encoding pass, producing a complete description of the motion characteristics of a video sequence. With this motion-information description, the encoder can skip the motion-estimation step during encoding or transcoding, so encoding complexity is greatly reduced. To further speed up encoding, a fast algorithm is also proposed for extracting the optimal motion information from the motion-information description. Experimental results verify the effectiveness of the proposed algorithms: encoding complexity is greatly reduced while coding performance remains very close to that of optimal MPEG-4 AVC/H.264.
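The coarse-to-fine lookup idea described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the partition-mode list, the `precode_results` layout, and the cost values are all assumed names, and the real algorithm works with actual H.264 rate-distortion costs gathered during pre-encoding.

```python
# Hypothetical sketch of a hierarchical motion-information description:
# a pre-encoding pass records, for every macroblock and partition mode,
# the best motion vector and its rate-distortion cost; the encoder or
# transcoder then looks the result up instead of re-running motion
# estimation. All names here are illustrative, not from the paper.

# H.264 macroblock partition modes, ordered coarse to fine
MODES = ["16x16", "16x8", "8x16", "8x8", "8x4", "4x8", "4x4"]

def build_motion_description(precode_results):
    """precode_results maps (mb_index, mode) -> (motion_vector, rd_cost),
    as gathered by a one-time pre-encoding pass over the sequence."""
    desc = {}
    for (mb, mode), entry in precode_results.items():
        desc.setdefault(mb, {})[mode] = entry
    return desc

def pick_motion(desc, mb):
    """Fast extraction: return the stored (mode, motion vector) with the
    minimum rate-distortion cost, skipping any motion search."""
    best_mode = min(desc[mb], key=lambda m: desc[mb][m][1])
    return best_mode, desc[mb][best_mode][0]
```

At encode time the lookup replaces the motion search entirely, which is where the complexity saving comes from.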

Motion estimation, motion vector, macroblock, rate-distortion, mode

Upload date: July 7, 2010

[Journal Article] VOLUME GRAPH MODEL FOR 3D FACIAL SURFACE EXTRACTION

李厚强, Lei Wu, Houqiang Li, Nenghai Yu, Mingjing Li


Abstract

3D facial extraction from volume data is very helpful in virtual plastic surgery. Although the traditional Marching Cubes algorithm (MC) can be used for this purpose, it cannot separate the facial surface from other tissue surfaces. This weakness greatly limits the accuracy of the facial model and its application in plastic surgery. In this paper a volume graph model is proposed, in which facial surface extraction is formulated as a min-cut problem that can be solved by existing graph-cut algorithms. Based on this model, irrelevant tissue surfaces are effectively excluded and a more accurate 3D virtual face can be built for plastic surgery.
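The min-cut formulation can be illustrated with a toy max-flow/min-cut computation. This is a generic Edmonds-Karp sketch over an assumed toy graph, not the paper's volume graph model: in the actual model the nodes are voxels, the terminals encode face/non-face evidence, and the edge weights are derived from the volume data.

```python
# Generic min-cut via Edmonds-Karp max-flow on a toy graph. In a
# surface-extraction setting (illustrative, not the paper's exact
# construction), `s` and `t` would be terminals for "face" and "other
# tissue", the remaining nodes voxels, and edge capacities similarity
# weights; the source side of the cut is the extracted surface region.
from collections import deque

def min_cut(cap, s, t):
    """cap: node -> {neighbor: capacity}. Returns (max_flow, set of
    nodes on the source side of the minimum cut)."""
    res = {u: dict(nbrs) for u, nbrs in cap.items()}  # residual capacities
    for u in list(res):                               # ensure reverse edges
        for v in list(res[u]):
            res.setdefault(v, {}).setdefault(u, 0)
    total = 0
    while True:
        parent = {s: None}                            # BFS for a shortest
        q = deque([s])                                # augmenting path
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t                               # walk back to source
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)         # bottleneck capacity
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        total += aug
    side, q = {s}, deque([s])                         # residual reachability
    while q:                                          # gives the source side
        u = q.popleft()
        for v, c in res[u].items():
            if c > 0 and v not in side:
                side.add(v)
                q.append(v)
    return total, side
```

On a real volume the returned source side would contain exactly the voxels labeled as facial surface, with other tissue surfaces falling on the sink side of the cut.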

Upload date: July 7, 2010

[Journal Article] VIDEO INPAINTING FOR LARGELY OCCLUDED MOVING HUMAN

李厚强, Haomian Wang, Houqiang Li, Baoxin Li


Abstract

In this paper, a video inpainting approach is proposed, which targets repairing a video containing moving humans that are largely or completely occluded or missing in some of the frames. The proposed approach first categorizes typically periodic human motion in a video into a set of temporal states (called motion states), and then estimates the motion states of the frames with missing humans so as to repair the missing parts using other, undamaged frames with the same motion states. This deviates from common approaches that directly repair the pixels of the damaged parts. Experiments demonstrate that the proposed method can repair the damaged video sequences well, without introducing the strong artifacts that exist in many existing techniques.
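The motion-state matching idea can be sketched abstractly. This is a hypothetical illustration: `N_STATES`, the per-frame phase representation, and the frame library are all assumed for the sketch, and the paper's actual method estimates states from observed human motion rather than taking phases as given.

```python
# Hypothetical sketch of motion-state-based repair: quantize a periodic
# motion cycle into discrete states, assign each frame a state, and
# fill a damaged frame by borrowing the human region from an undamaged
# frame in the same state. All inputs here are illustrative.

N_STATES = 8  # states per motion period (assumed value)

def assign_states(frame_phases):
    """Map each frame's phase in [0, 1) within the motion period to a
    discrete motion state index."""
    return [int(p * N_STATES) % N_STATES for p in frame_phases]

def repair(states, damaged, library):
    """For each damaged frame index, pick a donor frame with the same
    motion state. `library` maps state -> undamaged frame index."""
    return {f: library[states[f]] for f in damaged if states[f] in library}
```

The repaired region then comes from the donor frame rather than from per-pixel synthesis, which is what distinguishes the approach from direct pixel inpainting.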

Upload date: July 7, 2010

[Journal Article] Video Error Concealment Using Spatio-Temporal Boundary Matching and Partial Differential Equation

李厚强, Yan Chen, Student Member, IEEE, Yang Hu, Oscar C. Au, Senior Member, Houqiang Li, and Chang Wen Chen, Fellow

IEEE Transactions on Multimedia, vol. 10, no. 1, January 2008

Abstract

Error concealment techniques are very important for video communication since compressed video sequences may be corrupted or lost when transmitted over error-prone networks. In this paper, we propose a novel two-stage error concealment scheme for erroneously received video sequences. In the first stage, we propose a novel spatio-temporal boundary matching algorithm (STBMA) to reconstruct the lost motion vectors (MV). A well-defined cost function is introduced which exploits both spatial and temporal smoothness properties of video signals. By minimizing the cost function, the MV of each lost macroblock (MB) is recovered and the corresponding reference MB in the reference frame is obtained using this MV. In the second stage, instead of directly copying the reference MB as the final recovered pixel values, we use a novel partial differential equation (PDE) based algorithm to refine the reconstruction. We minimize, in a weighted manner, the difference between the gradient field of the reconstructed MB in the current frame and that of the reference MB in the reference frame under a given boundary condition. A weighting factor is used to control the regulation level according to the local blockiness degree. With this algorithm, the annoying blocking artifacts are effectively reduced while the structures of the reference MB are well preserved. Compared with the error concealment feature implemented in the H.264 reference software, our algorithm is able to achieve significantly higher PSNR as well as better visual quality.
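The first-stage selection rule can be sketched as a cost minimization over candidate motion vectors. This is a simplified illustration with assumed inputs (flattened boundary-pixel lists and a single weight `alpha`), not the paper's exact STBMA cost function.

```python
# Illustrative sketch of boundary-matching MV recovery: for each
# candidate motion vector, compare the boundary of the reference MB it
# points to against the decoded boundary pixels of the spatial
# neighbors (spatial smoothness term) and against the co-located
# boundary in the previous frame (temporal smoothness term); the MV
# minimizing the weighted sum is selected. Inputs are simplified to
# flat lists of boundary pixel values.

def boundary_cost(cand, spatial, temporal, alpha=0.5):
    """Weighted sum of absolute boundary differences; `alpha` balances
    the spatial and temporal terms (assumed form, not the paper's)."""
    s = sum(abs(a - b) for a, b in zip(cand, spatial))
    t = sum(abs(a - b) for a, b in zip(cand, temporal))
    return alpha * s + (1 - alpha) * t

def recover_mv(candidates, spatial, temporal):
    """candidates maps mv -> boundary pixels of the MB it points to.
    Returns the candidate MV with minimum boundary-matching cost."""
    return min(candidates,
               key=lambda mv: boundary_cost(candidates[mv], spatial, temporal))
```

In the actual scheme the recovered MB is then refined by the PDE stage rather than copied directly, as the abstract describes.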

Error concealment, H.264, motion compensation, partial differential equation

Upload date: July 7, 2010

[Journal Article] VIDEO CODING WITH SPATIO-TEMPORAL TEXTURE SYNTHESIS AND EDGE-BASED INPAINTING

李厚强, Chunbo Zhu, Xiaoyan Sun, Feng Wu, and Houqiang Li


Abstract

This paper proposes a video coding scheme, in which textural and structural regions are selectively removed in the encoder, and restored in the decoder by spatio-temporal texture synthesis and edge-based inpainting. In the proposed scheme, two types of regions are classified based on two motion models: local motion and global motion. In local motion regions, conventional block-based motion estimation is employed for region removal and spatio-temporal texture synthesis is applied for recovery of the removed regions. In global motion regions, edge-based image inpainting is utilized to recover removed regions, and sprite generation is used as an auxiliary tool to keep temporal consistency. In the proposed scheme, both structures and textures are handled, and some kinds of assistant information which can guide restoration are extracted and coded. This approach is block-based and thus is flexible and generic enough to be implemented in standard-compliant video coding schemes. It has been implemented in H.264/AVC and achieves up to 35% bitrate saving at similar visual quality levels compared with H.264/AVC without our approach.

Video coding, texture synthesis, inpainting, region removal, exemplar selection

Collaborating Scholars

  • 李厚强 (University of Science and Technology of China, Anhui)