20 results found for this scholar

Upload date: 2010-07-07

【Journal Article】GRAPH CUT BASED ACTIVE CONTOUR FOR AUTOMATED CELLULAR IMAGE SEGMENTATION IN HIGH THROUGHPUT RNA INTERFERENCE (RNAi) SCREENING

Cheng Chen, Houqiang Li, Xiaobo Zhou, Stephen T.C. Wong

Abstract

Recently, image-based, high-throughput RNA interference (RNAi) experiments have been increasingly carried out to facilitate the understanding of gene functions in intricate biological processes. Effective automated segmentation techniques are essential for the analysis of RNAi images. However, the graph cuts based active contour (GCBAC) method requires user interaction during segmentation. Here, we present a novel approach to overcome this shortcoming. The process consists of the following steps: First, a region-growing algorithm uses the extracted nuclei to obtain the initial contours for segmentation of the cytoplasm. Then, a constraint factor obtained from binary segmentation of the enhanced image is incorporated to improve the performance of cytoplasm segmentation. Finally, a morphological thinning algorithm is applied to separate touching cells in clusters. Our approach can automatically segment clustered cells in polynomial time. The excellent results verify the effectiveness of the proposed approach.
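The first step above, growing initial cytoplasm contours outward from the detected nuclei, can be sketched as a simple seeded region-growing pass. This is an illustrative toy, not the authors' implementation: the 4-neighbourhood and the intensity-similarity criterion are assumptions.

```python
from collections import deque

def region_grow(image, seeds, threshold):
    """Grow one region per seed pixel (e.g. a detected nucleus centre)
    over a 2D intensity grid.  A neighbouring pixel joins a region when
    its intensity is within `threshold` of the seed's intensity
    (a hypothetical similarity criterion).  Returns a label grid where
    0 = unassigned and k = region of the k-th seed."""
    h, w = len(image), len(image[0])
    label = [[0] * w for _ in range(h)]
    for k, (sy, sx) in enumerate(seeds, start=1):
        base = image[sy][sx]
        label[sy][sx] = k
        queue = deque([(sy, sx)])
        while queue:                       # breadth-first flood from the seed
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w and label[ny][nx] == 0
                        and abs(image[ny][nx] - base) <= threshold):
                    label[ny][nx] = k
                    queue.append((ny, nx))
    return label
```

The label boundaries produced this way would serve as the initial contours that a GCBAC-style refinement then improves.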

Upload date: 2010-07-07

【Journal Article】VIDEO CODING WITH SPATIO-TEMPORAL TEXTURE SYNTHESIS AND EDGE-BASED INPAINTING

Chunbo Zhu, Xiaoyan Sun, Feng Wu, and Houqiang Li

Abstract

This paper proposes a video coding scheme in which textural and structural regions are selectively removed at the encoder and restored at the decoder by spatio-temporal texture synthesis and edge-based inpainting. In the proposed scheme, regions are classified into two types based on two motion models: local motion and global motion. In local motion regions, conventional block-based motion estimation is employed for region removal, and spatio-temporal texture synthesis is applied to recover the removed regions. In global motion regions, edge-based image inpainting is utilized to recover removed regions, and sprite generation is used as an auxiliary tool to maintain temporal consistency. In the proposed scheme, both structures and textures are handled, and assistant information that can guide restoration is extracted and coded. The approach is block-based and thus flexible and generic enough to be incorporated into standard-compliant video coding schemes. It has been implemented in H.264/AVC and achieves up to 35% bitrate saving at similar visual quality levels compared with H.264/AVC without our approach.
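The conventional block-based motion estimation mentioned above can be illustrated with a minimal full search under a sum-of-absolute-differences (SAD) criterion. Block size, search range, and function names here are illustrative assumptions, not the paper's codec:

```python
def sad(ref, cur, ry, rx, cy, cx, bsize):
    """Sum of absolute differences between the reference block at
    (ry, rx) and the current block at (cy, cx)."""
    return sum(abs(ref[ry + i][rx + j] - cur[cy + i][cx + j])
               for i in range(bsize) for j in range(bsize))

def full_search(ref, cur, by, bx, bsize, srange):
    """Exhaustively try displacements within +/-srange and return the
    motion vector (dy, dx) minimizing SAD for the current-frame block
    at (by, bx)."""
    h, w = len(ref), len(ref[0])
    best, best_mv = float("inf"), (0, 0)
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            ry, rx = by + dy, bx + dx
            if 0 <= ry and ry + bsize <= h and 0 <= rx and rx + bsize <= w:
                cost = sad(ref, cur, ry, rx, by, bx, bsize)
                if cost < best:
                    best, best_mv = cost, (dy, dx)
    return best_mv
```

In a scheme like the one described, the vectors found this way would drive both the removal decision and the spatio-temporal synthesis at the decoder.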

Keywords: Video coding, texture synthesis, inpainting, region removal, exemplar selection

Upload date: 2010-07-07

【Journal Article】VOLUME GRAPH MODEL FOR 3D FACIAL SURFACE EXTRACTION

Lei Wu, Houqiang Li, Nenghai Yu, Mingjing Li

Abstract

3D facial extraction from volume data is very helpful in virtual plastic surgery. Although the traditional Marching Cubes algorithm (MC) can be used for this purpose, it cannot separate the facial surface from other tissue surfaces. This weakness greatly limits the accuracy of the facial model and its application in plastic surgery. In this paper, a volume graph model is proposed in which facial surface extraction is formulated as a min-cut problem and can be solved by existing graph cut algorithms. Based on this model, irrelevant tissue surfaces are effectively excluded and a more accurate 3D virtual face can be built for plastic surgery.
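The min-cut formulation can be illustrated on a toy graph: voxels known to lie on the face connect to a source, voxels of other tissue connect to a sink, and by max-flow/min-cut duality the maximum flow value equals the capacity of the cheapest separating cut. Below is a generic Edmonds-Karp sketch on an adjacency matrix; the node roles and capacities are invented for illustration and this is not the paper's volume graph construction:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max flow on an n x n capacity matrix.  The returned
    value equals, by duality, the weight of the minimum cut separating
    source from sink."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:          # no augmenting path: flow is maximal
            return total
        # find the bottleneck capacity along the path
        v, bottleneck = sink, float("inf")
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # push the bottleneck flow along the path
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
```

On a small graph with source 0 (facial seed side) and sink 3 (other-tissue side), the flow value it returns is the min-cut capacity that a segmentation of this kind would minimize.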

Upload date: 2010-07-07

【Journal Article】VIDEO INPAINTING FOR LARGELY OCCLUDED MOVING HUMAN

Haomian Wang, Houqiang Li, Baoxin Li

Abstract

In this paper, a video inpainting approach is proposed that targets the repair of videos containing moving humans that are largely or completely occluded or missing in some frames. The proposed approach first categorizes the typically periodic human motion in a video into a set of temporal states (called motion states), and then estimates the motion states of the frames with missing humans so as to repair the missing parts using other undamaged frames with the same motion states. This deviates from common approaches that directly repair the pixels of the damaged parts. Experiments demonstrate that the proposed method can repair damaged video sequences well without introducing the strong artifacts that appear in many existing techniques.
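The state-matching idea can be sketched as follows, under the deliberate simplification that the motion is perfectly periodic, so that frame i is in motion state i mod period; in the paper the states are estimated from the video, not assumed:

```python
def repair_frames(frames, damaged, period):
    """Fill each frame with a missing human from the nearest undamaged
    frame in the same motion state.  Here a frame's motion state is
    simply i % period -- a stand-in for the estimated states the paper
    describes.  `frames` may be any per-frame data; `damaged` is a set
    of frame indices whose human region is missing."""
    repaired = list(frames)
    for i in sorted(damaged):
        state = i % period
        # undamaged frames sharing the same motion state
        candidates = [j for j in range(len(frames))
                      if j not in damaged and j % period == state]
        if candidates:
            nearest = min(candidates, key=lambda j: abs(j - i))
            repaired[i] = frames[nearest]
    return repaired
```

A real system would composite only the human region from the donor frame rather than copying the whole frame, but the donor-selection logic is the same.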

Upload date: 2010-07-07

【Journal Article】Buffer Requirement Analysis and Reference Picture Marking for Temporal Scalable Video Coding

Qiu Shen, Ye-Kui Wang, Miska M. Hannuksela, Houqiang Li, and Yi Wang

Abstract

Temporal scalable video coding is a useful feature supported by all existing video coding standards. Though coding efficiency is still the predominant performance metric, buffer requirement is another important factor in assessing temporal scalable video coding methods. This paper presents an analysis of the minimum buffer requirement for temporal scalable coding based on the most typical temporal scalable coding structure, i.e., the so-called hierarchical B picture coding structure. The minimum buffer sizes required for decoding both full bitstreams and thinned bitstreams are derived. After that, reference picture management methods are proposed to enable both minimum buffer consumption and maximum coding efficiency. Comparative experimental results showing how different reference picture marking methods affect coding efficiency are also provided.
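The kind of buffer analysis described can be sketched by simulating decoded-picture-buffer occupancy over one GOP of the dyadic hierarchical B structure (GOP size 8, with the usual two-anchor references assumed below). A frame is evicted once it has been output in display order and is no longer needed as a reference. This toy simulation illustrates the bookkeeping only; it is not the paper's derivation:

```python
def min_buffer(coding_order, refs, num_frames):
    """Simulate decoded-picture-buffer occupancy for one GOP.
    coding_order: frames in decoding order; refs[f]: frames f references.
    Returns the peak number of frames held simultaneously."""
    # last decoding step at which each frame is still needed as a reference
    last_use = {f: -1 for f in range(num_frames)}
    for step, f in enumerate(coding_order):
        for r in refs.get(f, ()):
            last_use[r] = step
    decoded, buffered, outputted = set(), set(), set()
    output_next, peak = 0, 0
    for step, f in enumerate(coding_order):
        decoded.add(f)
        buffered.add(f)
        peak = max(peak, len(buffered))
        # "bump" frames to output as soon as display order allows
        while output_next < num_frames and output_next in decoded:
            outputted.add(output_next)
            output_next += 1
        # evict frames that are output and no longer referenced
        for g in list(buffered):
            if g in outputted and last_use[g] <= step:
                buffered.discard(g)
    return peak

# Assumed dyadic hierarchical B GOP of size 8
# (display order 0..8, frame 0 = preceding key picture)
refs = {8: (0,), 4: (0, 8), 2: (0, 4), 1: (0, 2),
        3: (2, 4), 6: (4, 8), 5: (4, 6), 7: (6, 8)}
coding_order = [0, 8, 4, 2, 1, 3, 6, 5, 7]
```

Calling min_buffer(coding_order, refs, 9) reports the peak occupancy under this eviction model; analyses like the paper's derive such minima analytically for both full and thinned bitstreams.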

Collaborating scholars

  • 李厚强 (University of Science and Technology of China, Anhui)