Feng Liu


I am currently a postdoctoral researcher in the Computer Vision Lab at Michigan State University, where I am fortunate to be advised by Prof. Xiaoming Liu. I received my Ph.D. in Computer Science from Sichuan University, advised by Prof. Zhisheng You and Prof. Qijun Zhao.


My research interests center on the joint analysis of 2D images and 3D shapes, including 3D modeling, semantic correspondence, and coherent 3D scene reconstruction.


cv | email | github | google scholar


liufeng6@msu.edu
  
My picture

News


      [Sep 2020]   A paper is accepted by NeurIPS 2020 as an oral presentation (1.1% acceptance rate)
      [May 2020]   A paper is accepted by TPAMI
      [Feb 2020]   A paper is accepted by CVPR 2020

Publications



2020

[New] Learning Implicit Functions for Topology-Varying Dense 3D Shape Correspondence
Feng Liu, Xiaoming Liu
NeurIPS, 2020 (Oral presentation)
bibtex   abstract   project page   pdf   supp   video   poster   code

The goal of this paper is to learn dense 3D shape correspondence for topology-varying objects in an unsupervised manner. Conventional implicit functions estimate the occupancy of a 3D point given a shape latent code. Instead, our novel implicit function produces a part embedding vector for each 3D point, which is assumed to be similar to its densely corresponded point in another 3D shape of the same object category. Furthermore, we implement dense correspondence through an inverse function mapping from the part embedding to a corresponded 3D point. Both functions are jointly learned with several effective loss functions to realize our assumption, together with the encoder generating the shape latent code. During inference, if a user selects an arbitrary point on the source shape, our algorithm can automatically generate a confidence score indicating whether there is a correspondence on the target shape, as well as the corresponding semantic point if there is one. Such a mechanism inherently benefits man-made objects with different part constitutions. The effectiveness of our approach is demonstrated through unsupervised 3D semantic correspondence and shape segmentation.
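The inference step described above — matching a selected source point to the target shape via part-embedding similarity, with a confidence score indicating whether a correspondence exists — can be sketched in a few lines. This is a minimal illustration assuming the per-point embeddings are already computed by the learned implicit function; the function name `correspond` and threshold `tau` are hypothetical, not from the paper:

```python
import numpy as np

def correspond(src_emb, tgt_emb, tau=0.5):
    """For each source point, find the target point with the most similar
    part embedding, and report a confidence from the cosine similarity.

    src_emb: (N, D) part embeddings of source-shape points
    tgt_emb: (M, D) part embeddings of target-shape points
    Returns (indices into the target, confidences, mask of matches > tau).
    """
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T                 # (N, M) cosine similarities
    idx = sim.argmax(axis=1)          # nearest neighbor in embedding space
    conf = sim.max(axis=1)            # confidence that a correspondence exists
    return idx, conf, conf > tau
```

Points whose best similarity falls below the threshold are flagged as having no semantic counterpart on the target, which is how topology-varying part constitutions are handled.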

@inproceedings{ learning-implicit-functions-for-topology-varying-dense-3d-shape-correspondence,
  author = { Feng Liu and Xiaoming Liu },
  title = { Learning Implicit Functions for Topology-Varying Dense 3D Shape Correspondence },
  booktitle = { Proc. Conference on Neural Information Processing Systems },
  address = { Virtual },
  month = { December },
  year = { 2020 },
}

On Learning Disentangled Representations for Gait Recognition
Ziyuan Zhang, Luan Tran, Feng Liu, Xiaoming Liu
IEEE TPAMI, 2020
bibtex   abstract   project page   pdf   dataset   code

Gait, the walking pattern of individuals, is one of the most important biometrics modalities. Most of the existing gait recognition methods take silhouettes or articulated body models as the gait features. These methods suffer from degraded recognition performance when handling confounding variables, such as clothing, carrying and view angle. To remedy this issue, we propose a novel AutoEncoder framework to explicitly disentangle pose and appearance features from RGB imagery, and the LSTM-based integration of pose features over time produces the gait feature. In addition, we collect a Frontal-View Gait (FVG) dataset to focus on gait recognition from frontal-view walking, which is a challenging problem since it contains minimal gait cues compared to other views. FVG also includes other important variations, e.g., walking speed, carrying, and clothing. With extensive experiments on CASIA-B, USF and FVG datasets, our method demonstrates superior performance to the state of the art quantitatively, the ability of feature disentanglement qualitatively, and promising computational efficiency.
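The temporal integration step — folding per-frame disentangled pose features into one gait feature with an LSTM — can be sketched with a single LSTM cell. This is a minimal illustration, not the paper's implementation: `params` stands in for learned gate weights/biases, and the appearance features are assumed already disentangled away:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_gait_feature(pose_seq, params):
    """Aggregate per-frame pose features into a single gait feature
    using one LSTM cell.

    pose_seq: (T, D) disentangled pose features over T frames
    params:   (Wf, Wi, Wo, Wc, bf, bi, bo, bc) gate weights (H, H+D)
              and biases (H,); learned in the real model
    Returns the final hidden state (H,) as the gait feature.
    """
    Wf, Wi, Wo, Wc, bf, bi, bo, bc = params
    H = Wf.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    for x in pose_seq:
        z = np.concatenate([h, x])   # [previous hidden; current input]
        f = sigmoid(Wf @ z + bf)     # forget gate
        i = sigmoid(Wi @ z + bi)     # input gate
        o = sigmoid(Wo @ z + bo)     # output gate
        g = np.tanh(Wc @ z + bc)     # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
    return h
```

Because the recurrent state accumulates the walking dynamics across frames, the final hidden state serves as a compact, appearance-invariant gait descriptor.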

@article{ on-learning-disentangled-representations-for-gait-recognition,
  author = { Ziyuan Zhang and Luan Tran and Feng Liu and Xiaoming Liu },
  title = { On Learning Disentangled Representations for Gait Recognition },
  journal = { IEEE Transactions on Pattern Analysis and Machine Intelligence },
  month = { May },
  year = { 2020 },
}

On the Detection of Digital Face Manipulation
Hao Dang*, Feng Liu*, Joel Stehouwer*, Xiaoming Liu, Anil Jain
CVPR, 2020
bibtex   abstract   project page   pdf   supp   poster   dataset   code

Detecting manipulated facial images and videos is an increasingly important topic in digital media forensics. As advanced face synthesis and manipulation methods become available, new types of fake face representations are being created and raise significant concerns for their implications in social media. Hence, it is crucial to detect the manipulated face image and localize manipulated regions. Instead of simply using multi-task learning to simultaneously detect manipulated images and predict the manipulated mask (regions), we propose to utilize the attention mechanism to process and improve the feature maps of the classification model. The learned attention maps highlight the informative regions to further improve the binary classification, and also visualize the manipulated regions. In addition, to enable our study of manipulated face detection and localization, we collect a large-scale database that contains numerous types of facial forgeries. With this dataset, we perform a thorough analysis of data-driven fake face detection. We demonstrate that the use of an attention mechanism improves manipulated region localization and fake detection.
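The core idea above — an attention map that reweights the classifier's feature maps and doubles as a localization of manipulated regions — can be sketched as follows. This is an illustrative stand-in, not the paper's architecture: `w_att`/`b_att` represent the learned parameters of a hypothetical 1x1 convolution producing the single-channel map:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def apply_attention(features, w_att, b_att=0.0):
    """Refine a backbone feature map with a single-channel attention map.

    features: (C, H, W) feature map from the classification backbone
    w_att:    (C,) weights of a 1x1 conv producing the attention logits
    Returns (attended features (C, H, W), attention map (H, W)); the map
    highlights informative regions and visualizes manipulated areas.
    """
    logits = np.tensordot(w_att, features, axes=([0], [0])) + b_att  # (H, W)
    att = sigmoid(logits)
    return features * att[None], att
```

The attended features feed the binary real/fake classifier, while the map itself is supervised (or learned weakly) to align with the manipulated mask.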

@inproceedings{ on-the-detection-of-digital-face-manipulation,
  author = { Hao Dang* and Feng Liu* and Joel Stehouwer* and Xiaoming Liu and Anil Jain },
  title = { On the Detection of Digital Face Manipulation },
  booktitle = { Proc. IEEE Conference on Computer Vision and Pattern Recognition },
  address = { Seattle, WA },
  month = { June },
  year = { 2020 },
}

2019

3D Face Modeling from Diverse Raw Scan Data
Feng Liu, Luan Tran, Xiaoming Liu
ICCV, 2019 (Oral presentation)
bibtex   abstract   project page   pdf   supp   video   poster   code

Traditional 3D face models learn a latent representation of faces using linear subspaces from limited scans of a single database. The main roadblock of building a large-scale face model from diverse 3D databases lies in the lack of dense correspondence among raw scans. To address these problems, this paper proposes an innovative framework to jointly learn a nonlinear face model from a diverse set of raw 3D scan databases and establish dense point-to-point correspondence among their scans. Specifically, by treating input scans as unorganized point clouds, we explore the use of PointNet architectures for converting point clouds to identity and expression feature representations, from which the decoder networks recover their 3D face shapes. Further, we propose a weakly supervised learning approach that does not require correspondence labels for the scans. We demonstrate the superior dense correspondence and representation power of our proposed method, and its contribution to single-image 3D face reconstruction.

@inproceedings{ 3d-face-modeling-from-diverse-raw-scan-data,
  author = { Feng Liu and Luan Tran and Xiaoming Liu },
  title = { 3D Face Modeling from Diverse Raw Scan Data },
  booktitle = { Proc. International Conference on Computer Vision },
  address = { Seoul, South Korea },
  month = { October },
  year = { 2019 },
}

Towards High-fidelity Nonlinear 3D Face Morphable Model
Luan Tran, Feng Liu, Xiaoming Liu
CVPR, 2019
bibtex   abstract   project page   pdf   poster   code

Embedding 3D morphable basis functions into deep neural networks opens great potential for models with better representation power. However, to faithfully learn those models from an image collection, strong regularization is required to overcome ambiguities in the learning process. This critically prevents us from learning high-fidelity face models needed to represent face images at a high level of detail. To address this problem, this paper presents a novel approach that learns additional proxies as a means to side-step strong regularization, as well as leverages to promote detailed shape/albedo. To ease the learning, we also propose a dual-pathway network, a carefully-designed architecture that brings a balance between global and local-based models. By improving the nonlinear 3D morphable model in both learning objective and network architecture, we present a model that is superior in capturing a higher level of detail than the linear or its precedent nonlinear counterparts. As a result, our model achieves state-of-the-art performance on 3D face reconstruction by solely optimizing latent representations.

@inproceedings{ towards-high-fidelity-nonlinear-3d-face-morphable-model,
  author = { Luan Tran and Feng Liu and Xiaoming Liu },
  title = { Towards High-fidelity Nonlinear 3D Face Morphable Model },
  booktitle = { Proc. IEEE Conference on Computer Vision and Pattern Recognition },
  address = { Long Beach, CA },
  month = { June },
  year = { 2019 },
}

2018

Joint Face Alignment and 3D Face Reconstruction with Application to Face Recognition
Feng Liu, Qijun Zhao, Xiaoming Liu, Dan Zeng
IEEE TPAMI, 2018
bibtex   abstract   pdf

Face alignment and 3D face reconstruction are traditionally accomplished as separate tasks. By exploring the strong correlation between 2D landmarks and 3D shapes, in contrast, we propose a joint face alignment and 3D face reconstruction method to simultaneously solve these two problems for 2D face images of arbitrary poses and expressions. This method, based on a summation model of 3D faces and cascaded regression in 2D and 3D shape spaces, iteratively and alternately applies two cascaded regressors, one for updating 2D landmarks and the other for 3D shape. The 3D shape and the landmarks are correlated via a 3D-to-2D mapping matrix, which is updated in each iteration to refine the location and visibility of 2D landmarks. Unlike existing methods, the proposed method can fully automatically generate both pose-and-expression-normalized (PEN) and expressive 3D faces and localize both visible and invisible 2D landmarks. Based on the PEN 3D faces, we devise a method to enhance face recognition accuracy across poses and expressions. Both linear and nonlinear implementations of the proposed method are presented and evaluated in this paper. Extensive experiments show that the proposed method can achieve state-of-the-art accuracy in both face alignment and 3D face reconstruction, and benefit face recognition owing to its reconstructed PEN 3D faces.

@article{ joint-face-alignment-and-3d-face-reconstruction-with-application-to-face-recognition,
  author = { Feng Liu and Qijun Zhao and Xiaoming Liu and Dan Zeng },
  title = { Joint Face Alignment and 3D Face Reconstruction with Application to Face Recognition },
  journal = { IEEE Transactions on Pattern Analysis and Machine Intelligence },
  month = { November },
  year = { 2018 },
}

Disentangling Features in 3D Face Shapes for Joint Face Reconstruction and Recognition
Feng Liu, Ronghang Zhu, Dan Zeng, Qijun Zhao, Xiaoming Liu
CVPR, 2018
bibtex   abstract   pdf   supp   poster

This paper proposes an encoder-decoder network to disentangle shape features during 3D face reconstruction from single 2D images, such that the tasks of reconstructing accurate 3D face shapes and learning discriminative shape features for face recognition can be accomplished simultaneously. Unlike existing 3D face reconstruction methods, our proposed method directly regresses dense 3D face shapes from single 2D images, and tackles identity and residual (i.e., non-identity) components in 3D face shapes explicitly and separately based on a composite 3D face shape model with latent representations. We devise a training process for the proposed network with a joint loss measuring both face identification error and 3D face shape reconstruction error. To construct training data, we develop a method for fitting a 3D morphable model (3DMM) to multiple 2D images of a subject. Comprehensive experiments have been done on the MICC, BU3DFE, LFW and YTF databases. The results show that our method expands the capacity of 3DMM for capturing discriminative shape features and facial detail, and thus outperforms existing methods both in 3D face reconstruction accuracy and in face recognition accuracy.
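The composite shape model above — a face shape decomposed into mean, identity, and residual (non-identity) components, each driven by its own latent code — can be sketched with linear decoders standing in for the paper's networks. All names here are illustrative:

```python
import numpy as np

def compose_shape(S_mean, B_id, B_res, alpha, beta):
    """Composite 3D face shape: mean shape plus an identity component
    and a residual (non-identity) component, each decoded from its own
    latent code.

    S_mean: (3N,) mean shape;        B_id:  (3N, K) identity basis
    B_res:  (3N, L) residual basis;  alpha: (K,), beta: (L,) latents
    """
    return S_mean + B_id @ alpha + B_res @ beta
```

Keeping `alpha` and `beta` separate is what makes the identity code usable directly as a discriminative feature for recognition, while their sum still reconstructs the full expressive shape.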

@inproceedings{ disentangling-features-in-3d-face-shapes-for-joint-face-reconstruction-and-recognition,
  author = { Feng Liu and Ronghang Zhu and Dan Zeng and Qijun Zhao and Xiaoming Liu },
  title = { Disentangling Features in 3D Face Shapes for Joint Face Reconstruction and Recognition },
  booktitle = { Proc. IEEE Conference on Computer Vision and Pattern Recognition },
  address = { Salt Lake City, UT },
  month = { June },
  year = { 2018 },
}

2017

Multi-Dim: A Multi-Dimensional Face Database Towards the Application of 3D Technology in Real-World Scenarios
Feng Liu, Jun Hu, Jianwei Sun, Yang Wang, Qijun Zhao
IJCB, 2017
bibtex   abstract   pdf   supp   poster

Three-dimensional (3D) faces are increasingly utilized in many face-related tasks. Despite the promising improvement achieved by 3D face technology, it is still hard to thoroughly evaluate the performance and effect of 3D face technology in real-world applications where variations frequently occur in pose, illumination, expression and many other factors. This is due to the lack of benchmark databases that contain both high precision full-view 3D faces and their 2D face images/videos under different conditions. In this paper, we present such a multi-dimensional face database (namely Multi-Dim) of high precision 3D face scans, high definition photos, 2D still face images with varying pose and expression, low quality 2D surveillance video clips, along with ground truth annotations for them. Based on this Multi-Dim face database, extensive evaluation experiments have been done with state-of-the-art baseline methods for constructing 3D morphable model, reconstructing 3D faces from single images, 3D-assisted pose normalization for face verification, and 3D-rendered multiview gallery for face identification. Our results show that 3D face technology does help in improving unconstrained 2D face recognition when the probe 2D face images are of reasonable quality, whereas it deteriorates rather than improves the face recognition accuracy when the probe 2D face images are of poor quality. We will make Multi-Dim freely available to the community for the purpose of advancing the 3D-based unconstrained 2D face recognition and related techniques towards real-world applications.

@inproceedings{ multi-dim-a-multi-dimensional-face-database-towards-the-application-of-3d-technology-in-real-world-scenarios,
  author = { Feng Liu and Jun Hu and Jianwei Sun and Yang Wang and Qijun Zhao },
  title = { Multi-Dim: A Multi-Dimensional Face Database Towards the Application of 3D Technology in Real-World Scenarios },
  booktitle = { Proc. International Joint Conference on Biometrics (IJCB) },
  pages = { 342--351 },
  year = { 2017 },
}

2016

Joint Face Alignment and 3D Face Reconstruction
Feng Liu, Dan Zeng, Qijun Zhao, Xiaoming Liu
ECCV, 2016 (Spotlight presentation)
bibtex   abstract   pdf

We present an approach to simultaneously solve the two problems of face alignment and 3D face reconstruction from an input 2D face image of arbitrary poses and expressions. The proposed method iteratively and alternately applies two sets of cascaded regressors, one for updating 2D landmarks and the other for updating the reconstructed pose-expression-normalized (PEN) 3D face shape. The 3D face shape and the landmarks are correlated via a 3D-to-2D mapping matrix. In each iteration, an adjustment to the landmarks is first estimated via a landmark regressor, and this landmark adjustment is also used to estimate a 3D face shape adjustment via a shape regressor. The 3D-to-2D mapping is then computed based on the adjusted 3D face shape and 2D landmarks, and it further refines the 2D landmarks. An effective algorithm is devised to learn these regressors from a training dataset of paired annotated 3D face shapes and 2D face images. Compared with existing methods, the proposed method can fully automatically generate PEN 3D face shapes in real time from a single 2D face image and locate both visible and invisible 2D landmarks. Extensive experiments show that the proposed method can achieve state-of-the-art accuracy in both face alignment and 3D face reconstruction, and benefit face recognition owing to its reconstructed PEN 3D face shapes.
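The 3D-to-2D mapping step above — computing a matrix that correlates the 3D face shape with the 2D landmarks in each iteration — amounts to an affine least-squares fit. A minimal sketch, assuming 3D and 2D landmark positions are given; the helper names are illustrative, not from the paper:

```python
import numpy as np

def fit_projection(shape3d, lm2d):
    """Least-squares fit of the 3D-to-2D mapping matrix M (2x4) such
    that lm2d ~= M @ [shape3d; 1], linking the 3D shape and the 2D
    landmarks within an iteration.

    shape3d: (N, 3) 3D landmark positions
    lm2d:    (N, 2) 2D landmark positions
    """
    X = np.hstack([shape3d, np.ones((len(shape3d), 1))])  # homogeneous (N, 4)
    M, *_ = np.linalg.lstsq(X, lm2d, rcond=None)          # (4, 2) solution
    return M.T                                            # (2, 4)

def project(shape3d, M):
    """Project 3D landmarks to 2D with mapping matrix M (2x4)."""
    X = np.hstack([shape3d, np.ones((len(shape3d), 1))])
    return X @ M.T
```

Re-estimating `M` after each shape/landmark update is what lets the two regressors refine each other across iterations.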

@inproceedings{ joint-face-alignment-and-3d-face-reconstruction,
  author = { Feng Liu and Dan Zeng and Qijun Zhao and Xiaoming Liu },
  title = { Joint Face Alignment and 3D Face Reconstruction },
  booktitle = { Proc. European Conference on Computer Vision },
  address = { Amsterdam, The Netherlands },
  month = { October },
  year = { 2016 },
}

Academic Services


    Conference Reviewer:   CVPR {2019, 2020, 2021}, ICCV 2019, ECCV 2020, AAAI {2020, 2021}, IJCAI 2019,
                           ACCV 2020, WACV 2020, ICB 2019, FG 2019

    Journal Reviewer:      TPAMI, TIFS, TIP, PR, TMM, TOMM


        Website template inspired by here. Last updated: 12/19/2020.