AMFG2018: The 8th IEEE Workshop on Analysis and Modeling of Faces and Gestures
Website: https://web.northeastern.edu/smilelab/AMFG2018/
Submission deadline: March 25, 2018
Notifications: April 10, 2018
Camera Ready: April 15, 2018
Alongside the effort spent on conventional face recognition, a growing body of research aims to automatically understand social media content. This line of work has attracted industrial and academic researchers from a wide range of domains. Understanding social media requires several capabilities: face and body tracking (e.g., facial expression analysis, face detection, gesture recognition); face and body characterization (e.g., behavioral understanding, emotion recognition); face, body, and gesture characteristic analysis (e.g., gait, age, gender, and ethnicity recognition); group understanding via social cues (e.g., kinship, non-blood relationships, personality); and visual sentiment analysis (e.g., temperament, arrangement). The ability to build effective models for such visual understanding thus has significant value for both the scientific community and the commercial market, with applications spanning human-computer interaction, social media analytics, video indexing, visual surveillance, and Internet vision.

Researchers have made significant progress on many problems in this social domain, especially given the off-the-shelf, cost-efficient vision hardware and software now available (e.g., Kinect, Leap, SHORE, and Affdex). Nonetheless, serious challenges remain, and they are only amplified under the unconstrained imaging conditions of diverse sources capturing non-cooperative subjects. These latter challenges especially motivate this workshop, which seeks to bring together cutting-edge techniques and recent advances in deep learning to address them in the context of social media.
Previous Workshops
AMFG2003: http://brigade.umiacs.umd.edu/iccv2003/
AMFG2005: http://mmlab.ie.cuhk.edu.hk/iccv05/
AMFG2007: http://mmlab.ie.cuhk.edu.hk/iccv07/
AMFG2010: http://www.lv-nus.org/AMFG2010/cfp.html
AMFG2013: http://www.northeastern.edu/smilelab/AMFG2013/home.html
AMFG2015: http://www.northeastern.edu/smilelab/AMFG2015/home.html
AMFG2017: https://web.northeastern.edu/smilelab/AMFG2017/index.html
Submission Guidelines
All submissions will be handled electronically via the conference's CMT website. All authors must agree to the policies stipulated below. The submission deadline is March 25, 2018 and will not be changed. Supplementary material may be submitted shortly after the paper deadline; see the workshop website for the exact date. Papers are limited to eight pages, including figures and tables, in the CVPR style. Additional pages containing only cited references are allowed. Please refer to the following files for detailed formatting instructions:
Example submission paper with detailed instructions: egpaper_for_review.pdf
LaTeX/Word Templates (tar): cvpr2018AuthorKit.tgz
LaTeX/Word Templates (zip): cvpr2018AuthorKit.zip
A complete paper should be submitted using the above templates, which are formatted for blind review. The length should match that intended for final publication. Papers with more than eight pages (excluding references) will be rejected without review. Note that, unlike at previous CVPRs, there will be no page charges.
Conflict Responsibilities: It is the primary author's responsibility to ensure that all authors on their paper have registered their institutional conflicts in CMT. For each author, please enter the domain of his/her current institution (e.g., cs.jhu.edu; microsoft.com). Please enter ONLY ONE institution, except when the author has been affiliated with more than one institution in the last 12 months. DO NOT enter the domain of email providers such as gmail.com. This institutional conflict information will be used in conjunction with prior authorship conflict information to resolve assignments to both reviewers and area chairs. If a paper is found to have an undeclared or incorrect institutional conflict, the paper may be summarily rejected.
List of Topics
- Novel deep model, deep learning survey, or comparative study for face/gesture recognition;
- Deep learning methodology, theory, and its application to social media analytics;
- Deep learning for internet-scale soft biometrics and profiling: age, gender, ethnicity, personality, kinship, occupation, beauty ranking, and fashion classification by facial or body descriptor;
- Deep learning for detection and recognition of faces and bodies with large 3D rotation, illumination change, partial occlusion, unknown/changing background, and aging (i.e., in the wild); special attention will be given to face and gesture recognition robust to large 3D rotation;
- Motion analysis, tracking, and extraction of face and body models captured by mobile devices;
- Face, gait, and action recognition in low-quality (e.g., blurred), or low-resolution video from fixed or mobile device cameras;
- Novel mathematical models and algorithms, sensors and modalities for face & body gesture and action representation, analysis, and recognition for cross-domain social media;
- Social/psychology-based studies that aid in understanding computational modeling and in building better automated face and gesture systems with interactive features;
- Novel social applications involving detection, tracking & recognition of face, body, and action;
- Face and gesture analysis for sentiment analysis in social media;
- Other applications involving face and gesture analysis in social media content.
Organizing Committee
- Thomas S. Huang, University of Illinois, https://ece.illinois.edu/directory/profile/t-huang1
- Yun Fu, Northeastern University, http://www1.ece.neu.edu/~yunfu/
- Matthew A. Turk, University of California, Santa Barbara, https://www.cs.ucsb.edu/~mturk/
- Ming Shao, University of Massachusetts Dartmouth, http://www.cis.umassd.edu/~mshao/
- Michael Jones, Mitsubishi Electric Research Laboratories, http://www.merl.com/people/mjones/
- Joseph P. Robinson, Northeastern University, http://www.jrobsvision.com/
Contact
Joseph Robinson (robinson.jo@husky.neu.edu)
Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, USA
Ming Shao (mshao@umassd.edu)
Computer and Information Science, University of Massachusetts Dartmouth, Dartmouth, MA, USA