Joint Workshop on
Efficient Deep Learning in Computer Vision

June 15th, 2020
Seattle, Washington
in conjunction with CVPR 2020

Computer Vision has a long history of academic research, and recent advances in deep learning have provided significant improvements in the ability to understand visual content. As a result of these research advances on problems such as object classification, object detection, and image segmentation, there has been a rapid increase in the adoption of Computer Vision in industry; however, mainstream Computer Vision research has given little consideration to speed or computation time, and even less to constraints such as power/energy, memory footprint and model size. The workshop has three main goals in addressing efficiency in Computer Vision:

First, the workshop aims to create a venue for considering the new generation of problems that arise as Computer Vision meets mobile and AR/VR system constraints, and to bring together researchers, educators and practitioners who are interested in techniques as well as applications of compact, efficient neural network representations. The workshop discussions will establish close connections between researchers in the machine learning and computer vision communities and engineers in industry, benefiting both academic researchers and industrial practitioners.

Second, the workshop aims at reproducibility and comparability of methods for compact and efficient neural network representations and on-device machine learning. A set of benchmarking tasks (image classification, visual question answering) will therefore be provided together with defined data sets, so that the performance of neural network compression methods can be compared on the same networks. Submissions are encouraged (but not required) to use these tasks and data sets in their work, and contributors are encouraged to make their code available.

Third, the workshop aims to discuss the next steps in developing efficient feature representations along three axes: energy efficiency, label efficiency, and sample efficiency. Although DNNs are brain-inspired and can match or even surpass human-level performance on a variety of challenging computer vision tasks, they continue to trail humans in many respects, such as energy efficiency and the ability to perform low-shot learning (learning novel concepts from very few examples). Therefore, the next generation of feature representation and learning techniques should aim to tackle recognition tasks with significantly reduced computational complexity, using as little training data as humans require, and to generalize to a range of tasks beyond the one the model was trained on.

Important Dates

Paper Submission Deadline: March 25, 2020 (PST)
Notification to authors: April 12, 2020 (PST)
Camera ready deadline: April 19, 2020 (PST)
Workshop: June 15, 2020 (Full Day)


Topics

  • Efficient Neural Network and Architecture Search
    • Compact and efficient neural network architecture for mobile and AR/VR devices
    • Hardware (latency, energy) aware neural network architectures search, targeted for mobile and AR/VR devices
    • Efficient architecture search algorithms for different vision tasks (detection, segmentation, etc.)
    • Optimization for Latency, Accuracy and Memory usage, as motivated by embedded devices
  • Neural Network Compression
    • Model compression (sparsification, binarization, quantization, pruning, thresholding, coding, etc.) for efficient inference with deep networks and other ML models
    • Scalable compression techniques that can cope with large amounts of data and/or large neural networks (e.g., not requiring access to complete datasets for hyperparameter tuning and/or retraining)
    • Hashing (Binary) Codes Learning
  • Low-bit Quantization Network and Hardware Accelerators
    • Investigations into the processor architectures (CPU vs GPU vs DSP) that best support mobile applications
    • Hardware accelerators to support Computer Vision on mobile and AR/VR platforms
    • Low-precision training/inference & acceleration of deep neural networks on mobile devices
  • Datasets and benchmarks
    • Open datasets and test environments for benchmarking inference with efficient DNN representations
    • Metrics for evaluating the performance of efficient DNN representations
    • Methods for comparing efficient DNN inference across platforms and tasks
  • Label/sample/feature efficient learning
    • Label Efficient Feature Representation Learning Methods, e.g. Unsupervised Learning, Domain Adaptation, Weakly Supervised Learning and Self-Supervised Learning Approaches
    • Sample Efficient Feature Learning Methods, e.g. Meta Learning
    • Low Shot learning Techniques
    • New Applications, e.g. Medical Domain
  • Mobile and AR/VR Applications
    • Novel mobile and AR/VR applications using Computer Vision such as image processing (e.g. style transfer, body tracking, face tracking) and augmented reality
    • Learning efficient deep neural networks under memory and computation constraints for on-device applications
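Several of the compression topics above (magnitude pruning, low-bit quantization) can be illustrated in a few lines of code. The following is a minimal NumPy sketch, not any particular submission's method; the function names and the 8-bit symmetric scheme are illustrative choices.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights (illustrative)."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

def uniform_quantize(w, bits=8):
    """Symmetric uniform quantization to the given bit width."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / levels
    q = np.round(w / scale)               # integer codes in [-levels, levels]
    return q * scale                      # de-quantized weights

w = np.random.randn(64, 64).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.9)
w_quant = uniform_quantize(w_pruned, bits=8)
print("sparsity:", np.mean(w_pruned == 0))
```

Real compression pipelines typically fine-tune after pruning and learn the quantization scales; this sketch only shows the basic transformations the topic list refers to.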

Keynote Speakers

Title: Putting the “Machine” Back in Machine Learning: The Case for Hardware-ML Model Co-design
Abstract: Machine learning (ML) applications have entered and impacted our lives unlike any other technology advance from the recent past. While the holy grail for judging the quality of an ML model has largely been serving accuracy, and only recently its resource usage, neither of these metrics translates directly to energy efficiency, runtime, or mobile device battery lifetime. This talk uncovers the need for building accurate, platform-specific power and latency models for convolutional neural networks (CNNs) and efficient hardware-aware CNN design methodologies, thus allowing machine learners and hardware designers to identify not just the best-accuracy NN configuration, but also those that satisfy given hardware constraints. Our proposed modeling framework is applicable to both high-end and mobile platforms and achieves 88.24% accuracy for latency, 88.34% for power, and 97.21% for energy prediction. Using similar predictive models, we demonstrate a novel differentiable neural architecture search (NAS) framework, dubbed Single-Path NAS, that uses one single-path over-parameterized CNN to encode all architectural decisions based on shared convolutional kernel parameters. Single-Path NAS achieves state-of-the-art top-1 ImageNet accuracy (75.62%), outperforming existing mobile NAS methods for similar latency constraints (~80 ms), and finds the final configuration up to 5,000× faster than prior work. Combined with our quantized and pruned CNNs that customize precision and pruning level in a layer-wise fashion, such a modeling, analysis, and optimization framework is poised to lead to true co-design of hardware and ML model, orders of magnitude faster than the state of the art, while satisfying both accuracy and latency or energy constraints.
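Hardware-aware NAS methods of the kind described above typically rely on a per-layer latency model measured on the target device. Below is a minimal sketch of such a lookup-table predictor; the layer names and millisecond values are purely hypothetical, not measurements from the talk.

```python
# Hypothetical per-layer latency table measured once on a target device.
# Keys: (operator, in_channels, out_channels); values: milliseconds.
LATENCY_LUT_MS = {
    ("conv3x3", 64, 128): 1.8,
    ("conv5x5", 64, 128): 3.9,
    ("skip",    64, 128): 0.1,
}

def predict_latency(architecture):
    """Estimate end-to-end latency by summing measured per-layer costs."""
    return sum(LATENCY_LUT_MS[layer] for layer in architecture)

net = [("conv3x3", 64, 128), ("conv5x5", 64, 128), ("skip", 64, 128)]
print(f"predicted latency: {predict_latency(net):.1f} ms")  # 5.8 ms
```

A NAS search can then use such a predictor as a differentiable or table-based constraint instead of timing every candidate network on real hardware.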

Biography: Diana Marculescu is Department Chair, Cockrell Family Chair for Engineering Leadership #5, and Professor, Motorola Regents Chair in Electrical and Computer Engineering #2, at the University of Texas at Austin. Before joining UT Austin in December 2019, she was the David Edward Schramm Professor of Electrical and Computer Engineering, the Founding Director of the College of Engineering Center for Faculty Success (2015-2019) and has served as Associate Department Head for Academic Affairs in Electrical and Computer Engineering (2014-2018), all at Carnegie Mellon University. She received the Dipl.Ing. degree in computer science from the Polytechnic University of Bucharest, Bucharest, Romania (1991), and the Ph.D. degree in computer engineering from the University of Southern California, Los Angeles, CA (1998). Her research interests include energy- and reliability-aware computing, hardware aware machine learning, and computing for sustainability and natural science applications. Diana was a recipient of the National Science Foundation Faculty Career Award (2000-2004), the ACM SIGDA Technical Leadership Award (2003), the Carnegie Institute of Technology George Tallman Ladd Research Award (2004), and several best paper awards. She was an IEEE Circuits and Systems Society Distinguished Lecturer (2004-2005) and the Chair of the Association for Computing Machinery (ACM) Special Interest Group on Design Automation (2005-2009). Diana chaired several conferences and symposia in her area and is currently an Associate Editor for IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. She was selected as an ELATE Fellow (2013-2014), and is the recipient of an Australian Research Council Future Fellowship (2013-2017), the Marie R. Pistilli Women in EDA Achievement Award (2014), and the Barbara Lazarus Award from Carnegie Mellon University (2018). Diana is a Fellow of ACM and IEEE.
Title: How to Evaluate Efficient Deep Neural Network Approaches
Abstract: Enabling the efficient processing of deep neural networks (DNNs) has become increasingly important for deploying DNNs on a wide range of platforms, for a wide range of applications. To address this need, there has been a significant amount of work in recent years on designing DNN accelerators and developing approaches for efficient DNN processing, spanning the computer vision, machine learning, and hardware/systems architecture communities. Given the volume of work, it would not be feasible to cover it all in a single talk. Instead, this talk will focus on *how* to evaluate these different approaches, which include the design of DNN accelerators and DNN models. It will also highlight the key metrics that should be measured and compared, and present tools that can assist in the evaluation.
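One family of metrics commonly used in such evaluations, analytical computational cost, can be computed directly from a layer's shape. The sketch below counts parameters and multiply-accumulate operations (MACs) for a standard 2-D convolution; it is a textbook calculation, not a tool from the talk.

```python
def conv2d_cost(c_in, c_out, k, h_out, w_out):
    """Parameter count and MAC count of a standard 2-D convolution
    (bias ignored; stride is already folded into h_out/w_out)."""
    params = c_in * c_out * k * k          # one k x k filter per (in, out) pair
    macs = params * h_out * w_out          # each filter slides over the output map
    return params, macs

# Example: a 3x3 conv mapping 64 -> 128 channels on a 56x56 output map
params, macs = conv2d_cost(64, 128, 3, 56, 56)
print(f"{params:,} params, {macs / 1e6:.1f} MMACs")  # 73,728 params, 231.2 MMACs
```

Note that MACs and parameters are only proxies; measured latency and energy on the target platform can diverge substantially from them, which is precisely why careful evaluation methodology matters.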

Biography: Vivienne Sze is an associate professor of electrical engineering and computer science at MIT. She is also the director of the Energy-Efficient Multimedia Systems research group at the Research Lab of Electronics (RLE). Sze works on computing systems that enable energy-efficient machine learning, computer vision, and video compression/processing for a wide range of applications, including autonomous navigation, digital health, and the internet of things. She is widely recognized for her leading work in these areas and has received many awards, including the AFOSR and DARPA Young Faculty Award, the Edgerton Faculty Award, several faculty awards from Google, Facebook, and Qualcomm, the 2018 Symposium on VLSI Circuits Best Student Paper Award, the 2017 CICC Outstanding Invited Paper Award, and the 2016 IEEE Micro Top Picks Award. As a member of the JCT-VC team, she received the Primetime Engineering Emmy Award for the development of the HEVC video compression standard.
For more information about research in the Energy-Efficient Multimedia Systems Group at MIT visit: http://www.rle.mit.edu/eems/
Title: Once-for-All: Train One Network and Specialize It for Efficient Deployment
Abstract: Last June, researchers released a startling report estimating that the power required for neural architecture search can be responsible for the emission of roughly 626,000 pounds of carbon dioxide. That’s equivalent to nearly five times the lifetime emissions of the average U.S. car, including its manufacturing. This issue gets even more severe in the model deployment phase, where deep neural networks need to be deployed on diverse hardware platforms, including resource-constrained edge devices. I will present a new NAS system for searching and running neural networks efficiently: the once-for-all network (OFA). By decoupling model training from architecture search, OFA can reduce the carbon emissions of neural architecture search by thousands of times. OFA can produce a surprisingly large number of sub-networks (>10^19) that fit different hardware platforms and latency constraints. By exploiting weight sharing and progressive shrinking, the produced models consistently outperform state-of-the-art NAS methods, including MobileNet-v3 and EfficientNet (up to 4.0% ImageNet top-1 accuracy improvement over MobileNetV3, or the same accuracy but 1.5x faster than MobileNetV3 and 2.6x faster than EfficientNet). In particular, OFA achieves a state-of-the-art 80.0% ImageNet top-1 accuracy under the mobile setting (<600M MACs). OFA is the winning solution of the 3rd and 4th Low Power Computer Vision Challenge (LPCVC). The project page and code are available at: https://ofa.mit.edu.
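The weight-sharing idea behind OFA can be pictured as slicing each sub-network's weights out of one shared kernel tensor. The snippet below is a simplified illustration only: the actual method additionally applies learned kernel transformations and trains with progressive shrinking, which this sketch omits.

```python
import numpy as np

def sample_subnet_kernel(shared_w, kernel_size, out_channels):
    """Slice a sub-network's conv weights from a shared tensor of shape
    (C_out_max, C_in_max, K_max, K_max).

    Smaller kernels are taken from the centre of the largest kernel, and
    narrower layers keep the first `out_channels` filters, so every
    sub-network reuses the same underlying parameters.
    """
    c_out_max, c_in, k_max, _ = shared_w.shape
    start = (k_max - kernel_size) // 2
    end = start + kernel_size
    return shared_w[:out_channels, :, start:end, start:end]

shared = np.random.randn(128, 64, 7, 7)              # largest configuration
small = sample_subnet_kernel(shared, kernel_size=3, out_channels=32)
print(small.shape)  # (32, 64, 3, 3)
```

Because the slice is a view into the shared tensor, every sampled sub-network reads (and, during training, would update) the same memory, which is the essence of the >10^19 sub-networks sharing one set of weights.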

Biography: Song Han is an assistant professor in MIT’s Department of Electrical Engineering and Computer Science. His research focuses on efficient deep learning computing. He proposed the “deep compression” technique, which can reduce neural network size by an order of magnitude without losing accuracy, and the hardware implementation “efficient inference engine,” which first exploited weight pruning and sparsity in deep learning accelerators and influenced NVIDIA’s Ampere GPU architecture. More recently, he has been interested in neural architecture search for efficient TinyML models. He is a recipient of the NSF CAREER Award, MIT Technology Review Innovators Under 35, best paper awards at ICLR’16 and FPGA’17, the Facebook Faculty Award, the SONY Faculty Award, and the AWS Machine Learning Award.
Title: Meta-Learning Beyond Few-Shot Classification
Abstract: While meta-learning has shown tremendous potential for enabling learning and generalization from only a few examples, its success beyond few-shot learning has remained less clear. In this talk, I'll discuss our recent work that studies new challenges including handling distribution shift, discovering equivariances from data, and generalizing to qualitatively distinct tasks. In doing so, I'll shed light on the potential for meta-learning to tackle these problems, and the challenges that remain.

Biography: Chelsea Finn completed her Ph.D. in computer science at UC Berkeley and her B.S. in electrical engineering and computer science at MIT. Now she is a research scientist at Google Brain, a post-doc at Berkeley AI Research Lab (BAIR), and an acting assistant professor at Stanford. She will join the Stanford Computer Science faculty full time, starting in Fall 2019. She is interested in how algorithms can enable machines to acquire more general notions of intelligence through learning and interaction, allowing them to autonomously learn a variety of complex sensorimotor skills in real-world settings. This includes learning deep representations for representing complex skills from raw sensory inputs, enabling machines to learn through interaction without human supervision, and allowing systems to build upon what they’ve learned previously to acquire new capabilities with small amounts of experience.


Program (Tentative)

(Virtual Conference, Seattle Time: June 15, 2020)
Time (Seattle) | Time (second zone) | Event | Title | Paper ID | Duration
08:30 ~ 08:40 Opening 900 10 min
08:40 ~ 09:20 20:40 ~ 21:20 Keynote 1 Putting the “Machine” Back in Machine Learning: The Case for Hardware-ML Model Co-design,
Dr. Diana Marculescu's slides
901 40 min
09:20 ~ 10:00 21:20 ~ 22:00 Keynote 2 How to Evaluate Efficient Deep Neural Network Approaches,
Dr. Vivienne Sze's talk is available at: Youtube
Dr. Vivienne Sze's slides
902 40 min
10:00 ~ 10:15 22:00 ~ 22:15 Oral 1 Randaugment: Practical automated data augmentation with a reduced search space 27 15 min
10:15 ~ 10:30 22:15 ~ 22:30 Neural Network Compression Using Higher-Order Statistics and Auxiliary Reconstruction Losses 39 15 min
10:30 ~ 10:45 22:30 ~ 22:45 Dithered backprop: A sparse and quantized backpropagation algorithm for more efficient deep neural network training 43 15 min
10:45 ~ 11:00 22:45 ~ 23:00 Learning Sparse & Ternary Neural Networks with Entropy-Constrained Trained Ternarization (EC2T) 44 15 min
11:30 ~ 11:35 23:30 ~ 23:35 Spotlights 1 Learning Low-rank Deep Neural Networks via Singular Vector Orthogonality Regularization and Singular Value Sparsification 4 5 min
11:35 ~ 11:40 23:35 ~ 23:40 FoNet: A Memory-efficient Fourier-based Orthogonal Network for Object Recognition 15 5 min
11:40 ~ 11:45 23:40 ~ 23:45 LSQ+: Improving low-bit quantization through learnable offsets and better initialization 20 5 min
11:45 ~ 11:50 23:45 ~ 23:50 Least squares binary quantization of neural networks 22 5 min
11:55 ~ 12:00 23:55 ~ 00:00 Any-Width Networks 29 5 min
12:00 ~ 12:05 00:00 ~ 00:05 Data-Free Network Quantization With Adversarial Knowledge Distillation 35 5 min
12:05 ~ 12:10 00:05 ~ 00:10 Structured Weight Unification and Encoding for Neural Network Compression and Acceleration 38 5 min
12:10 ~ 12:15 00:10 ~ 00:15 Intelligent Scene Caching to Improve Accuracy for Energy-Constrained Embedded Vision 46 5 min
12:15 ~ 12:30 00:15 ~ 00:30 Adaptive Posit: Parameter aware numerical format for deep learning inference on the edge 52 15 min
14:00 ~ 14:40 02:00 ~ 02:40 Keynote 3 Once-for-all: train one network and specialize it for efficient deployment
Dr. Song Han's talk is available at: Youtube, Google Drive, Bilibili
Dr. Song Han's slides
903 40 min
14:40 ~ 14:55 02:40 ~ 02:55 Oral 2 BAMSProd: A Step towards Generalizing the Adaptive Optimization Methods to Deep Binary Model 1 15 min
14:55 ~ 15:10 02:55 ~ 03:10 Dynamic Inference: A New Approach Toward Efficient Video Action Recognition 3 15 min
15:10 ~ 15:25 03:10 ~ 03:25 Low-bit Quantization Needs Good Distribution 7 15 min
15:25 ~ 15:40 03:25 ~ 03:40 Attentive Semantic Preservation Network for Zero-Shot Learning 9 15 min
15:40 ~ 15:55 03:40 ~ 03:55 AdaMT-Net: An Adaptive Weight Learning Based Multi-Task Learning Model For Scene Understanding 31 15 min
16:20 ~ 17:00 04:20 ~ 05:00 Keynote 4 Meta-Learning for Efficient Deep Learning,
Dr. Chelsea Finn's slides
904 40 min
17:30 ~ 17:35 05:30 ~ 05:35 Spotlights 2 Mimic The Raw Domain: Accelerating Action Recognition in the Compressed Domain 11 5 min
17:35 ~ 17:40 05:35 ~ 05:40 Constraint-Aware Importance Estimation for Global Filter Pruning under Multiple Resource Constraints 13 5 min
17:40 ~ 17:45 05:40 ~ 05:45 Computer-aided diagnosis system of lung carcinoma using Convolutional Neural Networks 16 5 min
17:45 ~ 17:50 05:45 ~ 05:50 Fast Hardware-Aware Neural Architecture Search 18 5 min
17:50 ~ 17:55 05:50 ~ 05:55 Learning Sparse Neural Networks Through Mixture-Distributed Regularization 19 5 min
17:55 ~ 18:00 05:55 ~ 06:00 RefineDetLite: A Lightweight One-stage Object Detection Framework for CPU-only Devices 23 5 min
18:00 ~ 18:05 06:00 ~ 06:05 Ternary MobileNets via Per-Layer Hybrid Filter Banks 34 5 min
18:05 ~ 18:10 06:05 ~ 06:10 Now that I can see, I can improve : Enabling data-driven finetuning of CNNs on the edge 36 5 min
18:10 ~ 18:15 06:10 ~ 06:15 Monte Carlo Gradient Quantization 42 5 min


Submission

All submissions will be handled electronically via the workshop’s CMT Website. Click the following link to go to the submission site: https://cmt3.research.microsoft.com/EDLCV2020/

Papers should describe original and unpublished work on the related topics. Each paper will receive double-blind reviews, moderated by the workshop chairs. Authors should take into account the following:

  • All papers must be written and presented in English.
  • All papers must be submitted in PDF format. The workshop paper format guidelines are the same as for the Main Conference papers.
  • The maximum paper length is 8 pages (excluding references). Note that shorter submissions are also welcome.
  • The accepted papers will be published in CVF open access as well as in IEEE Xplore.


Organizers

Main Contacts

If you have questions, please contact:

  • Dr. Li Liu: li.liu@oulu.fi

  • Dr. Peter Vajda: vajdap@fb.com

  • Dr. Werner Bailer: werner.bailer@joanneum.at
