Talk 1

Title: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation

Speaker: Yu Yi

Abstract:

In this work, we introduce Panoptic-DeepLab, a simple, strong, and fast system for panoptic segmentation, aiming to establish a solid baseline for bottom-up methods that can achieve performance comparable to two-stage methods while yielding fast inference speed. In particular, Panoptic-DeepLab adopts dual-ASPP and dual-decoder structures specific to semantic and instance segmentation, respectively. The semantic segmentation branch follows the typical design of a semantic segmentation model (e.g., DeepLab), while the instance segmentation branch is class-agnostic, involving a simple instance center regression. As a result, our single Panoptic-DeepLab simultaneously ranks first on all three Cityscapes benchmarks, setting a new state of the art of 84.2% mIoU, 39.0% AP, and 65.5% PQ on the test set. Additionally, equipped with MobileNetV3, Panoptic-DeepLab runs nearly in real time on a single 1025 × 2049 image (15.8 frames per second), while achieving competitive performance on Cityscapes (54.1% PQ on the test set). On the Mapillary Vistas test set, our ensemble of six models attains 42.7% PQ, outperforming the 2018 challenge winner by a healthy margin of 1.5%. Finally, our Panoptic-DeepLab also performs on par with several top-down approaches on the challenging COCO dataset. For the first time, we demonstrate that a bottom-up approach can deliver state-of-the-art results on panoptic segmentation.
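For readers unfamiliar with center-based grouping, the minimal sketch below illustrates the post-processing idea behind class-agnostic instance center regression: each pixel's predicted offset points toward its instance center, and pixels are assigned to the nearest candidate center. The function and array names are illustrative assumptions, not the released Panoptic-DeepLab code.

```python
import numpy as np

def group_pixels(centers, offsets):
    """Assign each pixel to its nearest predicted instance center.

    centers: (K, 2) array of candidate instance-center coordinates (y, x).
    offsets: (H, W, 2) array of per-pixel offset vectors pointing toward
             the center of the instance each pixel belongs to.
    Returns an (H, W) array of instance ids (indices into `centers`).
    """
    h, w, _ = offsets.shape
    ys, xs = np.mgrid[0:h, 0:w]                        # pixel grid coordinates
    regressed = np.stack([ys, xs], axis=-1) + offsets  # location + offset = regressed center
    # Distance from each regressed location to every candidate center.
    dists = np.linalg.norm(
        regressed[:, :, None, :] - centers[None, None, :, :], axis=-1
    )
    return dists.argmin(axis=-1)                       # id of the nearest center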

Supervisor: Yan Chen

Talk 2

Title: Research on a Spark-Based Parallel SVM Parameter Optimization Algorithm

Speaker: Zhangyu Cao

Abstract:

To address the problems of traditional support vector machine parameter optimization algorithms on large sample data sets, such as long runtimes and excessive memory consumption, we propose a parallel Support Vector Machine (SVM) parameter optimization algorithm based on the Spark general-purpose computing engine. The algorithm first uses a Spark cluster to distribute the training set to each executor in the form of broadcast variables, and then parallelizes the SVM parameter optimization process. During parameter optimization, each executor is load-balanced by controlling the parallelism of the tasks, thereby speeding up the search. Finally, experimental results show that the proposed algorithm can improve search speed and reduce optimization time by setting a reasonable task parallelism and making full use of cluster resources.

Keywords: support vector machine; parameter optimization; Spark; parallelism; load balancing
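As a rough illustration of the pattern described above, the sketch below broadcasts the training set to every executor and evaluates a (C, gamma) grid as parallel Spark tasks, with `numSlices` controlling task parallelism. The data file names, the `evaluate` helper, and the use of scikit-learn on the executors are assumptions for illustration, not the speaker's implementation.

```python
from itertools import product

import numpy as np
from pyspark import SparkContext
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

sc = SparkContext(appName="spark-svm-grid-search")

# Hypothetical training data; shipped once to each executor as a broadcast variable.
X, y = np.load("features.npy"), np.load("labels.npy")
bc_data = sc.broadcast((X, y))

param_grid = list(product([0.1, 1, 10, 100],        # candidate C values
                          [1e-3, 1e-2, 1e-1, 1]))   # candidate gamma values

def evaluate(params):
    """Train and cross-validate one SVM configuration on the broadcast data."""
    C, gamma = params
    X_local, y_local = bc_data.value
    score = cross_val_score(SVC(C=C, gamma=gamma), X_local, y_local, cv=5).mean()
    return params, score

# numSlices sets the task parallelism, which determines how the grid is
# load-balanced across executors.
results = sc.parallelize(param_grid, numSlices=len(param_grid)).map(evaluate).collect()
best_params, best_score = max(results, key=lambda r: r[1])
```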

Supervisor: Dan Deng

 

Time: 16:00, November 26, 2020

Address: MingLi Building C1102

Chair: Gang Zhi