Prof. Hong Zhu

Oxford Brookes University, UK

Speech Title: Enhancing Performance of Machine Learning Models Through Scenario-Based Functional Testing

Abstract

One of the crucial differences between machine learning models and software written in programming languages is that ML models cannot be debugged by editing their structure or parameters. A challenge in the development of machine learning applications is therefore how to improve a system's performance based on testing results. Addressing this problem, we propose a scenario-based functional testing approach, which consists of an iterative cycle of discovery, diagnosis, treatment and evaluation. It starts with exploratory testing aimed at discovering the scenarios in which the ML model's performance is weak. This is followed by diagnostic testing, which further tests the model on the suspected weak scenarios and statistically evaluates its performance on them to confirm the suspected weaknesses. Once a weak scenario is confirmed by the test results, the model is treated by retraining it with additional training data targeting that scenario. Finally, after the treatment, the model is evaluated again by testing on the treated scenarios as well as other scenarios, to check whether the treatment is effective and free of side effects.
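
As a sketch, the cycle can be expressed as a simple loop. The four phase functions and their interfaces below are illustrative assumptions for exposition, not part of the tooling described in the talk.

```python
# A minimal sketch of the discover-diagnose-treat-evaluate cycle, with the
# four phases supplied as caller-provided (hypothetical) functions.
def scenario_based_improvement(model, explore, diagnose, treat, evaluate,
                               max_rounds=5):
    for _ in range(max_rounds):
        # Discovery: exploratory testing proposes suspected weak scenarios.
        suspects = explore(model)
        # Diagnosis: targeted tests plus statistical evaluation confirm
        # which of the suspected scenarios are genuinely weak.
        confirmed = [s for s in suspects if diagnose(model, s)]
        if not confirmed:
            break  # no confirmed weaknesses remain to treat
        # Treatment: retrain the model on data targeting the weak scenarios.
        model = treat(model, confirmed)
        # Evaluation: re-test treated and untreated scenarios to check that
        # the treatment worked and caused no side effects elsewhere.
        if not evaluate(model, confirmed):
            break
    return model
```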

A key factor in the success of the proposed method is the retraining of the model, which must improve performance on the treated scenario without compromising performance on the other scenarios. A common obstacle is the so-called catastrophic forgetting effect: when a model is retrained on new data, its performance on the data it was previously trained on decreases. Our solution to this problem is to use a transfer learning technique combined with rehearsal. That is, retraining starts with the original model as the base, and the retraining dataset consists of new training data specifically targeting the scenario to be treated plus a subset of the original training data selected at random.
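
The following sketch shows one way the rehearsal dataset might be assembled, assuming PyTorch-style datasets; the function name and the rehearsal_fraction parameter are illustrative assumptions, as the abstract does not specify the size of the replayed subset.

```python
# Hedged sketch: combine new scenario-targeted data with a random replay
# subset of the original training data (rehearsal). Retraining would then
# fine-tune the original model on this combined set, rather than training
# a fresh model from scratch (transfer learning from the original base).
import random
from torch.utils.data import ConcatDataset, Subset

def build_retraining_set(original_data, scenario_data, rehearsal_fraction=0.2):
    # rehearsal_fraction is an assumed knob, not a value from the talk.
    k = int(len(original_data) * rehearsal_fraction)
    replay_indices = random.sample(range(len(original_data)), k)
    replayed = Subset(original_data, replay_indices)
    return ConcatDataset([scenario_data, replayed])
```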

Another key issue for the proposed method is how to generate training and testing datasets for the various scenarios and how to perform repeated tests efficiently and effectively. Our solution is to employ the datamorphic testing methodology: data augmentations are implemented as datamorphisms and metamorphisms, and these are organised into a well-structured, reusable and evolvable test system, so that an automated datamorphic testing tool such as Morphy can be employed to achieve test automation.
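
To illustrate the two kinds of morphisms in plain Python (rather than Morphy's own notation), the sketch below uses a brightness shift as the datamorphism and assumes, hypothetically, a perception model that maps an image to a scalar prediction; the tolerance value is likewise an assumption.

```python
import numpy as np

def brighten(image, delta=20):
    """Datamorphism: derive a new test case from an existing one via a
    meaningful data augmentation (here, a brightness shift)."""
    return np.clip(image.astype(int) + delta, 0, 255).astype(np.uint8)

def brightness_invariance(model, image, tolerance=0.05):
    """Metamorphism: a checkable relation between the model's outputs on
    the original and the morphed test case. A small brightness change
    should not substantially change the model's prediction."""
    return abs(model(image) - model(brighten(image))) <= tolerance
```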

The paper reports a case study with a real deep neural network (DNN) model: the perception system of an autonomous racing car. We demonstrate that the method is effective, in the sense that the DNN model's performance can be improved, and efficient, in that it enhances the model's performance with far less human and compute resource than retraining from scratch.


Biography

Dr. Hong Zhu is a professor of computer science at Oxford Brookes University, Oxford, UK, where he chairs the Cloud Computing and Cybersecurity Research Group. He obtained his BSc, MSc and PhD degrees in Computer Science from Nanjing University, China, in 1982, 1984 and 1987, respectively. He was a faculty member at Nanjing University from 1987 to 1998. He joined Oxford Brookes University in November 1998 as a senior lecturer in computing and became a professor in October 2004. His research interests lie in software development methodologies, including software engineering for cloud computing, software engineering of AI and machine learning applications, formal methods, software design, programming languages and automated tools, and software modelling and testing. He has published 2 books and more than 200 research papers in journals and at international conferences. He is a senior member of the IEEE and a member of the British Computer Society and the ACM.