‹Programming› 2023
Mon 13 - Fri 17 March 2023, Tokyo, Japan

Similarity, or clone, detection has important applications in detecting copyright violations and software theft, in code search, and in identifying malicious components. There are now a good number of open-source and proprietary clone detectors for programs written in traditional programming languages. However, the increasing adoption of deep learning models in software poses a challenge to these tools: these models implement functions that are inscrutable black boxes. As more software includes these DNN functions, new techniques are needed to assess the similarity between the deep learning components of software.

Previous work has introduced techniques for comparing the representations learned at various layers of deep neural network models by feeding canonical inputs to the models. Our goal is to be able to compare DNN functions when canonical inputs are not available, because they may not be available in many application scenarios. The challenge, then, is to generate appropriate inputs and to identify a metric that, for those inputs, is capable of representing the degree of functional similarity between two comparable DNN functions.

Our approach uses random inputs with values between −1 and 1, in a shape compatible with what the DNN models expect. We then compare the models' outputs by performing correlation analysis.

Our study shows that it is possible to perform similarity analysis even in the absence of meaningful canonical inputs. The responses of two comparable DNN functions to random inputs expose those functions' similarity, or lack thereof. Of all the metrics we tried, we find that Spearman's rank correlation coefficient is the most powerful and versatile, although in special cases other methods and metrics are more expressive.
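To illustrate the approach, the following is a minimal sketch, not the authors' implementation: it probes two models with identical random inputs drawn from [−1, 1] and scores their functional similarity with Spearman's rank correlation. The function name, the callable-model interface, and the input shape are assumptions made for illustration.

```python
# Minimal sketch of the random-input probing idea described above.
# Assumes model_a and model_b are callable (e.g., Keras models or
# PyTorch modules in eval mode) and produce comparable output vectors.
import numpy as np
from scipy.stats import spearmanr

def similarity_score(model_a, model_b, input_shape, n_inputs=100, seed=0):
    """Feed identical random inputs in [-1, 1] to both models and return
    the mean Spearman rank correlation between their flattened outputs."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_inputs):
        # Random input in [-1, 1], shaped as a single-sample batch.
        x = rng.uniform(-1.0, 1.0, size=(1, *input_shape)).astype("float32")
        out_a = np.asarray(model_a(x)).ravel()
        out_b = np.asarray(model_b(x)).ravel()
        rho, _ = spearmanr(out_a, out_b)
        scores.append(rho)
    return float(np.mean(scores))

# Hypothetical usage: a score near 1 suggests the two DNN functions
# behave alike; a score near 0 suggests unrelated functions.
# score = similarity_score(model_a, model_b, input_shape=(28, 28, 1))
```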

We present a systematic empirical study comparing the effectiveness of several similarity metrics using a dataset of 56,355 classifiers collected from GitHub. This is accompanied by a sensitivity analysis that reveals how certain training-related properties of the models affect the effectiveness of the similarity metrics.

To the best of our knowledge, this is the first work to show how the similarity of DNN functions can be detected using random inputs. Our study of correlation metrics, and the identification of the Spearman rank correlation coefficient as the most powerful among them for this purpose, establishes a complete and practical method for DNN clone detection that can be used in the design of new tools. It may also serve as inspiration for other program analysis tasks whose approaches break in the presence of DNN components.

Wed 15 Mar

Displayed time zone: Osaka, Sapporo, Tokyo

09:00 - 10:30
Research Papers 1 (Research Papers) at Faculty of Engineering Building 2, Room 212
Chair(s): Philipp Haller KTH Royal Institute of Technology
09:00
30m
Talk
A Functional Programming Language with Versions (Vol. 6)
Research Papers
Yudai Tanabe (Tokyo Institute of Technology), Luthfan Anshar Lubis, Tomoyuki Aotani (Tokyo Institute of Technology), Hidehiko Masuhara (Tokyo Institute of Technology)
Link to publication
09:30
30m
Talk
Compilation Forking: A Fast and Flexible Way of Generating Data for Compiler-Internal Machine Learning Tasks (Vol. 7)
Research Papers
Raphael Mosaner (JKU Linz), David Leopoldseder (Oracle Labs), Wolfgang Kisling (Johannes Kepler University Linz), Lukas Stadler (Oracle Labs, Austria), Hanspeter Mössenböck (JKU Linz)
Link to publication
10:00
30m
Talk
Black Boxes, White Noise: Similarity Detection for Neural Functions (Vol. 7, remote)
Research Papers
Farima Farmahinifarahani (University of California, Irvine), Crista Lopes (University of California, Irvine)
Link to publication