My name is Jialin Lu 陸家林. I also go by the name Lucy, which has been with me for years.


Currently I work at the Huawei Vancouver research center.


Interests


What I keep telling myself is to develop simple learning algorithms following the bitter lesson, while trying to strike a balance between human interpretability and practical maintenance. The bitter lesson strikes constantly, like a relentless hammer. How to leverage it to construct systems that are interpretable, sense-making, and precise (precise at the level of the symbolic approaches: constraint solvers, planning, etc.) is a real headache.




I can be reached at luxxxlucy@gmail.com or jialin_lu@sfu.ca



MSc Computing Science, Simon Fraser University
with Martin Ester

BEng Computer Science, Zhejiang University
BEng Industrial Design, Zhejiang University

CV   Mastodon      


About my education

The degree in design was obtained, with sincere gratitude, through the International Design Institute of Zhejiang University and the Studio of Design and Innovation program. My master's thesis (with Martin Ester) focuses on building interpretable models by infusing deep learning with the crispness of logic, which relates to the general theme of neuro-symbolic integration.

More about me

I was born in Lanxi (蘭谿, 'the orchid river'), a typical southern Yangtze river town. My father practices the art of traditional Chinese calligraphy as a profession, specializing in Wei-dynasty inscriptions, and is also a provincially recognized calligraphy mentor. I received intensive training from him when I was younger, until I had reached some local, provincial, and national awards and exhibitions, and then ceased training for various reasons. I specialize in the classics of Zhao Mengfu (趙孟頫) and Chu Suiliang (褚遂良), in particular 妙嚴寺記 and 陰符經.

About my master thesis

I started to think about interpretability by analysing and interpreting complex systems: networked, biological, and artificial machine learning ones. What I found is that interpretability of a complex system is often unrealistic in the common sense, and we lack any proper measure of it, unless we have a clear mechanistic understanding of what kind of interpretation we need. This is also the case if we wish to develop and interpret artificial systems: the bitter lesson keeps telling us the simple rule that general methods which leverage massive computation and parameters are ultimately the most effective, and how can we even interpret such large-scale models?

Part of me tries to believe in Occam's razor and Mies van der Rohe's 'less is more' (partly due to my undergraduate education in design), but I soon realized that 'more is different' is inevitable when it comes to complex systems. David Marr's levels of analysis tell us something, but not quite enough; in particular, they do not guide us in building artificial systems, where we merely need some sense of control over "what is going on here".

So what I did in my master's thesis is this: if we intentionally assume the separation and combination of perception and reasoning, we can build a hybrid computational model consisting of a neural network for perception, processing raw data into a learned representation, and a logic program for reasoning over the perceived representation. It employs simple learning rules (thus following the bitter lesson) while still providing sense-making interpretations of its decision making.
However, I found more problems upon finishing this project than when I started. Developing a method that can reliably optimize a program is one thing; beyond that, there seems to be an even more important problem of symbol grounding.
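
To give a flavour, here is a minimal sketch of such a hybrid model in PyTorch. This is an illustrative rendering of the perception/reasoning split, not the thesis code; the `SoftDNF` layer, its names, and its shapes are made up for this sketch.

```python
# A minimal sketch of the perception + reasoning split (illustrative only).
import torch
import torch.nn as nn

class SoftDNF(nn.Module):
    """Differentiable disjunctive normal form over soft predicates in [0, 1]."""
    def __init__(self, n_predicates, n_conjunctions, n_outputs):
        super().__init__()
        # Soft membership of each predicate in each conjunction,
        # and of each conjunction in each disjunction.
        self.conj_w = nn.Parameter(torch.randn(n_conjunctions, n_predicates))
        self.disj_w = nn.Parameter(torch.randn(n_outputs, n_conjunctions))

    def forward(self, x):                    # x: (batch, n_predicates)
        m_c = torch.sigmoid(self.conj_w)     # inclusion masks in [0, 1]
        m_d = torch.sigmoid(self.disj_w)
        # AND: a conjunction holds iff no included predicate is false.
        conj = torch.prod(1 - m_c * (1 - x.unsqueeze(1)), dim=-1)
        # OR: a disjunction holds iff some included conjunction is true.
        return 1 - torch.prod(1 - m_d * conj.unsqueeze(1), dim=-1)

# Perception: raw pixels -> soft predicates; reasoning: SoftDNF over them.
perception = nn.Sequential(nn.Flatten(), nn.Linear(784, 16), nn.Sigmoid())
model = nn.Sequential(perception, SoftDNF(16, 8, 2))
out = model(torch.rand(4, 1, 28, 28))        # (4, 2) soft truth values
```

The point of this construction is that, in principle, thresholding the learned masks after training recovers a crisp rule set in disjunctive normal form, which is where the sense-making interpretation would come from.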

Some other things: All Time Logs (records of my laziness)

Papers

A brief introduction to my research can be found here

Interpretable drug response prediction using a knowledge-based neural network
Oliver Snow, Hossein Sharifi Noghabi, Jialin Lu, Olga Zolotareva, Mark Lee, Martin Ester

KDD 2021


Neural Disjunctive Normal Form
Jialin Lu   Link

2021 Spring. Master thesis


Revisit Recurrent Attention Model from an Active Sampling Perspective
Jialin Lu   Paper

NeurIPS 2019 Neuro↔AI Workshop


An Active Approach for Model Interpretation
Jialin Lu, Martin Ester   Paper

NeurIPS 2019 workshop on Human-centric machine learning (HCML2019)


Checking Functional Modularity in DNN By Biclustering Task-specific Hidden Neurons
Jialin Lu, Martin Ester   Paper

NeurIPS 2019 Neuro↔AI Workshop


Current Projects


Applying interpretable and lightweight, symbolic-ish approaches (synthesis, solvers), together with some neural networks, to 2D graphics applications

"Big things has small beginnings" (T. E. Lawrence or David, depending on your age)


Recent Blogs

Misc

Mirror-Integration and Functional-Regularisation for better control of Deep Nets   slide

Oct 14, 2020 at Simon Fraser University


On more interesting blocks with discrete parameters in deep learning
tutorial presentation,   slide

July 8, 2020 at Simon Fraser University


An outsider's survey of Bayesian Deep Learning
during a lab meeting,   slide   handout   link

Feb 26, 2020 at Simon Fraser University


Patterns of Shang-dynasty Ritual Bronze Vessels (in Chinese)
Invited talk for a special-interest group,   Video

Apr 2, 2018 at Zhejiang University



Past Projects

Paused research projects

Other Misc Projects





甚矣!汝之不惠! (Great indeed is your lack of wisdom!)

Jialin Lu, Hangzhou→Vancouver→?