Title: Consensus Learning for Sparse Principal Component Analysis

Speaker: Prof. Yuh-Jye Lee (李育杰), Department of Applied Mathematics, National Yang Ming Chiao Tung University

Time: Thursday, December 9, 2021, 2:10-4:00 PM

Venue: B302A, Business and Management Building, Tamsui Campus

Tea Reception: Thursday, December 9, 2021, 1:30 PM (B1102, Business and Management Building)

 

Abstract

 

        Learning from data has become the mainstream in modern machine learning applications. The more data we have, the better our machine learning methods perform, provided the data quality is good enough. However, in many real applications, data owners may be unwilling, or not legally allowed, to share their data because of privacy issues or regulatory concerns. To address this problem, we proposed a consensus learning framework, also known as federated learning, a term coined by Google in 2016. This framework has been applied to linear and nonlinear SVM, LASSO, and PCA. In this work, we apply it to sparse principal component analysis (SPCA). The sparse loadings contain as few non-zero elements as possible while preserving as much of the data variation as possible when the data are projected onto them. SPCA is thus not only an unsupervised dimension reduction algorithm but also a way to identify the key explanatory variables. Our proposed method, consensus learning SPCA (CLSPCA), is solved by ADMM, a distributed optimization algorithm. Essentially, each data owner trains its model separately; a central master collects the models generated by the workers, fuses them, and broadcasts the fused model back so that each worker can improve its own. After a certain number of iterations, the resulting model approaches the model that would be obtained by pooling all the data together, yet the data owners never share their data with anyone during the entire training process. We will demonstrate our CLSPCA results on a synthetic dataset as well as a public dataset.
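        The abstract describes the master-worker training loop only in words. As a rough illustration, here is a minimal consensus-ADMM sketch for estimating a single sparse loading vector. This is not the speaker's actual CLSPCA algorithm: the single-component L1-penalized formulation, the parameters lam and rho, and the assumption that each owner k exposes only per-round iterates computed from a local covariance matrix Sigma_k are all illustrative choices.

```python
import numpy as np

def local_update(Sigma_k, v, rho):
    """Worker step: argmin_x  -x^T Sigma_k x + (rho/2)||x - v||^2.
    Setting the gradient to zero gives (rho*I - 2*Sigma_k) x = rho*v,
    which has a unique solution whenever rho > 2*lambda_max(Sigma_k)."""
    d = Sigma_k.shape[0]
    return np.linalg.solve(rho * np.eye(d) - 2.0 * Sigma_k, rho * v)

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def consensus_sparse_pca(local_covs, lam=0.1, rho=None, n_iters=100):
    """Estimate one sparse loading from per-owner covariance matrices.
    Each worker touches only its own Sigma_k; the master sees only the
    worker iterates, never the raw data."""
    K = len(local_covs)
    d = local_covs[0].shape[0]
    if rho is None:  # large enough to make every local subproblem strongly convex
        rho = 2.0 * max(np.linalg.eigvalsh(S)[-1] for S in local_covs) + 1.0
    z = np.random.default_rng(0).standard_normal(d)
    z /= np.linalg.norm(z)
    U = [np.zeros(d) for _ in range(K)]  # scaled dual variables, one per worker
    for _ in range(n_iters):
        # Workers solve their local subproblems in parallel.
        X = [local_update(S, z - u, rho) for S, u in zip(local_covs, U)]
        # Master fuses the worker models and sparsifies the consensus variable.
        z = soft_threshold(np.mean([x + u for x, u in zip(X, U)], axis=0),
                           lam / (rho * K))
        nz = np.linalg.norm(z)
        if nz > 1.0:  # simplifying heuristic: keep the loading in the unit ball
            z /= nz
        # Workers update their dual variables from the broadcast consensus.
        U = [u + x - z for u, x in zip(U, X)]
    return z
```

        In this sketch, rho is chosen above twice the largest local eigenvalue so that the nonconvex worker subproblem has a closed-form solution, and the unit-ball projection after soft-thresholding is a heuristic stand-in for the norm constraint; the actual CLSPCA formulation and fusion rule presented in the talk may differ.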

