Advancement: Towards Fast And Accurate Structured Prediction

Speaker Name: Sriram Srinivasan
Speaker Title: PhD Student (Advisor: Lise Getoor)
Speaker Organization: Computer Science
Start Time: Wednesday, November 14, 2018 - 11:30am
End Time: Wednesday, November 14, 2018 - 11:30am
Location: Baskin Engineering, Room 330
Lise Getoor

Abstract: Complex tasks such as sequence labeling, collective classification, and activity recognition involve predicting outputs that are interdependent, a problem known as structured prediction. Approaches such as structured support vector machines, conditional random fields, and statistical relational learning (SRL) frameworks are effective at structured prediction. Among these, SRL frameworks are unique in combining the ease of writing logical statements with the power of probabilistic models. However, SRL frameworks face several challenges. In this proposal, I focus on four key challenges: 1) inference - the models generated by SRL frameworks are often very large, which can make inference computationally expensive; 2) weight learning - the logical rules that define an SRL model are weighted, and choosing the weights that maximize a user-defined performance metric is hard; 3) structure learning - finding the best set of rules that define the graphical model is computationally intractable; and 4) exploiting structure - exploiting symmetries to perform efficient inference modifies the inherent structure of the problem, altering the complexity of inference. I address the first challenge by developing an approach that detects and exploits exact symmetries in the model, making inference more tractable; I further propose methods that exploit approximate symmetries. Next, I develop an approach based on Bayesian optimization to perform weight learning that maximizes a user-defined metric. For the third challenge, I propose a method that combines gradient information with the Bayesian optimization approach to perform structure learning. Finally, for the last challenge, I propose to explore a variety of optimization approaches, analyze their efficiency when performing inference on models with different structures, and identify properties that affect inference performance.
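As a rough illustration of the weight-learning setting described above (a sketch, not code from the proposal): the rule weights of an SRL model can be treated as a black-box search problem whose objective is a user-defined metric. The toy model, rules, data, and metric below are all hypothetical, and a simple random search stands in for the Bayesian-optimization loop:

```python
import random

# Toy relational model: nodes with local evidence, linked by edges.
# Two hypothetical weighted rules: "trust local evidence" (w_local) and
# "linked nodes agree" (w_link), loosely in the spirit of an SRL rule like
#   w_link : Link(A, B) & Positive(A) -> Positive(B)
EDGES = [(0, 1), (1, 2), (2, 3)]      # links between nodes
LOCAL = [0.9, 0.6, 0.4, 0.1]          # local evidence each node is positive
TRUE_LABELS = [1, 1, 0, 0]            # held-out labels for the metric

def predict(w_local, w_link):
    """Crude inference: score by local evidence, then smooth over links."""
    scores = [w_local * p for p in LOCAL]
    for a, b in EDGES:
        avg = (scores[a] + scores[b]) / 2
        scores[a] += w_link * (avg - scores[a])
        scores[b] += w_link * (avg - scores[b])
    return [1 if s > 0.5 * w_local else 0 for s in scores]

def accuracy(weights):
    """The user-defined metric being maximized (here: plain accuracy)."""
    pred = predict(*weights)
    return sum(p == t for p, t in zip(pred, TRUE_LABELS)) / len(TRUE_LABELS)

# Black-box search over rule weights; a Bayesian optimizer would replace
# the uniform sampling with a surrogate-guided proposal of the next weights.
random.seed(0)
best_w, best_acc = None, -1.0
for _ in range(200):
    w = (random.uniform(0.1, 2.0), random.uniform(0.0, 1.0))
    acc = accuracy(w)
    if acc > best_acc:
        best_w, best_acc = w, acc
```

The key point is that the metric need not be differentiable in the weights, which is why a black-box method such as Bayesian optimization is attractive here.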