
The College of Engineering - Electrical & Computer Engineering

Tough Talk Graduate Seminar

Tough Talk is a weekly seminar given by graduate and undergraduate students, as well as faculty and distinguished guests, on their active research projects. Tough Talk is no ordinary seminar series. It is a lively discussion in which we offer the speaker constructive criticism on everything from the technical content of the presentation to fluency, language skills, and the ability to engage the audience. We reward not only the best presenters but also the audience members whose comments and questions prove most insightful and useful to the project being presented. Tough Talk is open to the public. Speakers, topics, and short abstracts will be posted here as talks are scheduled.

Fall 2013 Tough Talk Schedule

This page will be constantly updated as new talks are scheduled.

October 25, 2013

Title: Face Recognition Based on Sparse Representation
Speaker: Turan Can Artunc

Abstract: Parsimony has a rich history as a guiding principle for inference, and its role in human perception has also been strongly supported by studies of human vision. One of its most celebrated instantiations, the principle of minimum description length in model selection, stipulates that within a hierarchy of model classes, the model that yields the most compact representation should be preferred for decision-making tasks such as classification. In the statistical signal processing community, the algorithmic problem of computing sparse linear representations with respect to an overcomplete dictionary of base elements or signal atoms has seen a recent surge of interest. This excitement emanates from the discovery that whenever the optimal representation is sufficiently sparse, it can be computed efficiently by convex optimization, even though the problem can be extremely difficult in the general case. In this presentation, the discriminative nature of sparse representation is exploited to perform automatic face recognition. We will address the question of which features are best for classification and show that the theory of compressed sensing implies the precise choice of feature space is no longer critical: even random features contain enough information to recover the sparse representation and hence correctly classify any test image. The theory is validated by experiments on the Extended Yale Face Database B.
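
For a concrete feel for the approach, here is a minimal sketch of sparse-representation classification with random feature projections, in the spirit of the talk. It is not the speaker's implementation: synthetic vectors stand in for the Extended Yale Face Database B images, and the projection dimension, Lasso penalty, and class sizes are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy stand-in for a face database: flattened images, several per class.
n_classes, n_per_class, img_dim = 3, 10, 1024
train = rng.standard_normal((n_classes * n_per_class, img_dim))
labels = np.repeat(np.arange(n_classes), n_per_class)
test = train[0] + 0.05 * rng.standard_normal(img_dim)  # noisy class-0 image

# Random feature projection: compressed sensing suggests a random matrix
# retains enough information to recover the sparse code.
d = 64
R = rng.standard_normal((d, img_dim)) / np.sqrt(d)
A = R @ train.T                     # dictionary of projected training faces
A /= np.linalg.norm(A, axis=0)      # unit-norm atoms
y = R @ test

# Sparse code via l1-regularized least squares (a convex surrogate for l0).
x = Lasso(alpha=0.01, max_iter=10000, fit_intercept=False).fit(A, y).coef_

# Classify by which class's coefficients best reconstruct the test sample.
residuals = [np.linalg.norm(y - A[:, labels == c] @ x[labels == c])
             for c in range(n_classes)]
print("predicted class:", int(np.argmin(residuals)))
```

The residual-based decision rule is the discriminative step: only the coefficients associated with the correct class should reconstruct the test sample well.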


October 11, 2013

Title: Active Learning in Initially Labeled Nonstationary Environments
Speaker: Rob Capo

Abstract: The recent surge in online connectivity allows us to collect more data than ever before. This data can represent almost anything: email messages, bank transactions, online trends, sensor readings, and more. The raw data, however, usually tells us little until it has been classified (e.g., spam or not spam, fraudulent or not fraudulent). To complicate matters further, the class distributions of the incoming data often change by nature. We approach the problem of nonstationary classification, a common thread in machine learning. One challenging version of this problem that has received less attention is classifying nonstationary data when only a small initial set is labeled. After this initial labeled set, we have access primarily to unlabeled data, which ideally changes only gradually. We refer to these environments as initially labeled nonstationary streaming environments (ILNSEs). Challenges introduced by ILNSEs include sudden changes (e.g., the addition of a new class or an abrupt drift of an existing class) and mixing class distributions. We present COMPOSE.AL, an active batch learning method that performs well even when these challenges arise. COMPOSE.AL requires a batch of unlabeled data at each time step. It analyzes the data and may request the class labels of carefully selected, informative instances from the current batch; this is referred to as active learning. Because labeled data is generally expensive, the algorithm requests labels only when it needs them. COMPOSE.AL then classifies all of the unlabeled data in the batch and uses it to help classify the next batch it receives. The process continues as long as unlabeled data is available. We are also interested in exploring online classification methods for ILNSEs, in which streaming data can be classified as it is received rather than in batches. Potential solutions for such online classification are presented briefly.
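
As a rough illustration of the batch loop described above, the sketch below self-labels a drifting stream and queries an oracle only for the most uncertain instances in each batch. It is a generic uncertainty-sampling skeleton, not COMPOSE.AL itself (whose core-support extraction step is omitted); `drifting_batch` and the label budget are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def drifting_batch(t, n=200):
    """Two Gaussian classes whose means drift slowly with time step t."""
    shift = 0.05 * t
    X0 = rng.normal([0.0 + shift, 0.0], 0.5, size=(n // 2, 2))
    X1 = rng.normal([2.0 + shift, 2.0], 0.5, size=(n // 2, 2))
    return np.vstack([X0, X1]), np.r_[np.zeros(n // 2), np.ones(n // 2)]

# Small initial labeled set, then a stream of mostly unlabeled batches.
X_lab, y_lab = drifting_batch(0, n=40)
clf = LogisticRegression().fit(X_lab, y_lab)

budget = 5                                 # labels we will pay for per time step
for t in range(1, 20):
    X, y_true = drifting_batch(t)          # y_true only simulates the oracle
    uncertainty = 1.0 - clf.predict_proba(X).max(axis=1)
    query = np.argsort(uncertainty)[-budget:]   # least-confident instances

    y_hat = clf.predict(X)                 # self-label the batch...
    y_hat[query] = y_true[query]           # ...but ask the oracle where unsure

    X_lab, y_lab = np.vstack([X_lab, X]), np.r_[y_lab, y_hat]
    clf = LogisticRegression().fit(X_lab, y_lab)
    print(f"t={t:2d}  accuracy={np.mean(clf.predict(X) == y_true):.2f}")
```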


October 14, 2013

Title: Temporal Memory Learning
Speaker: Tony Samaritano

Abstract: A truly intelligent machine has been the Holy Grail of machine learning and pattern recognition since the field's modern inception more than half a century ago. Intelligence itself is a controversial topic, with competing views from neuroscientists, machine learning researchers, and experts in dozens of other fields. Recent breakthroughs in brain imaging have allowed researchers to better understand how signals are routed and memories are formed in the human neocortex. Jeff Hawkins, an entrepreneur, computer scientist, and neuroscience researcher at Berkeley, has outlined a fundamental learning algorithm that he and many others believe captures how the human neocortex learns and how that learning relates to intelligence. In this talk, we will outline the Hierarchical Temporal Memory (HTM) system as defined by Jeff Hawkins, our interpretation of intelligence, and our implementation of this cortical learning algorithm (CLA). Furthermore, we will discuss where the algorithm excels and where it falters relative to the broader field of machine learning, and share our initial results.
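
HTM is far too large to reproduce here, but the toy sketch below shows one ingredient in isolation: a drastically simplified spatial pooler that maps binary inputs to sparse distributed representations via k-winners-take-all inhibition and Hebbian permanence updates. All constants are illustrative assumptions, the temporal (sequence) memory half of the CLA is omitted entirely, and this is not the speakers' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simplified HTM-style spatial pooler: columns compete to represent a
# binary input, and only the top k stay active (a sparse representation).
n_inputs, n_columns, sparsity = 64, 128, 0.05
n_active = int(sparsity * n_columns)

# Each column keeps a permanence value per input bit; a synapse counts as
# "connected" once its permanence crosses a threshold.
permanence = rng.uniform(0.0, 0.6, size=(n_columns, n_inputs))
CONNECTED, P_INC, P_DEC = 0.5, 0.05, 0.02

def spatial_pool(x, learn=True):
    """Return the indices of the winning columns for binary input x."""
    connected = (permanence >= CONNECTED).astype(int)
    overlap = connected @ x                    # active connected synapses
    winners = np.argsort(overlap)[-n_active:]  # k-winners-take-all inhibition
    if learn:  # Hebbian update: reinforce synapses to active input bits
        permanence[winners] += np.where(x > 0, P_INC, -P_DEC)
        np.clip(permanence, 0.0, 1.0, out=permanence)
    return winners

x = (rng.random(n_inputs) < 0.2).astype(int)   # a random binary input
for _ in range(20):                            # repeated exposure stabilizes the code
    sdr = spatial_pool(x)
print("active columns:", np.sort(sdr))
```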


September 27, 2013

Title: Probabilistic Non-negative Matrix Factorization: Theory and Application to Microarray Data Analysis
Speaker: Belhassan Bayar

Abstract: Non-negative matrix factorization (NMF) has proven to be a useful decomposition for multivariate data, where the non-negativity constraint is necessary for a meaningful physical interpretation. It has been widely applied in clustering and feature extraction of microarray data, where the genomic data should be non-negative. The NMF algorithm, however, assumes a deterministic framework. In particular, the effect of data noise on the stability of the factorization and the convergence of the algorithm is unknown. Collected data, on the other hand, is stochastic in nature due to measurement noise and, at times, inherent variability in the physical process. In this talk, we propose new theoretical and applied developments for the problem of non-negative matrix factorization. We first extend the NMF framework to the probabilistic case (PNMF). We show that the maximum a posteriori (MAP) estimate of the non-negative factors is the solution to a weighted regularized non-negative matrix factorization problem. We subsequently derive update rules that converge towards an optimal solution. Finally, we apply the PNMF to cluster and classify DNA microarray data. The proposed PNMF is shown to outperform the deterministic NMF and sparse NMF algorithms in clustering stability and classification accuracy.
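
The flavor of the algorithm can be sketched as follows: Lee-Seung-style multiplicative updates augmented with a ridge penalty, the form a Gaussian-prior MAP estimate takes when it reduces to a weighted regularized NMF problem. The updates below are a generic regularized variant, not necessarily the exact PNMF rules derived in the talk, and the toy "expression matrix" is synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

def regularized_nmf(V, rank, lam=0.1, n_iter=500, eps=1e-9):
    """Factor V ~= W @ H with W, H >= 0 via multiplicative updates.

    Minimizes ||V - WH||_F^2 + lam*(||W||_F^2 + ||H||_F^2); the ridge
    terms play the role of Gaussian priors in a MAP formulation.
    """
    n, m = V.shape
    W = rng.uniform(0.1, 1.0, (n, rank))
    H = rng.uniform(0.1, 1.0, (rank, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam * H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + lam * W + eps)
    return W, H

# Synthetic "expression matrix" (genes x samples) built from 3 factors.
W_true = rng.uniform(0, 1, (100, 3))
H_true = rng.uniform(0, 1, (3, 20))
V = W_true @ H_true + 0.01 * rng.uniform(0, 1, (100, 20))

W, H = regularized_nmf(V, rank=3)
print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```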


September 20, 2013

Title: Compressive Kalman Filtering for Recovering Temporally-Rewiring Genetic Networks
Speaker: Jehandad Khan

Abstract: Genetic regulatory networks undergo rewiring over time in response to cellular development and environmental stimuli. The main challenge in estimating time-varying genetic interactions is the limited number of observations at each time point, which makes the problem unidentifiable. We formulate the recovery of temporally-rewiring genetic networks as a tracking problem, where the target to be tracked over time is the set of genetic interactions. We circumvent the observability issue (due to the limited number of measurements) by taking into account the sparsity of genetic networks. Assuming linear dynamics, we use a compressive Kalman filter to track the interactions as they evolve over time. Our simulation results show that the compressive Kalman filter achieves good tracking performance even with a single measurement available at each time point, whereas the classical (unconstrained) Kalman filter fails entirely to obtain meaningful tracking.
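
One simple way to marry Kalman filtering with a sparsity constraint is sketched below: a standard predict/update step followed by soft-thresholding of the state estimate, a crude stand-in for an l1 pseudo-measurement correction. This illustrates the general idea rather than the speaker's algorithm; the problem sizes (four measurements per time point, five active interactions, a static true network) are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sparse state (network edge weights) observed through very few random
# measurements per time point: y_t = H_t x_t + v_t.
n, m = 50, 4                        # state dimension >> measurements
F = np.eye(n)                       # random-walk dynamics for the weights
Q, R = 1e-4 * np.eye(n), 1e-3 * np.eye(m)

x_true = np.zeros(n)                # the true network is held fixed here
x_true[rng.choice(n, 5, replace=False)] = rng.uniform(0.5, 1.0, 5)

x_hat, P = np.zeros(n), np.eye(n)
tau = 0.05                          # soft-threshold level promoting sparsity
for t in range(200):
    H = rng.standard_normal((m, n)) / np.sqrt(m)
    y = H @ x_true + rng.multivariate_normal(np.zeros(m), R)

    # Standard Kalman predict / update (Q acts as a small tuning term).
    x_hat, P = F @ x_hat, F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x_hat = x_hat + K @ (y - H @ x_hat)
    P = (np.eye(n) - K @ H) @ P

    # Sparsity step: shrink small entries of the estimate toward zero.
    x_hat = np.sign(x_hat) * np.maximum(np.abs(x_hat) - tau, 0.0)

print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])
print("true support     :", np.nonzero(x_true > 0.1)[0])
```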