Computer Science

 

 

Poster Number: CSE-01

Authors: Faraz Ahmed, Alex Liu, Rong Jin

Title:  Social Graph Publishing with Privacy Guarantees

 

Abstract: Online social network graphs provide useful insights into various social phenomena such as information dissemination and epidemiology. Unfortunately, social network providers often refuse to publish their social network graphs due to privacy concerns. Recently, differential privacy has become the widely accepted criterion for privacy-preserving data publishing because it provides the strongest privacy guarantees for publishing sensitive datasets. Although some work has been done on publishing matrices with differential privacy, these approaches are computationally impractical because they are not designed to handle large matrices such as the adjacency matrices of OSN graphs. In this paper, we propose a random matrix approach to OSN graph publishing, which achieves storage and computational efficiency by reducing the dimensions of adjacency matrices and achieves differential privacy by adding a small amount of noise. Our key idea is to first project each row of an adjacency matrix into a low-dimensional space using random projection, then perturb the projected matrix with random noise, and finally publish the perturbed and projected matrix. We first prove that random projection plus random perturbation preserves differential privacy and that the random noise required to achieve differential privacy is small. We then validate the proposed approach and evaluate the utility of the published data for two different applications, namely node clustering and node ranking, using publicly available OSN graphs of Facebook, LiveJournal, and Pokec.
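As an illustrative sketch (not the poster's implementation), the project-then-perturb pipeline can be written as follows; the Gaussian projection entries, the noise distribution, and the scale `sigma` are placeholders rather than the calibrated values from the privacy analysis:

```python
import random

def publish_row(adj_row, proj, sigma):
    """Project one adjacency-matrix row into k dimensions, then perturb
    each projected coordinate with Gaussian noise of scale sigma."""
    k = len(proj[0])
    projected = [sum(a * proj[i][j] for i, a in enumerate(adj_row))
                 for j in range(k)]
    return [p + random.gauss(0.0, sigma) for p in projected]

def publish_graph(adj, k, sigma, seed=0):
    """Publish a perturbed k-dimensional projection of an n x n adjacency
    matrix; the published matrix is n x k rather than n x n."""
    rng = random.Random(seed)
    n = len(adj)
    # Random Gaussian projection matrix (n x k), entries ~ N(0, 1/k).
    proj = [[rng.gauss(0.0, 1.0 / k ** 0.5) for _ in range(k)] for _ in range(n)]
    return [publish_row(row, proj, sigma) for row in adj]
```

The reduced n x k output is the source of the claimed storage and computational savings; in the actual method, `sigma` would be chosen by the differential privacy proof.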

 

 

 

Poster Number: CSE-02

Authors: Kamran Ali, Alex X. Liu, Wei Wang, Muhammad Shahzad

Title:  Recognizing Keystrokes Using Commodity Wi-Fi Devices

 

Abstract: Keystroke privacy is critical for ensuring the security of computer systems and the privacy of human users, as what is being typed could be passwords or privacy-sensitive information. In this paper, we show for the first time that WiFi signals can also be exploited to recognize keystrokes. The intuition is that while typing a certain key, the hands and fingers of a user move in a unique formation and direction and thus generate a unique pattern in the time series of Channel State Information (CSI) values, which we call the CSI waveform for that key. We propose a WiFi signal based keystroke recognition system called WiKey. WiKey consists of two Commercial Off-The-Shelf (COTS) WiFi devices, a sender (such as a router) and a receiver (such as a laptop). The sender continuously emits signals and the receiver continuously receives them. When a human subject types on a keyboard, WiKey recognizes the typed keys based on how the CSI values change at the WiFi signal receiver. We implemented the WiKey system using a TP-Link TL-WR1043ND WiFi router and a Lenovo X200 laptop. WiKey achieves a detection rate of more than 97.5% for detecting keystrokes and 96.4% recognition accuracy for classifying single keys. In real-world experiments, WiKey can recognize keystrokes in a continuously typed sentence with an accuracy of 93.5%.

 

 

 

Poster Number: CSE-03

Authors: Salman Ali, Kamran Ali, Alex X. Liu, Wei Wang

Title:  Facial Gesture Recognition using WiFi Signals

 

Abstract: Gesture recognition from human facial expressions is crucial for enabling several emotion-based human-computer interaction applications, as facial expressions reveal different human attitudes. Recent techniques that focus on recognizing human facial gestures using video, sound, radar, or physiological sensors are either intrusive and/or require specialized hardware. In this work, we propose a wireless signal based facial gesture recognition system for devices with WiFi capability, e.g., smartphones and laptops. Our system is non-intrusive, as users can express gestures or feelings even when they feel uncomfortable speaking or turning on the camera of their device. We test our system in a real-world scenario, where users record facial gestures while holding a smartphone or sitting close to a laptop. Our experimental results show that the proposed system can detect eight different facial gestures with an average accuracy of 88% using k-Nearest Neighbor (k-NN) classification.

 

 

 

Poster Number: CSE-04

Authors: Meznah Almutairy, Eric Torng

Title:  The Effect of Sampling on the Efficiency and Accuracy of k-mer Based Indexes: Theoretical and Empirical Comparisons Using the Human Genome

 

Abstract: One of the most common ways to search a sequence database for sequences that are similar to a query sequence is to use a k-mer index. One of the biggest problems with k-mer indexes is the space required to store the lists of all occurrences of all k-mers in the database. One method for reducing both space and query time is sampling, where we store only some k-mer occurrences rather than all of them. Most previous work uses “hard sampling”, where enough k-mer occurrences are retained to guarantee that all similar sequences are found. We study “soft sampling”, where we further reduce the number of stored k-mer occurrences and thus risk decreasing query accuracy. We focus on finding highly similar local alignments over nucleotide sequences, an operation that is fundamental to several biological applications. We use the NCBI BLAST tool with the human genome and human ESTs. Our results show that soft sampling leads to significant reductions in both index size and query time with relatively small costs in query accuracy when compared to hard sampling. Extreme soft sampling reduces index size 4-10 times more than hard sampling and processes large queries 2.3-6.8 times faster, while still achieving retention rates of at least 95%. When we apply extreme soft sampling to the problem of mapping ESTs, we are able to map more than 99% of ESTs perfectly while reducing the index size by a factor of 3-5 and reducing query time by 32-42% when compared to hard sampling.
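The sampling trade-off can be illustrated with a toy k-mer index; the `step` parameter and this dictionary layout are assumptions for illustration, not the index structure used by BLAST:

```python
def build_kmer_index(seq, k, step=1):
    """Map each sampled k-mer to the list of its stored starting positions.
    step=1 stores every occurrence; step > 1 stores only every step-th
    position, i.e. increasingly aggressive sampling."""
    index = {}
    for pos in range(0, len(seq) - k + 1, step):
        index.setdefault(seq[pos:pos + k], []).append(pos)
    return index
```

With `step=1` every occurrence is indexed; increasing `step` shrinks the index and speeds up queries at the risk of missing some matches, which is exactly the hard-versus-soft trade-off studied above.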

 

 

 

Poster Number: CSE-05

Authors: Morteza Safdarnejad, Yousef Atoum, Xiaoming Liu

Title:  Temporally Robust Global Motion Compensation by Keypoint-based Congealing

 

Abstract: Global motion compensation (GMC) removes the impact of camera motion and creates a video in which the background appears static over the progression of time. Various vision problems, such as human activity recognition, background reconstruction, and multi-object tracking, can benefit from GMC. Existing GMC algorithms rely on sequentially processing consecutive frames, estimating the transformation mapping between the two frames and obtaining a composite transformation to a global motion-compensated coordinate system. Sequential GMC suffers from temporal drift of frames from the accurate global coordinate system, due to either error accumulation or sporadic failures of motion estimation at a few frames. We propose a temporally robust global motion compensation (TRGMC) algorithm that performs accurate and stable GMC despite complicated and long-term camera motion. TRGMC densely connects pairs of frames by matching local keypoints of each frame. A joint alignment of these frames is formulated as a novel keypoint-based congealing problem, where the transformation of each frame is updated iteratively such that the spatial coordinates of the start and end points of matched keypoints become identical. Experimental results demonstrate that TRGMC has superior performance in a wide range of scenarios.

 

 

 

Poster Number: CSE-06

Authors: Yousef Atoum, Joseph Roth, Michael Bliss, Wende Zhang, Xiaoming Liu

Title:  Monocular Video-based Trailer Coupler Detection using Multiplexer Convolutional Neural Network

 

Abstract: This work presents a fully automated computer vision system for autonomously backing a vehicle up to a trailer, by continuously estimating the 3D trailer coupler position and feeding it to the vehicle control system until the tow hitch is aligned with the trailer's coupler. This system is made possible through our proposed distance-driven Multiplexer-CNN method, which selects the most suitable CNN using the estimated distance of the coupler to the vehicle. The input of the multiplexer is a group made of a CNN detector, trackers, and a 3D localizer. In the CNN detector, we propose a novel algorithm to provide a confidence score with each detection. The score reflects the existence of the target object in a region, as well as how accurate the 2D target detection is. We demonstrate the accuracy and efficiency of the system on a large trailer database using a regular PC. Our system achieves an estimation error of 1.4 cm when the ball reaches the coupler, while running at 18.9 FPS.

 

This work was supported in part by General Motors (GM)

 

 


 

Poster Number: CSE-07

Authors: Pranshu Bajpai, Seth Edgar, Rob McCurdy, Richard Enbody

Title:  Anomalous Security Events Detection using Predictive Machine Learning Algorithms

 

Abstract: Large organizations with several thousand devices are targeted by attackers around the world. Monitoring security events and malicious activities is difficult, especially in such large environments, due to the minuscule ratio of the number of IT security personnel to the number of IT equipment or devices being monitored. IT security and incident response personnel can focus more on other aspects of their jobs if a monitoring application can identify interesting security trends as they develop while pruning off unnecessary noise. Currently proposed anomaly detection methods are either not suitable for seasonal data and/or generate overpowering false positives depending on the environment. To solve this issue, we present BATSense, a prototype deployed to forecast trends in security events based on data from past security logs using machine learning. Forecasted security trend values are then compared against current security events to detect anomalies or deviations from expected patterns. Our application generates alerts to notify appropriate personnel whenever security event values in a given hour cross a certain threshold beyond the forecasted value for that hour. BATSense takes into account the multiple seasonalities inherent in data from such environments. Test results from experiments conducted within MSU's security environment indicate that our application generated accurate predictive values for security event thresholds, hence generating low and acceptable false positives. BATSense raised alarms within 60 minutes of the first sign of malicious activity when a threshold bound was violated. BATSense is flexible, allowing threshold bounds to be tweaked according to the environment.
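The abstract does not give BATSense's internals; the following toy sketch only illustrates the general idea of comparing observed hourly counts against a seasonal forecast with a tunable threshold factor. The hour-of-week mean baseline and all names are assumptions, not the system's actual forecasting model:

```python
from collections import defaultdict

def hourly_baseline(history):
    """history: (hour_of_week, event_count) pairs from past logs.
    Returns the mean count per hour-of-week slot (0..167), a crude
    stand-in for a proper multi-seasonal forecast."""
    sums, counts = defaultdict(float), defaultdict(int)
    for hour, value in history:
        sums[hour] += value
        counts[hour] += 1
    return {h: sums[h] / counts[h] for h in sums}

def is_anomalous(baseline, hour, observed, factor=2.0):
    """Alert when the observed count exceeds the forecast for that
    hour by a tunable threshold factor."""
    return observed > factor * baseline.get(hour, 0.0)
```

Tuning `factor` per environment mirrors the tweakable threshold bounds described above.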

 

 

Poster Number: CSE-08

Authors: Sudipta Banerjee, Arun Ross

Title:  Computing an Image Phylogeny Tree from Photometrically Modified Iris Images

 

Abstract: "Iris recognition" entails the use of iris images to recognize an individual. In some cases, the iris image acquired from an individual can be modified by subjecting it to photometric transformations such as brightening (a linear transform) or gamma correction (a non-linear transform). Further, the modified image can itself be subjected to additional photometric transformations, resulting in a family of transformed images, all of which are directly or indirectly connected to the original parent image. From an image forensics standpoint, it may be useful to deduce the relationship between such transformed images. This has applications not only in detecting tampered images, but also in determining the sequence of operations used to modify the original image acquired from a camera. In this work, we develop methods to automatically generate an Image Phylogeny Tree (IPT) from a set of such transformed iris images. An IPT captures the "structure of evolution" within a set of images. The following challenges are addressed in this research in the context of iris recognition: (a) employing an asymmetric metric to measure the degree of dissimilarity between the images, as it plays an important role in discriminating between the source image and the transformed image; (b) deriving an accurate estimate of the photometric model relating two images, particularly for non-invertible transformations. Experiments are conducted on the CASIAv4 Thousand dataset, which contains 1992 iris images.

 

 

Poster Number: CSE-09

Authors: Inci M. Baytas, Cao Xiao, Xi Zhang, Fei Wang, Anil K. Jain, Jiayu Zhou

Title:  Patient Subtyping via Time-Aware Long Short-Term Memory Networks

 

Abstract: In the study of various diseases, the heterogeneity among patients usually leads to different progression patterns and may require different types of therapeutic intervention. Therefore, it is important to study patient subtyping, i.e., the grouping of patients into disease-characterizing subtypes. Subtyping from electronic health records (EHRs) is challenging because of the temporal dynamics in the patients' sequential records. Long Short-Term Memory (LSTM) has been successfully used in many domains for processing sequential data, and has recently been applied to analyzing EHRs. LSTM units are designed to handle data with a constant elapsed time between consecutive elements of a sequence. Given that the time lapse between successive elements in EHR data can vary from days to months, the design of traditional LSTM may lead to suboptimal performance. In this paper, we propose a novel LSTM unit called Time-Aware LSTM (T-LSTM) to handle irregular time intervals in EHR data. We learn a subspace decomposition of the cell memory which enables time decay to discount the memory content according to the elapsed time. We propose a patient subtyping model that leverages the proposed T-LSTM in an auto-encoder to learn a powerful single representation for the sequential records of patients, which is then used to cluster patients into clinical subtypes. Experiments on synthetic and real-world datasets show that the proposed T-LSTM architecture captures the underlying structures in sequences with time irregularities.
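The time-decay idea can be illustrated numerically. This sketch assumes a decay function g(dt) = 1/log(e + dt) and treats the short-term subspace component as given; in the actual T-LSTM both the decomposition and the decay interact with learned network parameters:

```python
import math

def decay(delta_t):
    """Monotonically decreasing time-decay factor: g(dt) = 1 / log(e + dt)."""
    return 1.0 / math.log(math.e + delta_t)

def adjust_memory(c_prev, c_short, delta_t):
    """Discount only the short-term component of the previous cell state.
    c_prev:  previous cell state (list of floats)
    c_short: its short-term subspace component (same length)
    Returns the time-adjusted cell state fed to the usual LSTM update."""
    g = decay(delta_t)
    c_long = [c - s for c, s in zip(c_prev, c_short)]    # long-term part, kept intact
    return [l + s * g for l, s in zip(c_long, c_short)]  # decayed short-term added back
```

With no elapsed time the memory passes through unchanged; after a long gap, only the long-term component survives.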

 

This work was supported in part by Office of Naval Research (ONR) under grant number N00014-14-1-0631

Poster Number: CSE-10

Authors: Denton Bobeldyk, Arun Ross

Title:  Attribute-based Ocular Biometrics: A New Paradigm for Iris Recognition

 

Abstract: Recent research has demonstrated the possibility of deducing soft biometric attributes, such as race and gender, from near-infrared (NIR) images of the iris and the surrounding ocular region. This work explores the possibility of extracting multiple such soft biometric attributes from the iris and the ocular region in order to develop a semantic description of the associated individual (e.g., middle-aged female Caucasian with blue eyes). The individual attributes, or their concatenated descriptions, can be used to (a) improve the recognition accuracy of traditional iris recognition systems; (b) reduce the search space in large-scale iris identification systems; (c) bridge the semantic gap between human and machine descriptions of biometric data; (d) facilitate cross-spectral iris recognition; etc. Thus, an attribute-based characterization of the ocular region will have applications in biometrics, forensics, and law enforcement. As a case study, we will demonstrate the benefits of combining soft biometrics with ocular recognition in a fusion framework.

 

This work was supported in part by NSF Center for Identification Technology and Research

 

 

 

Poster Number: CSE-11

Authors: Garrick Brazil, Xiaoming Liu, Xi Yin

Title:  Monocular Camera-based Pedestrian Detection

 

Abstract: Pedestrian detection is a critical problem in computer vision with an extraordinary impact on safety in urban autonomous driving. In this work, we introduce a cascade of networks consisting of an exhaustive region proposal network followed by a binary classification network. We extend the region proposal network with a 30-class semantic segmentation side task to reduce confusion among non-pedestrian classes and improve overall proposal quality. To address the differences in pedestrian appearance at varying scales, we also adopt multi-scale rectangular filters in our network, directly inspired by the span of pedestrian shapes themselves.

 

 

 

Poster Number: CSE-12

Authors: Nitash C G, Thomas LaBar, Arend Hintze, Christoph Adami

Title:  Origins of Life in a Digital Microcosm

 

Abstract: While all organisms on Earth descend from a common ancestor, there is no consensus on whether the origin of this ancestral self-replicator was a one-off event or whether it was only the final survivor of multiple origins. Here we use the digital evolution system Avida to study the origin of self-replicating computer programs. By using a computational system, we avoid many of the uncertainties inherent in any biochemical system of self-replicators (while running the risk of ignoring a fundamental aspect of biochemistry). We generated the exhaustive set of minimal-genome self-replicators and analyzed the network structure of this fitness landscape. We further examined the evolvability of these self-replicators and found that the evolvability of a self-replicator is dependent on its genomic architecture. We studied the differential ability of replicators to take over the population when competed against each other (akin to a primordial-soup model of biogenesis) and found that the probability of a self-replicator out-competing the others is not uniform. Instead, progenitor (most-recent common ancestor) genotypes are clustered in a small region of the replicator space. Our results demonstrate how computational systems can be used as test systems for hypotheses concerning the origin of life.

 

 

 

Poster Number: CSE-13

Authors: Xiang Wu, Juan L. Castro-Garcia, Juyang Weng

Title:  Actions as Contexts

 

Abstract: In artificial intelligence, many tasks in speech recognition, video analysis, and language processing involve temporal processing, where the outputs depend not only on the spatial contents of the current sensory input frame but also on the relevant context in the attended past. It remains unclear how brains use temporal contexts. Many computational methods, such as hidden Markov chains and recurrent neural networks, require the human programmer to handcraft symbolic contexts. Although it has been proved that our Developmental Networks (DNs) are capable of learning any emergent Turing Machine (TM), their states have been supervised by human teachers as patterns, which demands much effort from the human trainer. In this paper, we study how agent actions are natural sources of contexts. In humans, muscle actions correspond to the firings of muscle neurons; they are dense in time and correlated with the cognitive skills of the individual. Some actions are meant to handle time warping, while others are not (e.g., those for counting time durations). We model actions as dense action patterns. We experimented with DN on recognition of audio sequences as an example modality, but the principles are modality independent. Our experimental results show how taking dense, frame-wise actions as contexts helps DN generate temporal contexts. This work is a necessary step toward our goal of enabling machines to autonomously generate contexts as actions through life-long development.

 

 

 

Poster Number: CSE-14

Authors: Jiao Chen, Yanni Sun

Title:  De novo Haplotype Reconstruction in Virus Quasispecies Using Paired-end Reads

 

Abstract: Motivation: Quasispecies contain a population of closely related but distinct virus strains infecting an individual host. As selection acts on clouds of mutants rather than single sequences, quasispecies have the ability to escape host immune responses or develop drug resistance. Reconstruction of the viral haplotypes is a fundamental step in characterizing the quasispecies, predicting their viral phenotypes, and ultimately providing important information for clinical treatment and prevention. Advances in next-generation sequencing technologies open up new opportunities to assemble full-length haplotypes. However, error-prone short reads, high similarity between related strains, and an unknown number of haplotypes all pose computational challenges for reference-free haplotype reconstruction, and there is still considerable room to improve the performance of existing haplotype assembly tools. Results: In this work, we developed a de novo haplotype reconstruction tool for virus quasispecies data named PEHaplo, which employs paired-end reads to distinguish highly similar strains. We applied it to both simulated and real quasispecies data, and the results were benchmarked against several recently published haplotype reconstruction tools. The comparison shows that PEHaplo outperforms the benchmarked tools in a comprehensive set of metrics. Availability: The source code and documentation of PEHaplo are available at https://github.com/chjiao/PEHaplo.

 

 

 

Poster Number: CSE-15

Authors: Anurag Chowdhury, Arun Ross

Title:  Speaker Recognition in Degraded Audio Signals

 

Abstract: Identifying human subjects based on their biometric features such as face, fingerprint, iris, and voice is an active area of research. The focus of this work is on detecting human speech and identifying its source, i.e., the speaker, from audio signals captured using recording devices such as microphones. As with other types of digital signals such as images and video, an audio signal can undergo degradations during its generation, propagation, and recording. Identifying the speaker from such degraded speech data is a challenging task and an open research problem. In this research, we present a deep learning based algorithm for speaker recognition from degraded audio signals. We use the commonly employed Mel-Frequency Cepstral Coefficients (MFCC) to represent the audio signals. We design a convolutional neural network (CNN) that learns one-dimensional filters, in contrast to the traditional approach of learning two-dimensional filters. The filters in the CNN are designed to learn the inter-dependencies between cepstral coefficients extracted from audio frames of fixed temporal expanse. From a biological standpoint, this approach learns the human speech production apparatus from MFCC features and helps identify speakers from degraded audio signals. The performance of the proposed method is compared against existing baseline schemes on synthetically corrupted speech data. Experiments convey the efficacy of the proposed architecture for speaker recognition.
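The one-dimensional filtering over cepstral coefficients can be sketched as a plain valid-mode 1-D convolution (cross-correlation, as CNN layers actually compute); this toy version is illustrative only, with hand-picked weights rather than the paper's trained filters:

```python
def conv1d(coeffs, kernel):
    """Valid-mode 1-D convolution of a filter across the
    cepstral-coefficient axis of one MFCC frame."""
    k = len(kernel)
    return [sum(coeffs[i + j] * kernel[j] for j in range(k))
            for i in range(len(coeffs) - k + 1)]

def relu(xs):
    """Standard rectified-linear activation, applied element-wise."""
    return [max(0.0, x) for x in xs]
```

A stack of such filter-plus-activation layers over the coefficient axis is the 1-D analogue of the usual 2-D image convolution.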

 

This work was supported in part by FBI-BCOE: Federal Bureau of Investigation’s Biometric Center of Excellence

 

 

 

Poster Number: CSE-16

Authors: Tarang Chugh, Kai Cao, Elham Tabassi, Jiayu Zhou, Anil K. Jain

Title:  Latent Fingerprint Value Prediction: Crowd-based Learning

 

Abstract: Latent fingerprints are one of the most crucial sources of evidence in forensic investigations. As such, development of automatic latent fingerprint recognition systems to quickly and accurately identify the suspects is one of the most pressing problems facing fingerprint researchers. One of the first steps in manual latent processing is for a fingerprint examiner to perform a triage by assigning one of the following three values to a query latent: Value for Individualization (VID), Value for Exclusion Only (VEO) or No Value (NV). However, latent value determination by examiners is known to be subjective, resulting in large intra-examiner and inter-examiner variations. Furthermore, in spite of the guidelines available, the underlying bases that examiners implicitly use for value determination are unknown. In this study, we propose a crowdsourcing based framework for understanding the underlying bases of value assignment by fingerprint examiners, and use it to learn a predictor for quantitative latent value assignment. Experimental results are reported using four latent fingerprint databases, two from forensic casework (NIST SD27 and MSP) and two collected in laboratory settings (WVU and IIITD), and a state-of-the-art latent AFIS. The main conclusions of our study are as follows: (i) crowdsourced latent value is more robust than prevailing value determination (VID, VEO and NV) and LFIQ for predicting AFIS performance, (ii) two bases can explain expert value assignments which can be interpreted in terms of latent features, and (iii) our value predictor can rank a collection of latents from most informative to least informative.

 

This work was supported in part by National Institute of Standards and Technology

 

 

Poster Number: CSE-17

Authors: Melissa Dale, Arun Ross

Title:  Multispectral Pedestrian Detection using Restricted Boltzmann Machines

 

Abstract: The US Department of Transportation Federal Highway Administration reports that approximately 22% of traffic crashes are a result of weather conditions. This amounts to nearly 1,259,000 crashes a year caused by conditions such as rain, snow, fog, and wind. As autonomous vehicles equipped with cameras are deployed alongside mainstream traffic, it is necessary to ensure that they can continue detecting objects of interest, such as pedestrians, automobiles, and bicycles, in challenging environmental and traffic conditions. To facilitate this, we propose a solution that augments data from the vehicle's digital cameras with thermal sensors. To initiate this study, we use the KAIST Multispectral Pedestrian Detection Benchmark dataset to investigate pedestrian detection based on data from multiple sensors. This dataset consists of annotated videos recorded in both the visible and thermal spectra from a moving vehicle. In 2016, the pedestrian detection accuracy on this dataset was greatly improved through the use of convolutional neural networks (CNNs) by researchers at Robert Bosch GmbH and the University of Bonn. Their results highlighted the ability of deep learning methods to outperform the hand-crafted features used in the benchmark findings. In our research, we continue to explore the use of deep learning techniques to improve multispectral pedestrian detection. In particular, we hypothesize that treating the visible and thermal sensors as separate sources of information can further improve detection accuracy. To this end, we investigate the use of Restricted Boltzmann Machines for pedestrian detection on the KAIST dataset.

 

This work was supported in part by Michigan State University (CANVAS Program)

 

 

Poster Number: CSE-18

Authors: James Daly, Eric Torng

Title:  TupleMerge: Building Minimal Firewall Tables by Omitting Bits

 

Abstract: Packet classification is an important part of many networking devices, such as routers and firewalls. The rule lists defining these packet classifiers have generally become larger and more complicated over time, yet they must operate faster than ever to meet the demands of ever-increasing network requirements. Modern software-defined networks place an additional constraint: classifiers must now be able to update themselves quickly. This rules out many classifiers, such as HyperCuts, HyperSplit, and their derivatives. We build upon Tuple Space Search, the packet classifier used by Open vSwitch, to create TupleMerge. TupleMerge improves upon Tuple Space Search by combining hash tables that contain rules with similar characteristics. This greatly reduces search times by producing fewer tables. We compared TupleMerge to PartitionSort, the current state-of-the-art online packet classifier, on rule lists generated by ClassBench. TupleMerge has faster classification times than PartitionSort on 10 of the 12 seeds; on those seeds, TupleMerge classifies an average of 45% faster. It is also always faster to update than PartitionSort, averaging 30% faster over all seeds.
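The underlying tuple-space idea can be sketched as follows: each table hashes rules on a tuple of prefix lengths, and merging tables amounts to hashing with shorter lengths (omitting bits), so candidate rules must be re-verified on lookup. This simplified two-field, integer-address version is an illustration, not the TupleMerge implementation:

```python
def top_bits(addr, plen):
    """Keep only the top `plen` bits of a 32-bit address."""
    return addr >> (32 - plen) if plen else 0

class TupleTable:
    """One hash table for rules whose prefix lengths are at least
    (src_len, dst_len).  Hashing on the table's (shorter) lengths lets
    rules with similar, not identical, tuples share a table, so each
    candidate is re-checked against its full prefixes on lookup."""
    def __init__(self, src_len, dst_len):
        self.src_len, self.dst_len = src_len, dst_len
        self.buckets = {}

    def insert(self, rule):
        src, _, dst, _, _ = rule
        key = (top_bits(src, self.src_len), top_bits(dst, self.dst_len))
        self.buckets.setdefault(key, []).append(rule)

    def lookup(self, pkt_src, pkt_dst):
        key = (top_bits(pkt_src, self.src_len), top_bits(pkt_dst, self.dst_len))
        for src, src_plen, dst, dst_plen, action in self.buckets.get(key, []):
            # Verify the full rule: the hash key omitted some of its bits.
            if (top_bits(pkt_src, src_plen) == top_bits(src, src_plen)
                    and top_bits(pkt_dst, dst_plen) == top_bits(dst, dst_plen)):
                return action
        return None
```

Fewer tables mean fewer hash probes per packet, which is the source of the speedup; the cost is the per-bucket verification step shown above.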

 

 

Poster Number: CSE-19

Authors: Debayan Deb, Lacey Best-Rowden, Anil K. Jain

Title:  Face Recognition Accuracy with Time Lapse: Performance of State-of-the-art COTS Matchers

 

Abstract: With the integration of face recognition technology into important identity applications, it is imperative that the effects of facial aging on face recognition performance are thoroughly understood. As face matchers evolve and improve, they should be periodically re-evaluated on large-scale longitudinal face datasets. In our study, we evaluate the performance of two state-of-the-art commercial off the shelf (COTS) face matchers on two large-scale longitudinal datasets of operational mugshots of repeat criminals. The largest of these two datasets has 147,784 images of 18,007 subjects with 8 images per subject with a time span of 8.5 years. We fit multilevel statistical models to genuine comparison scores (similarity between images of the same face) obtained by the COTS face matchers. This allows us to analyze the effects of elapsed time between a query (probe) and its reference (gallery) image, as well as subject’s gender and race, and face image quality. Based on the results of our statistical model, we can infer that state-of-the-art COTS matcher can indeed verify 99% of the subjects at a false accept rate (FAR) of 0.01% for up to 9.5 years of elapsed time. Beyond that time lapse, there is a significant loss in face recognition accuracy.

 

 

 

Poster Number: CSE-20

Authors: Tyler Derr, Zhiwei Wang, Jiliang Tang

Title:  Opinions Power Opinions: Joint Link and Interaction Polarity Prediction in Signed Networks

 

Abstract: Social media has been widely adopted by online users to share their opinions. Among users in signed networks, two types of opinions can be expressed. Users can directly express opinions of others by establishing positive or negative links, and they can also express opinions of content generated by others via a variety of social interactions such as commenting and rating. Intuitively, these two types of opinions should be related. For example, users are likely to give positive (or negative) opinions to content from those with whom they share positive (or negative) links, and users tend to create positive (or negative) links with those with whom they frequently interact positively (or negatively). Therefore, we can leverage one type of opinion to power the other. Meanwhile, the two types can enrich each other, which can help mitigate the data sparsity and cold-start problems in the corresponding predictive tasks, namely link and interaction polarity prediction, respectively. In this paper, we investigate the problem of joint link and interaction polarity prediction in signed networks. We first study the correlation between these two types of opinions, and then propose a framework that can predict signed links and the polarities of interactions simultaneously. Experimental results on a real-world signed network demonstrate the effectiveness of the proposed framework. Further experiments validate the robustness of the proposed framework to data with cold-start users.
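A minimal illustration of using one opinion type to power the other is to guess a missing link sign from past interaction polarities. This heuristic merely stands in for the paper's joint framework, and its names are made up for the example:

```python
def predict_link_sign(interactions, u, v):
    """Guess the sign of a (u, v) link from interaction polarities.
    interactions: (src, dst, polarity) triples with polarity +1 or -1.
    Returns +1, -1, or None when the pair never interacted (cold start)."""
    total = sum(p for s, d, p in interactions if (s, d) == (u, v))
    if total > 0:
        return 1
    if total < 0:
        return -1
    return None
```

The `None` branch is exactly the cold-start case the joint model is meant to mitigate by borrowing evidence from the other opinion type.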

 

 

 

Poster Number: CSE-21

Authors: Yaohui Ding, Arun Ross

Title:  An Ensemble of One-Class Support Vector Machines for Fingerprint Spoof Detection Across Different Fabrication Materials

 

Abstract: A fingerprint recognition system is vulnerable to spoof attacks, where a fake fingerprint can be used to circumvent the system. To counter such attacks, researchers have developed automated spoof detectors that are used to distinguish fake fingerprints from real fingerprints. Most spoof detectors adopt a machine learning approach, where a classifier is trained to distinguish between "spoof" and "live" samples. Such an approach requires training samples from both classes. However, there are two fundamental concerns. Firstly, the number of spoof samples available during the training stage is typically much less than the number of live samples, resulting in imbalanced training sets. Secondly, the spoof detector may encounter spoofs fabricated using materials that were not previously "seen" in the training set, thereby failing to detect them. In order to alleviate some of these concerns, we adopt a One Class Support Vector Machine (OC-SVM) approach that predominantly uses training samples from only a single class, i.e., the live class, to generate a hypersphere that encompasses most of the live samples. The goal is to learn the concept of a "live" fingerprint, and reject all other prints as being fake. The boundary of the generated hypersphere is refined using a small number of spoof samples. The proposed method uses an innovative ensemble of such OC-SVMs for spoof detection. Experimental results on the LivDet2011 database show the advantages of the proposed ensemble of OC-SVMs for detecting spoofs generated from previously "unseen" materials.
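The one-class intuition, a hypersphere that encloses most live samples, can be sketched without the SVM machinery. This centroid-plus-radius stand-in (with an assumed `quantile` parameter) is an illustration, not the paper's OC-SVM ensemble:

```python
import math

def fit_hypersphere(live, quantile=0.95):
    """Centre at the mean of the live samples; radius at the chosen
    quantile of their distances, so most (not all) live samples fit."""
    dim = len(live[0])
    centre = [sum(x[d] for x in live) / len(live) for d in range(dim)]
    dists = sorted(math.dist(x, centre) for x in live)
    radius = dists[min(len(dists) - 1, int(quantile * len(dists)))]
    return centre, radius

def is_live(sample, centre, radius):
    """Accept a sample as 'live' only if it falls inside the hypersphere."""
    return math.dist(sample, centre) <= radius
```

Because the boundary is fit almost entirely from live samples, spoofs made of never-before-seen materials are rejected simply by falling outside it.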

 

This work was supported in part by NSF Center for Identification Technology and Research

 

 

 

Poster Number: CSE-22

Authors: Emily Dolson, Charles Ofria

Title:  Creating Hotspots of Evolutionary Innovation

 

Abstract: Evolutionary computation is a powerful machine learning tool that has been used to solve a variety of problems. Inspired by biological evolution, evolutionary algorithms maintain a population of solutions to a given problem, in which the best solutions are copied and mutated. This creates evolutionary pressure toward better solutions. However, evolving solutions to very complex problems can be challenging. Considerable evidence suggests that evolution works best when there is pressure to evolve “building blocks”, i.e., solutions to simpler problems that are related to the overall problem. As such, rewarding solutions for solving these building-block problems has shown a great deal of promise in making complex problems easier for evolution to solve. Previously, we found that rewarding these building-block problems in geographically limited regions of a spatially structured environment promotes population diversity, ultimately leading to a greater chance of solving the overarching problem. Interestingly, the spatial layout of the rewards seemed to have a strong impact on the results. Here, we investigate this result further and find that many reward layouts contain hotspots of evolutionary potential -- regions where solutions capable of solving a given problem are more likely to evolve. Research on the drivers of this effect is ongoing, but they appear to be complex. If we can understand what factors create these evolutionary hotspots, we will be able to engineer them deliberately to build better evolutionary algorithms.
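The copy-and-mutate loop described above can be sketched in a few lines. This toy example maximizes the number of 1-bits in a bitstring; the population size, mutation rate, and elitism scheme are illustrative choices, not the poster's configuration:

```python
import random

random.seed(1)
GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 32, 50, 200, 0.02

def fitness(genome):
    # Toy objective: count of 1-bits.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with probability MUT_RATE.
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[: POP_SIZE // 5]          # copy the best fifth
    population = elite + [mutate(random.choice(elite))
                          for _ in range(POP_SIZE - len(elite))]

print(max(fitness(g) for g in population))       # approaches GENOME_LEN
```

Rewarding intermediate "building blocks" amounts to adding partial credit to a fitness function like this one, so that stepping stones toward the full solution are themselves selected for.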

 

This work was supported in part by National Science Foundation Graduate Research Fellowship

 

 

 

Poster Number: CSE-23

Authors: Nan Du, Yanni Sun

Title:  Improve Homology Search Sensitivity of PacBio Data by Correcting Frameshifts

 

Abstract: Single-molecule, real-time (SMRT) sequencing developed by Pacific Biosciences produces longer reads than second-generation sequencing technologies such as Illumina. The long read length enables PacBio sequencing to close gaps in genome assembly, reveal structural variations, and identify gene isoforms with higher accuracy in transcriptomic sequencing. However, PacBio data has a high sequencing error rate, and most of the errors are insertions or deletions. During alignment-based homology search, insertion or deletion errors in genes cause frameshifts and may lead to only marginal alignment scores and short alignments. As a result, it is hard to distinguish true alignments from random alignments, and the ambiguity will incur errors in structural and functional annotation. Existing frameshift correction tools are designed for data with much lower error rates and are not optimized for PacBio data. As an increasing number of groups are using SMRT, there is an urgent need for dedicated homology search tools for PacBio data. In this work, we introduce Frame-Pro, a profile homology search tool for PacBio reads. Our tool corrects sequencing errors and also outputs the profile alignments of the corrected sequences against characterized protein families. We applied our tool to both simulated and real PacBio data. The results show that our method enables more sensitive homology search, especially for PacBio data sets of low sequencing coverage. In addition, we can correct more errors compared with a popular error correction tool that does not rely on hybrid sequencing.

 

This work was supported in part by NSF CAREER Grant DBI-0953738

 

 

 

Poster Number: CSE-24

Authors: Joshua Engelsma, Sunpreet Arora, Anil K. Jain, Nicholas Paulter, Jr.

Title:  Universal 3D Fingerprint Target

 

Abstract: We present a design and manufacturing process to fabricate high fidelity 3D fingerprint targets. The main contribution is a single universal 3D fingerprint target which can be imaged on a variety of fingerprint sensing technologies, namely capacitive, optical-contact, and optical-contactless. As such, the universal 3D fingerprint target enables, for the first time, not only a repeatable and controlled evaluation of fingerprint readers, but also the ability to conduct fingerprint reader interoperability studies. Fingerprint reader interoperability refers to the ability of a fingerprint recognition system to adapt to variations in the raw data acquired from different types of fingerprint readers. As fingerprint recognition continues to become increasingly pervasive (from smart phones to international border crossings) and new sensing technologies continue to emerge, quantifying reader interoperability has become a necessity. To build the universal 3D fingerprint target, we adopt a molding and casting framework. It consists of (i) digital mapping of fingerprint images to a negative mold, (ii) CAD modeling a scaffolding system to hold the negative mold, (iii) fabricating the mold and scaffolding system with a high resolution 3D printer, (iv) producing or mixing a material with the same electrical, optical, and mechanical properties as that of the human finger, and (v) fabricating a 3D fingerprint target using controlled casting. Our interoperability experiments involve PIV & Appendix F certified optical (contact and contactless) and capacitive fingerprint readers from different vendors. The empirical results validate the use of universal 3D fingerprint targets for highly controlled fingerprint reader interoperability evaluation.

 

This work was supported in part by NIST (National Institute of Standards and Technology)

 

 

Poster Number: CSE-25

Authors: Sixue Gong, Vishnu Boddeti, Anil K. Jain

Title:  Capacity of Face Recognition

 

Abstract: Humans routinely use faces to recognize individuals; indeed, faces are the most widely used indicator of identity in daily life. Suppose there exists an approach which, for each identity, generates a unique set of facial features (a representation) that encompasses only the corresponding individual. In this case, any identity can be decoded from these unique features. In machine face recognition, however, features are generally extracted from "noisy" facial images and are not adequate to uniquely represent an identity. The large variability in face images (intra-person variability and inter-person similarity) and the uncertainty in the feature representation due to noise in face images are the two leading causes of face recognition errors. So, a natural question to ask is: how many identities can a given face representation resolve or, in other words, what is the capacity of a representation? If we model the process from the ideal face representation to automatically extracted facial features as a communication system, the face signal of each identity is mapped into a sequence of channel symbols, e.g., digital images, which then produce the output sequence of the channel, e.g., deep network features. In this process, the digital camera and the representation can be seen as channels that transmit face signals. The challenge is to determine the capacity of this information channel. To begin with, we model the face representation process as a Gaussian channel with additive noise, where the channel noise is also assumed to be Gaussian and independent of the input signal. We cast the problem in the information capacity framework of a Gaussian channel and derive the capacity of a given face representation.
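The Gaussian-channel framing rests on the classical Shannon result: for an additive Gaussian channel with signal power P and noise power N, the capacity is C = 0.5 * log2(1 + P/N) bits per channel use. The poster derives a representation-specific capacity from this framework; the snippet below is only the textbook formula:

```python
import math

def gaussian_capacity(signal_power, noise_power):
    """Shannon capacity (bits per channel use) of an additive Gaussian
    channel: C = 0.5 * log2(1 + P/N)."""
    return 0.5 * math.log2(1.0 + signal_power / noise_power)

print(gaussian_capacity(15.0, 1.0))  # SNR of 15 -> 2 bits per channel use
```

Under this reading, a "high-capacity" face representation is one whose inter-person signal power is large relative to the intra-person (noise) power, so it can resolve more identities.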

 

 

 

Poster Number: CSE-26

Authors: Aaron Gonzales, Arun Ross

Title:  Detection and Tracking of Objects in Thermal Infrared Imagery

 

Abstract: Thermal cameras are becoming increasingly cost-effective, making them practical for a number of applications. In this project, we will utilize thermal cameras for automated object analysis in two challenging nighttime environments: (a) livestock surveillance and (b) autonomous driving of vehicles. While extensive research on object detection and tracking has been conducted in the visible spectrum, little work has been conducted in the thermal spectrum. In this work, we will design an object detection and tracking system for the thermal spectrum using a customized template-based correlation filter approach. We have acquired data using a thermal camera and will evaluate the performance of the proposed methods on this data. The proposed methods are expected to be resilient to a number of confounding factors such as occlusion, inclement weather, low-intensity images, etc.

 

This work was supported in part by Michigan State University (CANVAS Program)

 

 

 

Poster Number: CSE-27

Authors: Hussein A. Hejase, Kevin J. Liu

Title:  FastNet: Fast and Accurate Inference of Phylogenetic Networks using Large-scale Genomic Sequence Data

 

Abstract: Advances in next-generation sequencing technologies and phylogenomics have reshaped our understanding of evolutionary biology. One primary outcome is the emerging discovery that interspecific gene flow has played a major role in the evolution of many different organisms across the Tree of Life. To what extent is the Tree of Life not truly a tree reflecting strict “vertical” divergence, but rather a more general graph structure known as a phylogenetic network, which also captures “horizontal” gene flow? The answer to this fundamental question depends not only upon densely sampled and divergent genomic sequence data, but also upon computational methods which are capable of accurately and efficiently inferring phylogenetic networks from large-scale genomic sequence datasets. Recent methodological advances have attempted to address this gap. However, in a recent performance study, we demonstrated that the state of the art falls well short of the scalability requirements of existing phylogenomic studies. The methodological gap remains: how can phylogenetic networks be accurately and efficiently inferred using genomic sequence data involving many dozens or hundreds of taxa? In this study, we address this gap by proposing a new phylogenetic divide-and-conquer method which we call FastNet. Using synthetic and empirical data spanning a range of evolutionary scenarios, we demonstrate that FastNet outperforms state-of-the-art methods in terms of computational efficiency and topological accuracy.

 

This work was supported in part by National Science Foundation Grant CCF-1565719; BEACON Center for the Study of Evolution in Action (NSF STC Cooperative Agreement DBI-093954)

 

Poster Number: CSE-28

Authors: Steven Hoffman, Arun Ross

Title:  A Deep Learning Approach to Presentation Attack Detection in Iris Recognition Systems

 

Abstract: This work addresses the problem of presentation attacks against iris recognition systems. Iris recognition systems attempt to recognize individuals using their iris patterns, typically acquired in the near-infrared (NIR) spectrum. However, it is possible for an adversarial user to circumvent the system by presenting a deliberately modified iris pattern or a fake iris pattern. These are called presentation attacks (PAs). Examples of PAs include (1) using printed images of another person’s iris or (2) using cosmetic contact lenses to change one’s own iris pattern. To detect such attacks, we develop a deep convolutional neural network (CNN) that can determine if an input iris image corresponds to an attack or not. On a small database consisting of 3400 real iris images and 1700 modified iris images, we were able to achieve a True Detection Rate (TDR) of 99.2% at a False Detection Rate (FDR) of 0.4%.

 


 

Poster Number: CSE-29

Authors: Amin Jourabloo, Xiaoming Liu

Title:  Pose-invariant Face Alignment with a Single CNN

 

Abstract: Face alignment has witnessed substantial improvements in the last decade. A particular recent focus has been aligning a dense 3D face shape to face images with large head poses. The dominant technology is based on a cascade of regressors, e.g., CNNs, which has shown promising results. Nonetheless, it suffers from several drawbacks, e.g., lack of end-to-end training, hand-crafted features, and slow training speed. To address these issues, we propose a new layer, named the visualization layer, that can be integrated into the CNN architecture and enables joint optimization with different loss functions. Extensive evaluation of the proposed method on the AFLW and AFW datasets demonstrates state-of-the-art accuracy, while reducing the training time by more than half compared to the typical cascade of CNNs. In addition, we compare multiple CNN architectures with the visualization layer to further demonstrate the advantage of its utilization.

 

This work was supported in part by Bosch Research and Technology Center North America

 

 

Poster Number: CSE-30

Authors: Douglas Kirkpatrick, Arend Hintze

Title:  A Genetic Algorithm's Struggle with Tic-Tac-Toe

 

Abstract: Computer games have not only improved in graphics and sound quality, but also provide a great opportunity to apply artificial intelligence (AI). Beyond classic algorithms like A*, tools from neuroevolution such as artificial neural networks, NEAT, and Markov Brains can be introduced to play such games. These AI tools either control non-player characters (NPCs) or the opponent in the case of more traditional games like chess. While it is relatively simple, technically speaking, to control an NPC using an AI, the new challenge is to optimize said AI to either play in novel and interesting ways, or to play better than any human could. Here we investigate how and why a genetic algorithm struggles to optimize a simple Tic-Tac-Toe strategy, and explore options to remedy such issues.

 

 

 

Poster Number: CSE-31

Authors: Tam Le, Matt W. Mutka

Title:  Access Control with Delegation for Smart Home Applications

 

Abstract: With the emergence of smart home applications, it is important to have flexible access control so that users can create and transfer their permissions in a convenient way. We propose a lightweight authorization protocol with support for a delegation chain, in which a user can easily transfer (part of) his or her access rights in the form of a Bloom filter. In our mechanism, a trusted chain can be maintained and verified by a single request, without the need for an access control list. The security of our protocol is based on the false positive rate of the Bloom filter. The protocol was implemented and evaluated on an Arduino prototype.
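The core data structure can be sketched as follows: access rights are hashed into a Bloom filter, and a verifier checks membership without an access control list. The filter size, hash construction, and right names below are illustrative stand-ins, not the poster's protocol:

```python
import hashlib

M, K = 256, 4  # filter bits and number of hash functions (illustrative)

def positions(item):
    # Derive K bit positions from one SHA-256 digest of the item.
    digest = hashlib.sha256(item.encode()).digest()
    return [int.from_bytes(digest[2*i:2*i + 2], "big") % M for i in range(K)]

def add(bits, item):
    for p in positions(item):
        bits[p] = 1

def query(bits, item):
    # True if the item may be in the set; False means definitely not.
    return all(bits[p] for p in positions(item))

rights = [0] * M
add(rights, "unlock:front-door")   # rights delegated to this user
add(rights, "read:thermostat")

print(query(rights, "unlock:front-door"))  # delegated right is accepted
print(query(rights, "unlock:garage"))      # rejected, up to the FP rate
```

The scheme's security hinges on the false positive rate: a right that was never added can still pass the membership test with small probability, which is why the filter parameters must be chosen to keep that rate negligible.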

 

 

 

 

 

Poster Number: CSE-32

Authors: Xinyu Lei, Alex X. Liu, Rui Li

Title:  Secure KNN Queries Over Encrypted Data: Dimensionality is Not Always a Curse

 

Abstract: The rapid growth of location-dependent applications on mobile devices is producing a plethora of geospatial data. Outsourcing geospatial data storage to a powerful cloud is an economical approach. However, safeguarding data users' location privacy against the untrusted cloud and providing efficient location-aware query processing over encrypted data are in conflict with each other. As a step toward reconciling this conflict, we study secure k nearest neighbor (SkNN) query processing over encrypted geospatial data in cloud computing. We design 2D SkNN (2DSkNN), a scheme that achieves both strong provable security and high efficiency. Our approach employs locality-sensitive hashing (LSH) in a dimensionality-increasing manner. This is a counter-intuitive use of LSH, since LSH is traditionally used to reduce data dimensionality and address the so-called "curse of dimensionality" problem. We show that increasing the data dimensionality via LSH is in fact helpful for tackling the 2DSkNN problem. Through LSH-based neighbor region encoding and two-tier prefix-free encoding, we turn the proximity test into a sequential keyword query with a stop condition, which can be handled by any existing symmetric searchable encryption (SSE) scheme. We show that 2DSkNN achieves adaptive indistinguishability under chosen-keyword attack (IND2-CKA) in the random oracle model. A prototype implementation and experiments on both real-world and synthetic datasets confirm the high practicality of 2DSkNN.

 

This work was supported in part by National Science Foundation under Grant Numbers CNS-1318563, CNS-1524698, CNS-1421407, and IIP-1632051; National Natural Science Foundation of China under Grant Numbers 61472184, 61321491, 61370226, and 61672156; Jiangsu Innovation and Entrepreneurship (S

 

 

 

Poster Number: CSE-33

Authors: Kaixiang Lin, Jiayu Zhou

Title:  Collaborative Deep Reinforcement Learning

 

Abstract: The human learning process is highly effective partly because we are capable of summarizing what has been learned, communicating it with other peers, and ultimately fusing knowledge from different sources to assist the current learning goal. This collaborative learning procedure ensures that knowledge is shared, continuously refined, and passed from generation to generation. The idea of knowledge transfer has led to many advances in machine learning and data mining, but significant challenges remain, especially when it comes to reinforcement learning, heterogeneous model structures, and different learning tasks. Motivated by human collaborative learning, in this paper we propose a collaborative deep reinforcement learning (CDRL) framework that performs adaptive knowledge transfer among heterogeneous learning agents. Specifically, the proposed CDRL performs knowledge distillation from the agent models to enable flexibility of model structure, efficiently incorporates the knowledge distillation into the online training of learning agents, and learns a deep alignment network to address the heterogeneity among different learning tasks. We present an efficient collaborative Asynchronous Advantage Actor-Critic (cA3C) algorithm, and demonstrate the effectiveness of the CDRL framework using extensive empirical evaluation on OpenAI Gym.

 

This work was supported in part by Office of Naval Research (ONR) under grant number N00014-14-1-0631; National Science Foundation under Grant IIS-1565596, IIS-1615597

 

 

 

 

Poster Number: CSE-34

Authors: Chin-Jung Liu, Li Xiao

Title:  RMIP: Resource Management with Interference Precancellation in Heterogeneous Cellular Networks

 

Abstract: Network densification by installing more cellular stations (cells) with smaller coverage is a promising technique to improve wireless capacity to meet the overwhelming demand for mobile data. These smaller cells with different coverage and the macrocellular base station form heterogeneous cellular networks (HetNets). However, the dense deployment of HetNet cells can result in unexpected inter-cell interference. In cellular networks, the data to all cells and to the mobile stations (MSs) originate from the core cellular network. We take advantage of this characteristic and propose a technique called interference precancellation. If the interferer to an MS is identified, the victim cell that serves the victim MS transmits the interference-precanceled signal, which is the signal intended for the victim MS minus the interference signal. The precanceled signal and the interference signal combine at the victim MS to yield the intended signal. With interference precancellation, the interferer and the victim cell can utilize the same wireless resources, further improving capacity. However, some MSs are interference-free, and MSs whose exact interferers cannot be determined still require isolated resources. We propose an algorithm for resource management with interference precancellation (RMIP) that jointly considers MSs experiencing different levels of interference. Through experiments on GNU Radio/USRP, we show that the known interference signal can be precanceled and that the combination of the interference signal and the precanceled signal yields the intended signal. Through simulation, we evaluate the performance in larger HetNets.
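The precancellation arithmetic can be sketched numerically. This idealized example ignores channel distortion, timing, and power constraints that a real system must handle; the signals are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
intended = rng.normal(size=64)       # signal for the victim MS
interference = rng.normal(size=64)   # interfering signal, known at the core

# The victim cell transmits the intended signal minus the interference.
precanceled = intended - interference

# Over the air, the interferer's transmission adds back on the same
# resources, so the superposition at the victim MS is the intended signal.
received = precanceled + interference

print(np.allclose(received, intended))
```

The feasibility hinges on the observation in the abstract that all downlink data originates from the core network, so the victim cell can know the interfering signal ahead of time.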

 

 

 

Poster Number: CSE-35

Authors: Xi Liu, Pang-Ning Tan

Title:  Human Daily Activity Recognition for Healthcare Using Wearable and Visual Sensing Data

 

Abstract: Wearable digital self-tracking technologies for monitoring an individual's health condition have become more accessible to the public in recent years with the development of connected portable devices, such as smart phones, smart watches, smart bands, and other personal biometric monitoring devices. Mining behavioral patterns from such wearable data, along with other available sensory data, has the potential to offer an objective, insightful service to clinical professionals and healthcare. For example, accurate identification of human activities could help provide better patient recovery training guidance, or early alarms for emergencies that may affect elderly people, such as strokes and falls. In this paper, we introduce an activity recognition system that learns a nonlinear SVM to identify 20 different human activities from accelerometer and RGB-D camera data. Our early experimental results show that the proposed approach is promising and effective.

 

 

 

Poster Number: CSE-36

Authors: Yaojie Liu, Xiaoming Liu

Title:  Dense Face Alignment

 

Abstract: Large-pose face alignment is a challenging problem due to the large variation in appearance across face poses. Previous methods tackle this problem by fitting a 3DMM model to a set of facial landmarks. In this paper, we propose to learn a single CNN that fits a dense and accurate 3D model to a single face image. The dense fitting process is constrained by landmarks, edges, and other auxiliary information. We leverage both the quality and the quantity of synthesized data, along with real data, to train the network. A Restart Training (RT) strategy is adopted once the previous training stage has converged, with different constraints deployed at different training stages. We also propose a novel evaluation metric to measure dense face alignment performance. Experiments show that the proposed method runs effectively and efficiently in real time and achieves state-of-the-art performance on several challenging in-the-wild datasets.

 

 

 

Poster Number: CSE-37

Authors: Vahid Mirjalili, Arun Ross

Title:  Biometric Privacy: Modifying Soft Biometric Attributes of Face Images

 

Abstract: Biometrics refers to the use of physical or behavioral attributes such as face, fingerprints, and iris to automatically recognize an individual. While biometric data is solely expected to be used for recognizing an individual, advances in machine learning have made it possible to extract additional information such as age, gender, ethnicity, and health indicators from biometric data. These auxiliary attributes are referred to as soft biometrics. Extracting such attributes from the biometric data of an individual, without his or her knowledge, has raised several privacy concerns.

In this work, we focus on extending privacy to face images. In particular, we design a technique to modify a face image such that auxiliary information such as gender, race, and age cannot be easily extracted from it, while the image can still be used for biometric recognition purposes. The proposed method entails iteratively perturbing a given face image such that the performance of the face matcher is not adversely affected, but that of the soft biometric classifiers is confounded. The perturbation is accomplished using a gradient descent technique. Experiments involving a face matcher (commercial SDK) and soft biometric classifiers for gender and race (IntraFace) convey the efficacy of the proposed method.
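The iterative perturbation can be sketched in the abstract. Below, a linear model stands in for the soft biometric classifier, and a proximity penalty stands in for preserving the face matcher's score; the real method perturbs images against a commercial matcher and IntraFace, so every quantity here is a synthetic stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(size=16)   # original face feature vector (stand-in)
w = rng.normal(size=16)    # linear "attribute classifier" weights (stand-in)

# Gradient descent on 0.5*(w.x)^2 + (lam/2)*||x - x0||^2:
# push the attribute score toward the decision boundary (0) while the
# penalty keeps the perturbed vector close to the original.
x, lr, lam = x0.copy(), 0.02, 0.1
for _ in range(200):
    grad = (w @ x) * w + lam * (x - x0)
    x -= lr * grad

print(float(w @ x0), float(w @ x))  # attribute score before vs. after
```

The same structure -- one loss term to confound the attribute classifier, one to stay close to the original -- is the essence of the trade-off the abstract describes between privacy and matching utility.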

 

This work was supported in part by National Science Foundation (NSF)

 

 

 

Poster Number: CSE-38

Authors: Chen Tian, Ali Munir, Alex X. Liu, Jie Yang

Title:  OpenFunction: An Extensible Data Plane Abstraction Protocol for Platform-independent Software-defined Middleboxes

 

Abstract: The state-of-the-art OpenFlow technology has only partially realized the software-defined networking (SDN) vision of abstraction and centralization for packet forwarding in switches. OpenFlow falls short in implementing middlebox functionalities due to a fundamental limitation in its match-action abstraction. In this paper, we advocate the vision of Software-Defined Middleboxes (SDM) to realize abstraction and centralization for middleboxes. We further propose OpenFunction, an SDM reference architecture and a network function abstraction layer. Our SDM architecture and OpenFunction abstraction are complementary to existing SDN and Network Function Virtualization (NFV) technologies. SDM complements SDN in that SDM realizes abstraction and centralization for middleboxes, whereas SDN realizes them for switches. OpenFunction complements OpenFlow in that OpenFunction addresses network functions whereas OpenFlow addresses packet forwarding. SDM also complements NFV in that SDM gives NFV the ability to use heterogeneous hardware platforms with various hardware acceleration technologies.

 

 

 

Poster Number: CSE-39

Authors: Kurt A. O'Hearn, H. Metin Aktulga

Title:  Efficient, Scalable Techniques for Charge Distribution in Polarizable and Reactive Molecular Dynamics Models

 

Abstract: Incorporating atom polarizability in molecular dynamics (MD) simulations is important for high-fidelity simulations. Solvers for the charge models used to dynamically determine atom polarizations constitute significant bottlenecks in terms of time-to-solution and the overall scalability of polarizable and reactive force fields. The objective of this work is to enhance the efficiency and scalability of the commonly used iterative Krylov subspace solvers on massively parallel shared memory architectures. We present results on accelerating the convergence rate of these iterative techniques via various preconditioning techniques for several charge models, including charge equilibration (QEq), electronegativity equilibration (EE), and atom-condensed Kohn-Sham density functional theory approximated to second order (ACKS2). Furthermore, extensive performance results on the computation and application of preconditioning factors and the overarching solver on multi-core and GPU systems are also discussed.
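The kind of Krylov iteration involved can be sketched with a Jacobi (diagonal) preconditioned conjugate gradient solve. The SPD test matrix below is a small synthetic stand-in; real QEq/EE/ACKS2 systems are far larger and use more sophisticated preconditioners than this diagonal one:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient with a diagonal preconditioner
    M_inv (elementwise inverse of diag(A))."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r                  # apply the preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(0)
Q = rng.normal(size=(50, 50))
A = Q @ Q.T + 50 * np.eye(50)      # symmetric positive definite test system
b = rng.normal(size=50)
M_inv = 1.0 / np.diag(A)           # Jacobi preconditioner

x = pcg(A, b, M_inv)
print(np.linalg.norm(A @ x - b))   # small residual
```

A better preconditioner shrinks the effective condition number of the system, which is exactly the convergence-rate lever the work above studies across charge models and hardware.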

 

This work was supported in part by MSU Startup Research Grant; MSU Engineering Distinguished Fellowship; MSU Foundation Strategic Partnership Grant; NSF Grant ACI-1566049

 

 

 

Poster Number: CSE-40

Authors: Chen Qiu, Matt Mutka

Title:  iLoom: Self-improving Indoor Localization by Profiling Outdoor Movement on Smartphones

 

Abstract: Indoor localization systems provide accurate location information as people stand and walk inside buildings. Based on the obtained location information, an indoor localization system supports various applications, such as indoor navigation, activity detection, and augmented and virtual reality. Unfortunately, GPS cannot be applied indoors because of various interferences. Smartphones are equipped with many low-cost sensors, opening opportunities for smartphones to serve as a platform for many challenging ubiquitous applications, including indoor localization. By employing the accelerometers on smartphones, dead reckoning is an intuitive and common approach to generate a user’s indoor motion trace. Nevertheless, dead reckoning often deviates from the ground truth due to noise in the sensing data. We propose iLoom, an indoor localization approach that benefits from transferring learning from outdoor motion tracking to the indoor environment. Via sensing data on a smartphone, iLoom constructs two datasets: relatively accurate outdoor motions from GPS and less accurate indoor motions from accelerometers. iLoom then leverages an Acceleration Range Box to improve the acceleration values used for computing dead reckoning. After applying a transfer learning algorithm to the two datasets, iLoom boosts the Acceleration Range Box to achieve better indoor localization results. In addition, iLoom exploits indoor GPS exception cases and a pedometer to further improve dead reckoning. Through case studies with 15 volunteers in indoor and outdoor scenarios, we show that iLoom is an infrastructure-free, low-training-complexity indoor positioning approach that achieves a localization accuracy of 0.28-0.51 meters in multiple scenarios.
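The drift problem that motivates iLoom's corrections can be sketched with a toy dead-reckoning example: double-integrating accelerometer readings gives position, and even small zero-mean sensor noise accumulates into position error. The sampling rate and noise level below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 1000                               # 10 s at 100 Hz
true_acc = np.zeros(n)                           # user standing still
noisy_acc = true_acc + rng.normal(0, 0.05, n)    # accelerometer noise

vel = np.cumsum(noisy_acc) * dt                  # integrate acceleration
pos = np.cumsum(vel) * dt                        # integrate velocity

print(abs(pos[-1]))                              # drift away from true 0 m
```

Because the noise is integrated twice, the position error grows over time even though the true motion is zero, which is why dead reckoning needs external corrections such as iLoom's learned acceleration bounds and pedometer checks.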

 

This work was supported in part by National Science Foundation grant no. CNS–1320561

 

Poster Number: CSE-41

Authors: Yichun Shi, Charles Otto, Anil K. Jain

Title:  Face Clustering

 

Abstract: Clustering face images according to their identity has two important applications: (i) grouping a collection of face images when no external labels are associated with the images, and (ii) indexing for efficient large scale face retrieval. The clustering problem is composed of two key parts: face representation and choice of similarity for grouping faces. In this research, we first propose a new representation based on ResNet, which has been shown to perform very well in image classification problems. Given this representation, we propose a new clustering algorithm which computes the similarity measure between images by directly optimizing the adjacency matrix. We formulate the problem as a Conditional Random Field (CRF) model and use Loopy Belief Propagation to find an approximate solution. Experimental results on two benchmark face databases (LFW and IJB-A) show that our algorithm outperforms well known clustering algorithms such as k-means and spectral clustering. Additionally, our algorithm can naturally incorporate pairwise constraints to obtain a semi-supervised version that leads to improved clustering performance.

 

 

 

Poster Number: CSE-42

Authors: Cunjian Chen, Antitza Dantcheva, Thomas Swearingen, Arun Ross

Title:  Spoofing Faces Using Makeup: An Investigative Study

 

Abstract: Makeup can be used to alter the facial appearance of a person. Previous studies have established the potential of using makeup to obfuscate the identity of an individual with respect to an automated face matcher. In this work, we analyze the potential of using makeup for spoofing an identity, where an individual attempts to impersonate another person’s facial appearance. In this regard, we first assemble a set of face images downloaded from the internet where individuals use facial cosmetics to impersonate celebrities. We next determine the impact of this alteration on two different face matchers. Experiments suggest that automated face matchers are vulnerable to makeup-induced spoofing and that the success of spoofing is impacted by the appearance of the impersonator’s face and the target face being spoofed. Further, an identification experiment is conducted to show that the spoofed faces are successfully matched at better ranks after the application of makeup.

 

 

 

Poster Number: CSE-43

Authors: Luan Tran, Xi Yin, Xiaoming Liu

Title:  Disentangled Representation Learning GAN for Pose-invariant Face Recognition

 

Abstract: The large pose discrepancy between two face images is one of the key challenges in face recognition. The conventional approach to pose-robust face recognition either performs face frontalization on, or learns a pose-invariant representation from, a non-frontal face image. We argue that it is more desirable to perform both tasks jointly so that they can leverage each other. To this end, this paper proposes the Disentangled Representation Learning-Generative Adversarial Network (DR-GAN) with three distinct novelties. First, the encoder-decoder structure of the generator allows DR-GAN to learn the identity representation for each face image, in addition to image synthesis. Second, this representation is explicitly disentangled from other face variations such as pose, through the pose code provided to the decoder and pose estimation in the discriminator. Third, DR-GAN can take one or multiple images as input, and generate one integrated representation along with an arbitrary number of synthetic images. Quantitative and qualitative evaluation on both constrained and unconstrained databases demonstrates the superiority of DR-GAN over the state of the art.

 

This work was supported in part by National Geospatial-Intelligence Agency (NGA)

 

 

 

Poster Number: CSE-44

Authors: Courtland VanDam, Pang-Ning Tan, Jiliang Tang

Title:  Analysis and Detection of Compromised Accounts on Twitter

 

Abstract: Compromised accounts are a common problem faced by social media users. A 2014 Pew Research study found that 21% of online adults had experienced having their email or social media account compromised. The focus of this research is to understand and detect compromised accounts using data mining approaches. To increase our understanding of compromised accounts, we answer the following research questions. First, who is compromising social media accounts, e.g., spammers or acquaintances of the user? Second, what type of content does the hacker post? Third, what features best distinguish compromised accounts from non-compromised accounts?

This poster presents an unsupervised learning framework to detect compromised accounts. The first stage of the proposed framework applies multimodal non-negative matrix factorization to tweet terms, sentiment, location, and source to learn users' behavior patterns. The second stage applies Hotelling's T² statistic to identify anomalous behavior. Because anomaly detection has a high false positive rate, the third stage examines the anomalous tweets to verify that the account was compromised.
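As a concrete illustration of the second stage, a minimal Hotelling's T² anomaly detector over low-dimensional behavior features might look as follows. This is a sketch on synthetic data: the three-dimensional factors, the fixed threshold, and the injected outlier are illustrative assumptions, not the poster's actual pipeline.

```python
import numpy as np

def hotelling_t2_scores(X):
    """T^2 score of each row against the sample mean and covariance."""
    mu = X.mean(axis=0)
    S_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    diff = X - mu
    # diag(diff @ S_inv @ diff.T) computed row-wise
    return np.einsum('ij,jk,ik->i', diff, S_inv, diff)

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 3))   # typical behavior factors
shifted = np.array([[8.0, 8.0, 8.0]])          # abrupt behavioral change
X = np.vstack([normal, shifted])

scores = hotelling_t2_scores(X)
threshold = 16.27  # chi-square 99.9% quantile with 3 degrees of freedom
flagged = np.where(scores > threshold)[0]
```

Tweets flagged this way would then pass to the third, verification stage, which keeps the overall false-positive rate manageable.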

 

 

 

Poster Number: CSE-45

Authors: Ding Wang, Pang-Ning Tan

Title:  A Framework for Mining Spatio-temporal Data from Multiple Sensors

 

Abstract: Advances in sensing technology have enabled organizations to collect real-time spatio-temporal data from multiple sensors for various applications. In this project, we propose to develop techniques for detecting and locating surrounding vehicles from multiple scans generated by LIDAR sensors (Ibeo and Velodyne) installed on a moving vehicle. Specifically, we consider the spatio-temporal data from each sensor as providing a separate view for the detection task and develop techniques for fusing data from multiple sensors to improve the overall detection accuracy. We present preliminary evidence demonstrating the limitations of using data from a single source in the vehicle detection problem. We then propose a multi-view learning framework to effectively combine the data from different sensors.

 

This work was supported in part by Robert Bosch LLC

 

 

 

Poster Number: CSE-46

Authors: Qi Wang, Mengying Sun, Liang Zhan, Paul Thompson, Shuiwang Ji, Jiayu Zhou

Title:  Multi-modality Disease Modeling via Collective Deep Matrix Factorization

 

Abstract: Alzheimer's disease (AD), one of the most common causes of dementia, is a severe, irreversible neurodegenerative disease that results in loss of mental functions. The transitional stage between the expected cognitive decline of normal aging and AD, mild cognitive impairment (MCI), has been widely regarded as a suitable time for possible therapeutic intervention. The challenging task of MCI detection is therefore of great clinical importance, where the key is to effectively fuse predictive information from multiple heterogeneous data sources collected from the patients. In this paper, we propose a framework to fuse multiple data modalities for predictive modeling using deep matrix factorization, which explores the non-linear interactions among the modalities and exploits such interactions to transfer knowledge and enable high-performance prediction. Specifically, the proposed collective deep matrix factorization decomposes all modalities simultaneously to capture non-linear structures of the modalities in a supervised manner, and learns a modality-specific component for each modality and a modality-invariant component shared across all modalities. The modality-invariant component serves as a compact feature representation of patients with high predictive power. The modality-specific components provide an effective means to explore imaging genetics, yielding insights into how imaging and genotype interact with each other non-linearly in AD pathology. Extensive empirical studies using various data modalities provided by the Alzheimer's Disease Neuroimaging Initiative (ADNI) demonstrate the effectiveness of the proposed method for fusing heterogeneous modalities.

 

This work was supported in part by Office of Naval Research (ONR) under grant number N00014-14-1-0631; National Science Foundation under Grant IIS-1565596, IIS-1615597

 

 

 

Poster Number: CSE-47

Authors: Wei Wang, Kevin J. Liu

Title:  SERES: Sequentially Resampled Support Measures for Multiple Sequence Alignment

 

Abstract: A multiple sequence alignment (MSA) aligns biological sequences into a data matrix that captures sequence homology and other relationships. MSAs are used as input to a wide range of computational problems in computational biology and bioinformatics, including phylogenetics, protein structure prediction, and automated genome annotation. MSAs are typically inferred using computational methods. It is well understood that downstream analyses are highly dependent on the accuracy of upstream MSA inference. There is therefore a great need to evaluate the quality of inferred MSAs. However, the non-parametric techniques that are widely used to evaluate support throughout the natural sciences (e.g., bootstrapping and jackknifing) typically ignore the sequential nature of biomolecular sequence data, which is an essential aspect of the computational problem of MSA inference.

To address the need for sequence-aware non-parametric support estimation in this context, we introduce SERES, a novel computational method for estimating statistical support for MSA inference based upon a sequential resampling process. We demonstrate the performance of SERES in a validation study that incorporates both synthetic and empirical data.
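The key distinction from a classical bootstrap is that resampled units should respect sequence adjacency rather than being drawn independently. A simple sequence-aware baseline in this spirit is moving-block resampling of alignment columns; the sketch below is an illustrative stand-in for that idea, not the SERES algorithm itself.

```python
import numpy as np

def block_resample_columns(n_cols, block_len, rng):
    """Resample alignment columns in contiguous blocks rather than
    independently, so neighboring columns stay together."""
    idx = []
    while len(idx) < n_cols:
        start = rng.integers(0, n_cols - block_len + 1)
        idx.extend(range(start, start + block_len))
    return np.asarray(idx[:n_cols])

rng = np.random.default_rng(1)
# one pseudo-replicate of a 120-column alignment, drawn in blocks of 10
replicate = block_resample_columns(n_cols=120, block_len=10, rng=rng)
```

Each such replicate would then be re-analyzed, and support estimated from agreement across replicates, in the same way jackknife or bootstrap replicates are used.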

 

This work was supported in part by National Science Foundation Grant CCF-1565719; BEACON Center for the Study of Evolution in Action (NSF STC Cooperative Agreement DBI-093954)

 

 

 

Poster Number: CSE-48

Authors: Liyang Xie, Inci M. Baytas, Kaixiang Lin, Jiayu Zhou

Title:  Privacy-preserving Distributed Multi-task Learning with Asynchronous Updates

 

Abstract: Many data mining applications involve a set of related learning tasks. Multi-task learning (MTL) is a learning paradigm that improves generalization performance by transferring knowledge among related tasks. MTL has attracted extensive research effort in the community, and various MTL algorithms have been successfully developed. Recent advances in distributed MTL have enabled learning from data that is distributed across different physical locations. However, significant challenges remain: when MTL is applied to build models from sensitive data, the privacy of that data could be at stake in such a distributed framework. In this paper, we propose a novel privacy-preserving distributed multi-task learning framework to address these challenges. Specifically, we present a privacy-preserving proximal gradient algorithm that solves a general class of MTL formulations, updates the models of the learning tasks asynchronously so that it is robust against network delays, and provides differential privacy guarantees through carefully designed perturbation. We have conducted extensive experiments to demonstrate the effectiveness and correctness of the proposed algorithm and its theoretical properties.
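A minimal sketch of one such perturbed proximal-gradient update is shown below, using a Gaussian noise mechanism and the L1 proximal operator as stand-ins; the regularizer, step sizes, and noise constant are illustrative assumptions, and a real deployment would calibrate the noise to the gradient's sensitivity and the target (ε, δ) privacy budget.

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal operator of lam * ||w||_1, a common sparsity regularizer."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def dp_prox_step(w, grad, lr, lam, noise_scale, rng):
    """One proximal-gradient step taken on a Gaussian-perturbed gradient."""
    noisy_grad = grad + rng.normal(0.0, noise_scale, size=grad.shape)
    return soft_threshold(w - lr * noisy_grad, lr * lam)

# Toy single-task example: least squares with a sparse ground truth.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ w_true

w = np.zeros(5)
for _ in range(300):
    grad = X.T @ (X @ w - y) / len(y)
    w = dp_prox_step(w, grad, lr=0.1, lam=0.01, noise_scale=0.05, rng=rng)
```

Despite the injected noise, the iterate stays close to the sparse ground truth, which is the behavior the perturbed algorithm is designed to preserve.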

 

This work was supported in part by Office of Naval Research (ONR) under grant number N00014-14-1-0631; National Science Foundation under Grant IIS-1565596, IIS-1615597

 

 

 

Poster Number: CSE-49

Authors: Xi Yin, Xiaoming Liu

Title:  Multi-task Learning for Face Recognition

 

Abstract: This paper explores multi-task learning (MTL) for face recognition. We answer the questions of how and why MTL can improve face recognition performance. First, we propose a multi-task Convolutional Neural Network (CNN) for face recognition in which identity recognition is the main task and pose, illumination, and expression estimation are the side tasks. Second, we develop a dynamic-weighting scheme to automatically assign a loss weight to each side task. Third, we propose a pose-directed multi-task CNN that groups different poses to learn pose-specific identity features simultaneously across all poses. We observe that the side tasks serve as regularizations that disentangle the variations from the learnt identity features. Extensive experiments on the entire Multi-PIE dataset demonstrate the effectiveness of the proposed approach. To the best of our knowledge, this is the first work to use all data in Multi-PIE for face recognition. Our approach is also applicable to in-the-wild datasets and achieves comparable or better performance than the state of the art on LFW, CFP, and IJB-A.
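One plausible form of such a dynamic-weighting scheme is to renormalize the side-task weights from their current losses at each step, so that no single side task dominates the combined objective. The rule below is an illustrative guess at this mechanism, not necessarily the paper's actual formula.

```python
import numpy as np

def dynamic_side_weights(side_losses, temperature=1.0):
    """Softmax over negated losses: side tasks with lower current loss
    receive larger weight. Purely illustrative; weights sum to 1."""
    l = np.asarray(side_losses, dtype=float)
    w = np.exp(-l / temperature)
    return w / w.sum()

# total_loss = main_loss + sum(w_i * side_loss_i), with w refreshed each step
weights = dynamic_side_weights([0.2, 0.9, 0.4])  # e.g. pose, illumination, expression
```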

 

 

 

Poster Number: CSE-50

Authors: Shuai Yuan, Pang-Ning Tan, Kendra S. Cheruvelil, C. Emi Fergus, Nicholas K. Skaff, Patricia A. Soranno

Title:  Learning Hash-based Features for Incomplete Continuous-valued Data

 

Abstract: Hash-based feature learning is a widely-used data mining approach for dimensionality reduction and for building linear models that are comparable in performance to their nonlinear counterparts. Unfortunately, such an approach is inapplicable to many real-world data sets because they are often riddled with missing values. Substantial data preprocessing is therefore needed to impute the missing values before the hash-based features can be derived. Biases can be introduced during this preprocessing because it is performed independently of the subsequent modeling task, which can result in models constructed from the imputed hash-based features being suboptimal. To overcome this limitation, we present a novel framework called H-FLIP that simultaneously estimates the missing values while constructing a set of nonlinear hash-based features from the incomplete data. The effectiveness of the framework is demonstrated through extensive experiments conducted using both synthetic and real-world data sets.
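For context, a standard way to derive hash-based features from complete data is random-hyperplane hashing, where each binary feature thresholds a random projection; a linear model trained on these bits can approximate a nonlinear model on the raw inputs. This background sketch assumes fully observed data and is not the H-FLIP algorithm, which additionally handles missing values.

```python
import numpy as np

def random_hyperplane_features(X, n_bits, seed=0):
    """Binary hash features: bit j of row x is 1 iff w_j . x + b_j > 0."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_bits))   # random hyperplane normals
    b = rng.uniform(-1.0, 1.0, size=n_bits)     # random offsets
    return (X @ W + b > 0).astype(np.float64)

X = np.random.default_rng(1).normal(size=(50, 4))
H = random_hyperplane_features(X, n_bits=16)    # 50 rows of 16 binary features
```

The imputation-first pipeline criticized above would run a separate missing-value imputer before this step; H-FLIP's contribution is to couple the two.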

 

This work was supported in part by National Science Foundation under grant #EF-1065786 and #IIS-1615612

 

 

 

Poster Number: CSE-51

Authors: Masoud Zarifneshat, Chin-Jung Liu, Li Xiao

Title:  A Protocol for Link Blockage Mitigation in mm-Wave Networks

 

Abstract: mm-Wave is a promising technology for meeting the enormous bandwidth demands of future-generation cellular networks. The technology offers a vast amount of unused bandwidth but suffers from human blockage, and blockage mitigation methods designed for indoor environments cannot be applied effectively to outdoor scenarios. In this paper, we mitigate human blockage in mm-Wave networks by proposing an algorithm that performs intelligent user association. The proposed algorithm collects the history of blockage incidents, which occur at different locations throughout the network. When user equipment attempts to find a base station to associate with, the algorithm examines the recorded blockage incidents near the location of the user equipment, so that the user equipment is associated with a base station that has a smaller chance of being blocked. Simulation results show that our proposed algorithm outperforms another state-of-the-art user association algorithm designed for mm-Wave networks in terms of SINR, link rate, and blockage rate.
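The association rule described above can be sketched as follows. The data structures, tie-breaking by distance, and the neighborhood radius are hypothetical simplifications; the actual protocol's bookkeeping and radio-quality terms are omitted.

```python
import math

def associate(ue_pos, stations, incidents, radius=25.0):
    """Pick the base station with the fewest recorded blockage incidents
    within `radius` meters of the UE's location; break ties by distance.

    stations:  list of (station_id, (x, y))
    incidents: list of (station_id, (x, y)) past blockage locations
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def nearby_blockages(bs_id):
        return sum(1 for sid, loc in incidents
                   if sid == bs_id and dist(loc, ue_pos) <= radius)

    return min(stations,
               key=lambda s: (nearby_blockages(s[0]), dist(s[1], ue_pos)))

stations = [("bs1", (0.0, 0.0)), ("bs2", (60.0, 0.0))]
incidents = [("bs1", (5.0, 5.0)), ("bs1", (8.0, 2.0))]  # past blockages near bs1
best = associate((10.0, 0.0), stations, incidents)
```

Here the UE skips the nearer but historically blocked bs1 in favor of bs2; with no incident history, it would simply fall back to the nearest station.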

 

 

 

Poster Number: CSE-52

Authors: Zejia Zheng, Juyang Weng

Title:  Mobile Device Based Outdoor Navigation Using Video

 

Abstract: Vision-based navigation is challenging because of dynamic environments and huge appearance variance. Traditional autonomous navigation systems use laser range scanners to perform collision avoidance and 3D reconstruction of the local driving scene. Existing image-based navigation methods, on the other hand, do not consider spatiotemporal visual context because they usually lack attention mechanisms, especially top-down attention. We developed a brain-inspired framework, the Developmental Network (DN), as an emergent Turing machine, which has clearly understandable context and invariances that existing neural network models lack. This work applies DN to a well-known, challenging AI task: outdoor autonomous navigation for a pedestrian using a mobile device with limited computational resources. Although GPS is available, vision-guided behavior that integrates GPS signals is a long-standing, unsolved AI problem that faces conflicts between high-level goals and low-level sensory signals. The network successfully navigated in regular long-duration testing in novel settings and in blindfolded testing, under both sunny and cloudy weather conditions.

 

 

 

Poster Number: CSE-53

Authors: Zhiwei Wang, Tyler Derr, Jiliang Tang

Title:  Understanding and Predicting Weight Loss with Mobile Social Networking Data

 

Abstract: It has become increasingly popular to use mobile social networking applications for weight loss and management. Users not only can create profiles and maintain their records, but also can perform a variety of social activities that lower the barrier to sharing or seeking information. Due to their open and connected nature, these applications produce massive data containing rich weight-related information, which offers immense opportunities to enable advanced research on weight loss. In this paper, we conduct an initial investigation to understand weight loss using a large-scale mobile social networking dataset with nearly 10 million users. In particular, we study individual and social factors related to weight loss and reveal a number of interesting findings that help us build a meaningful model to predict weight loss automatically. The experimental results demonstrate the effectiveness of the proposed model and the significance of these factors in weight loss.