• image

    DIUx xView 2018 Detection Challenge for Disaster Response

    VLL is participating in the xView challenge

    Applying computer vision to overhead imagery has the potential to detect emerging natural disasters, improve response, quantify the direct and indirect impact — and save lives. A VLL team is working on this challenge.

  • image

    Deep Learning to Predict Protein-Ligand Binding Affinities

    Protein-Ligand Binding Affinities for Drug Discovery

    Collaborators:

    In recent years, the cheminformatics community has seen increasing success with machine-learning-based scoring functions for estimating binding affinities and predicting poses. The prediction of protein-ligand binding affinities is crucial for drug discovery research. Many physics-based scoring functions have been developed over the years, and machine learning approaches have recently been shown to boost the performance of traditional scoring functions. We are using deep learning to predict protein-ligand binding affinities.

  • image

    Understanding and Improving Weight Initialization in Neural Networks

    A new weight initialization scheme for deep neural networks is proposed

    We present a taxonomy of weight initialization schemes used in deep learning and survey the most representative techniques in each class. We also introduce a new weight initialization scheme. In this technique, we perform an initial feedforward pass through the network using an initialization mini-batch. Using statistics obtained from this pass, we initialize the weights of the network so that the following properties are met: 1) weight matrices are orthogonal; 2) ReLU layers produce a predetermined number of non-zero activations; 3) the output produced by each internal layer has a predetermined variance; 4) weights in the last layer are chosen to minimize the error on the initialization mini-batch. We evaluate our method on two popular architectures and achieve faster convergence rates on the MNIST and CIFAR-10 data sets compared to state-of-the-art initialization techniques.
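    Properties 1 and 3 can be sketched as a data-dependent initialization: draw orthogonal weights, run the initialization mini-batch forward, and rescale each layer so its pre-activation variance hits a target. The NumPy sketch below uses assumed layer sizes and a random stand-in mini-batch; it is not the full procedure (properties 2 and 4 are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def orthogonal(n_in, n_out):
    # Property 1: orthogonal weight matrix via QR decomposition.
    a = rng.standard_normal((max(n_in, n_out), min(n_in, n_out)))
    q, _ = np.linalg.qr(a)
    return q if n_in >= n_out else q.T

def init_network(layer_sizes, x_init, target_var=1.0):
    """Initialize weights with one feedforward pass on an initialization
    mini-batch, rescaling each layer so its pre-activation variance
    matches target_var (property 3)."""
    weights, h = [], x_init
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        w = orthogonal(n_in, n_out)
        z = h @ w
        w *= np.sqrt(target_var / (z.var() + 1e-8))  # variance matching
        h = np.maximum(h @ w, 0.0)                   # ReLU
        weights.append(w)
    return weights

x_init = rng.standard_normal((64, 32))   # assumed initialization mini-batch
weights = init_network([32, 64, 64], x_init)
```

    Because the rescaling uses statistics of the actual mini-batch rather than a fixed fan-in formula, every layer starts with well-conditioned pre-activations regardless of depth.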

  • image

    Adversarial Machine Learning for non-RGB Images

    We study adversarial examples in the context of non-RGB images. We are also exploring other cyber-security-related adversarial machine learning problems.

    Collaborators:

    We present the first study of constructing adversarial examples for non-RGB imagery, and show that non-RGB machine learning models are vulnerable to adversarial examples. We propose a framework to make non-RGB image-based semantic segmentation systems robust to adversarial attacks.
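    As an illustration of how adversarial examples are typically constructed, the sketch below applies the fast gradient sign method (FGSM) to a toy single-band "pixel" classifier. The logistic-regression model is a stand-in assumption, not the project's actual non-RGB segmentation network:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast gradient sign method: perturb the input in the direction that
    increases the cross-entropy loss, bounded by eps per feature."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w            # analytic input gradient for logistic loss
    return x + eps * np.sign(grad_x)

# Toy single-band (non-RGB) image patch flattened to a feature vector.
w = rng.standard_normal(16)
b = 0.0
x = rng.standard_normal(16)
y = 1.0                              # true label
x_adv = fgsm(x, y, w, b, eps=0.3)
```

    Because the perturbation follows the loss gradient, the model's confidence in the true label drops even though no feature moves by more than eps.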

  • image

    EmoColor

    We are working on predicting fine-grained emotions from color images.

  • image

    Deep Learning in Spoken Dialogue

    We are working on improving current deep learning models for turn-taking in spoken dialogue.

    Collaborators:

    An Improved Deep-Learning Model of Turn-taking in Spoken Dialogue.

  • image

    Hyperspectral Image Analysis

    Hyperspectral image analysis for improved scene understanding, mission planning, and social good

    We propose Integrated Learning and Feature Selection (ILFS) as a generic framework for supervised dimensionality reduction. We demonstrate that ILFS is effective for dimensionality reduction of multispectral and hyperspectral imagery and significantly improves performance on the semantic segmentation task for high-dimensional imagery.

    We work on finding spatial feature correspondences between images generated by sensors operating in different regions of the spectrum, in particular the visible (Vis: 0.4-0.7 µm) and shortwave infrared (SWIR: 1.0-2.5 µm) bands. Under the assumption that only one of the available datasets (e.g., Vis) is geospatially ortho-rectified, this spatial correspondence can enable a machine to automatically register SWIR and Vis images representing the same swath, as the first step toward full geospatial ortho-rectification of the SWIR dataset.
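    Once putative Vis-SWIR keypoint correspondences are available (e.g., from SIFT descriptor matching), the registration step reduces to robustly estimating a geometric transform between the two images. The NumPy sketch below fits a 2D affine transform with a simple RANSAC loop on synthetic correspondences; the affine model and all parameters are illustrative assumptions rather than the project's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points."""
    a = np.hstack([src, np.ones((len(src), 1))])   # (n, 3) homogeneous coords
    m, *_ = np.linalg.lstsq(a, dst, rcond=None)    # (3, 2) affine parameters
    return m

def ransac_affine(src, dst, n_iter=200, tol=2.0):
    """RANSAC over putative correspondences: repeatedly fit an affine
    transform to 3 random matches and keep the largest inlier set."""
    ones = np.ones((len(src), 1))
    best = np.zeros(len(src), bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)
        m = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(np.hstack([src, ones]) @ m - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best.sum():
            best = inliers
    # Refit on all inliers of the best model for the final estimate.
    return fit_affine(src[best], dst[best]), best

# Synthetic correspondences: a known scale + shift, plus a few gross outliers.
src = rng.uniform(0, 100, (40, 2))
dst = src * 1.1 + np.array([5.0, -3.0])
dst[:5] += rng.uniform(50, 80, (5, 2))             # simulated mismatches
m, inliers = ransac_affine(src, dst)
```

    The outlier rejection matters here because cross-spectral descriptor matching between Vis and SWIR produces many more false matches than same-band matching.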

  • image

    Craniometric Alignment of Rat Brain using Computer Vision

    Automated matching of rat brain images to the corresponding rat brain atlas

    Collaborators:

    • Dr. Arshad Khan, Biological Sciences - Border Biomedical Research Center (BBRC)

    This project is about applying traditional computer vision and deep learning techniques to the field of neuroscience. Specifically, we are using Nissl-stained rat brain images obtained from rat brain atlases such as Swanson (2004) and Paxinos/Watson (2014) to develop algorithms that will help improve the speed and accuracy of manual mapping between an experimental dataset and canonical brain atlases.

    We developed an algorithm, using SIFT and RANSAC, that finds matches between a region of a Nissl-stained rat brain image and a complete atlas and then ranks these matches, allowing general mapping from regions to atlases. The algorithm and experiments were presented at the Society for Neuroscience 2017 Annual Conference, and the findings are also part of a publication titled "Computer vision evidence supporting craniometric alignment of rat brain atlases to streamline expert-guided, first-order migration of hypothalamic spatial datasets" that has been provisionally accepted to Frontiers in Systems Neuroscience.

    We are currently developing a new algorithm, using SIFT and dynamic programming, that derives plate-to-plate correspondences between atlases.
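    A dynamic-programming pass of this kind can be sketched as a DTW-style alignment: given a plate-to-plate dissimilarity matrix (here assumed to come from something like normalized SIFT match counts), find the minimum-cost monotone path pairing plates of one atlas with plates of the other. The matrix values and function below are illustrative, not the project's actual algorithm:

```python
import numpy as np

def align_plates(cost):
    """Monotone plate-to-plate alignment between two atlases via dynamic
    programming (DTW-style): minimize total dissimilarity while preserving
    plate order along both atlases."""
    n, m = cost.shape
    dp = np.full((n + 1, m + 1), np.inf)
    dp[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i, j] = cost[i-1, j-1] + min(dp[i-1, j-1], dp[i-1, j], dp[i, j-1])
    # Backtrack from the last pair of plates to recover the alignment path.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        pairs.append((i - 1, j - 1))
        step = int(np.argmin([dp[i-1, j-1], dp[i-1, j], dp[i, j-1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]

# Illustrative dissimilarity matrix, e.g. 1 - normalized SIFT match score.
cost = np.array([[0.0, 2.0, 4.0],
                 [2.0, 0.0, 2.0],
                 [4.0, 2.0, 0.0]])
pairs = align_plates(cost)
```

    The order-preserving constraint is what distinguishes this from independent per-plate matching: a plate cannot map earlier in one atlas than its predecessor does, which mirrors the anterior-posterior ordering of atlas levels.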