Here are some of my students' posters for their undergraduate capstones.
With decreasing hardware costs, stationary hydrophones are increasingly deployed in the marine environment to record animal vocalizations amidst ocean noise over extended periods of time. Bioacoustic data collected in this way is a valuable and practical resource for studying vocally active marine species and can make an important contribution to ecosystem monitoring. A main challenge of this data, however, is the lack of the annotations that many supervised neural network models rely on to learn to distinguish between noise and marine animal vocalizations. In this paper, we propose an unsupervised deep embedded clustering approach based on LSTM autoencoders that learns a representation of the input audio by minimizing a reconstruction loss while simultaneously minimizing a clustering loss through Kullback–Leibler divergence.
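To make the joint objective concrete, here is a minimal Keras sketch of the kind of model the poster describes. This is my own illustration, not the students' code; every shape, layer size, and loss weight is an assumption. An LSTM autoencoder compresses each clip to an embedding, a soft-assignment layer computes cluster memberships with a Student's t kernel (as in deep embedded clustering), and training minimizes reconstruction MSE plus KL(P‖Q) against a periodically refreshed target distribution.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Hypothetical shapes: clips of T spectrogram frames with F frequency bins each.
T, F, LATENT, K = 128, 64, 32, 10  # time steps, features, embedding dim, clusters

# LSTM encoder: compress each clip to a fixed-length embedding z.
inp = layers.Input(shape=(T, F))
z = layers.LSTM(LATENT)(inp)

# LSTM decoder: reconstruct the input sequence from the embedding.
dec = layers.RepeatVector(T)(z)
dec = layers.LSTM(LATENT, return_sequences=True)(dec)
recon = layers.TimeDistributed(layers.Dense(F))(dec)

# Soft cluster assignments q via a Student's t kernel around K learnable centroids.
class ClusteringLayer(layers.Layer):
    def build(self, input_shape):
        self.centroids = self.add_weight(
            name="centroids", shape=(K, input_shape[-1]),
            initializer="glorot_uniform")

    def call(self, z):
        # Squared distance from each embedding to each centroid: (batch, K).
        d2 = tf.reduce_sum(tf.square(tf.expand_dims(z, 1) - self.centroids), axis=2)
        q = 1.0 / (1.0 + d2)
        return q / tf.reduce_sum(q, axis=1, keepdims=True)

q = ClusteringLayer()(z)
model = Model(inp, [recon, q])

# Sharpened target distribution p computed from the current assignments.
def target_distribution(q):
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)

# Joint objective: reconstruction MSE plus KL(P || Q); the 0.1 weight is a guess.
model.compile(optimizer="adam", loss=["mse", "kld"], loss_weights=[1.0, 0.1])

# Training loop sketch: refresh p, then fit on (x, [x, p]).
# x = ...  # (N, T, F) array of spectrogram clips
# for epoch in range(num_epochs):
#     p = target_distribution(model.predict(x)[1])
#     model.fit(x, [x, p], epochs=1, batch_size=64)
```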
The role of this neural network tool is to give non-CS scholars such as physicists, engineers, and scientists an intuitive, easy-to-learn web client for testing their machine learning models, letting them drag and drop the neural network components they need. From a small survey we conducted, most researchers know how to program in Python and R; they tend to use open-source tools made by other researchers and must go in and edit those tools to work for their use case. Our tool targets image classification using convolutional neural networks (CNNs), deep learning models widely used in machine vision tasks such as image and video recognition, as well as in recommender systems and natural language processing. This was achieved with the MobileNetV2 model, a convolutional neural network architecture designed to perform well on mobile devices; it is based on an inverted residual structure in which the residual connections sit between the bottleneck layers. The model is small, low-latency, and low-power, meeting the resource constraints of our use case, and this version of MobileNet improves performance on multiple tasks and benchmarks across a spectrum of different model sizes. The web client uses Playground TensorFlow as the front end and Teachable Machine as the backend, with MobileNetV2 as the model. Users can log into the web client and upload images for each class. The web client stores everything locally within the browser, and no servers are needed, for privacy reasons. Users can also modify the number of epochs, the learning rate, the dense units, and the batch size to tweak the results and understand what each parameter does. Future work on this tool will provide more features and models for users to choose from beyond MobileNetV2.
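For readers who want to see what this setup looks like outside the browser, below is a hypothetical Python/Keras analogue of the transfer-learning arrangement (the actual tool runs TensorFlow.js in the browser via Teachable Machine, which this does not reproduce): a frozen ImageNet-pretrained MobileNetV2 backbone feeds a small trainable dense head, and the epochs, learning rate, dense units, and batch size appear as the same tunable knobs the web client exposes. All constants are placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_CLASSES = 3       # one per user-defined image class
DENSE_UNITS = 100     # the "Dense Units" knob in the UI
LEARNING_RATE = 1e-3  # the "Learning Rate" knob
EPOCHS, BATCH_SIZE = 50, 16  # the "Epoch" and "Batch Size" knobs

# Frozen MobileNetV2 backbone pretrained on ImageNet acts as a feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False,
    weights="imagenet", pooling="avg")
base.trainable = False

# Small trainable head on top of the frozen features.
inp = layers.Input(shape=(224, 224, 3))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inp)
x = base(x, training=False)
x = layers.Dense(DENSE_UNITS, activation="relu")(x)
out = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(inp, out)
model.compile(optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Train on the user's uploaded, labeled images (arrays assumed to exist):
# model.fit(train_images, train_labels, epochs=EPOCHS, batch_size=BATCH_SIZE)
```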
This dashboard allows researchers to deploy simulated federated learning models under varying parameters, such as the type of model, the federated learning strategy, and the number of classes. Once a model has been deployed, its output can be viewed as logs and TensorBoard graphs. It is important to note that the dashboard performs simulated federated learning and does not use real-world devices.
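To make "simulated federated learning" concrete, here is a minimal federated-averaging (FedAvg) sketch of my own; it does not show the dashboard's actual implementation or its other strategy options, and the model, client data, and round count below are placeholders. Each simulated client trains a local copy of the shared model, and the server averages the resulting weights in proportion to client dataset size.

```python
import numpy as np
import tensorflow as tf

def make_model():
    # Placeholder classifier; the dashboard lets users pick the model type.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

def fedavg_round(global_weights, client_datasets):
    """Train a local copy per simulated client, then average the weights."""
    client_weights, client_sizes = [], []
    for x, y in client_datasets:
        local = make_model()
        local.set_weights(global_weights)
        local.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
        local.fit(x, y, epochs=1, batch_size=32, verbose=0)
        client_weights.append(local.get_weights())
        client_sizes.append(len(x))
    # Weighted average, proportional to each client's dataset size.
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(global_weights))
    ]

# Simulated clients: random data standing in for per-device shards.
clients = [(np.random.rand(100, 32), np.random.randint(0, 10, 100))
           for _ in range(5)]
global_model = make_model()
weights = global_model.get_weights()
for _ in range(3):  # three federated rounds
    weights = fedavg_round(weights, clients)
global_model.set_weights(weights)
```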