Variational Inference & Stein Variational Gradient Descent
Variational inference (VI) is a powerful framework for approximating complex posterior distributions in Bayesian inference. It recasts inference as optimization: a tractable family of distributions is fit to the posterior, typically by minimizing a KL divergence, allowing for efficient approximate computation. Stein Variational Gradient Descent (SVGD) is a particle-based VI method that iteratively transports a set of particles along kernelized gradients of the log posterior, providing a flexible and scalable approximation.
report
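The kernelized particle update at the heart of SVGD can be sketched in a few lines of NumPy. This is an illustrative toy, not the project's implementation: it assumes an RBF kernel with a fixed bandwidth and a standard-normal target, and the function name `svgd_step` and its parameters are inventions for the example.

```python
import numpy as np

def svgd_step(particles, grad_log_p, bandwidth=1.0, step_size=0.1):
    """One SVGD update: kernel-weighted log-density gradients plus a repulsive term."""
    n = particles.shape[0]
    diffs = particles[:, None, :] - particles[None, :, :]   # (n, n, d): x_j - x_i
    sq_dists = np.sum(diffs ** 2, axis=-1)                  # (n, n)
    k = np.exp(-sq_dists / (2 * bandwidth ** 2))            # RBF kernel k(x_j, x_i)
    grad_k = -diffs / bandwidth ** 2 * k[:, :, None]        # grad wrt x_j of k(x_j, x_i)
    grads = grad_log_p(particles)                           # (n, d)
    phi = (k @ grads + grad_k.sum(axis=0)) / n              # Stein variational direction
    return particles + step_size * phi

# Toy target: standard normal, so grad log p(x) = -x.
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=0.5, size=(100, 1))  # particles start far from the target
for _ in range(500):
    x = svgd_step(x, lambda p: -p)
# The attractive term pulls the particles toward the mode; the repulsive
# kernel-gradient term spreads them out to cover the target's variance.
```

The repulsive term is what distinguishes SVGD from running gradient ascent on each particle independently: without it, all particles would collapse onto the posterior mode.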
Advanced Markov Chain Monte Carlo (MCMC) Methods
Markov Chain Monte Carlo (MCMC) methods are a class of algorithms for sampling from complex probability distributions. They are particularly useful in Bayesian statistics, where the posterior distribution is often intractable. MCMC methods generate samples by constructing a Markov chain whose stationary distribution is the target distribution, allowing for efficient exploration of high-dimensional spaces.
report · code
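As an illustration of the Markov-chain construction described above, here is a minimal random-walk Metropolis sampler, one of the simplest MCMC algorithms (the project covers more advanced methods). The function name, proposal scale, and toy target are assumptions for this sketch.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples, proposal_scale=1.0, seed=0):
    """Random-walk Metropolis: propose a Gaussian step, accept with prob min(1, p'/p)."""
    rng = np.random.default_rng(seed)
    x = x0
    log_p = log_target(x)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        x_new = x + rng.normal(scale=proposal_scale)
        log_p_new = log_target(x_new)
        # Accept or reject in log space; rejected proposals repeat the current state.
        if np.log(rng.uniform()) < log_p_new - log_p:
            x, log_p = x_new, log_p_new
        samples[i] = x
    return samples

# Unnormalized toy target: standard normal, log p(x) = -x^2 / 2.
samples = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_samples=20000)
```

Note that only the *ratio* of target densities enters the acceptance test, which is why MCMC works even when the posterior's normalizing constant is intractable.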
Parallelizing Agent-Based Disease-Spread Models
Accurate disease-spread modeling is critical for developing effective mitigation efforts during an outbreak. The most basic epidemiological models use ordinary differential equation representations of the evolution of susceptible, infected, and recovered populations over time, but these models are limited by their simplistic and deterministic nature. In contrast, agent-based models assign attributes to each individual in a population. These attributes, in addition to global rules for movement and interaction between agents, define an individual's behavior patterns. Agent-based epidemiological models
are stochastic and can be highly realistic, since the level of detail incorporated from demographics, city geography, and personal time-use data is arbitrary.
Agent-based models (ABMs) typically loop through every agent at each time step and then evaluate disease-spread progression at the agents' new locations, which becomes prohibitively slow for large populations. Parallelism is therefore required to make high-fidelity simulation and large-scale data generation feasible.
report · code
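The per-agent loop described above can be sketched as a toy SIR simulation; the inner loop over infected agents is exactly the per-agent work that parallelization targets. All parameters (contact radius, infection probability, recovery time) and variable names are illustrative assumptions, not the project's values.

```python
import numpy as np

# Minimal agent-based SIR sketch: agents move randomly in the unit square,
# infection spreads within a contact radius, and infected agents recover
# after a fixed number of steps.
rng = np.random.default_rng(1)
N, STEPS, RADIUS, P_INFECT, RECOVER_AFTER = 500, 100, 0.03, 0.5, 20

pos = rng.uniform(size=(N, 2))       # agent locations
state = np.zeros(N, dtype=int)       # 0 = susceptible, 1 = infected, 2 = recovered
state[:5] = 1                        # seed a few initial infections
infected_time = np.zeros(N, dtype=int)

for _ in range(STEPS):
    # Move every agent, then evaluate disease spread at the new locations.
    pos = np.clip(pos + rng.normal(scale=0.01, size=(N, 2)), 0.0, 1.0)
    infected = np.flatnonzero(state == 1)
    susceptible = np.flatnonzero(state == 0)
    for i in infected:               # the O(agents) loop that parallelism speeds up
        d = np.linalg.norm(pos[susceptible] - pos[i], axis=1)
        contacts = susceptible[d < RADIUS]
        newly = contacts[rng.uniform(size=contacts.size) < P_INFECT]
        state[newly] = 1
    infected_time[state == 1] += 1
    state[(state == 1) & (infected_time >= RECOVER_AFTER)] = 2
```

Because each infected agent's contact check is independent within a time step, this inner loop distributes naturally across threads or processes, which is the basic opportunity the project exploits.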
Reinforcement Learning for Optimal Path Planning of Drone Swarms
Deep reinforcement learning (RL) is a powerful approach for training agents to make decisions in complex environments. It combines deep learning with reinforcement learning techniques to enable agents to learn optimal policies through trial and error. This approach has shown remarkable success in various domains, including robotics, game playing, and autonomous systems. In this project, we explore the application of deep RL algorithms, such as Soft Actor-Critic (SAC) and Deep Deterministic Policy Gradient (DDPG), to train agents in a simulated environment for drone swarm trajectory optimization.
report · code
Decoding the Impact of Federal Reserve Emotions on Stock Market Activity with Deep Learning
The primary objective of this project was to examine the emotions expressed by members of the Federal Reserve during press conferences and to explore the possible connection between these emotions and the intraday returns and trading volume of the S&P 500 index. To accomplish this, the project used images taken from the press conferences and employed DeepFace to predict emotions from them. The emotions were analyzed every two seconds and distilled over three-minute time frames, and potential correlations with intraday returns and trading volume were investigated.
report · code
Deep convolutional neural networks for predicting the presence and severity of COVID-19 from chest X-rays
Machine learning offers powerful tools for COVID-19 detection using chest X-rays, providing an alternative to PCR testing. Our custom classification model, COVID-Net, achieved 95.20% accuracy, outperforming segmentation-based models like UNet and UNet++. While segmentation models provide insights into COVID-19 severity and localization, COVID-Net excels in diagnostic accuracy, highlighting the potential of ML in medical imaging.
report · code