<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Projects | Anh Duong Vo</title><link>https://anhduongvo.github.io/projects/</link><atom:link href="https://anhduongvo.github.io/projects/index.xml" rel="self" type="application/rss+xml"/><description>Projects</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Sun, 19 May 2024 00:00:00 +0000</lastBuildDate><image><url>https://anhduongvo.github.io/media/icon_hu_982c5d63a71b2961.png</url><title>Projects</title><link>https://anhduongvo.github.io/projects/</link></image><item><title>Applying new data analysis or ML methods to analyze multi-modal time series data</title><link>https://anhduongvo.github.io/projects/multimodal/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://anhduongvo.github.io/projects/multimodal/</guid><description>&lt;p&gt;Work:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Using parallelization and GPU/CPU computing to speed up data analysis pipelines&lt;/li&gt;
&lt;li&gt;Denoising neural data and extracting important features&lt;/li&gt;
&lt;li&gt;Dealing with small datasets&lt;/li&gt;
&lt;li&gt;Classifying behavior in images&lt;/li&gt;
&lt;/ul&gt;
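&lt;p&gt;As a minimal, illustrative sketch of the first two bullets (not the project&amp;rsquo;s actual pipeline): band-pass filtering each channel of a multi-channel recording, with the per-channel work spread across threads. The sampling rate, band edges, and synthetic data here are assumptions for the example.&lt;/p&gt;

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from scipy.signal import butter, filtfilt

def bandpass(channel, low=1.0, high=40.0, fs=250.0, order=4):
    """Zero-phase Butterworth band-pass filter for a single channel."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, channel)

def denoise(signals):
    """Filter each channel concurrently; NumPy/SciPy release the GIL in
    their inner loops, so threads parallelize this across CPU cores."""
    with ThreadPoolExecutor() as pool:
        return np.stack(list(pool.map(bandpass, signals)))

# Synthetic 8-channel recording, 4 s at 250 Hz
rng = np.random.default_rng(0)
raw = rng.standard_normal((8, 1000))
clean = denoise(raw)
```

&lt;p&gt;The same pattern extends to GPU back ends by swapping the per-channel function for one built on a GPU array library.&lt;/p&gt;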
&lt;p&gt;Applications in industry:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Healthcare: Analyzing each patient&amp;rsquo;s motor and sensory processing condition&lt;/li&gt;
&lt;li&gt;Robotics: Developing robots with motion inspired by real humans&lt;/li&gt;
&lt;li&gt;Assistive devices: Isolating the motion dimension of human behavior and building assistive devices around it&lt;/li&gt;
&lt;li&gt;Automated data labelling to train models for smart wearables&lt;/li&gt;
&lt;li&gt;Various data analysis cases where limited time series data is a constraint&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="flex justify-center "&gt;
&lt;div class="w-full" &gt;
&lt;img alt=""
srcset="https://anhduongvo.github.io/projects/multimodal/image-3_hu_42941654ebd62ca6.webp 320w, https://anhduongvo.github.io/projects/multimodal/image-3_hu_66f170a219331cb7.webp 480w, https://anhduongvo.github.io/projects/multimodal/image-3_hu_fa2e31bdd52917b5.webp 540w"
sizes="(max-width: 480px) 100vw, (max-width: 768px) 90vw, (max-width: 1024px) 80vw, 760px"
src="https://anhduongvo.github.io/projects/multimodal/image-3_hu_42941654ebd62ca6.webp"
width="540"
height="285"
loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;figure &gt;
&lt;div class="flex justify-center "&gt;
&lt;div class="w-full" &gt;
&lt;img alt=""
srcset="https://anhduongvo.github.io/projects/multimodal/image-4_hu_65c83e03b0d7eea1.webp 320w, https://anhduongvo.github.io/projects/multimodal/image-4_hu_5211e4465a1152d0.webp 322w"
sizes="(max-width: 480px) 100vw, (max-width: 768px) 90vw, (max-width: 1024px) 80vw, 760px"
src="https://anhduongvo.github.io/projects/multimodal/image-4_hu_65c83e03b0d7eea1.webp"
width="322"
height="291"
loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;</description></item><item><title>LLMs, Generative models and smart wearables</title><link>https://anhduongvo.github.io/projects/generative-models-and-smart-wearables/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://anhduongvo.github.io/projects/generative-models-and-smart-wearables/</guid><description>&lt;h2 id="predicting-eye-movement-with-eegeog-data"&gt;Predicting eye movement with EEG/EOG data&lt;/h2&gt;
&lt;p&gt;Work:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Built a multimodal system to predict eye movement and focus&lt;/li&gt;
&lt;li&gt;Supervised Master&amp;rsquo;s students working on the accompanying code&lt;/li&gt;
&lt;/ul&gt;
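&lt;p&gt;A toy sketch of one ingredient of such a system (not the project&amp;rsquo;s actual method): a velocity-threshold detector that flags saccade-like jumps in a horizontal EOG trace. The sampling rate, threshold, and synthetic signal are assumptions; a real pipeline would filter first and merge nearby crossings into discrete events.&lt;/p&gt;

```python
import numpy as np

def detect_saccades(eog, fs=250.0, threshold=30.0):
    """Return sample indices where the EOG slope exceeds a velocity
    threshold -- a crude stand-in for saccade detection."""
    velocity = np.abs(np.diff(eog)) * fs
    return np.flatnonzero(velocity > threshold)

# Synthetic horizontal EOG: steady gaze with one step-like saccade at t = 1 s
t = np.arange(0, 2, 1 / 250.0)
eog = np.zeros_like(t)
eog[250:] = 100.0  # abrupt amplitude shift mimicking a saccade
events = detect_saccades(eog)
```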
&lt;p&gt;Applications in industry:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Automotive&lt;/li&gt;
&lt;li&gt;Wearables&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="generating-images-and-text-based-on-eeg-data"&gt;Generating images and text based on EEG data&lt;/h2&gt;
&lt;p&gt;Work:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Using diffusion models to generate images and text based on neural data&lt;/li&gt;
&lt;li&gt;Supervised a Master&amp;rsquo;s student working on the accompanying code&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Applications in industry:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Consumer Behavior&lt;/li&gt;
&lt;li&gt;Healthcare: Insight into how users process their environment&lt;/li&gt;
&lt;li&gt;Wearables and app development&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="limitations-of-large-language-models"&gt;Limitations of Large Language models&lt;/h2&gt;
&lt;p&gt;Work:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Collaborating with researchers worldwide on a study of the limitations of Large Language Models across different languages&lt;/li&gt;
&lt;li&gt;Publication: Noga Mudrik, &amp;hellip;, Anh Duong Vo, et al. (2025). Lost in Translation? LLMs, Education, and Linguistic Fairness. IEEE ISEC 2025.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Applications in industry:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Applying LLMs across different languages&lt;/li&gt;
&lt;/ul&gt;</description></item><item><title>Python coding course for students with disability in Vietnam</title><link>https://anhduongvo.github.io/projects/coding-class/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://anhduongvo.github.io/projects/coding-class/</guid><description>&lt;p&gt;In 2024, I launched a Python and Data Science course in collaboration with industry partners, providing adults with disabilities in Vietnam access to higher education in technology. Designing and delivering these lectures reinforced my ability to make complex tools accessible.&lt;/p&gt;</description></item><item><title>Understanding more about the brain and developing models</title><link>https://anhduongvo.github.io/projects/brain/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://anhduongvo.github.io/projects/brain/</guid><description>&lt;p&gt;Work:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Movement prediction&lt;/li&gt;
&lt;li&gt;Transfer learning and generalization&lt;/li&gt;
&lt;li&gt;Learning on a synaptic scale&lt;/li&gt;
&lt;/ul&gt;
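&lt;p&gt;A minimal sketch of the transfer-and-generalization idea (illustrative only, not the project&amp;rsquo;s model): a classifier trained on one subject&amp;rsquo;s features is evaluated zero-shot on a second subject whose feature distribution has drifted, then refit with a small calibration set from the new subject. All data here is synthetic and the drift model is an assumption.&lt;/p&gt;

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_subject(shift):
    """Toy neural features; `shift` mimics inter-subject distribution drift."""
    X = rng.standard_normal((200, 16)) + shift
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Train on subject A, test generalization on subject B
X_a, y_a = make_subject(0.0)
X_b, y_b = make_subject(0.5)

clf = LogisticRegression().fit(X_a, y_a)
zero_shot = clf.score(X_b, y_b)

# "Transfer": refit with a small calibration set from subject B
clf_tuned = LogisticRegression().fit(
    np.vstack([X_a, X_b[:20]]), np.concatenate([y_a, y_b[:20]])
)
```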
&lt;p&gt;Applications in industry:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Wearables: Detecting user intent, such as clicking a button&lt;/li&gt;
&lt;li&gt;Automotive&lt;/li&gt;
&lt;li&gt;Assistive devices&lt;/li&gt;
&lt;li&gt;Healthcare: Using wearables to measure emotional signals from the prefrontal cortex (forehead) for therapeutic purposes&lt;/li&gt;
&lt;li&gt;Advertising: Measuring the emotional effect of advertisements&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="flex justify-center "&gt;
&lt;div class="w-full" &gt;
&lt;img alt=""
srcset="https://anhduongvo.github.io/projects/brain/image-6_hu_321fbe4f8eddd3f5.webp 320w, https://anhduongvo.github.io/projects/brain/image-6_hu_26891b0652bd7dd3.webp 480w, https://anhduongvo.github.io/projects/brain/image-6_hu_257031eda41ffe77.webp 659w"
sizes="(max-width: 480px) 100vw, (max-width: 768px) 90vw, (max-width: 1024px) 80vw, 760px"
src="https://anhduongvo.github.io/projects/brain/image-6_hu_321fbe4f8eddd3f5.webp"
width="659"
height="284"
loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;</description></item></channel></rss>