<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Diar_Karim | Virtual Reality Lab</title><link>https://virtualrealitylab.netlify.app/author/diar_karim/</link><atom:link href="https://virtualrealitylab.netlify.app/author/diar_karim/index.xml" rel="self" type="application/rss+xml"/><description>Diar_Karim</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Thu, 10 Apr 2025 00:00:00 +0000</lastBuildDate><image><url>https://virtualrealitylab.netlify.app/media/logo_hu_fd457670afa98bce.png</url><title>Diar_Karim</title><link>https://virtualrealitylab.netlify.app/author/diar_karim/</link></image><item><title>MR for Accessibility: Augmenting Senses for Inclusive Virtual Worlds</title><link>https://virtualrealitylab.netlify.app/post/mr_accessibility/</link><pubDate>Thu, 10 Apr 2025 00:00:00 +0000</pubDate><guid>https://virtualrealitylab.netlify.app/post/mr_accessibility/</guid><description>&lt;p&gt;Mixed Reality (MR) can adapt and extend human senses, letting people operate beyond real-world constraints and helping to level the playing field across physical, social, and environmental limitations. This post introduces the theme, outlines exemplar studies, and points to resources for designing inclusive MR systems.&lt;/p&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img alt="Figure 1: Overview of accessibility themes in MR." srcset="
/post/mr_accessibility/figure_1_hu_472e31e3d40a83cd.webp 400w,
/post/mr_accessibility/figure_1_hu_e0d5863a240f2b1b.webp 760w,
/post/mr_accessibility/figure_1_hu_33a0156b37240411.webp 1200w"
src="https://virtualrealitylab.netlify.app/post/mr_accessibility/figure_1_hu_472e31e3d40a83cd.webp"
width="760"
height="242"
loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;h2 id="why-accessibility-in-mr-matters"&gt;Why accessibility in MR matters&lt;/h2&gt;
&lt;p&gt;By decoupling experience from the strict laws of the physical world, MR enables users to achieve tasks that would otherwise be difficult or impossible, and supports participation by people with diverse abilities. The goal is not only access, but equitable agency, confidence, and performance in complex environments.&lt;/p&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img alt="Figure 2: Relationship between sensory adaptation and accessibility." srcset="
/post/mr_accessibility/figure_2_hu_3c9994451382da53.webp 400w,
/post/mr_accessibility/figure_2_hu_8ecd84ed46e4cf4b.webp 760w,
/post/mr_accessibility/figure_2_hu_468e14c96903bbab.webp 1200w"
src="https://virtualrealitylab.netlify.app/post/mr_accessibility/figure_2_hu_3c9994451382da53.webp"
width="592"
height="371"
loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;h2 id="selected-works-and-contributions"&gt;Selected works and contributions&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Navigation and agency for blind travelers.&lt;/strong&gt; &lt;em&gt;Glide&lt;/em&gt; explores mode switching between device-directed and user-directed control, studying its impact on agency, trust, and performance with blind and low-vision participants (HRI ’23). This line of work contributed to the formation of &lt;strong&gt;Glidance.io&lt;/strong&gt;; a minimal mode-switching sketch follows the figure below.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img alt="Figure 3: Glide study—navigation and agency for blind travelers." srcset="
/post/mr_accessibility/figure_3_hu_79454cc33322f8f1.webp 400w,
/post/mr_accessibility/figure_3_hu_ede15d7fba0afce0.webp 760w,
/post/mr_accessibility/figure_3_hu_247b3939cf4e2f05.webp 1200w"
src="https://virtualrealitylab.netlify.app/post/mr_accessibility/figure_3_hu_79454cc33322f8f1.webp"
width="760"
height="289"
loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
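&lt;p&gt;As a companion to the Glide entry above, here is a minimal, hypothetical sketch of switching control authority between the device and the traveler. The class and method names are illustrative assumptions, not the study’s implementation.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;from enum import Enum, auto

class ControlMode(Enum):
    DEVICE_DIRECTED = auto()  # the device chooses the path, the traveler follows
    USER_DIRECTED = auto()    # the traveler steers, the device only flags hazards

class NavigationAid:
    """Illustrative mode-switching shell, not the Glide system itself."""
    def __init__(self):
        self.mode = ControlMode.DEVICE_DIRECTED

    def toggle_mode(self):
        # Hand control back and forth between device and traveler.
        self.mode = (ControlMode.USER_DIRECTED
                     if self.mode is ControlMode.DEVICE_DIRECTED
                     else ControlMode.DEVICE_DIRECTED)

    def heading_for_this_cycle(self, user_heading, planned_heading):
        # Device-directed: follow the planned route; user-directed: respect the traveler.
        if self.mode is ControlMode.DEVICE_DIRECTED:
            return planned_heading
        return user_heading
&lt;/code&gt;&lt;/pre&gt;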
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Expressive avatar motion from sparse input.&lt;/strong&gt; &lt;em&gt;CoolMoves&lt;/em&gt; synthesizes accentuated full-body motion in real time from commodity VR signals using database matching and probabilistic smoothing (IMWUT ’21); a minimal sketch follows the figure below.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img alt="Figure 4: Example of expressive avatar motion synthesis." srcset="
/post/mr_accessibility/figure_4_hu_d600102135218b50.webp 400w,
/post/mr_accessibility/figure_4_hu_fff053917c9a22cd.webp 760w,
/post/mr_accessibility/figure_4_hu_bf5ab8f41d7f123a.webp 1200w"
src="https://virtualrealitylab.netlify.app/post/mr_accessibility/figure_4_hu_d600102135218b50.webp"
width="760"
height="172"
loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
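&lt;p&gt;A minimal sketch of the retrieval-and-smoothing idea behind the CoolMoves entry above: match sparse tracker features against a pose database, then blend the retrieved full-body pose over time. Plain nearest-neighbour lookup and exponential smoothing stand in for the paper’s actual matching and probabilistic filtering; all names are assumptions.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;import numpy as np

class SparseToFullBody:
    def __init__(self, sparse_db, full_db, alpha=0.3):
        self.sparse_db = np.asarray(sparse_db)  # (N, d_sparse) head/hand features
        self.full_db = np.asarray(full_db)      # (N, d_full) matching full-body poses
        self.alpha = alpha                      # smoothing factor per frame
        self.prev = None

    def estimate(self, sparse_input):
        # Retrieve the closest database pose for the current sparse input.
        dists = np.linalg.norm(self.sparse_db - np.asarray(sparse_input), axis=1)
        candidate = self.full_db[int(np.argmin(dists))]
        if self.prev is None:
            self.prev = candidate
        # Blend with the previous frame so the output does not jitter.
        self.prev = self.alpha * candidate + (1.0 - self.alpha) * self.prev
        return self.prev
&lt;/code&gt;&lt;/pre&gt;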
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Sound accessibility taxonomy for VR.&lt;/strong&gt; A two-dimensional framework (source × intent) categorizes sounds across dozens of apps, informing visual and haptic sound substitutes for D/deaf and hard-of-hearing users (DIS 2021, Best Paper); an illustrative classification follows the figure below.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img alt="Figure 5: Taxonomy of sound accessibility in VR." srcset="
/post/mr_accessibility/figure_5_hu_ffde728c10544458.webp 400w,
/post/mr_accessibility/figure_5_hu_d2e6d06d292a7e8f.webp 760w,
/post/mr_accessibility/figure_5_hu_1e5e6e10562b8544.webp 1200w"
src="https://virtualrealitylab.netlify.app/post/mr_accessibility/figure_5_hu_ffde728c10544458.webp"
width="760"
height="163"
loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
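&lt;p&gt;To make the two-dimensional framing concrete, here is an illustrative classification of a sound by source and intent, with a crude rule for choosing a non-auditory substitute. The category names and the mapping are assumptions for illustration, not the paper’s exact taxonomy.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;from dataclasses import dataclass

SOURCES = ("speech", "object", "ambient", "interface")  # where the sound comes from
INTENTS = ("informative", "feedback", "decorative")     # why the sound is there

@dataclass
class AppSound:
    name: str
    source: str
    intent: str

def substitute(sound: AppSound) -&gt; str:
    """Pick a visual or haptic stand-in based on where the sound falls in the grid."""
    if sound.intent == "informative":
        return "caption"        # e.g. a label or subtitle anchored near the source
    if sound.intent == "feedback":
        return "haptic pulse"   # confirmations map well onto vibration
    return "none"               # purely decorative sounds may need no substitute

print(substitute(AppSound("door knock", "object", "informative")))  # caption
&lt;/code&gt;&lt;/pre&gt;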
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Unimanual→bimanual interaction mapping.&lt;/strong&gt; &lt;em&gt;Two-In-One&lt;/em&gt; defines a design space that remaps limited unimanual input to bimanual interactions in VR (TACCESS); a toy remapping sketch follows the figure below.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img alt="Figure 6: Mapping of unimanual to bimanual interaction." srcset="
/post/mr_accessibility/figure_6_hu_783b6ef6295ffe4d.webp 400w,
/post/mr_accessibility/figure_6_hu_a78e505927c1ac59.webp 760w,
/post/mr_accessibility/figure_6_hu_f1319147a32dffef.webp 1200w"
src="https://virtualrealitylab.netlify.app/post/mr_accessibility/figure_6_hu_783b6ef6295ffe4d.webp"
width="760"
height="339"
loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
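&lt;p&gt;As a toy illustration of remapping one-handed input onto an interaction that normally needs two hands, the sketch below drives two-handed-style object scaling from a single controller and a fixed virtual anchor. The mapping and names are hypothetical and do not reproduce Two-In-One’s design space.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;def one_hand_scale(controller_pos, anchor_pos, prev_distance):
    """Map the distance between one controller and a virtual anchor to the
    scale change that two real hands moving apart would normally produce."""
    distance = sum((c - a) ** 2 for c, a in zip(controller_pos, anchor_pos)) ** 0.5
    scale_factor = distance / prev_distance if prev_distance else 1.0
    return scale_factor, distance

# Called once per frame; the returned factor multiplies the object's current scale.
factor, d = one_hand_scale((0.2, 1.1, 0.4), (0.0, 1.1, 0.4), prev_distance=0.15)
&lt;/code&gt;&lt;/pre&gt;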
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Toward sound accessibility in VR.&lt;/strong&gt; Offers empirical guidance on accessible auditory representations and their alternatives (ICMI 2021).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img alt="Figure 7: Sound design for accessibility in virtual environments." srcset="
/post/mr_accessibility/figure_7_hu_a22529d1f1e49f14.webp 400w,
/post/mr_accessibility/figure_7_hu_c07b33250f88fbbb.webp 760w,
/post/mr_accessibility/figure_7_hu_e0fc841b3d8dace0.webp 1200w"
src="https://virtualrealitylab.netlify.app/post/mr_accessibility/figure_7_hu_a22529d1f1e49f14.webp"
width="760"
height="351"
loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Movement projection and stylization.&lt;/strong&gt; &lt;em&gt;SnapMove&lt;/em&gt; examines projected movement transformations to support performance and expression (AIVR 2020).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img alt="Figure 8: Stylization and projection of movement in MR." srcset="
/post/mr_accessibility/figure_8_hu_f1609e1449ecb037.webp 400w,
/post/mr_accessibility/figure_8_hu_a66329b1c22d22bc.webp 760w,
/post/mr_accessibility/figure_8_hu_2cce4b2800ead5c9.webp 1200w"
src="https://virtualrealitylab.netlify.app/post/mr_accessibility/figure_8_hu_f1609e1449ecb037.webp"
width="760"
height="251"
loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Non-visual navigation in VR.&lt;/strong&gt; A haptic+auditory “white cane” enables navigation of complex virtual spaces without vision (CHI 2020, Honorable Mention); a feedback sketch follows the figure below.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img alt="Figure 9: Haptic and auditory navigation for non-visual users." srcset="
/post/mr_accessibility/figure_9_hu_5a49a4b8cf461447.webp 400w,
/post/mr_accessibility/figure_9_hu_4e4dd08d74b374c2.webp 760w,
/post/mr_accessibility/figure_9_hu_d871e62fb4bf123c.webp 1200w"
src="https://virtualrealitylab.netlify.app/post/mr_accessibility/figure_9_hu_5a49a4b8cf461447.webp"
width="760"
height="193"
loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
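&lt;p&gt;A minimal sketch of the feedback idea behind the virtual “white cane” entry above: probe the scene ahead of the user and turn obstacle distance into vibration strength and tone pitch. The probe value and parameter ranges are assumptions for illustration, not the published system.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;def cane_feedback(distance_to_obstacle, max_range=2.0):
    """Nearer obstacles produce stronger vibration and a higher-pitched tone."""
    d = min(max(distance_to_obstacle, 0.0), max_range)
    proximity = 1.0 - d / max_range          # 0 = far away, 1 = touching
    haptic_amplitude = proximity             # send to controller vibration, 0..1
    audio_pitch_hz = 200 + 600 * proximity   # simple pitch ramp from 200 to 800 Hz
    return haptic_amplitude, audio_pitch_hz

print(cane_feedback(0.5))  # obstacle half a metre ahead of the user
&lt;/code&gt;&lt;/pre&gt;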
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Low-vision tools for VR.&lt;/strong&gt; &lt;em&gt;SeeingVR&lt;/em&gt; bundles techniques to improve readability, contrast, and guidance for low-vision users (CHI 2019).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
&lt;figure &gt;
&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="w-100" &gt;&lt;img alt="Figure 10: Tools for low-vision accessibility in VR." srcset="
/post/mr_accessibility/figure_10_hu_fe71a961008453e7.webp 400w,
/post/mr_accessibility/figure_10_hu_bb8ad6a3901ce72d.webp 760w,
/post/mr_accessibility/figure_10_hu_953a9cc2675e5880.webp 1200w"
src="https://virtualrealitylab.netlify.app/post/mr_accessibility/figure_10_hu_fe71a961008453e7.webp"
width="760"
height="252"
loading="lazy" data-zoomable /&gt;&lt;/div&gt;
&lt;/div&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;h2 id="outlook"&gt;Outlook&lt;/h2&gt;
&lt;p&gt;Accessibility in MR requires coordinated advances in sensing, semantic feedback (audio/visual/haptic), and interaction design. The literature above illustrates how control sharing, motion synthesis, alternative sensory channels, and adaptive tooling can increase agency and inclusion across user groups.
&lt;/p&gt;</description></item><item><title>Motion Dynamics</title><link>https://virtualrealitylab.netlify.app/projects/motiondynamics/</link><pubDate>Tue, 11 Jun 2024 00:00:00 +0000</pubDate><guid>https://virtualrealitylab.netlify.app/projects/motiondynamics/</guid><description>&lt;p&gt;We are investigating methods to track squash players, process the tracking data with AI, and visualise it in real time, e.g. for live broadcasting during matches or to provide performance feedback to the player through a phone app.&lt;/p&gt;</description></item><item><title>ARME - Augmented Reality Music Ensemble</title><link>https://virtualrealitylab.netlify.app/projects/arme/</link><pubDate>Sun, 11 Apr 2021 00:00:00 +0000</pubDate><guid>https://virtualrealitylab.netlify.app/projects/arme/</guid><description>&lt;p&gt;The project aims to understand how musicians synchronise with each other, to build a computational model of the musician’s behaviour, and to create an augmented reality system that allows music rehearsal with virtual avatars.&lt;/p&gt;</description></item><item><title>Mission to Mars VR experiment</title><link>https://virtualrealitylab.netlify.app/projects/missiontomars/</link><pubDate>Sun, 11 Apr 2021 00:00:00 +0000</pubDate><guid>https://virtualrealitylab.netlify.app/projects/missiontomars/</guid><description>&lt;p&gt;A VR experience in which participants perform Mars exploration tasks. The app is intended to explore how different personalities navigate and respond to challenges in extreme environments.&lt;/p&gt;</description></item><item><title>Prendo, grasping in VR experiment</title><link>https://virtualrealitylab.netlify.app/projects/prendo/</link><pubDate>Sun, 11 Apr 2021 00:00:00 +0000</pubDate><guid>https://virtualrealitylab.netlify.app/projects/prendo/</guid><description/></item><item><title>PrendoSim Robot grasp generator</title><link>https://virtualrealitylab.netlify.app/projects/prendosim/</link><pubDate>Sun, 11 Apr 2021 00:00:00 +0000</pubDate><guid>https://virtualrealitylab.netlify.app/projects/prendosim/</guid><description>&lt;p&gt;PrendoSim is a robot gripper simulator that lets robotics researchers generate and test viable grasps using a proxy-hand method and a novel grasp stability metric based on the weight a grasp can withstand. The simulator takes advantage of Unity’s NVIDIA PhysX 4.1 integration to create physically realistic grasp simulations, and outputs joint pose data for the gripper digits (JSON format), the grasped object’s position, and images of the grasp from a specified point of view (PNG format).&lt;/p&gt;</description></item><item><title>TacTile. Multisensory texture perception iPhone experiment</title><link>https://virtualrealitylab.netlify.app/projects/tactile/</link><pubDate>Sun, 11 Apr 2021 00:00:00 +0000</pubDate><guid>https://virtualrealitylab.netlify.app/projects/tactile/</guid><description/></item><item><title>TactileMirror. Code for a wearable sensory substitution device</title><link>https://virtualrealitylab.netlify.app/projects/tactilemirror/</link><pubDate>Sun, 11 Apr 2021 00:00:00 +0000</pubDate><guid>https://virtualrealitylab.netlify.app/projects/tactilemirror/</guid><description>&lt;p&gt;We mapped vibrations experienced by the fingertip to the wrist via an optimised function that generates frequencies in the range of 100 Hz to 800 Hz, which best activate the rapidly adapting mechanoreceptors in the wrist. The signal is sent to two actuators positioned on the wrist.
The actuators activate in a temporal order according to the direction of finger movement on the surface.&lt;/p&gt;</description></item></channel></rss>