ExMicroVR User Guide


ExMicroVR was developed for expansion microscopy and is free to nonprofits. It supports multi-channel confocal microscope image stacks and has a powerful VR-based user interface.

  • Install SteamVR: If you haven’t yet, install SteamVR on your computer; it is required to run ExMicroVR. You may have to create a Steam user account, but the SteamVR tool is free. Note that SteamVR is required even if you have already installed the setup software for an Oculus headset.
  • Download App: After downloading ExMicroVR from the ImmSci.com site, double-click on the Installer file. Follow the installation directions; when complete, the ExMicroVR application will appear in your Windows Start menu.
  • Start ExMicroVR: Start ExMicroVR, put on your VR headset, and pick up the VR hand controllers. The first thing to do is re-center yourself in the room by clicking down on the right controller thumb-wheel or thumb-stick. This moves you to the center of the scene, within easy reach of the image and Excluder, and close to the Control Panels.
  • Try out the test image: If you haven’t yet, download the test multi-channel image stack from the same website where you downloaded the ExMicroVR application, and unzip it. To view this image in VR, load the image file by pointing your right-hand controller at the File Browser (a laser pointer will automatically appear) and clicking on the “Load Image” button. Browse to the file folder (directory) that holds your set of NIfTI (.nii) image stack files and click the “OK” button. Review the sample image directory to see the folder structure. Note that the file names are used to label the channels in the “Channels” panel, so it is useful to keep those names short and informative, likely referring to the fluorescent marker used in that channel. (Note that when browsing directories, the files will not appear in the display; however, when you click Load, all .nii files in the directory will be loaded, one into each of the 30 available channels.)
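Since the channel labels come straight from the .nii file names, you can preview what the “Channels” panel will show before loading. The sketch below is a stdlib Python helper (the function name and the alphabetical ordering are assumptions for illustration, not part of ExMicroVR):

```python
from pathlib import Path

def preview_channel_labels(image_dir):
    """List .nii files in a directory, one per channel (up to 30).

    The file stem (name minus .nii) is what ExMicroVR displays as the
    channel label, so short marker names like DAPI or GFP work best.
    Alphabetical ordering here is an assumption for illustration.
    """
    stacks = sorted(Path(image_dir).glob("*.nii"))[:30]
    return [p.stem for p in stacks]

# A directory containing DAPI.nii and GFP.nii would yield
# the channel labels ["DAPI", "GFP"].
```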
  • Loading images: Depending on the number of channels and the size of each channel, loading may take several seconds (up to 30). During this time the display may flicker or jump as all the processing power goes to loading the large amount of image data.
  • Manipulating the Image: You can reach out with the left or right controller and press and hold the trigger button to grab the image stack. This lets you translate and rotate the image so that you can immediately begin to understand the full 3D structure of your image stack. Release the trigger button to release the image.
  • Controlling channel visibility and focus: At the end of the file loading, you will see all the channels listed in the “Channels” panel. Initially the first channel’s “Visibility” is turned on, and it is also the current “Focus”. Click on one or more of the “Visible” toggle buttons to make channels visible or hidden. Click on a channel’s “Focus” toggle to select which channel you want to adjust using the slider controls on the “Settings” display panel. Note that there is a small “All” toggle at the top of the column of visibility toggles that provides a shortcut for turning all channels off or on with one click. If you want to quickly turn the focus channel’s visibility off and on, press down on the left controller thumb-wheel or thumb-stick to toggle the visibility.
  • Managing the processing load: Computing volumetric images is computationally expensive, especially if the images are large and there are multiple channels. There are a few tools designed to help you manage the GPU processing load. It is possible to load more channels of data than can be displayed with a refresh rate that is comfortable to your eyes. You manage the compute load by making visible only the channels you are currently interested in, and also by using the “Excluder” feature (see below) in “Invert” mode to display only a small section of the whole image. It also helps to keep the “Render Quality” (see below) as low as possible. These all reduce the amount of rendering that the GPU has to perform. Note that the video refresh rate (frame rate) is displayed at the top right of the “Channels” display panel. Higher frames per second (fps) are more comfortable for your eyes; 90 fps is desirable for the most comfort. Use the techniques described to keep the fps in a range comfortable to you.
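To see why visibility and the Excluder matter, some back-of-envelope voxel arithmetic helps. The sketch below is only a rough sizing aid; the bytes-per-voxel figure is an assumption (16-bit intensities), and actual GPU memory use depends on texture formats ExMicroVR does not document here:

```python
def stack_voxels(width, height, slices, channels):
    """Total voxels the renderer must consider across all visible channels."""
    return width * height * slices * channels

def approx_megabytes(width, height, slices, channels, bytes_per_voxel=2):
    """Rough memory footprint; bytes_per_voxel=2 assumes 16-bit intensities."""
    return stack_voxels(width, height, slices, channels) * bytes_per_voxel / 1e6

# A 1024 x 1024 stack with 100 slices and 4 visible channels is about
# 419 million voxels, on the order of 840 MB at 16 bits per voxel --
# which is why hiding channels or shrinking the rendered region helps fps.
```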
  • Adjusting image channel settings: At this point you can use the slider interface to adjust various image visualization parameters. First choose which channel of a multi-channel stack you want to adjust by clicking on its “Focus” toggle. Turn other channels off or on as needed. The slider labels provide a guide to the range of adjustments possible. Point the right-hand laser pointer at the small disks on the horizontal sliders, pull and hold the controller trigger button to grab the slider disk, move the laser pointer left and right to make adjustments, then release the trigger button. (Hint: it is often useful to start by increasing the “Min Threshold” slider to cut out any background signal in the image, then use the other sliders to enhance the image.)
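Conceptually, raising “Min Threshold” hides every voxel whose intensity falls below the cutoff, which is why it is so effective at removing diffuse background. A minimal numpy sketch of that effect (the function name is hypothetical; this is not ExMicroVR’s actual shader logic):

```python
import numpy as np

def apply_min_threshold(volume, min_threshold):
    """Zero out voxels below the cutoff, mimicking the slider's effect on background."""
    return np.where(volume < min_threshold, 0, volume)

vol = np.array([5, 40, 120, 8])
cleaned = apply_min_threshold(vol, 20)  # background values 5 and 8 become 0
```

Raising the cutoff too far starts deleting real signal (here, anything under the chosen 20), which is why the hint suggests adjusting it first and then refining with the other sliders.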
  • Adjusting all channels at once: There are three sliders that affect all channels together so that they stay properly registered to each other. These are: 1) “Zoom Depth”, which changes only the visual depth of the image stack, without changing the height or width, 2) “Zoom”, which changes all three axes together so that you can enlarge or shrink the overall image to fit your needs, and 3) “Render Quality”, which adjusts the amount of interpolation performed to fill the voids between the layers of the image stack. Higher “Render Quality” increases the compute load on the GPU, so keep the quality at the minimum you need for your work.
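To picture what the interpolation behind “Render Quality” is doing, the sketch below generates extra slices between two adjacent stack layers by linear blending. This is a conceptual illustration only (ExMicroVR’s actual interpolation method is not documented here); its point is that every added in-between slice is more data for the GPU to render:

```python
import numpy as np

def interpolate_slices(slice_a, slice_b, n_between):
    """Generate n_between evenly spaced slices blended between two adjacent layers."""
    return [slice_a + (slice_b - slice_a) * (i / (n_between + 1))
            for i in range(1, n_between + 1)]

a = np.zeros((2, 2))
b = np.full((2, 2), 10.0)
mid = interpolate_slices(a, b, 1)[0]  # the halfway slice, all values 5.0
```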
  • Saving and Loading slider settings: At any time, you can click the “Save Settings” button to save the position of all the sliders for all the channels in a text file stored in the same directory as the image files you are working with. Click “Load Settings” to read and reset all the slider positions. Note that a new save overwrites the previous save.
  • Use of the Excluder: The purpose of the Excluder sphere is to either: 1) allow you to move into and look around inside a large 3D image without visual clutter from image elements that are too close to your eyes, or 2) invert the effect of the Excluder so that only the parts of the image inside the Excluder are rendered. In either case, you grab the Excluder sphere by reaching out, touching it, and then pulling and holding the hand controller trigger button. Move the Excluder into the image stack and see the effect that it has. Release the trigger to let go. Sometimes it is useful to keep the Excluder attached to your (virtual) head, so that as you move and look around in your image, the Excluder keeps the area in front of your face clear of visual clutter. Use the “Head” toggle to enable/disable this mode. The “Home” toggle returns the Excluder to its home position and disconnects it from the head if necessary, getting it out of the way until needed again. There is an “Excluder Size” slider that lets you adjust the radius of the sphere to suit your visualization (or computational) needs. The “Invert” toggle switches the Excluder between showing only image elements (voxels) outside the sphere and showing only those inside it.
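The Excluder’s effect can be pictured as a spherical mask over the voxel grid: normally voxels inside the sphere are hidden, and “Invert” flips which side is shown. A numpy sketch of that idea (purely conceptual; the function and its parameters are hypothetical, not ExMicroVR internals):

```python
import numpy as np

def excluder_mask(shape, center, radius, invert=False):
    """True where a voxel is rendered: outside the sphere normally, inside when inverted."""
    zi, yi, xi = np.indices(shape)
    dist2 = ((zi - center[0]) ** 2 +
             (yi - center[1]) ** 2 +
             (xi - center[2]) ** 2)
    inside = dist2 <= radius ** 2
    return inside if invert else ~inside

mask = excluder_mask((20, 20, 20), center=(10, 10, 10), radius=5)
# rendered = volume * mask would hide everything inside the sphere;
# invert=True keeps only the small region inside, which also cuts GPU load.
```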
  • Collaboration capability: VR-based collaboration allows you and your collaborators to share the ExMicroVR workspace (the control panels and image display), to see avatars of each other that let you see head and hand movements and gestures, and to use bidirectional live audio so you can point at and discuss the subject of your images. To use this feature, there are two steps you have to complete in advance:
    • You must share with your collaborators the multichannel image directory that you will be viewing together. These must be identical unmodified copies; otherwise, certain features will fail to synchronize properly. Alternatively, you can all point to a shared cloud-based file sharing system like Google Drive, Box, DropBox, or Azure, but be sure these systems are syncing the image directories to your local machines so that you don’t face file download delays when your collaboration begins.
    • You must choose and share with your collaborators a “VR Room” name, a text string like “DEMO”, “EXP1”, “CYTO”, “LATEST”. (All caps required.)
    • With these two steps completed, start up ExMicroVR and in the Channels Panel display, click on “Begin Collab”. A keyboard will appear below, where you can point your controller and pull the trigger button to select the Room Name string, as in “DEMO”. Then click the “Enter Room” key. There will be a moment’s hesitation, and then you will be connected to the shared room and to anyone else who has entered the same Room Name. At this point a thin solid pointer stick will appear on the end of each controller, so that you can see where each of you is pointing during discussion. You should also be able to hear each other; make sure the proper speaker and microphone (your VR headset for both) are selected in your Windows OS. You can end the collaboration by clicking the same button, now labeled “End Collab”. If you want, you can immediately start a new collaboration in a different room, but your collaborators, again, must have identical copies of the image stack directory, including all the settings text files, etc.
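Because collaboration requires byte-identical image directories, it can save a session to verify the copies match before anyone enters the room. The stdlib sketch below computes one checksum over a directory’s relative file names and contents; each collaborator runs it on their copy and compares the hex strings (this helper is hypothetical, not an ExMicroVR feature):

```python
import hashlib
from pathlib import Path

def directory_digest(image_dir):
    """One SHA-256 digest over relative file names and contents, for comparing copies."""
    root = Path(image_dir)
    h = hashlib.sha256()
    for path in sorted(root.rglob("*")):  # sorted so every machine hashes in the same order
        if path.is_file():
            h.update(path.relative_to(root).as_posix().encode())
            h.update(path.read_bytes())
    return h.hexdigest()

# If two collaborators' digests differ, some file (image stack or settings
# text file) differs, and features may fail to synchronize.
```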
  • Shutdown: To end the VR session, either point and click on the Exit button at the bottom of the Channels panel and confirm with Yes or No, or take off your VR headset and press the ESC key on your keyboard.