ConfocalVR User Guide

 

ConfocalVR – accelerating the interpretation of complex 3D multi-channel microscope image-stacks

Challenge – How do you more quickly understand complex multi-channel 3D images, like those collected from Confocal Microscopy and other 3D imaging technologies?

Solution – VR-based immersive interaction with images

Benefits – Accelerates research by enabling faster understanding of complex geometric relationships between biological structures by presenting the 3D image to the full power of your visual cortex

 

Feature Summary

  • Load a wide range of 3D image-stacks, preprocessed by tools like ImageJ into NIfTI format
  • Grab image and manipulate for fast identification of regions of interest
  • Quickly connect with other researchers online to see, discuss, and manipulate images
  • Load up to 30 channels of 3D image stacks
  • Adjust image settings (brightness, threshold, etc.) for each channel individually
  • Measure physical distance between points in the image
  • Compute and display voxel intensity histograms
  • Compute Blend functions between pairs of channels
  • Create, Modify, and Save new channels
  • Use “Excluder” objects to hide, or focus on, key image regions
  • Quickly locate, mark, count, and save key image artifacts (e.g. vacuoles)
  • Display multi-channel image-stacks as short 30 frame movies
  • Use the image Slicer to select the best image view to use for 2D publications
  • Compatible with GPU class VR workstations (not cell phone based VR)
  • Compatible with commercial SteamVR-compatible systems such as Oculus, WMR, and Vive.

    INITIAL SETUP

    • Install SteamVR: If you haven’t yet, make sure you install SteamVR on your computer. It is required to run ConfocalVR or ExMicroVR. You may have to create a Steam user account, but the SteamVR tool is free. Note that this is required even if you have installed the setup for an Oculus headset.
    • Download App and Install: After you purchase your license to your copy of ConfocalVR3.2, you will receive a link via email. Simply download the file and double click to initiate the installer. Follow the prompts to install at your preferred location. The ConfocalVR application will then be visible in the Start menu at the bottom left of your screen.
    • Download Test Images: Download a multichannel test image set from here, unzip it, and save it someplace you can access later when you want to view it. It will unpack into a directory with multiple image files.
    • Start ConfocalVR: Click the Windows Start button and select ConfocalVR. Put on your VR headset and pick up the VR hand controllers. Re-center  yourself in the virtual room by clicking down on the right controller thumb-wheel or thumb-stick (depending on the VR system you are using). Clicking re-center moves you to the center of the scene, in easy reach of the image and excluders, and close to the Control Panels.
    • Try out the test image: To view the test image in VR, load the image directory by pointing your right-hand controller at the File Browser (a laser pointer will automatically appear) and click on the “Load Image” button.  Browse to the file folder (directory) that has your set of NIfTI (.nii) image stack files and click the “OK” button. Review the sample image directory to see the folder structure. Note that the file names are used to label the channels in the “Channels” panel, so it is useful to keep those names short and informative, likely referring to the fluorescent marker used in that channel. (Note that when browsing directories, all files in the directory will be visible in the display. However, when you click Load, only the NIfTI file types (.nii) in the directory will be loaded, and only up to the first 30 files. Each file will load into a separate channel, as can be seen on the Channel Panel in the VR display.)
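As a preprocessing sketch, a 4D multi-channel stack can be split into the one-file-per-channel NIfTI layout the loader expects. This is an illustration, not ConfocalVR code: `split_channels` is a hypothetical helper, saving via nibabel is shown commented out, and the `marker` file names and identity affine are assumptions.

```python
import numpy as np

def split_channels(stack):
    """Split a 4D (x, y, z, channel) stack into a list of 3D volumes,
    one per channel, matching the one-.nii-file-per-channel layout."""
    if stack.ndim != 4:
        raise ValueError("expected a 4D (x, y, z, channel) array")
    return [np.ascontiguousarray(stack[..., c]) for c in range(stack.shape[-1])]

# A tiny synthetic 2-channel stack stands in for real microscope data.
stack = np.random.rand(64, 64, 16, 2).astype(np.float32)
channels = split_channels(stack)

# Saving each channel (requires nibabel; the short file names become
# the channel labels shown in the "Channels" panel):
# import nibabel as nib
# for i, vol in enumerate(channels):
#     nib.save(nib.Nifti1Image(vol, affine=np.eye(4)), f"marker{i}.nii")
```

Keeping the file names short here pays off later, since they appear verbatim in the VR Channels panel.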

    BASIC INTERACTIONS

    • Loading images: Depending on the number of channels, and the size of each channel, it will take several seconds (up to 30) to load. During this time the display will show a “Unity Waiting…” message, as all the processing power goes to loading in the large amount of image data. You will hear a series of beeps, one for each channel being processed.
    • Manipulating the Image: You can reach out with the left or right controller and press and hold the trigger button to grab the image stack.  This lets you translate and rotate the image so that you can immediately begin to understand the full 3D structure of your image-stack. Release the trigger button to release the image.
    • Controlling channel visibility and focus: At the end of the file loading, you will see all the channels listed in the “Channels” panel. Initially, only the first channel’s “Visibility” is turned on, and it is also the current “Focus”. You click on one or more of the “Visible” toggle buttons to make channels visible or hidden. You click on the “Focus” channel to select which channel you want to adjust using the slider controls on the “Settings” display panel. Note that there is a small “All” toggle at the top of the column of visibility toggles that provides a shortcut for turning all channels off or on with one click. If you want to quickly turn the focus channel visibility off and on, press down on the left controller thumb-wheel or thumb-stick to toggle the visibility.
    • Managing the processing load: Computing volumetric images is computationally expensive, especially if the images are large and there are multiple channels. There are a few tools designed to help you manage the GPU processing load. It is possible to load more channels of data than can be displayed at a refresh rate that is comfortable to your eyes. You manage the compute load by making visible only the channels you are currently interested in, and also by using the “Excluder” feature (see below) in the “Invert” mode to display only a small section of the whole image. It is also good to keep the “Render Quality” (see below) as low as possible. These all reduce the amount of rendering that the GPU has to perform. Note that the video refresh rate (frame-rate) is displayed at the top right of the “Channels” display panel.  Higher frames-per-second (fps) are more comfortable to your eyes; 90 fps is desirable for the most comfort. Use the techniques described to keep the fps in a range comfortable to you.
    • Adjusting image channel settings: At this point you can use the slider interface to adjust various image visualization parameters. First choose which channel of a multi-channel stack you want to adjust by clicking on the “Focus” toggle. Turn other channels off or on as needed. The slider labels provide a guide to the range of adjustments possible. Point the right-hand laser pointer at the small disks on the horizontal sliders, pull and hold the controller trigger button to grab the slider disk, move the laser pointer left and right to make adjustments, then release the trigger button. (Hint – it is often useful to start by increasing the “Min Threshold” slider to cut out any background signal in the image. Then use other sliders to enhance the image.)
    • Adjusting all channels at once: There are three sliders that affect all channels together so that they stay properly registered to each other. These are: 1) “Zoom Depth”, which changes only the visual depth of the image stack, without changing the height or width, 2) “Zoom”, which changes all 3 axes together so that you can enlarge or shrink the overall image to fit your needs, and 3) “Render Quality”, which adjusts the amount of interpolation that is performed to fill the void between the layers of the image stack. Higher “Render Quality” increases the compute load on the GPU, so keep the quality at the minimum you need for your work.
    • Saving and Loading slider settings: At any time, you can click the “Save Settings” button to save the position of all the sliders for all the channels in a text file that is stored in the same directory as the image files you are working with.  Click “Load Settings” to read and reset all the slider positions.  Note, a new save overwrites the previous save.
    • Use of the Excluder: The purpose of the Excluder sphere is to either: 1) allow you to move into and look around inside a large 3D image without having visual clutter from image elements that are too close to your eyes, or 2) invert the effect of the Excluder so that the only part of the image that is rendered is the part inside the Excluder.  In either case, you grab the Excluder sphere by reaching out, touching it, and then pulling and holding the hand controller trigger button.  Move the Excluder into the image stack and see the effect that it has. Release the trigger to let go.  Sometimes it is useful to keep the Excluder attached to your (virtual) head, so that as you move and look around in your image, the Excluder keeps the area in front of your face clear of visual clutter. Use the “Head” toggle to enable/disable this mode. The “Home” toggle returns the Excluder to its home position and disconnects it from the head if necessary, getting it out of the way until needed again. There is an “Excluder Size” slider that lets you adjust the radius of the sphere to suit your visualization (or computational) needs.  The “Invert” toggle switches the Excluder between showing only voxels outside it and showing only voxels inside it.
    • Collaboration capability: VR-based collaboration allows you and your collaborators to share the ConfocalVR workspace (the control panels and image display), to see avatars of each other that show head and hand movements and gestures, and to talk over bi-directional live audio so you can point at and discuss the subject of your images.  To use this feature there are two steps you have to set up in advance:
      • You must share with your collaborators the multichannel image directory that you will be viewing together. These must be identical unmodified copies; otherwise, certain features will fail to synchronize properly. Alternatively, you can all point to a shared cloud-based file sharing system like Google Drive, Box, DropBox, or Azure, but be sure these systems are syncing the image directories to your local machines so that you don’t face file download delays when your collaboration begins.
      • You must choose and share with your collaborators a “VR Room” name, a text string like “DEMO”, “EXP1”, “CYTO”, “LATEST”. (All caps required.)
      • With these two steps completed, you start up ConfocalVR and in the “Channels” display panel, click on “Begin Collab”.  A keyboard will appear below, where you can point your controller and pull the trigger button to select the Room Name string, such as “DEMO”, then click the “Enter Room” key.  There will be a moment’s hesitation and then you will be connected to the shared room, and to anyone else who has entered the same Room Name. At this point a thin solid pointer stick will appear on the end of each controller, so that you can see where each person is pointing during discussion.  You should be able to hear each other as well. Make sure the proper speaker and microphone (your VR headset for both) are selected in your Windows OS.  You can end the collaboration by clicking the same button, now labeled “End Collab”. If you want, you can immediately start a new collaboration in a different room, but your collaborators, again, must have identical copies of the image stack directory, including all the settings text files, etc.
    • Shutdown:  To end the VR session, either point and click on the Exit button at the bottom of the Channels panel and confirm Yes or No, or take off your VR headset and press the ESC key on your keyboard.

     

    ADVANCED INTERACTIONS

    Channel Panel

    •  Add/Delete/Label Channel – These are typically used with the Blending operations. Add a new channel to create a new location to store the results of a Blend operation.  Delete it, or any other channel, if no longer needed.  Delete takes away the display but does not delete the file from your file-system. The new channel when created is loaded with the same image as the current Focus channel.  Once you have created and computed a Blend into the new channel, use Label Channel to set the channel name, which will then be used also as the file name if you choose to save it. Save Channel saves the current Focus channel, in most cases the new blend results channel.
    • Blending – to Blend, you select the two input image channels, one channel in column A, and one in B, and then select a channel under Results.  Select the Blend Function you want to compute and then click Compute Blend.  This is a compute-intensive task, so it can take from a few, to many, seconds depending on image size. If you like the Blend result, Label the Result channel and click Save Channel, which also takes some time to complete.
      • Blend Function Selection (performs the selected operation for each voxel in the image stacks, putting the result in the corresponding voxel of the Result channel). Note that all voxel values are floating point numbers between 0.0 (black) and 1.0 (white). All computations are based on raw image values, not currently rendered values, except “Product of Thresholds”, which does apply the current threshold settings before computing the product. Note that the product of two channels can be used as a simple form of co-localization test: the Result is near 1.0 if A and B are both near 1.0.
        • Average: (A + B)/2
        • Product: (A * B)  
        • Difference: (A – B)
        • Product of Thresholds: (Threshold(A) * Threshold(B))
      • Save File – When clicked, the current Focus channel is written out to the same image directory as all the other channels, using the channel name, which might have been modified by Label Channel, as the filename.  Any newly created channels that have not been saved yet are indicated by an asterisk (“*”) as a suffix to the channel name.  The file will be saved as a NIfTI file (.nii), and will automatically be reloaded the next time the directory is used in an Image Load.
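The four blend functions above can be sketched voxel-wise with NumPy. This is an illustration, not ConfocalVR’s actual code: the `blend` helper is hypothetical, and the zero-below-threshold reading of “Product of Thresholds” is an assumption.

```python
import numpy as np

def blend(a, b, mode, min_a=0.0, min_b=0.0):
    """Voxel-wise blend of two channels; values assumed in [0.0, 1.0]."""
    if mode == "Average":
        return (a + b) / 2
    if mode == "Product":
        return a * b
    if mode == "Difference":
        return a - b
    if mode == "Product of Thresholds":
        # Assumed interpretation: voxels below the Min Threshold go to 0
        # before the product is taken.
        return np.where(a >= min_a, a, 0.0) * np.where(b >= min_b, b, 0.0)
    raise ValueError(f"unknown blend function: {mode}")

a = np.array([0.2, 0.8, 0.9])
b = np.array([0.4, 0.5, 0.9])
avg = blend(a, b, "Average")    # [0.3, 0.65, 0.9]
prod = blend(a, b, "Product")   # [0.08, 0.4, 0.81]
```

The last line shows the co-localization idea from the note above: the product is near 1.0 only where both channels are near 1.0.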

    Settings Panel

    Excluder – the Excluder Sphere (or Cube) is visible as a gray translucent object near the bottom left of the virtual space. You can reach into it with the left or right controller, and pull and hold the trigger button, to grab and drag the sphere into the image. You can use it to help understand the biology by seeing into an image when it has depth and visual complexity.

    • Sphere or Cube – Click on the box to select which excluder geometry you wish to use
    • Invert – by default voxels inside the Excluder will not be rendered. This button inverts that so that only voxels inside the excluder render. As mentioned above, this is helpful both in focusing on a region of interest and in reducing the computation load on your CPU/GPU, so the rendering rate (fps) will increase
    • Attach to – when clicked, the Excluder is virtually attached to either the virtual world, your head, or the image itself.
      • World mode, the excluder does not move unless grabbed. Neither movements of your head nor movements of the image will change the location of the excluder.
      • Head mode, the excluder is virtually attached to your head at the relative position it is in when clicked.  Now if you move your head, the excluder will follow your head translations and rotations. You can grab the excluder and relocate its position relative to your head. A useful technique is to move the non-inverted excluder so that your head is inside of it. Now when you lean into the image, you can see the internals nearby, without being obscured by image elements that are right in your face.
      • Image mode, the excluder is attached to the image, so if you now move the image, the excluder stays in the same position relative to the image. This is useful if you are focused on a specific part of the image and you want to rotate the image around to look at all sides, without repositioning the excluder.
    • Size – change the size of the Excluder to fit your visualization needs
    • Depth – change the “z” axis of the excluder, stretching or shrinking it in one direction. This is useful if you want to focus on an oval area by setting the sphere depth larger, or if you want to focus on a 2D cut of the image by setting the cube depth very small. This is useful for selecting the ideal 2D view to grab as a screenshot for publication.
    • Reset – once you change the Depth, it can be hard to get back to a perfect sphere or cube. Clicking Reset will set the depth (z) axis to the same size as the others (x,y).
    • Home – when clicked, will move the Excluder back to its initial home position and move the attach point back to the World frame.
    • NOTE: It is possible for the Excluder to be set such that it is completely within the image. When this happens you may not be able to grab the Excluder, as it is ambiguous whether you are grabbing the image or the Excluder. If you use the Channel Panel’s “Visible All” button, you can make all channels invisible, then grab the Excluder and move it outside the image frame, and then make the channels visible again. Or, you can increase the size of the Excluder until it extends outside the image frame, and then grab, move, and resize it again.
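Conceptually, the Excluder decides which voxels render with a simple inside/outside test. This NumPy sketch of a spherical mask (the `excluder_mask` helper is hypothetical) illustrates the default and Invert behaviors:

```python
import numpy as np

def excluder_mask(shape, center, radius, invert=False):
    """Boolean mask of voxels to render: by default voxels inside the
    sphere are hidden; with invert=True only those voxels are shown."""
    zz, yy, xx = np.indices(shape)
    cz, cy, cx = center
    inside = (xx - cx) ** 2 + (yy - cy) ** 2 + (zz - cz) ** 2 <= radius ** 2
    return inside if invert else ~inside

# A voxel at the sphere's center is hidden by default, shown when inverted.
mask = excluder_mask((5, 5, 5), center=(2, 2, 2), radius=1.5)
inv = excluder_mask((5, 5, 5), center=(2, 2, 2), radius=1.5, invert=True)
```

In Invert mode the GPU only has to render the small region inside the sphere, which is why it also helps the frame rate.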

    Analytics Panel

    • Counter – Count the number of interesting objects you observe in your 3D image. First select one of the 5 counters by clicking on the selector box. Then click Start. Reach out with the left controller, overlap the small white marking sphere with the image element you want to mark, and click the side trigger to leave a visual marker in the image. Keep count in one of 5 counter channels, color coded. When done counting with one counter, you can select another counter. When done, click End count, or Clear if you want to start this counter over again. Save Counts writes the (x,y,z) voxel locations to a .csv file in the image directory for later analysis.  Counts can be loaded and viewed later.  Note that if you are in a VR collaboration, the counter operations are visible to all collaborators.
    • Distance Measure – When enabled, displays a marker (a small white sphere attached to your right controller) and lets you reach into your image, pull and hold the right controller side trigger to start a distance measure, and then move the marker to the end measure location and release the controller trigger. This will create and display the distance measurement line. The distance is computed using voxel indices and voxel size, and is displayed in the image and next to the measure marker on the controller.
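As the guide notes, the distance comes from voxel indices scaled by physical voxel size. A minimal sketch of that computation (the `voxel_distance` helper and the 0.2 × 0.2 × 1.0 micron voxel size are hypothetical examples):

```python
import math

def voxel_distance(p1, p2, voxel_size):
    """Euclidean distance between two voxel indices, scaled by the
    per-axis physical voxel size (result is in the same units)."""
    return math.sqrt(sum(((a - b) * s) ** 2
                         for a, b, s in zip(p1, p2, voxel_size)))

# e.g. anisotropic 0.2 x 0.2 x 1.0 micron voxels:
d = voxel_distance((0, 0, 0), (30, 40, 0), (0.2, 0.2, 1.0))  # 10.0 microns
```

Note that with anisotropic voxels (deeper in z than in x/y, as is common in confocal stacks), each axis must be scaled separately before the Euclidean distance is taken.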

    • Reference Axes – When enabled, places a reference scale in the image that you can adjust and move. This helps you understand the physical dimensions of the objects in your image, and can also be useful when taking 2D image screenshots for use in publications. The Size slider lets you adjust the scale of the axes to match your visualization needs. You can reach out into the +x,y,z quadrant of the axes. When your controller is properly located, the reference frame will change color, showing that you can now grab the axes and move them around. Note that you can translate but not rotate the reference axes; this is so that the alignment with the axes of the voxels is not broken.  The Reset Position button, when clicked, will move the axes back to the center of the image.   NOTE: It is possible for the reference axes to be set such that they are completely within the image. When this happens you may not be able to grab the axes, as it is ambiguous whether you are grabbing the image or the axes. If you use the Channel Panel’s “Visible All” button, you can make all channels invisible, then grab the axes and move them outside the image frame, and then make the channels visible again.

    • Animations – You can either start the current image rotating so you can observe the full 3D image, or you can view channels as a sequence of movie frames. For a movie, it sequentially turns each currently loaded channel on, then off, to create an animated display of the image sequence.  Where normally each channel in a multichannel image is set to a unique color, for movies, when preparing the image stack, each channel should be created as a grayscale image so the animator can assign the same color scheme to every channel as it plays. Always view channel zero (0) and adjust the settings for optimal viewing; when you click Play Channel as Movie, those settings will be automatically applied to all image channels. The animation speed slider affects both the image rotation (if enabled) and the animation speed.

    • Compute Intensity Histograms – When enabled, goes through all channels currently loaded, and for each computes and displays a voxel intensity histogram so that you can see the distribution of intensities in your image.  After computing, the plot of the current Focus channel is displayed behind the Min/Max Threshold setting sliders so that you can see what voxel values of the image you are setting to black.
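The per-channel histogram can be sketched with NumPy’s `histogram`. The bin count and the assumption that intensities are normalized to [0.0, 1.0] (as the Blend section states) are illustrative choices, not ConfocalVR internals:

```python
import numpy as np

def intensity_histogram(volume, bins=64):
    """Histogram of voxel intensities, assumed normalized to [0.0, 1.0]."""
    counts, edges = np.histogram(volume, bins=bins, range=(0.0, 1.0))
    return counts, edges

# Synthetic stand-in for one channel of a loaded stack.
vol = np.clip(np.random.normal(0.3, 0.1, size=(32, 32, 8)), 0.0, 1.0)
counts, edges = intensity_histogram(vol)
# counts sums to the voxel count; edges has bins + 1 entries.
```

Reading the histogram against the Min/Max Threshold sliders shows which intensity range you are about to clip to black.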

    • Save and Load Panel Locations – all the control panels, and the images, are grab-able by reaching into them with either controller and then pulling and holding the trigger button. You can then drag the panel or image to any location you like.  If you then click the Save Panel Locations button, those locations will be saved. If you then move the panels again, you can click the Load Panel Locations button and all panels will move back to the saved locations.

     

    Managing Rendering Speed

    Ideally rendering in VR should be 60-90 fps. In ConfocalVR, the ability of your computer to render images is affected by:

    • CPU and GPU speed – CPU, GPU, and VR systems are getting better all the time. If needed, a newer higher performance machine will increase your framerate
    • Image Size – larger images require more compute to render; use ImageJ cropping and rescaling to manage image size. Images on the order of 1000x1000x100 work on most VR-ready systems
    • Number of Channels – the more channels that are loaded, and the more that are set to Visible, the slower the rendering. When initially loaded, only one channel is turned on so that you can choose how to proceed with render-speed mitigation techniques
    • Render Quality – higher quality slider settings increase compute load. Start low (far left) and increase slowly while watching your image; when the visible quality of the image no longer improves, stop increasing Render Quality.
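Image size can also be reduced before loading. This crude stride-based downsample in NumPy is only a sketch (`downsample` is a hypothetical helper; ImageJ’s rescaling with interpolation gives smoother results):

```python
import numpy as np

def downsample(volume, factor):
    """Crude downsample: keep every `factor`-th voxel on each axis.
    Cuts memory and rendering load per channel by roughly factor**3."""
    return volume[::factor, ::factor, ::factor]

vol = np.zeros((200, 200, 40), dtype=np.float32)  # stand-in for a big stack
small = downsample(vol, 2)                        # shape (100, 100, 20)
```

Halving each axis reduces the voxel count eightfold, which usually buys back more fps than any slider adjustment.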

     

    VR Hand Controllers Actions

    • Re-center Scene – your location within the virtual environment can be re-centered by pushing down on the right controller’s thumb control (touch wheel or joy stick). This will reorient you, based on where you were looking when you pressed the button, to the middle of the scene. Use Re-center along with moving the control panels to get an overall layout for the virtual panels that will work well for you in the physical room layout in which you are operating the VR equipment
    • Blink Channel – the visibility of the current Focus channel can be conveniently switched off and on without looking at the Channel control panel, by simply pushing forward and down on the left controller’s thumb control (touch wheel or joy stick).
    • Change Channel – you can step through the channels of your image by pushing left or right and clicking down on the left controller’s thumb pad (touch wheel or joy stick) 
    • Pointing – when you point a controller at a “selectable” object, the laser pointer will come on automatically to show you exactly where you’re pointing
    • Selecting – once pointing at an object, pull the controller trigger to “select” that object
    • Grabbing – in some cases, like moving the image around, you will reach the controller into the image. As you do you will feel a small bump in the controller to indicate you are in contact with the grab-able object. If you then pull and hold the controller trigger, you can move the object around, until you release the trigger
    • Counting – once you Start Markers (Counters), a small white sphere will appear out in front of the left controller. Move the controller to place the sphere where you want to Mark/Count and then squeeze the left controller’s side trigger
    • Measuring Distance – once you enable distance measuring, a small white sphere will appear out in front of the right controller. Move the controller to place the sphere where you want to start measuring, then squeeze and hold the right controller’s side trigger until you have moved the controller to the end of your measurement, and release the side trigger.