Virtual Reality - Milestone 1 - Scene Inspection #71347

Closed
opened 2019-11-04 23:41:07 +01:00 by Dalai Felinto · 43 comments

Name: Scene Inspection (or assisted VR).
Commissioner: @sebastian_k
Project leader: @JulianEisel
Project members: @SimeonConzendorf
Big picture: Initial virtual reality support for users and foundation for advanced use cases.

Use cases:

  • Artists working for VR games.
  • "Samsung GearVR" / "Google Cardboard" movie making / app development.
  • Directors inspecting + set / previz.

Design:

  • VR experience (minus teleport) to be controlled via non-VR UI.
  • Users should be able to go in and out of VR without interrupting their work in Blender (i.e., keep working in Blender as usual even while VR is on).
  • VR navigation should be possible from "viewport" (i.e., outside VR).
  • Anchor from VR to scene should be consistent across VR sessions.

Engineering plan:

  • OpenXR support.
  • Integration at the GHOST level.
  • Affected areas: wm_*, 3dview, keymap, ghost.

Work plan:

  • [x] Mirrored VR view (sync VR view pose with regular 3D View)
  • [x] Virtual camera object
  • [x] More shading options
  • [x] Positional tracking toggle (3DoF vs. 6DoF)
  • [x] Location bookmarks
  • ~~VR view scale option~~ (not necessary for milestone 1, it's a general feature; already implemented in the `xr-world-navigation` branch)
  • [x] Better setup of the base position, especially height
  • [x] Share VR view shading options with a regular 3D view

Time estimate: ??
Note: In-VR view navigation should not get in the way (postponed if needed).

Julian Eisel was assigned by Dalai Felinto 2019-11-04 23:41:07 +01:00
Author
Owner

Added subscribers: @JulianEisel, @sebastian_k, @dfelinto

Author
Owner

Work in progress patch: [D5537](https://archive.blender.org/developer/D5537)

Added subscriber: @SimeonConzendorf

Added subscriber: @ZsoltStefan

Wall of text following! Jump to the last paragraph "Tools" if in a hurry :)

In the following I am trying to describe a workflow in VR for Milestone 1. Bear in mind, this is based on the specific needs and experiences in our studio (blendfx), other users and studios might have different needs.

Scene inspection

The most basic use case of VR in Blender is simple scene exploration with an HMD (head-mounted display) that supports rotational AND positional tracking, i.e. six degrees of freedom (6dof), such as the Vive, the Oculus Rift or a Windows Mixed Reality headset.
You start the VR session from within Blender, put on the HMD, start walking around and exploring the scene.
This already works in Blender, as far as I can tell.

Assisted VR

What does not work yet is interaction with the scene by using your VR controllers. You cannot use the controllers to jump to different spots in the scene, nor can you interact with objects in your scene.
Nevertheless some useful work can be done without VR controller interaction already, and that is when you have a person to assist you by controlling the Blender scene traditionally, by using mouse, tablet and keyboard.

While the end goal is to be able to interact with your Blender scene by using VR controllers, the goal for Milestone 1 is scene inspection and assisted VR.

In the case of assisted VR there are 2 people, one person wearing the headset (in the following called "Viewer") and one person controlling the Blender session (in the following called "Controller"). Of course, within certain limits Viewer and Controller could be the same person, wearing a headset and sitting in front of the computer at the same time.

In assisted VR the Viewer can explore the scene and give feedback to the Controller, while the Controller can move things around and interact with the scene, just like in a regular Blender session.

Visual Feedback for the Controller

In order to be able to communicate precisely about what the Viewer is experiencing in VR the Controller needs to see what the Viewer is seeing.
While VR headsets usually have their own external viewer app on the desktop, it would be easier and more intuitive for the Controller if he could see the VR output right there in the Blender viewport.

This could be achieved easily by using the Local Camera override in the viewport.

On top of that the Controller should be able to see the virtual camera object move and rotate in the scene. That way he could have one viewport in wireframe shading in top view, where he can see the virtual camera location updating as the Viewer walks through the scene, and one viewport that shows what the Viewer is seeing.
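
For illustration, here is a minimal Python sketch of the Local Camera idea mentioned above; the object name "VR Camera" is a placeholder for whatever object ends up representing the viewer, not an existing convention:

```python
import bpy

def show_vr_view_in_viewport(area, vr_camera_name="VR Camera"):
    """Point one 3D View at a (hypothetical) VR camera object via Local Camera.

    Other viewports keep using the scene camera, so the Controller can have
    one view showing what the Viewer sees and another for regular editing.
    """
    vr_camera = bpy.data.objects.get(vr_camera_name)
    if vr_camera is None or area.type != 'VIEW_3D':
        return
    space = area.spaces.active
    space.use_local_camera = True   # this viewport ignores the scene camera...
    space.camera = vr_camera        # ...and looks through the VR camera instead
    space.region_3d.view_perspective = 'CAMERA'


# Example: apply it to the first 3D View of the current screen.
for area in bpy.context.screen.areas:
    if area.type == 'VIEW_3D':
        show_vr_view_in_viewport(area)
        break
```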

Active Camera

In order to have a consistent experience when you go in and out of VR sessions it would make sense to use the active camera in Blender as starting position and rotation. That way you can define the "entry point" intuitively. (Btw, by rotation I am referring only to Z-rotation. Giving the other two axes a rotation offset would most likely lead to motion sickness or at least feel awkward.)

Virtual Camera

However, I don't think it would be good to actually use the active camera as VR camera. When you end the VR session the active camera should still be at the original position (unless you actively tell it to).
Also, it could be useful for the Controller to see the position of the original camera AND the VR camera in order to compare the locations.
That's why I think it would be good to spawn a "virtual camera" object at the position of the active camera once the VR session is started.
In terms of Blender functionality maybe it could simply be a duplicate of the active camera object, but to make things clear in the viewport a different draw type (greyed out maybe) would probably be good.

Moving the virtual camera

Often scenes are too large to explore by actually walking through them in VR.
In the scenario of assisted VR the Controller has to be able to move the virtual camera around. This can happen while the Viewer is still in VR, even though it might induce a bit of motion sickness for a moment, but since this is not the end user experience, I think that's okay.
That's why the person controlling the Blender viewport should be able to grab the virtual camera object and move it around just like any other object in Blender.
It would be like a location and rotation offset that is added on top of the position and rotation from the 6dof HMD.
Being able to add an offset to the absolute position of the positional tracking is also useful when you want to test different heights, for instance when you want to simulate a small or tall user, or a user sitting in a chair, while the Viewer is still normally walking around in the scene.
And of course it is critical for location scouting when working for a 3dof application, which I will come to later.

Hotspots

When working on a large scene you might want to define jump points or hotspots to quickly navigate through your scene, so that you don't need to move the virtual camera by hand every time.
A rather intuitive approach to this would be to use multiple Blender cameras. So you would define a couple of cameras at the various hotspots in your scene and, ideally with a hotkey or so, could jump to them.
This would work great even when there is no person to assist you: You put on your VR headset and with one hand on the keyboard you could quickly jump to the various hotspots to inspect them.
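
To sketch that hotkey idea: an operator roughly like the following could cycle a virtual camera object through all cameras in the scene. The "VR Camera" object, the operator names, and the assumption that moving that object moves the VR base pose are all hypothetical, not existing Blender behavior:

```python
import bpy

class VIEW3D_OT_jump_to_next_hotspot(bpy.types.Operator):
    """Snap a virtual VR camera object to the next hotspot camera in the scene"""
    bl_idname = "view3d.jump_to_next_hotspot"
    bl_label = "Jump to Next Hotspot"

    def execute(self, context):
        # Hotspots are plain Blender cameras; cycle through them by name.
        hotspots = sorted(
            (ob for ob in context.scene.objects if ob.type == 'CAMERA'),
            key=lambda ob: ob.name,
        )
        vr_cam = bpy.data.objects.get("VR Camera")  # hypothetical virtual camera
        if not hotspots or vr_cam is None:
            return {'CANCELLED'}

        # Remember the last hotspot index as a custom scene property.
        index = (context.scene.get("vr_hotspot_index", -1) + 1) % len(hotspots)
        context.scene["vr_hotspot_index"] = index
        vr_cam.matrix_world = hotspots[index].matrix_world.copy()
        return {'FINISHED'}


def register():
    bpy.utils.register_class(VIEW3D_OT_jump_to_next_hotspot)
```

Bound to a key, the Viewer could cycle hotspots with one hand on the keyboard, even without an assistant.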

Location scouting

The concept of creating hotspots could also work the other way around:
The Viewer is exploring the scene in VR, and once he finds a good place for a potential hotspot the Controller could "bookmark" this position with an operator that creates a camera in that position. Or, alternatively a similar operator could also relocate the active camera in that hotspot.
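
The bookmarking operator could be a few lines of Python once the viewer pose is accessible from scripts; here is a minimal sketch, with get_viewer_pose() standing in for that not-yet-existing piece of API:

```python
import bpy


def get_viewer_pose():
    """Placeholder: return (location, rotation_euler) of the VR viewer.

    How the viewer pose gets exposed to Python is exactly the open point
    listed under "Tools" below; here it is simply assumed to exist.
    """
    raise NotImplementedError


class VIEW3D_OT_bookmark_viewer_pose(bpy.types.Operator):
    """Create a camera at the current VR viewer position (a hotspot bookmark)"""
    bl_idname = "view3d.bookmark_viewer_pose"
    bl_label = "Bookmark Viewer Pose"

    def execute(self, context):
        location, rotation = get_viewer_pose()
        cam_data = bpy.data.cameras.new("Hotspot")
        cam_obj = bpy.data.objects.new("Hotspot", cam_data)
        context.scene.collection.objects.link(cam_obj)
        cam_obj.location = location
        cam_obj.rotation_euler = rotation
        return {'FINISHED'}
```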

3dof

The concept of hotspots becomes even more important when you think of mobile VR. Many of these devices do not support positional tracking, so they have only 3 degrees of freedom, such as the Oculus Go, Samsung Gear, Pico 4k and others. Since they are also more limited in terms of realtime performance compared to desktop VR headsets, one way to achieve high-quality visuals is to use prerendered content.
With Cycles' stereoscopic panorama rendering capabilities Blender is an excellent tool for that.

Location scouting for the best hotspots is of course very important for prerendered content.

In order to test these hotspots and simulate how they would look in the mobile VR headset, the Controller needs to be able to disable positional tracking temporarily.
Here's how that would work: With a 6dof headset the Viewer would look for a good position. Once he finds one, the Controller would toggle from 6dof to 3dof. Now the Viewer would see the scene just like it would look in the 3dof headset. If the place is approved the Controller would bookmark it, then toggle back to 6dof and the Viewer could scout for the next position.
Of course the VR session could also start with the 3dof toggle active, so the Viewer cannot move at all, but, for instance, could jump to the different hotspots.
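
To make the 6dof/3dof toggle concrete, this is how the final viewer pose could be composed conceptually (plain mathutils math as an illustration, not how Blender actually implements it; hmd_rotation is assumed to be a quaternion):

```python
from mathutils import Matrix

def compose_viewer_matrix(base_matrix, hmd_rotation, hmd_location, positional_tracking):
    """Combine the base pose (virtual camera) with the HMD tracking data.

    With positional tracking disabled (3dof), only the HMD rotation is applied,
    so the Viewer stays pinned to the base position, just like on a 3dof headset.
    """
    rotation = hmd_rotation.to_matrix().to_4x4()
    if positional_tracking:
        return base_matrix @ Matrix.Translation(hmd_location) @ rotation
    return base_matrix @ rotation
```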

Scale

Nicolas from the Ubisoft Animation Studio mentioned that changing the scale of the Viewer would be useful for them. This could be a simple slider in the yet-to-be-designed VR panel.

Some extras

Here are some things that would be nice but might be out of scope for Milestone 1:

Blackfades?
When jumping between different hotspots it might be more comfortable to have a little black fade between them. Might be a bit out of scope for Milestone 1, but it makes jumping to hotspots or teleporting a bit less abrupt and more comfortable.

Per-object eye-visibility controls
Especially when designing VR applications for 3dof headsets sometimes you need to be able to simulate which object is visible to which eye. For instance you might want to embed a stereoscopic image into your scene. If you cannot control which texture is shown to which eye, or which object is shown to the left or the right eye you cannot do that.

Tools

Despite the lengthy description of which workflows Milestone 1 should enable (at least for the use case we have in our studio), there is, as far as I can see, not that much new stuff that we would need:

  • realtime updates of the virtual camera's position, with its values exposed via Python; all the bookmarking business could be scripted from there
  • make the virtual camera accessible for interaction in the viewport
  • a toggle between 3dof and 6dof
  • controlling the scale of the VR session

I assume this milestone includes rendering the scene with the same render engine as the viewport, including Eevee?
I would like to add some issues with rendering Eevee to VR glasses that I have found while using BlenderXR (by Marui-Plugin), which IMHO should be addressed for a good user experience:

  • Reflection probes. We need to evaluate them twice, for the two eyes. Until now they are only rendered once, so one eye sees the proper reflection, the other sees some messed up texture.
  • Viewport denoising introduces ghosting, because of the many fine head movements. This is very apparent and distracting on alpha mapped (partially transparent) materials, you basically have to hold your head very still to see clearly. Deactivating denoising however leads to jagged edges everywhere and "sparkling" textures.
Author
Owner

A few remarks on @sebastian_k 's write up

  • "Per-object eye-visibility controls" is great but I would approach this separately, as a way for blender to support stereo 3d textures. This is a separate project though.

  • With existing draw manager it shouldn't be hard to have the virtual camera as a temp object. That will need to be hardcoded (ie not implemented as add-on), but I think it is fine. Most things from milestone 1 fall in that category anyways. Using cameras still has the advantage of being able to recording fcurves, and have HUD attached to it though, so maybe this is an option just like we have local camera in a viewport.

Member

I think the points from @sebastian_k are fine, just adding a few technical considerations.

Visual Feedback for the controller (mirrored VR view)

  • The biggest issue I see here is performance. Usually, I think such mirror views just use the rendered result from one eye and display it in a window (doing things like crop and stretch to account for different view dimensions). We could do that too, but it gets quite complex if we want to allow interaction then. Things like OpenGL selection, operators using projection, different render settings (e.g. you may want overlays in the controller's view, but not in the VR view) require up-to-date view data. Some of it can only be obtained by drawing the entire viewport (e.g. depth buffer for selection).
Another issue with synchronizing the viewports is dealing with the different device refresh rates, e.g. the monitor may have 60Hz while the HMD has 90Hz. Drawing the regular viewport in the same pass as the VR viewport may mean we'll have to drop quite some frames that were needlessly rendered.
Note that I'm not saying we can't get a solution that requires minimum redraws, just that such a solution would need quite some work.
The simple solution is to render the viewport separately, with the most recent tracking information (rotation + location of the VR view). The performance penalty may be acceptable for the start.
  • If we go with the simple solution, that is drawing the mirrored view as regular, separate viewport, we have to consider how often we want to redraw it. For best feedback, we'd have to constantly redraw it with maximum FPS we can get, just like the VR view. That would however take away computation time from the VR view itself. We could simply add a maximum framerate option for the mirrored view for now. While not a great UI solution, this allows us great control and easy experiments.

Virtual Camera Object
I don't see a need for a hardcoded temporary object, all this should be doable with a Python-defined gizmo. To be able to record animation data from this, you'd need a permanent data-block no matter if we have a temporary object or a gizmo. The gizmo could just create a new, unused action data-block or write animation to a (new or existing) camera or empty.
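
A minimal sketch of what such a Python-defined gizmo group could look like; where the viewer pose comes from is exactly the open question, so the "xr_viewer_matrix" lookup below is purely hypothetical:

```python
import bpy
from mathutils import Matrix


class VIEW3D_GGT_vr_viewer(bpy.types.GizmoGroup):
    """Draw a gizmo at the VR viewer pose in regular 3D viewports"""
    bl_idname = "VIEW3D_GGT_vr_viewer"
    bl_label = "VR Viewer Pose"
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'WINDOW'
    bl_options = {'3D', 'PERSISTENT'}

    def setup(self, context):
        gizmo = self.gizmos.new("GIZMO_GT_dial_3d")
        gizmo.color = 0.2, 0.6, 1.0
        gizmo.alpha = 0.6
        self.viewer_gizmo = gizmo

    def draw_prepare(self, context):
        # Hypothetical source for the viewer pose; in reality this data would
        # first have to be exposed by the VR session to the Python API.
        matrix = context.window_manager.get("xr_viewer_matrix")
        if matrix is not None:
            self.viewer_gizmo.matrix_basis = Matrix(matrix)


def register():
    bpy.utils.register_class(VIEW3D_GGT_vr_viewer)
```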


Added subscriber: @Defaultsound

Just wanted to add something that would be super useful for animation/Motion capture. (Think this is the right place)

With the HTC Vive there are [trackers](https://www.vive.com/uk/vive-tracker/) available that allow you to track anything that has the tracker attached to it. This makes low-budget motion capture possible.

![VRBlender.gif](https://archive.blender.org/developer/F8201878/VRBlender.gif)

I did a quick test using [Brekel](https://brekel.com/openvr-recorder/) to get the data from the trackers and was able to import this into Blender. Using this data I could then parent the empties to a rig as seen above. Super promising given how basic the setup was. Worth considering if VR support is to be added.


Added subscriber: @frameshift

Added subscriber: @Fenris8

Added subscriber: @rogper

Added subscriber: @slumber

Added subscriber: @MiroHorvath

Added subscriber: @christian

Added subscriber: @wakin

Added subscriber: @kens41

Added subscribers: @JorgeBernalMartinez, @JacobMerrill-1

@JorgeBernalMartinez - do you think we can do a VR actuator (similar to mouse actuator / mouse look actuator / joystick eh?)
they will need to do quite a bit of work to get to where we can be in a few minutes I think. (with bullet /aabb picking etc)


Added subscriber: @BeckersC

Member

One design question came up during review that we need to answer before merging the last two patches: How to manage shading settings for the VR view? I'm talking about these settings: ![View3DShading.png](https://archive.blender.org/developer/F8409630/View3DShading.png)

I see three options:

  • Own shading options for the VR view, probably with an option to sync them to a 3D View
  • Share options with some 3D View
  • Use the scene shading (i.e. the shading for the active render engine)

Note that right now, the VR view is not coupled to a specific 3D View, it's independent. So whenever we'd introduce some syncing with 3D viewports, we'd have to establish such a coupling, and deal with the implications (e.g. when closing the 3D View while the VR view is still open). For the syncing in #1 we could keep this simple and accept some trade-offs; for #2 it would have to work more reliably to not break shading. #3 is simple, but I found it quite hard to discover (my implementation used a similar approach for a while). Nothing hints at the fact that the scene render settings control the VR view. If there are no shading buttons where the VR view settings are, users would go to the 3D View shading settings first and expect them to influence the VR view too. That said, once you've figured it out it's easy to deal with. While the VR mirror runs, we can override the shading for the mirror view, that's trivial. Another consequence is that you effectively only get the "Rendered" display mode and use the render engine to change between solid shading and Eevee, for example. Wireframe and material preview wouldn't be available, I think.
According to Brecht, this would also be a more future-proof approach.


There could also be some combination. E.g. there could be a toggle between "Render Shading" and "Custom", so a combination of #1 and #3. We probably wouldn't need a syncing option with 3D Viewports. It would also address the discoverability issue, as the usage of scene settings would be mentioned explicitly in the UI.


Added subscriber: @brecht

If you follow the shading in the 3D viewport, you automatically get #2 and #3, since it uses the scene shading settings when in rendered mode. I also don't think syncing settings with a 3D viewport is difficult; internally there can still be a shading struct used during a VR session that is synced with the 3D viewport.

There is a difference in use cases between editing a scene by yourself and giving a demo of a scene to someone else. And for that we have the distinction between 3D viewport and scene shading settings already. But both use cases exist with and without VR, you can also demo a scene without VR. Having to set up scene shading when you can only see it with a VR headset on is clumsy as well.

Member

We discussed this in the chat again and agreed it's fine to always sync shading settings with the 3D View the session was started from.
If this 3D View is closed (by merging a different area into it) you lose the ability to change the settings in the UI, so you'd have to re-open the session from a different 3D View. This seems to be an acceptable trade-off for a corner case to us.

So this is #2, which includes #3 as mentioned by Brecht.

Checked with @sebastian_k and he was also fine with this approach.

Implemented in 1792bf1c0a.

Member

One note: I disabled using the rendered display mode in the VR session while Cycles is set as render engine (or in fact any engine that is not Workbench or Eevee). Cycles probably wouldn't be usable and with the current implementation it didn't work at all.
So what will happen then is that we simply stop sharing the shading settings until you leave rendered mode again (or change to Eevee/Workbench).


Glad to see the first milestone in master! Tried it out recently and it works great.
Is there a thread for user feedback (e.g. on devtalk) or should I post here? For example, clicking on the landmarks doesn't seem to do anything. Shouldn't it jump to that point? EDIT: double-clicking on a landmark does the trick.


@JulianEisel : when jumping to a landmark, Blender seems to apply the relative position of the landmark compared to the last landmark that was used, to the VR-camera. The VR-camera doesn't get positioned exactly onto the landmark. So if I've moved 2 meters to the side, the new position is also 2 meters to the side of the new landmark I clicked on.
Is this the intended behaviour? There should be a way to jump exactly to the predefined landmarks.
Even if I close the VR session and start a new one, the VR camera position doesn't get reset, so I can never get back to an exact position, unless I close Blender and open it again.

I've also noticed a graphics glitch when viewport denoising is turned on. There is a ghost of the objects from the left eye in the right eye and vice versa. And: the part of the background occluded by an object from the perspective of one eye but visible from the other eye isn't properly denoised like the rest of the scene.

Member

@ZsoltStefan sorry for being unresponsive... I think here is fine for now, I don't expect much feedback or discussion.
This landmark behavior is indeed not wanted but should be easy to fix. Will try to do that in the coming days.
Unfortunately I didn't get much feedback on the landmarks earlier, it's just coming in now. So there will probably be some further improvements soon.
I don't know what's going on with the denoising, I'd have to debug that.

Member

@ZsoltStefan I committed 0cfd2d6f4b and blender/blender-addons@82f1a7bd4c which should change the positioning with landmarks, although I'm not able to actually test it currently. So feedback welcome.


@JulianEisel We just tested recent master, and it does seem to work when positional tracking is off.
There are two problems still:

  1. When, during a VR session, we have activated Landmark_1, for example, and THEN enable positional tracking, an offset is added immediately. So we cannot start exploring the scene from the exact position of Landmark_1, because enabling positional tracking adds an offset right away.
  2. When positional tracking is enabled it is still not possible to go to the exact positions of the landmarks by activating them, there is still an offset on the virtual camera.

@JulianEisel : hi, thanks for the reply. It is probably not easy to debug without an HMD. I found an emulator ([here](https://docs.microsoft.com/en-us/windows/mixed-reality/using-the-windows-mixed-reality-simulator)), maybe this helps.
I just tested the latest build (Blender 2.90 fdebdfa320) and have the following observations:

  • With positional tracking it still doesn't reset to the absolute position of a new landmark.
  • It would be good to be able to "grab" and move/rotate the VR camera object, or more accurately its base point, while it remains in its positional tracking, since the person inside the HMD has no way to do this with controllers. This is actually possible right now if you start moving the active landmark, but then you can't reset to this landmark, since you've already moved it.
  • The VR view seems not to be linked completely with the 3D viewport. I can for example change the shading type. But I can't toggle the View Layer. It always shows the first view layer. I have my VR scenes set up with different view layers that I switch (have hotkeys set up) while in the VR session. The interesting thing is, if I turn on mirroring in the VR viewport, then there it shows the selected view layer. So basically the viewport "mirror" is not actually showing the same thing as in the headset.
  • For the ghosting of objects with viewport denoising I made a bug report including a screenshot of what it looks like. (https://developer.blender.org/T76003)
  • One small thing: shadow maps seem to be calculated for both eyes separately. The "shadow pixels" on the edge of the shadow are somewhat different; this is noticeable especially on shadows of thin/small objects, where the shadow is there in one eye but not the other. Maybe this isn't even necessary, afaik shadow maps are view-independent?

Hey @ZsoltStefan , thanks for looking into this as well. I completely agree with you. I have been trying to improve the landmark controls, maybe you want to have a look? https://github.com/blendfx/blender_vr_viewport

Member

I'm hoping to look into most of the mentioned issues within a few days. Just need to allocate a few hours for it.


For all, I finally created a devtalk thread for user feedback: https://devtalk.blender.org/t/vr-scene-inspection-feedback

This task needs a few updates, #68998 too. Besides that, I don't think we should close this task just yet. But I probably will after landmarks and the VR mirror got some polish/fixes.

Member

In #71347#920212, @sebastian_k wrote:
@JulianEisel We just tested recent master, and it does seem to work when positional tracking is off.
There are two problems still:

  1. When, during a VR session, we have activated Landmark_1, for example, and THEN enable positional tracking, an offset is added immediately. So we cannot start exploring the scene from the exact position of Landmark_1, because enabling positional tracking adds an offset right away.
  2. When positional tracking is enabled it is still not possible to go to the exact positions of the landmarks by activating them, there is still an offset on the virtual camera.

I first thought this was a simple bug to be fixed. But I just realized upon investigation that this is more of a design issue.

Right now, using positional tracking means that we always take the position delta and add that to the base pose. Note that this delta is basically the distance from the position right in front of the screen.
This made sense to me because I thought positional tracking should always give you some predictable pose calculated based on a fixed point. But I can see that this wouldn't always be wanted.

We could subtract the position delta from the moment the landmark was changed, so that you'd always start at exactly the landmark position. Maybe that's also the solution to our height problem, we'd just always use the height of the landmark for the starting position. This would have other side effects though: Say you sit down, start the session, stand up and change the base pose - we sort of lost the information that you stood up then. Your height will just be whatever the height of the landmark is. Maybe that's fine, or we can deal with that better still.
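
Restated as simple vector math (my notation, just to make the difference between the two behaviors explicit):

```python
from mathutils import Vector

def viewer_position(landmark_position, tracking_delta, delta_at_activation=None):
    """Sketch of the two behaviors discussed above, in plain vector math.

    Current: the full tracking delta (measured from the tracking origin) is
    added to the landmark/base pose, so an offset carries over between landmarks.
    Proposed: subtract the delta captured when the landmark was activated,
    so the viewer starts exactly on the landmark and only movement made
    afterwards is applied.
    """
    if delta_at_activation is None:  # current behavior
        return landmark_position + tracking_delta
    return landmark_position + (tracking_delta - delta_at_activation)
```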


In my opinion the most intuitive thing to do is to use the landmark as absolute starting position. I think the problem you mentioned in the end can be dealt with after that, if needed. In its current form landmarks are not usable as intended, so rather fix that first :)


As @sebastian_k said, the most important use case I see is setting the absolute position of the VR-camera.
This could be expanded later with an option for the users, for other use cases. I was thinking of a drop down list for selecting the "type of teleportation":

  1. Set Absolute position: as discussed
  2. Set Absolute position with base height. This could work two ways:
  • with an input field for the height of the ground. So for example = 3m. --> This would apply the X and Y coordinates of the landmark to the VR-camera, and for the Z-coordinate it would apply the input from this field (3m) as the z-coordinate of the VR-ground-plane (=Z of the base pose I assume). The Z-coordinate of the landmark would be discarded.
  • use the z-position of the landmark as the ground level. So apply X and Y to the VR-camera, and apply Z to the VR-ground-plane.
  3. Set Base Position: the way it works now.
Member

Changed status from 'Confirmed' to: 'Resolved'

Member

In 2.90 the landmarks were improved quite a bit so they should be quite useful now. They are also what defines the viewer height now; changing to a landmark moves the view exactly to it. [The release notes list the changes](https://wiki.blender.org/wiki/Reference/Release_Notes/2.90/Virtual_Reality).
The scale option is a general feature that would be good to have, not really tied to the 1st milestone.

So with the 2.90 release, I'd consider the 1st milestone as feature complete.


Added subscriber: @Ref

"Btw, by rotation I am referring only to Z-Rotation. Giving the other 2 axis a rotation offset would most likey lead to motion sickness or at least feel awkward"

I would really like the option to enable rotation offsets on all axes. The way the virtual view flips around when only offset by the z axis is disorienting.

"Btw, by rotation I am referring only to Z-Rotation. Giving the other 2 axis a rotation offset would most likey lead to motion sickness or at least feel awkward" I would really like the option to enable rotation offsets on all axes. The way the virtual view flips around when only offset by the z axis is disorienting.

will this be merged soon?


Added subscriber: @satishgoda1
