Virtual Reality - Milestone 1 - Scene Inspection #71347
Reference: blender/blender#71347
Name: Scene Inspection (or assisted VR).
Commissioner: @sebastian_k
Project leader: @JulianEisel
Project members: @SimeonConzendorf
Big picture: Initial virtual reality support for users and foundation for advanced use cases.
Use cases:
Design:
Engineer plan:
Work plan:
VR view scale option: not necessary for milestone 1, it's a general feature. It's already implemented in the xr-world-navigation branch. Time estimate: ??
Note: In-VR view navigation should not get in the way (postponed if needed).
Work in progress patch: D5537
Wall of text following! Jump to the last paragraph "Tools" if in a hurry :)
In the following I am trying to describe a workflow in VR for Milestone 1. Bear in mind that this is based on the specific needs and experiences in our studio (blendfx); other users and studios might have different needs.
Scene inspection
The most basic use case of VR in Blender is simple scene exploration with an HMD (head mounted display) that supports rotational AND positional tracking, i.e. six degrees of freedom (6dof), such as the Vive, the Oculus Rift or a Windows Mixed Reality headset.
You start the VR session from within Blender, put on the HMD, start walking around and exploring the scene.
This already works in Blender, as far as I can tell.
Assisted VR
What does not work yet is interaction with the scene by using your VR controllers. You cannot use the controllers to jump to different spots in the scene, nor can you interact with objects in your scene.
Nevertheless some useful work can be done without VR controller interaction already, and that is when you have a person to assist you by controlling the Blender scene traditionally, by using mouse, tablet and keyboard.
While the end goal is to be able to interact with your Blender scene by using VR controllers, the goal for Milestone 1 is scene inspection and assisted VR.
In the case of assisted VR there are two people: one person wearing the headset (in the following called "Viewer") and one person controlling the Blender session (in the following called "Controller"). Of course, within certain limits, Viewer and Controller could be the same person, wearing a headset and sitting in front of the computer at the same time.
In assisted VR the Viewer can explore the scene and give feedback to the Controller, while the Controller can move things around and interact with the scene, just like in a regular Blender session.
Visual Feedback for the Controller
In order to be able to communicate precisely about what the Viewer is experiencing in VR the Controller needs to see what the Viewer is seeing.
While VR headsets usually have their own external viewer app on the desktop, it would be easier and more intuitive for the Controller to see the VR output right there in the Blender viewport.
This could be achieved easily by using the Local Camera override in the viewport.
On top of that the Controller should be able to see the virtual camera object move and rotate in the scene. That way he could have one viewport in wireframe shading in top view, where he can see the virtual camera location updating as the Viewer walks through the scene, and one viewport that shows what the Viewer is seeing.
Active Camera
In order to have a consistent experience when you go in and out of VR sessions it would make sense to use the active camera in Blender as starting position and rotation. That way you can define the "entry point" intuitively. (Btw, by rotation I am referring only to Z-rotation. Giving the other two axes a rotation offset would most likely lead to motion sickness or at least feel awkward.)
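The entry-point idea above can be sketched in plain Python (illustrative names only, not the actual Blender or OpenXR API): the base pose takes the active camera's location verbatim but keeps only the yaw (Z) component of its rotation.

```python
def vr_entry_pose(cam_location, cam_euler_xyz):
    """Derive the VR session's base pose from a camera transform.

    The location is used as-is; of the rotation, only the yaw (Z) component
    is kept, since pitch/roll offsets risk motion sickness.
    """
    _, _, yaw = cam_euler_xyz
    return cam_location, (0.0, 0.0, yaw)
```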
Virtual Camera
However, I don't think it would be good to actually use the active camera as the VR camera. When you end the VR session the active camera should still be at the original position (unless you actively tell it to move).
Also, it could be useful for the Controller to see the position of the original camera AND the VR camera in order to compare the locations.
That's why I think it would be good to spawn a "virtual camera" object at the position of the active camera once the VR session is started.
In terms of Blender functionality it could maybe simply be a duplicate of the active camera object, but to make things clear in the viewport a different draw type (greyed out maybe) would probably be good.
Moving the virtual camera
Often scenes are too large to explore by actually walking through them in VR.
In the scenario of assisted VR the Controller has to be able to move the virtual camera around. This can happen while the Viewer is still in VR, even though it might induce a bit of motion sickness for a moment, but since this is not the end user experience, I think that's okay.
That's why the person controlling the Blender viewport should be able to grab the virtual camera object and move it around just like any other object in Blender.
It would be like a location and rotation offset that is added on top of the position and rotation from the 6dof HMD.
Being able to add an offset to the absolute position of the positional tracking is also useful when you want to test different heights, for instance when you want to simulate a small or tall user, or a user sitting in a chair, while the Viewer is still normally walking around in the scene.
And of course it is critical for location scouting when working for a 3dof application, which I will come to later.
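A minimal sketch of that offset composition, assuming the yaw-only base rotation suggested above (plain Python with hypothetical names, not Blender's actual implementation): the tracked HMD delta is rotated by the base yaw and translated by the base location.

```python
import math

def viewer_world_position(base_loc, base_yaw, hmd_delta):
    """Compose the controller-set base pose with the HMD positional delta.

    base_loc/base_yaw come from the virtual camera object the Controller can
    grab and move; hmd_delta is the tracked offset from the tracking origin.
    """
    dx, dy, dz = hmd_delta
    c, s = math.cos(base_yaw), math.sin(base_yaw)
    return (base_loc[0] + c * dx - s * dy,
            base_loc[1] + s * dx + c * dy,
            base_loc[2] + dz)
```

Changing the Z component of `base_loc` gives exactly the height offset described above (simulating a small or tall user) without touching the tracked data.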
Hotspots
When working on a large scene you might want to define jump points or hotspots to quickly navigate through your scene, so that you don't need to move the virtual camera by hand every time.
A rather intuitive approach to this would be to use multiple Blender cameras: you would define a couple of cameras at the various hotspots in your scene and, ideally with a hotkey, jump between them.
This would work great even when there is no person to assist you: You put on your VR headset and with one hand on the keyboard you could quickly jump to the various hotspots to inspect them.
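A hotkey-driven jump could be as simple as cycling an index over the list of hotspot cameras (an illustrative sketch, not the actual operator):

```python
def jump_to_hotspot(cameras, current_index, step=1):
    """Return the next hotspot camera and its index, wrapping around.

    Bound to a hotkey, this lets the Viewer jump between hotspots with one
    hand on the keyboard; step=-1 cycles backwards.
    """
    if not cameras:
        return None, current_index
    i = (current_index + step) % len(cameras)
    return cameras[i], i
```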
Location scouting
The concept of creating hotspots could also work the other way around:
The Viewer is exploring the scene in VR, and once he finds a good place for a potential hotspot the Controller can "bookmark" this position with an operator that creates a camera at that position. Alternatively, a similar operator could relocate the active camera to that hotspot.
3dof
The concept of hotspots becomes even more important when you think of mobile VR. Many of these devices do not support positional tracking, so they have only 3 degrees of freedom (3dof), such as the Oculus Go, Samsung Gear, Pico 4k and others. Since they are also more limited in terms of realtime performance compared to desktop VR headsets, one way to achieve high-quality visuals is to use prerendered content.
With Cycles' stereoscopic panorama rendering capabilities, Blender is an excellent software for that.
Location scouting for the best hotspots is of course very important for prerendered content.
In order to test these hotspots and simulate how they would look in the mobile VR headset, the Controller needs to be able to disable positional tracking temporarily.
Here's how that would work: with a 6dof headset the Viewer would look for a good position. Once he found one, the Controller would toggle from 6dof to 3dof. Now the Viewer would see the scene just as it would look in the 3dof headset. If the place is approved the Controller would bookmark it, then toggle back to 6dof, and the Viewer could scout for the next position.
Of course the VR session could also start with the 3dof toggle active, so the Viewer cannot move at all, but, for instance, could jump to the different hotspots.
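The 6dof/3dof toggle described above boils down to whether the positional delta is applied at all. A sketch under the same simplified pose model (hypothetical names, not the real implementation):

```python
def tracked_viewer_location(base_loc, hmd_delta, positional_tracking=True):
    """Compute the viewer location for the current tracking mode.

    With the 3dof toggle active (positional_tracking=False) the positional
    delta is discarded, pinning the Viewer to the hotspot; only rotational
    tracking remains.
    """
    if not positional_tracking:
        return tuple(base_loc)
    return tuple(b + d for b, d in zip(base_loc, hmd_delta))
```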
Scale
Nicolas from the Ubisoft Animation Studio mentioned that changing the scale of the Viewer would be useful for them. This could be a simple slider in the yet-to-be-designed VR panel.
Some extras
Here are some things that would be nice but might be out of scope for Milestone 1.
Black fades?
When jumping between different hotspots it might be more comfortable to have a little black fade between them. This might be a bit out of scope for Milestone 1, but it makes jumping to hotspots or teleporting a bit less abrupt and more comfortable.
Per-object eye-visibility controls
Especially when designing VR applications for 3dof headsets you sometimes need to be able to simulate which object is visible to which eye. For instance, you might want to embed a stereoscopic image into your scene. If you cannot control which texture or which object is shown to the left or the right eye, you cannot do that.
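Conceptually, per-eye visibility is just a filter applied when building each eye's render list. A toy sketch (the per-object eyes tag is hypothetical; Blender has no such attribute today):

```python
def objects_for_eye(tagged_objects, eye):
    """Filter (name, eyes) pairs by eye, where eyes is "LEFT", "RIGHT" or
    "BOTH". Objects tagged "BOTH" appear in both eyes' render lists."""
    return [name for name, eyes in tagged_objects if eyes in (eye, "BOTH")]
```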
Tools
Despite the lengthy description of which workflows Milestone 1 should enable (at least for the use case we have in our studio), there is, as far as I can see, not that much new stuff that we would need:
I assume this milestone includes rendering the scene with the same render engine as the viewport, including Eevee?
I would like to add some issues with rendering Eevee to VR glasses that I found while using BlenderXR (by Marui-Plugin), which IMHO should be addressed for a good user experience:
A few remarks on @sebastian_k's write-up:
"Per-object eye-visibility controls" is great, but I would approach this separately, as a way for Blender to support stereo 3D textures. This is a separate project though.
With the existing draw manager it shouldn't be hard to have the virtual camera as a temporary object. That will need to be hardcoded (i.e. not implemented as an add-on), but I think it is fine. Most things from milestone 1 fall into that category anyway. Using cameras still has the advantage of being able to record f-curves and have a HUD attached, so maybe this is an option, just like we have the local camera in a viewport.
I think the points from @sebastian_k are fine, I'm just adding a few technical considerations.
Visual Feedback for the controller (mirrored VR view)
Virtual Camera Object
I don't see a need for a hardcoded temporary object, all this should be doable with a Python defined gizmo. To be able to record animation data from this, you'd need a permanent data-block no matter if we have a temporary object or a gizmo. The gizmo could just create a new, unused action data-block or write animation to a (new or existing) camera or empty.
Just wanted to add something that would be super useful for animation/motion capture. (I think this is the right place.)
With the HTC Vive there are trackers available that allow you to track anything that has the tracker attached to it. This makes low budget motion capture possible.
I did a quick test using Brekel to get the data from the trackers and was able to import it into Blender. Using this data I could then parent the empties to a rig as seen above. Super promising given how basic the setup was. Worth considering if VR support is to be added.
@JorgeBernalMartinez - do you think we can do a VR actuator (similar to mouse actuator / mouse look actuator / joystick eh?)
they will need to do quite a bit of work to get to where we can be in a few minutes I think. (with bullet /aabb picking etc)
One design question came up during review that we need to answer before merging the last two patches: How to manage shading settings for the VR view? I'm talking about these settings:
I see three options:
Use the scene shading (i.e. the shading for the active render engine)
Note that right now the VR view is not coupled to a specific 3D View, it's independent. So whenever we'd introduce some syncing with 3D viewports, we'd have to establish such a coupling and deal with the implications (e.g. closing the 3D View while the VR view is still open). For the syncing in #1 we could keep this simple and accept some trade-offs; for #2 it would have to work more reliably so as not to break shading.
#3 is simple, but I found it quite hard to discover (my implementation used a similar approach for a while): nothing hints at the fact that the scene render settings control the VR view. If there are no shading buttons where the VR view settings are, users will go to the 3D View shading settings first and expect them to influence the VR view too. That said, once you've figured it out it's easy to deal with. While the VR mirror runs we can override the shading for the mirror view, that's trivial. Another consequence is that you effectively only get the "Rendered" display mode and use the render engine to change between solid shading and Eevee, for example. Wireframe and material preview wouldn't be available, I think.
According to Brecht, this would also be a more future proof approach.
There could also be some combination. E.g. there could be a toggle between "Render Shading" and "Custom", so a combination of #1 and #3. We probably wouldn't need a syncing option with 3D Viewports. It would also address the discoverability issue, as the usage of scene settings would be mentioned explicitly in the UI.
If you follow the shading in the 3D viewport, you automatically get #2 and #3, since it uses the scene shading settings when in rendered mode. I also don't think syncing settings with a 3D viewport is difficult; internally there can still be a shading struct used during a VR session that is synced with the 3D viewport.
There is a difference in use cases between editing a scene by yourself and giving a demo of a scene to someone else. And for that we already have the distinction between 3D viewport and scene shading settings. But both use cases exist with and without VR; you can also demo a scene without VR. Having to set up scene shading when you can only see it with a VR headset on is clumsy as well.
We discussed this in the chat again and agreed it's fine to always sync shading settings with the 3D View the session was started from.
If this 3D View is closed (by merging a different area into it) you lose the ability to change the settings in the UI, so you'd have to re-open the session from a different 3D View. This seems like an acceptable trade-off for a corner case to us.
So this is #2, which includes #3 as mentioned by Brecht.
Checked with @sebastian_k and he was also fine with this approach.
Implemented in 1792bf1c0a.
One note: I disabled using the rendered display mode in the VR session while Cycles is set as render engine (or in fact any engine that is not Workbench or Eevee). Cycles probably wouldn't be usable and with the current implementation it didn't work at all.
So what will happen then is that we simply stop sharing the shading settings until you leave rendered mode again (or change to Eevee/Workbench).
Glad to see the first milestone in master! Tried it out recently and it works great.
Is there a thread for user feedback (e.g. on devtalk) or should I post here? For example, clicking on the landmarks doesn't seem to do anything. Shouldn't it jump to that point? EDIT: double-clicking on a landmark does the trick.
@JulianEisel : when jumping to a landmark, Blender seems to apply the relative position of the landmark compared to the last landmark that was used, to the VR-camera. The VR-camera doesn't get positioned exactly onto the landmark. So if I've moved 2 meters to the side, the new position is also 2 meters to the side of the new landmark I clicked on.
Is this the intended behaviour? There should be a way to jump exactly to the predefined landmarks.
Even if I close the VR session and start a new one, the VR camera position doesn't get reset, so I can never get back to an exact position, unless I close Blender and open it again.
I've also noticed a graphics glitch when viewport denoising is turned on. There is a ghost of the objects from the left eye in the right eye and vice versa. And the part of the background that is occluded by an object from the perspective of one eye but visible from the other eye isn't properly denoised like the rest of the scene.
@ZsoltStefan sorry for being unresponsive... I think here is fine for now, I don't expect much feedback or discussion.
This landmark behavior is indeed not wanted but should be easy to fix. Will try to do that the coming days.
Unfortunately I didn't get much feedback on the landmarks earlier, it's just coming in now. So there will probably be some further improvements soon.
I don't know what's going on with the denoising, I'd have to debug that.
@ZsoltStefan I committed 0cfd2d6f4b and blender/blender-addons@82f1a7bd4c, which should change the positioning with landmarks, although I'm not able to actually test it currently. So feedback welcome.
@JulianEisel We just tested recent master, and it does seem to work when positional tracking is off.
There are two problems still:
@JulianEisel: hi, thanks for the reply. It is probably not easy to debug without an HMD. I found an emulator (HERE), maybe this helps.
I just tested the latest build (Blender 2.90, fdebdfa320) and have the following observations:
Hey @ZsoltStefan, thanks for looking into this as well. I completely agree with you. I have been trying to improve the landmark controls, maybe you want to have a look? https://github.com/blendfx/blender_vr_viewport
I'm hoping to look into most of the mentioned issues within a few days. Just need to allocate a few hours for it.
For all, I finally created a devtalk thread for user feedback: https://devtalk.blender.org/t/vr-scene-inspection-feedback
This task needs a few updates, #68998 too. Besides that, I don't think we should close this task just yet. But I probably will after landmarks and the VR mirror got some polish/fixes.
I first thought this was a simple bug to be fixed. But I just realized upon investigation that this is more of a design issue.
Right now, using positional tracking means that we always take the position delta and add that to the base pose. Note that this delta is basically the distance from the position right in front of the screen.
This made sense to me because I thought positional tracking should always give you some predictable pose calculated based on a fixed point. But I can see that this wouldn't always be wanted.
We could subtract the position delta from the moment the landmark was changed, so that you'd always start exactly at the landmark position. Maybe that's also the solution to our height problem: we'd just always use the height of the landmark for the starting position. This would have other side effects though: say you sit down, start the session, stand up and change the base pose - we sort of lose the information that you stood up. Your height will just be whatever the height of the landmark is. Maybe that's fine, or we can deal with that better still.
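The proposed subtraction can be sketched like this (plain Python, illustrative only, not Blender's actual code): remember the HMD positional delta at the moment a landmark is activated and subtract it afterwards, so the viewer always starts exactly on the landmark.

```python
class LandmarkSession:
    """Minimal model of landmark jumps with delta subtraction."""

    def __init__(self):
        self.base = (0.0, 0.0, 0.0)
        self.delta_at_jump = (0.0, 0.0, 0.0)

    def jump_to_landmark(self, landmark_loc, current_hmd_delta):
        # Store the delta at jump time; it cancels out in viewer_location(),
        # so the viewer starts exactly at landmark_loc.
        self.base = landmark_loc
        self.delta_at_jump = current_hmd_delta

    def viewer_location(self, hmd_delta):
        return tuple(b + (d - d0) for b, d, d0
                     in zip(self.base, hmd_delta, self.delta_at_jump))
```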
In my opinion the most intuitive thing to do is to use the landmark as the absolute starting position. I think the problem you mentioned at the end can be dealt with after that, if needed. In its current form landmarks are not usable as intended, so rather fix that first :)
As @sebastian_k said, the most important use case I see is setting the absolute position of the VR camera.
This could be expanded later with an option for users with other use cases. I was thinking of a drop-down list for selecting the "type of teleportation":
Changed status from 'Confirmed' to: 'Resolved'
In 2.90 the landmarks were improved quite a bit, so they should be quite useful now. They also define the viewer height now, and changing to a landmark moves the view exactly to it. The release notes list the changes.
The scale option is a general feature that would be good to have, not really tied to the 1st milestone.
So with the 2.90 release, I'd consider the 1st milestone as feature complete.
"Btw, by rotation I am referring only to Z-rotation. Giving the other two axes a rotation offset would most likely lead to motion sickness or at least feel awkward"
I would really like the option to enable rotation offsets on all axes. The way the virtual view flips around when only offset by the Z axis is disorienting.
will this be merged soon?