Thoughts on Area Activation via Interaction #115158

Open
opened 2023-11-20 01:17:02 +01:00 by Harley Acheson · 2 comments
Member

This is more of a thought experiment, or discussion starter, than a design.

Our user interface consists of multiple editors in screen areas that allow us to do vastly different operations within a single application. Only one area can be "active" at a time, and it is indicated with a slightly highlighted header color. Our use of "active" for areas is mostly to designate which part of the program gets keyboard focus, and is therefore the recipient of keyboard events like text input and operator shortcuts. Since users usually have only a single keyboard, it makes sense that we need to designate a single area as having keyboard focus.

However, there are problems now - and more coming later - caused by our use of mouse position to indicate which area has focus. An example we have all noticed is when using the Text Editor: it only accepts keyboard entry while the mouse pointer is positioned within that area. If your mouse creeps out of the area, your typing suddenly stops.

This is more than a simple annoyance with one editor; it is a general problem of tying input focus to the position of one particular type of pointing device. There are many types of pointing devices, and many do not have a "tracking" state in which a persistent position is maintained as they move around. When a graphics tablet stylus is lifted, it is out of range and does not have a persistent position. Similarly, a touch surface does not keep a position when you pull your fingers away from it. If you imagine using an eye-tracking device, you probably don't think of pushing a mouse cursor around so that you can leave it parked in one of our areas.

We need to find an area activation model that works with any number of pointer devices, including the situation of having no mouse at all, as when using a touch surface.

Area Activation via Interaction

The current definition of the active area is that it is the one that "contains the mouse". We could change that to "the area being interacted with". Specifically, this means that we no longer set active area using mouse hover, but instead do so with all non-tracking pointer events. Basically all mouse-like events that are not hover. So pressing, dragging and similar events associated with user action, not just movement. An inactive area would become active the first instant that you press down with a mouse button, stylus, or finger.

In the Text Editor example, you could continue to type even if your mouse strays from the area, or if you have no mouse at all. Add a flashing text caret and it would seem quite civilized.
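
To make the idea concrete, here is a minimal sketch in C of how such event classification might look. All names here are hypothetical; this is not Blender's actual window-manager code, just an illustration of activating an area only on non-tracking events:

```c
#include <stdbool.h>
#include <stdio.h>

/* Pointer event kinds, collapsed across devices (mouse, stylus, touch). */
typedef enum {
    EVT_MOVE,    /* tracking: hover movement, nothing pressed */
    EVT_PRESS,   /* mouse button, stylus tip, or finger down */
    EVT_DRAG,    /* movement while pressed */
    EVT_RELEASE, /* listed for completeness; does not transfer activation */
} PointerEventType;

typedef struct {
    const char *name;
} Area;

/* Only non-tracking events express intent, so only they may activate. */
static bool event_activates_area(PointerEventType type)
{
    return type == EVT_PRESS || type == EVT_DRAG;
}

/* `hovered` is the area under the pointer; `active` holds keyboard focus. */
static void handle_pointer_event(const Area *hovered, const Area **active,
                                 PointerEventType type)
{
    if (event_activates_area(type) && hovered != *active) {
        *active = hovered;
        printf("activated: %s\n", hovered->name);
    }
}

int main(void)
{
    const Area text_editor = {"Text Editor"};
    const Area viewport = {"3D Viewport"};
    const Area *active = &text_editor;

    /* Hovering the viewport leaves the Text Editor focused... */
    handle_pointer_event(&viewport, &active, EVT_MOVE);
    /* ...but pressing in it transfers activation. */
    handle_pointer_event(&viewport, &active, EVT_PRESS);
    return 0;
}
```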

Active Area Indication

We currently show which area is active in a very subtle way. This is entirely intentional, as we don't want large visual changes just from moving your mouse from place to place. The best example is moving your mouse from Properties to the Top Bar: we don't want to see big changes to the 3D Viewport as the mouse passes over it, or edges flashing as you cross them.

However, with this change to area activation, areas no longer become active just because a mouse travels over them. That means activation has become more intentional, and with that we can add stronger indication. In fact, it might require stronger indication. We might want to show the borders of the active area in a different (or brighter) color, or perhaps show a line in the active color (blue by default) along its top edge.

Default Active Area

On window creation, we would need to set one area (the largest) as active immediately.
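
As a small illustration, assuming "largest" simply means the greatest width × height (hypothetical types, not Blender internals):

```c
#include <stdio.h>

typedef struct {
    const char *name;
    int width, height;
} Area;

/* Pick the area covering the most pixels as the window's initial active area. */
static const Area *largest_area(const Area *areas, int count)
{
    const Area *best = &areas[0];
    for (int i = 1; i < count; i++) {
        if (areas[i].width * areas[i].height > best->width * best->height) {
            best = &areas[i];
        }
    }
    return best;
}

int main(void)
{
    const Area areas[] = {
        {"Top Bar", 1920, 30},
        {"3D Viewport", 1400, 900},
        {"Properties", 520, 900},
    };
    printf("default active: %s\n", largest_area(areas, 3)->name); /* 3D Viewport */
    return 0;
}
```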

Unambiguous Area Context

When you currently move your mouse to the Top Bar, it becomes the active area, and therefore it can't contain operators that work on other areas. But with this change we could set the Top Bar to never become active. This would require moving the Scene and View Layer selectors elsewhere, which has been discussed as possible.

This means that if a Node Editor is active and you move your mouse to the Top Bar, the Node Editor remains active. We could choose to add global toolbars that act on specific areas when they are active, showing different toolbars depending on the active area. It is conceivable that menus could be added and removed as different areas become active.
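
A minimal sketch of how a "never becomes active" area could behave, using an entirely hypothetical flag rather than anything in Blender's code:

```c
#include <stdio.h>

enum { AREA_NO_ACTIVATE = 1 << 0 }; /* hypothetical flag */

typedef struct {
    const char *name;
    int flags;
} Area;

/* A press lands in `clicked`; return whichever area should be active after. */
static Area *area_after_press(Area *clicked, Area *active)
{
    if (clicked->flags & AREA_NO_ACTIVATE) {
        /* e.g. the Top Bar: its operators act on `active` instead of stealing it */
        return active;
    }
    return clicked;
}

int main(void)
{
    Area top_bar = {"Top Bar", AREA_NO_ACTIVATE};
    Area node_editor = {"Node Editor", 0};

    Area *active = &node_editor;
    active = area_after_press(&top_bar, active);
    printf("active: %s\n", active->name); /* still "Node Editor" */
    return 0;
}
```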

It opens up a lot of possibilities when we have areas that remain active until intentional user action.

Harley Acheson added the Type: Design label 2023-11-20 01:17:02 +01:00
Harley Acheson added this to the User Interface project 2023-11-20 01:17:04 +01:00
Contributor

we no longer set active area using mouse hover, but instead do so with all non-tracking pointer events. Basically all mouse-like events that are not hover. So pressing, dragging and similar events associated with user action, not just movement.

This specific part would complicate certain workflows. What comes to mind:

  • Accidentally changing the selection or 3D cursor position in the 3D Viewport as you try to give it focus
  • Animation in 3D requires a lot of back and forth between editors (Dope Sheet, NLA, Graph Editor, 3D Viewport). Shortcuts like I to insert a keyframe work very differently between these views, so you must stay aware of what kind of editor currently has focus.
  • The same goes for UV editing and texture painting: rotating your view would focus the 3D Viewport, but using shortcuts in the UV/Image Editor would not focus it, so you'd need to remember to re-focus it before using shortcuts there.

However, I do agree that for text inputs (editing a text datablock, using the Text Editor, or the Python Console), mouse focus via hover is not desired.

Would this new interaction paradigm need to be consistent through all input devices, and for the entirety of the software? It might be feasible to only follow it for devices that have no other means of indicating a cursor position, in line with the original goal of this task.

Or only apply it to those places where text input is the primary action. Shortcuts in Blender are often as powerful as actions with the mouse, so surely pointer interactions should not be the only way to indicate focus? If I hover the 3D Viewport and press Shift+A, my intention is to add a mesh to the scene, nothing else.

Author
Member

Thanks for thinking about this.

How it differs from now can be a bit subtle. But my hope is that there would be very few times when you would need to explicitly select an area (by clicking its header, for example) where mouse hover would do it now, and that this would be offset by needing to re-select an area far less often because focus moved accidentally. And the ability to have stronger indication might reduce the burden too. Who knows. The reason for this is to think about it, not really to propose the change.

Would this new interaction paradigm need to be consistent through all input devices

I'm not sure. It might be possible to do this and then backtrack a bit, making only "mouse enter" and "mouse leave" have special behavior. That way people with mice can work as they do now, but those with only pens or touch can explicitly select areas. But that might also rob us of some of the utility this gives us. For example, we couldn't then have a toolbar with items that work on the active area. There could be in-between solutions where one editor in a workspace is treated specially.
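
Purely as a thought-experiment sketch of such an in-between policy (illustrative device and event names, nothing from Blender's code): hover would transfer activation only for a device with a persistent tracked cursor, while pens and touch would require a press.

```c
#include <stdbool.h>
#include <stdio.h>

typedef enum { DEV_MOUSE, DEV_PEN, DEV_TOUCH } DeviceType;
typedef enum { EVT_HOVER, EVT_PRESS } EventKind;

/* A press is always an intentional interaction and always activates.
 * Hover keeps today's behavior, but only for a device that maintains a
 * persistent, tracked cursor - i.e. the mouse. */
static bool event_transfers_activation(DeviceType dev, EventKind kind)
{
    if (kind == EVT_PRESS) {
        return true;
    }
    return dev == DEV_MOUSE;
}

int main(void)
{
    printf("mouse hover: %d\n", event_transfers_activation(DEV_MOUSE, EVT_HOVER)); /* 1 */
    printf("pen hover:   %d\n", event_transfers_activation(DEV_PEN, EVT_HOVER));   /* 0 */
    printf("touch press: %d\n", event_transfers_activation(DEV_TOUCH, EVT_PRESS)); /* 1 */
    return 0;
}
```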

Reference: blender/blender#115158