User interfaces

Musings

When we think about a UI element like a button, we usually picture something simple: a rectangle that reacts to a left-mouse click when the cursor is within its bounds and then calls an action. A look at Unity3D's UI click events, however, reveals many more nuances. Let's muse about a button for a bit. A button has a position, a rectangular dimension (width, height), text, border, background and text colors, a text font, and so on. The mouse cursor has a position, which may never have changed since the program started. We don't keep a history of past positions, so we cannot derive a direction from past movement, which would be interesting if we wanted to orient an element toward the direction of dragging. This raises a whole series of questions:

- Does a drag start after a delay, or only once the cursor moves? How far does the cursor have to move before it counts as a drag rather than a click with some jitter?
- Is the mouse position within the button? Is the click area of a button larger than its visual representation, so it is easier to hit? Does it matter whether the cursor is inside the visual area or only inside that "magnetic" area?
- If another UI element sits behind the button, does a click into its visual area take precedence over the magnetic area of the button in front?
- If the user presses a mouse button, moves the cursor and releases it at another position, did the click start in another element, in no element, or in the element we are looking at right now?
- How do visual changes like highlight, hover, focus and pressed state react to those events? When does a hover start and show a tooltip? When did the click start?
- Can multiple events occur at the same time (left down + right up)? Can a click occur together with other events (key presses, modifier keys)?
- Do we manage UI elements in a hierarchy and bubble events down and up?
- At the device level, what happens if a mouse is disconnected midway, and can there even be multiple mouse devices? With multi-touch this becomes even more interesting, because we need to track multiple fingers with only approximate touch accuracy.
- Are UI elements animated over time, and how does that affect the properties relevant for event handling (like a UI element moving away under a drag event)?
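
Some of these questions become concrete with only a few lines of code. Here is a minimal sketch of two of them (the click-vs-drag jitter threshold and a magnetic click area); the Vec2/Rect types and the tuning constants are illustrative assumptions, not taken from any particular framework:

import Data.Maybe (listToMaybe)

data Vec2 = Vec2 { vx :: Double, vy :: Double }
data Rect = Rect { rLeft :: Double, rTop :: Double, rWidth :: Double, rHeight :: Double }

-- Click vs. drag: movement below a jitter threshold still counts as a click.
-- The 4-pixel threshold is an assumed tuning constant.
isDrag :: Vec2 -> Vec2 -> Bool
isDrag start end = distance start end > 4.0
  where distance (Vec2 x1 y1) (Vec2 x2 y2) = sqrt ((x2 - x1)^2 + (y2 - y1)^2)

-- Magnetic click area: hit-test against the visual rect inflated by a margin,
-- so the button is easier to hit than it looks.
inflate :: Double -> Rect -> Rect
inflate m (Rect l t w h) = Rect (l - m) (t - m) (w + 2*m) (h + 2*m)

contains :: Rect -> Vec2 -> Bool
contains (Rect l t w h) (Vec2 x y) = x >= l && x <= l + w && y >= t && y <= t + h

-- Precedence: with elements ordered front to back, the first element whose
-- magnetic area contains the cursor receives the click.
pickTarget :: Double -> [Rect] -> Vec2 -> Maybe Rect
pickTarget margin rects p = listToMaybe [r | r <- rects, contains (inflate margin r) p]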

Todo

Research UI architecture design considerations

With FRP and combinator type classes like Monad and Arrow we have great tools to separate all of these concerns into their most basic and abstract form, independent of a concrete input system (SDL, Unity3D, HTML etc.) and of a concrete visual representation (console, OpenGL, HTML etc.).
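
As a sketch of what that separation could look like (using the BearRiver-style SF from the example below; PointerIn and ButtonOut are assumed types, not from any library), the button logic only sees abstract per-frame input and produces a pure description for whatever renderer is attached:

{-# LANGUAGE Arrows #-}
import FRP.BearRiver

-- Abstract input: whatever the backend (SDL, Unity3D, HTML ...) delivers per frame.
data PointerIn = PointerIn { pointerPos :: (Double, Double), pointerDown :: Bool }

-- Abstract output: a pure description that any renderer can interpret.
data ButtonOut = ButtonOut { hovered :: Bool, clicked :: Event () }

button :: Monad m => ((Double, Double) -> Bool) -> SF m PointerIn ButtonOut
button inside = proc inp -> do
  let hov = inside (pointerPos inp)
  clickE <- edge -< hov && pointerDown inp  -- fires on the press transition only
  returnA -< ButtonOut hov clickE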

Bi-directional UI elements

See Apfelmus - Three principles for GUI elements with bidirectional data flow. A textfield is the canonical example of a bi-directional UI element: the text can be changed programmatically but also by the user. So who is in charge of the internal text representation, and how do we handle changes coming from either side?
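
Distilled into the signal-function vocabulary used below, one possible resolution is that the signal function owns the text and both sides only send events; a minimal sketch, assuming the BearRiver combinators from the textfield example:

{-# LANGUAGE Arrows #-}
import FRP.BearRiver

-- The SF owns the text; programmatic setText and user edits are both events.
-- mergeEvents is left-biased, so setText wins on simultaneous occurrences
-- (an arbitrary design choice).
textState :: Monad m => String -> SF m (Event String, Event String) String
textState textInit = proc (setText, userEdit) ->
  hold textInit -< mergeEvents [setText, userEdit]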

[FrpRefac16] also provides a good example, spanning multiple UI elements, of who is in charge of the current page number in a text viewer.

[FrpRefac16] 3.3.2: There are four different ways to move from one page to the next: with the toolbar buttons (top), by dragging the central area with the mouse (centre left), by scrolling down the page (centre right), and with the bottom toolbar controls. Each of these acts both as an input and an output.
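
The same ownership pattern resolves this: one signal owns the page number, and each of the four controls is reduced to an event stream of page updates. A hypothetical sketch (pageCount and the update functions are assumptions, not code from [FrpRefac16]):

{-# LANGUAGE Arrows #-}
import FRP.BearRiver

-- Buttons emit (+1) / (subtract 1), dragging and scrolling emit (const n);
-- a single accumHoldBy owns the page, so all four controls stay in sync.
pageNumber :: Monad m => Int -> Int -> SF m (Event (Int -> Int)) Int
pageNumber pageCount page0 = proc updateE ->
  accumHoldBy (\p f -> clampPage (f p)) page0 -< updateE
 where clampPage = min (pageCount - 1) . max 0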

Let's define a textfield which represents an initial text and a blinking cursor, allows entering new characters at the cursor position, moving the cursor, and changing the text programmatically altogether. To keep the system simple we use the console again and only allow one key press at a time. We treat a few keys as special: backspace deletes, and two keys move the cursor left and right. To allow programmatic text changes we bind the number keys to fire setText events with a text of the length represented by the corresponding number (e.g. 5 = "XXXXX").

-- 3. GUI elements generate events only in response to user input, never in response to program output.
textfield :: String -> SF Identity (KeyPressed, Event String) Textfield
textfield textInit = proc (keyPress, setText) -> do
  let
    -- Classify the key press: control keys first, everything else is a character.
    backE  = filterE (== keyBack ) keyPress
    leftE  = filterE (== keyLeft ) keyPress
    rightE = filterE (== keyRight) keyPress
    charE  = if isNoEvent $ mergeEvents [backE, leftE, rightE] then keyPress else NoEvent

  rec
    -- Edits are expressed against the previous sample (textOld, cursorPosOld);
    -- the iPre delays below break the instantaneous cycle in the rec block.
    let handleBack = backE `tag` (if cursorPosOld > 0 then removeAt textOld cursorPosOld else textOld)
        handleChar = charE <&> insertAt textOld cursorPosOld
        limitPos p = min (length textNew) . max 0 $ p
    -- Programmatic setText and user edits are merged into one hold:
    -- the signal function is the single owner of the text.
    textNew      <- hold textInit -< mergeEvents [setText, handleBack, handleChar]
    cursorPosNew <- hold posInit  -< mergeEvents
      [ setText `tag`  cursorPosOld
      , backE   `tag` (cursorPosOld - 1)
      , leftE   `tag` (cursorPosOld - 1)
      , rightE  `tag` (cursorPosOld + 1)
      , charE   `tag` (cursorPosOld + 1)
      ] <&> limitPos
    textOld      <- iPre textInit -< textNew
    cursorPosOld <- iPre posInit  -< cursorPosNew

  cursorFrame <- animate cursorFrames 5.0 -< ()
  returnA -< (textNew, cursorPosNew, cursorFrame)
 where
  -- The cursor starts behind the initial text. removeAt, insertAt, animate
  -- and cursorFrames are helpers defined in textfield1.hs.
  posInit = length textInit

textfield1.hs
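
The setText events fed into the textfield come from the num-key binding described above. A minimal sketch of that wiring, assuming keys arrive as plain Chars (textfield1.hs may represent them differently):

import Data.Char (digitToInt, isDigit)
import FRP.BearRiver (Event, mapFilterE)

-- Pressing a digit key fires a setText event of that length, e.g. 5 = "XXXXX".
setTextFromNumKey :: Event Char -> Event String
setTextFromNumKey = mapFilterE toText
 where
  toText k
    | isDigit k = Just (replicate (digitToInt k) 'X')
    | otherwise = Nothing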

Warning

There is a bug in Dunai that makes rec blocks with iPre definitions run into an infinite loop (see MSF arrows aren't associative in terms of evaluation). That's why the dunai package is pinned in cabal.project.