Importing Haptic Design

This chapter introduces the functionality for automatically transforming haptic models into digital ones. Different settings are available for using this functionality.

Import from Standalone Setting

This section describes how to import the haptic design using the Standalone usage setting.

Prerequisite

  • A Domain-specific Library must be loaded
  • Camera is connected to the device
    • Info: This functionality only works with the HD Pro Webcam C920
  • Paper figures must be placed on the table according to the description in the Standalone usage setting
  • Bucket service endpoint is available (a minimal reachability check is sketched after this list)
    • Info: For the description below we will assume the default OMiLAB bucket service
  • The endpoints for the setting must be configured accordingly (see Standalone Setting)
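
The prerequisites above assume that the bucket service endpoint is reachable from your device. The exact bucket service API is not described here, so the following is only a minimal sketch, assuming the endpoint answers plain HTTP requests; the URL is a placeholder, and the same kind of check can be used for the laboratory setting endpoints.

    # Minimal reachability check for the configured bucket service endpoint.
    # The URL below is a placeholder; replace it with your configured endpoint.
    import requests

    BUCKET_URL = "http://example.org/bucket"   # placeholder, not a real endpoint

    try:
        response = requests.get(BUCKET_URL, timeout=5)
        print("Bucket service reachable, HTTP status:", response.status_code)
    except requests.RequestException as error:
        print("Bucket service not reachable:", error)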

Local Windows Image Provider

Configuration

Info: This functionality only works with the HD Pro Webcam C920

  1. Open a Scene model

  2. Configure the endpoint for up-streaming the picture

    • Click Show recognition controls to expand Digital Design Thinking Tools (violet button in the top left)
    • Click Update settings (violet button with gears)
    • Choose Streamed image via bucket
    • Click Select
    • Enter the URL of the prepared bucket service endpoint
    • Click Ok
    • Choose the picture identifier used for your workshop
      • A list with all existing identifiers is shown, where you can select your existing one
      • With Add you can create a new one by providing its name

    Gif showing how the picture upstream can be configured

Start the Local Windows Image Provider

  1. Start the picture stream (this description is only for Windows)

    • Open the Scene2Model Wizard
    • Choose Run local camera stream (Windows only with Logitech C920)
    • Click Select
    • Check your picture identifier (and update if necessary)
    • A terminal window opens, showing information about the picture stream. This window should only be closed once the picture stream is no longer needed. (A conceptual sketch of what this stream does is given below.)

    Gif showing how the local stream is started

  2. (Optional) Check whether the stream works by clicking Show recognition results (yellow button with sun)
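
The Scene2Model Wizard starts and manages the picture stream for you, so no coding is needed. Purely as an illustration of the concept, the following sketch shows roughly what such a local image provider does: it grabs frames from the connected webcam and up-streams them to the bucket service under the chosen picture identifier. The upload URL and the parameter name for the picture identifier are assumptions made for this sketch only.

    # Conceptual sketch of a local image provider (not the actual Scene2Model
    # implementation): capture webcam frames and up-stream them periodically.
    import time
    import cv2        # pip install opencv-python
    import requests

    BUCKET_URL = "http://example.org/bucket/upload"   # assumed upload endpoint
    PICTURE_ID = "my-workshop"                        # assumed picture identifier

    def stream_pictures(interval_seconds=5):
        camera = cv2.VideoCapture(0)                  # the connected webcam
        try:
            while True:
                grabbed, frame = camera.read()
                if not grabbed:
                    break
                encoded, jpeg = cv2.imencode(".jpg", frame)   # encode frame as JPEG
                if encoded:
                    requests.post(
                        BUCKET_URL,
                        params={"id": PICTURE_ID},            # assumed parameter name
                        data=jpeg.tobytes(),
                        headers={"Content-Type": "image/jpeg"},
                        timeout=10,
                    )
                time.sleep(interval_seconds)          # wait before the next upload
        finally:
            camera.release()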

Import the Haptic Design

  1. Manually import Scene

    • Click the Run recognition button (red button with camera)
    • The recognised paper figures should now be shown in the modelling tool

    Gif showing how the import of the haptic design can be triggered

  2. (Variation of 1) Start automatic import

    • Click Toggle automatic update (blue button with play symbol)
    • The modelling tool will now regularly import the paper figures (this periodic import is sketched below)
    • Stop by clicking on Toggle automatic update (red button with I/O symbol)

    Gif showing starting and stopping the automated import of haptic models
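
Conceptually, Toggle automatic update behaves like a timer that repeatedly triggers the same import as the manual Run recognition button. The following rough sketch is purely illustrative; run_recognition is a hypothetical placeholder, not a real Scene2Model API.

    # Illustrative only: a periodic trigger that re-runs the import until the
    # toggle is switched off. run_recognition is a hypothetical placeholder.
    import time

    def run_recognition():
        print("importing the current paper-figure arrangement ...")

    def automatic_update(interval_seconds=10, keep_running=lambda: True):
        while keep_running():             # loop until the toggle is switched off
            run_recognition()             # same action as the manual import button
            time.sleep(interval_seconds)  # wait before the next automatic import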

Stop the Local Windows Image Provider

To close the Local Windows Image Provider, close the opened terminal window.

Import from Laboratory Setting

This section describes how the Scene2Model modelling tool can be connected to an established laboratory usage setting and then import the recognised paper figures.

Prerequisite

  • A Domain-specific Library must be loaded
  • Image provider in the laboratory setting is started
  • Paper figures must be placed on the table according to the description in the Standalone usage setting
  • Bucket service and recognition component in the laboratory setting are available
  • The device with the Scene2Model modelling tool is connected to the correct network (WLAN, internet, ...) needed for the concrete laboratory setting instantiation
  • The endpoints for the setting must be configured accordingly (see Laboratory Setting)

Configuration

  1. Open a Scene model

  2. Configure the endpoint for getting the information about the recognised figures

    • Click Show recognition controls to expand Digital Design Thinking Tools (violet button in the top left)
    • Click Update settings (violet button with gears)
    • Choose Streamed image via bucket
    • Click Select
    • Enter the URL for the endpoint of the bucket service of the laboratory setting
    • Click Ok
    • Choose the picture identifier used for your workshop
      • A list with all existing identifiers is shown, where you can select your existing one
      • With Add you can create a new one by providing its name

    Gif showing how the picture upstream can be configured

  3. (Optional) Check whether the stream works by clicking Show recognition results (yellow button with sun)

    • To check whether changes are made, you have to refresh the opened browser window regularly

Importing the Haptic Design

  1. Manually import Scene

    • Click the Run recognition button (red button with camera)
    • The recognised paper figures should now be shown in the modelling tool

    Gif showing how the import of the haptic design can be triggered

  2. (Variation of 1) Start automatic import

    • Click Toggle automatic update (blue button with play symbol)
    • The modelling tool will now regularly import the paper figures
    • Stop by clicking on Toggle automatic update (red button with I/O symbol)

    Gif showing starting and stopping the automated import of haptic models

Import from Mobile Setting

This section describes how the Scene2Model modelling tool can be used with the mobile setting. Here a publicly available image recognition service is used to identify the haptic objects. The upload is done via the browser, either with a mobile phone, a camera connected to a laptop, or a similar device.

Prerequisite

  • Domain-Specific Library is imported and ready to use: see here for more information
  • A Design Thinking Project is created, containing a storyboard model and the needed scene models, and a scene model is loaded.
    • see here for information on how to create a project
    • see here for opening a project
  • Mobile setting is set up, as described here

Configuration

  1. Open Digital Design Thinking Tools

    • Click the Show recognition controls button (top left corner)

    gif showing how the digital design thinking tools can be opened

  2. Open configuration window

    • Click the Update settings button
    • Choose Mobile image via app

    gif showing how to open the configuration window

  3. Open the image provider app: mobile phone

    • After the configuration, the QR code for opening the image provider is shown in the top right of the model. Scan this QR code with your smart phone to open the image provider
    • (If the QR code is not shown, you can open the image provider interface with step 4)

  4. (Variation of 3) Open the image provider app: mobile phone

    • Click Show recognition results button (yellow sun)
    • Browser with recognition app opens
    • Click Get capture QR

    gif showing how the QR code for capturing with the mobile phone can be created

    • Scan QR code with your smart phone

    • Open the URL, so that the web app opens

    • Click Upload image from your camera

    • Take a picture and confirm

    • The picture is shown in the web interface

    • After a while the app interface is shown, with the picture and the recognised tags (this can take longer if the picture quality is high)

    • Now you can import the recognised figures into the tool (see next step) or take a new picture with Capture new image

    • Click the Run recognition button (red camera)

    • A model with the recognised objects should be shown

      • (If the tags are used in the library, the corresponding objects are shown; otherwise Sketch objects are shown, as sketched below)

    gif showing how a haptic model, which was already processed, can be imported
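
The last point can be pictured as a simple lookup with a fallback: recognised tags that are known in the loaded Domain-specific Library become the corresponding objects, all other tags become generic Sketch objects. The following sketch is purely illustrative; the tag names and library entries are made up.

    # Illustrative only: mapping recognised tags to library objects, falling
    # back to generic Sketch objects for tags the library does not know.
    LIBRARY_OBJECTS = {"person": "Person", "table": "Table", "laptop": "Laptop"}

    def objects_for_tags(recognised_tags):
        return [LIBRARY_OBJECTS.get(tag, "Sketch") for tag in recognised_tags]

    print(objects_for_tags(["person", "tree"]))   # ['Person', 'Sketch']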