Design of robot programming software for the systematic reuse of teaching data including environment model

This section describes the details of software design to satisfy the requirements listed in “Requirements for teaching software” section.

Task model and a unit of data reuse

Corresponding to R1-1, Fig. 1 shows the unit of data accumulation and reuse. We call a semantically delimited motion pattern a "task model". The task model is a user-defined entity for the following reasons. First, since there is no established way to delimit motions, the abstraction differs depending on the task designer. Second, the domain of the target work is not closed, so it is impossible to prescribe the set of task models needed for a task domain in advance and ship them with the teaching software. Our task model consists of:

  1. A state-machine representing a motion pattern of the robot.

  2. A set of task parameters allowing the adjustment of the above motion pattern.

We consider a sequence of motions such as “pick one part and assemble it to another part” as a unit of reuse. It is represented as a sequence of command executions offered by robot controllers. Strictly speaking, the state-machine is a mid-level task program rather than a simple motion pattern: it describes the command calls issued to lower-level controllers in each state, and the control flow of the program may change according to the result of the previous command execution. We assume that the set of commands executable by the low-level controller is defined. Apart from basic commands, controller interfaces are difficult to standardize across robots, so we do not presume a specific controller interface. If the commands referenced by a state-machine are implemented by a controller, the task model is executable on that controller; if not, it is not.

Fig. 1 Unit of data reuse

The task parameters correspond to R2-1 and R2-2. They make it possible to change the layout of objects and to adjust motion patterns.

When defining a state-machine, expressions using task parameters are described as the arguments of the command calls in each state. The goal positions of the hand or the joint angles are not always given directly by users; instead, appropriate task parameters are chosen to model each task, and these expressions describe how to calculate the robot motions from the selected parameters.

Figure 2 illustrates the relations among the above-mentioned elements using an example. We assume a low-level controller that provides commands such as moveArm and closeGripper. hole position is a task parameter. The screwing trajectory is given as the arguments of the moveArm command executions, and mapping functions compute this trajectory from the value of hole position.

Fig. 2 State machine for a task model
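As a rough illustration (our own sketch, not the tool's actual implementation), the screw-fastening example of Fig. 2 could be written as follows; the controller commands moveArm and closeGripper follow the names above, while the class, state, and parameter names are hypothetical:

```python
# Sketch of a task model: a state machine of command calls whose arguments are
# computed from task parameters by mapping functions. Names are illustrative.

class ScrewFasteningTask:
    def __init__(self, controller):
        self.controller = controller
        # Task parameters: adjustable values the motion pattern depends on.
        self.params = {"hole_position": (0.40, 0.10, 0.05),  # x, y, z in metres
                       "approach_offset": 0.03}              # hover height above the hole

    def approach_pose(self):
        # Mapping function: derives a command argument from task parameters.
        x, y, z = self.params["hole_position"]
        return (x, y, z + self.params["approach_offset"])

    def run(self):
        # State machine as a sequence of states; the control flow depends on
        # the result of the previous command execution.
        ok = self.controller.moveArm(self.approach_pose())
        if not ok:
            return "failed"
        self.controller.moveArm(self.params["hole_position"])
        self.controller.closeGripper()
        return "done"
```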

The unit of data reuse is the combination of a task model, meta-data (information added by users), and the initial values of task parameters. These elements are explained in detail in the “3D models”, “Meta-data”, and “Default values of task parameters” sections.

3D models

In order to satisfy R3-1, we allow users to register 3D models in each task model and associate them with some task parameters. This mechanism enables users to change the associated task parameters by moving the 3D models in the simulator interactively; as a result, the motion of the robot changes indirectly.

This mechanism alone is not sufficient to handle the movement of items caused by the robot. When the environment is expressed with 3D models, we expect environmental changes due to robot motion to be reproduced in the simulator; otherwise it is difficult to check whether the combined motion sequence works and whether the intended assembly will be achieved. To express the environmental changes caused by the robot, we add a mechanism to describe changes in the connection between the hand and objects at grasp and release actions, and changes in the connection between objects when they are assembled. These side effects can be attached to each command execution. A minimal sketch of this idea follows.
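The sketch below assumes a hypothetical simulator scene and side-effect annotations; none of these names come from the tool itself:

```python
# Sketch: grasp/release commands annotated with side effects that update the
# kinematic connections shown in the simulator. All names are illustrative.

class SimScene:
    def __init__(self):
        self.parent = {}                     # object -> object it is attached to

    def attach(self, child, parent):
        self.parent[child] = parent          # e.g. the screw follows the gripper

    def detach(self, child):
        self.parent.pop(child, None)         # object stops following its parent


def execute_with_side_effects(scene, command, side_effects):
    command()                                # run the low-level command
    for effect, child, parent in side_effects:
        if effect == "attach":
            scene.attach(child, parent)
        elif effect == "detach":
            scene.detach(child)

# Usage: closing the gripper attaches the screw to the hand, so the screw moves
# with the hand in the simulator until a later release detaches it again.
# execute_with_side_effects(scene, controller.closeGripper,
#                           [("attach", "screw", "gripper", )])
```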

Meta-data

We introduce an additional mechanism to annotate task models with texts, images, movies and other documents, corresponding to requirements R1-2 and R1-3. The meta-data makes task models searchable: the system uses it as clues to retrieve task models.

From the search results, a user selects a motion pattern that can be used for the target assembly work. At this point, if the user does not understand what kind of motion it is, the user cannot evaluate its applicability. Our tool provides two means of confirmation: execution of the motion in the simulator, and the meta-data. The meta-data thus plays a dual role, serving as clues for the user to understand the contents of a task and as keys for the search.

Another view of this mechanism is the unified management of data related to the tasks. Users associate whatever they consider important with each task model. Concretely, this is used to record know-how on task implementation and to judge the reusability of task models. The texts typically include, but are not limited to, generic terms or model numbers of items related to the task, verbs expressing the actions used in the task, and phrases describing the task objectives. We allow a broad vocabulary for the texts because we assume users who have not yet settled on which words are appropriate for annotating their tasks. If an appropriate set of words can be given, it may be provided as a dictionary.

The images typically include photographs of objects such as parts, fixtures, tools or devices, key poses of the robot while performing the task, and geometric relations between objects when they are assembled. This idea is borrowed from information-management tools such as Evernote [18]; the concept of the annotation is “capture what’s on your mind” about tasks. Know-how for task implementation includes various things: some are well represented by data suitable for tool interpretation, such as CAD models, while others assist human understanding but have no well-accepted representation. The framework therefore does not strictly prescribe what kind of data should be associated.
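For illustration only, the meta-data and a keyword search over it might look like the following sketch; the field names and the simple substring match are assumptions, not the tool's actual schema or retrieval method:

```python
# Sketch of meta-data serving both as search keys and as human-readable
# documentation. The structure is illustrative; the tool does not prescribe it.

metadata = {
    "texts":  ["screw tightening", "M3 screw", "gearbox cover", "insert and fasten"],
    "images": ["key_pose_approach.png", "assembled_cover.jpg"],
    "docs":   ["torque_settings.pdf"],
}

def search_tasks(task_models, keyword):
    """Return task models whose text annotations contain the keyword."""
    return [t for t in task_models
            if any(keyword.lower() in text.lower()
                   for text in t["metadata"]["texts"])]
```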

Default values of task parameters

Task models are recorded together with default values of their task parameters. There are two reasons for this.

The first reason is to make task models executable as partial motion patterns. After searching for a task, users expect to check its contents in the simulator view. The task model only describes the relationship between the parameters and the motion pattern of the robot; to execute it, each task parameter needs to be initialized with some appropriate value.

The second reason is that parameter values adjusted in the real environment are worth reusing, as mentioned in the introduction. They include values tuned to achieve the task efficiently and robustly, for example control parameters such as stiffness, thresholds for mode switching, and parameters that shape trajectories such as movement speed or the waiting time in some state.
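A hypothetical example of default values recorded with a task model; the parameter names merely illustrate the kinds of values mentioned above:

```python
# Illustrative default values stored together with a task model, including
# control parameters tuned on the real robot.
default_task_parameters = {
    "hole_position": (0.40, 0.10, 0.05),   # layout-dependent target
    "arm_stiffness": 0.6,                  # compliance tuned in the real environment
    "force_threshold": 8.0,                # N, threshold for mode switching
    "approach_speed": 0.05,                # m/s
    "settle_time": 0.3,                    # s, waiting time in the insertion state
}
```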

Workflow

We call a longer motion sequence obtained by combining tasks a workflow (R1-1). Figure 3 shows an example. The workflow is also expressed as a state-machine diagram like a task, but its nodes are tasks instead of command executions. The syntax elements are sequential execution, conditional branch, merge of control flow, initial node, final node, and task node. A conditional branch switches the control flow depending on the result of the previous task execution; for more complicated conditions, an expression using the flow parameters explained in the “Flow parameters” section can be described. A task node has triangular ports for connecting control flow, whereas a node representing a flow parameter (described later) has only a round port, and a node representing a 3D model has a square port together with an image icon.

Fig. 3 An example of workflow
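Read procedurally, the workflow of Fig. 3 amounts to a state machine whose nodes run tasks and whose branches test the results of previous task executions. The sketch below is only an illustration with hypothetical task names:

```python
# Sketch of a workflow: a state machine whose nodes are tasks rather than
# single command executions. Task names and results are illustrative.

def run_workflow(tasks, flow_params):
    result = tasks["recognize_bottle"].run(flow_params)   # writes a flow parameter
    if result == "not_found":
        return "aborted"                                   # conditional branch
    tasks["pick_bottle"].run(flow_params)                  # sequential execution
    tasks["place_bottle"].run(flow_params)
    return "done"
```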

Flow parameters

If there is data that fits the target assembly task exactly, it can be used as it is in a new workflow. However, in many cases some adjustments are needed, such as changing the position of an object or replacing a part. This is realized by changing task parameters and 3D models (R3-1, R3-2).

Next, we need to consider what kind of operations are suitable for making this adjustment. Directly rewriting the parameter values held by each task model is problematic. Consider the situation where a robot places a screw in a hole and tightens it with a screwdriver. If this is represented by two task models, the screw appearing in both task models is the same entity; if the resulting position of the first task execution is not the same as the initial position of the second task, the whole workflow becomes inconsistent. When the number of tasks and parts in the workflow is large and fine adjustment of the layout is repeated, it takes time for users to adjust all the respective task-parameter values to keep the workflow consistent. Therefore, we introduce the flow parameter, a parameter that can be referred to throughout the entire workflow. Task parameters that are expected to keep the same value over several tasks are associated with a flow parameter. A user can graphically make such an association by connecting the port of a task node representing a task parameter to a flow parameter node, as shown in Fig. 3.

A round port of a task node represents one of its task parameters. When this port is connected to a flow parameter node, the value of the task parameter is synchronized with the value of the connected flow parameter. That is, when one task changes the value of a flow parameter, the value of the task parameter of another task linked to that flow parameter also changes. The read/write timing is determined by the definition of the tasks.
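The screw example above could be sketched as follows; the shared FlowParameter object and the task classes are illustrative, not the tool's API:

```python
# Sketch: a flow parameter shared by several tasks. Task parameters bound to it
# read and write the same value, so the screw position stays consistent between
# the "place screw" and "tighten screw" tasks.

class FlowParameter:
    def __init__(self, value):
        self.value = value

screw_position = FlowParameter((0.40, 0.10, 0.05))

class PlaceScrewTask:
    def __init__(self, position_param):
        self.position = position_param        # task parameter bound to the flow parameter
    def run(self):
        # ... place the screw, then record where it actually ended up
        self.position.value = (0.40, 0.10, 0.02)

class TightenScrewTask:
    def __init__(self, position_param):
        self.position = position_param
    def run(self):
        target = self.position.value          # reads the value written by the previous task
        return target                          # ... move the driver to `target` and tighten
```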

Sharing and replacement of 3D models in a workflow

If multiple tasks and flows held their own copies of 3D models, the delay of model loading would increase unacceptably and the database would become remarkably large. To avoid this, 3D models are registered in advance, and each task and flow holds references to them. The 3D model data registered in advance are called model masters. Replacement of a 3D model in a workflow is performed using these model masters. Consider the case where a user wants to replace the 350 ml plastic bottle used in an existing task with a 500 ml plastic bottle: the user adds a 3D model node and makes it refer to the master of the taller bottle.

In Fig. 3, the node with the image of a plastic bottle is a 3D model node. A 3D model node has a square port that can be connected to a square port of a task. In addition to robot motion, a task performs attach and detach operations on 3D models, as described in the “3D models” section, and the square port is used to change the target of these operations.

Frames can be defined on 3D models in addition to the origin. For example, screw hole positions can be defined with the names hole1 to hole4 for a part with four screw holes. Just as flow parameters can be associated with task parameters, these frames defined on 3D models can also be associated with task parameters. The frames are a kind of flow parameter and are represented as round ports on 3D model nodes.
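A sketch of model masters and frames, with hypothetical file names and frame coordinates:

```python
# Sketch: 3D model masters registered once and referenced from tasks and flows.
# Frames defined on a master (e.g. hole1..hole4) can be bound to task parameters
# like ordinary flow parameters. All names and values are illustrative.

model_masters = {
    "bottle_350ml": {"mesh": "bottle_350ml.stl", "frames": {}},
    "cover_plate":  {"mesh": "cover_plate.stl",
                     "frames": {"hole1": (0.02, 0.02, 0.0),
                                "hole2": (0.02, -0.02, 0.0),
                                "hole3": (-0.02, 0.02, 0.0),
                                "hole4": (-0.02, -0.02, 0.0)}},
}

# A workflow node only stores a reference, so replacing the 350 ml bottle with a
# 500 ml bottle means pointing the node at a different master (assuming such a
# master has been registered).
bottle_node = {"master": "bottle_350ml", "pose": (0.5, 0.0, 0.0)}
bottle_node["master"] = "bottle_500ml"
```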

Output to parameters

The values of task parameters and flow parameters sometimes need to be updated during task execution. A typical use case is recognition of an object: the recognized position and posture are usually used in subsequent tasks. This pattern can be implemented by outputting the recognition result to an appropriate flow parameter in the recognition task and associating that flow parameter with a task parameter of another task that uses the value. In Fig. 3, the position of the plastic bottle is updated with the recognized value.
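A sketch of this output pattern, assuming a hypothetical camera interface and the shared flow-parameter object from the earlier sketch:

```python
# Sketch: a recognition task writes its result to a flow parameter, and a later
# task reads the same parameter as one of its task parameters.

class RecognizeBottleTask:
    def __init__(self, bottle_pose_param, camera):
        self.bottle_pose = bottle_pose_param   # shared flow parameter
        self.camera = camera
    def run(self):
        pose = self.camera.detect("bottle")    # hypothetical recognition call
        if pose is None:
            return "not_found"
        self.bottle_pose.value = pose          # output to the flow parameter
        return "found"

class PickBottleTask:
    def __init__(self, bottle_pose_param, controller):
        self.bottle_pose = bottle_pose_param
        self.controller = controller
    def run(self):
        self.controller.moveArm(self.bottle_pose.value)  # uses the recognized pose
        self.controller.closeGripper()
        return "done"
```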

User interface

We describe the design corresponding to R4-1. In the case of mass production, the roles of end users and system integrators are relatively separated: the system integrators spend a long time designing the systems. However, in order to utilize flexible robot systems, end users need to be more involved in the engineering work.

We classify users of the software into two groups, User and Expert, and offer perspectives suited to their typical use cases. Expert has a comparatively high level of knowledge of robots, programming and system integration, which is required to design task models. On the other hand, User has limited knowledge of them. Expert typically refers to engineers ranging from robot makers and system integrators to skilled people in end-user companies (Fig. 4). Expert designs the basic patterns of data reuse; User customizes those patterns for specific objects or system layouts and combines them to describe workflows. User also manages meta-data.

Fig. 4 Classification of users

Perspective for User

Figure 5 shows the perspective for User. The window consists of four views. In the search view, User enters keywords such as “gear” or “screwing” and then selects from the search results. Information related to the selected task model is shown in each view: meta-data related to the task model is shown in the meta-data view, and the robot and the 3D models related to the task model are presented in the scene view. User also checks the motion in the scene view.

Fig. 5 Perspective for User

Next, User combines the search results to describe a workflow. By dragging from the search view into the workflow view, the selected task is added to the workflow. The connections among the added tasks and flow parameters are edited in the workflow view.

Visibility of parameters

In addition to the control flow of the program, displaying the various parameters and their relationships in one figure helps users understand the overall structure of a workflow. However, when the number of parameters and task nodes is large, excessive information risks making the figure difficult to understand, so the user interface needs some ingenuity to control the amount of information presented in the workflow view. To alleviate this problem, we introduce a function to switch the visibility of parameters.

Basically, the design of workflows consists of two phases. Control parameters used for an assembly operation between specific parts are seldom changed once they have been adjusted. When the layout of the objects and the robot is changed, setting these control parameters invisible makes the remaining adjustments, such as the design of the working environment, easier.

To enable this, task parameters can be set to invisible in the workflow view. In addition, to improve operability, the GUI has functions for enlargement/reduction, shifting the display area, and switching the edit mode. Turning the edit mode off prevents users from mistakenly changing parameters or the structure of the workflow during operations such as searching and executing motions.
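A minimal sketch of how such view settings might be represented; the attribute and parameter names are assumptions, not the tool's actual configuration:

```python
# Sketch: per-parameter visibility and a global edit-mode switch for the
# workflow view. Names are illustrative.
view_settings = {
    "edit_mode": False,   # prevents accidental edits during search and execution
    "hidden_parameters": {"arm_stiffness", "force_threshold", "settle_time"},
}

def visible_ports(task_parameters, settings):
    """Return the task parameters that should be drawn as ports in the workflow view."""
    return [p for p in task_parameters if p not in settings["hidden_parameters"]]
```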

Perspective for Expert

Figure 6 shows the perspective for Expert. Expert mainly designs task models, so views for defining the information necessary for a task model are provided. The state machine view is used to design the structure of state machines; the command list defined by the low-level controller is shown, and a command node is added by dragging a command into the state machine view. The parameter view is used to define task parameters and to change their values. When a command node is selected in the state machine view, the information of the node is displayed in a separate dialog, where the expressions that calculate command arguments from task parameters are edited. A task model can also be defined in a text file; currently, the YAML format is supported. Some users prefer to export a task model to YAML, edit it as text, and then import it back.

Fig. 6 Perspective for Expert
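As an illustration of this export/edit/import round trip (assuming PyYAML is available; the schema shown is hypothetical, since the exact YAML structure is not specified here):

```python
# Sketch: a task model edited as YAML text and loaded back into the tool.
# Field names and values are illustrative, not the actual schema.
import yaml

task_model_yaml = """
name: screw_fastening
task_parameters:
  hole_position: [0.40, 0.10, 0.05]
  approach_offset: 0.03
  approach_speed: 0.05
states:
  - {name: approach, command: moveArm, args: "hole_position + approach_offset"}
  - {name: insert,   command: moveArm, args: "hole_position"}
  - {name: grasp,    command: closeGripper}
"""

task_model = yaml.safe_load(task_model_yaml)   # edit the text, then re-import
print(task_model["task_parameters"]["hole_position"])
```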
