How debugging works

(peter.schmitt) #1

When creating a training, some features will not work the first time. This is normal when implementing a complex project like a training.

Types of errors

There are five separate kinds of errors you can encounter:

  1. Errors in the input data
  2. Errors accessing the data
  3. Orthography and punctuation inside the text concept or phrases
  4. Syntax errors inside a truthExpression, mappingExpression, container or Vocabulary
  5. Logical errors, where wrong conclusions are drawn by one or more properties
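The last two categories are often the hardest to spot because nothing crashes. As a minimal sketch in plain Python (not ATML3; the product data and field names are invented for illustration), this is how a data access error and a logical error surface:

```python
# Illustrative only: plain Python standing in for ATML3 concepts.
# The field names ("price", "prize") are invented for this example.

product = {"name": "Lamp", "price": 49.0}

# Error type 2: accessing the data -- a misspelled field name
# silently returns nothing instead of the price.
value = product.get("prize")
assert value is None  # expected 49.0, got nothing

# Error type 5: a logical error -- the comparison is inverted,
# so a wrong conclusion is drawn for an expensive product.
expensive = {"name": "Sofa", "price": 900.0}
is_cheap = expensive["price"] > 100   # bug: should be "< 100"
assert is_cheap is True  # wrong conclusion, drawn without any crash
```

Both mistakes produce output rather than an error message, which is why the tools below exist to make them visible.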

Tools you can use to find errors

Several tools are provided to find errors, trace their origin and fix them. This article introduces these tools and describes their basic use when you create a training and encounter an error.

Selecting a suitable test object

What it does

Single objects from the uploaded data can be selected to be used as test inputs. The text engine then processes your ATML3 statements to create sentences and calculate logical decisions based on those example objects.

Use case for customers

The results show whether the training implementation has errors in any of the steps, from the data access step to the text output step. Because the input was deliberately selected rather than picked at random, the results are predictable, and any deviation from the expected result is noticeable and usually easy to identify.
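The idea can be sketched in a few lines of plain Python (the test object, the render function and the expected sentence are all invented for illustration; the real engine evaluates ATML3 statements instead):

```python
# A hand-picked test object with known values (invented example data).
test_object = {"name": "Desk Lamp", "price": 49.0}

# Stand-in for the rendering pipeline; the real engine evaluates
# ATML3 statements rather than a Python format string.
def render(obj):
    return f"The {obj['name']} costs {obj['price']:.2f} EUR."

expected = "The Desk Lamp costs 49.00 EUR."

# Because the input is fixed, any deviation is immediately visible.
assert render(test_object) == expected
```

With a random object you would not know what `expected` should be, so a wrong result could pass unnoticed.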

Preliminaries

  • Data must be uploaded in order to select an object as test object.
  • This data should contain all the features that need to be tested.

How to use

  1. In the AX Semantics NLG Cloud Cockpit, open the project you want to use to begin the test.
  2. Select the tab DATA SOURCES at the top.
  3. Select the collection that contains the data sets you want to use for the test.
  4. Hover your mouse over the document you want to use for the test; a star and a trash bin should appear to the right.
  5. Click on the star so that the symbol becomes solid. It should stay solid after you move your mouse off the test object.

You have just selected the object to use for testing!

Single Sentence Validation

What it does

The single sentence validation displays a sentence variant as it would appear in the text.

It takes into account the selected test object and carries out all calculations and evaluations for that sentence that it would do in the final text.

Use case for customers

The result of a sentence variant can be seen in an instant.

This is very helpful for determining whether the implementation of that variant is correct and complete. Orthographical, logical and data access errors can be spotted the moment they are made, and can be corrected quickly.

Most of the time this feature makes it unnecessary to render the whole text for an object or even all texts in a collection, because the problem is localized within a sentence variant.

Preliminaries - Required

  • There must be sentences that can be evaluated
  • At least one test object must be selected

How to use

  1. In the AX Semantics NLG Cloud Cockpit, open the project you want to test in.
  2. Select the tab COMPOSER at the top
  3. Select the EDITOR tab in the green ribbon
  4. Select the story type (the puzzle piece icon) and language (the globe icon) you want to use for testing
  5. Select Magic Mode in the grey ribbon
  6. Click VALIDATE next to the sentence you want to see the results of

After a few seconds, the sentence is displayed as it would appear in the text.

Property Evaluation

What it does

The property evaluation checks the truthExpression and mappingExpression in a property for syntax errors.

If there are errors in any of the expressions, a helpful report about that error will appear below the expression it was found in.

Use case for customers

mappingExpressions and truthExpressions are prone to spelling mistakes and syntax errors, as they must be written precisely and in adherence to a strict syntax. It is important for the user to find the errors quickly and easily.

The Property evaluation detects invalid Expressions and will display a warning to the user in the list of properties. It also displays a message explaining what is wrong with the expression.
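ATML3 Expressions follow their own grammar, but the principle of a syntax check that reports a helpful message can be sketched by analogy. The snippet below uses Python's own parser as a stand-in; the `check_syntax` helper and the expressions are invented for this illustration:

```python
# Analogy only: ATML3 has its own expression grammar; Python's parser
# stands in here for the property evaluation's syntax check.
import ast

def check_syntax(expression):
    """Return None if the expression parses, else a short error report."""
    try:
        ast.parse(expression, mode="eval")
        return None
    except SyntaxError as err:
        return f"column {err.offset}: {err.msg}"

assert check_syntax("price > 100") is None   # valid expression passes
report = check_syntax("price > ")            # broken expression fails
assert report is not None                    # a report is produced
```

The point of the analogy: a strict grammar means the checker can pinpoint where the expression stops making sense and attach a message, which is exactly what the warning below an invalid Expression does.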

Preliminaries - Required

  • There must be properties present in the training

How to use

  1. In the AX Semantics NLG Cloud Cockpit, open the project you want to test in.
  2. Select the tab COMPOSER at the top
  3. Select the LOGICAL RULESET tab in the green ribbon
  4. To the right of each property there is either a check mark or an x symbol; an x means there is an error in one of the property's Expressions.
  5. Select a property that displays an x.
  6. Each Expression that contains an error has an error message below it, describing exactly what is wrong.

If there are properties visible in the property list which state they are not evaluated, click on VALIDATE ALL PROPERTIES in the grey ribbon. This triggers the re-evaluation of every Expression in all properties.

Debug Output of properties

What it does

The debug output of properties shows the results of all properties that were used in a text.

It displays the contents of the mappingExpressions, the truthExpressions and the vocabularies for each property.

Use case for customers

When the result of an evaluation or a calculation is not as expected, it is useful to see the result of the involved properties separately. The wrong result can be traced back to its origin and can be fixed there.
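The tracing idea can be sketched in plain Python (the property names, the values and the bug are all invented): start at the property with the wrong result and follow its references down to the origin.

```python
# Illustrative sketch only -- property names, values and the bug are
# invented. Walk the properties a wrong result references, depth-first,
# from the symptom back to its origin.

properties = {
    "price_text": {"value": "for only -49 EUR", "refs": ["discounted"]},
    "discounted": {"value": -49.0, "refs": ["discount"]},
    "discount": {"value": 98.0, "refs": []},  # the bad value starts here
}

def trace(name, props, depth=0):
    """Collect a property and every property it references, depth-first."""
    prop = props[name]
    lines = ["  " * depth + f"{name} = {prop['value']!r}"]
    for ref in prop["refs"]:
        lines.extend(trace(ref, props, depth + 1))
    return lines

for line in trace("price_text", properties):
    print(line)
```

Reading the output top to bottom mirrors the debug output workflow: the symptom (`price_text`) looks wrong, the intermediate value (`discounted`) is already wrong, and the origin (`discount`) is where the fix belongs.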

Preliminaries - Required Steps

  • A text was generated in the cockpit
  • The text generation did not fail, but it returned an incorrect result. If the text generation failed, please fix the error(s) first.

How to use

  1. In the AX Semantics NLG Cloud Cockpit, open the project you want to test in.
  2. Select the tab RESULTS at the top
  3. Select the collection you want to use for the test
  4. Select the Document you want to use for the test
  5. Generate the text for that document, if you have not done so already.
  6. Select DETAILS in the top right corner
  7. Select the ATML-DEBUG tab

Here you find the output and results of all properties that were used for the training. Start by checking the result of the property that outputs the incorrect result. Then check all properties that are referenced from there and try to find the source of the incorrect evaluation.

When the property that works incorrectly is identified, find it in the composer and continue to fix the error from there.

Repeat this process until all results are correct.
