As part of our progress towards accessible and easy Natural Language Generation (NLG), we have reimplemented our AX Core components. This replaces our current ATML3 interpreter with a refactored implementation (called “lupin”) while keeping the rest of the AX NLG Cloud platform compatible.
This is part of the upcoming “Next” platform milestone, which is coupled with our v3 API and the graphical NLG composer with live collaboration.
- We are now considerably faster, enabling real-time rendering even at the scale of millions of documents a day.
- It integrates better into our Python ecosystem, including our TensorFlow/Keras components.
- We have rewritten our surface realizer for all languages, allowing better control over how grammatical features are implemented and shared between languages and removing much of the burden from individual lexicon entries.
- New languages: Russian, Hungarian, Latvian, Slovenian, and Romanian have been added and are now available for projects; implementing a completely new language is now down to approximately two days.
New: real-time rendering
- In the past, we only offered “near-real-time” rendering down to the sub-second level. Now we are going even faster: real-time rendering will be offered via the instant-generations feature, which allows for easy one-API-call NLG and additional HIPAA-aware privacy.
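To make the one-API-call idea concrete, here is a minimal sketch of what a single instant-generation request could look like. The endpoint path, collection id, and payload fields are illustrative assumptions, not the actual API:

```python
import json
from urllib import request

def build_generation_request(base_url: str, collection_id: str, data: dict) -> request.Request:
    """Build one HTTP request that sends the input data and receives the
    rendered document in a single round trip (all names are hypothetical)."""
    body = json.dumps({"data": data}).encode("utf-8")
    return request.Request(
        url=f"{base_url}/collections/{collection_id}/generate",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generation_request(
    "https://api.example.com/v3",
    "shoe-descriptions",
    {"name": "Trail Runner", "material": "leather"},
)
# response = request.urlopen(req)  # one call, one rendered document back
```

The point is the shape of the interaction: no session, no polling, just data in and text out.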
New feature: data in context
- Automatic evaluation of documents in their data context is implemented via histogram evaluation, allowing statements like “this is the only shoe with leather” or “this is the biggest TV set” to be made directly from ATML3 expressions.
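Since ATML3 syntax is out of scope here, the idea behind such context checks can be sketched in plain Python: a claim about one item is evaluated against the histogram of the whole collection. The function names and data are illustrative assumptions:

```python
def is_only_one_with(items, key, value):
    """True if exactly one item in the collection has the given value,
    i.e. the histogram bucket for that value has count 1."""
    return sum(1 for item in items if item.get(key) == value) == 1

def is_biggest(items, key, item):
    """True if the item strictly has the maximum value for `key`."""
    others = [other[key] for other in items if other is not item]
    return all(item[key] > value for value in others)

shoes = [
    {"name": "Trail Runner", "material": "leather"},
    {"name": "City Sneaker", "material": "canvas"},
]
tvs = [{"name": "A", "size": 55}, {"name": "B", "size": 75}]

only_leather = is_only_one_with(shoes, "material", "leather")  # True
biggest_tv = is_biggest(tvs, "size", tvs[1])                   # True
```

A statement like “this is the only shoe with leather” is then only rendered when the corresponding check holds for the current document's data row.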
- Our expression language (vertigo) underwent some small syntax modifications to reduce ambiguity. This results in some breaking changes (we will add them to the documentation), where formerly ambiguous constructs may now behave differently. This version is labelled “ATML3.3” (we skip an ATML3 v2 version to be in line with the general v3 milestone).
- Compared to the v2 implementation, some features now need to be explicitly configured in the container (e.g. auto-capitalisation at sentence start and automatic double-whitespace removal). This makes it transparent and predictable what the system actually does for the end user.
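As a sketch of the explicit opt-in, post-processing could be driven by container settings like the following. The key names are assumptions for illustration, not the actual ATML3 container schema:

```python
# Hypothetical container settings: in v3, post-processing steps run only
# when the container explicitly enables them.
container_settings = {
    "auto_capitalize_sentence_start": True,
    "collapse_double_whitespace": True,
}

def postprocess(text: str, settings: dict) -> str:
    """Apply only the clean-up steps the container has opted into."""
    if settings.get("collapse_double_whitespace"):
        while "  " in text:
            text = text.replace("  ", " ")
    if settings.get("auto_capitalize_sentence_start"):
        text = text[:1].upper() + text[1:]
    return text

result = postprocess("the  biggest tv set.", container_settings)
# -> "The biggest tv set."
```

With both flags off, the renderer emits the text exactly as produced, which is what makes the behaviour predictable.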
We are now updating our documentation to correctly reflect ATML3v3, and after internal testing and a rollout to selected early-access customers over the next few weeks, we will start offering the “Next” rendering engine to all new projects. Existing projects will continue using the proven v2 core.
The upcoming live-collaboration editing already uses our v3 NLG core exclusively.
It can already be accessed in the current platform milestone “Cockpit” (v2) for producing your text output if we set it up for you. Projects (and collections) using the Next renderer are marked with a wolf emoji (as a linguistics company, we are very proud of our wordplays).
If you are interested in early access to features like real-time rendering and data-context evaluation, please contact us at firstname.lastname@example.org with a short description of your use case.