The Economist’s “TECHNOLOGY QUARTERLY” gives a very in-depth look at the current state of AI and language technology, surveying available technologies for NLP (Natural Language Processing), speech recognition, natural language understanding, and translation.
The article covers the rule-based vs. machine-learning debate as well. Quote:
Many early approaches to language technology got stuck in a conceptual cul-de-sac
This is quite correct. Many rule-based approaches struggle with rulesets exploding in size and complexity, just as ML approaches struggle with topicality (the breadth of topics the solution covers). For NLG, some extra barriers exist: wrong output cannot be shrugged off as merely imprecise (compare Siri: “Can you repeat that, please?”), it is simply wrong, making it unattractive for normal usage scenarios where an incorrect text has serious business impact.
But this gives a lot of power to the combined approach: use rules to interpret grammatical aspects (grammatical rules are essentially static, since language changes only slowly over the relevant timeframes) and blend that with machine-learning technology for analyzing data, pre-generating content, and creating the rules for the NLG system.
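The division of labor described above can be sketched in a few lines of Python. The names and the salience heuristic are illustrative assumptions, not taken from the article: a statistical component decides *what* to say, while deterministic grammar rules control *how* it is said, so the surface text can never come out “wrong”, only omitted.

```python
# Hypothetical hybrid-NLG sketch: an ML-style component selects facts,
# a rule-based realizer turns them into text deterministically.

def select_facts(data):
    """Stand-in for the ML side: rank facts by salience.
    Here salience is simply the absolute change; a real system
    might use a trained model instead."""
    return sorted(data, key=lambda f: abs(f["change"]), reverse=True)[:2]

def realize(fact):
    """Rule-based surface realization: fully deterministic,
    so every emitted sentence is grammatically correct."""
    direction = "rose" if fact["change"] > 0 else "fell"
    return f"{fact['metric']} {direction} by {abs(fact['change'])}%."

data = [
    {"metric": "Revenue", "change": 12},
    {"metric": "Costs", "change": -3},
    {"metric": "Headcount", "change": 1},
]

report = " ".join(realize(f) for f in select_facts(data))
print(report)  # Revenue rose by 12%. Costs fell by 3%.
```

The point of the split: a bad salience model degrades relevance, not correctness, which matters for the business scenarios mentioned above.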
The example in the video below is enabled by a neural network providing NLP functionality to create NLG rulesets, which are then interpreted with perfect accuracy.