*I don't pretend to understand all of this (yet), but I find it vaguely alarming and marvelous that language can be made to do this. It's so different from the human use of language that it's as if a tree had started talking. Also, that tree has no ideas or concepts behind what it is saying, yet it can talk incredibly fast, and it can speak every possible human language all at once; every silicon branch is laden with alien fruit.
*With that said, I think people will get a feel for this soon. There are a lot of problems, situations, structures, whatever, that will yield interesting insights when they are tossed into the hopper of a neural net. It's a process like fermentation, almost. You wouldn't say that the yeast are "artificially intelligent" when they transform wheat into beer, but hey, those yeast are useful. Also, drinking beer may be problematic, but people do a lot of it, and once they get a taste for it, you can't get them to stop. Relying on AI is gonna feel like that. "How much of that black box did you chug down today?" "Just enough to get the job done! I can stop whenever I want!"
*Machine translators that chop up language with recurrent neural network grammars aren't "translating" language as humans do, but they are transforming it. The transformed product is not the original text, but it's close enough; it's the linguistic equivalent of shredded wheat biscuits. They're heavily processed, but they still have some verbal nourishment in them. Also, they package fast, and you can box 'em up and label them and sell them. So there's gonna be lots and lots.

"We introduced recurrent neural network grammars, a probabilistic model of phrase-structure trees that can be trained generatively and used as a language model or a parser, and a corresponding discriminative model that can be used as a parser. Apart from out-of-vocabulary preprocessing, the approach requires no feature design or transformations to tree-bank data. The generative model outperforms every previously published parser built on a single supervised generative model in English, and a bit behind the best-reported generative model in Chinese. As language models, RNNGs outperform the best single-sentence language models."
*Why not get them to generate sci-fi stories?