*This is a very tall-weeds technical article, but, rather like glass, it's clear. Robots need to be able to see and manipulate transparent objects, and this is an AI method that achieves it.
*They just beat the problem into submission by creating a huge 3D-camera database of various glassy things, then siccing three different neural nets on it to do three different glass-watching tasks. I'd guess this robot system is gonna break a lot of glass at first, but probably less and less with time – and vastly less than robots that can't see glass at all.
https://ai.googleblog.com/2020/02/learning-to-see-transparent-objects.html
(...)
A Visual Dataset of Transparent Objects
Massive quantities of data are required to train any effective deep learning model (e.g., ImageNet for vision or Wikipedia for BERT), and ClearGrasp is no exception. Unfortunately, no datasets are available with 3D data of transparent objects. Existing 3D datasets like Matterport3D or ScanNet overlook transparent surfaces, because labeling them requires expensive and time-consuming processes.
To overcome this issue, we created our own large-scale dataset of transparent objects that contains more than 50,000 photorealistic renders with corresponding surface normals (representing the surface curvature), segmentation masks, edges, and depth, useful for training a variety of 2D and 3D detection tasks. Each image contains up to five transparent objects, either on a flat ground plane or inside a tote, with various backgrounds and lighting....
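To make the excerpt concrete, here is a minimal sketch of what one record in such a synthetic dataset might look like. This is purely illustrative: the class name, field names, shapes, and dtypes are assumptions for exposition, not the actual on-disk format the ClearGrasp authors use. It just shows the five per-image annotations the excerpt lists (RGB render, depth, surface normals, segmentation mask, edges) bundled together.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical schema for one photorealistic render in a ClearGrasp-style
# dataset. All names and shapes here are illustrative assumptions.
@dataclass
class TransparentSample:
    rgb: np.ndarray       # (H, W, 3) uint8 photorealistic render
    depth: np.ndarray     # (H, W) float32 ground-truth depth
    normals: np.ndarray   # (H, W, 3) float32 unit surface normals
    seg_mask: np.ndarray  # (H, W) uint8 per-object IDs (0 = background)
    edges: np.ndarray     # (H, W) uint8 binary edge map

    def num_objects(self) -> int:
        # Count distinct positive object IDs; 0 marks background.
        return int(len(np.unique(self.seg_mask[self.seg_mask > 0])))

# Build a dummy 240x320 sample with one fake transparent object.
H, W = 240, 320
sample = TransparentSample(
    rgb=np.zeros((H, W, 3), np.uint8),
    depth=np.ones((H, W), np.float32),
    normals=np.dstack([np.zeros((H, W), np.float32),
                       np.zeros((H, W), np.float32),
                       np.ones((H, W), np.float32)]),  # all normals point +z
    seg_mask=np.zeros((H, W), np.uint8),
    edges=np.zeros((H, W), np.uint8),
)
sample.seg_mask[50:100, 50:100] = 1  # pretend one transparent object
print(sample.num_objects())  # → 1
```

The excerpt says each image contains up to five transparent objects, so `num_objects()` would return a value from 0 to 5 on real data.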
https://youtu.be/lbmklphGgGE