Without redistributing the actual documents, allow us to paraphrase.
Based on what we've discovered, Honor will be introducing a semantic image segmentation mode. As on the Mate 10 series, it can detect scenes such as pets, snow, and beaches and apply a filter suited to each situation; with the new camera tech, however, it can detect multiple scenes within the same image and apply several precise filters simultaneously.
The new technology can pinpoint the outlines of objects in an image, such as the sky, plants, people, and water, identify their locations, and apply the appropriate filter to each.
As opposed to server-side deployments of models like DeepLab-v3+, Honor's solution does all of this on-device: the semantic image segmentation algorithm runs alongside a chipset acceleration platform, the NPU in the Kirin 970. This lets the upcoming Honor phone process semantic image segmentation faster and more efficiently than Google's DeepLab, because the NPU accelerates the CNNs used for object and scene recognition and 'applies perfect real-time parameters on every segment to generate the most realistic photography.'
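To make the idea concrete, here is a minimal sketch of per-segment filtering: given a segmentation mask that labels each pixel with a scene class, a different enhancement is applied to each region. The class IDs and gain values below are purely illustrative assumptions, not Honor's actual parameters, and this is a toy stand-in for whatever filters the phone really applies.

```python
import numpy as np

# Hypothetical per-class gains (illustrative only, not Honor's parameters):
# each detected segment class gets its own enhancement strength.
CLASS_GAINS = {
    0: 1.00,  # background: unchanged
    1: 1.15,  # "sky": brighten
    2: 1.10,  # "plants": boost
    3: 0.95,  # "person": soften slightly
}

def apply_per_segment_filters(image, mask, gains=CLASS_GAINS):
    """Apply a different gain to each region of a segmentation mask.

    image: (H, W, 3) float array with values in [0, 1]
    mask:  (H, W) int array of per-pixel class labels
    """
    out = image.copy()
    for cls, gain in gains.items():
        region = mask == cls          # boolean mask for this segment
        out[region] = np.clip(out[region] * gain, 0.0, 1.0)
    return out

# Tiny demo: top row of a 2x2 image is "sky", bottom row is background.
img = np.full((2, 2, 3), 0.5)
mask = np.array([[1, 1],
                 [0, 0]])
result = apply_per_segment_filters(img, mask)
# "Sky" pixels are brightened (0.5 * 1.15 = 0.575); background stays 0.5.
```

In the real phone, the mask would come from the NPU-accelerated segmentation network and the "filters" would be full tone/color pipelines rather than a single gain, but the structure is the same: recognize segments, then tune each one independently.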
Attached are some pictures to show what I'm talking about. More info/pictures coming as I figure it out.