In short, they're using a deep neural net as the surrogate model to find good hyperparameters for training other deep neural nets. It works about as well as a Gaussian process[1] but is more scalable and easier to parallelize, so the hyperparameter search itself runs faster.
--
[1] For example, Spearmint, a GP-based hyperparameter optimizer: https://github.com/JasperSnoek/spearmint
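To make the idea concrete, here's a toy sketch of surrogate-based hyperparameter search where a small ensemble of neural nets plays the role the GP normally plays (ensemble disagreement standing in for posterior uncertainty). This is just an illustration of the general pattern, not the paper's actual method, and the objective function, names, and constants below are all made up:

```python
import numpy as np

# Toy stand-in for "train a model and report validation loss" as a function
# of one hyperparameter (say, a normalized log learning rate in [-1, 1]).
def val_loss(x):
    return (x - 0.3) ** 2 + 0.05 * np.sin(8 * x)

def fit_tiny_net(X, y, hidden=16, steps=3000, eta=0.05, seed=0):
    # One-hidden-layer regression net, trained with full-batch gradient descent.
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 1.0, (1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)
        err = (h @ W2 + b2) - y            # residual of the squared loss
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)   # backprop through tanh
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
        W1 -= eta * gW1; b1 -= eta * gb1
        W2 -= eta * gW2; b2 -= eta * gb2
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ W2 + b2

def optimize(n_init=4, n_iter=6, kappa=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, (n_init, 1))    # a few random initial evaluations
    y = val_loss(X)
    grid = np.linspace(-1, 1, 201).reshape(-1, 1)
    for _ in range(n_iter):
        # Ensemble of 3 nets: mean = prediction, std = uncertainty proxy.
        preds = np.hstack([fit_tiny_net(X, y, seed=s)(grid) for s in range(3)])
        mu, sigma = preds.mean(1), preds.std(1)
        # Lower confidence bound: favor low predicted loss + high uncertainty.
        x_next = grid[np.argmin(mu - kappa * sigma)].reshape(1, 1)
        X = np.vstack([X, x_next]); y = np.vstack([y, val_loss(x_next)])
    best = int(np.argmin(y))
    return float(X[best, 0]), float(y[best, 0])

best_x, best_y = optimize()
```

The scalability argument is roughly that fitting nets like these is ~linear in the number of observations, versus the GP's cubic cost, and the candidate evaluations (`val_loss` calls) can run in parallel across workers.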