Content-aware Generative Modeling of Graphic Design Layouts

ACM Transactions on Graphics 38(4) (Proc. SIGGRAPH 2019)




Abstract

Layout is fundamental to graphic design. To be visually attractive and communicate messages effectively, graphic design layouts often vary greatly, driven by the contents they present. In this paper, we study the problem of content-aware graphic design layout generation. We propose a deep generative model for graphic design layouts that can synthesize layouts based on the visual and textual semantics of user inputs. Unlike previous approaches, which are oblivious to the input contents and rely on heuristic criteria, our model captures the effect of visual and textual contents on layouts, and implicitly learns complex layout structure variations from data without any heuristic rules. To train our model, we build a large-scale magazine layout dataset with fine-grained layout annotations and keyword labeling. Experimental results show that our model can synthesize high-quality layouts based on the visual semantics of input images and keyword-based summaries of input text. We also demonstrate that our model internally learns powerful features that capture the subtle interaction between contents and layouts, which are useful for layout-aware design retrieval.
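The core idea of content-conditioned layout generation can be sketched as follows. This is a toy illustration only, not the authors' architecture: the dimensions, weights, and element count are all assumed, and a real model would learn its parameters from the annotated dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions for illustration only.
CONTENT_DIM = 8    # combined visual + textual content embedding
NOISE_DIM = 4      # latent noise enabling layout variation
N_ELEMENTS = 3     # layout elements, e.g. headline, image, body text

# Random weights stand in for parameters a real model would learn.
W1 = rng.normal(size=(CONTENT_DIM + NOISE_DIM, 32))
W2 = rng.normal(size=(32, N_ELEMENTS * 4))

def generate_layout(content_embedding, noise):
    """Map a content embedding plus noise to N_ELEMENTS boxes (x, y, w, h),
    each coordinate normalized to [0, 1] on the page."""
    h = np.tanh(np.concatenate([content_embedding, noise]) @ W1)
    boxes = 1.0 / (1.0 + np.exp(-(h @ W2)))  # sigmoid keeps boxes in [0, 1]
    return boxes.reshape(N_ELEMENTS, 4)

content = rng.normal(size=CONTENT_DIM)  # stands in for image/text features
layout = generate_layout(content, rng.normal(size=NOISE_DIM))
print(layout.shape)
```

Because the generator is conditioned on the content embedding, different input images or text summaries yield different layouts, while the noise input allows multiple plausible layouts for the same content.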

Downloads


Paper

Supplementary Material

Poster

Code

Dataset

Bibtex

@article{zheng-sig19,
  author  = {Xinru Zheng and Xiaotian Qiao and Ying Cao and Rynson W.H. Lau},
  title   = {Content-aware Generative Modeling of Graphic Design Layouts},
  journal = {ACM Transactions on Graphics (Proc. of SIGGRAPH 2019)},
  volume  = {38},
  number  = {4},
  year    = {2019}
}

Acknowledgements

We thank the anonymous reviewers for their valuable comments, and NVIDIA for the generous donation of a Titan X Pascal GPU card for our experiments.