The regulatory grammar of human promoters uncovered by MPRA-trained deep learning

Abstract

One of the major challenges in genomics is to build computational models that accurately predict genome-wide gene expression from the sequences of regulatory elements. At the heart of gene regulation are promoters, yet their regulatory logic is still incompletely understood. Here, we report PARM, a cell-type-specific deep learning model trained on specially designed massively parallel reporter assays that query human promoter sequences. PARM requires ∼1,000 times less computational power than state-of-the-art models, and reliably predicts autonomous promoter activity throughout the genome from DNA sequence alone, in multiple cell types. PARM can even design purely synthetic strong promoters. We leveraged PARM to systematically identify binding sites of transcription factors (TFs) that are likely to contribute to the activity of each natural human promoter. We uncovered and experimentally confirmed striking positional preferences of TFs that differ between activating and repressive regulatory functions, as well as a complex grammar of motif-motif interactions. For example, many, but not all, TFs act as repressors when their binding motif is located near or just downstream of the transcription start site. Our approach lays the foundation for a deep understanding of the regulation of human promoters by TFs.

Highlights

  • Causality-trained deep learning model PARM captures regulatory grammar of human promoters

  • PARM is highly economical, both experimentally and computationally

  • Transcription factors have different preferred positions for their regulatory activity

  • Many (but not all) transcription factors act as repressors when binding downstream of transcription start sites
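The identification of TF binding sites described above ultimately rests on scoring promoter sequence against sequence motifs. As a minimal, hypothetical sketch of that operation (the motif matrix and example sequence below are illustrative only, not taken from the publication or from PARM), a one-hot-encoded DNA sequence can be scanned with a log-odds position weight matrix to locate the best-matching window:

```python
BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a list of 4-element one-hot vectors (A, C, G, T)."""
    return [[1.0 if base == b else 0.0 for base in BASES] for b in seq]

# Toy log-odds weight matrix for a 4-bp TATA-like motif.
# Rows are motif positions; columns follow the A, C, G, T order above.
# Values are made up for illustration.
PWM = [
    [-1.0, -1.0, -1.0,  2.0],  # T
    [ 2.0, -1.0, -1.0, -1.0],  # A
    [-1.0, -1.0, -1.0,  2.0],  # T
    [ 2.0, -1.0, -1.0, -1.0],  # A
]

def scan(seq, pwm):
    """Score every window of len(pwm) bases; return (best_start, best_score)."""
    oh = one_hot(seq)
    w = len(pwm)
    scores = [
        sum(oh[i + j][k] * pwm[j][k] for j in range(w) for k in range(4))
        for i in range(len(seq) - w + 1)
    ]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]

pos, score = scan("GGCGTATAAGGC", PWM)
print(pos, score)  # the TATA window starts at 0-based position 4 with score 8.0
```

In a trained model like the one described, attribution methods over such one-hot inputs play the analogous role of highlighting which positions and motifs drive the predicted promoter activity.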

Publication date: 15-07-2024