Abstract
We present a design pattern for composing deep learning networks in a typed, higher-order fashion. The exposed library functions are generically typed, and the composition structure allows networks to be trained (using backpropagation) and trained networks to be used for predicting new results (using forward propagation). Individual layers in a network can take different forms, ranging from dense sigmoid layers to convolutional layers. The paper discusses different typing techniques aimed at enforcing proper use and composition of networks. The approach is implemented in Futhark, a data-parallel functional language and compiler targeting GPU architectures, and we demonstrate that Futhark's elimination of higher-order functions and modules leads to efficient generated code.
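To make the composition pattern concrete, the following is a minimal sketch in Futhark of how a typed, higher-order network representation might look. The `network` type, the `compose` combinator, and the gradient-passing signatures are illustrative assumptions for exposition; they are not the paper's actual library API.

```futhark
-- A network from 'a to 'b carrying weights of type 'w.  The record
-- is a lifted type ("type^") because its fields are functions.
type^ network 'w 'a 'b =
  { forward  : w -> a -> b            -- prediction
  , backward : w -> a -> b -> (w, a)  -- gradients wrt. weights and input
  , weights  : w                      -- current weights
  }

-- Sequential composition: run n1, feed its output to n2.  The
-- composed network carries the pair of the two weight sets, and
-- its backward pass applies the chain rule through both parts.
def compose 'w1 'w2 'a 'b 'c
            (n1: network w1 a b) (n2: network w2 b c)
          : network (w1, w2) a c =
  { forward  = \(v1, v2) x -> n2.forward v2 (n1.forward v1 x)
  , backward = \(v1, v2) x dz ->
                 let y         = n1.forward v1 x
                 let (dv2, dy) = n2.backward v2 y dz
                 let (dv1, dx) = n1.backward v1 x dy
                 in ((dv1, dv2), dx)
  , weights  = (n1.weights, n2.weights)
  }
```

Under this sketch, a dense layer and a convolutional layer can each be packaged as a `network` value with its own weight type, and `compose` chains them while the combined weight type accumulates in a tuple that the type checker tracks; prediction applies `forward`, and training consumes the gradients produced by `backward`. Since Futhark eliminates higher-order functions at compile time, a pipeline built this way compiles to first-order code.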
Original language | English |
---|---|
Title of host publication | FHPNC 2019 - Proceedings of the 8th ACM SIGPLAN International Workshop on Functional High-Performance and Numerical Computing, co-located with ICFP 2019 |
Editors | Marco Zocca |
Publisher | Association for Computing Machinery |
Publication date | 18 Aug 2019 |
Pages | 47-59 |
ISBN (Electronic) | 9781450368148 |
DOIs | |
Publication status | Published - 18 Aug 2019 |
Event | 8th ACM SIGPLAN International Workshop on Functional High-Performance and Numerical Computing, FHPNC 2019, co-located with ICFP 2019 - Berlin, Germany. Duration: 18 Aug 2019 → … |
Conference
Conference | 8th ACM SIGPLAN International Workshop on Functional High-Performance and Numerical Computing, FHPNC 2019, co-located with ICFP 2019 |
---|---|
Country/Territory | Germany |
City | Berlin |
Period | 18/08/2019 → … |
Sponsor | ACM SIGPLAN |
Keywords
- Data-parallelism
- Deep learning
- Functional languages