There have been many attempts to model the ability of human musicians to take a score and perform or render it expressively by adding tempo, timing, loudness, and articulation changes to non-expressive music data. While expressive rendering models exist in academic research, most are neither open source nor accessible, meaning they are difficult to evaluate empirically and have not been widely adopted in professional music software. Systematic comparative evaluation of such algorithms stopped after the last Performance Rendering Contest (RENCON) in 2013, making it difficult to compare newer models to existing work in a fair and valid way. In this paper, we introduce the first transformer-based model for expressive rendering, Cue-Free Express + Pedal (CFE+P), which predicts expressive attributes such as note-wise dynamics and micro-timing adjustments, and beat-wise tempo and sustain-pedal use, based only on the start times, end times, and pitches of notes (i.e., inexpressive MIDI input). We evaluate our model against a non-machine-learning baseline taken from professional music software and two open-source algorithms: a feedforward neural network (FFNN) and a hierarchical recurrent neural network (HRNN). The results of two listening studies indicate that our model renders passages that outperform renderings produced in professional music software such as Logic Pro and Ableton Live.