Distributed dataflow systems like MapReduce, Spark, and Flink help users analyze large datasets using a set of cluster resources. Performance modeling and runtime prediction are then used to automatically allocate resources for specific performance goals. However, the actual performance of distributed dataflow jobs can vary significantly due to factors like interference from co-located workloads, varying degrees of data locality, and failures. We address this problem with Ellis, a system that allocates an initial set of resources for a specific runtime target, yet also continuously monitors a job's progress toward that target and, if necessary, dynamically adjusts the allocation. For this, Ellis models the scale-out behavior of individual stages of distributed dataflow jobs based on previous executions. Our evaluation of Ellis with iterative Spark jobs shows that dynamic adjustments can reduce the number of constraint violations by 30.7-75.0% and the magnitude of constraint violations by 70.6-94.5%.
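The abstract does not specify the exact model Ellis fits, so the following is a minimal sketch of the per-stage scale-out modeling idea, assuming a simple parametric runtime model of the form runtime(n) = a + b/n + c·log(n) fitted to (scale-out, runtime) pairs from previous executions; the function names and model formula are illustrative assumptions, not Ellis's actual method.

```python
import numpy as np

def fit_scaleout_model(scaleouts, runtimes):
    """Least-squares fit of runtime(n) = a + b/n + c*log(n) to
    (scale-out, runtime) observations from previous stage executions.
    The model form is an assumption for illustration."""
    n = np.asarray(scaleouts, dtype=float)
    X = np.column_stack([np.ones_like(n), 1.0 / n, np.log(n)])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(runtimes, dtype=float),
                                 rcond=None)
    return coeffs  # (a, b, c)

def predict_runtime(coeffs, n):
    """Predicted stage runtime at scale-out n."""
    a, b, c = coeffs
    return a + b / n + c * np.log(n)

def smallest_scaleout_for_target(coeffs, target, max_nodes):
    """Pick the smallest allocation whose predicted runtime meets the
    target; if none does, fall back to the maximum available."""
    for n in range(1, max_nodes + 1):
        if predict_runtime(coeffs, n) <= target:
            return n
    return max_nodes

# Example: three previous runs of one stage at different scale-outs.
coeffs = fit_scaleout_model([2, 4, 8], [410.0, 230.0, 150.0])
print(smallest_scaleout_for_target(coeffs, target=200.0, max_nodes=16))
```

In the same spirit, the dynamic adjustment described above can be seen as re-running this allocation step at stage boundaries with the remaining runtime budget as the target, shrinking or growing the allocation when the job drifts from its goal.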