For simple cases, and when inputs are physically separated from outputs (e.g., Excel or CSV files in separate input/output directories), Block Until Done is the way to go. However, for more advanced scenarios, such as reading tables from a database, transforming the data, and then writing updates back to those tables, the tool falls short. Things get even more complicated if the tables have foreign key constraints that prevent you from simply dumping the data back in.
Continuing the database example, it's not possible to insert data into a table and then obtain the newly generated IDs so you can properly insert child records that depend on them. Are there workarounds? Yes, though not with Block Until Done: you have to control the IDs yourself. That workaround is overhead compared to other technologies, and it's also bad practice, because the database should be the one assigning IDs. That's why Access has AutoNumber, SQL Server has identity columns, and so on.
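In code-first tools, the parent/child pattern described above is straightforward. As a point of comparison, here is a minimal Python sketch using SQLite (table and column names are illustrative, not from the original post): the database assigns the parent ID, and the client reads it back before inserting the dependent child row.

```python
import sqlite3

# Hypothetical schema: an orders table whose IDs the database assigns,
# and a child table with a foreign key back to it.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce FK constraints
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
conn.execute("""CREATE TABLE order_lines (
    id INTEGER PRIMARY KEY,
    order_id INTEGER NOT NULL REFERENCES orders(id),
    item TEXT)""")

# Insert the parent row; the database, not the client, chooses the ID.
cur = conn.execute("INSERT INTO orders (customer) VALUES (?)", ("Acme",))
order_id = cur.lastrowid  # retrieve the generated ID

# Use that ID to insert a child record that satisfies the FK constraint.
conn.execute("INSERT INTO order_lines (order_id, item) VALUES (?, ?)",
             (order_id, "widget"))
conn.commit()
```

This is exactly the round trip (write parent, get ID, write child) that is hard to express in a workflow where execution order between tools cannot be controlled.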
It's 2022 and there's still no elegant way to control the order of execution. Block Until Done is fine, but it accepts only one input, so if you need more than that you have to chain several of them. In scenarios where an input anchor is not applicable, or where your data comes from multiple sources, this is a pain. So why not make the tool accept multiple inputs and, in its configuration, let us drag and drop the order in which they execute?
And for the output, let us configure which output anchor each input maps to. If an output anchor is not mapped, it would default to the position/order of the items in the input list.