The Old Is New Again
Because Moore’s law has seemingly run its course, vertical single-core speed scaling has given way to horizontal multicore scaling. The evidence of this shift is that just about every mobile device, server, desktop, and laptop now ships with more than one processor core. Thus, the acquisition of concurrent and distributed design and programming skills is becoming more and more important as time ticks forward. Could what Erlang’s Joe Armstrong coined as the “Concurrency Oriented Programming” style be supplanting the well-known and widely practiced object-oriented programming style as we speak?
Because of their focus on stateless, pure functions (as opposed to stateful objects), it seems to me that functional programming languages (e.g. Erlang, Haskell, Scala, F#) are a more natural fit for concurrent, distributed, software-intensive systems development than object-oriented languages like Java and C++, even though both of those languages provide basic support for concurrent programming in the form of threads.
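To make the claim concrete, here is a minimal Scala sketch (invented for illustration, not anyone’s production code) of why pure functions and concurrency mix so well: because risk below touches no shared mutable state, it can be fanned out across a thread pool with no locks at all.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object PureConcurrency {
  // A pure function: its result depends only on its argument, so it
  // is safe to evaluate from many threads at once.
  def risk(reading: Double): Double = math.log(1.0 + reading * reading)

  def main(args: Array[String]): Unit = {
    val readings = Vector(0.5, 1.7, 2.3, 4.1)

    // Fan the work out across the default thread pool; no locks are
    // needed because nothing here mutates shared state.
    val work = Future.traverse(readings)(r => Future(risk(r)))

    println(Await.result(work, 5.seconds))
  }
}
```

The equivalent Java or C++ version built around a shared, mutable accumulator would need synchronization (or an atomic type) to avoid a data race, which is exactly the extra discipline the thread model demands of stateful objects.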
Likewise, even though I’m a big UML fan, I think that “old and obsolete” structured design modeling tools like Data Flow and Control Flow Diagrams (DFDs, CFDs) may be better suited to the design of concurrent software. Even better, I think a mixture of UML and DFD/CFD artifacts may be the best way (as Grady Booch says) to “visualize and reason” about necessarily big software designs prior to coding up and testing the beasts.
So, what do you think? Should the old become new again? Should the venerable DFD be resurrected and included in the UML portfolio of behavior diagrams?
Interesting post, Bulldozer, but I’m afraid I cannot go along with your nostalgia for the old days.
Data Flow Diagrams may have helped some with the challenge of programming complexity, but from my viewpoint they embed a fatal flaw… they seriously fragment the “conversational” nature of the flow of data in software’s interactions with human beings or in its responses to other exogenous sources. One cannot follow the arrows and thereby follow the sequence of steps in responding to an input, except for inputs of the most basic kind, because the arrows describe only low-level data, not transactions (colours and arrow types could help). And as for the great structured programming promoters, some of them blew their wad on the Y2K fiasco.
Distributed systems can be handled quite well with an augmented OO design, in which “data” flow is replaced by conversational flow, a conversation being the steps followed to respond to a given request type. UML is of some help, but transactional conversations need to be added. Definitely not DFDs… that’s like going back to the Stone Age.
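One way to picture “conversational flow” in code is as a single ordered value per request type, so the whole transaction reads top to bottom. The Scala sketch below is invented purely for illustration (the step vocabulary and the open-account example are not the commenter’s).

```scala
object ConversationSketch {
  // Illustrative step vocabulary (not the commenter's terms).
  sealed trait Step
  final case class Ask(prompt: String)      extends Step
  final case class Validate(rule: String)   extends Step
  final case class Respond(message: String) extends Step

  // A conversation: the ordered steps taken to answer one request
  // type, readable top to bottom as a single transaction.
  final case class Conversation(requestType: String, steps: List[Step])

  val openAccount: Conversation = Conversation(
    requestType = "open-account",
    steps = List(
      Ask("name and address"),
      Validate("address exists"),
      Ask("initial deposit amount"),
      Validate("deposit meets the minimum"),
      Respond("account number issued")
    )
  )

  def main(args: Array[String]): Unit =
    openAccount.steps.foreach(println)
}
```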
Thanks for the valid input. Not all systems, especially in the embedded domain, are transactional in nature. In these types of systems, the real-time, one-way flow of raw data and the production of higher-level information from that data (e.g. targets from raw signal detections) are the heart of the system. Not coincidentally, these are the types of systems that I work on 🙂
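For readers who haven’t built this kind of system, here is a minimal Scala sketch of the one-way, DFD-like flow I mean (the types, the noise threshold, and the crude clustering rule are all invented for illustration): raw detections enter at one end, each stage is a transform, and targets come out the other.

```scala
object DetectionPipeline {
  // Hypothetical types standing in for "raw signal detections" and
  // the "targets" derived from them.
  final case class Detection(bearing: Double, strength: Double)
  final case class Target(bearing: Double, confidence: Double)

  // Each stage is a pure transform; the system is the one-way
  // composition of stages, much like bubbles on a DFD.
  val filterNoise: Seq[Detection] => Seq[Detection] =
    _.filter(_.strength > 0.2)

  val correlate: Seq[Detection] => Seq[Target] =
    _.groupBy(d => math.round(d.bearing)) // crude clustering by bearing
      .values
      .map { ds =>
        Target(ds.map(_.bearing).sum / ds.size, ds.map(_.strength).max)
      }
      .toSeq

  // The end-to-end flow: raw detections in, targets out.
  val rawToTargets: Seq[Detection] => Seq[Target] =
    filterNoise andThen correlate

  def main(args: Array[String]): Unit = {
    val raw = Seq(Detection(10.1, 0.9), Detection(10.3, 0.7), Detection(85.0, 0.1))
    rawToTargets(raw).foreach(println)
  }
}
```

Each stage maps naturally onto a DFD bubble, and the composed function is the diagram’s end-to-end data flow.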
Good point, fair comment.
You ask “So, what do you think? Should the old become new again? Should the venerable DFD be resurrected and included in the UML portfolio of behavior diagrams?”
Few, very few, know what Data Flow Diagrams really are. First of all, they are not just for automated systems. They are just as applicable to completely manual systems. Even the BABOK has this wrong.
Second, a process, manual or automated, is defined by its essential inputs and outputs. These are typically data flows. Only data flow diagrams systematically capture all essential inputs and outputs.
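One way to see this point in code (an illustration with hypothetical types, not part of the original comment): a process’s essential inputs and outputs can be pinned down by a type signature, which is the same information a DFD bubble and its incoming and outgoing flows record.

```scala
object PriceOrderProcess {
  // Hypothetical types: the point is that the signature alone pins
  // down the process's essential inputs and outputs.
  final case class Order(customerId: Long, items: List[String])
  final case class PriceList(prices: Map[String, BigDecimal])
  final case class Invoice(orderTotal: BigDecimal)

  // "Price Order" as a process: the essential inputs are an Order and
  // a PriceList; the essential output is an Invoice. Nothing else
  // flows in or out.
  def priceOrder(order: Order, priceList: PriceList): Invoice =
    Invoice(order.items.flatMap(priceList.prices.get).sum)

  def main(args: Array[String]): Unit = {
    val prices = PriceList(Map("widget" -> BigDecimal(2.50), "gadget" -> BigDecimal(4.00)))
    println(priceOrder(Order(42L, List("widget", "gadget")), prices))
  }
}
```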
Third, in order to handle complexity, effective partitioning (leading to effective decomposition) is needed. Only DFDs guide an analyst through an effective decomposition. They do this via an “interview the data” approach, in which the analyst discovers the extent of processes as data flows naturally converge and split apart. Use Cases and Activity Diagrams, in contrast, offer little guidance on effective partitioning. They rely upon the age-old “sledge hammer” approach to partitioning (i.e., just break-er-up any way you can).
Sledge-hammer partitioning is what cave men used to partition entities. So in a critical sense, DFDs are new and hip, while Use Cases and Activity Diagrams are based upon logic that is older than dirt.
It is critical to always remember how important partitioning is. A common dictionary definition of analysis is “partitioning an entity into components and then examining how the components interrelate.” So partitioning literally defines analysis.
Tony Markos
Hi Tony,
Thanks for the great input. I have to disagree with you regarding activity diagrams not supporting partitioning. You can use swimlanes, which are actually called partitions in UML, to segment (or “allocate”) who does what in the activity being modeled. Activity diagrams also support data (object) flows with “pins” in addition to, or in lieu of, control flows. The design “decisions” on data flows and partitions still have to be made by the designer, but the capability to capture and express those decisions in an activity diagram is baked in. DFDs and UML diagrams are simply tools. Guidance has to come from somewhere else. A mentor? Personal experience learning how to design systems over time?
I meant “old” because DFDs, CFDs, Data Dictionaries, and structured analysis modeling techniques predate the UML and object-oriented modeling practices by at least 10 years. Thanks for pointing out my failure to convey that.
The classic DFD and UML activity diagram pair below equivalently capture an analyst’s partitioning design decision (Process X == Action X). The second activity diagram shows a further design decision: the conscious allocation of Actions to Objects. So, how are UML activity diagrams inferior to DFDs? A rich palette of other symbols (forks, joins, decisions, merges, control flows, signals) is also available to analysts for capturing (and exposing for scrutiny) design decisions.